

I was speaking to a client recently about how very complex queries might not always get the most efficient query plan, because the optimizer can time out during compilation. Joseph Pilov has written a good description here: Understanding Optimizer Timeout and how Complex queries can be Affected in SQL Server
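If you want to check whether any plans in the cache were affected, a minimal sketch like this can help (it assumes VIEW SERVER STATE permission; scanning the plan cache as text is expensive, so run it off-peak):
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE '%StatementOptmEarlyAbortReason="TimeOut"%'
ORDER BY qs.total_worker_time DESC;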
The following query will list SQL Agent jobs that are due to start execution at the same time:
;WITH cteNextRunDatetime AS
(
    SELECT job_id, next_scheduled_run_date = MAX(next_scheduled_run_date)
    FROM msdb.dbo.sysjobactivity
    WHERE next_scheduled_run_date IS NOT NULL
    GROUP BY job_id
)
, cteSimultaneousJobsDates AS
(
    SELECT next_scheduled_run_date
    FROM cteNextRunDatetime
    GROUP BY next_scheduled_run_date
    HAVING COUNT(*) > 1
)
SELECT
    sj.name,
    sj.description,
    c.next_scheduled_run_date
FROM msdb.dbo.sysjobs sj
JOIN cteNextRunDatetime sja ON sj.job_id = sja.job_id
JOIN cteSimultaneousJobsDates c ON c.next_scheduled_run_date = sja.next_scheduled_run_date
WHERE sj.enabled = 1
ORDER BY sja.next_scheduled_run_date;
Getting the server restart time for an on-premises SQL Server is simple; in fact, there are several ways, using sys.dm_os_sys_info, sys.dm_exec_sessions, sys.traces, or sys.databases.
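Each of these returns the last start time, give or take a few seconds:
-- All of these work on an on-premises instance:
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
SELECT login_time FROM sys.dm_exec_sessions WHERE session_id = 1;
SELECT start_time FROM sys.traces WHERE is_default = 1;
SELECT create_date FROM sys.databases WHERE [name] = 'tempdb'; -- tempdb is recreated at startup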
In an Azure SQL Database, you don’t get access to those system objects.
Brent Ozar posted a way to get the approximate Azure SQL Database restart date/time, but I found that some of the wait types can produce large outliers and skew the result.

Instead, I’ve modified it to use a standard statistical technique that rejects outlier values falling outside 1.5 times the interquartile range:
;WITH cte AS
(
    SELECT wait_time_ms
    FROM sys.dm_os_wait_stats w WITH (NOLOCK)
    WHERE wait_type IN
    (
        'BROKER_TASK_STOP',
        'DIRTY_PAGE_POLL',
        'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        'LAZYWRITER_SLEEP',
        'LOGMGR_QUEUE',
        'REQUEST_FOR_DEADLOCK_SEARCH',
        'XE_DISPATCHER_WAIT',
        'XE_TIMER_EVENT'
    )
)
SELECT
    approx_ms_since_restart = AVG(wait_time_ms),
    approximate_restart_date = DATEADD(s, AVG(-wait_time_ms) / 1000, GETDATE())
FROM cte
CROSS JOIN
(
    SELECT
        q1 = MIN(wait_time_ms),
        q3 = MAX(wait_time_ms),
        iqr = MAX(wait_time_ms) - MIN(wait_time_ms)
    FROM
    (
        SELECT
            wait_time_ms,
            ROW_NUMBER() OVER (ORDER BY wait_time_ms) AS seqnum,
            COUNT(*) OVER (PARTITION BY NULL) AS total
        FROM cte
    ) t
    WHERE seqnum = CAST(total * 0.25 AS int) OR seqnum = CAST(total * 0.75 AS int)
) qts
WHERE (wait_time_ms >= q1 - 1.5 * iqr) AND (wait_time_ms <= q3 + 1.5 * iqr);
This tells me my Azure server restarted on the 21st Sept 2018 around 23:46.
It’s not as complicated as it first looks:
The first part is the same as Brent Ozar’s query: it gets a list of waits and their respective cumulative wait times. We generate a row number ordered by wait time, along with the total number of rows, and then pick the two values whose row numbers fall a quarter and three quarters of the way along that list (the first and third quartile values). We then reject any value smaller than the first quartile minus 1.5 times the interquartile range, and any value larger than the third quartile plus 1.5 times the interquartile range. For example, if Q1 = 100,000 ms and Q3 = 140,000 ms, the IQR is 40,000 ms, so anything below 40,000 ms or above 200,000 ms would be rejected before taking the average.
Note: DATEADD() only accepts an int for its second parameter, and using milliseconds (ms) overflows after even short periods of server up-time (an int only holds about 24.8 days’ worth of milliseconds). In fact, since it’s approximate anyway, it might be better to use minutes:
approximate_restart_date = DATEADD(minute, AVG(-wait_time_ms)/60000, GETDATE())
If anyone needed a single concrete reason to upgrade to SQL Server 2019 (when it’s released), ignoring all the new features, here it is:
2628 – String or binary data would be truncated in table ‘%.*ls’, column ‘%.*ls’. Truncated value: ‘%.*ls’.
That’s right! After something like 25+ years, Microsoft are finally fixing the cursed “STRING OR BINARY DATA WOULD BE TRUNCATED” message!
Seen via Brent Ozar’s post: What’s New in SQL Server 2019’s sys.messages: More Unannounced Features
SQL Server alerts are a free and easy way to get notified of corruption, agent job failures, or major failures before you get angry phone calls from users.
You should have already set up Database Mail and an alert operator to receive notifications. TIP: Always use an email distribution list for your operator notification email. If you use a person’s email address, they might change jobs, or be on holiday…
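If you haven’t created the operator yet, something like this will do it (a sketch: the operator name matches the @OperatorName variable in the script below, and the distribution list address is a made-up example):
USE [msdb];
GO
EXEC dbo.sp_add_operator
    @name = N'SQLDBAGroup',                   -- matches @OperatorName in the script below
    @enabled = 1,
    @email_address = N'dba-team@example.com'; -- hypothetical distribution list address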
The script below sets up the standard SQL Server Agent alerts for severity 17 through 25 as well as specific alerts for 823, 824 and 825 errors, and a bunch of others:
USE [msdb];
GO

SET NOCOUNT ON;

BEGIN TRANSACTION

-- Change these as required...
DECLARE @Normal_Delay_Between_Responses int = 900 -- in seconds
DECLARE @Medium_Delay_Between_Responses int = 3600
DECLARE @Long_Delay_Between_Responses int = 14400
DECLARE @OperatorName sysname = N'SQLDBAGroup'
DECLARE @CategoryName sysname = N'SQL Server Agent Alerts';

DECLARE @DelayBetweenResponses int
DECLARE @TotalAlerts int
DECLARE @Row int = 1
DECLARE @AlertName sysname
DECLARE @SeverityNo int
DECLARE @ErrorNo int

DECLARE @Alerts TABLE
(
    Id int IDENTITY(1,1) NOT NULL
    ,SeverityNo int NULL
    ,ErrorNo int NULL
    ,AlertName sysname NOT NULL
    ,DelayBetweenResponses int NOT NULL
);
INSERT @Alerts (SeverityNo, AlertName, DelayBetweenResponses)
VALUES
    (17, 'Alert - Sev 17 - Insufficient Resources', @Normal_Delay_Between_Responses)
    ,(18, 'Alert - Sev 18 - Nonfatal Internal Error', @Normal_Delay_Between_Responses)
    ,(19, 'Alert - Sev 19 - Fatal Error in Resource', @Normal_Delay_Between_Responses)
    ,(20, 'Alert - Sev 20 - Fatal Error in Current Process', @Normal_Delay_Between_Responses)
    ,(21, 'Alert - Sev 21 - Fatal Error in Database Process', @Normal_Delay_Between_Responses)
    ,(22, 'Alert - Sev 22 - Fatal Error: Table Integrity Suspect', @Normal_Delay_Between_Responses)
    ,(23, 'Alert - Sev 23 - Fatal Error: Database Integrity Suspect', @Normal_Delay_Between_Responses)
    ,(24, 'Alert - Sev 24 - Fatal Error: Hardware Error', @Normal_Delay_Between_Responses)
    ,(25, 'Alert - Sev 25 - Fatal Error', @Normal_Delay_Between_Responses);

INSERT @Alerts (ErrorNo, AlertName, DelayBetweenResponses)
VALUES
    (601, 'Alert - Error 601 - NOLOCK scan aborted due to data movement', @Long_Delay_Between_Responses)
    ,(674, 'Alert - Error 674 - Exception occurred in destructor', @Long_Delay_Between_Responses)
    ,(708, 'Alert - Error 708 - Low virtual address space or low virtual memory', @Medium_Delay_Between_Responses)
    ,(806, 'Alert - Error 806 - Audit failure: page read from disk failed basic integrity checks', @Medium_Delay_Between_Responses)
    ,(823, 'Alert - Error 823 - I/O Error: http://support.microsoft.com/kb/2015755', @Normal_Delay_Between_Responses)
    ,(824, 'Alert - Error 824 - Consistency-based I/O error: http://support.microsoft.com/kb/2015756', @Normal_Delay_Between_Responses)
    ,(825, 'Alert - Error 825 - File Read Retry: http://support.microsoft.com/kb/2015757', @Normal_Delay_Between_Responses)
    ,(832, 'Alert - Error 832 - Constant page has changed: http://support.microsoft.com/kb/2015759', @Normal_Delay_Between_Responses)
    ,(833, 'Alert - Error 833 - Long I/O request', @Long_Delay_Between_Responses)
    ,(855, 'Alert - Error 855 - Uncorrectable hardware memory corruption detected', @Normal_Delay_Between_Responses)
    ,(856, 'Alert - Error 856 - SQL Server has detected hardware memory corruption, but has recovered the page', @Normal_Delay_Between_Responses)
    ,(1205, 'Alert - Error 1205 - Transaction Deadlock arbitrated', @Long_Delay_Between_Responses)
    ,(3401, 'Alert - Error 3401 - Errors occurred during recovery while rolling back a transaction', @Normal_Delay_Between_Responses)
    ,(3410, 'Alert - Error 3410 - Data in filegroup is offline', @Medium_Delay_Between_Responses)
    ,(3414, 'Alert - Error 3414 - Recovery Error', @Long_Delay_Between_Responses)
    ,(3422, 'Alert - Error 3422 - Database was shutdown', @Long_Delay_Between_Responses)
    ,(3452, 'Alert - Error 3452 - Recovery inconsistency', @Long_Delay_Between_Responses)
    ,(3619, 'Alert - Error 3619 - Could not write a checkpoint because the log is out of space', @Medium_Delay_Between_Responses)
    ,(3620, 'Alert - Error 3620 - Automatic checkpointing is disabled because the log is out of space', @Medium_Delay_Between_Responses)
    ,(3959, 'Alert - Error 3959 - Version store is full', @Normal_Delay_Between_Responses)
    ,(5029, 'Alert - Error 5029 - Warning: Log has been rebuilt', @Long_Delay_Between_Responses)
    ,(5144, 'Alert - Error 5144 - Autogrow of file was cancelled by user or timed out', @Long_Delay_Between_Responses)
    ,(5145, 'Alert - Error 5145 - Long Autogrow of file', @Medium_Delay_Between_Responses)
    ,(5182, 'Alert - Error 5182 - New log file created', @Long_Delay_Between_Responses)
    ,(9001, 'Alert - Error 9001 - Transaction log not available', @Normal_Delay_Between_Responses)
    ,(9002, 'Alert - Error 9002 - Transaction log full', @Normal_Delay_Between_Responses)
    ,(17173, 'Alert - Error 17173 - Ignored trace flag', @Long_Delay_Between_Responses)
    ,(17883, 'Alert - Error 17883 - Non-yielding Worker on Scheduler', @Medium_Delay_Between_Responses)
    ,(17884, 'Alert - Error 17884 - New queries assigned to process have not been picked up by a worker thread', @Medium_Delay_Between_Responses)
    ,(17887, 'Alert - Error 17887 - Worker appears to be non-yielding on Node', @Medium_Delay_Between_Responses)
    ,(17888, 'Alert - Error 17888 - All schedulers on Node appear deadlocked', @Medium_Delay_Between_Responses)
    ,(17890, 'Alert - Error 17890 - A significant part of sql server process memory has been paged out', @Long_Delay_Between_Responses)
    ,(17891, 'Alert - Error 17891 - Worker appears to be non-yielding on Node', @Medium_Delay_Between_Responses);
SELECT @TotalAlerts = COUNT(*) FROM @Alerts

IF NOT EXISTS (SELECT * FROM msdb.dbo.sysoperators WHERE name = @OperatorName)
BEGIN
    -- Don't leave the transaction open when bailing out
    ROLLBACK;
    RAISERROR ('SQL Operator %s does not exist', 18, 16, @OperatorName);
    RETURN;
END
IF NOT EXISTS (SELECT * FROM msdb.dbo.syscategories
               WHERE category_class = 2 -- ALERT
                 AND category_type = 3
                 AND name = @CategoryName)
BEGIN
    EXEC dbo.sp_add_category @class = N'ALERT', @type = N'NONE', @name = @CategoryName;
END
BEGIN TRY
    WHILE @Row <= @TotalAlerts
    BEGIN
        SELECT
            @AlertName = @@SERVERNAME + ' - ' + AlertName
            ,@SeverityNo = SeverityNo
            ,@ErrorNo = ErrorNo
            ,@DelayBetweenResponses = DelayBetweenResponses
        FROM @Alerts
        WHERE Id = @Row

        IF EXISTS (SELECT * FROM msdb.dbo.sysalerts WHERE [name] = @AlertName)
        BEGIN
            EXEC msdb.dbo.sp_delete_alert @name = @AlertName
        END

        IF @SeverityNo IS NOT NULL
        BEGIN
            EXEC msdb.dbo.sp_add_alert
                @name = @AlertName,
                @message_id = 0,
                @severity = @SeverityNo,
                @enabled = 1,
                @delay_between_responses = @DelayBetweenResponses,
                @include_event_description_in = 1,
                @category_name = @CategoryName,
                @job_id = N'00000000-0000-0000-0000-000000000000'
        END

        IF @ErrorNo IS NOT NULL
        BEGIN
            -- Errors 855 and 856 require SQL Server 2012+ and Enterprise Edition
            -- [Also need Windows Server 2012+, and hardware that supports memory error correction]
            -- Compare the major version numerically; a LEFT()-based string comparison
            -- would wrongly treat version '9.x' as >= '11'
            IF @ErrorNo NOT IN (855, 856)
                OR (CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128)), 4) AS int) >= 11
                    AND SERVERPROPERTY('EngineEdition') = 3)
            BEGIN
                EXEC msdb.dbo.sp_add_alert
                    @name = @AlertName,
                    @message_id = @ErrorNo,
                    @severity = 0,
                    @enabled = 1,
                    @delay_between_responses = @DelayBetweenResponses,
                    @include_event_description_in = 1,
                    @category_name = @CategoryName,
                    @job_id = N'00000000-0000-0000-0000-000000000000'
            END
        END

        -- Only add the notification if the alert was actually created
        -- (855/856 may have been skipped on older versions/editions)
        IF EXISTS (SELECT * FROM msdb.dbo.sysalerts WHERE [name] = @AlertName)
        BEGIN
            EXEC msdb.dbo.sp_add_notification
                @alert_name = @AlertName,
                @operator_name = @OperatorName,
                @notification_method = 1
        END

        SELECT @Row = @Row + 1
    END
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE()
    IF @@TRANCOUNT > 0
    BEGIN
        ROLLBACK
    END
END CATCH

IF @@TRANCOUNT > 0
BEGIN
    COMMIT
END
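Once the script completes, you can sanity-check what was created with a quick query against msdb (a sketch):
SELECT
    a.name,
    a.message_id,
    a.severity,
    o.name AS operator_name,
    n.notification_method
FROM msdb.dbo.sysalerts a
LEFT JOIN msdb.dbo.sysnotifications n ON n.alert_id = a.id
LEFT JOIN msdb.dbo.sysoperators o ON o.id = n.operator_id
ORDER BY a.name;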
I’ve recently been using SendGrid on Azure to send email and came across this article, Cross-Platform Email Design, which has some good advice in it.
It’s disappointing that many email clients still prefer inline CSS styles…
Posting this code snippet, so I don’t forget it!
// Requires: using System.Net; (for WebClient)
public static string GetPublicIPAddress()
{
    string result = string.Empty;

    // List of public URLs to be tried in order
    string[] checkIPUrl =
    {
        "https://ipinfo.io/ip",
        "https://checkip.amazonaws.com/",
        "https://api.ipify.org",
        "https://icanhazip.com",
        "https://wtfismyip.com/text"
    };

    using (var client = new WebClient())
    {
        client.Headers["User-Agent"] = "Mozilla/4.0 (Compatible; Windows NT 5.1; MSIE 6.0) " +
            "(compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)";

        foreach (var url in checkIPUrl)
        {
            try
            {
                result = client.DownloadString(url);
            }
            catch
            {
                // Ignore errors and fall through to the next URL
            }

            if (!string.IsNullOrEmpty(result))
                break;
        }
    }
    return result.Replace("\n", "").Trim();
}
Currently, the Microsoft Azure Storage Explorer doesn’t have the ability to move messages from one Azure queue to another. During development, I sometimes need to retry messages that are in the poison message queue. A poison message is a message that has exceeded the maximum number of delivery attempts to the application. This happens when a queue-based application cannot process a message because of errors.
It’s not difficult to write, but I thought it might save someone a few minutes, so I’m posting it here. You’ll need to add a NuGet package reference to Microsoft.NET.Sdk.Functions.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

void Main()
{
    const string queuename = "MyQueueName";
    string storageAccountString = "xxxxxx";
    RetryPoisonMessages(storageAccountString, queuename);
}

// Moves every message from "<queuename>-poison" back onto the main queue
private static int RetryPoisonMessages(string storageAccountString, string queuename)
{
    CloudQueue targetqueue = GetCloudQueueRef(storageAccountString, queuename);
    CloudQueue poisonqueue = GetCloudQueueRef(storageAccountString, queuename + "-poison");

    int count = 0;
    while (true)
    {
        var msg = poisonqueue.GetMessage();
        if (msg == null)
            break;

        // Delete from the poison queue before re-adding to the target queue;
        // note that if AddMessage were to throw here, the message would be lost
        poisonqueue.DeleteMessage(msg);
        targetqueue.AddMessage(msg);
        count++;
    }
    return count;
}

private static CloudQueue GetCloudQueueRef(string storageAccountString, string queuename)
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageAccountString);
    CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
    CloudQueue queue = queueClient.GetQueueReference(queuename);
    return queue;
}
It seems that the maximum key size Microsoft’s RSA provider supports is 2048 bits: Is there a limit for RSA key exponents in .NET?
I’ve just been experimenting with 3072 and 4096 bit keys without success. The error message is “Bad Data”!
Nice one, Microsoft! Thanks for your concern about security!
Surprising, since NIST’s recommendation is that 3072-bit keys are required “if security is required beyond 2030”.