SQL Server: Diagnosing Intermittent Connectivity Issues

Common causes of intermittent SQL Server connectivity issues:

  • Queries or transactions holding long-running exclusive locks on tables; queries waiting on these locks may be timing out.
  • A faulty switch, network card, cable or other network infrastructure.
  • Another device on the network with the same IP address.
  • Incorrect connection protocol being used.
  • Virus scanners and incorrect exclusion lists.
  • Network/Port security scanners.
  • VM snapshot backups.
  • An under-powered SQL Server, transaction log backups over an under-powered network connection, or large transaction log file growth settings.
  • SQL Server under extreme memory pressure.
  • An application doing slow row-by-row processing while SQL Server is sending the data fast; this often shows up as a high value for ASYNC_NETWORK_IO waits.
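To see which wait types dominate on the instance, query sys.dm_os_wait_stats (a sketch; the excluded “benign” wait types vary by version, so the exclusion list here is illustrative, not exhaustive). A high ASYNC_NETWORK_IO figure points at the client consuming rows slowly, not at SQL Server itself:

SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

These counters are cumulative since the last restart (or since the wait statistics were last cleared), so interpret them against how long the instance has been up.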

The obvious question is: “What changed?”

When different parts of the system are managed by different people, the culprit can be a seemingly innocuous change that is not well documented or communicated to the various stakeholders.

Is a server alias being used?

Have TCP/IP dynamic ports been enabled for the TCP/IP protocol?

See How to Configure a SQL Server Alias for a Named Instance on a Development Machine

Is the SQL Server a named instance?

If you don’t specify the protocol in the connection string, then it will use the protocol specified for the database driver. The protocol order, by default, is Shared Memory, TCP/IP and then Named Pipes.

To list connections and the protocol they are using:

SELECT * FROM sys.dm_exec_connections;
-- WHERE session_id = @@SPID;

To specify which protocol to use in a connection string, add tcp: before the server name:

var connectionString = @"Data Source=tcp:Server\Instance; …";

Is IPv6 being used?

Was the server upgraded?

If so, there might be queries that are timing out due to optimiser/cardinality estimator changes.
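A quick check after an upgrade is each database’s compatibility level, since databases still on an older level continue to use the legacy cardinality estimator, while level 120 and above use the new one:

-- Databases on compatibility level < 120 use the legacy cardinality estimator.
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;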

Are all applications experiencing intermittent connections?

If you have a monitoring tool connected to the SQL Server, is it able to connect even when the problem is occurring?

Are there any errors in the SQL Server or application server logs?

If all application servers lose network connectivity regularly at the same time, it might be a hardware issue such as a faulty switch or network card.

Does it happen to all application servers?

If you have several application or web servers, are they all affected?

[As an aside, if you’ve only got one, you might want to consider setting up another for load balancing and troubleshooting.]

Are all queries in the application affected, or just some queries?

Some queries or transactions may be holding long-running exclusive locks on tables; queries waiting on these locks might be timing out.
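To see, while the problem is happening, which sessions are blocked and by whom, a sketch using sys.dm_exec_requests:

-- Requests currently waiting, and the session that is blocking them.
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS wait_time_ms,
    t.text      AS query_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;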

Do timeouts occur at regular days/times?

Could it be regularly scheduled port security scans, or VM backups, or large transaction log file growths?  Use a fixed transaction log file growth setting (128MB is commonly used) rather than a percentage.
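To switch a transaction log file from percentage growth to a fixed increment (the database and logical file names below are placeholders; check yours in sys.database_files):

-- Set a fixed 128MB growth increment on the transaction log file.
-- 'MyDatabase' and 'MyDatabase_log' are placeholder names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 128MB);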

During the timeouts, is the application server able to ping the SQL Server?

Set up a script to ping the SQL Server every 10-15 seconds. If you are not able to ping the SQL Server next time the application has query timeouts, then it is not a SQL Server problem.

Faster I/O Subsystem Causing More WRITELOG Waits?

Paul Randal explains how WRITELOG waits can increase with a faster I/O subsystem:

I was contacted last week by someone who was asking how are faster disks causing more WRITELOG waits. They were seeing lots of these waits, with an average wait time of 18ms. The log was stored on a Raid-1 array, using locally-attached spinning disks in the server. They figured that by moving the log to Raid-1 array of SSDs, they’d reduce the WRITELOG wait time and get better workload throughput.

They did so and got better performance, but were very surprised to now see WRITELOG as the most frequent wait type on the server, even though the average wait time was less than 1ms, and asked me to explain…

Azure SQL DB Storage Bottleneck

If you are using Azure SQL Databases, you should definitely read this post by Brent Ozar: There’s a bottleneck in Azure SQL DB storage throughput. The bottom line:
the transaction log throughput currently appears to bottleneck at 16 cores!

The bit where he compares AWS costs/relative performance is also an eye opener:

  • 8 cores, $1,991 per month: 64 minutes
  • 16 cores, $3,555 per month: 32 minutes (and interestingly, it’s the same speed with zone redundancy enabled)
  • 80 cores, $18,299 per month: 32 minutes
  • Just for reference: 8-core AWS EC2 i3.2xl VM, $1,424 per month with SQL Server Standard Edition licensing: 2 minutes (and I don’t put that in to tout AWS, I just happen to have most of my lab VMs there, so it was a quick comparison)

TransactionScope Default Isolation Level

When you create a new TransactionScope, you might be surprised to find that the default isolation level is Serializable.

TransactionScope’s default constructor defaults the isolation level to Serializable and the timeout to 1 minute. In SQL Server, SERIALIZABLE transactions are rarely useful and prone to deadlocks.

If you are using a TransactionScope and getting deadlocks in your application (and hopefully you have a retry mechanism for SQLException 1205), it might be due to the Serializable isolation level.

Instead of using the default constructor:

using (var scope = new TransactionScope())
{
    // .... Do Stuff ....

    scope.Complete();
}

Use this:



using (var scope = TransactionUtils.CreateTransactionScope())
{
    // .... Do Stuff ....

    scope.Complete();
}

.....

public class TransactionUtils
{
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.MaximumTimeout
        };

        return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
    }
}

[The other gotcha relates to SQL Server 2014 and below: TransactionScope and Connection Pooling]

Reducing Azure Functions Cold Start Time

You can host a serverless function in Azure in two different modes: Consumption plan and Azure App Service plan. The Consumption plan automatically allocates compute power when your code is running. Your app is scaled out when needed to handle load, and scaled down when code is not running. You don’t have to pay for idle VMs or reserve capacity in advance.

The consumption plan is likely the one you opted to use. You get 1 million free executions and 400,000 GB-s of resource consumption per month and pay-as-you-go pricing beyond that. It can also scale massively. The downside of using the consumption plan is that if there is no activity after five minutes (the default), your code will be unloaded by the Azure Functions host. The next time your function is invoked, your function will be loaded from scratch. I’ve experienced start up times ranging from a few seconds to several minutes.

With version 1.x of Azure Functions you can raise the function unload timeout to 10 minutes via the functionTimeout setting in the host.json settings file.
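The corresponding host.json entry looks like this (the value is a timespan; 10 minutes is the maximum on the version 1.x consumption plan):

{
  "functionTimeout": "00:10:00"
}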

If you want to keep your functions in memory, you either need to be calling them more frequently than the function timeout setting, or you can create another function in the same App Service plan that gets called periodically using a timer trigger:


using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace SQLFrontlineLoaderFunctionApp
{
    public static class KeepAliveTimerFunction
    {
        // Timer trigger is in UTC timezone 
        [FunctionName("KeepAliveTimerFunction")]
        public static void Run([TimerTrigger("0 */9 * * * *")]TimerInfo myTimer, TraceWriter log)
        {
            log.Info($"Alive: {DateTime.Now}");
        }
    }
}

This empty function will be called every 9 minutes (“0 */9 * * * *”). You could also keep your functions warm only during certain periods of the day. This timer trigger cron expression would run every 9 minutes between 4am and 10am (UTC time zone): [TimerTrigger("0 */9 4-10 * * *")]


PostgreSQL Unused Indexes

To find indexes that have not been used since the last statistics reset with pg_stat_reset(), run this on your production server:

SELECT 
    u.schemaname,
    u.relname AS tablename,
    u.indexrelname AS indexname,
    pg_relation_size(u.indexrelid) AS index_size
FROM 
    pg_catalog.pg_stat_user_indexes u
    JOIN pg_catalog.pg_index i ON u.indexrelid = i.indexrelid
WHERE
    u.idx_scan = 0           -- never been scanned
    AND 0 <> ALL (i.indkey)  -- no index column is an expression
    AND NOT EXISTS           -- index not used to enforce a constraint
       (SELECT 1 FROM pg_catalog.pg_constraint c WHERE c.conindid = u.indexrelid)
ORDER BY 
    pg_relation_size(u.indexrelid) DESC;

Indexes that are very rarely used can also be good candidates for removal; replace u.idx_scan = 0 in the query with a different threshold, e.g. u.idx_scan < 5. BUT be aware of indexes whose purpose is to support queries which are run infrequently such as monthly reporting queries.

And, always script out your indexes before you remove them.
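One way to capture the definitions before dropping anything: pg_get_indexdef returns the full CREATE INDEX statement for each index, so you can save the output of a query like this (a sketch, filtered the same way as above):

-- Save the CREATE INDEX statement for each never-scanned index before dropping it.
SELECT pg_get_indexdef(u.indexrelid) || ';' AS create_statement
FROM pg_catalog.pg_stat_user_indexes u
WHERE u.idx_scan = 0;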

Further useful index scripts can be found here.

SQL Agent Jobs Starting at the Same Time

The following query will list SQL Agent jobs that are due to start execution at the same time:

;with cteNextRunDatetime as
(
    SELECT job_id, next_scheduled_run_date = max(next_scheduled_run_date)
    FROM msdb.dbo.sysjobactivity 
    WHERE next_scheduled_run_date IS NOT NULL
    GROUP BY job_id 
)
, cteSimultaneousJobsDates as
(
    SELECT next_scheduled_run_date 
    FROM cteNextRunDatetime
    GROUP BY next_scheduled_run_date 
    HAVING COUNT(*) > 1
)
SELECT 
    sj.name, 
    sj.description, 
    c.next_scheduled_run_date
FROM 
    msdb.dbo.sysjobs sj
    JOIN cteNextRunDatetime sja ON sj.job_id = sja.job_id
    JOIN cteSimultaneousJobsDates c on c.next_scheduled_run_date = sja.next_scheduled_run_date
WHERE
    sj.enabled = 1
ORDER BY 
    sja.next_scheduled_run_date