How can SQLFrontline help me as a DBA, Accidental DBA or Developer?

If you’re doing a good job as a DBA, you want in-depth knowledge of all the SQL Servers you’re responsible for: their hardware, configuration and performance. You want to be proactive, not swamped. You want to be actively notified when things change or fail, and ideally you want all of this documented, so you can show it to your boss!

Rather than you having to remember to run checks on servers (and when you get busy, it’s easy to miss something), SQLFrontline does this for you on a scheduled basis (usually once per day, but configurable). You don’t have to install any software or configure any local repositories.

As an accidental DBA (despite reading blogs and investing time learning SQL Server), you might not be aware of everything required to make SQL Server reliable and perform well. SQLFrontline embodies SQL Server industry best practices. Not only does it notify you of the problems it finds, it also explains each problem and shows you how to fix it.

As a developer, you want to be made aware of any design decisions that could affect performance, and/or track down any existing performance/blocking issues.

I use SQLFrontline in my day-to-day SQL Server consultancy business. It generally takes me less than an hour to do what would previously have taken over a week (running and documenting the results of over 300 checks is time consuming). I recently used SQLFrontline to efficiently diagnose and fix all the SQL Servers for an entire business, standardising them and bringing them in line with industry best practice in the process.

SQLFrontline: a Diagnostic and Monitoring Service for all your SQL Servers

If you’re an overworked DBA (or a DBA that wants to be responsible for more servers and databases), an accidental DBA, or a developer wanting to get a better handle on your SQL Server and databases, SQLFrontline can help you do that.

SQLFrontline is a lightweight, low impact, nothing-to-install diagnostic tool for all SQL Server versions from 2008 onwards. It carries out over 300 checks (more are added frequently) across the areas of security, reliability, performance, configuration and database design. Issues are organised by server and by priority (Critical, High, Medium, Info).

It compares results between data collections and notifies you of what was fixed. It also has a built-in reminder list for high priority items that haven’t been fixed over time.

It even has a demo mode, so you can try it out for free.

Here’s an excerpt from an example notification email:

SQL Server: Don’t Make the Query Optimiser’s Job More Difficult

Part of my job is tuning complex queries: I’ve seen some recently with eye-watering complexity. A post from Erik Darling explains how abstraction can be the cause of poor performance in SQL Server:

Sometimes it’s views, CTEs, or derived tables. Sometimes it’s functions. Obviously functions can have a weirder set of effects, but the general idea is the same.

If you start chaining things, or nesting them together, you’re making the optimizer’s job harder and likely introducing a lot of overhead.

There’s no “caching” of steps in a query. If you nest a view however-many-levels-deep, each step isn’t magically materialized.

Same goes for CTEs. If you string a bunch together and reference them multiple times, you’ll start to see some very repetitive branches in your query plans.

Now, there are tricks you can play to get what happens inside of one of these steps “fenced off”, but not to get the result set fully materialized.

In addition, as your query becomes more complex, the number of potential query plans explodes, and the query optimiser will eventually stop searching (a plan timeout) and settle for the best plan it has found so far, which can be far from efficient.
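To make the repeated-work point concrete, here’s a minimal sketch (the dbo.Sales table and its columns are invented for illustration). The CTE is referenced twice, so its aggregate over Sales appears, and runs, as two separate branches in the query plan:

-- Hypothetical table: dbo.Sales (CustomerID, Amount) is illustrative only.
WITH TotalsByCustomer AS
(
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM dbo.Sales
    GROUP BY CustomerID
)
SELECT
    t.CustomerID,
    t.TotalAmount,
    -- Second reference to the CTE: this is not a cached result, it is a
    -- second full aggregation of dbo.Sales in the plan.
    t.TotalAmount * 100.0
        / (SELECT SUM(TotalAmount) FROM TotalsByCustomer) AS PercentOfTotal
FROM TotalsByCustomer t;

If the intermediate result is expensive, materialising it yourself into a temp table (SELECT ... INTO #TotalsByCustomer) is one way to “fence off” that step so the work is done only once.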

Erik references Grant Fritchey’s post from 2012, The Seven Sins against TSQL Performance, which is still as relevant today.

SQL Server Security: Find Users with Weak Passwords

Data breaches are common, and the cause is often as simple as a weak password.

SQL Server has an internal system function, PWDCOMPARE(), which can be used to find SQL logins with a weak password. A list of very common weak passwords can be found here, as well as in many other places.

IF OBJECT_ID('tempdb..#CommonPasswords') IS NOT NULL
    DROP TABLE #CommonPasswords;

CREATE TABLE #CommonPasswords
(
    -- Case-sensitive collation: SQL Server passwords are case sensitive, and
    -- this also lets variants such as 'monkey' and 'Monkey' coexist under the
    -- primary key.
    Password varchar(30) COLLATE Latin1_General_CS_AS NOT NULL PRIMARY KEY
);

INSERT INTO #CommonPasswords(Password) VALUES
(''),
('123'),
('1234'),
('12345'),
('123456'),
('1234567'),
('12345678'),
('123456789'),
('1234567890'),
('987654321'),
('123qwe'),
('mynoob'),
('18atcskd2w'),
('55555'),
('555555'),
('3rjs1la7qe'),
('google'),
('zxcvbnm'),
('000000'),
('1q2w3e'),
('1q2w3e4r5t'),
('1q2w3e4r'),
('qwerty'),
('qwerty123'),
('password'),
('p@ssword'),
('p@ssw0rd'),
('password1'),
('p@ssword1'),
('password123'),
('passw0rd'),
('111111'),
('1111111'),
('abc123'),
('666666'),
('7777777'),
('654321'),
('123123'),
('123321'),
('iloveyou'),
('admin'),
('nimda'),
('welcome'),
('welcome!'),
('!@#$%^&*'),
('aa123456'),
('lovely'),
('sunshine'),
('shadow'),
('princess'),
('solo'),
('football'),
('monkey'),
('Monkey'),
('charlie'),
('donald'),
('Donald'),
('dragon'),
('Dragon'),
('trustno1'),
('letmein'),
('whatever'),
('hello'),
('freedom'),
('master'),
('starwars'),
('qwertyuiop'),
('Qwertyuiop'),
('qazwsx'),
('corona'),
('woke'),
('batman'),
('superman'),
('login');

-- Check every SQL login's password hash against the list of common passwords.
SELECT 
    sl.name,
    sl.create_date,
    sl.is_disabled
FROM 
    sys.sql_logins sl WITH (NOLOCK)
    CROSS JOIN #CommonPasswords cp
WHERE 
    PWDCOMPARE(cp.Password, sl.password_hash) = 1

UNION ALL

-- Also flag logins whose password is the same as the login name.
SELECT 
    sl.name,
    sl.create_date,
    sl.is_disabled
FROM 
    sys.sql_logins sl WITH (NOLOCK)
WHERE 
    PWDCOMPARE(sl.name, sl.password_hash) = 1;
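
When a weak password is found, force a reset. Here’s a sketch of the fix ([SomeLogin] and the password are placeholders); note that MUST_CHANGE requires both CHECK_POLICY and CHECK_EXPIRATION to be on:

-- [SomeLogin] is a placeholder. This forces a new password at the next
-- sign-in and enforces the Windows password policy from now on.
ALTER LOGIN [SomeLogin]
    WITH PASSWORD = 'Ch@nge-Me-N0w!' MUST_CHANGE,
         CHECK_POLICY = ON,
         CHECK_EXPIRATION = ON;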


Troy Hunt has collected passwords from several major data breaches and made them searchable.

PostgreSQL Unused Indexes

To find indexes that have not been used since the last statistics reset with pg_stat_reset(), run this on your production server:

SELECT 
    u.schemaname,
    u.relname AS tablename,
    u.indexrelname AS indexname,
    pg_relation_size(u.indexrelid) AS index_size
FROM 
    pg_catalog.pg_stat_user_indexes u
    JOIN pg_catalog.pg_index i ON u.indexrelid = i.indexrelid
WHERE
    u.idx_scan = 0           -- never been scanned
    AND 0 <> ALL (i.indkey)  -- no index column is an expression
    AND NOT EXISTS           -- index not used to enforce a constraint
       (SELECT 1 FROM pg_catalog.pg_constraint c WHERE c.conindid = u.indexrelid)
ORDER BY 
    pg_relation_size(u.indexrelid) DESC;

Indexes that are used very rarely can also be good candidates for removal: replace u.idx_scan = 0 in the query with a different threshold, e.g. u.idx_scan < 5. But be aware of indexes whose purpose is to support infrequently run queries, such as monthly reporting.

And always script out your indexes before you remove them.
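
In PostgreSQL, pg_get_indexdef() will generate the CREATE INDEX statement for you; a quick sketch, reusing the statistics view from the query above:

-- Script out the definition of each never-used index before dropping it,
-- so it can be recreated later if it turns out to be needed.
SELECT pg_get_indexdef(u.indexrelid) AS create_index_statement
FROM pg_catalog.pg_stat_user_indexes u
WHERE u.idx_scan = 0;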

Further useful index scripts can be found here.