1) Has the table schema been examined to minimise the storage space per row? Whilst this sounds obvious, it is surprising how much can be gained from it. (I managed to halve the row sizes in a previous project’s data warehouse.)
Techniques include using the smallest data types the data allows (a sketch follows the sub-list):
a) Use varchar instead of nvarchar, unless you genuinely need Unicode data (nvarchar stores two bytes per character).
b) Use smalldatetime unless you need the precision. [The smalldatetime datatype is 4 bytes and accurate to the nearest minute, whereas datetime is 8 bytes with a precision of 3.33 milliseconds.]
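As an illustration (the table and column names here are hypothetical), the second definition below holds the same information in roughly half the bytes per row:

    -- Wider types: bigint = 8 bytes, nvarchar = 2 bytes per character, datetime = 8 bytes
    CREATE TABLE dbo.FactSale_Wide
    (
        SaleID        bigint         NOT NULL,
        StoreCode     nvarchar(10)   NOT NULL,
        SaleAmount    decimal(19,4)  NOT NULL,   -- 9 bytes at this precision
        SaleDateTime  datetime       NOT NULL    -- 8 bytes, 3.33 ms precision
    );

    -- Narrower types: int = 4 bytes, varchar = 1 byte per character, smalldatetime = 4 bytes
    CREATE TABLE dbo.FactSale_Narrow
    (
        SaleID        int            NOT NULL,   -- fine below ~2.1 billion rows
        StoreCode     varchar(10)    NOT NULL,
        SaleAmount    decimal(9,2)   NOT NULL,   -- 5 bytes, if the value range allows
        SaleDateTime  smalldatetime  NOT NULL    -- 4 bytes, minute precision
    );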
2) Turn on data compression, at row or page level (it has a CPU overhead, but can reduce the total database size by more than half).
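For example (the table name is hypothetical; page compression usually saves more space than row compression, at a higher CPU cost):

    -- Estimate the likely saving before committing to it
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo', @object_name = 'FactSale',
         @index_id = NULL, @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Rebuild the table with page compression (nonclustered indexes are compressed
    -- separately, via ALTER INDEX ... REBUILD)
    ALTER TABLE dbo.FactSale REBUILD WITH (DATA_COMPRESSION = PAGE);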
3) Turn on backup compression.
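This can be made the server-wide default, or requested per backup (the database name and path below are hypothetical):

    -- Make compression the default for all backups on this instance
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;

    -- Or request it for an individual backup
    BACKUP DATABASE MyDataWarehouse
        TO DISK = 'D:\Backups\MyDataWarehouse.bak'
        WITH COMPRESSION;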
4) Ensure instant file initialisation is turned on, by granting the SQL Server service account the Windows ‘Perform volume maintenance tasks’ right; this lets data files grow without being zero-initialised first.
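There is no T-SQL switch for this, since it depends on the Windows right above, but on later builds (roughly SQL Server 2016 SP1 onwards, where the column below was added) the current state can at least be checked from T-SQL:

    -- Shows whether data files can be created/grown without zero-initialisation
    SELECT servicename, instant_file_initialization_enabled
    FROM   sys.dm_server_services;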
5) Use multiple filegroups so that different parts of the database can be backed up (and restored) independently.
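A minimal sketch (the database, filegroup and file names/paths are hypothetical):

    -- Add a secondary filegroup with a file on a separate set of drives
    ALTER DATABASE MyDataWarehouse ADD FILEGROUP FG_Archive;
    ALTER DATABASE MyDataWarehouse
        ADD FILE (NAME = Archive01, FILENAME = 'E:\Data\Archive01.ndf', SIZE = 10GB)
        TO FILEGROUP FG_Archive;

    -- Back up that filegroup independently of the rest of the database
    BACKUP DATABASE MyDataWarehouse
        FILEGROUP = 'FG_Archive'
        TO DISK = 'D:\Backups\MyDataWarehouse_Archive.bak';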
6) Place critical tables in the PRIMARY filegroup to enable piecemeal restore (SQL Server 2005 onwards): the PRIMARY filegroup can be restored and brought online first, with the remaining filegroups following later.
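A sketch of a piecemeal restore, assuming the simple recovery model and the hypothetical names from the previous example (under the full recovery model, log backups also come into play):

    -- Restore and bring online the PRIMARY filegroup (the critical tables) first
    RESTORE DATABASE MyDataWarehouse
        FILEGROUP = 'PRIMARY'
        FROM DISK = 'D:\Backups\MyDataWarehouse_Primary.bak'
        WITH PARTIAL, RECOVERY;

    -- The remaining filegroups can follow later, while PRIMARY is already queryable
    RESTORE DATABASE MyDataWarehouse
        FILEGROUP = 'FG_Archive'
        FROM DISK = 'D:\Backups\MyDataWarehouse_Archive.bak'
        WITH RECOVERY;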
7) Use partitioning to break a table into several pieces that can be stored on different sets of drives. A rule of thumb is around the ‘20 – 30 million rows per partition’ mark, although this somewhat depends on the ‘natural’ partitioning key range (for example, one month per partition).
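A sketch of monthly partitioning (the function, scheme, filegroup and table names, and the boundary dates, are all hypothetical):

    -- One boundary per month; RANGE RIGHT means each boundary date starts a new partition
    CREATE PARTITION FUNCTION pf_SaleMonth (smalldatetime)
        AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

    -- Map the resulting four partitions to (pre-existing) filegroups on different drives
    CREATE PARTITION SCHEME ps_SaleMonth
        AS PARTITION pf_SaleMonth
        TO (FG_Old, FG_202401, FG_202402, FG_202403);

    -- Create the table on the partition scheme, partitioned by the sale date
    CREATE TABLE dbo.FactSale
    (
        SaleID        int            NOT NULL,
        StoreCode     varchar(10)    NOT NULL,
        SaleAmount    decimal(9,2)   NOT NULL,
        SaleDateTime  smalldatetime  NOT NULL
    ) ON ps_SaleMonth (SaleDateTime);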
8) Have a clear idea of your high availability requirements (the maximum downtime you can tolerate) up front, and validate your disaster recovery plans.