Changes to Microsoft Certified Master: Microsoft SQL Server 2008

When I first heard about the SQL Server Microsoft Certified Master (MCM) certification, it immediately struck me as something worth doing. Then I found out how much it cost! Microsoft have recently announced changes to the original program. To pursue the SQL Server MCM certification you still need the prerequisites MCITP: Database Administrator 2008 (Exam 70-450) and MCITP: Database Developer 2008 (Exam 70-451). If you work with SQL Server, these two exams are relatively easy (unlike the rest of the MCM).

The Original Program:

  • Cost: US$18,500 for 3 continuous weeks of onsite training, based in Seattle
  • Three written multiple-choice exams during the training
  • One final six-hour lab

This has some obvious drawbacks: US$18,500 is a considerable sum of money to find in one go (even with the strong Aussie dollar), and that’s in addition to taking 3 continuous weeks away from work. Plus the course only happened in Redmond, so add travelling costs as well.

The New Program:

Microsoft will no longer provide training for the SQL MCM. Instead, training will be provided by approved third-party vendors. I’ve just visited the Prometric site but cannot find any test centres for the MCM exams in Australia!

If the MCM is something you are interested in, Microsoft (in partnership with SQLskills) has made 40 hours of free introductory training available for the MCM program, which you can download here. These videos give an indication of the breadth of material you need to know for each area (rather than everything you actually need to know). There is also an excellent reading list put together by SQLskills at Microsoft Certified Master – SQL Server Online Training – Links and Study Aids.

(Ref.: Brent Ozar’s blog, Paul Randal’s blog)

DevJam 2010

I really enjoy our User Group’s end of year ‘DevJam’ events (despite the fact that they involve a bit of work to organise). It is great to see so many local developers willing to give up their time to present a talk. Around 50 people attended last Thursday’s event, so it was a relief when the pizza finally arrived. A crowd of developers deprived of pizza is a riot waiting to happen! 😉

It was a great line-up of topics with a nice mixture of content, and every presenter did a great job of informing and entertaining. Every talk received a good number of audience votes, but we have to have some winners in order to give out the prizes! The favourite talks (in order) were:

  1. Adrian McGrath  – WPF: Design Time Data
  2. Alistair Waddell  – A simple approach to test your data driven app
  3. Michael Minutillo – Dynamic UI in WinForms

Thank you to all the sponsors who help make these events possible: Microsoft (for the venue and VS2010 MSDN door prize), UGSS for the pizza and refreshments, JetBrains, Joe Albahari, RedGate, TekPub, Scooter Software, DevExpress, TechSmith. [The presenter VS2010 with MSDN prize was donated by myself from the MVP program.]

Thank you to everyone who attended DevJam and meetings throughout the year. Special thanks to Dave Gardner who fetched and delivered the beer.

[ If you are interested in running one of these events at your User Group, I put together some notes here:  Running a DevJam Community Event ]

Reminder: Perth .NET User Group, Thurs 2nd Dec: DevJam Community Event

Here’s the line-up of talks for our end of year DevJam community event: 

  • Mike Bain – Using SpecFlow and WatiN
  • Michael Minutillo – Dynamic UI in WinForms
  • Piers Williams – Rx:Reactive Framework
  • Alistair Waddell – A simple approach to test your data driven app
  • Dave Gardner – NuGet: The caramel goodness of open source in your projects
  • Darren Helsby – Asynchronous programming in .NET 5
  • Sam Ritchie – Ditching BizTalk for NServiceBus
  • Adrian McGrath – WPF: Design Time Data

Where and when is DevJam happening?

    • DATE:   Thursday, Dec 2nd, 5:30pm
    • VENUE:  Enex100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
    • COST:   Free. All welcome

There will be pizza and refreshments provided. Not to mention some great prizes!

SQL Server 2008: Using Hex Values in TSQL Scripts

I was updating several TSQL scripts today containing ARGB values and thinking (not for the first time) that it was a shame I couldn’t use the hex values directly (e.g. 0xFFB40000) rather than converting to the harder-to-manage base-10 integer equivalent (-4980736).
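For reference, that negative base-10 value is simply the same four ARGB bytes interpreted as a signed 32-bit integer (SQL Server’s int type). A quick round-trip sketch of the conversion, in Python for illustration (the helper names are mine):

```python
def argb_hex_to_int(hex_str: str) -> int:
    """Interpret a 0x-prefixed 8-digit ARGB hex string as a signed 32-bit int."""
    value = int(hex_str, 16)
    # Apply two's complement when the sign bit (top alpha bit) is set
    return value - 0x100000000 if value >= 0x80000000 else value

def int_to_argb_hex(value: int) -> str:
    """Format a signed 32-bit int back as a 0x-prefixed ARGB hex string."""
    return f"0x{value & 0xFFFFFFFF:08X}"

print(argb_hex_to_int("0xFFB40000"))  # -4980736
print(int_to_argb_hex(-4980736))      # 0xFFB40000
```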

The SQL Server 2008 release updated the CONVERT() function to be able to do just that!

     CONVERT(varbinary(4), '0xFFB40000', 1)

The last Style parameter governs the conversion.

From Books Online:

Binary Styles

When expression is binary(n), varbinary(n), char(n), or varchar(n), style can be one of the values shown in the following table. Style values that are not listed in the table return an error.

 

Value: 0 (default)

Translates ASCII characters to binary bytes, or binary bytes to ASCII characters. Each character or byte is converted 1:1.

If the data_type is a binary type, the characters 0x are added to the left of the result.

Value: 1, 2

If the data_type is a binary type, the expression must be a character expression. The expression must be composed of an even number of hexadecimal digits (0–9, A–F, a–f). If the style is set to 1, the characters 0x must be the first two characters in the expression. If the expression contains an odd number of characters, or if any of the characters are invalid, an error is raised.

If the length of the converted expression is greater than the length of the data_type, the result will be right truncated.

Fixed-length data_types that are larger than the converted result will have zeros added to the right of the result.

If the data_type is a character type, the expression must be a binary expression. Each binary byte is converted into two hexadecimal characters. If the length of the converted expression is greater than the data_type length, it will be right truncated.

If the data_type is a fixed-size character type and the length of the converted result is less than the length of the data_type, spaces are added to the right of the converted expression to maintain an even number of hexadecimal digits.

The characters 0x will be added to the left of the converted result for style 1.
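The hex-string parsing rules above can be condensed into a few lines. This is an illustrative Python sketch of the character-to-binary direction only, not SQL Server’s actual implementation (the function name is mine):

```python
def hex_to_varbinary(expr: str, style: int) -> bytes:
    """Sketch of CONVERT(varbinary, expr, style) for styles 1 and 2."""
    if style == 1:
        # Style 1 requires the literal 0x prefix
        if not expr.startswith("0x"):
            raise ValueError("style 1 requires a leading 0x")
        expr = expr[2:]
    elif style != 2:
        raise ValueError("this sketch only covers styles 1 and 2")
    if len(expr) % 2 != 0:
        raise ValueError("expression must contain an even number of hex digits")
    return bytes.fromhex(expr)  # raises ValueError on invalid hex digits

print(hex_to_varbinary("0xFFB40000", 1))  # b'\xff\xb4\x00\x00'
```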

ASP.NET Code Samples Collection

If you missed this in the last MSDN Flash, J. D. Meier (Developer Guidance, Patterns & Practices) has a roundup of various ASP.NET code samples on his blog: ASP.NET Code Samples Collection

The ASP.NET Code Samples Collection is a roundup and map of ASP.NET code samples from various sources including the MSDN library, www.ASP.net, Code Gallery, CodePlex, and Microsoft Support.

Determine if your .NET Application has a Memory Leak

You can use Perfmon to determine if your .NET application is leaking memory (managed or unmanaged).

Run Perfmon.exe and add these counters:

  • Process / Private bytes
  • .NET CLR Memory / # bytes in all heaps
  • .NET CLR LocksAndThreads / # of current logical threads

Run your application for a representative length of time and exercise its functionality.

If Private bytes is increasing but # bytes in all heaps is not, then unmanaged memory is leaking. If both are increasing, then managed memory is leaking. If the # of current logical threads is increasing beyond what you expect, then thread stacks are leaking.

If Private Bytes is periodically jumping in 1MB increments with a corresponding increase in # of current logical Threads, a thread stack leak is the likely cause.
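The counter-reading heuristic above boils down to a small decision table. A hypothetical helper (the function and parameter names are mine, mirroring the Perfmon counters) sketched in Python:

```python
def diagnose(private_bytes_rising: bool,
             heap_bytes_rising: bool,
             logical_threads_rising: bool) -> str:
    """Map trends in the three Perfmon counters to a likely leak type."""
    if logical_threads_rising:
        # Unexpected growth in logical threads points at leaked thread stacks
        return "thread stack leak"
    if private_bytes_rising and heap_bytes_rising:
        # GC heaps growing along with the process: managed objects are retained
        return "managed memory leak"
    if private_bytes_rising:
        # Process grows but GC heaps do not: leak is outside the managed heaps
        return "unmanaged memory leak"
    return "no leak indicated"

print(diagnose(True, False, False))  # unmanaged memory leak
print(diagnose(True, True, False))   # managed memory leak
```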

[Tracking it down is another matter! For that, one of the commercially available tools is invaluable: Memprofiler, ANTS Memory Profiler, dotTrace]

.NET CLR Large Object Heap

I was recently talking to several colleagues about the CLR garbage collector, and the question came up of where the 85,000-byte threshold for objects on the Large Object Heap (LOH) comes from (instead of the seemingly more obvious value of 65,535).

This value was determined as a result of performance tuning by the CLR garbage collector team.

One of the reasons you should try to keep your object allocation sizes below this value (and off the Large Object Heap) is that, unlike the Generation 0, 1 and 2 areas, the LOH is not compacted.

An excellent reference for the LOH is here: Large Object Heap Uncovered

There were changes made in .NET 4.0 to how the LOH performs, but according to this Connect issue there are still improvements that could be made:

Based on the example provided, we were able to allocate nearly 23 times as much memory before running out of memory on the large object heap going from version 3.5 to version 4. That’s not to say we are finished addressing fragmentation issues—we will continue to pay attention as we improve in future versions. In the .NET 4 release, we heard from customers that latency was a high priority. So that is where we have spent much of our focus. [Brandon Bray: Lead program manager responsible for the garbage collector in the .NET Framework]

.NET 4.0 introduced differences between the workstation and server versions of the garbage collector:

Comparing Workstation and Server Garbage Collection

Threading and performance considerations for workstation garbage collection:

  • The collection occurs on the user thread that triggered the garbage collection and remains at the same priority. Because user threads typically run at normal priority, the garbage collector (which runs on a normal priority thread) must compete with other threads for CPU time.

    Threads that are running native code are not suspended.

  • Workstation garbage collection is always used on a computer that has only one processor, regardless of the setting. If you specify server garbage collection, the CLR uses workstation garbage collection with concurrency disabled.

Threading and performance considerations for server garbage collection:

  • The collection occurs on multiple dedicated threads that are running at THREAD_PRIORITY_HIGHEST priority level.

  • A dedicated thread to perform garbage collection and a heap are provided for each CPU, and the heaps are collected at the same time. Each heap contains a small object heap and a large object heap, and all heaps can be accessed by user code. Objects on different heaps can refer to each other.

  • Because multiple garbage collection threads work together, server garbage collection is faster than workstation garbage collection on the same size heap.

  • Server garbage collection often has larger size segments.

  • Server garbage collection can be resource-intensive. For example, if you have 12 processes running on a computer that has 4 processors, there will be 48 dedicated garbage collection threads if they are all using server garbage collection. In a high memory load situation, if all the processes start doing garbage collection, the garbage collector will have 48 threads to schedule.

If you are running hundreds of instances of an application, consider using workstation garbage collection with concurrent garbage collection disabled. This will result in less context switching, which can improve performance.
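The 48-thread figure above is just the per-CPU, per-process arithmetic made explicit; a trivial sketch (the function name is mine):

```python
def server_gc_threads(process_count: int, cpu_count: int) -> int:
    """Server GC dedicates one collection thread (and one heap) per CPU,
    in every process that opts in to server garbage collection."""
    return process_count * cpu_count

# The example from the documentation: 12 processes on a 4-processor machine
print(server_gc_threads(12, 4))  # 48
```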

Ref.: Fundamentals of Garbage Collection

Reminder: Perth .NET User Group Meeting: Thurs Nov 4th, Production .NET Debugging with Piers Williams

In this talk, you will learn how to debug your application in the field with this exploration of advanced debugging techniques for managed code applications. Hang, crash, leak or other – we will go beyond Visual Studio debugging and talk about replicating and isolating problems using Perfmon, WMI, WinDbg, and automating the diagnosis with PowerShell using PowerDbg.

  • TOPIC:  Production .NET Debugging with Piers Williams

  • DATE:   Thursday, Nov 4th, 5:30pm – 7:00pm

  • VENUE: Enex100 Seminar Room, Level 3, 100 St Georges Terrace, Perth

  • COST:   Free. All welcome

Piers is a .NET Architect with Beacon Technology, and a contributor to PowerDbg (http://powerdbg.codeplex.com).

There will be a door prize of a choice of license from JetBrains (one of ReSharper, TeamCity Build Agent, dotTrace Profiler, RubyMine, IntelliJ IDEA), and several coupons to TekPub’s high-quality, online screencasts for programmers.

If you have any problems getting in to the venue, please call me on 0422 971 731

MSMVPS Site

When I was awarded MVP status, I had to decide whether to move my blog from blogger.com to the msmvps.com site. Conversations with a few MVPs about the site’s frequent downtime convinced me to leave my blog where it is. Seems like it was a good decision; I haven’t been able to access the msmvps site for several hours today.

Upgrading from SQL Server 2000 to 2008

I put together these short notes a while back while helping with an upgrade.

 

Side-by-Side Upgrade versus In-Place Upgrade

There are two fundamental strategies for upgrading:

  • In-place upgrade: uses the SQL Server 2008 Setup program to directly upgrade a SQL Server 2000 instance to SQL Server 2008. The older SQL Server instance is replaced.
  • Side-by-side upgrade: performs operations to move some or all data and other database components from SQL Server 2000 to a separate SQL Server 2008 instance.

In this case, we opted for a side-by-side upgrade, where a backup of the current database on SQL Server 2000 was restored to a clean install of SQL Server 2008 R2 and then upgraded in place.

 

Before upgrade

  • Run the Microsoft SQL Server 2008 Upgrade Advisor for all databases (including system databases). Review any deprecated features identified by the upgrade advisor (e.g. Notification Services, backup with TRUNCATE ONLY etc.) (SQL Server 2008’s TSQL parser is stricter and some things that ran in 2000 might fail in 2008, but it is mostly deprecated TSQL language features anyway)
  • If using a SAN: Benchmark the SAN using SQLIO and capture baseline data before performing the upgrade.
  • Run database/application performance benchmarks
  • Formulate and test a rollback plan
  • Ensure database consistency: Run DBCC CHECKDB on databases to be upgraded to ensure that they are in a consistent state before performing the upgrade.
  • Back up all important files: Back up all SQL Server databases from the instance to be upgraded (including master, model and msdb), data and transaction log files, as well as any application files, script files, extract files, etc., so that you can completely restore them if necessary.
  • Perform a transaction log backup for all user databases and disable all jobs, including jobs for taking full backups, defrag jobs and other maintenance plans. [It is also recommended to take an image of your database server, in case you have to roll back.]
  • If the operating system of the server is Windows Server 2003, ensure Service Pack 1 is installed. This needs to be done on all the nodes of a cluster in a failover clustered environment.
  • Reserve enough disk space: Estimate the disk space required to upgrade SQL Server components, user databases, and any database files that might need to be created during the upgrade process. You might need two to three times the amount of disk space during the upgrade process.
  • Ensure enough space for system databases: Configure system databases (master, model, msdb, and tempdb) to auto-grow during the upgrade process, and make sure that they have enough disk space for this growth.
  • Transfer logins: Ensure all logins are transferred to the master database on the target server before upgrading the database.
  • Disable all start up stored procedures: The upgrade process will usually stop and start services multiple times on the SQL Server instance being upgraded. Stored procedures set to execute on start-up might interfere with the upgrade process.
  • Stop replication: Stop any replication and make sure that the replication log is empty at the start of the upgrade process.
  • Quit all applications: Certain applications, including all services with SQL Server dependencies, might cause the upgrade process to fail if local applications are connected to the instance being upgraded.

After upgrade

  • Review all logs
  • Revisit Upgrade Advisor recommendations
  • Reconfigure Log Shipping (if this was in use)
  • Transfer and verify SQL Agent jobs and maintenance tasks
  • Verify security settings: especially if there any cross server and/or cross-domain access privileges
  • Recreate any full text indexes removed from the databases. Re-populate Full-Text catalogs
  • Update the statistics for all the databases. This step may take some time depending on the size of database.
  • Change compatibility level: After initially restoring SQL Server 2000/2005 databases to SQL Server 2008, the compatibility level will still be 80/90, which corresponds to SQL Server 2000/2005 respectively. To take advantage of the new features of SQL Server 2008, you must change the compatibility level to 100.
  • Rebuild Indexes: To optimize query performance.
  • Update usage counters: In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, run DBCC UPDATEUSAGE on all databases following the upgrade.

 

Resources