Large Object Heap and Arrays of Double

If you were asked where objects greater than or equal to 85,000 bytes are allocated in .NET, you would no doubt say on the Large Object Heap (LOH). What would you say if you were asked where an array of 1000 doubles would be allocated? At roughly 8 KB it is nowhere near the 85,000-byte threshold, yet currently, as of .NET 4.0, it will be allocated on the Large Object Heap!

William Wegerson (aka OmegaMan), a C# MVP, posted this item to Microsoft Connect, “Large Object Heap (LOH) does not behave as expected for Double array placement”, which describes and reproduces the behaviour:

byte[] arrayLessthan85K = new byte[84987]; // Note: 12 bytes of object overhead: 84987 + 12 = 84999, just under the threshold
Console.WriteLine("byteArrayLessthan85K: {0}", GC.GetGeneration(arrayLessthan85K)); // Returns 0

byte[] array85K = new byte[85000];
Console.WriteLine("byteArray85K: {0}", GC.GetGeneration(array85K)); // Returns 2

double[] array999Double = new double[999];
Console.WriteLine("array999Double: {0}", GC.GetGeneration(array999Double)); // Returns 0

double[] array1000double = new double[1000];
Console.WriteLine("array1000double: {0}", GC.GetGeneration(array1000double)); // Returns 2

By looking at the garbage collection generation of an object immediately after creation (GC.GetGeneration), we can identify whether it resides on the LOH: an object that starts life in generation 2 must have been allocated there, since the LOH is collected alongside generation 2.

The reason double arrays with 1000 or more elements are allocated on the LOH is performance: the LOH is aligned on 8-byte boundaries, whereas the small object heap on 32-bit systems only guarantees 4-byte alignment. This allows faster access to large arrays of doubles, and the trade-off point was determined to be 1000 elements.

According to Claudio Caldato, CLR Performance and GC Program Manager, “there’s no benefit to applying this heuristic on 64-bit architectures because doubles are already aligned on an 8-byte boundary”. Subsequent changes have been made to this heuristic that should appear in a future release of the .NET Framework.

As a side note, the expected behaviour is seen if you use Array.CreateInstance():

// As noticed by @Romout in the comments to that post, the same behaviour is not seen when using Array.CreateInstance()
double[] array1000doubleCreateInstance = (double[])Array.CreateInstance(typeof(double), 1000); // Returns 0
Console.WriteLine("With array-create: " + GC.GetGeneration(array1000doubleCreateInstance));

// Indeed, the expected tipping point into the LOH occurs when using Array.CreateInstance:
// 85000 / 8 = 10625 doubles; allowing 16 bytes (2 doubles) for the 12-byte object overhead gives 10623 doubles
double[] array1000doubleCreateInstance2 = (double[])Array.CreateInstance(typeof(double), 10623); // Returns 0
Console.WriteLine("With array-create: " + GC.GetGeneration(array1000doubleCreateInstance2));

double[] array1000doubleCreateInstance3 = (double[])Array.CreateInstance(typeof(double), 10624); // Returns 2
Console.WriteLine("With array-create: " + GC.GetGeneration(array1000doubleCreateInstance3));

Newline in Summary XML

You probably know this already, but just in case you don’t! If you want line breaks in your popup tooltip descriptions in Visual Studio, you add <para> elements to your XML summary comments, e.g.:

/// <summary>
/// Main comment
/// <para>Line 1</para>
/// <para>Line 2</para>
/// </summary>

public bool SomeProperty { get; set; }


and it appears like this:

[Screenshot: VSToolTip]

SQL Server Compact Toolbox

If you are using SQL Server Compact Edition (CE) and have not seen it already, the SQL Server Compact Toolbox is a Visual Studio 2010 Pro (or higher) add-in for SQL Server CE 3.5/4.0, and a standalone app for 4.0. It adds scripting, import, export, migration, rename, run script, replication management and more to your SQL Server Compact Data Connections in VS Server Explorer.

Written by Erik Ejlskov Jensen, whose aptly named blog, Everything SQL Server Compact, contains a wealth of tips, tricks and techniques relating to SQL Server Compact.

The toolbox adds several features to Server Explorer:

Scripting:

  • Script tables, including data, both DDL and DML
  • Script entire schema, optionally with data, from SQL Server Compact and SQL Server 2005/2008 databases
  • Import to SQL Server Compact from a SQL Server 2005/2008 database or a CSV file
  • Migrate from SQL Server Compact to SQL Server and SQL Azure
  • Migrate from SQL Server to SQL Server Compact
  • Create database diff scripts, compare with a SQL Server Compact or even a SQL Server database

Query editing:

  • Basic, free form query execution
  • Editor with syntax colouring
  • Parse SQL scripts
  • Display graphical estimated and actual execution plan
  • Check query duration

Other features:

  • Rename tables
  • Generate detailed DGML files for visualizing table columns and relationships (requires VS 2010 Premium or higher to view)
  • Generate an Entity Data Model (EDMX) in the current project for both 3.5 and 4.0 in any applicable project (WPF, WinForms, Class Library)
  • Remove invalid connection definitions from the Toolbox (and Server Explorer)
  • Create and manage SQL Server Merge Replication subscriptions
  • Data types node with documentation tooltips lists the 18 available data types
  • File version check (for version 2-4)
  • Upgrade version 3.x files to version 4 via the “Add version 4 connection” dialog
  • About dialog with detailed SQL Server Compact version information

Another of his posts, SQL Compact 3rd party tools, lists several 3rd party tools for CE, both commercial and non-commercial.

SQL Server 2008: Query Hash Statistics

Bart Duncan has released a very useful addition to the Data Collector capture/reporting abilities of SQL Server 2008. Query Hash Statistics provides low-overhead query cost monitoring, utilising the query fingerprint and query plan fingerprint (aka query hash and query plan hash) features that were added in SQL Server 2008. Query fingerprints enable you to get the cumulative cost of all executions of a query, even if the query is non-parameterized and has different inline literal values for each execution. Previously, the only way to get this type of query performance data was to capture a Profiler trace and run it through a post-processing tool.
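The fingerprints themselves are exposed directly in the sys.dm_exec_query_stats DMV, so the kind of aggregation the collector performs can be sketched in plain T-SQL. This is a hand-rolled illustration of the idea, not the Query Hash Statistics collector itself:

```sql
-- Aggregate the cumulative cost of all cached executions that share a
-- query fingerprint, even when inline literal values differ per execution.
-- Requires SQL Server 2008+ (query_hash column) and VIEW SERVER STATE permission.
SELECT TOP (20)
    qs.query_hash,
    SUM(qs.execution_count)              AS total_executions,
    SUM(qs.total_worker_time)            AS total_cpu_microseconds,
    SUM(qs.total_logical_reads)          AS total_logical_reads,
    MIN(CONVERT(nvarchar(200), st.text)) AS sample_statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
GROUP BY qs.query_hash
ORDER BY SUM(qs.total_worker_time) DESC;
```

Note this only sees what is currently in the plan cache; the point of the Data Collector-based solution is that it snapshots these numbers over time.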

Once installed, and after sufficient data has been collected, you can access the collected information via two custom reports.

[Screenshot: queryHash1]

Perth .NET User Group Meeting: 6pm Thurs, 12th May: Introduction to the .NET Reactive Extensions (Rx) with Lee Campbell and James Miles

Join us at the Perth .NET user group, Thurs May 12th (6pm), where Lee Campbell and James Miles join up to present an Introduction to Rx, aka the .NET Reactive Extensions. Rx is a product from Erik Meijer’s team at Microsoft that allows you to compose asynchronous and event-based programs using observable collections and a LINQ-style syntax.

The presenters will guide you through the background and basics of Rx, and introduce you to the terminology that is peculiar to Rx. James and Lee will compare code written with and without Rx, and showcase code demonstrating the power of Rx in the areas of resource management, fluent and familiar LINQ syntax, composability, testability of asynchronous and concurrent queries, and the ability to tame side effects.

  • TOPIC:  Introduction to the .NET Reactive Extensions (Rx) with Lee Campbell and James Miles
  • DATE:   Thursday, May 12th, 6:00pm – 7:30pm
  • VENUE: Enex 100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
  • COST:   Free. All welcome

I’m under strict instructions to ask everyone to come armed with questions about Rx!

Please Note: this talk will start at 6pm (not our usual time of 5:30pm due to the venue’s availability)

Full details here: http://perthdotnet.org/blogs/events/archive/2011/05/07/introduction-to-the-net-reactive-extensions-rx-with-lee-campbell-and-james-miles.aspx

Reminder: Perth .NET User Group: ASP.NET MVC framework with Michael Minutillo

Scott Hanselman is fond of saying that programming components are like Lego pieces and right now “the lego pieces coming out of Microsoft are the right size”. One important piece of the web stack is the ASP.NET MVC framework. Since its initial release in March 2009 there has been a new version of the framework released every year, and it has quickly become the platform of choice for .NET developers creating web sites. January 2011 saw the version 3.0 release, which introduced a number of changes and new features. Additionally, at the MIX11 conference earlier this month, Microsoft released the “MVC3 tools refresh”, which makes developing MVC3 applications in Visual Studio 2010 a highly productive experience.

Join us at the Perth .NET user group, Thurs May 5th, where we will look at the new Razor View Engine, Unobtrusive JavaScript, Integrated Scaffolding, and better support for IoC integration. We will also touch on SQL CE 4, NuGet and Entity Framework 4.1 (the “Magic Unicorn” edition). Come and see the Lego pieces and be inspired by what you can build.

  • TOPIC:  ASP.NET MVC framework with Michael Minutillo
  • DATE:   Thursday, May 5th, 5:30pm – 7:00pm
  • VENUE: Enex 100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
  • COST:   Free. All welcome

Mike Minutillo is a .NET software engineer with a B.Sc. in computer science. In 2000, Mike started writing .NET software to fund his university studies and has been an active member of the .NET community ever since. Mike is a regular attendee at the Perth .NET Community of Practice, where he has given presentations on new features of C#, ASP.NET MVC and Test-Driven Philosophy. In 2009 he started the Perth ALT.NET user group, which meets monthly to discuss software engineering tools and practices in the .NET development space. Mike is co-author of Professional Visual Studio 2010. He maintains a technical blog at http://wolfbyte-net.blogspot.com/ and can be contacted at http://twitter.com/wolfbyte/.

There will be a door prize of a choice of license from JetBrains (one of ReSharper, TeamCity Build Agent, dotTrace Profiler, dotCover, RubyMine, IntelliJ IDEA, PyCharm).

When to avoid CQRS

Interesting post from Udi Dahan (who was previously an exponent of CQRS):

“It looks like that CQRS has finally “made it” as a full blown “best practice”.

Please accept my apologies for my part in the overly-complex software being created because of it.

I’ve tried to do what I could to provide a balanced view on the topic with posts like Clarified CQRS and Race Conditions Don’t Exist.

It looks like that wasn’t enough, so I’ll go right out and say it:

      Most people using CQRS (and Event Sourcing too) shouldn’t have done so.

Should we really go back to N-Tier? When not using CQRS (which is the majority of the time), you don’t need N-Tier either…

. . .

In Summary

So, when should you avoid CQRS?

The answer is most of the time.”

SQL Server: How to Share Data Between Stored Procedures

Erland Sommarskog has an excellent SQL Server article “How to Share Data Between Stored Procedures” tackling these two questions:

  • How can I use the result set from one stored procedure in another? (Also expressed as: how can I use the result set from a stored procedure in a SELECT statement?)
  • How can I pass a table as a parameter from one stored procedure to another?

He discusses several methods, and points out their advantages and disadvantages. [Adding here so I remember where to find it in future!]
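To give a flavour of one of the techniques the article covers, table-valued parameters (available in SQL Server 2008 and later) let one procedure pass a table to another; the type and procedure names below are illustrative, not taken from the article:

```sql
-- Define a table type once, then pass instances of it between procedures.
CREATE TYPE dbo.OrderIdList AS TABLE (OrderId int NOT NULL PRIMARY KEY);
GO

-- The receiving procedure takes the table as a parameter.
-- Table-valued parameters must be declared READONLY.
CREATE PROCEDURE dbo.ProcessOrders
    @Orders dbo.OrderIdList READONLY
AS
    SELECT o.OrderId FROM @Orders AS o;
GO

-- The calling procedure fills a variable of the table type and passes it on.
CREATE PROCEDURE dbo.QueueOrders
AS
    DECLARE @ids dbo.OrderIdList;
    INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
    EXEC dbo.ProcessOrders @Orders = @ids;
GO
```

Because a TVP is read-only inside the called procedure, the article also covers alternatives (INSERT … EXEC, sharing temp tables, and so on) for cases where the callee needs to modify the data.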