Reminder: Perth .NET User Group: Thurs 3rd May 5:30pm – 7pm: Rich Client MVC with Jake Ginnivan

 

A quick reminder that Jake Ginnivan will be presenting this Thurs 3rd May on Rich Client MVC (MVVM+C) architecture and will demonstrate some of the common issues (and solutions) encountered in WPF, Silverlight and WP7 development.

  • TOPIC:  Rich Client MVC with Jake Ginnivan
  • DATE:   Thurs May 3rd, 5:30pm – 7:00pm
  • VENUE: Enex 100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
  • COST:   Free. All welcome

More details here.

Perth .NET User Group Meeting, Thurs Apr 5th: MVC with Michael Minutillo

The future of application development lies in the web. Customers demand web sites and applications that are more interactive, perform faster, can service more concurrent users, and render across a wider array of devices than ever before. And they want new features. And they want them every day. In this session Mike will take you on a guided tour through the ASP.NET MVC stack with stops along the way to look at the new features and improvements found in the version 4 beta that will aid you in rapidly developing web applications on the Microsoft stack.

  • TOPIC:  MVC with Michael Minutillo
  • DATE:   Thurs Apr 5th, 5:30pm – 7:00pm
  • VENUE: Enex 100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
  • COST:   Free. All welcome

    Mike Minutillo is a Senior Developer/Software Architect at The Birchman Group in Perth. In 2000, Mike started writing .NET software to fund his university studies and has been an active member of the .NET community ever since. Mike is a regular attendee at the Perth .NET Community of Practice, where he has given presentations on new features of C#, ASP.NET MVC and Test-Driven Philosophy. Mike is also the co-author of Professional Visual Studio 2010. You can contact Mike via his blog: http://codermike.com

    Perth .NET User Group Meeting, Thurs 1st March: TSQL Developer Tips

    I’ll be presenting a TSQL developer-focused talk at the Perth .NET User Group tomorrow (Thurs 1st March, 5:30pm). I’ll cover a few techniques for speeding up and improving TSQL code.

    • TOPIC:  TSQL Developer Tips: Mitch Wheat
    • DATE:   Thurs Mar 1st, 5:30pm – 7:00pm
    • VENUE: Enex 100 Seminar Room, Level 3, 100 St Georges Terrace, Perth
    • COST:   Free. All welcome

    In case you haven’t noticed, the UG web site is currently down (due to circumstances beyond my control). I’m trying to get it back up, but it might take a little while…

    SQL Server Maintenance Plans: Updating Statistics, A Simple Gotcha

    Out of the box, SQL Server’s maintenance plans enable you to set up common maintenance tasks such as running DBCC CHECKDB, backing up databases and logs, rebuilding indexes, and so on.

    In terms of rebuilding indexes, there are two flavours: a reorganise and a full-blown rebuild (Reorganizing and Rebuilding Indexes). Reorganising an index is usually done when the index is not heavily fragmented.
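
    As a minimal sketch (the table and index names are hypothetical), current fragmentation can be checked with sys.dm_db_index_physical_stats and the action chosen from there; the commonly cited guideline is to reorganise between roughly 5% and 30% fragmentation and rebuild above that:

        -- Check fragmentation for a (hypothetical) table's indexes:
        SELECT index_id, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

        -- Lightly fragmented (roughly 5% to 30%): reorganise (always online, can be stopped safely).
        ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

        -- Heavily fragmented (above roughly 30%): full rebuild.
        ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;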

    It is not uncommon to find a maintenance plan that first performs a full rebuild of all indexes (regardless of fragmentation level, but more on that in a moment), and then updates all statistics. Not only is that second step unnecessary, it actually leaves you worse off than simply rebuilding the indexes! The update statistics task samples your data, whereas a full index rebuild also updates the associated statistics, but does so by performing a FULL SCAN of your data. That’s right: updating statistics with the default sampled setting after a full index rebuild will leave you with (potentially) less accurate statistics.
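
    A minimal T-SQL illustration of the gotcha (table and index names are hypothetical):

        -- A full rebuild already refreshes the index's statistics with the
        -- equivalent of a full scan:
        ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;

        -- Running the default (sampled) statistics update afterwards replaces
        -- those full-scan statistics with sampled ones: the gotcha described above.
        UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate;

        -- If a separate statistics refresh really is needed, an explicit
        -- FULLSCAN avoids downgrading them:
        UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;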

    Rather than rebuilding all indexes regardless of whether they need it or not, Ola Hallengren’s maintenance solution contains a production tried-and-tested stored procedure, SQL Server Index and Statistics Maintenance, for rebuilding and reorganising indexes and updating statistics. It has the ability to run for a set period to fit into tight maintenance windows, to set fragmentation threshold levels, and to update statistics, including unindexed column statistics that are not updated as part of an index rebuild. Highly recommended.
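
    As an example, a sketch of calling the solution’s IndexOptimize procedure (parameter names as documented on Ola Hallengren’s site; verify them against the version you download):

        EXECUTE dbo.IndexOptimize
            @Databases = 'USER_DATABASES',
            @FragmentationLevel1 = 5,        -- reorganise between 5% and 30% fragmentation
            @FragmentationLevel2 = 30,       -- rebuild above 30%
            @UpdateStatistics = 'ALL',       -- index and column statistics
            @OnlyModifiedStatistics = 'Y',   -- skip statistics with no row modifications
            @TimeLimit = 3600;               -- stop issuing new commands after one hour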

    SQL Server 2008 Very Large Databases: A few notes

    1) Has the table schema been examined to minimise storage space per row? Whilst obvious, it is surprising how much can be gained from this (I’ve managed to halve the row sizes in a previous project’s data warehouse).

    Techniques include using the smallest data types possible:

    a) Use varchar instead of nvarchar where Unicode is not required (nvarchar stores two bytes per character).

    b) Use smalldatetime (4 bytes) instead of datetime (8 bytes) unless you need the precision. [The smalldatetime data type is accurate to the nearest minute, whereas datetime has a precision of 3.33 milliseconds.]

    2) Turn on database compression (this has a CPU overhead, but can reduce total database size by over half).

    3) Turn on backup compression.

    4) Ensure instant file initialisation is turned on.

    5) * Use multiple file groups to manage backups.

    6) * Place critical tables in the PRIMARY file group to enable piecemeal restores (SQL Server 2005 onwards).

    7) Use partitioning to break a table out into several pieces that can be stored on different sets of drives; a rule of thumb is around the ‘20 – 30 million rows per partition’ mark. Of course, this depends somewhat on the ‘natural’ partitioning key range (for example, one month per partition). There is a short T-SQL sketch after this list.

    8) Have a clear idea of your high availability requirements (maximum downtime) up front, and validate your disaster recovery plans.
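
    A minimal T-SQL sketch pulling together points 1, 2, 3 and 7 above (all database, table, column and partition names are hypothetical, and the filegroup layout is simplified):

        -- Point 7: one partition per month on the date key (hypothetical ranges).
        CREATE PARTITION FUNCTION pf_MonthlyDate (smalldatetime)
        AS RANGE RIGHT FOR VALUES ('20120101', '20120201', '20120301');

        -- All partitions mapped to PRIMARY here for brevity; in practice, spread
        -- them across filegroups on different sets of drives (points 5 and 6).
        CREATE PARTITION SCHEME ps_MonthlyDate
        AS PARTITION pf_MonthlyDate ALL TO ([PRIMARY]);

        -- Point 1: the narrowest data types that fit the data.
        CREATE TABLE dbo.FactSales
        (
            SaleDate     smalldatetime NOT NULL,  -- 4 bytes vs 8 for datetime
            CustomerCode varchar(20)   NOT NULL,  -- 1 byte per character vs 2 for nvarchar
            Amount       decimal(9, 2) NOT NULL
        )
        ON ps_MonthlyDate (SaleDate);

        -- Point 2: page compression (CPU overhead, often a large space saving).
        ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);

        -- Point 3: backup compression.
        BACKUP DATABASE MyWarehouse TO DISK = N'D:\Backups\MyWarehouse.bak' WITH COMPRESSION;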


    Bootstrapping and Reflections on Trusting Trust

    The term “bootstrapping” comes from the phrase “pulling yourself up by your own bootstraps” (Wikipedia). In computing parlance, a bootstrap loader is the code that initially runs when a machine starts, whose job is to load the rest of the operating system. [I can remember (just) old mainframes where the bootstrap code was entered by hand using switches on the front panel!]

    If you’ve never read Ken Thompson’s “Reflections on Trusting Trust” (his Turing Award acceptance speech), it’s definitely worth reading:

    It describes a very clever backdoor mechanism based on the fact that people only review source (human-written) code, and not compiled machine code. A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.

    Thompson’s paper describes a modified version of the Unix C compiler that would:

    • Put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and as a twist
    • Also add this feature undetectably to future compiler versions upon their compilation as well.

    http://en.wikipedia.org/wiki/Reflections_on_Trusting_Trust#Reflections_on_Trusting_Trust

    How did I get onto this topic? Someone asked the question “How can you write a compiler in its own language?”

    If you were writing a new C++ compiler, you would just write it in C++ and compile it with an existing C++ compiler. If you were creating a compiler for a completely new language, you would need to write the new compiler in another language first. Usually this would be another programming language, but it can be assembly, or even machine code.

    If you want to bootstrap a compiler, you would generally write a compiler for a small subset of the language. Then in this minimal version of the language, you would write a compiler for the full language. This often occurs iteratively rather than in a single pass.

    http://en.wikipedia.org/wiki/Bootstrapping_(compilers)

    The first self-hosting compiler (excluding assemblers) was written for Lisp by Hart and Levin at MIT in 1962. They wrote a Lisp compiler in Lisp, testing it inside an existing Lisp interpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.

    There is an interesting article about bootstrapping a compiler from the ground up titled Bootstrapping a simple compiler from nothing.

    Measuring/Comparing .NET Performance in Visual Studio

    I’m guessing everyone knows that Visual Studio (Premium and Ultimate editions) has a built-in code profiler. But did you know you can use it to compare profiling sessions and show the differences and improvements between them?

    There are also other commercial code profiling tools available from RedGate, JetBrains and Eqatec (the last of which has a free version).