Wednesday, February 29, 2012

Understanding Managed Extensibility Framework

Managed Extensibility Framework (MEF) is a library introduced with .NET Framework 4.0 for creating lightweight, extensible applications. What we can do with MEF is really amazing, and it genuinely surprised me. So I thought to write a post about MEF, because I am pretty sure it's going to be one of the nicest concepts introduced by Microsoft.

When we think about the cost and time of developing an application, roughly 20% goes into the original software development and the remaining 80% into software maintenance. The thing is, later we will have to add new modules to the original application, which means we will have to extend it. When extending an application, as developers we deal with two parts: the KNOWN part and the UNKNOWN part. The known part is our own application; since we know how we developed it, we have the ability to change the source code. The unknown part is a third-party application we want to plug into ours; we know nothing about its implementation, because when we were developing, it wasn't a requirement.

So the best practice is to compose an application from various pieces. As one of the principles of object-oriented design, the Open/Closed Principle, says: software entities should be open for extension but closed for modification. That is, we should be able to extend the behavior of classes without modifying their source code. A class should have only one reason to change, for example when one of its dependencies changes or a recently introduced bug has to be fixed.

Managed Extensibility Framework

Managed Extensibility Framework is designed to help developers extend an application without changing the original source code. The core concept behind MEF is composing an application dynamically at run time rather than statically at compile time. To bring all these nice features, MEF stands upon a few concepts.
  1. Parts
    • An application is built of Parts. This is the core of MEF. You can think of it like building blocks of MEF.
  2. Export
    • Once you have implemented a Part, you can Export it. By exporting, you are declaring, "This part is available for the rest of the system. If you want it, you can consume it."
  3. Import
    • Now think of another Part. It says, "I want this part for me to be complete. I want to consume it." When it uses your part, that is importing.
  4. Compose
    • To satisfy these needs, we take all the Exports and all the Imports and match them with each other. The way MEF really does this is through a concept called Catalogs. Catalogs discover the Parts and provide the Parts. And finally we have the Container, which is the matchmaker; it is responsible for matching all the Imports with the Exports.
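The four concepts above can be put together in a minimal sketch. The interface and class names below are just illustrative, and the sample assumes a reference to the System.ComponentModel.Composition assembly that ships with .NET Framework 4.0:

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IMessageService
{
    string GetMessage();
}

// A Part that is Exported: "this is available for the rest of the system"
[Export(typeof(IMessageService))]
public class HelloMessageService : IMessageService
{
    public string GetMessage() { return "Hello from MEF"; }
}

public class Consumer
{
    // A Part that Imports: "I want that part for me to be complete"
    [Import]
    public IMessageService MessageService { get; set; }
}

class Program
{
    static void Main()
    {
        // The Catalog discovers the Parts (here, in the current assembly)
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());

        // The Container is the matchmaker: it satisfies Imports with Exports
        var container = new CompositionContainer(catalog);

        var consumer = new Consumer();
        container.ComposeParts(consumer);

        Console.WriteLine(consumer.MessageService.GetMessage());
    }
}
```

Here the AssemblyCatalog discovers the exported part, and ComposeParts makes the container fill in the consumer's import; the consumer never creates the service itself.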

For more information on MEF, visit MSDN.

Hope you all got some basic understanding of Managed Extensibility Framework. Please feel free to post your valuable feedback.

Happy Coding.


Tuesday, February 28, 2012

DataReader vs DataSet Performance

When accessing data with ADO.NET, you can use either a DataReader or a DataSet. For the SqlClient provider, the provider-specific DataReader is SqlDataReader, which comes in the System.Data.SqlClient namespace. DataSet comes in the System.Data namespace.

I will start with the DataReader first. The DataReader is part of the ADO.NET connected architecture. I am not going to explain what the ADO.NET connected and disconnected architectures are, because some time back I blogged about the Disconnected Data Access Architecture in ADO.NET. What happens with DataReaders is that the DataReader loads one row at a time from the database. Each time the DataReader's Read() method is called, the DataReader discards the current row and advances to the next row. If there is a row it will return true, and if there is not, it will return false. A DataReader is limited to being read-only and forward-only. That is, the information retrieved from the database cannot be modified by the DataReader, nor can the DataReader retrieve records in a random order. Instead, a DataReader is limited to accessing the records in sequential order, from the first one to the last one, one record at a time. So DataReaders are connected data objects because they require an active connection to the database.
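As a sketch of that forward-only loop; the connection string and the Customers table are just placeholders you would adjust for your own database:

```csharp
using System;
using System.Data.SqlClient;

class DataReaderDemo
{
    static void Main()
    {
        // Placeholder connection string and query; adjust for your environment.
        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT Id, Name FROM Customers", connection))
        {
            connection.Open();   // the DataReader needs an active connection

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Read() advances one row at a time; returns false after the last row
                while (reader.Read())
                {
                    Console.WriteLine("{0} - {1}", reader["Id"], reader["Name"]);
                }
            }
        }   // connection is closed here
    }
}
```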

The DataSet is part of the disconnected architecture in ADO.NET; actually, the disconnected architecture is based on DataSets. DataSets can be thought of as in-memory databases. First you connect to the database, the DataSet gets filled up with data, and then the connection to the database is closed. Now you have all the data in the DataSet; you can manipulate it and write the changes back to the database. Just like a database is comprised of a set of tables, a DataSet is made up of a collection of DataTable objects. And just as a database can have relationships among its tables, along with various data integrity constraints on the fields of those tables, so too can a DataSet have relationships among its DataTables and constraints on its DataTables' fields.
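A comparable sketch with a DataSet, again with a placeholder connection string and a hypothetical Customers table; note that the SqlDataAdapter opens and closes the connection by itself:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetDemo
{
    static void Main()
    {
        var dataSet = new DataSet();

        // Placeholder connection string and query; adjust for your environment.
        using (var adapter = new SqlDataAdapter(
            "SELECT Id, Name FROM Customers",
            "Server=.;Database=MyDb;Integrated Security=true"))
        {
            adapter.Fill(dataSet, "Customers");   // opens, fills, then closes the connection
        }

        // From here on we work entirely in memory, disconnected from the database.
        foreach (DataRow row in dataSet.Tables["Customers"].Rows)
        {
            Console.WriteLine("{0} - {1}", row["Id"], row["Name"]);
        }
    }
}
```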

So, if we talk about performance, there is a huge gap between these two. It's often said that the DataReader is roughly thirty times faster than the DataSet. With DataSets, since all the data is loaded into memory, there is a memory overhead.

Here is a nice article where you can see from the results that, whether we are retrieving 10 or 10,000 records, the performance degradation of the DataSet is dramatic when compared to the DataReader.
     A Speed Freak's Guide to Retrieving Data in ADO.NET

Happy Coding.


Friday, February 24, 2012

dynamic in C#

.NET Framework 4.0 introduced a new type called dynamic. When we use the type dynamic, compile-time type checking is bypassed and the type checking is resolved at run time. I will explain it with a simple example. Let's take the following code.

     class Program
     {
          static void Main(string[] args)
          {
               dynamic t = new TestClass();
          }
     }

     public class TestClass
     {
          public void TestMethod() { Console.WriteLine("Dynamic"); }
     }

Now if I build the solution, it will build successfully. In my Main method, I have created an object of 'TestClass' and its type is dynamic. Now here comes one of the nice things.

dynamic I

When I try to call 'TestMethod', IntelliSense will show me "This operation will be resolved at run time". That is of course because I have used dynamic. Look at the following image.

dynamic II

As you can see, here I have called two methods. One is 'TestMethod', which is defined in 'TestClass', and the other is 'Test', which is not defined. The other nice thing is that when I run the above program, it compiles successfully and prints "Dynamic". Only after control leaves the first ReadLine does it throw the error.

dynamic III

So the point is, it will only throw the error at run time and not at compile time.

I can write the same using var. But when using var, the compiler determines the type at compile time. So unlike with dynamic, IntelliSense will show all the available members. And if I use var here instead of dynamic and try to call the method 'Test', the compiler will throw the error at compile time. So that's something about dynamic.
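To make the contrast concrete, here is a small self-contained sketch; the class and method names follow the example above, and the catch block shows the run-time failure that dynamic defers:

```csharp
using System;
using Microsoft.CSharp.RuntimeBinder;

public class TestClass
{
    public void TestMethod() { Console.WriteLine("Dynamic"); }
}

class Program
{
    static void Main()
    {
        dynamic d = new TestClass();
        d.TestMethod();              // resolved at run time, prints "Dynamic"

        try
        {
            d.Test();                // compiles fine, but fails only here
        }
        catch (RuntimeBinderException)
        {
            Console.WriteLine("'Test' was only caught at run time");
        }

        var v = new TestClass();
        v.TestMethod();              // resolved at compile time
        // v.Test();                 // would not even compile
    }
}
```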

Happy Coding.


Claims Based Authentication & Classic Mode Authentication in SharePoint 2010

When you are creating a new web application in SharePoint 2010, you will have to select one of these two authentication modes: Claims Based Authentication or Classic Mode Authentication.


I was always selecting the first one, not because I knew why I was selecting it, but because it's selected by default. So today I thought to learn about these two authentication modes and select the suitable one rather than just accepting the default option. I am writing down what I learned, so anyone who has been doing the same thing as me can stop repeating it in the future.

SharePoint 2010 supports a variety of authentication methods, which fall into several authentication method categories. Since I am no expert on these and they are serious topics of their own, I will just summarize them this way.

Method categories and their authentication methods:
  1. Windows authentication
    • NTLM
  2. Forms-based authentication
    • Lightweight Directory Access Protocol (LDAP)
    • Microsoft SQL Server database or other database
    • Custom or third-party membership and role providers
  3. SAML token-based authentication
    • Active Directory Federation Services (AD FS) 2.0
    • Third-party identity provider
    • Lightweight Directory Access Protocol (LDAP)

In SharePoint 2010, authentication modes determine how client computers authenticate with its resources. SharePoint 2010 supports these two authentication modes,
  1. Claims Based Authentication
  2. Classic Mode Authentication
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be, based on the user identity. In an environment with Active Directory Domain Services (ADDS) installed, user identity is based on a user account. Your user account contains all your information: user name, password, group membership information etc. To authenticate your account, what the application does is match the information you supplied with the information in Active Directory.

The nice point is this. If you use Claims Based Authentication, you can use all the supported authentication methods listed in the above table. And if you use Classic Mode Authentication, you will only be able to use the methods under the Windows authentication category.

In Claims Based Authentication, the user obtains a security token that is digitally signed by a commonly trusted identity provider and that contains a set of claims. Each claim represents a specific item of data about the user, such as his or her name, group memberships and role on the network. Claims-based authentication is user authentication that utilizes claims-based identity technologies and infrastructure. Applications that support claims-based authentication obtain the security token from the user and use the information within the claims to determine access to resources. No separate query to a directory service like ADDS is needed. Claims-based authentication in Windows is built on Windows Identity Foundation (WIF), which is a prerequisite for installing SharePoint 2010.

In Classic Mode Authentication, user accounts are treated by SharePoint Server 2010 as Active Directory Domain Services (ADDS) accounts.

Hope you all got a good understanding of Claims Based Authentication & Classic Mode Authentication. Appreciate your feedback.

Happy Coding.


Thursday, February 23, 2012

Decompiling a .NET dll

Today I wanted to decompile a .NET dll. While looking for a way of doing it, it came to my mind that some time back I had trouble distinguishing "Decompile & Disassemble". In those days they both meant the same to me, so I thought to write a brief explanation about this first.

The traditional explanation of these terms is as follows:
  1. Decompile - To convert assembly language to a high level language. 
  2. Disassemble - To convert machine language to assembly language.
Common Intermediate Language (CIL) is the equivalent of assembly language for a CPU. In .NET, a dll (Dynamic Link Library) is an assembly, which is built up from CIL code. Assemblies are used for deployment, versioning and security. There are two types of assemblies,
  1. Process assemblies (EXE)
  2. Library assemblies (DLL)
Now I hope you all got a clear understanding of these two. So to decompile a .NET dll, I found this nice open-source .NET assembly browser and decompiler.

It has a lot of nice features,
  • Assembly browsing
  • IL Disassembly
  • Decompilation to C#
  • Saving of resources
  • Save decompiled assembly as .csproj
  • Search for types/methods/properties (substring)
  • Hyperlink-based type/method/property navigation
  • Base/Derived types navigation
  • Navigation history
  • BAML to XAML decompiler
  • Save Assembly as C# Project
  • Find usage of field/method
  • Extensible via plugins (MEF)

To get it to work, please remember that you need to have Microsoft .NET Framework 4 installed on your machine.

Happy Coding.


Wednesday, February 22, 2012

ClickOnce for Google Chrome

Even though I am a big lover of Microsoft, I definitely don't love Internet Explorer. I love Google Chrome more, so my default browser is always Chrome. But have you ever had trouble running ClickOnce applications from Chrome? Well, I always had that trouble. What Chrome does when you click on a ClickOnce application is download the application manifest. Then when we click on the application manifest, it throws an error saying "Cannot download the application. The application is missing required files. Contact application vendor for assistance.".

Cannot Start Application

What I normally do in such situations is copy the link from Chrome and open the link with IE. IE will nicely run the ClickOnce application from the browser.

But sometimes this switching of browsers can be really painful. So I thought to find a permanent fix, and found this nice little extension that will launch ClickOnce applications from Chrome.

You can add it to your Chrome from the following link.
     ClickOnce for Google Chrome™.

It's weird that Chrome itself is installed using Microsoft's ClickOnce technology, yet they don't support it in their own browser.

Happy Coding.


Tuesday, February 14, 2012

Microsoft Business Intelligence(BI) Stack

"Business Intelligence" or "BI" has been one of the most discussed and most interesting topics in the industry these days. It is a very broad category of applications and technologies that are used to manipulate, discover and analyse data, so that after reviewing that data people can make business decisions easily. Microsoft defines BI in the following way.

"Business intelligence (BI) simplifies information discovery and analysis, making it possible for decision-makers at all levels of an organization to more easily access, understand, analyze, collaborate, and act on information, anytime and anywhere."

Since today I am going to write about the Microsoft BI Stack, I will try not to write more about BI in general, because it's a very large topic.

Microsoft first entered the BI market in 1998 with OLAP (online analytical processing) Services in SQL Server 7.0. In 2000, Microsoft released SQL Server 2000, its most significant release of SQL Server to date. With SQL Server 2000 Analysis Services, Microsoft started to touch the grounds of BI. Since its first market entry, Microsoft has been trying to make BI not just for the few business analysts and some executives, but for everyone in the organization. And that's why Microsoft has integrated support for SQL Server Analysis Services (SSAS) into its Office products, especially Microsoft Excel. Excel can be used as an SSAS client at a much lower cost than third-party client tools.

The following are the current Microsoft products that support BI, known collectively as the Microsoft BI Stack. All of these are very large areas which we should learn deeply.
  • SQL Server 2008 R2
  • SQL Server Analysis Services (SSAS)
  • SQL Server Integration Services (SSIS)
  • SQL Server Reporting Services (SSRS)
  • Data mining using SSAS
  • Excel
  • PowerPivot
  • SharePoint
  • Visio

Microsoft BI Stack

Hoping to write more about BI as I learn. Appreciate your feedback.

Happy Coding.