A Mind-Shift on Identity Management with Geneva

October 5th, 2009 No comments

With the pending introduction of Microsoft’s Geneva Framework and Geneva Server (now officially named Windows Identity Foundation and Active Directory Federation Services, respectively), a claims-based and federated security model is now available to the .NET world.  The use of SAML-based authentication tokens issued by Security Token Services (STSs) is primed to be the next step toward a simpler identity management scheme throughout organizations and beyond, into the “cloud” of Azure and Internet-based systems.

[Figure: Claims-based authentication scenario]

If you haven’t been exposed to the concept of claims-based security, it’s a bit of a mind-shift from how application rights and user properties have typically been implemented, so it may take some time to fully grasp.  A claim, to put it simply, is any attribute that can be ascribed to a user (or any other resource).  For example, a user’s claims may consist of his name, birth date, gender, and role within an organization.

What makes this different from traditional role-based security is that these claims are authenticated by a trusted third party.  One of the best analogies is a person going to buy alcohol at a bar.  The bartender must verify that the person is of legal age, so he asks for an authenticated record from a trusted third party, which in this case is a driver’s license from the DMV.  The claim is that this person is over 21 years old, and the identity provider is the Department of Motor Vehicles.
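
In code, these claims surface as simple attribute/value pairs attached to the user’s identity.  As a rough sketch (assuming the beta Microsoft.IdentityModel assemblies that ship with the Geneva Framework), an application might enumerate the current user’s claims like this:

using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// A minimal sketch: after WIF has validated the incoming token, the
// current principal's identity carries the claims asserted by the
// identity provider.
IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

foreach (Claim claim in identity.Claims)
{
    // e.g., ClaimType = ".../dateofbirth", Value = "1975-06-14"
    Console.WriteLine("{0} = {1} (issued by {2})",
        claim.ClaimType, claim.Value, claim.Issuer);
}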

A Boon to Developers and Organizations

OK, so this is all well and good, but how does this make developing applications easier?  The short answer is that claims alone don’t make things much easier; what does simplify matters is the use of federated authentication.  In our previous example, the bar knew nothing about the person buying a drink.  There was no big filing cabinet with everybody’s name and birth records stored in the back room of the bar (at least you hope not).  The problem is that this is how many applications work today.  Each application stores its own set of users and profile data, and therefore the application (and consequently, the application developers) must be responsible for authenticating users.

By utilizing federation, the job of validating that a user is who he claims to be is handed off to a third party, and a trust is established between our application (the relying party, or RP) and the identity provider (IP).  If our IP says that Joe Smith is really Joe Smith, we can trust that this is true.  Immediately, you can probably see that this is a boon for developers everywhere, who are tired of creating user login pages and databases.  In addition, this enables Single Sign-On (SSO) within a network of applications that share the same IP.

Putting it All Together

Now that you can probably see how claims and federated security can be of benefit, the next question is how all of this works within the current world of application security.  The good news is that Microsoft seems to have done an admirable job of building on top of existing technologies (e.g., Active Directory and ASP.NET authentication) and providing flexibility to leverage existing security mechanisms (e.g., OpenID, Live ID, etc.).

The Geneva Framework is a set of assemblies that forms the foundation of the entire security suite.  Using the Framework (otherwise known as Windows Identity Foundation, or WIF), developers can claims-enable their ASP.NET applications with just a handful of configuration settings.  In addition, WIF can be used to create a custom Security Token Service (STS) that can perform user authentication and claims look-ups using any technique imaginable.  This open foundation should encourage developers and IT organizations to move towards this model.  Finally, Geneva Server is a robust and freely available STS that can be rolled out within an organization, making federated security a reality in fairly short order.
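
To give a sense of the programming model, a custom STS boils down to deriving from the framework’s SecurityTokenService class and overriding a couple of methods.  This is a hypothetical sketch against the beta Microsoft.IdentityModel API, not production code:

using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.SecurityTokenService;

// A minimal custom STS sketch; the claim look-ups are placeholders.
public class CustomSecurityTokenService : SecurityTokenService
{
    public CustomSecurityTokenService(SecurityTokenServiceConfiguration config)
        : base(config)
    {
    }

    // Identifies the relying party the token is issued for and the
    // credentials used to sign it.
    protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
    {
        Scope scope = new Scope(request.AppliesTo.Uri.AbsoluteUri,
            SecurityTokenServiceConfiguration.SigningCredentials);
        scope.TokenEncryptionRequired = false; // for illustration only
        return scope;
    }

    // Builds the set of claims packaged into the issued token.
    protected override IClaimsIdentity GetOutputClaimsIdentity(
        IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        IClaimsIdentity identity = new ClaimsIdentity();
        identity.Claims.Add(new Claim(ClaimTypes.Name, principal.Identity.Name));
        // Additional claims would be looked up from a profile store here.
        return identity;
    }
}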

Further Reading

This discussion barely scratches the surface of Geneva, so I would encourage you to read more on the various blogs and Microsoft sites out there.

Categories: .NET, Identity Management

Microsoft Releases ASP.NET MVC 2 Preview 2

October 1st, 2009 2 comments

Today, Microsoft released ASP.NET MVC 2 Preview 2, the latest beta version of the MVC framework.

Along with all of the great new additions seen in the first preview release, it looks like they’ve added a lot of flexibility for extending validation processing, on both the client and server sides.

Having worked with the first version of ASP.NET MVC, I’m excited to see how much work they continue to put into the framework. Much like other Microsoft technologies, the product will get much better in version 2 and beyond, once they’ve gotten feedback from the user community and have had time to refine it. I expect more and more developers will start looking at MVC as a genuine option, since I know many people have been scared to move away from the comfort of ASP.NET Web Forms.

Categories: .NET, ASP.NET MVC

Configuring Multiple Attribute Stores in Geneva Server

September 30th, 2009 No comments

The new Active Directory Federation Services (formerly named Geneva Server) is an extensible Security Token Service (STS) that enables claims-based authentication. When an application requests that a user be authenticated against AD FS, it not only expects back a valid token stating the user’s identity, but it can also specify a set of claims (user attributes) to be returned in the form of a SAML token. These claims are not stored within AD FS but instead reside in an externally configured Attribute Store.

Out of the box, AD FS provides several options for the Attribute Store: an LDAP source (such as Active Directory DS), SQL Server, or a custom store defined in a .NET library. In many situations, there may not be a single source for all of a user’s profile data (e.g., birth date, email address, phone numbers, etc.). In these cases, AD FS gives you the ability to configure several stores and then determine which attribute store to use based on the claim being requested. Setting this up within AD FS (at least in the beta version) is not the most intuitive process.

Configuring Attribute Stores

The first step is to configure the attribute stores within AD FS, which is accomplished in the Attribute Stores section. An Active Directory store pointing to the domain AD instance is set up by default, so that’s taken care of. Next, we need to add our secondary attribute store. When you add a new store, you will see that you have three options: Active Directory, LDAP, or SQL. For an LDAP or SQL source, you simply need to provide a connection string. For my application, I needed to access a SQL Server instance, so I just gave it a unique name and plugged in the SQL connection string.
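
For reference, the SQL attribute store takes a standard ADO.NET connection string.  A hypothetical example (the server name is made up for illustration) might look like this:

Server=sqlserver01;Database=AdventureWorks;Integrated Security=True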

[Figure: AD FS Attribute Store]

Claim Rules

Once the attribute stores have been set up, you need to set up claim rules, either at the Relying Party or Identity Provider level, to dictate which claims will be retrieved from which attribute store. Optionally, these claims can also be converted into another claim. Both of these tasks are accomplished using Microsoft’s new claim rule language. The syntax for defining claims transformations is sparsely documented at this point, and the only definitive source I’ve found is on TechNet: http://technet.microsoft.com/en-us/library/dd807118%28WS.10%29.aspx.

To access the user data stored in our SQL Server database, we need to write the query using a claim rule. Within the Relying Parties section, right-click on the appropriate application and select “Edit Claim Rules…” Next, we need to create an Advanced Rule, since there currently isn’t a nice wizard to step us through this process. Within the rule definition window, enter your rule using syntax such as the following:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/dateofbirth"]
=> issue(store = "AdventureWorks Attribute Store", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/dateofbirth"), query = "SELECT BirthDate FROM [HumanResources].[Employee] WHERE LoginID = {0}", param = c.Value);
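
As a further illustration of the claim rule language, a rule can also transform an incoming claim into a new one instead of querying a store.  For example, this hypothetical rule re-issues an incoming e-mail claim under a different claim type:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/claims/CommonName", Value = c.Value);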

This isn’t an easy mechanism, and hopefully Microsoft will polish this interface in future releases. In the end, though, we have claims being sourced from multiple locations, which will be very useful when developing a claims-enabled application.

Categories: .NET, Identity Management

DTO Assembler

September 1st, 2009 1 comment

When writing services that pass data between processes, it is oftentimes beneficial and wise to package the data in simple classes called Data Transfer Objects (DTOs).  The database-matching entity objects are not good choices for serialization since they may contain too much information or too little, can be many layers deep, and expose the database structure to consuming clients.
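
To make the distinction concrete, here is a hypothetical entity and its corresponding DTO (the property names are purely illustrative):

// The entity mirrors the database: keys, nested relations, and
// persistence details that consuming clients shouldn't see.
public class TeamEntity
{
    public int TeamId { get; set; }
    public string Name { get; set; }
    public CoachEntity Coach { get; set; }   // nested object graph
    public byte[] RowVersion { get; set; }   // concurrency/persistence detail
}

public class CoachEntity
{
    public string FullName { get; set; }
}

// The DTO flattens the graph down to just what the client needs.
public class TeamDto
{
    public int TeamId { get; set; }
    public string Name { get; set; }
    public string CoachName { get; set; }
}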

The Assembler pattern is used to build up the DTO objects before sending results back from a method, and it is also responsible for reversing the process when clients pass DTOs back to the service.  This build process involves mapping entity classes to DTO classes, though there will not necessarily be a one-to-one correspondence between properties.  In either direction, mapping the matching properties by hand is a laborious programming task.

Enter the AutoMapper

One option for overcoming this chore is to use generated code, which can be sufficient for exact matches but doesn’t address more complicated scenarios.  The other option is to use mapping code, and AutoMapper (http://www.codeplex.com/AutoMapper) is a CodePlex project meant to solve exactly this problem.  By default, the AutoMapper library copies property values from one class to another based on matching property names, and it also allows for more complicated mappings.
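
For example, a property that doesn’t match by name can be wired up explicitly with ForMember.  This sketch (using the hypothetical TeamEntity/TeamDto classes from above) flattens a nested property into the DTO:

// Map matching properties automatically, then map Coach.FullName
// into the DTO's CoachName property by hand.
Mapper.CreateMap<TeamEntity, TeamDto>()
    .ForMember(dto => dto.CoachName, opt => opt.MapFrom(src => src.Coach.FullName));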

DtoAssembler

For my particular set of DTOs, the mappings were mostly one-to-one with the underlying database entities and did not require many customizations.  To simplify things, I created a generic DtoAssembler class that takes two class types – TSource and TDestination – as the input and output types for the mappings.  We simply create a map using the static CreateMap method and then call Map to perform the conversion.

using System.Collections.Generic;
using AutoMapper;

public static class DtoAssembler<TSource, TDestination>
{
    public static void MapObject(TSource entity, TDestination destination)
    {
        // Ensure the map is registered, then copy values onto the
        // existing destination instance.
        Mapper.CreateMap<TSource, TDestination>();
        Mapper.Map<TSource, TDestination>(entity, destination);
    }

    public static TDestination MapObject(TSource entity)
    {
        // Ensure the map is registered, then create and populate a new DTO.
        Mapper.CreateMap<TSource, TDestination>();

        return Mapper.Map<TSource, TDestination>(entity);
    }

    public static List<TDestination> MapList(List<TSource> entities)
    {
        // Map each entity in turn, reusing the single-object overload.
        List<TDestination> dtoList = new List<TDestination>();

        foreach (TSource entity in entities)
        {
            dtoList.Add(MapObject(entity));
        }

        return dtoList;
    }
}

From within the service code, creating the DTO is as simple as this:

TeamDto teamDto = DtoAssembler<TeamEntity, TeamDto>.MapObject(team);

As you can see, AutoMapper greatly simplifies creating a basic DTO mapping. For more complicated mapping scenarios, take a look at the AutoMapper documentation for examples.

Categories: .NET, WCF

Closing WCF Service References

September 1st, 2009 No comments

One aspect of using WCF services that took a little time to figure out is the lifespan of the service connection.  Unlike standard web services in .NET, the connection to a WCF service is only closed when the Close method is explicitly called or the service proxy object is disposed.  In the latter case, the normal practice would be to wrap the object in a using statement, as below:

using (EmailServiceClient svc = new EmailServiceClient())
{
   svc.SendMail(fromAddress, fromName, toEmail, toName, message);
}

However, there are problems with how the Dispose method was implemented that could cause an exception to be thrown and not properly caught, as described in this MSDN article: http://msdn.microsoft.com/en-us/library/aa355056.aspx.  Therefore, best practice dictates that Close be called explicitly and the operations wrapped in try/catch blocks:

EmailServiceClient svc = null;
try
{
   svc = new EmailServiceClient();
   svc.SendMail(fromAddress, fromName, toEmail, toName, message);
   svc.Close();
}
catch (CommunicationException)
{
   svc.Abort();
}
catch (TimeoutException)
{
   svc.Abort();
}
catch (Exception)
{
   svc.Abort();
   throw;
}

Since this is fairly lengthy to write for every service call, I instead added a wrapper class (based on code found in this blog post: http://bloggingabout.net/blogs/erwyn/archive/2006/12/09/WCF-Service-Proxy-Helper.aspx).

using System;
using System.ServiceModel;

public class ServiceProxyHelper<TProxy, TChannel> : IDisposable
    where TProxy : ClientBase<TChannel>, new()
    where TChannel : class
{
    /// <summary>
    /// Private instance of the WCF service proxy.
    /// </summary>
    private TProxy _proxy;

    /// <summary>
    /// Gets the WCF service proxy wrapped by this instance.
    /// </summary>
    public TProxy Proxy
    {
        get
        {
            if (_proxy != null)
            {
                return _proxy;
            }
            else
            {
                throw new ObjectDisposedException("ServiceProxyHelper");
            }
        }
    }

    /// <summary>
    /// Constructs an instance and creates the underlying proxy.
    /// </summary>
    public ServiceProxyHelper()
    {
        _proxy = new TProxy();
    }

    /// <summary>
    /// Closes the proxy gracefully if possible, aborting it otherwise.
    /// </summary>
    public void Dispose()
    {
        try
        {
            if (_proxy != null)
            {
                if (_proxy.State != CommunicationState.Faulted)
                {
                    _proxy.Close();
                }
                else
                {
                    _proxy.Abort();
                }
            }
        }
        catch (CommunicationException)
        {
            _proxy.Abort();
        }
        catch (TimeoutException)
        {
            _proxy.Abort();
        }
        catch (Exception)
        {
            _proxy.Abort();
            throw;
        }
        finally
        {
            _proxy = null;
        }
    }
}

The new calls to our service now look like this:

using (ServiceProxyHelper<EmailServiceClient, EmailService> svc =
   new ServiceProxyHelper<EmailServiceClient, EmailService>())
{
   svc.Proxy.SendMail(fromAddress, fromName, toEmail, toName, message);
}

Categories: .NET, WCF