Randolph Cabral’s Weblog

…beauty is just a dimmer switch away…

Back to Blogging!

Posted by Randolph Cabral on Thursday, April 28, 2011

After a few years' hiatus, I've decided to start blogging again.  Please visit my new blog at http://randolphcabral.tumblr.com. This time I plan to blog more regularly and to cover topics beyond programming, though still relating to technology. Make no mistake: I will most certainly continue to blog about programming, but I want to expand the conversation to include other technology interests as well. So, if you're up for some technical discourse, I hope you'll drop by the new blog and sling your thoughts my way.

One last thing: I will leave this blog up as an archive of my early technical musings.

Posted in Uncategorized | Comments Off on Back to Blogging!

Raising INotifyPropertyChanged Events with an Extension Method

Posted by Randolph Cabral on Friday, October 2, 2009

Okay, I'm probably not the only one out there who is tired of writing the NotifyPropertyChanged() method in all my entity and ViewModel classes.  And I'm probably not the only one who is afraid of magic strings. So today, I'm posting a simple reusable extension method that targets any class that implements the INotifyPropertyChanged interface.  And while we're at it, why not take advantage of lambda expressions to get the benefit of Visual Studio's IntelliSense and automatic code completion as well?

Take 1 : The Way It Is

Let's start with the standard implementation of the INotifyPropertyChanged interface. The following should look familiar. Notice that each setter has a call to the NotifyPropertyChanged() method, which needs a magic string to pass to the event args.  Careful! If you misspell or mis-case the magic string, the compiler won't say a word and your bindings will quietly break at runtime. If you're into that sort of thing, you should probably find professional help.

    1   public class Camel : INotifyPropertyChanged
    2   {
    3     private int _Age;
    4     public int Age
    5     {
    6       get
    7       {
    8         return _Age;
    9       }
   10       set
   11       {
   12         _Age = value;
   13         NotifyPropertyChanged("Age"); //What's with the magic strings?
   14       }
   15     }
   16
   17     private string _Name;
   18     public string Name
   19     {
   20       get
   21       {
   22         return _Name;
   23       }
   24       set
   25       {
   26         _Name = value;
   27         NotifyPropertyChanged("Mane"); //Oops! A typo the compiler can't catch.
   28       }
   29     }
   30
   31     private bool _HasGingivitis;
   32     public bool HasGingivitis
   33     {
   34       get
   35       {
   36         return _HasGingivitis;
   37       }
   38       set
   39       {
   40         _HasGingivitis = value;
   41         NotifyPropertyChanged("HasGingivitis");
   42       }
   43     }
   44
   45     //Do I really need to write this method for all of my classes?? I'm too lazy.
   46     private void NotifyPropertyChanged(string propertyName)
   47     {
   48       if (null != PropertyChanged)
   49         PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
   50     }
   51
   52     #region INotifyPropertyChanged Members
   53
   54     public event PropertyChangedEventHandler PropertyChanged;
   55
   56     #endregion
   57   }

Take 2 : The Way I Want It To Be

Now let's look at a sample using the extension method implementation. With this implementation, our beloved magic strings are replaced with IntelliSense-friendly lambda expressions. Notice that the calls to the extension method require the use of 'this.' syntax. Additionally, the extension method needs a reference to the PropertyChanged delegate in order to fire the event.

    1   public class Camel : INotifyPropertyChanged
    2   {
    3     private int _Age;
    4     public int Age
    5     {
    6       get
    7       {
    8         return _Age;
    9       }
   10       set
   11       {
   12         _Age = value;
   13         this.NotifyPropertyChanged(e => e.Age, PropertyChanged);
   14       }
   15     }
   16
   17     private string _Name;
   18     public string Name
   19     {
   20       get
   21       {
   22         return _Name;
   23       }
   24       set
   25       {
   26         _Name = value;
   27         this.NotifyPropertyChanged(e => e.Name, PropertyChanged);
   28       }
   29     }
   30
   31     private bool _HasGingivitis;
   32     public bool HasGingivitis
   33     {
   34       get
   35       {
   36         return _HasGingivitis;
   37       }
   38       set
   39       {
   40         _HasGingivitis = value;
   41         this.NotifyPropertyChanged(e => e.HasGingivitis, PropertyChanged);
   42       }
   43     }
   44
   45     #region INotifyPropertyChanged Members
   46
   47     public event PropertyChangedEventHandler PropertyChanged;
   48
   49     #endregion
   50   }

Where the Magic Happens

Here's the implementation for the extension method.  One subtlety worth calling out: when the selected property is a value type (like Age or HasGingivitis), the compiler boxes it to satisfy the Expression&lt;Func&lt;TEntity, object&gt;&gt; signature, so the expression body arrives as a UnaryExpression (a Convert node) rather than a MemberExpression. FormatPropertyName() unwraps that node before reading the member name. Have fun with it!

    1 using System;
    2 using System.ComponentModel;
    3 using System.Linq.Expressions;
    4
    5 namespace Extensions
    6 {
    7   public static class ExtensionMethods
    8   {
    9     public static void NotifyPropertyChanged<TEntity>(this TEntity entity,
            Expression<Func<TEntity, object>> propertySelector,
            PropertyChangedEventHandler propertyChanged)
   10       where TEntity : INotifyPropertyChanged
   11     {
   12       if (propertyChanged != null)
   13         propertyChanged(entity,
                new PropertyChangedEventArgs(FormatPropertyName(propertySelector)));
   14     }
   15
   16     static string FormatPropertyName<TEntity>
            (Expression<Func<TEntity, object>> propertySelector)
   17     {
   18       //Value-type properties get boxed to object, which wraps the member
   19       //access in a Convert node; unwrap it before casting.
   20       var body = propertySelector.Body;
   21       if (body is UnaryExpression)
   22         body = ((UnaryExpression)body).Operand;
   23       var memberExpression = (MemberExpression)body;
   24       return memberExpression.Member.Name;
   25     }
   26   }
   27 }
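If you want to see it fire end to end, a minimal console sketch along these lines should do (it assumes the Take 2 version of the Camel class and the Extensions namespace above are in scope):

```csharp
using System;
using Extensions; //the extension method namespace from above

class Program
{
  static void Main()
  {
    var camel = new Camel();

    //Subscribe so we can observe the property name the lambda-based call extracts.
    camel.PropertyChanged += (sender, e) =>
      Console.WriteLine("Changed: " + e.PropertyName);

    camel.Name = "Clyde"; //prints "Changed: Name" - no magic strings involved
  }
}
```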

Posted in C#, ComponentModel, Extension Methods, INotifyPropertyChanged | 1 Comment »

Business Objects Enterprise 11.5 Shared Secret…Shhhh!

Posted by Randolph Cabral on Thursday, May 29, 2008

I had to blog about this because I’m sure many of you who need to deal with the BOE XI .NET APIs will find this one useful.  Also because I know I’m going to need this info again at some point and I figure this is the best place to memorialize this knowledge.  In this post, I’ll specifically talk about my experience in enabling trusted authentication to the BOE server using ‘secEnterprise’ security with the BOXI .NET APIs.  A note for Java developers – I think most of the code samples should work with the exception of the use of concrete types.  Most of the types are replaced with interfaces in Java and the naming convention is such that you should be able to add an ‘I’ to the beginning of the type names.  Why they didn’t do this for .NET is a mystery to me.

The first thing we need to do is open the secEnterprise folder on both the client machine and the server.  The path is as follows – [Drive:\Business Objects Installation Folder]\BusinessObjects Enterprise 11.5\win32_x86\plugins\auth\secEnterprise.  Here you should find a single file named secEnterprise.dll.  If you don't see this file or folder, you'll need to install the BOE XI R2 SP2 client tools.  The version I installed is dated 3/6/2007.  Once installed, the files and folders mentioned above should be present.  Next, add a new text file in that folder and name it "TrustedPrincipal.conf".  Using your favorite plain-vanilla text editor (I use Notepad), add a line that reads "SecretPassword=[word or phrase without spaces, using only 0-9, A-Z, and a-z characters]".  Obviously, the word or phrase must match on the client and server installations. NOTE:  If the server already has this file and a password/phrase is already present, do not change it!  I recommend copying the server value to your client file.  But if you're feeling macho and absolutely need to change it, a reboot of the server may be required to flush the old value from cache.
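To make that concrete, the whole file amounts to a single line. The secret shown here is just a made-up placeholder — use the value already on your server:

```text
SecretPassword=MySharedSecret123
```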

Now, the following code sample will allow you to “impersonate” any user that has been granted access to the BOE server using the trusted principal method.

  SessionMgr sess = new SessionMgr();
  TrustedPrincipal tp = sess.CreateTrustedPrincipal("John.Doe", "YourBoxiServerName");
  EnterpriseSession esess = sess.LogonTrustedPrincipal(tp);

With a little bit more code, we can use the current thread to detect the security principal and parse the user name so we can pass it in as a method argument.

  string identName = System.Threading.Thread.CurrentPrincipal.Identity.Name;

  if (string.IsNullOrEmpty(identName))
    throw new System.Security.SecurityException("Not logged in!");

  int startIdx = identName.IndexOf('\\') + 1;
  string username = identName.Substring(startIdx, identName.Length - startIdx);

  SessionMgr sess = new SessionMgr();
  TrustedPrincipal tp = sess.CreateTrustedPrincipal(username, "YourBoxiServerName");
  EnterpriseSession esess = sess.LogonTrustedPrincipal(tp);
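One housekeeping note: CMS sessions are a finite licensed resource, so it's worth logging off when you're done. A hedged sketch — Logoff() is the standard EnterpriseSession cleanup call in the SDK, but verify it against your version:

```csharp
EnterpriseSession esess = null;
try
{
  SessionMgr sess = new SessionMgr();
  TrustedPrincipal tp = sess.CreateTrustedPrincipal(username, "YourBoxiServerName");
  esess = sess.LogonTrustedPrincipal(tp);
  //...do work against the session here...
}
finally
{
  if (esess != null)
    esess.Logoff(); //release the session back to the CMS
}
```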

One last thing to note is that I'm referencing a combination of versions 11.5.3300.0 and 11.5.3700.0 of the CrystalDecisions assemblies.

Well, that’s it.  I hope someone out there finds this post helpful. 

Posted in BOXI .NET SDK, C#, General, Programming | Tagged: , , | 1 Comment »

Exploring N-Tier Architecture with LINQ to SQL (Part 4 of n)

Posted by Randolph Cabral on Wednesday, May 21, 2008

Sanity Check

Now that we've completed the first of what I'm sure will be many coding iterations of our business entity layer solution, it's time to do some testing.  In this post we'll make sure our approach holds up to a typical usage scenario.  To do this, I've put together a Visual Studio solution that contains a web application and two class libraries: one for our LINQ to SQL entities and another for our business entities.  We'll build out the domain model and use it in our web UI layer to demonstrate how we could interact with the business entities.

Let's start with the LINQ to SQL class library project.  This project contains the Northwind.dbml, Northwind.dbml.layout, and Northwind.designer.cs files.  These files and their contents were generated by the Visual Studio 2008 IDE using SQLMetal after I dragged the Customer, Order, Order_Detail, and Product tables onto the design surface from the Server Explorer.  I've taken a screen shot of the resulting diagram from the designer (below).  Each box represents a table in the database, which in turn is generated into a corresponding LINQ to SQL class complete with mappings and a DataContext.

 

Business Context

Moving over to the business entity project, I've written a class for each LINQ to SQL entity as well as a business context class that wraps the DataContext class.  The following code sample implements the business context (in its entirety) in much the same way we explored in Part 2 of this series.  An item to note is the addition of two methods that handle attachment of disconnected entities for inserting and deleting, at lines 97 and 108 respectively.  To support all business entity types, we use reflection to find the LinqEntity member and then use the composed DataContext to invoke the data store logic.

    1   public partial class NorthwindBusinessContext : IDisposable
    2   {
    3     public NorthwindBusinessContext()
    4     {
    5       DataContext = new NorthwindDataContext();
    6     }
    7
    8     public event PersistStateEvent OnBeforePersistState;
    9     public event PersistStateEvent OnAfterPersistState;
   10
   11     protected NorthwindDataContext DataContext { get; set; }
   12
   13     #region Factory Methods
   14
   15     public CustomerBus GetCustomerBusBy(string customerID)
   16     {
   17       var linqEntity = DataContext.GetTable<Customer>()
   18         .Single(entity => entity.CustomerID == customerID);
   19       var ret = new CustomerBus(linqEntity);
   20       return ret;
   21     }
   22
   23     public IEnumerable<CustomerBus> GetAllCustomerBuses()
   24     {
   25       var ret = new Collection<CustomerBus>();
   26       var result = DataContext.GetTable<Customer>();
   27       foreach (var linqEntity in result)
   28       {
   29         var entity = new CustomerBus(linqEntity);
   30         ret.Add(entity);
   31       }
   32       return ret;
   33     }
   34
   35     public OrderBus GetOrderBusBy(int orderID)
   36     {
   37       var linqEntity = DataContext.GetTable<Order>()
   38         .Single(entity => entity.OrderID == orderID);
   39       var ret = new OrderBus(linqEntity);
   40       return ret;
   41     }
   42
   43     public IEnumerable<OrderBus> GetAllOrderBuses()
   44     {
   45       var ret = new Collection<OrderBus>();
   46       var result = DataContext.GetTable<Order>();
   47       foreach (var linqEntity in result)
   48       {
   49         var entity = new OrderBus(linqEntity);
   50         ret.Add(entity);
   51       }
   52       return ret;
   53     }
   54
   55     public Detail GetDetailBy(int orderID, int productID)
   56     {
   57       var linqEntity = DataContext.GetTable<Order_Detail>()
   58         .Single(entity => entity.OrderID == orderID && entity.ProductID == productID);
   59       var ret = new Detail(linqEntity);
   60       return ret;
   61     }
   62
   63     public IEnumerable<Detail> GetAllDetails()
   64     {
   65       var ret = new Collection<Detail>();
   66       var result = DataContext.GetTable<Order_Detail>();
   67       foreach (var linqEntity in result)
   68       {
   69         var entity = new Detail(linqEntity);
   70         ret.Add(entity);
   71       }
   72       return ret;
   73     }
   74
   75     public ProductBus GetProductBusBy(int productID)
   76     {
   77       var linqEntity = DataContext.GetTable<Product>()
   78         .Single(entity => entity.ProductID == productID);
   79       var ret = new ProductBus(linqEntity);
   80       return ret;
   81     }
   82
   83     public IEnumerable<ProductBus> GetAllProductBuses()
   84     {
   85       var ret = new Collection<ProductBus>();
   86       var result = DataContext.GetTable<Product>();
   87       foreach (var linqEntity in result)
   88       {
   89         var entity = new ProductBus(linqEntity);
   90         ret.Add(entity);
   91       }
   92       return ret;
   93     }
   94
   95     #endregion
   96
   97     public void DeleteOnPersistState(object entity)
   98     {
   99       var linqEntityRef = entity.GetType().GetProperty("LinqEntity",
  100         System.Reflection.BindingFlags.Instance
  101         | System.Reflection.BindingFlags.NonPublic
  102         | System.Reflection.BindingFlags.Public
  103         | System.Reflection.BindingFlags.FlattenHierarchy).GetValue(entity, null);
  104       var linqEntityType = linqEntityRef.GetType();
  105       DataContext.GetTable(linqEntityType).DeleteOnSubmit(linqEntityRef);
  106     }
  107
  108     public void AddOnPersistState(object entity)
  109     {
  110       var linqEntityRef = entity.GetType().GetProperty("LinqEntity",
  111         System.Reflection.BindingFlags.Instance
  112         | System.Reflection.BindingFlags.NonPublic
  113         | System.Reflection.BindingFlags.Public
  114         | System.Reflection.BindingFlags.FlattenHierarchy).GetValue(entity, null);
  115       var linqEntityType = linqEntityRef.GetType();
  116       DataContext.GetTable(linqEntityType).InsertOnSubmit(linqEntityRef);
  117     }
  118
  119     public void PersistState()
  120     {
  121       if (OnBeforePersistState != null)
  122         OnBeforePersistState(this, new EventArgs());
  123
  124       DataContext.SubmitChanges();
  125
  126       if (OnAfterPersistState != null)
  127         OnAfterPersistState(this, new EventArgs());
  128     }
  129
  130     #region IDisposable Members
  131
  132     public void Dispose()
  133     {
  134       DataContext.Dispose();
  135     }
  136
  137     #endregion
  138   }

 

Business Entity

Below is a sample of the CustomerBus business entity.  For brevity, I've removed some of the properties and associations that implement the same pattern.  Each of the business classes in this project employs the design constructs we explored in Part 3 of this series.  Additionally, I've written an IValidateable interface and a SimpleDataValidator class that reads the custom attributes found on each public property and populates a collection of validation errors when any of the tests fail.  More on validation in a future post; here we'll focus on the business context and the business entities.

    1   public partial class CustomerBus : IEntity, IValidateable
    2   {
    3     public CustomerBus()
    4     {
    5       Validators.Add(new SimpleDataValidator());
    6     }
    7
    8     internal CustomerBus(Customer linqEntity) : this()
    9     {
   10       LinqEntity = linqEntity;
   11     }
   12
   13     private Customer _linqEntity = null;
   14     internal Customer LinqEntity
   15     {
   16       get
   17       {
   18         if (_linqEntity == null)
   19           _linqEntity = new Customer();
   20
   21         return _linqEntity;
   22       }
   23       set
   24       {
   25         _linqEntity = value;
   26       }
   27     }
   28
   29     #region Properties
   30
   31     [Required("'CustomerID' is a required field.")]
   32     [MaxLength(5, "The 'CustomerID' field cannot accept more than 5 characters.")]
   33     public virtual string CustomerID
   34     {
   35       get
   36       {
   37         return LinqEntity.CustomerID;
   38       }
   39       set
   40       {
   41         LinqEntity.CustomerID = value;
   42       }
   43     }
   44
   45     [Required("'CompanyName' is a required field.")]
   46     [MaxLength(40, "The 'CompanyName' field cannot accept more than 40 characters.")]
   47     public virtual string CompanyName
   48     {
   49       get
   50       {
   51         return LinqEntity.CompanyName;
   52       }
   53       set
   54       {
   55         LinqEntity.CompanyName = value;
   56       }
   57     }
   58     …
   59
   60     #endregion
   61
   62     #region Associations
   63
   64     public virtual void AddOrderBus(OrderBus entity)
   65     {
   66       _Orders.Add(entity);
   67       LinqEntity.Orders.Add(entity.LinqEntity);
   68     }
   69
   70     public virtual void RemoveOrderBus(OrderBus entity)
   71     {
   72       _Orders.Remove(entity);
   73       LinqEntity.Orders.Remove(entity.LinqEntity);
   74     }
   75
   76     public virtual OrderBusCollection Orders
   77     {
   78       get
   79       {
   80         if (_Orders == null)
   81           _Orders = new OrderBusCollection(LinqEntity.Orders);
   82
   83         return _Orders;
   84       }
   85     }
   86     private OrderBusCollection _Orders = null;
   87
   88
   89     #endregion
   90
   91     #region IValidateable Members
   92
   93     public void ClearValidationErrors()
   94     {
   95       ValidationErrors.Clear();
   96     }
   97
   98     public bool HasValidationErrors
   99     {
  100       get { return (ValidationErrors.Count > 0); }
  101     }
  102
  103     public List<IValidator> Validators
  104     {
  105       get { return _validators; }
  106     }
  107     private List<IValidator> _validators = new List<IValidator>();
  108
  109     public void Validate()
  110     {
  111       foreach (var v in Validators)
  112       {
  113         v.Validate(this);
  114       }
  115
  116       if (HasValidationErrors)
  117       {
  118         var errorMessages = string.Empty;
  119
  120         foreach (var v in ValidationErrors)
  121           errorMessages += string.Format("{0}", v.GetValidationErrorMessage());
  122
  123         if (ValidationErrorHandling == ValidationErrorHandling.ThrowException)
  124           throw new ValidationException(errorMessages);
  125       }
  126     }
  127
  128     public ValidationErrorHandling ValidationErrorHandling
  129     {
  130       get
  131       {
  132         return _validationErrorHandling;
  133       }
  134       set
  135       {
  136         _validationErrorHandling = value;
  137       }
  138     }
  139     private ValidationErrorHandling _validationErrorHandling = ValidationErrorHandling.ThrowException;
  140
  141     public List<IValidationError> ValidationErrors
  142     {
  143       get { return _validationErrors; }
  144     }
  145     private List<IValidationError> _validationErrors = new List<IValidationError>();
  146
  147     #endregion
  148   }

 

Application Tier 

In the web application, we have a web.config file and a Default.aspx page with its code-behind file, Default.aspx.cs.  We won't make any modifications to the HTML markup in Default.aspx because all we want to do in this test is run some basic reads and updates against the data store.  To use the classes defined in the business library, we'll add a project reference to it.

 

Now let's take a look at the code in the Page_Load() event handler.  Starting with line 10, we instantiate a business context object so that we can request a CustomerBus object with one of the factory methods.  We want to test the object tracking capabilities of the underlying DataContext, so on line 12 we make sure the City property is updated.  To test the associations, we loop through the Orders collection and modify the OrderDate property.  Taking this a step further, we traverse each order's OrderDetails collection and increment both the quantity and the price.  Finally, a call to the PersistState() method completes the test.  Remember, each time we traverse the object graph's entity associations we incur a lazy-load query against the data store, because that is the default behavior of LINQ to SQL.  If you'd rather load the graph up front, assign a DataLoadOptions instance (built with LoadWith()) to the DataContext, and optionally set its DeferredLoadingEnabled property to false.  This won't hurt our lazy-loading code in the business entity layer because the underlying DataContext will have preloaded the entities; our business entities will simply request the objects from the DataContext.

    1 using System;
    2 using Northwind.Entities;
    3
    4 namespace L2SnTierTestWeb
    5 {
    6   public partial class _Default : System.Web.UI.Page
    7   {
    8     protected void Page_Load(object sender, EventArgs e)
    9     {
   10       var ctx = new NorthwindBusinessContext();
   11       var cust = ctx.GetCustomerBusBy("ALFKI");
   12       cust.City = (cust.City == "Berlin") ? "Hamburg" : "Berlin";
   13
   14       foreach (var order in cust.Orders)
   15       {
   16         order.OrderDate = DateTime.Now;
   17         foreach (var orderdetail in order.OrderDetails)
   18         {
   19           orderdetail.Quantity++;
   20           orderdetail.Product.UnitPrice++;
   21         }
   22       }
   23
   24       ctx.PersistState();
   25     }
   26   }
   27 }
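As an aside, if you'd rather preload the object graph than pay the lazy-load penalty described above, LINQ to SQL's DataLoadOptions can be attached to the DataContext before the first query runs. A minimal sketch, assuming direct access to the underlying NorthwindDataContext (the business context above keeps it protected, so you would expose it or configure this in the constructor):

```csharp
var dc = new NorthwindDataContext();

//Tell the context to fetch children alongside their parents in the initial query.
var options = new DataLoadOptions();
options.LoadWith<Customer>(c => c.Orders);
options.LoadWith<Order>(o => o.Order_Details);
dc.LoadOptions = options;

//Optionally turn off deferred loading entirely so no lazy queries sneak in later.
dc.DeferredLoadingEnabled = false;
```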

 

SQL Profiler

At this point we can open up SQL Profiler and inspect the resulting queries.  On the reads, we see a total of twenty queries: one for the initial customer query and one for each lazy-loaded relationship.  Again, if you want the object graph to preload, configuring the DataContext's load options is all that is required.  On the update side, which starts at line 130, we are greeted with a very nice set of batched updates covering customers, orders, and details.  This is consistent with using LINQ to SQL out of the box, only now we're working with our own business entities, where we can add domain-specific logic.

    1 exec sp_reset_connection
    2 go
    3 exec sp_executesql N'SELECT [t0].[CustomerID], [t0].[CompanyName], [t0].[ContactName], [t0].[ContactTitle], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode],
    4 [t0].[Country], [t0].[Phone], [t0].[Fax]
    5 FROM [dbo].[Customers] AS [t0]
    6 WHERE [t0].[CustomerID] = @p0', N'@p0 nvarchar(5)', @p0 = N'ALFKI'
    7 go
    8 exec sp_reset_connection
    9 go
   10 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[CustomerID], [t0].[EmployeeID], [t0].[OrderDate], [t0].[RequiredDate], [t0].[ShippedDate], [t0].[ShipVia], [t0].[Freight],
   11 [t0].[ShipName], [t0].[ShipAddress], [t0].[ShipCity], [t0].[ShipRegion], [t0].[ShipPostalCode], [t0].[ShipCountry]
   12 FROM [dbo].[Orders] AS [t0]
   13 WHERE [t0].[CustomerID] = @p0', N'@p0 nvarchar(5)', @p0 = N'ALFKI'
   14 go
   15 exec sp_reset_connection
   16 go
   17 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
   18 FROM [dbo].[Order Details] AS [t0]
   19 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 10643
   20 go
   21 exec sp_reset_connection
   22 go
   23 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   24 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   25 FROM [dbo].[Products] AS [t0]
   26 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 28
   27 go
   28 exec sp_reset_connection
   29 go
   30 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   31 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   32 FROM [dbo].[Products] AS [t0]
   33 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 39
   34 go
   35 exec sp_reset_connection
   36 go
   37 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   38 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   39 FROM [dbo].[Products] AS [t0]
   40 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 46
   41 go
   42 exec sp_reset_connection
   43 go
   44 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
   45 FROM [dbo].[Order Details] AS [t0]
   46 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 10692
   47 go
   48 exec sp_reset_connection
   49 go
   50 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   51 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   52 FROM [dbo].[Products] AS [t0]
   53 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 63
   54 go
   55 exec sp_reset_connection
   56 go
   57 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
   58 FROM [dbo].[Order Details] AS [t0]
   59 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 10702
   60 go
   61 exec sp_reset_connection
   62 go
   63 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   64 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   65 FROM [dbo].[Products] AS [t0]
   66 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 3
   67 go
   68 exec sp_reset_connection
   69 go
   70 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   71 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   72 FROM [dbo].[Products] AS [t0]
   73 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 76
   74 go
   75 exec sp_reset_connection
   76 go
   77 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
   78 FROM [dbo].[Order Details] AS [t0]
   79 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 10835
   80 go
   81 exec sp_reset_connection
   82 go
   83 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   84 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   85 FROM [dbo].[Products] AS [t0]
   86 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 59
   87 go
   88 exec sp_reset_connection
   89 go
   90 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
   91 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
   92 FROM [dbo].[Products] AS [t0]
   93 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 77
   94 go
   95 exec sp_reset_connection
   96 go
   97 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
   98 FROM [dbo].[Order Details] AS [t0]
   99 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 10952
  100 go
  101 exec sp_reset_connection
  102 go
  103 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
  104 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
  105 FROM [dbo].[Products] AS [t0]
  106 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 6
  107 go
  108 exec sp_reset_connection
  109 go
  110 exec sp_executesql N'SELECT [t0].[OrderID], [t0].[ProductID], [t0].[UnitPrice], [t0].[Quantity], [t0].[Discount]
  111 FROM [dbo].[Order Details] AS [t0]
  112 WHERE [t0].[OrderID] = @p0', N'@p0 int', @p0 = 11011
  113 go
  114 exec sp_reset_connection
  115 go
  116 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
  117 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
  118 FROM [dbo].[Products] AS [t0]
  119 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 58
  120 go
  121 exec sp_reset_connection
  122 go
  123 exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock],
  124 [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
  125 FROM [dbo].[Products] AS [t0]
  126 WHERE [t0].[ProductID] = @p0', N'@p0 int', @p0 = 71
  127 go
  128 exec sp_reset_connection
  129 go
  130 SET TRANSACTION ISOLATION LEVEL READ COMMITTED;BEGIN TRANSACTION
  131 go
  132 exec sp_executesql N'UPDATE [dbo].[Customers]
  133 SET [City] = @p1
  134 WHERE [CustomerID] = @p0', N'@p0 nchar(5),@p1 nvarchar(6)', @p0 = N'ALFKI', @p1 = N'Berlin'
  135 go
  136 exec sp_executesql N'UPDATE [dbo].[Orders]
  137 SET [OrderDate] = @p1
  138 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 10643, @p1 = 'May 21 2008 12:28:08:667AM'
  139 go
  140 exec sp_executesql N'UPDATE [dbo].[Orders]
  141 SET [OrderDate] = @p1
  142 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 10692, @p1 = 'May 21 2008 12:28:08:677AM'
  143 go
  144 exec sp_executesql N'UPDATE [dbo].[Orders]
  145 SET [OrderDate] = @p1
  146 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 10702, @p1 = 'May 21 2008 12:28:08:680AM'
  147 go
  148 exec sp_executesql N'UPDATE [dbo].[Orders]
  149 SET [OrderDate] = @p1
  150 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 10835, @p1 = 'May 21 2008 12:28:08:687AM'
  151 go
  152 exec sp_executesql N'UPDATE [dbo].[Orders]
  153 SET [OrderDate] = @p1
  154 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 10952, @p1 = 'May 21 2008 12:28:08:693AM'
  155 go
  156 exec sp_executesql N'UPDATE [dbo].[Orders]
  157 SET [OrderDate] = @p1
  158 WHERE [OrderID] = @p0', N'@p0 int,@p1 datetime', @p0 = 11011, @p1 = 'May 21 2008 12:28:08:697AM'
  159 go
  160 exec sp_executesql N'UPDATE [dbo].[Order Details]
  161 SET [Quantity] = @p2
  162 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10643, @p1 = 28, @p2 = 28
  163 go
  164 exec sp_executesql N'UPDATE [dbo].[Order Details]
  165 SET [Quantity] = @p2
  166 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10643, @p1 = 39, @p2 = 34
  167 go
  168 exec sp_executesql N'UPDATE [dbo].[Order Details]
  169 SET [Quantity] = @p2
  170 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10643, @p1 = 46, @p2 = 15
  171 go
  172 exec sp_executesql N'UPDATE [dbo].[Order Details]
  173 SET [Quantity] = @p2
  174 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10692, @p1 = 63, @p2 = 33
  175 go
  176 exec sp_executesql N'UPDATE [dbo].[Order Details]
  177 SET [Quantity] = @p2
  178 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10702, @p1 = 3, @p2 = 19
  179 go
  180 exec sp_executesql N'UPDATE [dbo].[Order Details]
  181 SET [Quantity] = @p2
  182 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10702, @p1 = 76, @p2 = 28
  183 go
  184 exec sp_executesql N'UPDATE [dbo].[Order Details]
  185 SET [Quantity] = @p2
  186 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)', N'@p0 int,@p1 int,@p2 smallint', @p0 = 10835, @p1 = 59, @p2 = 28

  187 go

  188 exec sp_executesql N‘UPDATE [dbo].[Order Details]

  189 SET [Quantity] = @p2

  190 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)’, N‘@p0 int,@p1 int,@p2 smallint’, @p0 = 10835, @p1 = 77, @p2 = 15

  191 go

  192 exec sp_executesql N‘UPDATE [dbo].[Order Details]

  193 SET [Quantity] = @p2

  194 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)’, N‘@p0 int,@p1 int,@p2 smallint’, @p0 = 10952, @p1 = 6, @p2 = 29

  195 go

  196 exec sp_executesql N‘UPDATE [dbo].[Order Details]

  197 SET [Quantity] = @p2

  198 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)’, N‘@p0 int,@p1 int,@p2 smallint’, @p0 = 10952, @p1 = 28, @p2 = 15

  199 go

  200 exec sp_executesql N‘UPDATE [dbo].[Order Details]

  201 SET [Quantity] = @p2

  202 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)’, N‘@p0 int,@p1 int,@p2 smallint’, @p0 = 11011, @p1 = 58, @p2 = 53

  203 go

  204 exec sp_executesql N‘UPDATE [dbo].[Order Details]

  205 SET [Quantity] = @p2

  206 WHERE ([OrderID] = @p0) AND ([ProductID] = @p1)’, N‘@p0 int,@p1 int,@p2 smallint’, @p0 = 11011, @p1 = 71, @p2 = 33

  207 go

  208 exec sp_executesql N‘UPDATE [dbo].[Products]

  209 SET [UnitPrice] = @p1

  210 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 3, @p1 = $19.0000

  211 go

  212 exec sp_executesql N‘UPDATE [dbo].[Products]

  213 SET [UnitPrice] = @p1

  214 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 6, @p1 = $34.0000

  215 go

  216 exec sp_executesql N‘UPDATE [dbo].[Products]

  217 SET [UnitPrice] = @p1

  218 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 28, @p1 = $63.6000

  219 go

  220 exec sp_executesql N‘UPDATE [dbo].[Products]

  221 SET [UnitPrice] = @p1

  222 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 39, @p1 = $27.0000

  223 go

  224 exec sp_executesql N‘UPDATE [dbo].[Products]

  225 SET [UnitPrice] = @p1

  226 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 46, @p1 = $21.0000

  227 go

  228 exec sp_executesql N‘UPDATE [dbo].[Products]

  229 SET [UnitPrice] = @p1

  230 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 58, @p1 = $22.2500

  231 go

  232 exec sp_executesql N‘UPDATE [dbo].[Products]

  233 SET [UnitPrice] = @p1

  234 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 59, @p1 = $64.0000

  235 go

  236 exec sp_executesql N‘UPDATE [dbo].[Products]

  237 SET [UnitPrice] = @p1

  238 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 63, @p1 = $52.9000

  239 go

  240 exec sp_executesql N‘UPDATE [dbo].[Products]

  241 SET [UnitPrice] = @p1

  242 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 71, @p1 = $30.5000

  243 go

  244 exec sp_executesql N‘UPDATE [dbo].[Products]

  245 SET [UnitPrice] = @p1

  246 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 76, @p1 = $27.0000

  247 go

  248 exec sp_executesql N‘UPDATE [dbo].[Products]

  249 SET [UnitPrice] = @p1

  250 WHERE [ProductID] = @p0′, N‘@p0 int,@p1 money’, @p0 = 77, @p1 = $22.0000

  251 go

  252 COMMIT TRANSACTION

  253 go

 

Sorry for the long post.  I’ll try to break it up more next time.  The takeaway here is that our efforts to abstract away the LINQ to SQL entities and DataContext have been successful so far.  I didn’t want to get too wrapped up in the minutiae, but for those of you who absolutely need to decouple the business layer from the LINQ to SQL layer, I would recommend extracting an interface from each of the LINQ to SQL entities and from the DataContext.  This would allow for the flexibility of dependency injection (interface injection in this case) and TDD at the business layer.  Keep in mind that at some point the cost of this loose coupling may start to outweigh the return, depending on your project.  I’ll let you decide what’s best for yours.
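For those taking that interface-extraction route, a rough sketch of the idea might look like the following.  The `INorthwindContext` name and member list are illustrative assumptions; the members mirror the NorthwindBusinessContext from part 2 of this series.

```csharp
//Hypothetical extracted interface; the name and member list are illustrative
public interface INorthwindContext : IDisposable
{
  Customer GetCustomerById(string customerId);
  void PersistState();
}

//The business context from part 2 would simply declare the interface...
public partial class CustomerBusinessContextSketch : INorthwindContext
{
  //...existing GetCustomerById, PersistState, and Dispose implementations...
  public Customer GetCustomerById(string customerId) { return null; }
  public void PersistState() { }
  public void Dispose() { }
}

//...so that tests can inject a stand-in that never touches the database
public class FakeNorthwindContext : INorthwindContext
{
  public Customer GetCustomerById(string customerId)
  {
    return new Customer { CustomerID = customerId };
  }

  public void PersistState() { /* no-op for tests */ }
  public void Dispose() { }
}
```

Business-layer code written against `INorthwindContext` can then be exercised in unit tests with the fake, and with the real context in production.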

In the next part of this series, we’ll take a look at what we can do to finally marry the ActiveRecord pattern to our abstracted model.  I will concede that at this point, there may be no need to come full circle with AR.  I’m completely sold on the idea of using the Unit of Work pattern (context) in lieu of AR.  What I grappled with at first was this issue of trust.  I needed to trust the context and relinquish control.  The AR pattern gave me ultimate control of what DML operations were called and when they were executed on the data store.  As a developer, you’re going to need to develop a trusting relationship with the context.  Then and only then will you be able to embrace its liberating capabilities.

Posted in C#, Domain Model, LINQ to SQL, Object Oriented Programming, Software Architecture | Tagged: , , , , | 14 Comments »

Exploring N-Tier Architecture with LINQ to SQL (Part 3 of n)

Posted by Randolph Cabral on Thursday, May 8, 2008

Part 3 – Creating Business Entities from Data Entities

In part one of this series we looked at how LINQ to SQL entities are structured by analyzing their members and mappings.  We concluded that they aren’t suitable in a model where we would derive from them because doing so would hide the LINQ to SQL mappings from the query provider.  Creating a translation layer to handle loading and tracking of business entities would be too monumental a task to take on, so clearly that’s not an option either.  Ideally, we want to encapsulate the data entities in business entities without sacrificing code re-usability.  Fortunately, we have a neat pattern that we can use to do just that, and it’s called object composition.

Do the names Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides ring a bell?  Do you know what I’m referring to when I say, “GoF”?  Yes, you guessed it.  The Gang of Four and their book, Design Patterns: Elements of Reusable Object-Oriented Software.  In the book, they cite several important design techniques such as programming to an interface and object composition.  The one we’re interested in right now is object composition.  Take for example a business requirement where every customer record must have address data for billing and shipping.  Because the Customer object only calls for a couple of addresses, we can model the relationship with object composition using an Address class.  The code sample below illustrates how we would write the Customer class using this pattern.  Notice that there are two public Address properties which represent billing and shipping data as separate object instances.

public class Address

{

  public string Street { get; set; }

  public string City { get; set; }

  public string PostalCode { get; set; }

}

 

public class Customer

{

  public Address BillingAddress { get; set; }

  public Address ShippingAddress { get; set; }

}

With a little tweaking, we should be able to apply this pattern to reuse the LINQ to SQL entities within our business classes.  In the following code sample, we start by writing a business layer customer class called CustomerBus with an internal constructor that accepts a LINQ to SQL entity reference.  A private member that composes the LINQ to SQL Customer class keeps the reference in a safe place.  Now here’s where we deviate a little from the composition pattern.  Rather than adding a public property to expose the LINQ to SQL class, we’ll add public properties that expose each of the LINQ to SQL entity properties instead.  Doing this gives us more granular control in dealing with the data entity properties.  For example, we can change the names of the exposed properties, hide certain properties, and create custom domain specific validation logic for those properties we choose to expose.

//Business class

public class CustomerBus

{

  //Internal constructor that accepts LINQ entity reference.

  internal CustomerBus(Customer linqEntity)

  {

    LinqEntity = linqEntity;

  }

 

  //Private LINQ entity reference

  private Customer LinqEntity { get; set; }

 

  //Sample property that exposes a LINQ entity property

  public string CustomerID

  {

    get

    {

      //Expose the LINQ entity property

      return LinqEntity.CustomerID;

    }

    set

    {

      //Add domain specific validation logic here if desired.

      LinqEntity.CustomerID = value;

    }

  }

 

  …

}

 

Associations

Let’s take a look at how we would create relationships using this model.  One advantage of keeping a reference to the LINQ to SQL entity is that we have access to the relationships in the data layer.  Remember, our business entities can be constructed with the LINQ to SQL entities passed in as an argument to the internal constructor.  We can use the existing relationships created by LINQ to SQL in the data layer for free.  Our job just got a lot easier because all we need to worry about now is promoting the LINQ to SQL entity to a business entity.  Next, we’re going to need to expose the LINQ to SQL entity member by declaring the property as “internal”.  This exposes the member to the other business entities in the assembly safely.  We’ll need this so we can read the LINQ to SQL entity to customize the add and remove logic in our collections.

//Business class

public class CustomerBus

{

 

  //Changed from private to internal.

  //And added custom getter and setter

  //LINQ entity reference

  private Customer _linqEntity = null;

  internal Customer LinqEntity

  {

    get

    {

      if(_linqEntity == null)

        _linqEntity = new Customer();

 

      return _linqEntity;

    }

    set

    {

      _linqEntity = value;

    }

  }

 

  …

Simply exposing the EntitySet<T> member from the data layer is not desirable in this case because that would expose the LINQ to SQL entities.  Support for customized add and remove logic to handle the EntitySet<T> reference requires a specialized collection class.  Our new collection class derives from System.Collections.ObjectModel.Collection<T> which implements many of the behaviors we need to manage the contained entities.  By deriving from Collection<T>, we get all of its functionality for free.  Next we can create a custom constructor that accepts the EntitySet<T> reference and declare it internal to the assembly.  We want to do this so that new collections of this type can only be instantiated from within this assembly to ensure that all instantiated objects are connected to a context.  The constructor will be responsible for promoting the LINQ to SQL entities to business entities and adding them to the collection.  Next we can add the specialized add and remove methods that hide the base class implementations to support management of the EntitySet<T> reference.  Now adds and removes are tracked by a context and can be processed as a batch.  Cool!

//Specialized business collection for OrderBus class

public class OrderBusCollection : Collection<OrderBus>

{

  //Internal constructor that uses the EntitySet<T> to hydrate itself

  internal OrderBusCollection(EntitySet<Order> entitySet)

  {

    EntitySet = entitySet;

    foreach (var linqEntity in EntitySet)

    {

      var entity = new OrderBus(linqEntity);

      base.Add(entity);

    }

  }

 

  private EntitySet<Order> EntitySet { get; set; }

 

  public new void Add(OrderBus entity)

  {

    EntitySet.Add(entity.LinqEntity);

    base.Add(entity);

  }

 

  public new void Remove(OrderBus entity)

  {

    EntitySet.Remove(entity.LinqEntity);

    base.Remove(entity);

  }

 

  public new void RemoveAt(int index)

  {

    Remove(this[index]);

  }

}

At this point, we can add the custom collection members to the business class as seen in the following code sample.  The custom “lazy” getter implementation ensures that we load the collection once it is requested and keeps a reference to the instance for future reference.  Now we have a working model that deals with the LINQ to SQL entities behind the scenes allowing for presentation code to interact purely with business entities.  Additionally, we can implement business-specific logic in the business classes in the form of validations or custom behaviors.

//Business class

public class CustomerBus

{

  //Association to orders

  public virtual OrderBusCollection Orders

  {

    get

    {

      if (_Orders == null)

        _Orders = new OrderBusCollection(LinqEntity.Orders);

 

      return _Orders;

    }

  }

  private OrderBusCollection _Orders = null;

 

  …

Yes, I know it’s a lot of code to have to write by hand.  But this is one of those busy tasks that can clearly be remedied with a custom code generator.  I’m currently working on one that I will make available in a future post.  Until then, there will be a lot of code to write.  The result of writing the code is clear.  We’ve created a business entity layer that abstracts the data entities away and gives us a nice place to add domain logic.  Another point of consternation might be with the pervasive use of the LINQ to SQL entities in the business layer.  This is one case where I feel the benefit of having those entity references localized in each business class outweighs the challenges that might be introduced with changes in the data tier.  We can somewhat isolate those issues with a custom code generator, which should take most of the pains away.  The code generator I’m working on declares each business class as a partial class.  This will allow us to keep our custom implementation separate from anything that was automatically generated by the code generator.
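As a rough illustration of that partial-class split (the file names here are hypothetical), the generated half and the hand-written half of a business class might look like this:

```csharp
//CustomerBus.generated.cs - emitted by the code generator; regenerated at will
public partial class CustomerBus
{
  internal CustomerBus(Customer linqEntity) { LinqEntity = linqEntity; }

  internal Customer LinqEntity { get; set; }

  public string CustomerID
  {
    get { return LinqEntity.CustomerID; }
    set { LinqEntity.CustomerID = value; }
  }
}

//CustomerBus.cs - hand-written customizations that survive regeneration
public partial class CustomerBus
{
  //Hypothetical domain rule, for illustration only
  public bool HasValidId()
  {
    return !string.IsNullOrEmpty(CustomerID) && CustomerID.Length == 5;
  }
}
```

Regenerating the first file never touches the second, so domain logic accumulates safely alongside the generated plumbing.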

In the next post, we’ll revisit the BusinessContext class to give it functionality to attach newly constructed business objects.  We’ll also take a look at the result of our efforts so far with a sample web application that uses our new business layer and take a peek at SQL Profiler to check out how our changes impact LINQ to SQL’s query provider.

Posted in C#, Domain Model, LINQ to SQL, Object Oriented Programming, Programming, Software Architecture | Tagged: , , , , | 6 Comments »

You’re Not A “Real Programmer”

Posted by Randolph Cabral on Wednesday, April 30, 2008

I recently met up with a fellow programmer buddy of mine who has been working for a software development group that has gone through several managers recently.  Each manager, varying in both development and management experience, had varying ideas on how to approach the requirements for a specific project.  Frequent turn over in management continued to complicate things.  Eventually, they found a manager that was willing to stick it out.  Unfortunately, his management style doesn’t exactly create a creative environment where developers can thrive. 

The group has been charged with converting an existing Windows Forms client application for the web.  After exhaustive research into which web technology platform to develop on, the manager decided to go against the tide and force the group to implement the new web application in C++.  Any developer in the group who expressed a difference in opinion was quickly shut down with the manager hinting at the notion that “real programmers” code in C++.

I have no problem with C++, but given that the requirements call for a web application, I’m going to have to respectfully disagree.  And what’s wrong with managed code anyway?  I guess only those who know how to manage resources manually are “real programmers”.  Many successful enterprise applications, both for the web and for Windows, are written in Java or C#.  There are plenty of open source web development frameworks written for Java that increase productivity incredibly.  Similarly, ASP.NET with C# creates a powerful way to programmatically access HTML controls.

In other words, we have power tools now that help us with the construction process.  Don’t get me wrong, there’s still a valid and relevant use for the handsaw.  Building a house is not one of those uses.

Posted in C#, General, Object Oriented Programming, Programming, Software Architecture | Tagged: , , , | 1 Comment »

Exploring N-Tier Architecture with LINQ to SQL (Part 2 of n)

Posted by Randolph Cabral on Thursday, April 24, 2008

Part 2 – The DataContext

To get us started on the right foot, let’s take a quick look at what is needed to write a custom DataContext class.  What’s really remarkable about the sample code below is that it is complete.  Simply deriving from the System.Data.Linq.DataContext base class and providing a connection string gives us all of the functionality we need to work effectively with the data entities and the data source.  Obviously, the connection string should be abstracted into a configuration file and read in.  The code below was written this way to illustrate how easy it is to create a custom context for your project.  In our case, we’ll use the IDE and SQLMetal to generate our DataContext class, which adds properties to expose each table dragged onto the designer surface.  The important thing to remember is that the GetTable<TEntity>() method is exposed as a public member, which I will use in my samples to illustrate its IQueryable<T> properties.

  public partial class NorthwindDataContext: System.Data.Linq.DataContext

  {

    public NorthwindDataContext()

      : base("Data Source=.;Initial Catalog=Northwind;Integrated Security=True")

    {

    }

  }

LINQ to SQL’s DataContext is both a blessing and a curse all at once.  I say this because it gives us a lot of out-of-the-box functionality with object state tracking and identity mapping, but it creates a problem when we want to separate the data layer from the business logic layer.  Imagine a typical business case where the properties of a business object are updated and then persisted to the data store.  Using an Active Record implementation, the instance of the object is created with a static factory method.  After modifying the object properties, a call to the Save() method is all that is needed to persist the state to the data store.

Customer cust = Customer.GetByCustomerId("ALFKI");

cust.City = "Frankfurt";

cust.Save();

Using LINQ to SQL’s DataContext, the pattern changes quite a bit.  Instead of a call to the static factory method on the class, as with the previous example, the entry point to retrieve an object instance changes to the DataContext.  Since there is no Save() method defined for the class, a call to SubmitChanges() on the DataContext is needed to invoke persistence logic. 

using (NorthwindDataContext ctx = new NorthwindDataContext())

{

  Customer cust = ctx.GetTable<Customer>().Single(c => c.CustomerID == "ALFKI");

  cust.City = "Frankfurt";

  ctx.SubmitChanges();

}

Obviously, the Active Record implementation is concise and far easier to read when compared with the DataContext sample.  But that’s barely scratching the surface of potential issues.  Most troubling to me is that the DataContext object is now referenced in the business layer creating a strong coupling of the data and business layers.  How do we elegantly manage the DataContext without making it a first-class citizen in the business layer?  There are many different ways to approach this problem.  One way would be to wrap the context in a context manager utility class that exposes a method to load a customer entity by key.  Let’s call this class the NorthwindBusinessContext.  With the NorthwindBusinessContext class we could manage the lifecycle of a given context by implementing the IDisposable interface.  We can also ensure the integrity of context references to table entities that the DataContext exposes as shown with the GetCustomerById() method.  Lastly, the PersistState() method exposes the persistence method of the DataContext object which is, of course, necessary to invoke data store persistence logic. 

  public partial class NorthwindBusinessContext : IDisposable

  {

    NorthwindDataContext DataContext = null;

 

    public NorthwindBusinessContext()

    {

      DataContext = new NorthwindDataContext();

    }

 

    public Customer GetCustomerById(string customerId)

    {

      return DataContext.GetTable<Customer>().Single(c => c.CustomerID == customerId);

    }

 

    public void PersistState()

    {

      DataContext.SubmitChanges();

    }

 

    #region IDisposable Members

 

    public void Dispose()

    {

      DataContext.Dispose();

    }

 

    #endregion

  }

There has been some talk about how to scope the DataContext to a specific thread/web request/execution context to help ensure the integrity of the given DataContext.  I think there is a lot of merit in researching how to scope a given DataContext, however, I will not be exploring those efforts in this post.  We’ll leave that topic for another post, perhaps in the near future.
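For what it’s worth, one common sketch of per-web-request scoping looks something like the following.  Everything here is an assumption for illustration: the class name, the `Items` key, and the assumption that disposal gets wired up in `Application_EndRequest`.

```csharp
//Hypothetical per-web-request scoping for ASP.NET; requires System.Web
public static class DataContextScope
{
  private const string Key = "NorthwindDataContext";

  public static NorthwindDataContext Current
  {
    get
    {
      //HttpContext.Items is per-request storage, so each request
      //gets exactly one context instance
      var ctx = (NorthwindDataContext)HttpContext.Current.Items[Key];
      if (ctx == null)
      {
        ctx = new NorthwindDataContext();
        HttpContext.Current.Items[Key] = ctx;
        //Dispose this in Application_EndRequest to close the scope
      }
      return ctx;
    }
  }
}
```

The same idea works with [ThreadStatic] storage for non-web code, with the usual caveats about thread pooling.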

 

So why go to all this trouble of writing a BusinessContext?  Why not just use the DataContext outright and skip this step?  The answer: decoupling.  By encapsulating the DataContext in a single class, we can shield ourselves from any changes that are made to the DataContext implementation.  No one can argue that a single point of failure is a lot easier to refactor than a pervasive failure.  Things are looking good so far.  We have a BusinessContext class that composes the DataContext class and encapsulates DataContext specific implementation by implementing business specific methods for loading and persisting data entities.  Luckily, all this code is not too bad to write and can easily be incorporated into a custom code generator, allowing us to write only the customizations needed by the domain layer.
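With the NorthwindBusinessContext in place, the earlier update scenario can be rewritten so that calling code never references the DataContext at all:

```csharp
//Same scenario as the DataContext example, now against the business context
using (NorthwindBusinessContext ctx = new NorthwindBusinessContext())
{
  Customer cust = ctx.GetCustomerById("ALFKI");
  cust.City = "Frankfurt";
  ctx.PersistState();
}
```

If the DataContext API changes, only the body of NorthwindBusinessContext needs to be touched.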

 

Now all we have left is to deal with those data entities.  We want to deal with business entities, not data entities.  How do we create a business entity that best allows for reuse of the data entities?  In my next post we’ll explore options on how to tackle this task in a fun and arguably elegant way.

 

Posted in Active Record, C#, Domain Model, LINQ to SQL, Object Oriented Programming, Software Architecture | Tagged: , , , , | 3 Comments »

Exploring N-Tier Architecture with LINQ to SQL (Part 1 of n)

Posted by Randolph Cabral on Wednesday, April 23, 2008

Part 1 – Analyzing LINQ to SQL Entities

For the past six months or so I’ve been playing a lot with LINQ to SQL to dissect and examine the internals of what makes it tick.  It didn’t take long before I became a fan of LINQ to SQL, expression trees and IQueryable<T>.  Those guys at Microsoft have really hit the nail on the head with this stuff.  I started writing lots of LINQ to SQL code, debugging and watching SQL Profiler.  I had a blast trying to find ways to break it.  For the most part, it did really well in producing accurate, not always concise, SQL statements; albeit some queries could stand to be refactored.  Regardless, LINQ to SQL delivers beyond my expectations.  This is the point where I started to look for an elegant way to integrate LINQ to SQL with a classic N-Tier architecture.  Unknowingly, I took the “red pill” and I dove head first into the rabbit hole.  “How hard can this be?”, I thought to myself.

In my previous post, I compared the relative merits and challenges of the Active Record pattern with the Unit of Work pattern.  The Active Record pattern, in my opinion, creates a natural coding paradigm where business objects expose persistence behaviors as public methods.  The challenge being that Active Record implementations ranging from Ruby to Subsonic to Jammer.NET have had issues with cardinality.  Since most Active Record implementations do not implement the Identity Map pattern, lazy loading relationships in the object graph could create an infinite loop condition when recursive persistence logic is invoked.  LINQ to SQL avoids these types of issues by implementing the Unit of Work pattern through its DataContext class.  Because the DataContext tracks object state, persistence calls are batched with only those objects that need to be updated and there would be no need to traverse the object graph.  A challenge with the DataContext lies with cases where the objects become disconnected from the DataContext.  Another potential issue with the DataContext, in my opinion, is code clutter.  I personally don’t like having logic to re-attach disconnected objects to the context cluttering up my code.
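To make the “code clutter” point concrete, re-attaching a disconnected entity looks roughly like this.  This is a sketch: `disconnectedCustomer` is assumed to be an entity loaded by a previous, now-disposed context and handed across a tier boundary.

```csharp
//A fresh context must be told about the disconnected entity
//before SubmitChanges() can persist it
using (NorthwindDataContext ctx = new NorthwindDataContext())
{
  //Attach as modified so the change tracker generates an UPDATE
  ctx.GetTable<Customer>().Attach(disconnectedCustomer, true);
  ctx.SubmitChanges();
}
```

In practice the as-modified Attach overload has concurrency-checking requirements of its own, which is exactly the kind of detail that ends up cluttering business code.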

Weighing both patterns and considering their benefits and disadvantages, I decided to try an experiment to marry the two patterns.  The Active Record persistence behaviors can encapsulate the logic required for re-attaching and, in turn, the Unit of Work container can handle tracking of object state and persistence.  The only thing missing is the minister and a prenup to make it official.  So off I go to try my hand at match-making.

Let’s inspect the entity class that was generated by LINQ to SQL’s code generator (SQLMetal).  Firstly, as seen in the following code sample, each generated class is adorned with a TableAttribute to create a mapping to the database table.  Additionally, the class is declared partial to allow for customizations to be kept in a separate file.  This is nice, but falls short of our desire to create a business layer on top of these data entities.  The class then implements a couple of interfaces from the System.ComponentModel namespace to support notification of property changes.  It is really nice that it doesn’t need to derive from a concrete class as this gives us the opportunity to create a custom base class that all entities can derive from should we choose to do so.  Unfortunately, the class is not adorned with the SerializableAttribute.  The designer allows you to control the “Serialization Mode”, but this does not do what I was expecting it to do.  Setting this property to “Unidirectional” causes the properties of the data entities to get adorned with a DataMemberAttribute, which is used for WCF serialization.

//LINQ to SQL’s mapping attribute implementation

[Table(Name="dbo.Customers")]

public partial class Customer : INotifyPropertyChanging, INotifyPropertyChanged

{

  …

}

Looking further in the class I found the public property mapping implementation.  I was surprised to find its mapping attributes were adorned to the public properties (below).   I was surprised because this meant that I could hide the mapping by simply deriving from the class and declaring a new property with the same name.  What about polymorphism? I know what you’re thinking, this is supposed to be a data entity not a domain entity.  I agree.  However, I’m a believer in code reuse.  If the data model entities closely match the domain model, why not derive from those entities to maximize code reuse?  Furthermore, your domain entities can completely encapsulate domain logic from the data entity without having to model yet another class that implements all of the same properties.  In other words, you get all of the data without all of the work for free.  All that’s left to do is to write the business logic in a class that elegantly holds only business layer implementation.  So naturally, I was surprised to see the mapping made at the property.

//LINQ to SQL’s mapping attribute implementation

[Column(Storage = "_CompanyName", DbType = "NVarChar(40) NOT NULL", CanBeNull = false)]

public string CompanyName

{

  get { … }

}

I later realized that LINQ to SQL deliberately creates mappings this way in order to support deferred execution.  Because the public property is the referenced member of the class in the expression tree, a simple reflection call is all that is needed to retrieve the attribute instance.  Formatting a SQL command statement becomes possible this way and is, indeed, a very nifty way of storing schema details without the need for an external mapping file (as found with the Hibernate mapping paradigm).  I prefer having an inline attribute-based approach to Hibernate’s XML-based mapping files.  I’ve always felt that having the mapping definitions inline with code was a cleaner way to implement mappings as the definitions are always only a click or two away.  However, I do recognize the simplicity and elegance of an external mapping file as well as having the ability to cache the map in memory for performance.
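To see this in action, a quick reflection sketch (using the Customer.CompanyName mapping shown above) recovers the column mapping the same way the query provider does:

```csharp
using System;
using System.Data.Linq.Mapping;
using System.Reflection;

//Find the public property referenced by the expression tree...
PropertyInfo prop = typeof(Customer).GetProperty("CompanyName");

//...then read its [Column] attribute to recover the storage details
ColumnAttribute column =
  (ColumnAttribute)Attribute.GetCustomAttribute(prop, typeof(ColumnAttribute));

Console.WriteLine(column.Storage);  //_CompanyName
Console.WriteLine(column.DbType);   //NVarChar(40) NOT NULL
```

From there, formatting a parameterized SQL command is a matter of string building against the recovered names and types.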

As a quick test, I tried to derive from the LINQ to SQL entity and run a query from the context without success.  The reflection call that discovers the mapping attribute does not flatten the hierarchy and can’t retrieve the base property reference.  Bummer.  Microsoft, if you’re reading this, please modify the GetProperty() method call to include a binding flag value of “BindingFlags.FlattenHierarchy”.  I think it would be really cool to have business objects that derive from data objects that are context connected.  Maybe the answer is to wait for the ADO.NET Entity Framework for this type of support.

Next, we find the associations section.  This is where relationships to other entities are made.  Here we are introduced to the AssociationAttribute and the EntitySet<T> object.  The EntitySet<T> came as a surprise to me because I was expecting a simple Collection<T> or even simpler, an array of Order objects to handle relationships.  According to the MSDN documentation for EntitySet<T>, it provides deferred loading and relationship maintenance for the one-to-one and one-to-many collection relationships for LINQ to SQL applications.

[Association(Name="Customer_Order", Storage="_Orders", OtherKey="CustomerID")]

public EntitySet<Order> Orders

{

  get

  {

    return this._Orders;

  }

  set

  {

    this._Orders.Assign(value);

  }

}

Another relationship object that is found in the generated code is the EntityRef<T> class.  Again, for deferred loading purposes a specialized class is used to handle direct parent relationships.

private EntityRef<Customer> _Customer;

How do these references get instantiated? Well, with the constructor of course! Yeah, I was confused at first too.  The concept to note here is that the EntitySet<T> object is instantiated but not the actual entity until the Entity property is requested.  I’m not sure what to think of this one.  I’m guessing they wanted to push the responsibility of tracking when a related entity has been requested to another class in an effort to keep things clean with the entity class implementation.  Certainly, this could have been achieved by adding more properties to the class.

public Customer()
{
  this._Orders = new EntitySet<Order>(new Action<Order>(this.attach_Orders), new Action<Order>(this.detach_Orders));
  …
}
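To illustrate the deferred-loading idea without the LINQ to SQL plumbing, here is a minimal, hypothetical sketch using Lazy<T>.  The class names and loader are made up for illustration; the point is that the related Customer is not materialized until the property is first read, which is essentially what EntityRef<T> does behind the scenes.

```csharp
using System;

public class Customer
{
    public string Name { get; set; }
}

public class Order
{
    // Stand-in for EntityRef<Customer>: the loader delegate runs only on first access.
    private readonly Lazy<Customer> _customer;

    public static int LoadCount = 0; // tracks how many times the "database" was hit

    public Order()
    {
        _customer = new Lazy<Customer>(() =>
        {
            LoadCount++; // simulate a database round trip
            return new Customer { Name = "ACME" };
        });
    }

    public Customer Customer
    {
        get { return _customer.Value; } // triggers the load on first read only
    }
}

public static class Program
{
    public static void Main()
    {
        var order = new Order();
        Console.WriteLine(Order.LoadCount);      // 0 - nothing loaded yet
        Console.WriteLine(order.Customer.Name);  // ACME - first access loads it
        var again = order.Customer;
        Console.WriteLine(Order.LoadCount);      // 1 - subsequent reads reuse it
    }
}
```

The same trade-off noted above applies here: the lazy container keeps the entity class clean, at the cost of an extra wrapper object per relationship.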

So far so good.  The entities generated by LINQ to SQL are relatively straightforward, and relationships to other entities are handled elegantly.  The class does not derive from another concrete class, and there is no data access logic to muddy up the waters.  Inline attributes and the use of custom relationship classes make up the list of less desirable items, but they shouldn’t impede our efforts to create an N-Tier architecture.  Out of the box, LINQ to SQL provides an excellent way to quickly model a data layer with rich relationship structures.

In the next post, we’ll take a look at the DataContext class and evaluate how we could best take advantage of its object tracking capabilities.  I’ll also explore writing a context manager class that helps ease the pains in dealing with the DataContext.

Posted in Active Record, C#, Domain Model, Jammer.NET, LINQ to SQL, Object Oriented Programming, POCO, Software Architecture | Tagged: , , , , , | 2 Comments »

Active Record Vs. Unit of Work

Posted by Randolph Cabral on Thursday, April 3, 2008

Lately, I’ve been doing some research on which path to take – Active Record or a pure Domain Model using the Unit of Work pattern.  Clearly there are advantages and disadvantages to both.  I’m hoping to find a good compromise between simplicity and flexibility.  When I say ‘simplicity’, I’m talking about the ever-elusive ‘elegant solution’ where we are free from framework traps and anemic domain models.  As for ‘flexibility’, I’m referring to the ability to quickly change delivered functionality without having to rip apart the guts of the existing code base.  The thing that I keep getting hung up on is deciding which of the two best fits these goals.  I’ve used both approaches and like them both, but this is one of those debates that has no clear winner.  Let’s take a closer look at each approach.

The Active Record Pattern

In this model, an entity and its properties are directly mapped to a database table and its columns.  Additionally, every entity is implemented with its own insert, update, delete and select (CRUD) behaviors.  Method names like Save, Delete and Load are commonplace in many Active Record implementations.  Rounding out the package are various static methods that expose query capabilities to allow filtering.  Having these behaviors included with the entities provides a convenient way of persisting state back to a data source.  I have used this pattern a lot and have had many successful projects that I credit the pattern for.  Needless to say, as with most solutions, it isn’t perfect.  There are a few cases where this pattern can create some interesting challenges.

Consider this case:  Cardinality, A.K.A. relationships with other entities.   Think of the classic customer and order example.  A customer can have many orders, which is to say that the customer object has a one-to-many relationship with order objects.  Remember, the Active Record pattern exposes public Save methods on both the customer object and the order object.  What would you expect to happen when you call the Save method on the customer object?  Certainly the customer object will be saved to the database, but what about the orders?  If you’re using Ruby’s Active Record implementation, the orders associated with the customer object may or may not be saved, depending on whether the customer’s Save call is an initial or subsequent call.  Using Jammer.NET’s implementation, the associated orders are not saved at all.  The developer must take action, either by modifying the Save method on the customer object to handle cascade saving or by calling the Save method on the orders collection manually.
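The cascade problem can be sketched with a hypothetical, in-memory Active Record implementation (not Jammer.NET’s actual API; FakeDatabase and SaveGraph are invented names for illustration):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical in-memory "database" standing in for a real data store.
public static class FakeDatabase
{
    public static readonly List<string> Rows = new List<string>();
}

public class Order
{
    public int OrderId { get; set; }
    public void Save() { FakeDatabase.Rows.Add("Order:" + OrderId); }
}

public class Customer
{
    public string Name { get; set; }
    public List<Order> Orders { get; } = new List<Order>();

    // Saves only this entity; associated Orders are NOT touched.
    public void Save() { FakeDatabase.Rows.Add("Customer:" + Name); }

    // One possible fix: an explicit cascade the developer must remember to write.
    public void SaveGraph()
    {
        Save();
        foreach (var order in Orders)
            order.Save();
    }
}

public static class Program
{
    public static void Main()
    {
        var customer = new Customer { Name = "ACME" };
        customer.Orders.Add(new Order { OrderId = 1 });

        customer.Save();
        Console.WriteLine(FakeDatabase.Rows.Count); // 1 - the order was silently skipped

        FakeDatabase.Rows.Clear();
        customer.SaveGraph();
        Console.WriteLine(FakeDatabase.Rows.Count); // 2 - customer plus its order
    }
}
```

Nothing in the pattern itself forces the cascade, which is exactly why different implementations disagree on what Save should do.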

The Unit of Work Pattern

According to Martin Fowler’s description of the anemic domain model anti-pattern, the domain model should only have objects that are named after the nouns in the domain space and connected to each other with rich relationship structures.  Furthermore, the behaviors of each object should model only those rules that are domain-space specific.  This means that the Active Record pattern violates this rule, because CRUD behaviors are a form of data store persistence logic, not domain space logic.  Going by this definition, we must implement persistence using the Unit of Work pattern.  Two great examples of this pattern are implemented by Hibernate and Microsoft’s LINQ to SQL technology.  Hibernate and LINQ to SQL take a sort of hybrid approach which combines both the Active Record pattern’s metadata mapping and the Unit of Work pattern.  This hybrid approach creates a very clean model for handling persistence regardless of cardinality between entity objects.  Using LINQ to SQL, the entities, for the most part, adhere to the notion of POCOs, which are persistence ignorant.  Great, right? Unfortunately, as with the Active Record approach, this model is also not without its challenges.

Consider this case: Objects that become disconnected from their Unit of Work container.  Let’s assume that you are writing a web application and you’re coding up a three-step registration wizard.  Ideally, having the UoW container stay in scope through completion of the wizard is preferred.  Unfortunately, that may not be the best idea if your application needs as light a memory footprint as possible.  Considering this, you decide to let the UoW container go out of scope once your business objects are loaded, causing the business objects to become disconnected.  Finally, the registration wizard reaches its last step, which is to persist the state of the registration business object to the data store.  Because the UoW container got garbage collected, a new container instance must be created and the business object must be re-attached.  I know what you’re probably thinking: “This logic is not so hard to write.  Why all the heartburn?”  The problem I have is with the additional lines of code that must be written to re-attach the entities.  These lines are entirely housekeeping items that, in my opinion, muddy up the code.  I also find that this pattern does not code as naturally as the Active Record pattern.  However, I do think it’s the better pattern in the sense that related entities are automatically handled by the UoW container.
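The re-attach housekeeping can be sketched with a minimal, hypothetical Unit of Work container (not LINQ to SQL’s actual DataContext; the Registration, Attach and Commit names are invented for this example):

```csharp
using System;
using System.Collections.Generic;

public class Registration
{
    public string Email { get; set; }
}

// Minimal stand-in for a Unit of Work container.
public class UnitOfWork
{
    private readonly List<Registration> _tracked = new List<Registration>();

    public bool IsTracking(Registration entity) { return _tracked.Contains(entity); }

    // An entity loaded by a previous, garbage-collected container
    // must be explicitly re-attached before it can be persisted.
    public void Attach(Registration entity)
    {
        if (!_tracked.Contains(entity))
            _tracked.Add(entity);
    }

    // Persist all tracked entities in one unit; here we just count them.
    public int Commit() { return _tracked.Count; }
}

public static class Program
{
    public static void Main()
    {
        // Steps 1-2 of the wizard: the original container has gone out of
        // scope, leaving a disconnected entity behind.
        var registration = new Registration { Email = "user@example.com" };

        // Step 3: a fresh container is created and the entity re-attached -
        // the housekeeping lines in question.
        var unitOfWork = new UnitOfWork();
        Console.WriteLine(unitOfWork.IsTracking(registration)); // False
        unitOfWork.Attach(registration);
        Console.WriteLine(unitOfWork.Commit()); // 1 - one entity persisted
    }
}
```

The Attach and Commit calls are exactly the kind of ceremony that never appears in an Active Record version of the same wizard.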

Sound Off!

So what do you think?  Which pattern gets your vote?  Have I missed anything?  Help me to choose which way to go.  I’m sure this debate will likely live on for many years and I look forward to hearing from all of you.

Posted in Active Record, C#, Domain Model, Jammer.NET, LINQ to SQL, Object Oriented Programming, POCO, Software Architecture | Tagged: , , , , , , , , , | 2 Comments »

My First Blog Post

Posted by Randolph Cabral on Friday, March 28, 2008

Firstly, welcome to my home on the web!  My name is Randolph Cabral, and I’m a Software Developer working for AMB Property Corporation in San Francisco, California.

Well, I guess the time has come.  I’m starting my own personal blog.  I never imagined I would start one because I didn’t want to corrupt the world with my useless musings, and trust me, I have many.  I realize now that there is a compelling reason to maintain and write to a blog after all, and that reason is to share.

I’m not sure how or why this “sharing” concept had escaped me for so long.  I like to think that I’m a generous and charitable person.  But I guess that hasn’t been the case with certain aspects of my professional life.  So here I am at the dawn of my attempt to share as much as my peers are willing to bear.  But I’m not talking about sharing pointless politically charged opinions or to journal how my cat just learned to flush the toilet (no offense to those who blog about politics or their latest pet exploits).  I’m talking about sharing something that others will hopefully find useful.  I’m talking about sharing my technical endeavors as a computer software developer.

For years, I’ve frequented many technical blogs which have helped me on countless projects, from both a best-practices and an architectural standpoint.  I continue to learn a great deal from their code snippets and from reading about the nature of the requirements that led to the solutions.  It’s time that I give back and make my experiences public.  It is my hope that, at the very least, one of my posts helps a fellow programmer.

Let me be clear that I am by no means an expert in the field of software development.  To make such a claim would be pompous and arrogant of me.  I can only claim that I have had positive experiences in writing and implementing software that continues to live on as successful, productive enterprise applications.

I look forward to sharing my experiences with you all – good or bad.

Posted in General | Tagged: | 1 Comment »