Linq To Sql: POCO and Value Objects

Fetching POCO Entities and Value Objects using Linq To SQL

Linq To Sql supports neither POCO Entities nor Value Objects when you use it as an O/R Mapper.
What we can do instead is treat it as a simple auto-generated Data Access Layer.

By treating it as a DAL we can handle the data-to-object transformations manually, in a type safe manner.
If we, for example, want to fetch a list of POCO Customers that also have an immutable Address value object associated with them,
we could use the following code to accomplish this:

//The Poco prefix is only used here to distinguish the POCO entities from the L2S entities
IList<PocoCustomer> FindCustomers(string name)
{
    var query = from customer in context.Customers
                where customer.Name == name
                select new PocoCustomer
                {
                    Id = customer.Id,
                    Name = customer.Name,
                    Address = new PocoAddress
                        (customer.AddressStreet,
                         customer.AddressZipCode,
                         customer.AddressCity)
                };

    return query.ToList();
}

This approach is quite handy if you work with multiple data sources and don’t want to mix and match entities with different designs in the same domain.

I’m sure many will find this approach dirty, but I find it pragmatic:
you can be up and running with a clean domain model in just a few minutes and simply hide the Linq To Sql stuff behind your DAL classes.

This works extremely well if you are into the “new” Command Query Separation style of DDD.
You can use Linq To Sql to create typed transformations from your Query layer and expose those as services.

Personally I’ve grown a bit tired of standard O/R mapping frameworks, simply because they try to do too much.
There is a lot of magic going on; it’s hard to keep track of what gets loaded into memory and when the framework will hit the database.

If I’m required to use both a memory profiler and an O/R mapper profiler in order to use the framework successfully, then something is very wrong with the whole concept.

This dumbed-down DAL approach to Linq To Sql, however, makes the code quite explicit: you know when you hit the DB and what you get from it.
Sure, you lose features like dirty tracking that mappers generally give you, but that can be accomplished by applying a Domain Model Management framework on top of your POCO model.
Or maybe you just want to expose your objects as services and don’t care about those features.

[Edit]
In reply to Patrik’s comment:

If you go for Command Query Separation, you would only query the query layer, so you wouldn’t need to handle updates there.
And when it comes to writing data, you do that in the command layer; the commands carry the changes made in the GUI, so you wouldn’t need to “figure out” what has changed.
The commands carry that information for you.

Tracking changes in the GUI could simply be done by storing snapshots of the view specific data when you send a query.
Then pass the user-modified projection together with the original snapshot to a command builder,
and submit the resulting commands for processing.
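As a rough sketch of that idea (all type and member names here are hypothetical, not from any particular framework), a command builder could diff the snapshot against the modified projection and emit one command per actual change:

//Hypothetical view specific projection
public class CustomerProjection
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

//A command states what change is supposed to be made
public class ChangePropertyCommand
{
    public int CustomerId { get; set; }
    public string Property { get; set; }
    public object NewValue { get; set; }
}

public static class CommandBuilder
{
    //Compare the original snapshot with the user modified projection
    //and emit one command per property that actually changed
    public static IEnumerable<ChangePropertyCommand> Build(
        CustomerProjection snapshot, CustomerProjection modified)
    {
        if (snapshot.Name != modified.Name)
            yield return new ChangePropertyCommand
                { CustomerId = snapshot.Id, Property = "Name", NewValue = modified.Name };

        if (snapshot.City != modified.City)
            yield return new ChangePropertyCommand
                { CustomerId = snapshot.Id, Property = "City", NewValue = modified.City };
    }
}

The commands produced this way carry only the actual changes, so the command layer never has to diff anything itself.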
[/Edit]

( hmmm, I somehow managed to turn a post about Linq To Sql into a rant about other O/R mappers, I usually do it the other way around :-) )

9 thoughts on “Linq To Sql: POCO and Value Objects”

  1. It might /seem/ like a good idea, but it will hurt like hell if you build anything that Linq To Sql initially couldn’t handle.

    In my current project they went down this route and have had to abandon it due to Linq To Sql nastiness, we have now migrated over to NHibernate.

    Querying works OK; about 50% of what needed to be done could be done with Linq To Sql. Every other case required very ugly Linq queries or stored procedures.

    Inserting and updating graphs? Forget about it; you’ll be traversing the graph and manually trying to figure out what has changed.

    So, for RAD and forms-over-data it looks like a good idea. But for anything with a little business logic and at least somewhat intelligent models, you’ll be out of luck sooner than you can install another ORM.

  2. Using CQS you would not insert or update using this model, only query it.
    CQS promotes view specific projections, so RAD works fine here.

    And regarding graphs, it depends if your aggregates have direct or indirect associations.

    And tracking changes would, in the CQS case, be handled elsewhere; the commands would carry the changes only, and thus you wouldn’t need to figure out _what_ changed: the commands tell you that.

  3. Well,

    A couple of things.

    1) Querying like this is a PAIN. LTS can’t query graphs, it just can’t. Single entities without inheritance, yeah sure. Anything beyond that, prepare for a bumpy ride.

    2) You can definitely use CQS in the way you explain it. Though CQS does not say that the models have to differ; it just says that the commands and queries in the domain should be separate.

    If you are in a web environment, or a request/response scenario of any other kind, you might fill your commands with data from what you get in the request and insert/update it (this is usually forms over data).

    If you try to do /intelligent/ work in a domain model, you will still need to figure out what has changed and what hasn’t and configure your command with that. This is a pain if you try to use LTS as the persisting party.

  4. 1) You wouldn’t need to if you go for a CQS model with view specific data.

    In such case it would be more of a one table to one root object mapping.

    2) No, it doesn’t say the models have to differ.
    But if they don’t, then there is little reason to use CQS.
    In that case you could just as well fetch normal entities and commit the changes made to them.

    Passing the same DTOs back and forth doesn’t make it CQS.
    A command should state what changes are supposed to be made,
    not just say “here is how the object looks now”, because in that case, yes, you would need to figure out what changed.
    But those would be ill-designed commands.

    “”
    If you try to do /intelligent/ work in a domain model, you will still need to figure out what has changed and what hasn’t and configure your command with that. This is a pain if you try to use LTS as the persisting party.
    “”

    Any work in the domain model would be done when the commands are being processed.
    The commands would be filled from the presentation model, and no ORM would be involved at all there.

    And just to be clear: I’m really, really not promoting LTS as a competent mapper.
    I’m simply saying that if you need view specific transformations, this is a very RAD approach.
    Using RAD at the query layer won’t hurt the rest of the application.

  5. 1) You wouldn’t need to if you go for a CQS model with view specific data
    Ehm, yes you do. If you don’t create views in the database, you’d have to ask the database for interesting views with some kind of query.

    Take this example,

    You have orders; these orders reserve product quantities from warehouses. If you want a view that shows where the orders got their quantities from, you’ll need a query (assuming that information is in the same database, of course). LTS will have extreme problems with this when there are intelligent views you want.

    2)
    You misunderstand me.

    There is nothing in Meyer’s CQS that states that the query and command can’t work on the same domain model. What it does state is that operations on the domain should not query and execute commands at the same time.

    I’m also not saying that you send the model to the command and make it figure out what changed. I’m saying that the command can /use/ the model.

    Take for instance the command “cancel order”. When doing so there are several things that might have to happen; you do this with a command on a model. Thus, state in the model will have changed and needs to be persisted.

    This is where change tracking will help you persist the state changes the command did on the model itself.

    Though, Udi Dahan talks about CQS on a service level, where you have separate services for commands and queries (and separate databases).

    This has, however, nothing to do with the original thoughts that Meyer had on CQS.

  6. @Patrik.

    1) I should have said view specific _data_ model,
    which is what Greg Young is suggesting, the query data source holds denormalized data.

    In such case it is very easy to map projections against a single table or view.

    So in the example you give, there would either be a denormalized table holding the product origins, or possibly a view that aggregates that for you.

    If you are going for that approach, or if you simply want to expose a projection as objects for whatever reason (e.g. if you want to migrate data and clean it up a bit), this works very well.
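    For the order/warehouse example, a one-table mapping against such a denormalized source could look roughly like this (the table and type names are made up for illustration):

    //Hypothetical denormalized row: one row per (order, product, warehouse) origin
    public class OrderOriginLine
    {
        public string ProductName { get; set; }
        public string WarehouseName { get; set; }
        public int ReservedQuantity { get; set; }
    }

    IList<OrderOriginLine> FindOrderOrigins(int orderId)
    {
        //context.OrderProductOrigins is assumed to be a denormalized L2S table or view
        var query = from row in context.OrderProductOrigins
                    where row.OrderId == orderId
                    select new OrderOriginLine
                    {
                        ProductName = row.ProductName,
                        WarehouseName = row.WarehouseName,
                        ReservedQuantity = row.ReservedQuantity
                    };

        return query.ToList();
    }

    Since the projection maps straight onto a single table or view, there is no graph for LTS to struggle with.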

    2)
    There are plenty of ways to do change tracking, and you know that too, since you were building a change tracking system yourself just a few weeks ago.

    And in the case of cancel order, in the most naive case I guess that would simply set a cancelled flag to true on an order, so there is not much to “figure out” there: just perform the action and commit.

    Also depending on the style of CQS DDD you use, you might persist the commands instead of the actual entities.

    This way you get a complete audit log where you can replay every action ever processed in your system,
    and thus rebuild the system state for any given point in time.

    A specific entity’s state would then be the sum of all the commands that ever affected it.

    (Optimizations for not having to replay every command are out of scope here.)
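    A minimal sketch of what such a replay could look like (hypothetical types, not tied to any framework):

    public interface IOrderCommand
    {
        void ApplyTo(Order order);
    }

    public class PlaceOrderCommand : IOrderCommand
    {
        public decimal Amount { get; set; }
        public void ApplyTo(Order order) { order.Total += Amount; }
    }

    public class CancelOrderCommand : IOrderCommand
    {
        public void ApplyTo(Order order) { order.Cancelled = true; }
    }

    public class Order
    {
        public decimal Total { get; set; }
        public bool Cancelled { get; set; }

        //The entity's state is the sum of all the commands that ever affected it
        public static Order Replay(IEnumerable<IOrderCommand> log)
        {
            var order = new Order();
            foreach (var command in log)
                command.ApplyTo(order);
            return order;
        }
    }

    Rebuilding an order then simply means feeding its persisted command log through Replay.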

    “”
    This has, however, nothing to do with the original thoughts that Meyer had on CQS.
    “”

    I explicitly said CQS style of DDD, so let’s stick to that context.

  7. >>I explicitly said CQS style of DDD, so let’s stick to that context.

    So you did, my mistake.

    >>Also depending on the style of CQS DDD you use, you might persist the commands instead of the actual entities.

    This sounds more like Martin Fowler’s “Event Sourcing” pattern than CQS.

    >>In such case it is very easy to map projections against a single table or view.
    Also in that case there will be a lot of transformation logic in the messages to the query store. I don’t necessarily agree with Greg that this is optimal; you put a lot of responsibility on the database schema and on the message transport, not to mention all the transaction work that needs to go on to make the data integrity stick.
