Saturday, October 10, 2015

Moving iTunes library without using the Media folder

Ok, so this post is not really about development (even though this is a developer blog), but I just needed to get this weirdness off my chest.

Recently I had the need to move my iTunes library - all the physical files, that is - to a new location on my Windows system. Now, I don't use the iTunes Media folder, because of its "Keep iTunes Media folder organized" setting; I don't want iTunes trying to organize my music files. I like to handle that myself. However, all the guides I could find about moving an iTunes library made use of the Media folder, so I wanted to find another way. The problem is, though, that officially there isn't any other way.

With iTunes shut down I started to look into where it stores its library information. In Windows that is somewhere like this: C:\Users\[username]\Music\iTunes
In this folder there is a file called "iTunes Library.itl". The file contains information about all music files, including the full path of each one. But the library file is in a proprietary format with no easy way of reading it. Next to the .itl file is another library file called "iTunes Music Library.xml". This XML file contains the same information about the music files, and since it is XML it can be opened in any text editor.

So I tried doing a search and replace to change the paths pointing to the music files to the new location. I saved the file and fired up iTunes, but it ignored the XML completely and started up with an empty library. I then went through all the menus in iTunes to see if I could find a way to import the XML file. No such luck. So I did some more research on the Internet and finally found a post that I really had my doubts about after reading it. But since I couldn't find any other suggestions, I decided to give it a shot.

The procedure is this:

  1. Close iTunes.
  2. Open the XML library file in a text editor, fix all file paths to point to the new location, and then save the file.
  3. Open the .itl file in a text editor (it will look all weird), just remove some of the contents, and then save the file. This causes the file to be damaged, which is the intention.
  4. Start up iTunes. It should now notify you that it is reading the XML file, and after a while iTunes will open up with the relocated library loaded.
So it was step 3 that I found a little strange, but hey, doing it actually caused iTunes to read the XML file, reestablishing the moved library. Why, Apple, why oh why?!

As an end note I want to say that I'm not sure this is always possible to do. In my case it was. But I noticed afterwards that the XML file was deleted and didn't seem to be recreated. Maybe it will turn up again at some point.

Update:
Ok, so I just found out something about the XML library file. It is possible to generate it by exporting the library from within iTunes. It's just that finding the correct menu item for this can be a bit tricky. Nowadays, the default setting in iTunes is to not show the menu bar. There is a menu at the top left corner which has a Library submenu, but this submenu does not contain anything for exporting the library. Instead you have to select Show Menu Bar, which shows the full menu bar. Then go to File->Library; this Library submenu does contain an Export Library menu item, which you can use to export the library to XML.
I guess I didn't get the memo when they made that design decision :)

Monday, February 2, 2015

Umbraco deployment using Courier through source control

Disclaimer: I'm being pretty straightforward in this article. However, it is not my intention to endorse or belittle any product or technology, even though some statements could possibly be interpreted that way. Everything in this article is presented as facts as seen from my perspective. If any statements appear to be directly wrong, I hereby apologize. Feel free to leave a comment.

Introduction

Being fairly new to the world of Umbraco CMS, and coming from the world of Sitecore CMS, I've noticed both some good things and some not so good things about Umbraco.

On the good side, I think it is much easier to get started with Umbraco, than was my experience starting out with Sitecore. On the surface the two may appear similar, but when you actually start doing stuff, I find Sitecore more complex than Umbraco, especially for smaller solutions.

However, when it comes to larger solutions Umbraco definitely has its shortcomings. For example, when doing team development you really start to feel the pain, and it doesn't get better when introducing automatic deployment to multiple environments. Sitecore in itself is not really better in these matters, and it is only by making use of a third party tool, TDS (Team Development for Sitecore), that a real advantage is gained.

With TDS each developer can work in a completely separate environment (their local machine), even including the database. TDS serializes Sitecore items to text files, which can then be part of the solution's source code and therefore checked in to a source control system. TDS also helps with synchronizing between Sitecore items in the DB and textual items in the solution by providing a fairly simple-to-use UI. With regards to deployment, TDS can make packages of Sitecore items, which can then be installed on the different environments using a small included command-line tool.

I haven't seen anything at that level for Umbraco. Usually team development happens against a shared database, but with local source code. This makes feature-driven development kind of hard: even though you work on isolated source code, you can't make changes in the backoffice without the risk of disturbing someone else's work. With regards to deployment, Umbraco has Courier, which sounds very promising when reading about it, but which in the current version (2.11) simply doesn't work (to clarify: it fails to transfer revisions to other environments). I hope this will be fixed soon :)

Courier provides the means of creating revisions, i.e. packages of Umbraco items. It lets you select which items to include in the revision, and can even automatically include any dependent items. There is one shortcoming though: when using automatic dependent item inclusion there is no distinction between content items and non-content items, so you'll end up adding content items to the revision. This presents problems later when deploying to a live environment where editors have created content, since you risk overwriting their work. So it seems better to disable the automatic dependency thingy when creating revisions.

Another shortcoming of Courier is its documentation and sample code, which are a couple of years behind the current version (2.11). This is too bad, because you really need these if you want to make use of the Courier API for creating command-line tools for deployment automation.

Well then, all that being said, I think it's time for what this post is really about - deployment using Courier through source control.

Solution

In the company where I work we have multiple environments: DEV, TEST, PREPROD and PROD. Furthermore, we use Git for source control, TeamCity as build server, and Octopus for deployment. Ultimately I would like to automate the whole deployment process, so that I can deploy both application files and Umbraco items with the press of a button. Due to the shortcomings mentioned earlier this is currently wishful thinking, so for now I have settled for a more manual approach. Here's the deal.

In the backoffice I have created what I call a long-lived Courier revision. It is just a normal revision, but it stays there at all times, and when preparing for a release I simply update it to reflect the current state of the non-content Umbraco items to be deployed. That is, the revision should always contain everything needed to deploy to a fresh environment, and items should be added without auto-including dependent items. Currently, in my situation, that means including the following:
  • Datatypes
  • Document types
  • Macros
  • Templates
A note on adding Templates: I'm really only interested in including the database part of the templates, but the razor files are also added, which is unnecessary since they will already be part of the application files deployment. It is not catastrophic, just inconvenient.

Due to the issue mentioned earlier with Courier 2.11 being unable to transfer revisions, and due to the fact that we (the developers in my company) don't have access to the PREPROD and PROD environments, an ingenious scheme had to be devised for getting the revision moved to these environments.

The solution is to include the revision as part of the source code. The revision is simply a folder structure with a bunch of files located at App_Data\courier\revisions\ and adding this to the GIT repository is perfectly doable. Since revision files are just plain XML files, having these source controlled gives the added benefit of being able to inspect changes (git diff) before committing.
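With the revision files in the repository, updating and reviewing them is plain git work. A small sketch, assuming the default revision location mentioned above:

```shell
# Stage the Courier revision folder (forward slashes; git accepts these on Windows too).
git add App_Data/courier/revisions/

# Inspect what changed in the revision before committing.
git diff --cached -- App_Data/courier/revisions/
```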

Now, when deploying to an environment the revision just follows the other application files, and it is then a simple matter to go into the backoffice on that environment, select Courier, and install the revision.

I find this approach simple and pragmatic. However, I do hope to be able to automate it more in the future when Courier becomes a bit more mature/stable.

Tuesday, July 15, 2014

Group by in C# and linq.js

Being a C# developer I really like and use LINQ a lot. It can simplify code a great deal, so it is only natural to want the same goodness in JavaScript. Luckily there is a library - linq.js - that provides this functionality. However, the syntax is not quite the same, so it takes a little getting used to.

In this post I want to show an example of how to do a group by.

I have a bunch of people in a collection, each person defined by name, age and job. Now I want to group these people by job. The result should be a grouped collection, where each group contains the person objects belonging to a specific job.

In C# it looks something like this:

var people = new[] {
    new { Name = "Carl", Age = 33, Job = "Tech" },
    new { Name = "Homer", Age = 42, Job = "Tech" },
    new { Name = "Phipps", Age = 35, Job = "Nurse" },
    new { Name = "Doris", Age = 27, Job = "Nurse" },
    new { Name = "Willy", Age = 31, Job = "Janitor" }
};

var grouped = people.GroupBy(
    person => person.Job,
    (job, persons) => new { Job = job, Persons = persons });

foreach (var group in grouped)
{
    System.Diagnostics.Debug.WriteLine("job: " + group.Job);
    foreach (var person in group.Persons)
    {
        System.Diagnostics.Debug.WriteLine("   name: {0}, age: {1}, job: {2}",
            person.Name,
            person.Age,
            person.Job);
    }
}

The group by statement is fairly simple, and the output is exactly as expected:

job: Tech
   name: Carl, age: 33, job: Tech
   name: Homer, age: 42, job: Tech
job: Nurse
   name: Phipps, age: 35, job: Nurse
   name: Doris, age: 27, job: Nurse
job: Janitor
   name: Willy, age: 31, job: Janitor

The same thing in linq.js is a little bit more involved, and for me it did take some playing around before I ended up with the code below. But basically it is quite similar to the C# version.

var people = [
    { name: "Carl", age : 33, job: "Tech" },
    { name: "Homer", age : 42, job: "Tech" },
    { name: "Phipps", age : 35, job: "Nurse" },
    { name: "Doris", age: 27, job: "Nurse" },
    { name: "Willy", age: 31, job: "Janitor" }
];

var grouped = Enumerable
    .From(people)
    .GroupBy(
        function (person) { return person.job; }, // Key selector
        function (person) { return person; },     // Element selector
        function (job, grouping) {                // Result selector
            return {
                job: job,
                persons: grouping.source
            };
        })
    .ToArray();

alert(JSON.stringify(grouped));

And the result:

[{
    "job": "Tech",
    "persons": [{
        "name": "Carl",
        "age": 33,
        "job": "Tech"
    },
    {
        "name": "Homer",
        "age": 42,
        "job": "Tech"
    }]
},
{
    "job": "Nurse",
    "persons": [{
        "name": "Phipps",
        "age": 35,
        "job": "Nurse"
    },
    {
        "name": "Doris",
        "age": 27,
        "job": "Nurse"
    }]
},
{
    "job": "Janitor",
    "persons": [{
        "name": "Willy",
        "age": 31,
        "job": "Janitor"
    }]
}]
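For comparison, the same grouping can also be done in plain JavaScript without linq.js, using Array.prototype.reduce. This is just a sketch and was not part of my original experiment:

```javascript
var people = [
    { name: "Carl", age: 33, job: "Tech" },
    { name: "Homer", age: 42, job: "Tech" },
    { name: "Phipps", age: 35, job: "Nurse" },
    { name: "Doris", age: 27, job: "Nurse" },
    { name: "Willy", age: 31, job: "Janitor" }
];

// Build a { job: persons[] } lookup, then turn it into the same array shape
// as the linq.js result above.
var byJob = people.reduce(function (acc, person) {
    (acc[person.job] = acc[person.job] || []).push(person);
    return acc;
}, {});

var grouped = Object.keys(byJob).map(function (job) {
    return { job: job, persons: byJob[job] };
});
```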

Disabling WebDAV in a Sitecore web application

As the trend for web applications moves towards heavier clients, it puts new demands on how to structure the web application. For example, it is now quite common to let the client handle the complexities of the user interface, and call the server only to query raw data. This could be done using ajax calls to RESTful WebApi services; JavaScript on the client then handles processing and presentation of the data.

Now, any self-respecting RESTful web API will want to use the common HTTP verbs, such as GET, POST, PUT, DELETE etc. But for Sitecore web applications hosted in IIS this turns out to be a problem, and the problem is called WebDAV. WebDAV takes over HTTP verbs like PUT and DELETE, so they cannot be used in, for example, a WebApi controller. In many situations WebDAV is not really needed by a Sitecore web application, but apparently it is enabled by default. And while it may not actually be enabled in IIS, the default Sitecore web.config somehow enables it anyway, at least enough to cause problems.

Disabling WebDAV in a Sitecore web application can be a bit tricky. So here is a way to do it.

  1. Open web.config
  2. Locate the log4net appender section "WebDAVLogFileAppender" and remove it or comment it out.
  3. Locate the log4net logger section "Sitecore.Diagnostics.WebDAV" and remove it or comment it out.
  4. Under <system.webServer> locate the handlers section and replace these lines:
    <add name="WebDAVRoot" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness32" />
    <add name="WebDAVRoot64" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness64" />
    <add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" name="Sitecore.WebDAVMediaRequestHandler" />
    
    with:
    <remove name="WebDAV" />
    
  5. Under <system.web> locate the httpHandlers section and replace this line:
    <add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" />
    
    with:
    <remove name="WebDAV" />
  6. Remove the Sitecore.WebDAV.config file from App_Config\Include
As far as I have been able to find out, the only thing missing in Sitecore after disabling WebDAV is the so-called WebDAV dialog, which can be opened in the media library to allow drag'n'drop of media files from the file system into Sitecore.
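To sanity-check the edits, you can grep the config for leftover registrations. A sketch; run it from the web root:

```shell
# List where WebDAV is still mentioned in web.config after the edits;
# only the <remove name="WebDAV" /> lines should remain.
grep -n "WebDAV" web.config
```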

Notes:
Procedure devised using Sitecore 7.2

AutoMapper Children Value Resolver

When exposing data to the outside world (e.g. through a service) one could easily find oneself thinking about such matters as performance and load on the wire.

We may have a scenario where we need to expose a customer service that is used in different situations: sometimes callers just want customer master data, and at other times they want customers with their order history. Depending on the system landscape, order data may come from a different system than the one where customer data is stored, and these systems may perform differently, so that retrieving customer data could be a relatively inexpensive operation, whereas retrieving order data could be more expensive.

A common practice when creating services is to transform the entities from the domain into DTO objects, and a widely used component for this is AutoMapper. But how to get AutoMapper to deal with the scenario above?

As stated, sometimes we want to expose only customer data, and sometimes order data should be included. The domain may have been implemented as an aggregate, where a customer has a collection of orders, like this:

public class Order
{
    public Guid Id { get; set; }
    public DateTime Created { get; set; }
    public string Text { get; set; }
}

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public IEnumerable<Order> Orders { get; set; }
}

And DTOs like this (for some reason we don't want to expose the internal IDs):

public class OrderDto
{
    public DateTime Created { get; set; }
    public string Text { get; set; }
}

public class CustomerDto
{
    public string Name { get; set; }
    public string Address { get; set; }
    public IEnumerable<OrderDto> Orders { get; set; }
}

The data retrieval code may have been implemented with lazy load, so that order data is only queried if used. However, since AutoMapper will map the Orders collection by default, orders will be queried. So we need to modify the way customers are mapped. To that end I've devised an IValueResolver called ChildrenResolver. It is a general resolver that can be used for any child collection, and it looks like this:

public class ChildrenResolver<TSource, TMember> : IValueResolver
{
    private readonly Func<TSource, IEnumerable<TMember>> _childrenExpression;

    public ChildrenResolver(Expression<Func<TSource, IEnumerable<TMember>>> childrenExpression)
    {
        _childrenExpression = childrenExpression.Compile();
    }

    public ResolutionResult Resolve(ResolutionResult source)
    {
        bool includeChildren = false;
        if (source.Context.Options.Items.ContainsKey("IncludeChildren"))
        {
            includeChildren = (bool)source.Context.Options.Items["IncludeChildren"];
        }
        return source.New(includeChildren ? _childrenExpression.Invoke((TSource)source.Value) : null);
    }
}

The constructor takes an expression selecting the children collection member from the source entity, i.e. in our scenario it tells the resolver that we want to map the Orders property of the Customer entity. The Resolve method first looks up an options item called IncludeChildren, which is a boolean that we will set from the outside. It tells the resolver whether or not we want it to resolve the specified children collection property, and if so it returns a ResolutionResult with the children collection.

The ChildrenResolver is then used when defining a mapping, like this:

Mapper.CreateMap<Order, OrderDto>();

Mapper.CreateMap<Customer, CustomerDto>()
    .ForMember(dto => dto.Orders, opt => opt
        .ResolveUsing<ChildrenResolver<Customer, Order>>()
        .ConstructedBy(() => new ChildrenResolver<Customer, Order>(entity => entity.Orders)));

The mapping defines that we want to map from the Customer entity to CustomerDto, and for the Orders member of the DTO we want to use the ChildrenResolver, which is instructed to grab the Orders collection of the Customer entity (this gives the flexibility of not having a one-to-one naming relationship between source and target properties). Notice that the usage of ResolveUsing is a bit more complex than what is typically seen: since ChildrenResolver takes a constructor parameter, we need to tell AutoMapper that we will handle the resolver instantiation ourselves, which we do using the ConstructedBy method.

Finally, we are ready to use the whole thing in our customer service, which may be a WebApi controller with the following method:

[Route("api/customers/{id}")]
public CustomerDto GetCustomer(Guid id, bool includeChildren = false)
{
    var customer = _customerRepository[id];
    if (customer == null)
    {
        // todo: handle if customer not found
    }

    var customerDto = Mapper.Map<CustomerDto>(customer, opts =>
    {
        opts.Items["IncludeChildren"] = includeChildren;
    });
    
    return customerDto;
}

Notice that when we do the mapping, we set the IncludeChildren options item to specify whether or not we want children collections mapped, and in this case that information comes from a service parameter.

That's it. I hope someone finds this useful :-)

Notes:
The code is based on usage of AutoMapper version 3.2.1.

Tuesday, March 4, 2014

Dependency injection in Sitecore event handlers

Following my previous article about Dependency injection in Sitecore custom commands, I think it is only appropriate to continue with something similar for Sitecore event handlers. And when I say similar I mean very similar - in fact reading this article after having read the previous one may induce some sense of deja vu :-) To learn more about Sitecore events visit Using Events.

This article assumes an understanding of Sitecore events and the concept of dependency injection. The purpose is to show how to use dependency injection in Sitecore events.

The normal way of creating an event handler for a Sitecore event is to create a handler class with an EventHandler delegate, i.e. a method with the EventHandler signature, and then add some config to Sitecore's <events> section defining where to find the event handler implementation, so that Sitecore can instantiate the event handler and trigger the delegate when the event occurs.

The problem with this approach is that nowadays it is common to use dependency injection in software solutions, and letting Sitecore take care of creating instances of your custom code means that you lose the possibility of injecting the needed dependencies. Luckily there is also a way out of this morass.

Sitecore has created a class called Event, which is used for subscribing, unsubscribing, and raising events. The good news is that it is available for use.

So here is a suggestion on how to use it to obtain dependency injection in Sitecore events. It is based on using Autofac as IoC container.

First, create a base class for your event handlers:

namespace TestApp.Events
{
  public abstract class BaseEventHandler
  {
    public string FullName { get; private set; }

    protected BaseEventHandler(string fullName)
    {
      FullName = fullName;
    }

    public abstract void OnEvent(object sender, System.EventArgs args);
  }
}
This base class defines the EventHandler delegate method signature that all derived event handler classes must implement, but it also has one property, FullName, for holding the event name for registration purposes.

Next, create your event handler inheriting from BaseEventHandler like this:
namespace TestApp.Events
{
  public class MyEventHandler : BaseEventHandler
  {
    private readonly IMyDependency _myDependency;

    public MyEventHandler(string fullName, IMyDependency myDependency)
      : base(fullName)
    {
      _myDependency = myDependency;
    }

    public override void OnEvent(object sender, System.EventArgs args)
    {
      // event handler implementation
    }
  }
}
As you can see we inject a dependency in the constructor. The constructor furthermore calls the base constructor to set the event name.

Now, create a class for registering events using the Sitecore Event class:
namespace TestApp.Events
{
  public static class EventConfigurator
  {
    public static void Configure(System.Collections.Generic.IEnumerable<BaseEventHandler> eventHandlers)
    {
      foreach (var eventHandler in eventHandlers)
      {
        Sitecore.Events.Event.Subscribe(eventHandler.FullName, eventHandler.OnEvent);
      }
    }
  }
}
The Configure method takes a collection of BaseEventHandler objects (our event handler instances), then uses Subscribe method on the Event class to subscribe the events.

That is basically all the pieces we need. We just have to fit everything together in our bootstrapper (the place where all the dependencies are set up using the IoC container). This could look something like this:
...
var builder = new ContainerBuilder();

builder.RegisterType<MyDependency>().As<IMyDependency>().InstancePerLifetimeScope();

builder.RegisterType<MyEventHandler>().As<BaseEventHandler>().WithParameter("fullName", "mynamespace:mycategory:myevent").InstancePerLifetimeScope();

var rootContainer = builder.Build();

var eventHandlers = rootContainer.Resolve<IEnumerable<BaseEventHandler>>();
EventConfigurator.Configure(eventHandlers);
...
So we just register dependencies as usual. The new thing is that we now register our event handlers in code, instead of using a Sitecore config file. And then we call our EventConfigurator with a collection of instances of all our event handlers.

That's it. Plain and simple :-)

Update:
Please note that since the event handlers are resolved only once (at app startup), any injected dependencies are effectively singletons.

Saturday, March 1, 2014

Dependency injection in custom Sitecore commands

Custom commands are probably one of the more ignored features in Sitecore, but they can be quite powerful. They could for example be used for insert options on templates, thereby allowing code to run on creation of content items. To learn more about commands (or specifically Command Templates) visit "Sitecore CMS 6.0 or later Data Definition Cookbook" chapter 4.

This article assumes an understanding of Command Templates and the concept of dependency injection. The purpose is to show how to use dependency injection in custom commands in Sitecore.

The normal way of creating a Sitecore custom command is to create a class inheriting from Sitecore.Shell.Framework.Commands.Command, overriding the Execute method, and then adding some config to Sitecore's <commands> section defining where to find the command implementation, so that Sitecore can instantiate the command.

The problem with this approach is that nowadays it is common to use dependency injection in software solutions, and letting Sitecore take care of creating instances of your custom code means that you lose the possibility of injecting the needed dependencies. Luckily there is a way out of this morass.

Sitecore has created something called a CommandManager, which is used for registering, instantiating and looking up commands. The good news is that it is available for use.

So here is a suggestion on how to use it to obtain dependency injection in custom commands. It is based on using Autofac as IoC container.

First, create a base class for your commands:

namespace TestApp.Commands
{
  public abstract class BaseCommand : Sitecore.Shell.Framework.Commands.Command
  {
    public string FullName { get; private set; }

    protected BaseCommand(string fullName)
    {
      FullName = fullName;
    }
  }
}
Basically, we will just use this base class to "label" our custom commands, but it also has one property, FullName, for holding the command name for registration purposes.

Next, create your custom command inheriting from BaseCommand like this:
namespace TestApp.Commands
{
  public class MyCommand : BaseCommand
  {
    private readonly IMyDependency _myDependency;

    public MyCommand(string fullName, IMyDependency myDependency)
      : base(fullName)
    {
      _myDependency = myDependency;
    }

    public override void Execute(Sitecore.Shell.Framework.Commands.CommandContext context)
    {
      // command implementation
    }
  }
}
As you can see we inject a dependency in the constructor. The constructor furthermore calls the base constructor to set the command name.

Now, create a class for registering custom commands using the Sitecore CommandManager:
namespace TestApp.Commands
{
  public static class CommandConfigurator
  {
    public static void Configure(IEnumerable<BaseCommand> commands)
    {
      foreach (var command in commands)
      {
        Sitecore.Shell.Framework.Commands.CommandManager.RegisterCommand(command.FullName, command);
      }
    }
  }
}
The Configure method takes a collection of BaseCommand objects (our custom command instances), then uses RegisterCommand method on the CommandManager to register the commands.

That is basically all the pieces we need. We just have to fit everything together in our bootstrapper (the place where all the dependencies are set up using the IoC container). This could look something like this:
...
var builder = new ContainerBuilder();

builder.RegisterType<MyDependency>().As<IMyDependency>().InstancePerLifetimeScope();

builder.RegisterType<MyCommand>().As<BaseCommand>().WithParameter("fullName", "mynamespace:mycategory:mycommand").InstancePerLifetimeScope();

var rootContainer = builder.Build();

var commands = rootContainer.Resolve<IEnumerable<BaseCommand>>();
CommandConfigurator.Configure(commands);
...
So we just register dependencies as usual. The new thing is that we now register our custom commands in code, instead of using a Sitecore config file. And then we call our CommandConfigurator with a collection of instances of all our custom commands.

That's it. Plain and simple :-)