Monday, October 1, 2018

Install Nextcloud as Docker container on Synology NAS

You can find guides out there explaining how to install Nextcloud as a Docker container, or rather as a number of containers, since you'll normally need a few supporting containers (DB, reverse proxy, etc.). However, I wanted to explore a simple way to set up Nextcloud, and I wanted to do it on my Synology NAS. Basically, what I wanted to do was the following:

  • Set up a Nextcloud Docker container (using the built-in SQLite database) on the Synology NAS
  • Expose Nextcloud through the Synology NAS's built-in reverse proxy
  • Create and use a Let's Encrypt certificate for HTTPS
This guide assumes your Synology NAS supports Docker and that you've already installed the Synology Docker app on it. The guide furthermore assumes you are using a Synology NAS volume called "volume1". If not, just replace it with the name of the volume you are using.
The guide is based on DSM 6.2.
  1. Go to where you administer your domains and add an A-record for a new subdomain, e.g. nextcloud.yourdomain.com
  2. Go to your router's administration interface and set up port forwarding, e.g.: external-ip:6443 -> internal-ip:6301.
    Note: I'm using port 6443 externally because that's what I want. In your case, you may want to use the standard SSL port 443, or something else entirely.
  3. Log in to Synology DSM, open Control Panel/Security/Certificate, and create a new Let's Encrypt certificate (since the default Synology certificate is NOT a trusted one):
    • Press "Add"
    • Select "Add a new certificate" and press "Next"
    • Select "Get a certificate from Let's Encrypt" and press "Next"
    • Enter the information needed by Let's Encrypt and press "Apply"
  4. Go to Control Panel/Application Portal/Reverse Proxy, and create a new entry:
    • Source: HTTPS, nextcloud.yourdomain.com, 6443, Enable HSTS
    • Destination: HTTP, localhost, 6301
  5. Go back to Control Panel/Security/Certificate, and press "Configure", then for "nextcloud.yourdomain.com:6443" select your newly created Let's Encrypt certificate.
  6. SSH into your Synology NAS (e.g. with PuTTY) using an account with administrative rights.
  7. Create a new folder called "nextcloud" located in "volume1/docker". This will be used to store all your Nextcloud data, so when you upgrade the Docker container your data remain in place.
    mkdir /volume1/docker/nextcloud
  8. Pull the Nextcloud image and run it as a container using the following command (note: it is recommended to pull/run from the command line, since the Synology Docker app is limited in what you can configure):
    sudo docker run -d --name nextcloud -p 6301:80 -v /volume1/docker/nextcloud:/var/www/html nextcloud
You should now be able to open a browser and go to: https://nextcloud.yourdomain.com:6443 without any problems (even in Firefox :-)
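
Since all Nextcloud data lives in /volume1/docker/nextcloud, upgrading later on should just be a matter of pulling a newer image and recreating the container. A minimal sketch, using the same container name, port and volume as above (back up the folder first to be safe):

    # grab the latest Nextcloud image
    sudo docker pull nextcloud
    # replace the running container; the data stays in /volume1/docker/nextcloud
    sudo docker stop nextcloud
    sudo docker rm nextcloud
    sudo docker run -d --name nextcloud -p 6301:80 -v /volume1/docker/nextcloud:/var/www/html nextcloud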

Part 2:

Ok, so I installed Nextcloud. What I really wanted to use it for was for bookmark synchronization across browsers. With the unfortunate demise of Xmarks, I needed some other way of keeping all my bookmarks in sync, and this time I wanted to control everything myself, so I didn't have to rely on the potentially unreliable existence of yet another 3rd party cloud service.

In order to use Nextcloud for bookmarks, the first thing I did was to install an app called "Bookmarks". The term "app" can mean a lot of things these days, but in this case it means that you log in to your Nextcloud web interface, open the app store, find the app called "Bookmarks", and install it.

The next thing I did was to install the Floccus browser extension in the various browsers I use, and then I followed the Floccus instructions on how to sync bookmarks.

Saturday, July 14, 2018

Install Ubiquiti UniFi Controller as Docker container on Synology NAS

It's assumed your Synology NAS supports Docker and that you've already installed the Synology Docker app on it. The guide assumes you are using a Synology NAS volume called "volume1". If not, just replace it with the name of the volume you are using.

  1. SSH into your Synology NAS (e.g. with PuTTY) using an account with administrative rights.
  2. Create a new folder called "unifi" located in "volume1/docker". This will be used to store all your UniFi Controller configs, so when you upgrade the Docker container your configs remain in place.
    mkdir /volume1/docker/unifi
  3. Pull the UniFi Controller Docker image from Docker Hub by typing the following command:
    sudo docker pull linuxserver/unifi-controller:latest
  4. Run the new UniFi Controller container using the following command (note: you can't do this using the Synology Docker app, since it's not possible to set all the configuration correctly through the UI):
    sudo docker run -d --name=unifi-controller --net=host --volume=/volume1/docker/unifi:/config -p 3478:3478/udp -p 10001:10001/udp -p 8080:8080 -p 8081:8081 -p 8443:8443 -p 8843:8843 -p 8880:8880 -p 6789:6789 linuxserver/unifi-controller:latest
  5. Finally, open a web browser and go to: https://<SYNOLOGY_IP>:8443
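
It can take a couple of minutes for the controller to finish starting up the first time. If you want to follow along, you can tail the container log with a plain Docker command:

    sudo docker logs -f unifi-controller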

Note: If you are going to adopt an existing UniFi Access Point, it may be necessary to reset it to factory settings before the controller will be able to discover it (well, it was for me at least).
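
If you do need to reset and re-adopt an access point, this can also be done over SSH directly on the AP. A rough sketch, assuming factory-default ubnt/ubnt credentials and that the controller is reachable on port 8080 as mapped above:

    ssh ubnt@<AP_IP>
    # wipe the previous adoption and reset the AP to factory settings
    set-default
    # after the AP has rebooted, SSH in again and point it at the new controller
    set-inform http://<SYNOLOGY_IP>:8080/inform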

Update (Nov. 24 2019): Updated commands to reflect UniFi image renaming (was: linuxserver/unifi, now is: linuxserver/unifi-controller)

Saturday, October 10, 2015

Moving iTunes library without using the Media folder

Ok, so this post is really not about development (this being a developer blog), but I just needed to get this weirdness off my chest.

Recently I had the need to move my iTunes library - all the physical files, that is - to a new location on my Windows system. Now, I don't use the iTunes Media folder, because of the "Keep iTunes Media folder organized" setting: I don't want iTunes to try and organize my music files - I like to handle that myself. However, all the guides I could find about moving the iTunes library made use of the Media folder, so I wanted to find another way. The problem, though, is that in theory there isn't any other way.

With iTunes shut down, I started to look into where it stores its library information. In Windows that is somewhere like this: C:\Users\[username]\Music\iTunes
In this folder there is a file called "iTunes Library.itl". The file contains information about all music files, including the full path of each music file. But the library file is in a proprietary format with no easy way of reading it. Next to the .itl file is another library file called "iTunes Music Library.xml". This XML file contains the same information about the music files, and since it is XML it can be read in any text editor.

So I tried to do a search and replace to change the paths pointing to the music files to the new location. I saved the file and fired up iTunes. But it just ignored the XML completely and started up with an empty library. I then went through all the menus in iTunes to see if I could find a way to import the XML file. No such luck. So I did some more research on the Internet and finally found a post which, after reading it, I really had my doubts about. But since I couldn't find any other suggestions, I decided to give it a shot.

The procedure is this:

  1. Close iTunes.
  2. Open the XML library file in a text editor, fix all file paths to point to the new location, and then save the file (a small scripted sketch of this follows after the list).
  3. Open the .itl file in a text editor (it will look all weird), remove some of the contents, and then save the file. This causes the file to be damaged, which is the intention.
  4. Start up iTunes. It should now notify you that it is reading the XML file, and after a while iTunes will open up with the relocated library loaded.
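
For step 2, if there are a lot of entries, the search and replace can also be scripted. A rough sketch using sed (e.g. from Git Bash on Windows); the old and new locations below are made-up examples, so adjust them to your own paths:

    # keep a backup of the XML library file before touching it
    cp "iTunes Music Library.xml" "iTunes Music Library.xml.bak"
    # rewrite the old location prefix to the new one in all the file paths
    sed -i 's|file://localhost/D:/OldMusic|file://localhost/E:/Music|g' "iTunes Music Library.xml"
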
So it was step 3 that I found a little strange, but hey, doing it actually caused iTunes to read the XML file, reestablishing the moved library. Why, Apple, why oh why?!

As an end note I want to say that I'm not sure if this is always possible to do. In my case it was. But I noticed afterwards that the XML was deleted and didn't seem to be recreated. Maybe it will turn up again at some point.

Update:
Ok, so I just found out something about the XML library file. It is possible to generate it by exporting the library from within iTunes. It's just that finding the correct menu item for this can be a bit tricky. Nowadays, the default setting in iTunes is to not show the menu bar. There is a menu at the top left corner which has a Library submenu, but that submenu does not contain anything for exporting the library. Instead you have to select Show Menu Bar, which shows the full menu bar. Then go to File->Library; this Library submenu does contain an Export Library menu item, which you can use to export the library to XML.
I guess I didn't get the memo when they made that design decision :)

Monday, February 2, 2015

Umbraco deployment using Courier through source control

Disclaimer: I'm being pretty straightforward in this article. However, it is not my intention to endorse or belittle any product or technology, even though some statements could possibly be interpreted that way. Everything in this article is presented as facts as seen from my perspective. So if any statements appear to be directly wrong, I hereby apologize. Feel free to leave a comment.

Introduction

Being fairly new to the world of Umbraco CMS, and coming from the world of Sitecore CMS, I've noticed both some good things and some not so good things about Umbraco.

On the good side, I think it is much easier to get started with Umbraco, than was my experience starting out with Sitecore. On the surface the two may appear similar, but when you actually start doing stuff, I find Sitecore more complex than Umbraco, especially for smaller solutions.

However, when it comes to larger solutions, Umbraco definitely has its shortcomings. For example, when doing team development you really start to feel the pain, and it doesn't get better when introducing automatic deployment to multiple environments. Sitecore in itself is not really better in these matters; it is only by making use of a third-party tool, TDS (Team Development for Sitecore), that a real advantage is gained.

With TDS each developer can work in a completely separate environment (his local machine), even including the database. TDS serializes Sitecore items to text files, which can then be part of the solution's source code and therefore checked in to a source control system. TDS also helps with synchronizing between Sitecore items in the DB and textual items in the solution by providing a fairly simple-to-use UI. With regards to deployment, TDS can make packages of Sitecore items, which can then be installed on the different environments using a small included command-line tool.

I haven't seen anything at that level for Umbraco. Usually team development occurs on a shared database, but with local source code. This makes feature-driven development kind of hard: even though you work on isolated source code, you can't make changes in the backoffice without the risk of disturbing someone else's work. With regards to deployment, Umbraco has Courier, which sounds very promising when you read about it, but which in the current version (2.11) simply doesn't work (to clarify: it fails to transfer revisions to other environments). I hope this will be fixed soon :)

Courier provides the means of creating revisions, i.e. packages of Umbraco items. It lets you select which items to include in the revision, and can even automatically include any dependent items. There is one shortcoming though: when using automatic dependent item inclusion there is no distinction between content items and non-content items, so you'll end up adding content items to the revision. This presents problems later when deploying to a live environment where editors have created content, since you risk overwriting their work. So it seems better to disable the automatic dependency thingy when creating revisions.

Another shortcoming of Courier is the documentation and sample code, which are a couple of years behind the current version (2.11). This is too bad, because you really need them if you want to make use of the Courier API for creating command-line based tools for deployment automation.

Well then, all that being said, I think it's time for what this post is really about - deployment using Courier through source control.

Solution

In the company where I work we have multiple environments: DEV, TEST, PREPROD and PROD. Furthermore, we use GIT for source control, TeamCity as build server, and Octopus for deployment. Ultimately I would like to automate the whole process of deployment, so that I can deploy both application files and Umbraco items with the press of a button. Due to the shortcomings mentioned earlier this is currently wishful thinking, so for now I have settled for a more manual approach. Here's the deal.

In the backoffice I have created what I call a long-lived Courier revision. It is just a normal revision, but it stays there at all times, and when preparing for a release I simply update this revision to reflect the current state of the non-content Umbraco items that are to be part of the deployment. That is, the revision should always contain everything needed to deploy to a fresh environment. And items should be added to it without auto-including dependent items. Currently, in my situation, that means including the following:
  • Datatypes
  • Document types
  • Macros
  • Templates
A note on adding Templates: I'm really only interested in including the database part of the templates, but the Razor files are also added, which is unnecessary since they will already be part of the application files deployment. It is not catastrophic, just inconvenient.

Due to the issue mentioned earlier with Courier 2.11 being unable to transfer revisions, and due to the fact that we (the developers in my company) don't have access to the PREPROD and PROD environments, an ingenious scheme had to be devised for getting the revision moved to these environments.

The solution is to include the revision as part of the source code. The revision is simply a folder structure with a bunch of files located at App_Data\courier\revisions\, and adding this to the GIT repository is perfectly doable. Since revision files are just plain XML files, having these source controlled gives the added benefit of being able to inspect changes (git diff) before committing.
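
In practice that just means committing the revision folder along with the rest of the source. A small sketch of what that could look like (the revision folder name "long-lived" is only an example):

    # stage the Courier revision files (the folder name is just an example)
    git add App_Data/courier/revisions/long-lived
    # review what actually changed in the revision before committing
    git diff --cached App_Data/courier/revisions/long-lived
    git commit -m "Update Courier revision for next release"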

Now, when deploying to an environment, the revision just follows the other application files, and it is then a simple matter of going into the backoffice on that environment, selecting Courier, and installing the revision.

I find this approach simple and pragmatic. However, I do hope to be able to automate it more in the future when Courier becomes a bit more mature/stable.

Tuesday, July 15, 2014

Group by in C# and linq.js

Being a C# developer I really like LINQ and use it a lot. It can simplify code a great deal. So it is only natural to want the same goodness in JavaScript. Luckily there is a framework - linq.js - that provides this functionality. However, the syntax is not quite the same, so it takes a little getting used to.

In this post I want to show an example of how to do a group by.

I have a bunch of people in a collection, each person defined by name, age and job. Now I want to group these people by job. The result should be a grouped collection, where each group contains the person objects belonging to a specific job.

In C# it looks something like this:

var people = new[] {
    new { Name = "Carl", Age = 33, Job = "Tech" },
    new { Name = "Homer", Age = 42, Job = "Tech" },
    new { Name = "Phipps", Age = 35, Job = "Nurse" },
    new { Name = "Doris", Age = 27, Job = "Nurse" },
    new { Name = "Willy", Age = 31, Job = "Janitor" }
};

var grouped = people.GroupBy(
    person => person.Job,
    (job, persons) => new { Job = job, Persons = persons });

foreach (var group in grouped)
{
    System.Diagnostics.Debug.WriteLine("job: " + group.Job);
    foreach (var person in group.Persons)
    {
        System.Diagnostics.Debug.WriteLine("   name: {0}, age: {1}, job: {2}",
            person.Name,
            person.Age,
            person.Job);
    }
}

The group by statement is fairly simple, and the output is exactly as expected:

job: Tech
   name: Carl, age: 33, job: Tech
   name: Homer, age: 42, job: Tech
job: Nurse
   name: Phipps, age: 35, job: Nurse
   name: Doris, age: 27, job: Nurse
job: Janitor
   name: Willy, age: 31, job: Janitor

The same thing in linq.js is a little bit more involved, and for me it did take some playing around before I ended up with the code below. But basically it is quite similar to the C# version.

var people = [
    { name: "Carl", age : 33, job: "Tech" },
    { name: "Homer", age : 42, job: "Tech" },
    { name: "Phipps", age : 35, job: "Nurse" },
    { name: "Doris", age: 27, job: "Nurse" },
    { name: "Willy", age: 31, job: "Janitor" }
];

var grouped = Enumerable
    .From(people)
    .GroupBy(
        function (person) { return person.job; }, // Key selector
        function (person) { return person; },     // Element selector
        function (job, grouping) {                // Result selector
            return {
                job: job,
                persons: grouping.source
            };
        })
    .ToArray();

alert(JSON.stringify(grouped));

And the result:

[{
    "job": "Tech",
    "persons": [{
        "name": "Carl",
        "age": 33,
        "job": "Tech"
    },
    {
        "name": "Homer",
        "age": 42,
        "job": "Tech"
    }]
},
{
    "job": "Nurse",
    "persons": [{
        "name": "Phipps",
        "age": 35,
        "job": "Nurse"
    },
    {
        "name": "Doris",
        "age": 27,
        "job": "Nurse"
    }]
},
{
    "job": "Janitor",
    "persons": [{
        "name": "Willy",
        "age": 31,
        "job": "Janitor"
    }]
}]

Disabling WebDAV in a Sitecore web application

As the trend for web applications moves towards heavier clients, it puts new demands on how to structure the web application; e.g. it is now quite common to let the client handle the complexities of user interface functionality, and then just call the server to query raw data. This could be done using AJAX calls to query RESTful WebApi services for data. JavaScript on the client will then handle processing and presentation of the data.

Now, any self-respecting RESTful web API will want to use common HTTP verbs such as GET, POST, PUT, DELETE, etc. But for Sitecore web applications hosted in IIS this turns out to be a problem, and the problem is called WebDAV. WebDAV takes over HTTP verbs like PUT and DELETE, so they cannot be used in, for example, a WebApi controller. In many situations WebDAV is not really needed by a Sitecore web application, but apparently it is enabled by default. And while it may not actually be enabled in IIS, the default Sitecore web.config somehow enables it anyway, at least enough to cause problems.

Disabling WebDAV in a Sitecore web application can be a bit tricky. So here is a way to do it.

  1. Open web.config
  2. Locate the log4net appender section "WebDAVLogFileAppender" and remove it or comment it out.
  3. Locate the log4net logger section "Sitecore.Diagnostics.WebDAV" and remove it or comment it out.
  4. Under <system.webServer>, locate the <handlers> section and replace these lines:
    <add name="WebDAVRoot" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness32" />
    <add name="WebDAVRoot64" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness64" />
    <add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" name="Sitecore.WebDAVMediaRequestHandler" />
    
    with:
    <remove name="WebDAV" />
    
  5. Under <system.web>, locate the <httpHandlers> section and replace this line:
    <add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" />
    
    with:
    <remove verb="*" path="sitecore_webDAV.ashx" />
  6. Remove the Sitecore.WebDAV.config file from App_Config\Include
As far as I have been able to find out, the only thing that will be missing in Sitecore after disabling WebDAV is the so-called WebDAV dialog, which can be opened in the media library to make it possible to drag'n'drop media files from the file system into Sitecore.

Notes:
Procedure devised using Sitecore 7.2

AutoMapper Children Value Resolver

When exposing data to the outside world (e.g. through a service) one could easily find oneself thinking about such matters as performance and load on the wire.

We may have a scenario where we need to expose a customer service that is used in different situations: sometimes callers just want customer master data, and at other times callers want customers with their order history. Depending on the system landscape, order data may come from another system than the one where customer data is stored, and these systems may perform differently, so that retrieval of customer data could be a relatively inexpensive operation, whereas retrieval of order data could be more expensive.

A common practice when creating services is to transform the entities from the domain into DTO objects, and a widely used component for this is AutoMapper. But how to get AutoMapper to deal with the scenario above?

As stated, sometimes we want to expose only customer data, and sometimes order data should be included. The domain may have been implemented as an aggregate, where a customer has a collection of orders, like this:

public class Order
{
    public Guid Id { get; set; }
    public DateTime Created { get; set; }
    public string Text { get; set; }
}

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public IEnumerable<Order> Orders { get; set; }
}

And DTOs like this (for some reason we don't want to expose the internal IDs):

public class OrderDto
{
    public DateTime Created { get; set; }
    public string Text { get; set; }
}

public class CustomerDto
{
    public string Name { get; set; }
    public string Address { get; set; }
    public IEnumerable<OrderDto> Orders { get; set; }
}

The data retrieval code may have been implemented with lazy load, so that order data is only queried if used. However, since AutoMapper will map the Orders collection by default, orders will be queried. So we need to modify the way customers are mapped. To that end I've devised an IValueResolver called ChildrenResolver. It is a general resolver that can be used for any child collection, and it looks like this:

public class ChildrenResolver<TSource, TMember> : IValueResolver
{
    private readonly Func<TSource, IEnumerable<TMember>> _childrenExpression;

    public ChildrenResolver(Expression<Func<TSource, IEnumerable<TMember>>> childrenExpression)
    {
        _childrenExpression = childrenExpression.Compile();
    }

    public ResolutionResult Resolve(ResolutionResult source)
    {
        bool includeChildren = false;
        if (source.Context.Options.Items.ContainsKey("IncludeChildren"))
        {
            includeChildren = (bool)source.Context.Options.Items["IncludeChildren"];
        }
        return source.New(includeChildren ? _childrenExpression.Invoke((TSource)source.Value) : null);
    }
}

The constructor takes an expression selecting the children collection member from the source entity, i.e. in our scenario it tells the resolver that we want to map the Orders property of the Customer entity. The Resolve method first looks up an options item called IncludeChildren, which is a boolean that we will set from the outside. It tells the resolver whether or not we want it to resolve the specified children collection property, and if so it returns a ResolutionResult with the children collection.

The ChildrenResolver is then used when defining a mapping, like this:

Mapper.CreateMap<Customer, CustomerDto>()
    .ForMember(dto => dto.Orders, opt => opt
        .ResolveUsing<ChildrenResolver<Customer, Order>>()
        .ConstructedBy(() => new ChildrenResolver<Customer, Order>(entity => entity.Orders)));

The mapping defines that we want to map from the Customer entity to CustomerDto, and for the Orders member of the DTO we want to use the ChildrenResolver, which is instructed to grab the Orders collection of the Customer entity (this gives the flexibility of not having to have a one-to-one naming relationship between source and target properties). Notice that the usage of ResolveUsing is a bit more complex than typically seen. Since the ChildrenResolver takes a constructor parameter, we need to tell AutoMapper that we will handle the resolver instantiation ourselves, which we do by using the ConstructedBy method.

Finally, we are ready to use the whole thing in our customer service, which may be a WebApi controller with the following method:

[Route("api/customers/{id}")]
public CustomerDto GetCustomer(Guid id, bool includeChildren = false)
{
    var customer = _customerRepository[id];
    if (customer == null)
    {
        // todo: handle if customer not found
    }

    var customerDto = Mapper.Map<CustomerDto>(customer, opts =>
    {
        opts.Items["IncludeChildren"] = includeChildren;
    });
    
    return customerDto;
}

Notice that when we do the mapping, we set the IncludeChildren options item to specify whether or not we want children collections mapped, and in this case that information comes from a service parameter.

That's it. I hope someone finds this useful :-)

Notes:
The code is based on usage of AutoMapper version 3.2.1.