tag:blogger.com,1999:blog-18664932108997723082024-03-14T05:43:13.740+01:00Maze's Developer BlogMazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.comBlogger16125tag:blogger.com,1999:blog-1866493210899772308.post-81460643185446808932018-10-01T21:53:00.000+02:002018-10-02T07:56:42.813+02:00Install Nextcloud as Docker container on Synology NASYou can find guides out there explaining how to install Nextcloud as a Docker container, or rather as a number of containers, since you'll normally need a few supporting containers (DB, reverse proxy, etc.). However, I wanted to explore a simple way to set up Nextcloud, and I wanted to do it on my Synology NAS. Basically, what I wanted to do was the following:<br />
<ul>
<li>Set up a Nextcloud (using the built-in SQLite database) Docker container on a Synology NAS</li>
<li>Expose Nextcloud through the Synology NAS's built-in reverse proxy</li>
<li>Create and use a Let's Encrypt certificate for HTTPS</li>
</ul>
This guide assumes your Synology NAS supports Docker and that you've already installed the Synology Docker app. It furthermore assumes you are using a Synology NAS volume called "volume1". If not, just replace it with the name of the volume you are using.<br />
The guide is based on DSM 6.2.<br />
<ol>
<li>Go to where you administer your domains and add an A-record for a new subdomain, e.g. nextcloud.yourdomain.com</li>
<li>Go to your router administration interface and set up port forwarding, e.g.: external-ip:6443 -> internal-ip:6301.<br />Note: I'm using port 6443 externally because that's what I want. In your case, you may want to use the standard SSL port 443, or something else entirely.</li>
<li>Log in to Synology DSM, open Control Panel/Security/Certificate, and create a new Let's Encrypt certificate (since the default Synology certificate is NOT a trusted one):</li>
<ul>
<li>Press "Add"</li>
<li>Select "Add a new certificate" and press "Next"</li>
<li>Select "Get a certificate from Let's Encrypt" and press "Next"</li>
<li>Enter the information needed by Let's Encrypt and press "Apply"</li>
</ul>
<li>Go to Control Panel/Application Portal/Reverse Proxy, and create a new entry:</li>
<ul>
<li>Source: HTTPS, nextcloud.yourdomain.com, 6443, Enable HSTS</li>
<li>Destination: HTTP, localhost, 6301</li>
</ul>
<li>Go back to Control Panel/Security/Certificate, and press "Configure", then for "nextcloud.yourdomain.com:6443" select your newly created Let's Encrypt certificate.</li>
<li>SSH into your Synology NAS (e.g. with PuTTy) using an account with administrative rights.</li>
<li>Create a new folder called "nextcloud" in "volume1/docker". This will be used to store all your Nextcloud data, so when you upgrade the Docker container your data stays in place.<br /><pre>mkdir /volume1/docker/nextcloud</pre>
</li>
<li>Pull the Nextcloud image and run it as a container using the following command (note: it is recommended to pull/run from the command line, since the Synology Docker app is limited in what you can configure):<br /><pre>sudo docker run -d --name nextcloud -p 6301:80 -v /volume1/docker/nextcloud:/var/www/html nextcloud</pre>
</li>
</ol>
You should now be able to open a browser and go to: https://nextcloud.yourdomain.com:6443 without any problems (even in Firefox :-)<br />
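Because all state lives in the mounted folder, a later upgrade boils down to recreating the container. A minimal sketch, assuming the container name and paths from the steps above (back up /volume1/docker/nextcloud first):

```shell
# Fetch the newest Nextcloud image
sudo docker pull nextcloud
# Stop and remove the old container; the data in /volume1/docker/nextcloud is untouched
sudo docker stop nextcloud
sudo docker rm nextcloud
# Recreate the container with the same settings as in step 8
sudo docker run -d --name nextcloud -p 6301:80 \
  -v /volume1/docker/nextcloud:/var/www/html nextcloud
```

These commands require the Docker daemon on the NAS, so run them over the same SSH session used earlier.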
<br />
Part 2:<br />
<br />
Ok, so I installed Nextcloud. What I really wanted to use it for was for bookmark synchronization across browsers. With the unfortunate demise of Xmarks, I needed some other way of keeping all my bookmarks in sync, and this time I wanted to control everything myself, so I didn't have to rely on the potentially unreliable existence of yet another 3rd party cloud service.<br />
<br />
In order to use Nextcloud for bookmarks, the first thing I did was to install an app called "Bookmarks". The term "app" can mean a lot of things these days, but in this case it means that you log in to your Nextcloud web interface, locate the app store, find the app called "Bookmarks", and install it.<br />
<br />
The next thing I did was to install the Floccus browser extension in the various browsers I use, and then I followed the Floccus instructions for syncing bookmarks.Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-33501058078703985652018-07-14T16:49:00.000+02:002019-11-24T14:00:54.187+01:00Install Ubiquiti UniFi Controller as Docker container on Synology NAS<p>It's assumed your Synology NAS supports Docker and that you've already installed the Synology Docker app. The guide furthermore assumes you are using a Synology NAS volume called "volume1". If not, just replace it with the name of the volume you are using.</p>
<ol>
<li>SSH into your Synology NAS (e.g. with PuTTy) using an account with administrative rights.</li>
<li>Create a new folder called "unifi" located in "volume1/docker". This will be used to store all your UniFi Controller configs, so when you upgrade the Docker container your configs remain in place.
<pre>mkdir /volume1/docker/unifi</pre>
</li>
<li>Pull the UniFi Controller Docker image from Docker Hub by typing the following command:<pre>sudo docker pull linuxserver/unifi-controller:latest</pre>
</li>
<li>Run the new UniFi Controller container using the following command (note: you can't do this using the Synology Docker app, since it's not possible to set all the configuration correctly through the UI; also note that with --net=host the container shares the host's network stack, so the -p mappings are effectively redundant and mainly document the ports in use):<pre>sudo docker run -d --name=unifi-controller --net=host --volume=/volume1/docker/unifi:/config -p 3478:3478/udp -p 10001:10001/udp -p 8080:8080 -p 8081:8081 -p 8443:8443 -p 8843:8843 -p 8880:8880 -p 6789:6789 linuxserver/unifi-controller:latest</pre>
</li>
<li>Finally, open a web browser and go to: <b>https://<SYNOLOGY_IP>:8443</b></li>
</ol>
<p>Note: If you are going to adopt an existing UniFi Access Point, it may be necessary to reset it to factory settings before the controller will be able to discover it (well, it was for me at least).</p>
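When adoption fails, it helps to confirm the controller is actually up and to watch its logs while the access point tries to connect. A couple of commands, assuming the container name used above:

```shell
# Confirm the container is running
sudo docker ps --filter name=unifi-controller
# Follow the controller logs during adoption
sudo docker logs -f unifi-controller
```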
<p><strong>Update (Nov. 24 2019):</strong> Updated commands to reflect UniFi image renaming (was: linuxserver/unifi, now is: linuxserver/unifi-controller)</p>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-41524490834551174052015-10-10T13:26:00.000+02:002015-11-04T20:24:04.875+01:00Moving iTunes library without using the Media folderOk, so this post is really not about development (i.e. this is a developer blog), but I just needed to get this weirdness off my chest.<br />
<br />
Recently I needed to move my iTunes library - all the physical files, that is - to a new location on my Windows system. Now, I don't use the iTunes Media folder, because of the "Keep iTunes Media folder organized" setting: I don't want iTunes to try to organize my music files; I like to handle that myself. However, all the guides I could find about moving the iTunes library made use of the Media folder, so I wanted to find another way. The problem, though, is that officially there isn't any other way.<br />
<br />
With iTunes shut down I started to look into where it stores its library information. On Windows that is somewhere like this: C:\Users\[username]\Music\iTunes<br />
In this folder there is a file called "iTunes Library.itl". The file contains information about all music files, including the full path of each music file. But the library file is in a proprietary format with no easy way of reading it. Next to the .itl file is another library file called "iTunes Music Library.xml". This XML file contains the same information about the music files, and since it is XML it can be read in any text editor.<br />
<br />
So I tried a search and replace to change the paths pointing to the music files to the new location. I saved the file and fired up iTunes. But it just ignored the XML completely and started up with an empty library. I then went through all the menus in iTunes to see if I could find a way to import the XML file. No such luck. So I did some more research on the Internet and finally found a post which, after reading it, I really had my doubts about. But since I couldn't find any other suggestions, I decided to give it a shot.<br />
<br />
The procedure is this:<br />
<ol>
<li>Close iTunes.</li>
<li>Open the XML library file in a text editor and fix all file paths to point to the new location, and then save file.</li>
<li>Open the .itl file in a text editor (it will look all weird) and just remove some of the contents, and then save file. This causes the file to be damaged, which is the intention.</li>
<li>Start up iTunes. It should now notify you that it is reading the XML file, and after a while iTunes will open up with the relocated library loaded.</li>
</ol>
<div>
So it was step 3 that I found a little strange, but hey, doing it actually caused iTunes to read the XML file, reestablishing the moved library. Why, Apple, why oh why?!</div>
<div>
<br /></div>
<div>
As an end note I want to say that I'm not sure if this is always possible to do. In my case it was. But I noticed afterwards that the XML was deleted and didn't seem to be recreated. Maybe it will turn up again at some point.<br />
<br />
<b>Update:</b><br />
Ok, so I just found out something about the XML library file. It is possible to generate it by exporting the library from within iTunes; it's just that finding the correct menu item can be a bit tricky. Nowadays, the default setting in iTunes is to hide the menu bar. There is a menu at the top left corner containing a Library submenu, but this one does not contain anything for exporting the library. Instead you have to select Show Menu Bar, which shows the full menu bar. Then go to File->Library, and this Library submenu does contain an Export Library menu item, which you can use to export the library to XML.<br />
I guess I didn't get the memo when they made that design decision :)</div>
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-4596811598520567352015-02-02T22:24:00.003+01:002015-02-02T22:25:08.121+01:00Umbraco deployment using Courier through source control<i>Disclaimer: I'm being pretty straight forward in this article. However, it is not the intention to endorse or belittle any product or technology, even though there may be statements that could possibly be interpreted that way by someone. Everything in this article is presented as facts as seen from my perspective. And so, if any statements appear to be directly wrong, I hereby apologize. Feel free to leave a comment. </i><br />
<h2>
Introduction</h2>
Being fairly new to the world of Umbraco CMS, and coming from the world of Sitecore CMS, I've noticed both some good things and some not so good things about Umbraco.<br />
<div>
<br /></div>
<div>
On the good side, I think it is much easier to get started with Umbraco than it was with Sitecore. On the surface the two may appear similar, but when you actually start doing stuff, I find Sitecore more complex than Umbraco, especially for smaller solutions.<br />
<div>
<br /></div>
</div>
<div>
However, when it comes to larger solutions Umbraco definitely has its shortcomings. For example, when doing team development you really start to feel the pain, and it doesn't get better when introducing automatic deployment to multiple environments. Sitecore in itself is not really better in these matters, and it is only by making use of a third party tool, TDS (Team Development for Sitecore), that a real advantage is gained.</div>
<div>
<br /></div>
<div>
With TDS each developer can work in a completely separate environment (their local machine), even including the database. TDS serializes Sitecore items to text files, which can then be part of the solution's source code and therefore checked in to a source control system. TDS also helps with synchronizing between Sitecore items in the DB and textual items in the solution by providing a fairly simple-to-use UI. With regards to deployment, TDS can make packages of Sitecore items, which can then be installed on the different environments using a small included command-line tool.</div>
<div>
<br /></div>
<div>
I haven't seen anything at that level for Umbraco. Usually team development occurs on a shared database, but with local source code. This makes feature-driven development kind of hard. Even though you work on isolated source code, you can't make changes in backoffice without the risk of disturbing someone else's work. With regards to deployment Umbraco has Courier, which when reading about it sounds very promising, but which in the current version (2.11) simply doesn't work (to clarify: it fails to transfer revisions to other environments). I hope this will be fixed soon :)</div>
<div>
<br /></div>
<div>
Courier provides the means of creating revisions, i.e. packages of Umbraco items. It lets you select which items to include in the revision, and can even automatically include any dependent items. There is one shortcoming though. When using automatic dependent item inclusion there is no distinction between content items and non-content items. So you'll end up with adding content items to the revision when using this functionality. This presents problems later when deploying to a live environment where editors have created content, since you risk overwriting their work. So it seems better to disable the automatic dependency thingy when creating revisions.</div>
<div>
<br /></div>
<div>
Another shortcoming of Courier is documentation and sample code, which is a couple of years behind the current version (2.11). This is too bad, because you really need this if you want to make use of the Courier API for creating command-line based tools for deployment automation.</div>
<div>
<br /></div>
<div>
<div>
Well then, all that being said, I think it's time for what this post is really about - deployment using Courier through source control.<br />
<br /></div>
</div>
<h2>
Solution</h2>
<div>
In the company where I work we have multiple environments: DEV, TEST, PREPROD and PROD. Furthermore, we use Git for source control, TeamCity as build server, and Octopus for deployment. Ultimately I would like to automate the whole deployment process, so that with the press of a button I can deploy both application files and Umbraco items. Due to the shortcomings mentioned earlier this is currently wishful thinking. So for now I have settled for a more manual approach. Here's the deal.</div>
<div>
<br /></div>
<div>
In backoffice I have created what I call a long-lived Courier revision. It is just a normal revision, but it will stay there at all times, and when preparing for a release, I simply update it to reflect the current state of the non-content Umbraco items to be part of the deployment. That is, the revision should always contain everything needed to deploy to a fresh environment. And items should be added to it without auto-including dependent items. Currently in my situation that means including the following:</div>
<div>
<ul>
<li>Datatypes</li>
<li>Document types</li>
<li>Macros</li>
<li>Templates</li>
</ul>
<div>
A note on adding Templates: I'm really only interested in including the database part of the templates, but the Razor files are also added, which is unnecessary since they will already be part of the application files deployment. It is not catastrophic, just inconvenient.</div>
</div>
<div>
<br /></div>
<div>
Due to the issue mentioned earlier with Courier 2.11 unable to transfer revisions, and due to the fact that we (developers in my company) don't have access to PREPROD and PROD environments, an ingenious scheme had to be devised for getting the revision moved to these environments.</div>
<div>
<br /></div>
<div>
The solution is to include the revision as part of the source code. The revision is simply a folder structure with a bunch of files located at App_Data\courier\revisions\, and adding this to the Git repository is perfectly doable. Since revision files are just plain XML files, having them source controlled gives the added benefit of being able to inspect changes (git diff) before committing.</div>
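The day-to-day workflow can be sketched with a few Git commands (run from the web root of the solution; the exact location and commit message are of course up to you):

```shell
# After updating the long-lived revision in the backoffice:
git add App_Data/courier/revisions/
# Revision files are plain XML, so changes can be reviewed before committing
git diff --cached App_Data/courier/revisions/
git commit -m "Update long-lived Courier revision"
```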
<div>
<br /></div>
<div>
Now, when deploying to an environment the revision just follows the other application files, and it is then a simple matter to go into the backoffice on that environment, select Courier, and install the revision.</div>
<div>
<br /></div>
<div>
I find this approach simple and pragmatic. However, I do hope to be able to automate it more in the future when Courier becomes a bit more mature/stable.</div>
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com4tag:blogger.com,1999:blog-1866493210899772308.post-28504592062235102742014-07-15T22:10:00.000+02:002014-07-15T22:37:07.062+02:00Group by in C# and linq.jsBeing a C# developer I really like and use Linq a lot. It can simplify code a great deal. So it is only natural to want the same goodness in javascript. Luckily there is a framework - linq.js - that provides this functionality. However, the syntax is not quite the same, so it takes a little getting used to.<br />
<br />
In this post I want to show an example of how to do a group by.<br />
<br />
I have a bunch of people in a collection, each person defined by name, age and job. Now I want to group these people by job. The result should be a grouped collection, where each group contains the person objects belonging to a specific job.<br />
<br />
In C# it looks something like this:<br />
<br />
<pre class="brush: csharp">var people = new[] {
new { Name = "Carl", Age = 33, Job = "Tech" },
new { Name = "Homer", Age = 42, Job = "Tech" },
new { Name = "Phipps", Age = 35, Job = "Nurse" },
new { Name = "Doris", Age = 27, Job = "Nurse" },
new { Name = "Willy", Age = 31, Job = "Janitor" }
};
var grouped = people.GroupBy(
person => person.Job,
(job, persons) => new { Job = job, Persons = persons });
foreach (var group in grouped)
{
System.Diagnostics.Debug.WriteLine("job: " + group.Job);
foreach (var person in group.Persons)
{
System.Diagnostics.Debug.WriteLine(" name: {0}, age: {1}, job: {2}",
person.Name,
person.Age,
person.Job);
}
}
</pre>
<br />
The group by statement is fairly simple, and the output is exactly as expected:<br />
<br />
job: Tech<br />
name: Carl, age: 33, job: Tech<br />
name: Homer, age: 42, job: Tech<br />
job: Nurse<br />
name: Phipps, age: 35, job: Nurse<br />
name: Doris, age: 27, job: Nurse<br />
job: Janitor<br />
name: Willy, age: 31, job: Janitor<br />
<br />
The same thing in linq.js is a little bit more involved, and for me it did take some playing around before I ended up with the code below. But basically it is quite similar to the C# version.<br />
<br />
<pre class="brush: csharp">var people = [
{ name: "Carl", age : 33, job: "Tech" },
{ name: "Homer", age : 42, job: "Tech" },
{ name: "Phipps", age : 35, job: "Nurse" },
{ name: "Doris", age: 27, job: "Nurse" },
{ name: "Willy", age: 31, job: "Janitor" }
];
var grouped = Enumerable
.From(people)
.GroupBy(
function (person) { return person.job; }, // Key selector
function (person) { return person; }, // Element selector
function (job, grouping) { // Result selector
return {
job: job,
persons: grouping.source
};
})
.ToArray();
alert(JSON.stringify(grouped));
</pre>
<br />
And the result:<br />
<br />
<pre class="brush: javascript">[{
"job": "Tech",
"persons": [{
"name": "Carl",
"age": 33,
"job": "Tech"
},
{
"name": "Homer",
"age": 42,
"job": "Tech"
}]
},
{
"job": "Nurse",
"persons": [{
"name": "Phipps",
"age": 35,
"job": "Nurse"
},
{
"name": "Doris",
"age": 27,
"job": "Nurse"
}]
},
{
"job": "Janitor",
"persons": [{
"name": "Willy",
"age": 31,
"job": "Janitor"
}]
}]
</pre>
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com2tag:blogger.com,1999:blog-1866493210899772308.post-68354427336108111832014-07-15T21:00:00.004+02:002014-07-21T16:48:24.800+02:00Disabling WebDAV in a Sitecore web applicationAs the trend for web applications moves towards heavier clients, new demands are put on how to structure the web application. For example, it is now quite common to let the client handle the complexities of user interface functionality, and call the server only to query raw data. This could be done using Ajax calls to query RESTful WebApi services for data; JavaScript on the client then handles processing and presentation of the data.
<br />
<br />
Now, any self-respecting RESTful web API will want to use common HTTP verbs, such as GET, POST, PUT, DELETE, etc. But for Sitecore web applications hosted in IIS this turns out to be a problem, and the problem is called WebDAV. WebDAV takes over HTTP verbs like PUT and DELETE, so they cannot be used in, for example, a WebApi controller. In many situations WebDAV is not really needed by a Sitecore web application, but apparently it is enabled by default. And while it may not actually be enabled in IIS, the default Sitecore web.config somehow enables it anyway, at least enough to cause problems.
<br />
<br />
Disabling WebDAV in a Sitecore web application can be a bit tricky. So here is a way to do it.
<br />
<ol>
<li>Open web.config</li>
<li>Locate the log4net appender section "WebDAVLogFileAppender" and remove it or comment it out.</li>
<li>Locate the log4net logger section "Sitecore.Diagnostics.WebDAV" and remove it or comment it out.</li>
<li>Under <system.webServer> locate the handlers section and replace these lines:<br />
<pre class="brush: xml"><add name="WebDAVRoot" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness32" />
<add name="WebDAVRoot64" path="*" verb="OPTIONS,PROPFIND" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" resourceType="Unspecified" preCondition="classicMode,runtimeVersionv4.0,bitness64" />
<add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" name="Sitecore.WebDAVMediaRequestHandler" />
</pre>
with:
<pre class="brush: xml"><remove name="WebDAV" />
</pre>
</li>
<li>Under <system.web> locate the httpHandlers section and replace this line:<br />
<pre class="brush: xml"><add verb="*" path="sitecore_webDAV.ashx" type="Sitecore.Resources.Media.WebDAVMediaRequestHandler, Sitecore.Kernel" />
</pre>
with:
<pre class="brush: xml"><remove verb="*" path="sitecore_webDAV.ashx" /></pre>
</li>
<li>Remove the Sitecore.WebDAV.config file from App_Config\Include
</li>
</ol>
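After redeploying, a quick way to check that PUT and DELETE are no longer hijacked by WebDAV is a couple of curl calls. The endpoint below is purely hypothetical; substitute one of your own WebApi routes:

```shell
# Before the fix these typically return 405 from WebDAV;
# afterwards the response should come from your own controller
curl -i -X PUT http://localhost/api/example/1 -H "Content-Type: application/json" -d "{}"
curl -i -X DELETE http://localhost/api/example/1
```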
As far as I have been able to find out, the only thing that will be missing in Sitecore after disabling WebDAV is the so-called WebDAV dialog, which can be opened in the media library to make it possible to drag'n'drop media files from the file system into Sitecore.<br />
<br />
Notes:<br />
Procedure devised using Sitecore 7.2Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-36265406163935126052014-07-15T19:49:00.000+02:002014-07-15T21:11:25.750+02:00AutoMapper Children Value ResolverWhen exposing data to the outside world (e.g. through a service) one could easily find oneself thinking about such matters as performance and load on the wire.<br />
<br />
We may have a scenario where we need to expose a customer service, which could be used in different scenarios, where sometimes callers just want customer master data, and at other times callers want customers with their order history. Depending on the system landscape order data may come from another system than where the customer data is stored; and these systems may perform differently, so that retrieval of customer data could be a relatively inexpensive operation, whereas retrieval of order data could be more expensive.<br />
<br />
A common practice when creating services is to transform the entities from the domain into DTO objects, and a widely used component for this is <a href="http://automapper.org/" target="_blank">AutoMapper</a>. But how to get AutoMapper to deal with the scenario above?<br />
<br />
As stated, sometimes we want to expose only customer data, and sometimes order data should be included. The domain may have been implemented as an aggregate, where a customer has a collection of orders, like this:<br />
<br />
<pre class="brush: csharp">public class Order
{
public Guid Id { get; set; }
public DateTime Created { get; set; }
public string Text { get; set; }
}
public class Customer
{
public Guid Id { get; set; }
public string Name { get; set; }
public string Address { get; set; }
    public IEnumerable<Order> Orders { get; set; }
}
</pre>
<div>
<br />
And DTOs like this (for some reason we don't want to expose the internal IDs):<br />
<br />
<pre class="brush: csharp">public class OrderDto
{
public DateTime Created { get; set; }
public string Text { get; set; }
}
public class CustomerDto
{
public string Name { get; set; }
public string Address { get; set; }
    public IEnumerable<OrderDto> Orders { get; set; }
}
</pre>
<br />
The data retrieval code may have been implemented with lazy load, so that order data is only queried if used. However, since AutoMapper will map the Orders collection by default, orders will be queried. So we need to modify the way customers are mapped. To that end I've devised an IValueResolver called ChildrenResolver. It is a general resolver that can be used for any child collection, and it looks like this:<br />
<br />
<pre class="brush: csharp">public class ChildrenResolver<TSource, TMember> : IValueResolver
{
private readonly Func<TSource, IEnumerable<TMember>> _childrenExpression;
public ChildrenResolver(Expression<Func<TSource, IEnumerable<TMember>>> childrenExpression)
{
_childrenExpression = childrenExpression.Compile();
}
public ResolutionResult Resolve(ResolutionResult source)
{
bool includeChildren = false;
if (source.Context.Options.Items.ContainsKey("IncludeChildren"))
{
includeChildren = (bool)source.Context.Options.Items["IncludeChildren"];
}
return source.New(includeChildren ? _childrenExpression.Invoke((TSource)source.Value) : null);
}
}
</pre>
<br />
The constructor takes an expression selecting the children collection member from the source entity, i.e. in our scenario it tells the resolver that we want to map the Orders property of the Customer entity. The Resolve method first looks up an options item called IncludeChildren, which is a boolean that we will set from the outside. It tells the resolver whether or not we want it to resolve the specified children collection property, and if so it returns a ResolutionResult with the children collection.<br />
<br />
The ChildrenResolver is then used when defining a mapping, like this:<br />
<br />
<pre class="brush: csharp">Mapper.CreateMap<Customer, CustomerDto>()
.ForMember(dto => dto.Orders, opt => opt
.ResolveUsing<ChildrenResolver<Customer, Order>>()
.ConstructedBy(() => new ChildrenResolver<Customer, Order>(entity => entity.Orders)));
</pre>
<br />
The mapping defines that we want to map from the Customer entity to CustomerDto, and for the Orders member of the DTO we want to use the ChildrenResolver, which is instructed to grab the Orders collection of the Customer entity (this gives the flexibility of not requiring a one-to-one naming relationship between source and target properties). Notice that the usage of ResolveUsing is a bit more complex than typically seen. Since ChildrenResolver takes a constructor parameter, we need to tell AutoMapper that we will handle the resolver instantiation ourselves, which we do by using the ConstructedBy method.<br />
<br />
Finally, we are ready to use the whole thing in our customer service, which may be a WebApi controller with the following method:<br />
<br />
<pre class="brush: csharp">[Route("api/customers/{id}")]
public CustomerDto GetCustomer(Guid id, bool includeChildren = false)
{
var customer = _customerRepository[id];
if (customer == null)
{
// todo: handle if customer not found
}
var customerDto = Mapper.Map<CustomerDto>(customer, opts =>
{
opts.Items["IncludeChildren"] = includeChildren;
});
return customerDto;
}
</pre>
<br />
Notice that when we do the mapping, we set the IncludeChildren options item to specify whether or not we want children collections mapped, and in this case that information comes from a service parameter.<br />
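The two modes can be exercised from the command line as well. Host, port and GUID below are placeholders, assuming the route defined in the controller above:

```shell
# Customer master data only (cheap: orders are not queried)
curl "http://localhost:5000/api/customers/00000000-0000-0000-0000-000000000001"
# Customer including order history (opts in via the includeChildren parameter)
curl "http://localhost:5000/api/customers/00000000-0000-0000-0000-000000000001?includeChildren=true"
```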
<br />
That's it. I hope someone finds this useful :-)<br />
<br />
Notes:<br />
The code is based on usage of AutoMapper version 3.2.1.</div>
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-20162132823045836832014-03-04T21:50:00.001+01:002014-11-12T20:41:40.797+01:00Dependency injection in Sitecore event handlersFollowing my previous article about <a href="http://maze-dev.blogspot.dk/2014/03/dependency-injection-in-custom-sitecore.html" target="_blank">Dependency injection in Sitecore custom commands</a>, I think it is only appropriate to continue with something similar for Sitecore event handlers. And when I say similar I mean very similar - in fact reading this article after having read the previous one may induce some sense of deja vu :-) To learn more about Sitecore events visit <a href="http://sdn.sitecore.net/Articles/API/Using%20Events.aspx" target="_blank">Using Events</a>.
<br />
<br />
This article assumes an understanding of Sitecore events and the concept of dependency injection. The purpose is to show how to use dependency injection in Sitecore events.
<br />
<br />
The normal way of creating an event handler for a Sitecore event is to create a handler class with an <a href="http://msdn.microsoft.com/en-us/library/system.eventhandler(v=vs.110).aspx" target="_blank">EventHandler</a> delegate, i.e. a method with the EventHandler signature, and then add some config to Sitecore's <events> section defining where to find the event handler implementation, so that Sitecore can instantiate the event handler and trigger the delegate when the event occurs.
<br />
<br />
The problem with this approach is that nowadays it is common to use dependency injection in software solutions, and letting Sitecore take care of creating instances of your custom code means that you lose the possibility of injecting the needed dependencies. Luckily there is also a way out of this morass.
<br />
<br />
Sitecore has created a class called Event, which is used for subscribing, unsubscribing, and raising events. The good news is that it is available for use.<br />
<br />
So here is a suggestion on how to use it to obtain dependency injection in Sitecore events. It is based on using Autofac as IoC container.<br />
<br />
First, create a base class for your event handlers:
<br />
<pre class="brush: csharp">namespace TestApp.Events
{
public abstract class BaseEventHandler
{
public string FullName { get; private set; }
protected BaseEventHandler(string fullName)
{
FullName = fullName;
}
public abstract void OnEvent(object sender, System.EventArgs args);
}
}
</pre>
This base class defines the EventHandler delegate method signature that all derived event handler classes must implement, but it also has one property, FullName, for holding the event name for registration purposes.
<br />
<br />
Next, create your event handler inheriting from BaseEventHandler like this:
<br />
<pre class="brush: csharp">namespace TestApp.Events
{
public class MyEventHandler : BaseEventHandler
{
private readonly IMyDependency _myDependency;
public MyEventHandler(string fullName, IMyDependency myDependency)
: base(fullName)
{
_myDependency = myDependency;
}
public override void OnEvent(object sender, System.EventArgs args)
{
// event handler implementation
}
}
}
</pre>
As you can see we inject a dependency in the constructor. The constructor furthermore calls the base constructor to set the event name.
<br />
<br />
Now, create a class for registering events using the Sitecore Event class:
<br />
<pre class="brush: csharp">namespace TestApp.Events
{
    public static class EventConfigurator
    {
        public static void Configure(System.Collections.Generic.IEnumerable<BaseEventHandler> eventHandlers)
        {
            foreach (var eventHandler in eventHandlers)
            {
                Sitecore.Events.Event.Subscribe(eventHandler.FullName, eventHandler.OnEvent);
            }
        }
    }
}
</pre>
The Configure method takes a collection of BaseEventHandler objects (our event handler instances), then uses the Subscribe method on the Event class to subscribe to the events.
<br />
<br />
That is basically all the pieces we need. We just have to fit everything together in our bootstrapper (the place where all the dependencies are set up using the IoC container). This could look something like this:
<br />
<pre class="brush: csharp">...
var builder = new ContainerBuilder();

builder.RegisterType<MyDependency>().As<IMyDependency>().InstancePerLifetimeScope();

builder.RegisterType<MyEventHandler>().As<BaseEventHandler>().WithParameter("fullName", "mynamespace:mycategory:myevent").InstancePerLifetimeScope();

var rootContainer = builder.Build();

var eventHandlers = rootContainer.Resolve<IEnumerable<BaseEventHandler>>();
EventConfigurator.Configure(eventHandlers);
...
</pre>
So we just register dependencies as usual. The new thing is that we now register our event handlers in code, instead of using a Sitecore config file. And then we call our EventConfigurator with a collection of instances of all our event handlers.
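<br />
To check that the wiring works, an event subscribed this way can also be raised from code using the RaiseEvent method on the same Event class. A minimal sketch - the empty parameters array is just for illustration, pass whatever your handlers expect:
<pre class="brush: csharp">// The event name must match the fullName used when registering the handler.
Sitecore.Events.Event.RaiseEvent("mynamespace:mycategory:myevent", new object[0]);
</pre>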
<br />
<br />
That's it. Plain and simple :-)
<br />
<br />
<b>Update:</b><br />
Please note that since the event handlers are resolved only once (at app startup), any injected dependencies are effectively singletons.Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com5tag:blogger.com,1999:blog-1866493210899772308.post-20398132601130959222014-03-01T13:44:00.000+01:002014-03-04T21:40:59.694+01:00Dependency injection in custom Sitecore commandsCustom commands are probably one of the more ignored features in Sitecore, but they can be quite powerful. They could for example be used for insert options on templates, thereby allowing code to run on creation of content items. To learn more about commands (or specifically Command Templates), see chapter 4 of the "Sitecore CMS 6.0 or later Data Definition Cookbook".
<br /><br />
This article assumes an understanding of Command Templates and the concept of dependency injection. The purpose is to show how to use dependency injection in custom commands in Sitecore.
<br /><br />
The normal way of creating a Sitecore custom command is to create a class inheriting from Sitecore.Shell.Framework.Commands.Command, overriding the Execute method, and then adding some config to Sitecore's <commands> section defining where to find the command implementation, so that Sitecore can instantiate the command.
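<br /><br />
In config terms, such a classic registration might look something like this (a sketch only - the command name, type, and assembly are hypothetical placeholders, not taken from a real solution):
<pre class="brush: xml"><commands>
  <!-- Sitecore instantiates this type itself, so it needs a parameterless constructor -->
  <command name="mynamespace:mycategory:mycommand" type="TestApp.Commands.ClassicCommand, TestApp" />
</commands>
</pre>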
<br /><br />
The problem with this approach is that nowadays it is common to use dependency injection in software solutions, and letting Sitecore take care of creating instances of your custom code means that you lose the ability to inject the needed dependencies. Luckily there is a way out of this morass.
<br /><br />
Sitecore has created something called a CommandManager, which is used for registering, instantiating and looking up commands. The good news is that it is available for use.<br />
<br />
So here is a suggestion on how to use it to obtain dependency injection in custom commands. It is based on using Autofac as IoC container.<br />
<br />
First, create a base class for your commands:
<pre class="brush: csharp">
namespace TestApp.Commands
{
    public abstract class BaseCommand : Sitecore.Shell.Framework.Commands.Command
    {
        public string FullName { get; private set; }

        protected BaseCommand(string fullName)
        {
            FullName = fullName;
        }
    }
}
</pre>
Basically, we will just use this base class to "label" our custom commands, but it also has one property, FullName, for holding the command name for registration purposes.
<br /><br />
Next, create your custom command inheriting from BaseCommand like this:
<pre class="brush: csharp">
namespace TestApp.Commands
{
    public class MyCommand : BaseCommand
    {
        private readonly IMyDependency _myDependency;

        public MyCommand(string fullName, IMyDependency myDependency)
            : base(fullName)
        {
            _myDependency = myDependency;
        }

        public override void Execute(Sitecore.Shell.Framework.Commands.CommandContext context)
        {
            // command implementation
        }
    }
}
</pre>
As you can see we inject a dependency in the constructor. The constructor furthermore calls the base constructor to set the command name.
<br /><br />
Now, create a class for registering custom commands using the Sitecore CommandManager:
<pre class="brush: csharp">
namespace TestApp.Commands
{
    public static class CommandConfigurator
    {
        public static void Configure(IEnumerable<BaseCommand> commands)
        {
            foreach (var command in commands)
            {
                Sitecore.Shell.Framework.Commands.CommandManager.RegisterCommand(command.FullName, command);
            }
        }
    }
}
</pre>
The Configure method takes a collection of BaseCommand objects (our custom command instances), then uses the RegisterCommand method on the CommandManager to register the commands.
<br /><br />
That is basically all the pieces we need. We just have to fit everything together in our bootstrapper (the place where all the dependencies are set up using the IoC container). This could look something like this:
<pre class="brush: csharp">
...
var builder = new ContainerBuilder();

builder.RegisterType<MyDependency>().As<IMyDependency>().InstancePerLifetimeScope();

builder.RegisterType<MyCommand>().As<BaseCommand>().WithParameter("fullName", "mynamespace:mycategory:mycommand").InstancePerLifetimeScope();

var rootContainer = builder.Build();

var commands = rootContainer.Resolve<IEnumerable<BaseCommand>>();
CommandConfigurator.Configure(commands);
...
</pre>
So we just register dependencies as usual. The new thing is that we now register our custom commands in code, instead of using a Sitecore config file. And then we call our CommandConfigurator with a collection of instances of all our custom commands.
<br /><br />
That's it. Plain and simple :-)Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-32487792976634824572013-12-10T14:41:00.000+01:002013-12-10T14:41:57.864+01:00Syncing a forked git repository with changes in the original repositoryUsing the built-in support for Git in Visual Studio 2013 is great... to a certain extent. Because eventually you will need to do some Git operation that is simply not supported. Having used Git on only one project so far, it has already happened twice that the built-in functionality wasn't enough.<br /><br />
The first situation was when I needed to revert a pushed commit. I looked and looked in VS to locate such an action, but to no avail. Searching the Internet, it turned out people recommend using the command prompt. I opened a standard command prompt from the Windows start menu and tried out the recommended Git commands, only to find that they didn't work because the prompt wasn't running in the context of a repository - and as I remember it, this was the case even when I navigated to the correct 'workspace' folder. After some more research it turned out that it is possible to open a Git-ish command prompt from VS (it is in the 'action' menu, available in many places in the built-in Git client). Opening the command prompt this way places it in the correct context (I'm not entirely sure how - maybe some environment variable is set). Anyway, I was able to do the revert from there.<br /><br />
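For the record, the revert itself boils down to something like this once the prompt is in the context of a repository. The sketch below runs against a throw-away local repository, so all names and paths in it are made up for the demonstration:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"

# A throw-away repository with a good commit and a bad commit
git init -q -b master
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "good commit"
echo "oops" > mistake.txt
git add mistake.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "bad commit"

# Revert the bad commit: this adds a NEW commit that undoes it, which is
# the safe way to back out a commit that has already been pushed.
git -c user.name=demo -c user.email=demo@example.com revert --no-edit HEAD
```

The point of `git revert` (as opposed to rewriting history with a reset) is that it only appends, so it is safe for commits other people may already have pulled.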
But this is really not what this article is about :-) No, what I want to describe here is how to synchronize a forked Git repository with changes in the original repository. So again I start a Git-ish command prompt, e.g. from the 'Unsynched Commits' page in the Git client in VS. From there I execute the following commands:<br />
<br />
First, link our repository with the remote (the original):
<br /><code>
git remote add upstream [full-git-url-to-remote-repo]
</code><br />
Now, fetch changes:
<br /><code>
git fetch upstream
</code><br />
Finally, do a merge, where [branch] can be master or some other branch:
<br /><code>
git merge upstream/[branch]
</code><br />
That should do the trick. If there are any conflicts, they will now show up in VS and you can resolve them from there.
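<br /><br />
The whole sequence can also be tried end-to-end without touching a real remote, by letting two local repositories stand in for the original repository and the fork. Everything below is a sketch - all names and paths are made up for the demonstration:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the original repository, with one commit
git init -q -b master original
git -C original -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# "Fork" it by cloning, then let the original move ahead by one commit
git clone -q original fork
git -C original -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream change"

# Sync the fork: link it with the original, fetch, and merge
cd fork
git remote add upstream ../original
git fetch -q upstream
git merge -q upstream/master

git log --oneline   # both commits are now present in the fork
```

The merge fast-forwards here because the fork has no commits of its own; with local commits on both sides you would get a normal merge (and possibly conflicts) instead.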
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-23534907132079296592013-10-24T09:13:00.001+02:002013-10-24T09:13:53.929+02:00Installing Solr 4.5 as a Windows serviceI've recently had the good fortune to use the Solr search platform for a project. One thing, though, that is not straightforward is how to install Solr as a Windows service. There are several articles on the internet, but I couldn't find any that worked with Solr 4.5. So I gathered information from a number of outdated articles and came up with the following approach, which has worked fine for me:
<br />
<ol>
<li>Download Solr 4.5 (http://lucene.apache.org/solr/)</li>
<li>Download the Non-Sucking Service Manager NSSM (http://nssm.cc/)</li>
<li>Create a folder to be used for the Solr service (e.g. D:\solr). From now on this will be called <solrdir></li>
<li>From the example folder of the Solr package copy the following files and folders to <solrdir>:</li>
<ol>
<li>etc</li>
<li>lib</li>
<li>logs</li>
<li>solr</li>
<li>webapps</li>
<li>start.jar</li>
</ol>
<li>In a command prompt go to <solrdir> and run the command <code>java -jar start.jar</code> to check that the Solr installation works. If it works then just stop it again.</li>
<li>From the NSSM package copy nssm.exe to <solrdir></li>
<li>In a command prompt go to <solrdir> and run the command below. Note: it may be necessary to replace back-slashes in <solrdir> with forward-slashes.<br/><code>nssm.exe install Solr C:\Windows\System32\java.exe "-Dsolr.solr.home=<solrdir>/solr -Djetty.home=<solrdir>/ -Djetty.logs=<solrdir>/logs/ -cp <solrdir>/lib/*.jar;<solrdir>/start.jar -jar <solrdir>/start.jar"</code></li>
</ol>
To remove the service run the following command:
<br/>
<code>nssm.exe remove Solr</code>
<br/>
<br/>
For convenience I've created the following batch script, which performs the nssm.exe install part. The script should be placed in <solrdir>.
<br/>
<code>
@echo off<br/>
set currentdir=%~dp0<br/>
set parameters=-Dsolr.solr.home=""%currentdir%solr"" -Djetty.home=""%currentdir%"" -Djetty.logs=""%currentdir%logs/"" -cp ""%currentdir%lib/*.jar;%currentdir%start.jar"" -jar ""%currentdir%start.jar""<br/>
<br/>
REM replace back-slashes with forward-slashes<br/>
set javaWeirdnessParameters=%parameters:\=/%<br/>
<br/>
nssm.exe install Solr C:\Windows\System32\java.exe "%javaWeirdnessParameters%"<br/>
</code>
Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-39914289682910646342011-03-23T17:21:00.000+01:002011-03-23T17:51:28.787+01:00NServiceBus and SQLite in .NET 4.0I've just used the better part of a day trying to figure out how to get System.Data.SQLite.dll to work with NServiceBus in a .NET 4.0 project in VS2010.<div><br /></div><div>The problem is if you just set it up like normal, you'll get the following error:</div><div>"Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information."</div><div><br /></div><div>This is because there's no .NET 4.0 version of the dll released yet (2011-03-23). There's only a .NET 2.0 version and it contains unmanaged code. So what we must do is to explicitly tell the project that it is ok to use the dll by adding some configuration. This solution is documented in many articles on the Internet, but I could not get it to work with NServiceBus - or rather the NServiceBus.Host.exe, to be precise, which actually makes all the difference.</div><div><br /></div><div>As mentioned in various articles on the Internet I put the configuration in app.config of the project and then exactly nothing happened. After much frustration I stumbled over this article:<br /><a href="http://tech.dir.groups.yahoo.com/group/nservicebus/message/8951">http://tech.dir.groups.yahoo.com/group/nservicebus/message/8951</a></div><div><br /></div><div>The trick is to put the configuration in the <strong>NServiceBus.Host.exe.config</strong> file like this:</div><br /><pre class="brush: xml"><br /><?xml version="1.0" encoding="utf-8" ?><br /><configuration><br /> <!-- This is needed in order to get System.Data.SQLite to work in .NET 4.0, at least<br /> until a .NET 4.0 compatible version of System.Data.SQLite is released. 
--><br /> <startup useLegacyV2RuntimeActivationPolicy="true"><br /> <supportedRuntime version="v4.0"/><br /> </startup><br /></configuration><br /></pre><br /><div></div><div>NOTE: you must also ensure that the "Copy to Output Directory" setting for the config file is set to one of the copy options.</div><div><br /></div><div>This did the trick.</div>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-6517420310099378492011-03-18T09:44:00.000+01:002011-03-18T10:00:59.051+01:00Fuzzy text in VS2010So I finally moved to Visual Studio 2010. This is a very nice IDE, but one thing that annoys me is that the text (code) seems fuzzy compared to VS2008. My eyes don't like it.<div><br /></div><div>I googled it and there were several suggestions of using the free Visual Studio theme editor "Visual Studio Color Theme Editor":</div><div><a href="http://visualstudiogallery.msdn.microsoft.com/20cd93a2-c435-4d00-a797-499f16402378/">http://visualstudiogallery.msdn.microsoft.com/20cd93a2-c435-4d00-a797-499f16402378/</a></div><div><br /></div><div>However, this only allows for changing the theme of everything around the code area - not the code area itself.</div><div><br /></div><div>Then I found some articles about the fonts used in VS2010 and it turned out that it uses a font called Consolas for the code whereas VS2008 uses Courier New. So I simply changed Consolas to Courier New in VS2010 and my eyes were instantly happy :-)</div>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-35864696651617957672010-10-26T13:10:00.000+02:002010-11-20T18:24:49.660+01:00Transforming XML containing namespacesWhenever I get the task of transforming some XML to some other XML using XSL and namespaces are involved it is always a struggle for some reason. 
And I've experienced that I'm not the only one having problems with that.<div>Most examples of XML transformation that I stumble over on the web deal with transforming XML to HTML and almost never use namespaces. Therefore I've decided to write a little article describing how to transform from XML to XML, including the use of namespaces.</div><div><br /></div><div><b>The task</b></div><div>I want to transform an XML document containing information about CDs from one format into another using XSL. The source XML will use one namespace and the output XML another.</div><div><br /></div><div><b>Source XML document</b></div><div>This is the source XML document that I want to transform. Notice that it uses the "http://schemas.maze-dev.blogspot.com/2010/catalog" namespace.</div><pre class="brush: xml"><br /><?xml version="1.0" encoding="ISO-8859-1"?><br /><catalog xmlns="http://schemas.maze-dev.blogspot.com/2010/catalog"><br /> <name>Absolute Whatever Vol. 1</name><br /> <cd><br /> <title>Empire Burlesque</title><br /> <artist>Bob Dylan</artist><br /> <country>USA</country><br /> <company>Columbia</company><br /> <price>10.90</price><br /> <year>1985</year><br /> </cd><br /> <cd><br /> <title>Hide your heart</title><br /> <artist>Bonnie Tyler</artist><br /> <country>UK</country><br /> <company>CBS Records</company><br /> <price>9.90</price><br /> <year>1988</year><br /> </cd><br /> <cd><br /> <title>Greatest Hits</title><br /> <artist>Dolly Parton</artist><br /> <country>USA</country><br /> <company>RCA</company><br /> <price>9.90</price><br /> <year>1982</year><br /> </cd><br /></catalog><br /></pre><b>XSL transformation document</b><div>This is the transformation document. The xmlns:xsl="http://www.w3.org/1999/XSL/Transform" defines the standard 'xsl' transform namespace. However, the interesting part is xmlns="http://schemas.maze-dev.blogspot.com/2010/archive" which defines the output namespace (i.e. 
namespace in output XML) and xmlns:cat="http://schemas.maze-dev.blogspot.com/2010/catalog" which defines the namespace 'cat' used to reference elements in the input XML. The attribute exclude-result-prefixes="cat" states that the 'cat' namespace should not be included in the output XML.</div><pre class="brush: xml"><br /><?xml version="1.0" encoding="ISO-8859-1"?><br /><xsl:stylesheet version="1.0" <br /> xmlns:xsl="http://www.w3.org/1999/XSL/Transform"<br /> xmlns="http://schemas.maze-dev.blogspot.com/2010/archive"<br /> xmlns:cat="http://schemas.maze-dev.blogspot.com/2010/catalog" exclude-result-prefixes="cat"><br /> <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/><br /><br /> <xsl:template match="/"><br /> <archive><br /> <xsl:apply-templates select="cat:catalog"/><br /> </archive><br /> </xsl:template><br /> <xsl:template match="cat:catalog"><br /> <name><xsl:value-of select="cat:name"/></name><br /> <xsl:apply-templates select="cat:cd"/><br /> </xsl:template><br /> <xsl:template match="cat:cd"><br /> <album><br /> <title>Title: <xsl:value-of select="cat:title"/></title><br /> <artist>Artist: <xsl:value-of select="cat:artist"/></artist><br /> </album><br /> </xsl:template><br /></xsl:stylesheet><br /></pre><b>Output XML document</b><div>This is the output XML document. Notice that it uses the "http://schemas.maze-dev.blogspot.com/2010/archive" namespace that was specified in the XSL document.</div><pre class="brush: xml"><br /><?xml version="1.0" encoding="UTF-8"?><br /><archive xmlns="http://schemas.maze-dev.blogspot.com/2010/archive"><br /> <name>Absolute Whatever Vol. 
1</name><br /> <album><br /> <title>Title: Empire Burlesque</title><br /> <artist>Artist: Bob Dylan</artist><br /> </album><br /> <album><br /> <title>Title: Hide your heart</title><br /> <artist>Artist: Bonnie Tyler</artist><br /> </album><br /> <album><br /> <title>Title: Greatest Hits</title><br /> <artist>Artist: Dolly Parton</artist><br /> </album><br /></archive><br /></pre><div>So basically, that's it. Hope somebody finds it useful.<br /></div>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-38543979023628269222010-10-25T10:14:00.000+02:002010-10-25T10:59:33.180+02:00"Specified Cast is Not Valid" - LinqToSql error on submit<div>While working on a project using LinqToSql I got the error "Specified Cast is Not Valid" when calling SubmitChanges.</div><div><br /></div><div>After searching a bit on the web I found out the error is due to a problem in .NET Framework 3.5. Luckily there is a hotfix that addresses this problem. 
Here are some articles about the problem:</div><div><br /></div><div><a href="http://connect.microsoft.com/VisualStudio/feedback/details/351358/invalidcastexception-on-linq-db-submit-with-non-integer-key">http://connect.microsoft.com/VisualStudio/feedback/details/351358/invalidcastexception-on-linq-db-submit-with-non-integer-key</a></div><div><br /></div><div><a href="http://rexcitations.wordpress.com/2009/08/15/specified-cast-is-not-valid-error-using-linq-with-foreign-keys/">http://rexcitations.wordpress.com/2009/08/15/specified-cast-is-not-valid-error-using-linq-with-foreign-keys/</a></div><div><br /></div><div><div><a href="http://www.mha.dk/category/SQLDatabase.aspx">http://www.mha.dk/category/SQLDatabase.aspx</a></div></div><div><br /></div><div>The first article is very specific:</div><div>"When database contains two tables, both with automatic integer primary keys, and a relationship between a unique char field in one and a non-unique char field in the other, inserting new rows into the second table fails on submit with InvalidCastException."</div><div><br /></div><div>This is a bit too specific as to the usage of int and char, because the error occurs in other situations as well which can be seen in the second article. Here the error occurs using GUID and int. 
And to top that up, in my own situation the error occurred using int and long.</div><div><br /></div><div>The third article has a link from which to download a hotfix to solve the problem.</div><div><br /></div><div>If you want to obtain the hotfix in a more official way you can use this link:</div><div><a href="http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=963657&kbln=en-us">http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=963657&kbln=en-us</a></div><div><br /></div>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0tag:blogger.com,1999:blog-1866493210899772308.post-87796733722350934202010-06-14T22:14:00.000+02:002010-10-20T21:16:09.071+02:00DataGridView with Typed DataSetI was recently working on an assignment creating a Windows Forms application with a DataGridView being fed data from some object collection, and NOT (as is the case with virtually all samples that can be found on the Internet) using the SqlDataAdapter.<br /><br />Even though this is a pretty simple task once you've found out how to do it, I did find it a bit tricky to get to work. So here's how it can be done.<br /><ol><li>Start Visual Studio and create a new Windows Forms application.</li><li>Add a DataSet to the project by right-clicking on the project and selecting Add -> New Item and selecting the DataSet template naming it ContactDataSet. This should bring up the DataSet designer.</li><li>Add a new DataTable to the designer area using the Toolbox. 
Name it Contact and add some columns to table (see details such as column data types in the sample code)<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_OJT0xBUJE28/TBaX0uOui6I/AAAAAAAABJg/qj6GkC8YtLI/s1600/Contact_DataTable.png"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 188px; height: 211px;" src="http://1.bp.blogspot.com/_OJT0xBUJE28/TBaX0uOui6I/AAAAAAAABJg/qj6GkC8YtLI/s400/Contact_DataTable.png" alt="" id="BLOGGER_PHOTO_ID_5482736528388950946" border="0" /></a></li><li>Add a new class to the project and name it Address:<br /><pre class="brush: csharp"><br />public class Address<br />{<br /> public long Id { get; set; }<br /> public string Street { get; set; }<br /> public int HouseNumber { get; set; }<br /> public string PostalCode { get; set; }<br /> public string PostalDistrict { get; set; }<br />}<br /></pre><br /></li><li>Add another new class to the project and name it Contact:<br /><pre class="brush: csharp"><br />public class Contact<br />{<br /> public long Id { get; set; }<br /> public string FirstName { get; set; }<br /> public string LastName { get; set; }<br /> public int? 
Age { get; set; }<br /> public Address Address { get; set; }<br />}<br /></pre><br /></li><br /><li>In Form1.cs add this private member:<br /><pre class="brush: csharp"><br />private ContactDataSet _contactDataSet = new ContactDataSet();<br /></pre><br /></li><br /><li>Also in Form1.cs add this method for generating a collection of test data:<br /><pre class="brush: csharp"><br />private IList<Contact> GenerateData()<br />{<br /> IList<Contact> contactList = new List<Contact>();<br /> contactList.Add(new Contact<br /> {<br /> Id = 1,<br /> FirstName = "Peter",<br /> LastName = "Jackson",<br /> Age = 53,<br /> Address = new Address<br /> {<br /> Street = "Long Road",<br /> HouseNumber = 173,<br /> PostalCode = "12345",<br /> PostalDistrict = "Fake Town"<br /> }<br /> });<br /> contactList.Add(new Contact<br /> {<br /> Id = 2,<br /> FirstName = "Pavlov",<br /> LastName = "Ivanowich",<br /> Address = new Address<br /> {<br /> Street = "Park Avenue",<br /> HouseNumber = 1011,<br /> PostalCode = "98765",<br /> PostalDistrict = "Imaginaryville"<br /> }<br /> });<br /> contactList.Add(new Contact<br /> {<br /> Id = 3,<br /> FirstName = "Onslow",<br /> LastName = "Bucket",<br /> });<br /> return contactList;<br />}<br /></pre><br /></li><li>Furthermore, add this method for filling the DataSet with data from a collection:<br /><pre class="brush: csharp"><br />private void FillDataSet(IList<Contact> contactList)<br />{<br /> foreach (Contact contact in contactList)<br /> {<br /> ContactDataSet.ContactRow row = _contactDataSet.Contact.NewContactRow();<br /> row.FirstName = contact.FirstName;<br /> row.LastName = contact.LastName;<br /> if (contact.Age.HasValue)<br /> {<br /> row.Age = contact.Age.Value;<br /> }<br /> if (contact.Address != null)<br /> {<br /> row.Street = contact.Address.Street;<br /> row.HouseNumber = contact.Address.HouseNumber;<br /> row.PostalCode = contact.Address.PostalCode;<br /> row.PostalDistrict = contact.Address.PostalDistrict;<br /> }<br /> 
_contactDataSet.Contact.AddContactRow(row);<br /> }<br /><br /> // OK, so we have filled in the data we want to start up with.<br /> // We must now call AcceptChanges so that this data isn't change-tracked.<br /> _contactDataSet.AcceptChanges();<br />}<br /></pre><br /></li><li>Go to the Form Designer and add a BindingSource and name it contactBindingSource. Set the ContactDataSet as DataSource and Contact as DataMember.</li><li>In the Form Designer also add a DataGridView and set contactBindingSource as DataSource.<br /></li><li>In the Form1.cs add this code in the Load event handler:<br /><pre class="brush: csharp"><br />IList<Contact> contactList = GenerateData();<br />FillDataSet(contactList);<br /><br />contactBindingSource.DataSource = _contactDataSet;<br /></pre><br /></li><li>Run the application. The DataGridView should now display the test data.</li></ol>Mazehttp://www.blogger.com/profile/01728030857336570214noreply@blogger.com0