Why a Mac?

Today I got this question via Twitter:

@dnagir hey, just curious how do you feel about your Mac? What’ good/bad? I kind of look at getting one, but never used macs before.

And since nothing has happened on this blog for a long time, I thought it would be best to answer his questions here.

First I want to tell you that I am not an Apple fanboy, and I am sure I don't want to become one. A colleague once told me that I am a Microsoft fanboy, but that is not true. I like a lot of different kinds of technologies, not a particular software or hardware vendor. I still don't like iOS, so I will not buy an i* device in the near future.

So why did I buy a MacBook?

There are two distinct things that pushed me in that direction. Last year I visited the .NET OpenSpace 2011 in Leipzig. There I met a lot of cool guys, and even though I had never met any of them before, they made me feel like they had been my friends for a long time. Then in July I visited the .NET OpenSpace in Karlsruhe and met a lot of them again.
Especially Björn Rochel and Alexander Groß. They gave me a fresh look beyond my own plate, into the Ruby and Vim world. Björn reminded me that it's good to change direction from time to time and try something new.
In addition, I've started playing with Node.js, which runs from the command line much like Ruby does. It is starting to work on Windows, but if you really want to have fun with it, you need a Unix-based platform (for now). So I hammered Björn with a lot of questions about his MacBook, and he answered all of them with a lot of patience.

The second thing is that my father bought a Mac a while ago, and he has bothered me ever since to buy one too, so that I can help him answer his technical questions.

Then serendipity came in: he got a very interesting offer to buy a new Mac under special conditions, and he asked me if I wanted to join in. My old notebook (a ThinkPad T60p) is now four years old. Its third battery is empty before I arrive at work, and the CPU cannot play 1080p videos and compile at the same time 🙂 So after a quick look through the Apple store and a little bit of thinking, my answer this time was yes.

How do you feel about your Mac?

Hmm, 50% I like it and 50% I hate it. I like the interface and how it looks and feels. Even with mainly Windows and Debian experience, it's simple to get most stuff done. You don't have as many options to configure the interface as you do in Windows, but I don't think you need that much configuration. I also like the touchpad. On my T60p I mostly worked only with the TrackPoint, because the little PC touchpads are too imprecise to work with. I think that is because they are only half the size of the Mac touchpad. The gestures on the Mac touchpad are very cool: you can scroll with two fingers, switch desktops with three, and a lot more.

The biggest pro of a Mac is its case. It's only half as high as my old T60p's, and it looks and feels very precious. I never understood why only Apple ever built computers that look good?!? All the other notebooks available look ugly as hell. My T60p was never a beauty either.

What is bad?

First of all, the keyboard (for now). While the keys have very good response and it feels very good to type on them, the keyboard layout is a real showstopper. The Mac uses its Windows key (the Cmd key) a lot more than Windows does, so some keystrokes like Ctrl+C become Cmd+C, and it also seems that Ctrl and Alt are swapped compared to what I know from Windows. A big problem is also that a few keys are missing entirely, like Del, Page Up, and Page Down. And others, like @ and ~, are in completely different places. But I guess it's only a matter of getting used to it.

Other problematic things: a lot of good software is neither free nor cheap. In some places the Finder behaves differently from what I know. For example, if you replace a folder on Windows, it gets merged with the existing folder; on the Mac, that means delete and copy.

Which MacBook did I buy?

A MacBook Pro 15″ with the anti-glare display. I want to use it on trains, where you have problems seeing anything on a glossy display in direct sunlight. The only downside is that it has a gray border around the display, which does not look as cool as the glossy one, which is black.

It has only the 2.2 GHz CPU. 200€ is a lot of money for 0.1 GHz and a little bit more cache. I bought only 4 GB of RAM, because Apple sells 8 GB for 200€, which you can get on Amazon for 52€.

I bought it with the 500 GB HDD and upgraded it myself to a 240 GB Kingston HyperX SSD, which is as fast as the Vertex 3 but seems more robust. A friend of mine who works for a system house said that they get nearly no returned Kingston drives, but a lot from OCZ.

I typed this post very quickly to get it done in a short time frame, so please don't blame me for the typos. I definitely need to blog a little bit more, but I am the type of person who would rather read than write.

Update: Changed the last paragraph because it was misunderstood. The Kingston SSD has only a three-year warranty, not five.

How Autofac’s Dynamic instantiation solves the service locator problem for ViewModels

Yesterday I blogged about why I moved to Autofac. Now I want to show a really nice feature of Autofac that solves the service locator problem when you work with child ViewModels.

Let me give this example:

 
public class OrderViewModel
{
    public ObservableCollection<OrderLineViewModel> OrderLines { get; set; }

    public OrderViewModel(OrderDao orderDao)
    {
        OrderLines = new ObservableCollection<OrderLineViewModel>();

        foreach (var orderLine in orderDao.GetOrderLines())
        {
            var viewModel = IoC.Resolve<OrderLineViewModel>(orderLine);
            OrderLines.Add(viewModel);
        }
    }
}

What is the problem here? We need to create an OrderLineViewModel for each OrderLine we get from the database. We cannot know how many OrderLines we will need at build time of the OrderViewModel, so we cannot use constructor injection for that.

We have to use IoC.Resolve, which is commonly known as the service locator pattern. The real problem is that such a call fails (because of missing dependencies) only when it is actually executed, not at the point where the constructor of the root object is resolved. In this example that does not weigh much, but if this happens in code that is used only once a month, you have a problem.

So here is how Autofac solves that:

 
public class OrderViewModel
{
    public ObservableCollection<OrderLineViewModel> OrderLines { get; set; }

    public OrderViewModel(OrderDao orderDao, Func<OrderLine, OrderLineViewModel> orderLineViewModelFactory)
    {
        OrderLines = new ObservableCollection<OrderLineViewModel>();

        foreach (var orderLine in orderDao.GetOrderLines())
        {
            var viewModel = orderLineViewModelFactory(orderLine);
            OrderLines.Add(viewModel);
        }
    }
}

What is the difference? The Func is a dynamic factory provided by Autofac. Instead of hiding the lookup until call time, Autofac now knows at resolve time of the root object's constructor which types you will need at runtime, so it can verify that all dependencies can be satisfied right there.
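For completeness, here is a sketch of the registration side (type names follow the example above; no special factory registration is needed, since Autofac generates the factory delegate on its own):

```csharp
var builder = new ContainerBuilder();

// Registering the concrete types is enough; Autofac provides
// the factory delegate for OrderLineViewModel automatically.
builder.RegisterType<OrderViewModel>();
builder.RegisterType<OrderLineViewModel>();
builder.RegisterType<OrderDao>();

var container = builder.Build();

// This line fails immediately if OrderLineViewModel has a missing dependency.
var orderViewModel = container.Resolve<OrderViewModel>();
```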

So what happens is that it now fails at the root of your application, not one month later when your client opens that one view with the missing dependencies.

In the same way, Autofac supports Lazy&lt;T&gt;, which is new in .NET 4.0 and resolves once on the first call. You can read more about that in this blog post from Nicholas Blumhardt, the author of Autofac.
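As a minimal sketch of that (the ExpensiveService name is made up for illustration): a consumer asks for Lazy&lt;ExpensiveService&gt;, and Autofac defers creating the service until the first access of .Value.

```csharp
public class ReportViewModel
{
    private readonly Lazy<ExpensiveService> reportService;

    // Autofac supplies the Lazy<T> automatically when ExpensiveService is registered.
    public ReportViewModel(Lazy<ExpensiveService> reportService)
    {
        this.reportService = reportService;
    }

    public void PrintReport()
    {
        // ExpensiveService is constructed here, on the first access of .Value.
        reportService.Value.Print();
    }
}
```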

Why I moved from Windsor to Autofac

A long while ago, when I started with dependency injection and inversion of control, I started with the container most people used at that time. Since I read a lot of Ayende, I naturally started with Castle Windsor.

Now, after a long time of using it, it does not feel very good anymore. The registration interface is fluent, but looks ugly, and it is not very clear what a lot of the methods on the configuration interface do. In the end Windsor feels old and ugly (like MonoRail compared to MVC).

To look at the other containers, I first have to define my requirements:

  • It should be simple
  • It should be extendable
  • It should have documentation

Now, what containers exist today? Here is a list of the best-known ones with some initial thoughts:

  • Spring.Net
    • Large and cumbersome port from Java.
    • Looks like it has a lot of things I never need
  • StructureMap
    • Modern, but I don’t like its overly fluent interface (I don’t consider fluent interfaces that try to build sentences very helpful)
  • Ninject
    • Small, but I saw a lot of attributes in the sample code (I don’t like doing everything with attributes since CAB)
    • The same concern about the fluent interface as with StructureMap
  • Autofac
    • Small and very extendable codebase
    • Interface looks simple and clean

So it seems Autofac is the right one to try. And after using it now for some time, I really, really like it.

Autofac has a clean and simple core which can be easily extended, and Autofac itself makes a lot of use of that. It has nearly all the functions that Windsor and the other containers have, but as extensions to the core. Examples are XML configuration, ASP.NET MVC integration, proxies, and a lot more. Autofac can also create components with parameters matched by type or position, not only by name like Windsor does.

So how does a simple registration look in Autofac?

 
var builder = new ContainerBuilder();

builder.RegisterType<MyComponent>()
    .As<IMyComponent>()
    .SingleInstance();

var container = builder.Build();

container.Resolve<IMyComponent>();

I also like the way you resolve multiple instances:

 
var assembly = Assembly.GetExecutingAssembly();

builder.RegisterAssemblyTypes(assembly)
    .Where(t => t.Name.EndsWith("Rule"))
    .As<IOrderRule>();

var container = builder.Build();

var rules = container.Resolve<IEnumerable<IOrderRule>>();

Conclusion

In the end it's mostly a matter of taste, yes, but I can only suggest that you try it out yourself.

Apply Command InputBindings on styles

This one is a little bit more tricky than the CommandProxy I’ve written about before, because the CommandProxy does not work here.

First, you simply cannot apply InputBindings from styles, because the property has no setter. You are only able to add bindings to the collection, but there is no way to do this in styles.
Second, you cannot place the resource for the CommandProxy inside the style.

To make that work, you need a property which you can set in the style. And this is my solution: create an attached dependency property which sets the InputBinding for you.

As an example I use the LeftDoubleClick MouseGesture.

 
public static class MouseCommands
{
    private static void LeftDoubleClickChanged(DependencyObject sender, DependencyPropertyChangedEventArgs args)
    {
        var control = (Control)sender;

        if (args.NewValue is ICommand)
        {
            var newBinding = new MouseBinding((ICommand)args.NewValue, new MouseGesture(MouseAction.LeftDoubleClick));
            control.InputBindings.Add(newBinding);
        }
        else
        {
            var oldBinding = control.InputBindings.Cast<InputBinding>().First(b => b.Command.Equals(args.OldValue));
            control.InputBindings.Remove(oldBinding);
        }
    }

    public static readonly DependencyProperty LeftDoubleClickProperty =
        DependencyProperty.RegisterAttached("LeftDoubleClick",
            typeof(ICommand),
            typeof(MouseCommands),
            new UIPropertyMetadata(LeftDoubleClickChanged));

    public static void SetLeftDoubleClick(DependencyObject obj, ICommand value)
    {
        obj.SetValue(LeftDoubleClickProperty, value);
    }

    public static ICommand GetLeftDoubleClick(DependencyObject obj)
    {
        return (ICommand)obj.GetValue(LeftDoubleClickProperty);
    }
}

And then you can apply it via a style. In my example, for a ListBoxItem in its ItemContainerStyle.

 
<Style x:Key="ListBoxItemStyle" TargetType="{x:Type ListBoxItem}">
    <Setter Property="Shared:MouseCommands.LeftDoubleClick" Value="{Binding AddCommand}" />
</Style>

And that's it. Yes, you have to create an attached dependency property for each mouse or key binding you want to use, but most of the time you do not need all of them.

Commit with TortoiseGit / TortoiseSVN directly from Visual Studio

Albert Weinert has written a nice post about integrating the Git command line interface into Visual Studio without an additional plugin. After reading it, I thought about it for a while. In both TortoiseGit and TortoiseSVN you can do all the common operations directly from the commit window. So the best way should be to open the commit window directly with a shortcut in Visual Studio.

And here is how to do it.

Open Tools->External Tools


Add a new entry. For the command, go to your Tortoise installation directory and select TortoiseProc.exe. The arguments have to be /path:"$(SolutionDir)" /command:commit. This assumes that your *.sln file lives directly in your project root; otherwise you cannot commit changes that were made outside of your source directory. To fix this, you could append .. after the $(SolutionDir) variable.

After this, you can open the commit window directly from your Visual Studio Tools menu. To assign a keyboard shortcut to it, you have to count the entries in your External Tools list. In my case the entry is at position 3.

So open Tools->Options and go to Environment -> Keyboard.


Now search for Tools.ExternalCommand3 (where 3 is the position of your tool). There you can assign a keyboard shortcut to it.

I prefer Ctrl+C, Ctrl+G for TortoiseGit and Ctrl+C, Ctrl+S for TortoiseSVN.

Howto databind an ICommand to an InputBinding – Update

I have nice news: as I read on the What’s New in WPF Version 4 page on MSDN, data binding to InputBindings is now fully supported in WPF 4.

Bind to commands on InputBinding

You can bind the Command property of an InputBinding class to an instance that is defined in code. The following properties are dependency properties, so that they can be targets of bindings:

    * InputBinding.Command
    * InputBinding.CommandParameter
    * InputBinding.CommandTarget
    * KeyBinding.Key
    * KeyBinding.Modifiers
    * MouseBinding.MouseAction

The InputBinding, MouseBinding, and KeyBinding classes receive data context from the owning FrameworkElement.
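A short sketch of what this enables (assuming a view model that exposes a SaveCommand property):

```xml
<Window.InputBindings>
    <!-- The Command property is now a dependency property, so it can be bound. -->
    <KeyBinding Key="S"
                Modifiers="Control"
                Command="{Binding SaveCommand}" />
</Window.InputBindings>
```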

Update for the post: CommandProxy: Howto databind an ICommand to an InputBinding

NHaml AST Parser – current state

A while ago I started to rewrite NHaml from the ground up to use an AST-based parsing approach instead of directly creating the output while parsing. I did this for the following reasons:

  • Separate parser from output generation
  • Allow reusing the parser for syntax highlighting plugins, like this one for Visual Studio or possible ones for SharpDevelop or MonoDevelop
  • Error reporting with line and index of the error source
  • Fully parse the document and report all errors at the end
  • Better maintainability since the separate steps are less complicated

Since all of this does not fit within the current codebase, I’ve started an empty one on GitHub. Since I work most of the time on trains, offline, Git is much more usable for me than SVN, and I really like the GitHub forking model. So if the other members agree, it is possible that we move the source code of NHaml to GitHub later.

Currently the entire codebase passes the haml-spec test suite from Norman Clarke. Big thanks to Norman; without that spec it would have cost me much more time to bring the codebase to its current state. But this does not mean that it is usable yet. The current DebugVisitor, which generates the HTML output from the AST, is only an intermediate step. It only outputs HTML; no compiler or any dynamic code is involved.

The next step is to track the line number and character index for each node. This is what I am currently working on. And here it is:

This is a little test editor written with the Windows.Forms.RichTextBox. Initially I used the WPF RichTextBox, but since it is very, very slow for this scenario and optimizing it would have cost much time, I switched over to Windows.Forms. Note: my goal is not to create an editor for NHaml; I only use it for testing. As you can see, not all nodes are highlighted yet.

The next step would be to report errors, for example when you type only “%” without a name behind it (possibly this information could be used in a future syntax highlighting plugin too). After that I want to extend the test suite for some missing parts, like the alligator notation.

When all this is done, I plan to reintegrate the compiler and MVC stuff and publish it all together as NHaml 3.0. But don’t expect it to be soon.

Painless multi processing with Retlang

Retlang is a very nice and extensible library, created by Mike Rettig, for handling multithreading the Erlang way, which means with the concept of message passing.

There is also a library from the Microsoft Robotics team called CCR which has the same goal. I looked at it some time ago, but I do not like its actor invocation syntax, which looks really stodgy.

So let’s start with some basic structures.

Fibers

A fiber is a queue to which you can add actions that then get dequeued and executed on a different thread. Which thread that is depends on the type of fiber. Fibers also support scheduling of the added actions for later execution and for recurring execution.

All fibers guarantee to execute only one action at a time, which is very different from what the .NET ThreadPool does. This is an important behavior, since it guarantees that all objects are thread safe as long as they are shared only within the same fiber.

The following fibers are available (version 0.4.2.0):

  • ThreadFiber – creates one thread and uses it for execution
  • PoolFiber – uses the .NET ThreadPool
  • SynchronousFiber – only for unit testing
  • FormFiber – executes the actions on the Windows.Forms message thread
  • DispatcherFiber – executes the actions on the given WPF Dispatcher

The last two make UI synchronization dead simple, and I plan to write about them later.

Most of the time you should use the PoolFiber, since it automatically distributes the actions over all available CPUs and therefore scales very well. Mike Rettig measured that here.

ThreadFibers should only be used if the added actions take a long time to execute. The reason is very simple: most applications create a lot of fibers in their lifetime, for example one for each service. So you would have a lot of threads in memory which are only waiting for actions.

You should not share one fiber across the whole application! Since fibers guarantee to execute only one action at a time, you would create a bottleneck. An exception are the FormFiber and DispatcherFiber, because they are restricted to executing one action at a time anyway.

Channels

Channels are very lightweight in Retlang. They do not contain any threading code; all they do is deliver a published message of exactly one type to all of its subscribers.

As someone who knows the concept of a service bus, my initial thought was to create a global Channel&lt;object&gt; and handle all messages with it. But this is a very bad idea, since it creates a single bottleneck, it is ugly (all subscribers are of type object), and it is not what Retlang is designed for.

There are also two other types of channels. The QueueChannel delivers a published message only to the first available subscriber instead of all of them, and the RequestReplyChannel blocks the sender until the subscriber sends back a reply.
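As a sketch of the QueueChannel (untested; it follows the same Subscribe/Publish pattern shown below, with QueueChannel&lt;T&gt; as the type name in version 0.4), each message goes to exactly one of the subscribed workers:

```csharp
var workerA = new PoolFiber();
var workerB = new PoolFiber();
workerA.Start();
workerB.Start();

var channel = new QueueChannel<int>();

// Each published job is handled by exactly one of the two subscribers.
channel.Subscribe(workerA, job => Console.WriteLine("A got job " + job));
channel.Subscribe(workerB, job => Console.WriteLine("B got job " + job));

for (var i = 0; i < 10; i++)
{
    channel.Publish(i);
}
```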

So how do channels help with multithreading when they do not provide any threading themselves?

This is where the fibers fit in. A subscriber has to specify on which fiber it wants to handle the message.

var fiber = new PoolFiber();
fiber.Start();

var channel = new Channel<string>();

channel.Subscribe(fiber, message => { Console.WriteLine(message); });

channel.Publish("hallo Retlang");

Console.ReadLine();
 

And that’s it. All the other classes and interfaces are mainly base classes and allow some nice extension points.

A PowerShell make clone – poshmake

After I read Ayende’s post about psake, I remembered something that I wrote in January, because it is also a PowerShell-based build system. Back in 2008 I saw a video by Bruce Payette on Channel 9 where he showed a PowerShell-based build system as a demonstration of how easy it is to write a domain-specific language in PowerShell.

At the beginning of 2009 I saw a lot of guys talking about dropping NAnt or MSBuild and moving to rake, and I don’t like rake, because I think it isn’t a good idea to force developers to use a completely new framework instead of allowing them to use the .NET Framework. So I remembered this video. I searched a while to see if he ever published that code, but I never found anything, and I had not stumbled upon psake at that time (only a few weeks ago). So I decided to invest some time to extend my PowerShell skills and reverse engineered it.

And here it is (Download).

And this is how it looks (it’s the makefile.ps1 from the example directory):

target all clean hello.exe cshello.exe

target test all {
	"`n"
	"*" * 40
	"Running hello.exe"
	./cshello.exe
	"All done"
	"*" * 40
	"`n"
}

target clean {
	write-host "cleaning files"
	remove-item -force *.exe,*.obj,*.dll
}

# build hello.exe
target hello.exe main.obj,do_hello.obj,{write-host "after"}
target main.obj main.c
target do_hello.obj do_hello.c,message.h

# build cshello.exe
target cshello.exe ((dir "test*.cs") + "mylib.dll")

# build the dll used by cshello.exe
target mylib.dll mylib.cs

target output {
	echo "this is a output"
}

To build it, simply start PowerShell, go to the example directory and type:

../poshmake

First of all, it is not feature complete and not ready for production code, so be careful.

How does it work? target is a function which takes as its first parameter the name of the new target and then a list of dependencies in priority order. The special thing is that a dependency can be a file, an expression that returns a list of files, or a script block. For example, the target clean has a script block as its dependency; every time clean is called, the script block gets executed. As another example, the target cshello.exe has a list of .cs files and one dll as dependencies.

But where does the build happen? Poshmake has a dictionary which contains predefined build targets and their dependencies for certain file types. For example, there is a target for *.exe which can have *.obj or *.cs files as dependencies. So for cshello.exe it looks into the dependency array, and since it finds .cs files there, it calls csc with all the .cs files and uses all the dll files in the dependency array to build the exe. When there are .obj files, it will call link to link them together, and so forth. Currently only the cs and dll targets really compile; the other targets only print their dependent files.

But what if I want to change the parameters of the csc build? Simply put a script action before the *.cs files which adds your parameter to the $CSCFLAGS variable.
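An untested sketch of that, reusing the cshello.exe target from the makefile above (the /optimize+ flag is just an illustration; the script block runs before the .cs files are compiled):

```powershell
# Hypothetical: extend the csc flags for this one target.
target cshello.exe {$CSCFLAGS += " /optimize+"},((dir "test*.cs") + "mylib.dll")
```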

If you want to build a specific target (the default is all), you can simply call

poshmake clean

If you want to know which targets are available, simply call

poshmake -list

If you have a problem and you don’t know what poshmake is doing, use the -verbose parameter, which gives you a detailed hint about what poshmake does.

poshmake all -verbose

So happy playing with it 🙂