Wednesday, 13 June 2012

Performance comparison: IIS 7.5 and IIS 8 vs. self-hosted mvc4 web api

Introduction

There are several ways to host your services with the new MVC4 Web Api framework. Traditionally, most people would use IIS which has several advantages such as application pools, recycling, monitoring and on-demand loading.

This post on stackoverflow has a more in-depth discussion about the pros and cons of self-hosting so I won't re-iterate that here - this post assumes you only care about the raw performance comparison, which I haven't been able to find anywhere so far. It also compares IIS 7.5 and IIS 8 (Windows 8 Consumer Preview) to see if there are any differences between versions of IIS.

Test Environment

This test was run on an Intel i5 with 8GB ram using Apache Bench 2.3 - Rev. 655654 (from the XAMPP for Windows installation).

In these tests, the web-server and benchmark client were located on the same physical machine.

Windows 7 Ultimate Edition with IIS 7.5 and Windows 8 Consumer Preview with IIS 8 were used for the web server comparisons. Do not try this with the version of IIS Express that ships with Visual Studio - its performance was a fraction of "real" IIS 7.5 in my tests.

The Test Code

The nightly build of MVC4 from nuget (as of 11th June 2012) was used. For Self-Host, the host was a console application (I would expect a Windows Service to yield comparable results).

In both cases, the controller itself was located in the same, external assembly and contained the following code:

public class TestApiController : ApiController
{
    public IList<Foo> GetAll()
    {
        return new List<Foo>
        {
            new Foo {Id = 1, Name = "Foo 1", CreatedOn = DateTime.Now},
            new Foo {Id = 2, Name = "Foo 2", CreatedOn = DateTime.Now},
            new Foo {Id = 3, Name = "Foo 3", CreatedOn = DateTime.Now}
        };
    }
}
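
The Foo class isn't shown in the post; judging by the properties used above, it's just a simple dto along these lines:

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}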

The console application was configured as follows to try to match the functionality of the web-hosted equivalent as closely as possible:

static void Main(string[] args)
{
    const string serviceAddress = "http://localhost:4444";

    var config = new HttpSelfHostConfiguration(serviceAddress);
    config.Routes.MapHttpRoute("default-api",
        "api/{controller}/{id}",
        new
        {
            id = RouteParameter.Optional,
            namespaces = new[] { typeof(TestApiController).Namespace }
        });

    config.Formatters.XmlFormatter.SupportedMediaTypes.Clear();
    var server = new HttpSelfHostServer(config);
    server.OpenAsync().Wait(); // block until the listener is actually open

    Console.WriteLine("Waiting on " + serviceAddress);

    Console.ReadLine();
}

The web application had the following code in the global.asax.cs file:

protected void Application_Start()
{
    var config = GlobalConfiguration.Configuration;
    config.Filters.Clear();
    ViewEngines.Engines.Clear();
    config.Routes.MapHttpRoute("default-api",
        "api/{controller}/{id}",
        new { id = RouteParameter.Optional, namespaces = new[] { typeof(TestApiController).Namespace } });
    config.Formatters.XmlFormatter.SupportedMediaTypes.Clear();
}

In the web.config, debug was set to "false", authentication mode was set to "none", runAllManagedModulesForAllRequests was set to "false" and sessions and profiles were disabled.

Although these options are not the defaults, I assumed they would give the best raw performance (although in practice they seemed to make little or no difference).
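
For reference, the relevant web.config sections looked roughly like this (a sketch - exact placement depends on your project template):

<system.web>
  <compilation debug="false" />
  <authentication mode="None" />
  <sessionState mode="Off" />
  <profile enabled="false" />
</system.web>
<system.webServer>
  <modules runAllManagedModulesForAllRequests="false" />
</system.webServer>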

The Results

Note: all tests were run 3 times and the "best" time was taken.

All tests were run with 100,000 requests at a concurrency of 100:

ab -n 100000 -c 100 http://myip:port/api/testapi
                          Requests (#/sec)    Time per request (ms)
Windows 7 (IIS 7.5)              3025.73                    33.050
Windows 7 (self-host)            4624.44                    21.624
Windows 8 (IIS 8)                4778.23                    20.928
Windows 8 (self-host)            5612.23                    17.818

As can be seen above, in my environment, self-hosting can serve approximately 50% more requests per second under Windows 7 and about 17.5% more on Windows 8.

Some of the overall performance difference between Windows 7 and Windows 8 is likely to be as a result of my Windows 7 installation having more services running (as the Windows 8 installation is brand new).

Conclusion

It is hardly surprising that self-hosting out-performs IIS on raw requests per second, considering IIS's rich feature-set. There may also be IIS settings that could bring it closer to (or beyond) self-hosting by disabling various features - I would be happy to re-run the tests if anyone has any ideas.

As a basic recommendation, I would suggest that if you are creating a web api project, IIS is the safer and simpler choice by default.

Deployment using something like webdeploy means that the service can be updated without interruption. In the case of self-hosting with a Windows Service, there might be some juggling required to make sure that the stop -> deploy -> restart cycle minimizes down-time. A thin wrapper with auto-reloading external assemblies could go some way to resolve this but would require you to roll your own piece of infrastructure - something that is already solved in IIS.

Use-cases I can see for self-hosting include:

  • You are already shipping a Windows Service for a different purpose and would like to expose some kind of management interface
  • You want very fine-grained control over the hosting stack and don't need any of the features of IIS
  • Squeezing out the maximum requests/second is critical to your application

Thursday, 3 May 2012

Sharing common view model data in asp.net mvc with all the bells and whistles

In anything but the most trivial applications, there are common pieces of data you will want to share between your different views. Typical examples include the name of the signed-in user, pervasive summaries such as the last three items viewed, unread message counts or anything else that typically appears in a navigation element and is user-specific.

There are many techniques for doing this but they all fell short for us in one way or another as we wanted to meet all of the following requirements:

  • It should be strongly-typed (no using of viewbag/viewdata, thanks - in our opinion it makes it too difficult to refactor views later).
  • The views should be bound to the necessary models to enable intellisense.
  • It should support ctor dependency injection.
  • The common data should be available to controllers and views. This is especially useful when your application supports authentication and you need to show a property of the user in the view and need to use a property of the user in your controller action.
  • You should be able to opt-out if necessary.
  • Controller actions shouldn't need to change in any way - e.g. no calling of functions to populate the models.
The first step is to define a class that will represent this shared context. There isn't anything special about this class - it's a regular poco.

Here is an example that will store the current user and the number of unread messages in their inbox.
namespace Web.Models
{
    public class SharedContext
    {
        public User CurrentUser { get; set; }
        public int UnreadMessageCount { get;set; }
    }
}
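The User type referenced above isn't shown in the post; for the examples that follow, assume a simple poco along these lines:
namespace Web.Models
{
    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}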
Once you have created your shared context class you need to create a base view model. Again, it's just a poco but what is important is that it is able to hold an instance of the shared context which will be explained in more detail further below. Here is an example:
namespace Web.Models
{
    public class LayoutModel
    {
        public SharedContext Context { get; set; }
    }
}
This is the model you will bind to your _Layout file which takes care of the intellisense and "no loosely-typed view data" requirements. In your _Layout, if you would like to show the user's name, for example, you could access the property with @Model.Context.CurrentUser.Name (assuming you had a User class with a Name property, obviously).
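
To make that concrete, the top of a _Layout.cshtml bound to this model might look like this (markup trimmed to the relevant bits):

@model Web.Models.LayoutModel
<!DOCTYPE html>
<html>
<head><title>My Site</title></head>
<body>
    <div id="header">
        Signed in as @Model.Context.CurrentUser.Name (@Model.Context.UnreadMessageCount unread)
    </div>
    @RenderBody()
</body>
</html>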

The next step is to wire up these classes so they are populated automatically. We start by creating the interface for what I have called the view model factory.

An example of such an interface is as follows:
namespace Web.Mvc
{
    public interface IViewModelFactory
    {
        T Create<T>() where T : SharedContext, new();
        void Set<T>(T model) where T : SharedContext, new();
    }
}
The generic constraint ensures that we can access the context properties in the method implementations. Here is an example implementation of this interface:
namespace Web.Mvc
{
    public class ViewModelFactory : IViewModelFactory
    {
        private readonly IUserMessageService _userMessageService;
        private readonly IUserService _userService;

        public ViewModelFactory(IUserMessageService userMessageService,
            IUserService userService)
        {
            _userMessageService = userMessageService;
            _userService = userService;
        }

        public T Create<T>() where T : SharedContext, new()
        {
            var model = new T();
            Set(model);

            return model;
        }

        public void Set<T>(T model) where T : SharedContext, new()
        {
            var user = _userService.GetCurrent();

            model.CurrentUser = user;
            model.UnreadMessageCount = _userMessageService.GetUnreadCount(user.Id);
        }
    }
}
Hopefully it's pretty straightforward. It's an implementation of the view model factory that is injected with several fictitious dependencies and generates a shared context. You will need to use your imagination here a bit.

At this point, you are going to want to register the view model factory in whatever DI container (I hope) you're using. In Unity, you might do something like:
container.RegisterType<IViewModelFactory, ViewModelFactory>(new PerCallContextLifeTimeManager());
Although I'm usually not a fan of inheritance, it works well for this scenario. You need a base class from which all your controllers will inherit (instead of inheriting from "Controller" directly). You might have done this already for various other reasons. Here is an example:
namespace Web.Mvc
{
    public class BaseController : Controller
    {
        public SharedContext Context { get; set; }
    }
}
In one of your action methods, you could access the current user via Context.CurrentUser.
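
For example, a hypothetical controller might look like this (the Context property is populated by the action filter introduced below):

public class InboxController : BaseController
{
    public ActionResult Index()
    {
        // Context is populated by the action filter before this action runs,
        // so the current user is available without any extra plumbing.
        return Content("Hello, " + Context.CurrentUser.Name);
    }
}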

We want our view model factory to be called automatically so our models are populated without any extra work in the actions. This is handled by an action filter attribute - you should be able to use this class as-is unless you've renamed the view model factory or shared context.
namespace Web.Mvc
{
    public class LayoutModelAttribute : ActionFilterAttribute
    {
        private readonly IViewModelFactory _viewModelFactory;
        
        public LayoutModelAttribute(IViewModelFactory viewModelFactory)
        {
            _viewModelFactory = viewModelFactory;
        }

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var controller = filterContext.Controller as BaseController;
            if (controller != null)
            {
                controller.Context = _viewModelFactory.Create<SharedContext>();
            }
        
            base.OnActionExecuting(filterContext);
        }

        public override void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var viewModel = filterContext.Controller.ViewData.Model;
            var controller = filterContext.Controller as BaseController;

            var model = viewModel as LayoutModel;
            if (model != null)
            {
                model.Context = controller != null && controller.Context != null
                    ? controller.Context
                    : _viewModelFactory.Create<SharedContext>();
            }

            base.OnResultExecuting(filterContext);
        }
    }
}
Taking a quick step back, this is what the attribute is doing:

We override OnActionExecuting and OnResultExecuting as these execute at different places within the asp.net mvc pipeline. To accomplish the requirement of being able to access the shared context in a controller, the attribute needs to execute before the controller action; hence OnActionExecuting.

To intercept the model returned from the action and populate the required properties, we override OnResultExecuting which executes after the action has completed but before the view is rendered.

There are two different base-class checks here that allow us to opt-out of the shared context population. If the base class of your controller does not inherit from your new BaseController class, the view model factory will not be invoked before the action executes.

The other check is to ensure that the view model you are returning inherits from the new LayoutModel class. If not, the view model factory is bypassed. This means you can also use the shared context in your non-layout views which can be useful.

The next step is to register this attribute so it executes for every controller. There are different ways to do this, but I generally use the following as part of my site's bootstrapper (where container is our DI container):
GlobalFilters.Filters.Add(container.Resolve<LayoutModelAttribute>(), 1);
 

The last parameter (1 in this case) is there because I have an authentication filter higher up that should be checked before the new attribute is executed. You are likely to have different requirements in your own application.

Now that the infrastructure is complete, we can get on with building the application. Here is a sample view model that you might use on the homepage of your site:
namespace Web.Models
{
    public class HomeModel : LayoutModel
    {
        public string Content { get;set; }
    }
}
And here is the controller you might use:
namespace Web.Mvc
{
    public class HomeController : BaseController
    {
        public ActionResult Index()
        {
            return View(new HomeModel { Content = "Hello View Model Factory!" });
        }
    }
}
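
A corresponding view (Views/Home/Index.cshtml) could then be as simple as the following - because HomeModel inherits from LayoutModel, the _Layout shown earlier gets its shared context without any further work:

@model Web.Models.HomeModel

<h2>Welcome back, @Model.Context.CurrentUser.Name</h2>
<p>@Model.Content</p>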

It might seem a bit complicated at first, but after several large applications this appears to provide the most maintainable and robust solution to this particular problem.

Tips for formatted urls in asp.net mvc

It wasn't long ago that applications built using Microsoft tools had some pretty unfriendly urls as standard (if you've used Webforms or "Classic ASP" you know what I'm talking about).

This was due to IIS' obsession with handler mappings and probably the general feeling that Webforms wasn't really suited for the Internet and therefore the benefits of better urls, such as SEO, were not as important. I suspect that back in the day, the Internet also wasn't as competitive for search rankings.

Yes - as with most things - there were ways around it. Personally, I used Helicon's ISAPIRewrite to make IIS behave a little bit more like Apache and have a more robust abstraction between the url that the user sees and the physical file that is serving the request.

With the release of ASP.NET MVC and IIS 7, extensionless urls were introduced as a first-class feature and seeing .aspx everywhere was a thing of the past. Unfortunately, you can still spot an MVC application in the wild because of the upper-case characters in its urls - it's not quite as easy as spotting a Webforms application (view-source, CTRL-f, viewstate) but it bothers me nonetheless.

This is a class I have used on a ton of projects to format the route as I think it should be:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using System.Web.Routing;

namespace Core.Mvc
{
    public class FormattedRoute : Route
    {
        private readonly List<string> _formatExclusions = new List<string>();

        public FormattedRoute(string url, object defaults, object constraints = null, IEnumerable<string> formatExclusions = null, object dataTokens = null) :
            this(url, defaults, new MvcRouteHandler(), constraints, formatExclusions, dataTokens)
        {
        }

        public FormattedRoute(string url, object defaults, IRouteHandler routeHandler, object constraints = null, IEnumerable<string> formatExclusions = null, object dataTokens = null) :
            base(url, new RouteValueDictionary(defaults), new RouteValueDictionary(constraints), new RouteValueDictionary(dataTokens), routeHandler)
        {
            if (formatExclusions != null)
                _formatExclusions.AddRange(formatExclusions);
        }

        public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
        {
            foreach (var routeValue in requestContext.RouteData.Values)
            {
                if (_formatExclusions.Contains(routeValue.Key, StringComparer.OrdinalIgnoreCase)) continue;

                if (values[routeValue.Key] != null)
                    values[routeValue.Key] = values[routeValue.Key].ToString().ToLowerInvariant();
            }

            return base.GetVirtualPath(new RequestContext(requestContext.HttpContext, new RouteData()), values);
        }
    }
}
To use it, you would bootstrap the route from your Global.asax.cs file as follows:
routes.Add(new FormattedRoute("{controller}/{action}/{id}",
  new { controller = ControllerNames.Home, action = ActionNames.Default, id = UrlParameter.Optional }));
The custom route accepts much the same parameters as the default route or MapRoute call.

A key feature is the "formatExclusions" parameter which allows you to opt-out of the formatting on a key-by-key basis. Typically, this is used for proper names so that generated urls preserve their capitalisation.

An example of such a route is as follows:
    routes.Add(new FormattedRoute("p/{manufacturer}/{sku}/{productId}",
        new { controller = ControllerNames.Product, action = ActionNames.Default }, 
            constraints: null,
            formatExclusions: new[] {"manufacturer", "sku" }));
In this case, the manufacturer and sku values keep their original casing and only the remaining route values (such as the product id) are lower-cased.

If you're wondering what the "ControllerNames" and "ActionNames" classes are - they are just classes of string constants that map to the real controller and action names. I find this gives a bit of extra compile-time assistance if I rename something and allows me to abstract the controller class names even further.
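
They aren't shown in the post, but they amount to nothing more than this (the values are assumptions - use whatever matches your controllers):

public static class ControllerNames
{
    public const string Home = "Home";
    public const string Product = "Product";
}

public static class ActionNames
{
    public const string Default = "Index";
}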

Monday, 6 June 2011

Polling your database queues less-often - using udp via sql server clr

We've all done it before: we have a database table somewhere that is being populated with records and we need to know when something new has arrived (let's forget for the moment that a real queuing infrastructure is probably more suited for this). The way it's usually implemented is:

You decide on the maximum age of the items in the queue before they need to be processed - this forms the basis of your polling interval. For example, if a change to the table must be processed within 5 minutes, your polling interval might be 4 minutes (to allow for the actual processing time). Of course, this is a sliding window, so most requests will not reach the maximum age (i.e. if one is inserted just before the table is polled).

There are obviously some inefficiencies here:

  • What happens if there are no requests? You are going to poll anyway.
  • What happens if a record is inserted directly after the polling interval? It will have to wait until the next interval.

It would be preferable if you were notified of a change to the table rather than having to poll.

The solution (if you are using sql server 2005 or above) is to create a stored procedure that notifies the interested parties that there are new records to process. Think of it as "preempting the poll" - it's best to leave the polling in place as a "worst case" fallback in case the udp broadcast is not received.

The c# code for the sql class lib may look similar to the following:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

namespace MyApp.Sql
{
    public class Utilities
    {
        [SqlProcedure]
        public static void Sync(string sourceServer, string sourceDatabase, string notificationIp, int notificationPort)
        {
            var ipAddress = !String.IsNullOrEmpty(notificationIp) ? IPAddress.Parse(notificationIp) : IPAddress.Broadcast;
            var remoteEp = new IPEndPoint(ipAddress, notificationPort == 0 ? 53101 : notificationPort);

            using (var udpClient = new UdpClient())
            {
                byte[] input = Encoding.ASCII.GetBytes(String.Format("{0}:{1}", sourceServer, sourceDatabase));
                udpClient.Send(input, input.Length, remoteEp);
            }
        }
    }
}

And to make this component available to your sql installation (if you have the necessary permissions):
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[NotifyChanges]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[NotifyChanges]
GO

IF EXISTS (SELECT * FROM sys.assemblies asms WHERE asms.name = N'MyApp_Notifier')
DROP ASSEMBLY [MyApp_Notifier]

CREATE ASSEMBLY MyApp_Notifier FROM 'MyApp.Sql.Dll' -- use the full path to the dll as the sql server sees it
WITH PERMISSION_SET = UNSAFE
GO
CREATE PROCEDURE [dbo].[NotifyChanges]
@sourceServer [nvarchar](255),
@sourceDatabase [nvarchar](255),
@notificationIp [nvarchar](255),
@notificationPort [int]
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [MyApp_Notifier].[MyApp.Sql.Utilities].[Sync]
GO

The next step is to create a trigger on your polling table and call this new stored procedure when a new record is inserted. Note that the server name and database name are passed in the udp message in case you have multiple notification tables broadcasting to a single listener.

ALTER TRIGGER [dbo].[MyPollingTrigger] ON [dbo].[MyPollingTable]
AFTER INSERT
AS
DECLARE @DB_NAME VARCHAR ( 127 )
SET @DB_NAME = DB_NAME()
EXEC [NotifyChanges]
  @@SERVERNAME,
  @DB_NAME,
  NULL,
  0

In a future post I will show the udp server that you can use to respond to these events, but it's fairly trivial using the built-in .net socket libraries.
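
Until then, here's a minimal sketch of what such a listener might look like (this assumes the default port of 53101 used by the Sync procedure above):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class QueueChangeListener
{
    public static void Main()
    {
        using (var udpClient = new UdpClient(53101))
        {
            var remoteEp = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                // Blocks until a datagram arrives from the trigger.
                byte[] datagram = udpClient.Receive(ref remoteEp);

                // The message is "server:database", as sent by the Sync procedure,
                // so we know which database's queue table to process immediately.
                string source = Encoding.ASCII.GetString(datagram);
                Console.WriteLine("Change notification received from " + source);

                // Kick off the normal queue-processing logic here instead of
                // waiting for the next polling interval.
            }
        }
    }
}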

Using solr for .net - please stop using your database as a search engine

Google has spoiled us with fast, relevant search and users have come to expect this from every site they visit. There really are no more excuses for using database full-text functionality for web-site search (of course, if you have tightly integrated your solution with your rdbms' full-text api, then migrating to a different solution will not be trivial).

There are many reasons why using solr for search is a good idea; here are a few:

  • Anything that reduces the load on your database is a good thing. I would guess that search functionality has the potential to bring many databases to their knees.
  • Solr (and lucene, on which it's built) is designed for searching - that's pretty much all it does and it's really, really good at it.
  • .net has an excellent API for solr which makes integrating solr with .net incredibly easy (see the sketch after this list).
  • The solr server is written in java and can run pretty much anywhere you can run a jvm.
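
The post doesn't name the .net API, but the de facto client at the time was SolrNet; assuming that's the library meant, a minimal index-and-query sketch looks roughly like this (the Product class and core url are made up for the example):

using Microsoft.Practices.ServiceLocation;
using SolrNet;
using SolrNet.Attributes;

public class Product
{
    [SolrUniqueKey("id")]
    public string Id { get; set; }

    [SolrField("name")]
    public string Name { get; set; }
}

public class SearchExample
{
    public static void Run()
    {
        // Point SolrNet at your core once, at application start-up.
        Startup.Init<Product>("http://localhost:8983/solr");

        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<Product>>();

        // Index a document and commit it.
        solr.Add(new Product { Id = "1", Name = "Widget" });
        solr.Commit();

        // Query it back.
        var results = solr.Query(new SolrQuery("widget"));
    }
}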

So why not use Lucene directly? Do I need to use solr?

Having delivered projects using solr and lucene, I would whole-heartedly recommend using solr for the following reasons:

  • solr takes care of being able to modify and query your search index remotely which is not trivial.
  • The .net api for Lucene is several versions behind the official java version for various technical and non-technical reasons (.net: 2.9.2, java: 3.2). You can read more about this on the lucene.net mailing lists if you want to read the ups-and-downs of apache incubator status. By using solr, you effectively bypass this issue (unless the solr api itself changes but this api is significantly simpler than lucene and is much less likely to change).
  • Running the server in a jvm allows you to use linux for the search functionality of your application - which is likely to work out easier and cheaper if you're hosting in the cloud.

What about elasticsearch?

This project looks promising and, although I was able to index a few hundred thousand documents with a trivial amount of code, I found the absence of a schema slightly confusing. I also wasn't able to get any results out of the index using the NEST api for .net at the time of this writing. Since both projects use Lucene under-the-hood, I would suggest that skills are transferable between the two and a migration would be fairly easy.

Tips

I don't want to regurgitate one of the many useful startup guides, but rather share a few tips that I have discovered along the way.

  • Prepare to re-index often. Make sure your indexing process is repeatable and easily runnable - every time you change the schema, you need to re-index to see the changes (and this will happen fairly often during development).
  • Indexing speed is obviously hugely dependent on many different factors (system hardware, index complexity etc.) but, anecdotally, I can build solr indexes at approximately 850 items per second (average spec. notebook, with solr running on the same machine). Again, YMMV, but there is a number for you to compare against if you like that sort of thing.
  • Remove everything from the solr example files that you don't need - they are verbose and make it harder to know what pieces are actually being used (this includes the schema and configuration).
  • Don't try and be too clever with the query input string (stripping characters etc.) - for the most part, solr does a good job of parsing the query.
  • If you are searching across multiple fields, the best place to define this is within your solrconfig.xml file:
 <requestHandler name="search" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="defType">edismax</str>
      <str name="qf">myImportantField^20.0 myOtherField yetAnotherField</str>
    </lst>
  </requestHandler>
  • Externalize the properties that are different for each platform using the solrcore.properties file (e.g. the location of the solr data directory). This makes it easier to deploy schema and configuration changes to production. You can do this by changing your solrconfig.xml to the following:
<dataDir>${data.dir}</dataDir>
and creating a solrcore.properties file for each environment with something similar to:
data.dir=/data/solr
  • Don't be afraid to augment your results with data from other sources. Just because you need to show a particular field in your search results, doesn't mean you need to store it in your search index (there is obviously a careful performance trade-off to be made here).

  • If you are implementing "auto suggest", use an NGramFilterFactory in your schema similar to the following:

    <fieldType name="wildcard" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="25" />
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
    
I hope this gives you the incentive you need to give your database a holiday and improve the search on your website.

Introducing synoptic - a console application framework for .net

If you want to go directly to the app, you can find us on github or you can view the wiki.

There needs to be more love for console applications. As standard, our company produces a console application for every web site we develop. It doesn't contain all the functionality of the website, but it does serve the following purposes:

  • Allows us to automate various parts of the site by giving us something that can be easily invoked via the build process (e.g. running a nightly map-reduce to update our data aggregates).
  • Allows us to debug the installation without firing up a web browser (e.g. showing the audit trail of a particular transaction if the client has any queries).
  • It proves to us that the front-end is sufficiently decoupled from the rest of the application to allow us to build a different user-interface for it later if necessary (e.g. mobile etc).
In our minds, the console application is not just a throw-away application - it is a first-class citizen of our solution and should be treated as such. I am going to presume that most (if not all) readers have implemented a console application at some point in their lives. These are some of the issues you probably had to deal with (or chose to ignore):

  • How do I parse the parameters input from the command line and what format do I support (--param, -param, param, /param?)
  • How do I show the command line usage?
  • How do I map these parameters to methods that I want to call? How do I make sure that the parameters are valid for this particular method?
  • What about stderr and stdout? How do I prevent exception messages from being piped to other commands?
  • How do I output text neatly on the command line? Given that the console is 80 characters by default (but can be resized by the user), how do I make sure that the text wraps and indents correctly? For example, I often see this:
        This is my long text. I wanted it to be indented. What happens when it wraps to the next line?

or (note how the right edges of the first and second cells are not aligned correctly):

        This is the first cell.    This is text for the first cell.
        This is a much longer cell.    This is text for the second cell. 
    

Wouldn't it be better if it looked like this?

        This is the first cell.        This is text for the first cell
                                       and it wraps with the correct indentation.
        This is a much longer cell.    This is text for the second cell. 
    

Maybe this doesn't seem like a big deal, but it's not terribly difficult to fix (hint: the answer is not to manually insert line-breaks in your text because you don't know before-hand how wide the console is going to be).

These problems are already solved for web applications. The url and form parameters represent the input (which, using whatever web framework you decide to use, designates what method to run and how these parameters are parsed). The layout is obviously taken care of with html.

So what can synoptic do for you?

If you have used any reasonable web mvc framework, you have defined classes and methods which map to user input (the url). Synoptic does a similar thing, but for command line applications. It's best explained by using an example:

Suppose you were using solr for search in your website and you were attempting to tune or maintain your search index. You might want to be able to do this from the command line so you can measure the performance or relevance regularly and maintain some kind of log. In synoptic, you would define this using the following syntax:
    [Command(Name="search", Description="Allows you to perform various operations on the site search engine.")]
    public class SearchCommand
    {
        [CommandAction]
        public void Query(string term)
        {
            // Search logic goes here.
        }
    
        [CommandAction]
        public void RebuildIndex()
        {
            // Logic to rebuild you index goes here.
        }
    
    }
    
Then in your application entry point, you feed the arguments to synoptic:
public class MyProgram
{
    public static void Main(string[] args)
    {
        new CommandRunner().Run(args);
    }
}
    
You could now invoke the query method from the command line using the following:
myapp.exe search query --term=mysearchterm
    

... or you could rebuild the search index with the following command:

myapp.exe search rebuild-index
    

If you run your application without specifying a command, you will see the usage pattern that is automatically generated (this is largely modeled on the git command line client behavior if it looks familiar).

That's all you need to get your first synoptic application up-and-running. There is much more information on the wiki that covers more advanced features such as:

  • Using dependency injection with your commands
  • Supporting "global" options (e.g. allowing the user to specify logging verbosity)
  • Customizing and validating parameters
  • Using the ConsoleFormatter to format text (including ConsoleTable, ConsoleRow, ConsoleCell and ConsoleStyle which can be used to create perfectly formatted, wrapped and indented content on the command line).
There are many new features we have in mind that we will hopefully be adding shortly (feel free to contribute with ideas or code):

  • Model-binding so that methods support more than just primitive types
  • More console-widgets (e.g. download progress etc.)
  • Additional customizations around global options
  • Internationalization

A big thank you to Mono.Options for providing such a versatile command parsing library (which synoptic uses internally).