Generics have been in C# for a long time now, and I have been using them, but I have had trouble explaining them to new developers, or at least explaining how to read the code. That is, until just the other day. Using generic lists is easy to understand,
var customers = new List<Customer>();
can be read as customers is a List of type Customer. But we often do something like
class ManageUsersPresenter : BasePresenter<IManageUsersView>
What the hell is that? A ManageUsersPresenter of type IManageUsersView? A presenter of type view just doesn’t make sense, and then it came to me. What we are doing is composition. I can replace the “of type” with “composed with”.
A ManageUsersPresenter composed with a ManageUsersView. Now that makes sense. When you see an object that takes another type as a generic parameter, most likely we are composing these two objects to be used together.
Of course, you probably want to make sure that you really are composing the two objects together, but it makes reading the code a little easier.
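To make the “composed with” reading concrete, here is a minimal sketch of what a presenter/view pairing like the one above might look like. The member names (Show, Greet, the console view) are invented for illustration; only the type names mirror the post.

```csharp
using System;

interface IView { void Show(string message); }
interface IManageUsersView : IView { }

// The generic parameter is the type this presenter is "composed with".
class BasePresenter<TView> where TView : IView
{
    protected readonly TView View;
    protected BasePresenter(TView view) => View = view;
}

// Reads as: "a ManageUsersPresenter composed with an IManageUsersView".
class ManageUsersPresenter : BasePresenter<IManageUsersView>
{
    public ManageUsersPresenter(IManageUsersView view) : base(view) { }
    public void Greet() => View.Show("Hello, users!");
}

class ConsoleManageUsersView : IManageUsersView
{
    public void Show(string message) => Console.WriteLine(message);
}

class Program
{
    static void Main()
    {
        // The presenter and view are composed at construction time.
        new ManageUsersPresenter(new ConsoleManageUsersView()).Greet(); // prints "Hello, users!"
    }
}
```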
I saw yet another tweet linking to a criticism that we should be careful what we call a REST API, and linking to a blog post by none other than Roy Fielding explaining what constitutes a REST API. I am not arguing that we shouldn’t be precise in our language, but the reason so many people get confused is that the REST API is a unicorn. This mythical beast has never been seen. Seriously, has anyone, anywhere, ever blogged about a real REST API that was worth a damn? If you follow the concepts of REST to their logical conclusion, you find that the perfect client application has already been invented. It’s called the browser, and the application is the web. Anything short of this literally comes up short and is severely criticized. I have yet to see anyone come up with a hypertext-driven API that provided real value, as in more value than the cost of creating it. Please, someone show me the true value; until then I am going to create HTTP APIs. I will think about resources and I will try to use hypertext, but I sure as hell won’t call it REST.
At my company we have been on a mission to outsource our IT department, including the outsourcing of custom applications. We are not a software company, goes the mantra. That’s not what we do. So we buy a product from a vendor, and in most cases this is the obvious thing to do. We are not going to write our own email system or word processing application, but what about those custom applications that are specific to our own industry and directly support our employees in their daily work? Should we find a vendor that offers, broadly, the functionality we need? This seems to be a popular thought. We will buy the products, do a small amount of customization, and we will be responsible for the “glue” that makes all of these disparate systems work. The net result should be smaller, more cost-effective IT departments.
OK, that’s a possibility, and I don’t want to get into the risks this plan has (I will save that rant for another post); instead I want to look at the impact of the cloud, and in particular Platform as a Service (PaaS). PaaS substantially lowers the overall cost of custom applications. I can recall projects where a third of our time was taken up moving from QA to production, troubleshooting the same bugs over and over again because of environmental differences, and the need to do most things manually with production engineers in place of developers. But not only does PaaS save us some time and resources, it rebalances who we need on the team. A much higher proportion of our spend should be on meeting the functional business requirements (developers writing code), with a lower spend on engineers and hardware. The cost of quality requirements like scale and performance can also be reduced with solid cloud architectures rather than big databases and big hardware. With a much larger percentage of our resources allocated to meeting functional business requirements, knowledge of our business processes becomes much more valuable. Who better to know those processes than your own employees, the developers who have been building and maintaining your systems? And who would you rather have with this intimate knowledge of your business? Will the cloud save a few of our jobs?
So what’s the difference between these two code snippets?
var result = await subscriber.RunAsync((T)messagePacket.Body, cancellationToken);
var task = subscriber.RunAsync((T)messagePacket.Body, cancellationToken);
Well, not much really, except for the error handling. What to do with those pesky exceptions? Especially since we are handing in a cancellation token: if the operation is canceled, then an exception will be thrown. In the first snippet the await keyword is hiding the task that the async operation returns. When the task is canceled, await rethrows the exception at the point of the call. You can choose how to handle it, either wrapping the call in a try/catch or allowing it to percolate up the call stack.
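The first style looks something like this runnable sketch. RunAsync here is a stand-in I invented (a delay that honors the token), not the subscriber API from the snippet above; the point is only where the cancellation surfaces.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class AwaitCatchDemo
{
    // Hypothetical stand-in for subscriber.RunAsync: honors the cancellation token.
    static async Task<string> RunAsync(CancellationToken token)
    {
        await Task.Delay(5000, token); // throws when the token is canceled
        return "done";
    }

    static async Task Main()
    {
        using var cts = new CancellationTokenSource(100); // cancel after 100 ms
        try
        {
            var result = await RunAsync(cts.Token);
            Console.WriteLine(result);
        }
        catch (OperationCanceledException)
        {
            // await rethrows the cancellation here, at the call site
            Console.WriteLine("The operation was cancelled");
        }
    }
}
```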
In the second snippet we can do something without an exception being thrown at the call site. We can look at the task that is returned and act depending on the state of the task.
subscriber.RunAsync((T)messagePacket.Body, cancellationToken)
    .ContinueWith(task =>
    {
        if (task.IsCanceled)
        {
            //do something ... or nothing
            Trace.WriteLine("This task timed out");
        }
    });
We lose a little of the readability of the async/await keyword pairing, and of course we could always wrap the first method in a try/catch block and handle the exception that is thrown, but I was taught that throwing exceptions is bad and expensive, and I can understand what’s going on, so I am going to stick with ContinueWith when I need to handle a canceled or timed-out request.
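Here is the ContinueWith approach as a self-contained sketch you can run. Again, RunAsync is an invented stand-in for the subscriber call; notice that no exception reaches our code, we just inspect the task’s state.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ContinueWithDemo
{
    // Hypothetical stand-in for subscriber.RunAsync.
    static Task RunAsync(CancellationToken token) => Task.Delay(5000, token);

    static void Main()
    {
        using var cts = new CancellationTokenSource(100); // cancel after 100 ms

        RunAsync(cts.Token)
            .ContinueWith(task =>
            {
                // No exception is thrown here; we just look at the task's state.
                if (task.IsCanceled)
                    Console.WriteLine("This task timed out");
            })
            .Wait(); // the continuation itself completes normally
    }
}
```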
I am a great believer in tooling that makes jobs, coding, and even the need for learning disappear. So the other day I was attempting to use “Add Service Reference” in VS 2012 to add a reference to an OData feed I had just created. I am using the release candidate and the .NET 4.5 framework, so I expected a few changes, and of course the resulting proxy has changed a little. In my investigation I discovered that the proxy’s asynchronous callback mechanisms did not match the new .NET Framework 4.5 async/await pattern. I expect that the generated proxy will get updated at some point to support this, but how hard can it be to do it myself? Well, like everything, once you have learned how to do it, it’s easy, right?
To test new things I like to use a unit test framework to make it easy to run the code, and I created a little test helper class to act as a container for my method. The service I was using is for an application that handles an approval process for downloading from the internet, so the service proxy is called SDAReadService (Software Download Approval Read Service) and the context is called ReadModelContainer. In the code below, the key is really the Factory.FromAsync method, which has a number of overloads that allow you to pass in any pair of Begin and End methods with up to three parameters.
public class Testhelper
{
    public static Task<IEnumerable> GetSoftwareDownloadRequestAsync()
    {
        var container = new SDAReadService.ReadModelContainer(new Uri("http://localhost:10494/readservice.svc"));
        var query = (DataServiceQuery)(from sdr in container.SoftwareDownloadRequests select sdr);
        return Task<IEnumerable>.Factory.FromAsync(query.BeginExecute, query.EndExecute, query);
    }
}
The above method can then be called with the await keyword.
Apart from finding an example of how to do this, the only issues I had were with passing in the correct form of the Begin and End methods. I first passed them in as method calls with parameters; I knew it was not right because the parameters did not make any sense, but it took me some time before I recognized that I needed to pass in the method group itself. As usual, it looks right, and you think it’s right, until you see what is wrong.
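Since the service above isn’t runnable without the OData feed, here is a self-contained sketch of the same FromAsync technique. FakeBeginRead/FakeEndRead are invented stand-ins for an APM pair like query.BeginExecute/query.EndExecute; note that the method groups themselves are passed, not invocations of them.

```csharp
using System;
using System.Threading.Tasks;

public static class ApmWrapperDemo
{
    // A minimal Begin/End (APM) pair built on a Task, purely for illustration.
    public static IAsyncResult FakeBeginRead(AsyncCallback callback, object state)
    {
        var task = Task.Run(() => 42);
        task.ContinueWith(t => callback?.Invoke(t)); // APM contract: fire the callback on completion
        return task; // Task implements IAsyncResult
    }

    public static int FakeEndRead(IAsyncResult asyncResult) => ((Task<int>)asyncResult).Result;

    public static async Task Main()
    {
        // The key call: pass the method groups, exactly as with query.BeginExecute/EndExecute.
        int result = await Task<int>.Factory.FromAsync(FakeBeginRead, FakeEndRead, null);
        Console.WriteLine(result); // prints 42
    }
}
```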
Here is a link to a page that demonstrates how to call the query http://msdn.microsoft.com/en-us/library/dd756367(v=vs.100).aspx.
It is not too difficult to understand the old way of calling an asynchronous method with a callback, and here is documentation on how to wrap it in a task: http://msdn.microsoft.com/en-us/library/dd997423.aspx.
If you decompose your system into an SOA architecture, then at some point you have to connect the services together to leverage them. There are several options that could be considered; many of them violate the SOA architecture you have in place, but you may choose them for pragmatic reasons. As an architect I lose these arguments often, compromising the architecture because we have a date to meet, or occasionally because developers don’t believe in SOA. The performance argument always raises its head when talking about composing services, but there are other arguments raised and valid reasons for straying from your architecture. We know and understand the compromises we make, but in the enterprise we seldom get a chance to go back and rectify the issues: it is never in the budget, we change teams, the knowledge is lost. Eventually it becomes accepted that that is how the system was intended to work. But I digress. This post is intended to discuss the design for connecting SOA services. The following posts will discuss an implementation. Rather than simply prescribing the answer, my intent is to reveal the journey to the implementation. There are lots of choices to make, and many of them can be made for arbitrary reasons.
When you decompose a system, how much data does each service require to be useful, and how much of that data will be shared with other services? The ideal would be to share referential keys only, but how many services will be useful with no shared data? For this post let’s just accept that we need to share some data across at least some services. I still meet a lot of people who believe that services are an application-layer construct only, and that it is OK for services to share the same database, but in my opinion that is simply wrong. Services need to be autonomous, and that means not being interconnected at the data level. To be clear, this is not about composing data from two services together; this is about data that needs to be in multiple services for those services to be useful.

A service is not supposed to be aware of other specific services, but even though I seldom hear or see this discussed explicitly, a SOA service must at least be aware that the information it contains may be of use to other services, and therefore it must have some method of publishing that data. Since the service has no knowledge of what data other services find useful, it should be publishing all of its data (really?). I have already hinted at the pattern I am suggesting, which is to use the publish/subscribe pattern. All data update events in our service should either be published to some publishing endpoint or should allow some subscriber to subscribe to the event. In addition, our publish/subscribe mechanism needs to be reliable: if you put an integration mechanism in place, very few things destroy the confidence of your users faster than integration that fails, even if its failure rate is extremely low.
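The publish/subscribe idea above can be sketched in-process in a few lines. The event and service names here are invented for illustration, and a real system would use durable middleware (a queue or bus) to get the reliability discussed above; the point is only that the publisher knows subscribers exist without knowing who they are.

```csharp
using System;
using System.Collections.Generic;

// A data-update event from one service; the shape is hypothetical.
record UserUpdated(string UserId, string Email);

class Publisher
{
    private readonly List<Action<UserUpdated>> subscribers = new();

    // The publisher accepts subscriptions without knowing which service is behind them.
    public void Subscribe(Action<UserUpdated> handler) => subscribers.Add(handler);

    public void Publish(UserUpdated evt)
    {
        foreach (var handler in subscribers)
            handler(evt); // in production, this hop is where reliable delivery matters
    }
}

class Program
{
    static void Main()
    {
        var publisher = new Publisher();
        // Another service subscribes; the publisher remains unaware of it specifically.
        publisher.Subscribe(e => Console.WriteLine($"billing service saw update for {e.UserId}"));
        publisher.Publish(new UserUpdated("42", "user@example.com"));
    }
}
```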
At this point you may be wondering why I bother with the blog post: just put some enterprise-grade middleware in place, or use something like NServiceBus. But my goal is to examine how we should build a modern system based on SOA services that are unaware of each other.
So those are the goals. In the next post I will discuss the design before moving on to the implementation.