Thursday, April 18, 2013

Building your own System.Reflection API from scratch, Part II: Round Tripping, Starting with Byte Zero

Reading and writing assemblies is a vital part of any .NET metaprogramming library.


Duality at the heart of Tao


From the very beginning, I wanted to make sure that Tao was capable of both reading and writing assemblies. After being overwhelmed by parts of the ECMA-335 spec, I realized that the only way to effectively build Tao was to have it read and write assemblies in small portions that I could incrementally understand. PEVerify was ultimately useless in this case, since it only validates entire assemblies rather than the small binary subsets contained within each assembly.

The Goal

In an ideal scenario, Tao should be able to read an assembly (or part of an assembly), load it into memory, and then write it back to disk so that the output bytes look exactly the same as the bytes it originally read. Given that even a single byte read or written in the wrong position can mean the difference between a valid and an invalid assembly, I needed an approach that would both prevent regression bugs and act as a guide whenever I read or wrote the wrong bytes from an assembly. The problem is that no such tools exist, so what’s the best approach for incrementally round tripping an assembly?

Testing the seemingly untestable
Writing your own System.Reflection API from scratch can be very difficult, and there’s absolutely no room for mistakes. That’s why I chose to write it using a specification-driven approach: I didn’t have the time to test everything manually once the code was written, I needed an automated approach to testing since there were so many unwritten features that I still had to test, and I also needed Tao to evolve easily as my understanding of .NET assembly metadata increased over time. The idea was that I would use a hex editor to sample small byte arrays from an assembly and use those sample arrays as input to the code under test. Any particular round tripping feature in Tao would be considered “done” if what was read into memory could be serialized back to disk and match the sample array, byte for byte. At the time, there was so much that I didn’t know about the PE format, so I wanted to start small and use the tests as a guide to build my knowledge of the format over time. However, if you have a bunch of bytes in memory that you need to verify against a set of bytes on disk, how do you ensure that both sets of bytes match each other?

That’s where Tao’s hash function comes in handy:
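
Tao’s actual extension methods aren’t reproduced in this post, but a minimal sketch of the idea might look like the following C# (the GetHash name and the choice of SHA-256 are my own assumptions, not Tao’s exact code):

using System;
using System.IO;
using System.Security.Cryptography;

public static class HashExtensions
{
    // Hypothetical helper: hash an in-memory byte array so that it can be
    // compared against another set of bytes without a byte-by-byte loop.
    public static string GetHash(this byte[] bytes)
    {
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(bytes));
    }

    // Hypothetical helper: hash an entire stream (e.g., the original assembly on disk).
    public static string GetHash(this Stream stream)
    {
        stream.Seek(0, SeekOrigin.Begin);
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(stream));
    }
}

With something like this in place, a test only has to assert that the hash of the bytes Tao writes equals the hash of the bytes sampled from the original assembly.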


These two extension methods form the heart of Tao’s self-verification system. They allowed me to incrementally read, write, and ultimately verify every single byte of a .NET assembly as my understanding of the assembly format grew over time. The only problem I had at that point was that the test feedback was still too coarse. A hash lets you verify a whole block of data, but how do you make a test fail fast and track down the exact byte position where an incorrect byte was written?

Divide and conquer

In order to effectively test Tao, we need a way to track down an invalid byte at the very point where it is written. Each test should fail fast and fail hard the moment it encounters an invalid byte being read or written to disk. Tao cannot afford to wait until an entire chunk of bytes has been written before verifying it, simply because there’s no way to pinpoint the exact location of a byte mismatch if the chunk gets too large. For example, if you’re writing a byte array with two million elements and want the test to fail the instant the first invalid byte is written, how do you find the exact point where the mismatch occurred with nothing but a hash of the entire array?

As it turns out, if you have two side-by-side arrays of two million bytes each that Tao needs to diff, the simplest way to track down the differences between them using a hashing function is to eliminate the parts they have in common. The bytes that remain are the bytes that differ, and in this case we only need to find the first position where there is a mismatch. One of the most effective ways to eliminate the common parts of the two arrays is to recursively halve them into smaller chunks and compare each successively smaller chunk until the first mismatching byte is found. This approach allows Tao to compare even very large sets of data using fewer than fifty comparisons, and aside from the recursive function call, the implementation speaks for itself:
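
Tao itself is written in Nemerle and its actual getMismatchPosition isn’t reproduced here, but a rough C# sketch of the divide-and-conquer idea (assuming a mismatch is already known to exist somewhere in the given range) could look like this:

using System;
using System.IO;
using System.Security.Cryptography;

public static class StreamDiff
{
    // Returns the position of the first mismatching byte, assuming the caller
    // already knows the two streams differ somewhere within [offset, offset + count).
    public static long GetMismatchPosition(Stream expected, Stream actual, long offset, long count)
    {
        if (count <= 1)
            return offset;

        var half = count / 2;

        // If the first halves differ, the first mismatch lives in the first half;
        // otherwise it must be somewhere in the second half.
        if (HashChunk(expected, offset, half) != HashChunk(actual, offset, half))
            return GetMismatchPosition(expected, actual, offset, half);

        return GetMismatchPosition(expected, actual, offset + half, count - half);
    }

    private static string HashChunk(Stream stream, long offset, long count)
    {
        stream.Seek(offset, SeekOrigin.Begin);

        var buffer = new byte[count];
        var totalRead = 0;
        while (totalRead < count)
        {
            var bytesRead = stream.Read(buffer, totalRead, (int)(count - totalRead));
            if (bytesRead == 0)
                break;
            totalRead += bytesRead;
        }

        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(buffer));
    }
}

Halving the search space on every call means that even a two-million-byte stream needs only about twenty-one hash comparisons to isolate the first bad byte.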


Each call to getMismatchPosition recursively halves and compares the streams until the comparison range is only a single byte long. Once the mismatching byte has been found, the comparison ends, and the function returns the position where the mismatch occurred. Now that we have a way to pinpoint the exact mismatch position in any given stream, the next issue is finding a way to immediately fail a test at the very instant that any piece of Tao's code attempts to write an invalid chunk of bytes to a stream. This is where Tao's TracerStream class is immensely useful:
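
Tao’s real TracerStream (and its StreamDecorator base class) isn’t shown in this post; the following is a hedged C# sketch of the decorator idea as described below, with the boilerplate Stream members simply delegating to the wrapped stream:

using System;
using System.IO;

public class TracerStream : Stream
{
    private readonly Stream _targetStream;   // the stream being written to
    private readonly Stream _originalStream; // the known-good bytes to compare against

    public TracerStream(Stream targetStream, Stream originalStream)
    {
        _targetStream = targetStream;
        _originalStream = originalStream;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Compare the outgoing chunk against the same range in the original
        // stream and fail fast at the first mismatching byte.
        var expected = new byte[count];
        _originalStream.Seek(_targetStream.Position, SeekOrigin.Begin);

        var totalRead = 0;
        while (totalRead < count)
        {
            var bytesRead = _originalStream.Read(expected, totalRead, count - totalRead);
            if (bytesRead == 0)
                break;
            totalRead += bytesRead;
        }

        for (var i = 0; i < count; i++)
        {
            if (expected[i] != buffer[offset + i])
                throw new InvalidOperationException(
                    string.Format("Byte mismatch at position {0}", _targetStream.Position + i));
        }

        _targetStream.Write(buffer, offset, count);
    }

    // Everything else just delegates to the wrapped stream.
    public override void Flush() { _targetStream.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { return _targetStream.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return _targetStream.Seek(offset, origin); }
    public override void SetLength(long value) { _targetStream.SetLength(value); }
    public override bool CanRead { get { return _targetStream.CanRead; } }
    public override bool CanSeek { get { return _targetStream.CanSeek; } }
    public override bool CanWrite { get { return _targetStream.CanWrite; } }
    public override long Length { get { return _targetStream.Length; } }
    public override long Position { get { return _targetStream.Position; } set { _targetStream.Position = value; } }
}
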
There's nothing particularly interesting about the StreamDecorator base class other than the fact that it wraps (or decorates) an existing stream so that we don't have to reimplement an entire stream in the TracerStream class when all it needs to do is intercept the write calls. What is interesting, however, is that the TracerStream class compares every chunk as it is being written and immediately fails the write operation if the bytes being written don't match the bytes in the original stream. It's a simple and effective way to incrementally verify parts of a binary, and it's very useful for verifying .NET assemblies.

Now that we have the basic tools for verifying byte streams in an assembly, the next task is to find samples that Tao can parse into memory, write back to disk, and verify against the original bytes that were read in. Given this task, exactly what tools do we need to grab these byte samples and start roundtripping them in Tao's tests?

Finding binary samples using the right toolset


One of the best tools for analyzing and sampling raw .NET assemblies is CFF Explorer. When I first started writing Tao, I used CFF Explorer’s hex editor to create binary dumps as C# byte arrays that I could use as the expected values in Tao’s unit tests. For example, when I needed to test roundtripping MS-DOS headers with Tao, I used the sample bytes from CFF Explorer to verify that the DosHeaderWriter class was writing the correct bytes to its given output stream:
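
The original test isn’t reproduced in this post, but a hedged reconstruction of that kind of roundtrip test might look like the following (the DosHeaderWriter shown here is a stand-in with an assumed shape, the byte values are the standard MZ header prefix truncated for brevity, and GetHash is the hash extension sketched earlier):

using System.IO;
using NUnit.Framework;

// Hypothetical minimal writer: reads the MS-DOS header bytes from a stream
// and writes them back out; Tao's real DosHeaderWriter works with parsed header fields.
public class DosHeaderWriter
{
    private readonly byte[] _headerBytes;

    public DosHeaderWriter(Stream input)
    {
        _headerBytes = new byte[input.Length];
        input.Read(_headerBytes, 0, _headerBytes.Length);
    }

    public void Write(Stream output)
    {
        output.Write(_headerBytes, 0, _headerBytes.Length);
    }
}

[TestFixture]
public class DosHeaderTests
{
    [Test]
    public void ShouldWriteSameDosHeaderBytesThatWereRead()
    {
        // Sample bytes dumped from an existing assembly with CFF Explorer's hex editor
        // (truncated here; the real test uses the full 128-byte MS-DOS header).
        byte[] expectedBytes = { 0x4D, 0x5A, 0x90, 0x00, 0x03, 0x00, 0x00, 0x00 };

        var writer = new DosHeaderWriter(new MemoryStream(expectedBytes));
        var outputStream = new MemoryStream();
        writer.Write(outputStream);

        // The roundtrip is "done" when the written bytes hash to the same value as the sample.
        Assert.AreEqual(expectedBytes.GetHash(), outputStream.ToArray().GetHash());
    }
}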

Closing the loop
In a nutshell, the above test case demonstrates how Tao began, and it shows how I designed Tao for roundtripping from the very beginning. The idea is that in order to roundtrip an assembly, we need tools to sample small parts of it so that we can, in turn, write small tests with expectations and assertions describing what the assembly should look like in memory, even if we don't yet understand everything about the .NET assembly format. In the next series of posts, we'll dive into the .NET format with the knowledge that each of these successive read, write, and roundtripping tests will ensure that Tao always reads and writes correct assemblies, without fear of regressions. Fewer regressions means it's easier to work with the .NET assembly format, and we can simply move on to the next part of reading, writing, and roundtripping an assembly. It's a simple idea with some big design repercussions, and hopefully this series of posts will show you how I created Tao, as well as why I created it.

Coming up in the next post
In the next post in this series, we'll talk about how to build the simplest possible assembly from scratch with ILASM and parse it with Tao. We'll also dive deeper into the CLR metadata format and explore the CLR metadata tables, and I'll briefly talk about how those tables bind an assembly together, and show you how Tao roundtrips those tables. Lastly, I'll also talk about some experimental uses for being able to directly manipulate those tables, and how they might just change the way you look at .NET assembly manipulation. One half of the post will talk about the tables, and the other half will talk about some of the crazy ideas which inspired me to write Tao in the first place, so stay tuned!

Wednesday, April 10, 2013

Building your own System.Reflection API from scratch, Part I: Choosing Nemerle

Sometimes you have to reinvent a better light bulb to understand how it works.

Introduction 

About three years ago, I decided to take a break from working on LinFu. Although I was happy with some of the work I was doing with Cecil and IL rewriting, I wanted to understand the underlying abstractions that represent your everyday .NET assembly. Even though there was some good IL rewriting work being done by other bytecoders like Simon Cropp, that work focused mostly on making small, surgical changes to assemblies, such as implementing the INotifyPropertyChanged interface or making all public methods on a POCO class virtual.

For me, being able to make small changes to the IL wasn't enough. I wanted to understand how to manipulate .NET assemblies so that I could make some "big" changes, such as:

  • Type cloning
  • Dead type elimination
  • Code migrations
  • Modifying signed .NET BCL assemblies at runtime

Unfortunately, at the time of this post, there are no assembly manipulation tools capable of doing those things, and even the author of Cecil says that it doesn't support type cloning:
There's no easy way to move one type from a module to another, as it involves taking a lot of decisions about what to do with references. Also Mono.Merge is completely dead
Given that there are no tools that are capable of doing what I wanted to do with assemblies, and given that I wanted to master assembly manipulation, I decided to take the next logical step: I was going to build my own reflection API from scratch.

The Tao of Metaprogramming

In this series of small posts, I'll talk about some of the design decisions behind Tao, and share some of the design notes I've made as I continue to build it into my own reflection and metaprogramming API.

Choosing Nemerle

When I first started this project over two years ago, I needed a language with built-in support for Design by Contract features, since I was essentially going to create a library that builds .NET assemblies from scratch. Because I was starting with nothing, I needed a language strict enough to tell me exactly where I was failing, and why. Those were challenging days, because all I had as a reference was the CLR metadata specification, and there were no programs at all (including PEVerify) that would tell me what my mistakes were or where I was making them.

Essentially, I was flying blind, and I relied heavily on Nemerle to explicitly state my assumptions as runtime assertions. For example, Nemerle's Design by Contract macros and non-nullable type macros let you write more reliable code: the [NotNull] macro makes NullReferenceExceptions all but impossible, and the Design by Contract syntax extensions ensure that the code is always in a valid state. Those extensions are invaluable when you need to build an API that has no room for mistakes; in reading or writing .NET executables, even a single byte in the wrong position can give you an invalid assembly. The world of compiling and decompiling can be very cold and unforgiving, and I needed the best tools I could find to make sure that my API was doing exactly what I intended it to do.

Needless to say, building your own reflection API from scratch can be a very daunting task. Even today, as I look back on the work that I have already done with Tao, it's hard to imagine being able to get this far without the DbC language features that Nemerle has to offer, and in hindsight, I'm glad that I made that choice.

Coming up in the next post

In the next post, I'll talk about some of the challenges of reading a raw .NET portable executable and turning it into something meaningful that a program can understand. For example, what does the format of a .NET assembly look like? How is it different from say, an unmanaged DLL/EXE file? More importantly, how do you actually write tests that ensure that the bytes that you're reading into memory are exactly the same as the ones loaded from the disk? Those questions were just some of the issues that I had to solve, and in the next post, I'll tell you exactly how I solved them, as well as talk about some of the tools I had to (re)invent in order to solve those problems. Meanwhile, stay tuned!



Sunday, May 8, 2011

Introduction to IL Rewriting with Cecil, Part 1–Rewriting FizzBuzz and the Art of Redirecting Method Calls


The simplest possible code example that anyone can learn from.

Introduction

When I first started learning IL rewriting and Cecil several years ago, one of the difficulties I struggled with was that there were very few practical examples of how to take an existing assembly and modify it at runtime. In many ways, I was stranded in heavily undocumented territory, and needless to say, this lack of documentation made it very difficult to learn how to do anything useful with Cecil.

Meanwhile, in the Year 2011…

It’s now 2011, and I think it’s safe to say that for many people, IL rewriting (much less Cecil) is still a “big mystery wrapped in an enigma containing frustration”. Indeed, Cecil is an incredible library that lets you do some incredible things, but at the same time it can be very frustrating, since the learning curve is steep and there are still no practical guides for using it. Surely there must be some sample code out there that shows how to do the most basic tasks with Cecil, right?

Establishing the Feedback Loop

In order to learn any skill (such as IL rewriting), we need to establish a simple feedback loop that allows users to easily experiment with the tools they are given so that they know:

  • What went wrong if it doesn’t work
  • Where to fix it if it breaks
  • How to see the results of their experiments without getting mired in the implementation details of the tests themselves

In this case, we’ll need to set up a basic environment that will let users experiment and learn how to modify assemblies at runtime with Cecil. We will need:

  1. A test fixture that loads a sample assembly and gives users the chance to modify it before reloading the modified assembly into memory (An NUnit base fixture)
  2. A way to display/diagnose any invalid assembly errors that occur due to making changes to the original assembly (PEVerify)
  3. A setup that is easy to change, so that we can experiment with different approaches to modifying IL, thus “closing” the feedback loop (a rough sketch of such a fixture follows this list)
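
The actual fixture ships with the lesson code, but a hedged sketch of what such a base fixture could look like is shown here (the sample assembly name, the PEVerify path, and the Modify hook are placeholders of my own; the Cecil calls used are AssemblyDefinition.ReadAssembly and AssemblyDefinition.Write):

using System.Diagnostics;
using Mono.Cecil;
using NUnit.Framework;

public abstract class BaseAssemblyModificationFixture
{
    // Placeholder locations; the real values come from the lesson's configuration.
    protected const string SampleAssembly = "SampleLibrary.dll";
    protected const string PEVerifyPath = @"C:\Tools\PEVerify.exe";

    [Test]
    public void ShouldProduceAVerifiableAssembly()
    {
        // Load the sample assembly, let the derived fixture modify it, then save the result.
        var assembly = AssemblyDefinition.ReadAssembly(SampleAssembly);
        Modify(assembly);

        var modifiedPath = "Modified_" + SampleAssembly;
        assembly.Write(modifiedPath);

        // Run PEVerify against the modified assembly and fail the test on any error.
        var startInfo = new ProcessStartInfo(PEVerifyPath, modifiedPath)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var process = Process.Start(startInfo))
        {
            var output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            Assert.AreEqual(0, process.ExitCode, output);
        }
    }

    // Each lesson overrides this method to experiment with a different IL change.
    protected abstract void Modify(AssemblyDefinition assembly);
}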

Lost in Bytecode

Given these requirements, where would we even begin? It’s not every day that one decides to randomly parse .NET assemblies and learn how to change the underlying bytecode that ultimately defines their behavior. This can seem like a daunting task for even the most intelligent of budding interlopers, but fortunately for my readers, most of the work has already been done for you in these examples. All you need to do is sit back and scroll down the page, as I proceed to tell you the “ins” and “outs” about Cecil, and the practical lessons learned from rewriting IL. With that in mind, let’s get started!

A WriteLine for Another WriteLine

One of the simplest things that you can possibly do with Cecil is to swap a single static method call for another static method call with the same parameters and the same return type. (It’s a fairly simple operation since both methods have the same signature, and you don’t need to add any additional instructions to make it happen). In this case, I opted to swap all calls to Console.WriteLine() with calls to FakeConsole.WriteLine():
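
The original snippet isn’t reproduced in this post, but a hedged sketch of the swap it describes might look like this (the FizzBuzz and FakeConsole names come from the post, the method shapes are my own approximation, and older Cecil versions spell ImportReference as Import):

using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

// Stand-in for the replacement target described in the post.
public static class FakeConsole
{
    public static void WriteLine(string message)
    {
        System.Console.WriteLine("[Intercepted] " + message);
    }
}

public static class WriteLineSwapper
{
    public static void RedirectWriteLineCalls(ModuleDefinition module)
    {
        // Import the replacement method so it can be referenced from this module.
        var fakeWriteLine = module.ImportReference(
            typeof(FakeConsole).GetMethod("WriteLine", new[] { typeof(string) }));

        var printMethod = module.Types
            .Single(type => type.Name == "FizzBuzz")
            .Methods.Single(method => method.Name == "Print");

        // Use a LINQ query to find every call instruction in FizzBuzz.Print()...
        var callInstructions = printMethod.Body.Instructions
            .Where(instruction => instruction.OpCode == OpCodes.Call)
            .ToList();

        // ...and retarget each one to FakeConsole.WriteLine().
        foreach (var call in callInstructions)
            call.Operand = fakeWriteLine;
    }
}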

As you can see from the example above, I used a simple LINQ query to identify all the call instructions that needed to be modified. More experienced Cecil users will probably notice that I decided to rewrite all method calls to point to FakeConsole.WriteLine() instead of individually checking to make sure that the method call I was replacing was indeed Console.WriteLine(). Indeed, that was an intentional move, given that the FizzBuzz.Print() method doesn’t make any other external calls to any other methods besides Console.WriteLine().

Assuming that I somehow created an instruction that caused an invalid modified assembly, however, how would I be able to know what went wrong, much less know how to fix it?

PEVerify, how do I love and hate thee…

As it turns out, there is a tool called PEVerify.exe that can tell you whether or not the assemblies that you modify with Cecil are valid or invalid. For example, if I were to remove all the IL instructions out of the FizzBuzz.Print() method, PEVerify would give me the following error message:


(Believe me, it’s much prettier when it’s zoomed out)

PEVerify will examine any given assembly and be able to tell you whether or not the compiler (or in this case, you, the human compiler) made any mistakes in creating the assembly. It can be a very useful tool, and that’s why I modified the sample test fixture to run PEVerify right after the user modifies the sample assembly. If you don’t already have PEVerify installed, make sure you download it and configure the Lesson1 app.config file to point to where PEVerify is installed:


Once PEVerify has been configured as part of the tests, the rest is up to your imagination.

Exploring Method Replacement and Beyond

Now that the basic IL rewriting setup has been laid out for you, the onus is on you to explore the possibilities with Cecil and IL rewriting, even if it means that you have to start with some small, basic steps. In the next installment in this series, I’ll show you how to use PEVerify and Cecil to keep the stack balanced so that you can do things like swap static method calls for instance method calls, and even do things like install runtime hooks so you can change your code as your application is running. Stay tuned!

Wednesday, May 4, 2011

Dynamically Intercepting Thrown Exceptions with LinFu.AOP 2.0


A screenshot of LinFu dynamically catching a thrown exception.

On Error, Resume Interception

Another useful thing that LinFu.AOP allows you to do is to intercept (and rethrow) exceptions within your applications at runtime. LinFu makes it so easy, in fact, that all you have to do is add the following lines to your CSProj file:
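
The exact lines aren’t reproduced in this post, but the wiring is the same PostWeaveTask hookup shown in the Console.WriteLine interception post below; note that the task attributes needed to enable exception interception may differ from the InterceptAllMethodCalls flag shown here:

<PropertyGroup>
  <PostWeaveTaskLocation>$(MSBuildProjectDirectory)\$(OutputPath)\..\..\..\lib\LinFu.Core.dll</PostWeaveTaskLocation>
</PropertyGroup>
<UsingTask TaskName="PostWeaveTask" AssemblyFile="$(PostWeaveTaskLocation)" />
<Target Name="AfterBuild">
  <PostWeaveTask TargetFile="$(MSBuildProjectDirectory)\$(OutputPath)$(MSBuildProjectName).dll" InterceptAllMethodCalls="true" />
</Target>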


Exceptionally Simple

To use LinFu.AOP's dynamic exception handling capabilities, all you need to do is make the following call to handle all exceptions being thrown in your application:
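
The original snippet isn’t shown here, but based on the description that follows, the registration boils down to a single call (SampleExceptionHandler and BankAccount are types from the LinFu.AOP example code):

// Register a global handler for every exception thrown by woven code.
ExceptionHandlerRegistry.SetHandler(new SampleExceptionHandler());

// In the example, BankAccount.Deposit() throws; with the handler registered,
// LinFu.AOP intercepts the exception instead of letting the app crash.
var account = new BankAccount(100);
account.Deposit(100);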


Try/Catch Me, If You Can

The call to ExceptionHandlerRegistry.SetHandler tells LinFu to hook the SampleExceptionHandler into your application so that any exception that gets thrown is automatically handled by the given exception handler. Under normal circumstances (where interception is disabled), the call to account.Deposit() would cause the app to crash, but as this example shows, LinFu.AOP intercepts the thrown exception before it can crash the rest of the app.

What makes this even more interesting, however, is the IExceptionHandlerInfo instance that describes the context from which the exception was thrown:
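
The interface definition isn’t reproduced in this post; a hedged sketch based on the description below might look like the following (ShouldSkipRethrow and ReturnValue are named in the post, while the other members are my own guesses at the shape of the real interface):

using System;

public interface IExceptionHandlerInfo
{
    // The exception that was intercepted.
    Exception Exception { get; }

    // Describes the method (and arguments) that threw the exception.
    IInvocationInfo InvocationInfo { get; }

    // When true, LinFu swallows the exception instead of rethrowing it.
    bool ShouldSkipRethrow { get; set; }

    // The value to return from the interrupted method if the exception is swallowed.
    object ReturnValue { get; set; }
}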


More Information Than You Can Throw An Exception At


The IExceptionHandlerInfo interface has enough information to describe the method that caused the exception, as well as properties such as ShouldSkipRethrow that let you decide whether LinFu should simply swallow the exception and keep running the program as if it had never been thrown. The ReturnValue property, in turn, lets you alter the return value of the interrupted method in case you want to resume it and provide an alternate return value as if no exception had ever been thrown.

As you can see, LinFu.AOP makes it really easy to transparently handle exceptions in your applications, and if this post saves even a few developers the headache of manually diagnosing their applications, then I'd consider it a gratifying success.

Enjoy!

EDIT: You can get the code examples for LinFu.AOP's dynamic exception handling here.

Thursday, April 28, 2011

Intercepting Console.WriteLine and Other Third-Party Method Calls with LinFu.AOP 2.0


Worth a thousand words

In case you are wondering, yes, that is a screenshot of LinFu.AOP intercepting calls to Console.WriteLine() at runtime.
One of the more useful things that LinFu.AOP can do is intercept calls to third-party assemblies that aren’t necessarily under your control. In fact, LinFu makes it so easy that all you have to do to make the interception happen is add a reference to LinFu and the following lines to your CSProj file, just like I did with my SampleLibrary.csproj file:

<PropertyGroup>
  <PostWeaveTaskLocation>$(MSBuildProjectDirectory)\$(OutputPath)\..\..\..\lib\LinFu.Core.dll</PostWeaveTaskLocation>
</PropertyGroup>
<UsingTask TaskName="PostWeaveTask" AssemblyFile="$(PostWeaveTaskLocation)" />
<Target Name="AfterBuild">
  <PostWeaveTask TargetFile="$(MSBuildProjectDirectory)\$(OutputPath)$(MSBuildProjectName).dll" InterceptAllMethodCalls="true" />
</Target>

‘Automagically’ Delicious

Once you reload and rebuild the solution, LinFu.AOP will automatically modify your code after the build runs so that you can intercept it at runtime. LinFu does this by adding hooks to your code so that you can change it while the program is running. In this case, I cast the modified BankAccount class to an IModifiableType instance so that I could plug in my custom ConsoleInterceptor:



// Create the BankAccount class just like normal...
var account = new BankAccount(100);
// Notice how LinFu.AOP automatically implements IModifiableType so you can intercept/replace method calls at runtime
var modifiableType = account as IModifiableType;
if (modifiableType != null)
    modifiableType.MethodCallReplacementProvider = new WriteLineMethodReplacementProvider();

account.Deposit(100);
The WriteLineMethodReplacementProvider class, in turn, determines the method calls that should be intercepted at runtime:

public class WriteLineMethodReplacementProvider : IMethodReplacementProvider
{
    public bool CanReplace(object host, IInvocationInfo info)
    {
        var declaringType = info.TargetMethod.DeclaringType;
        if (declaringType != typeof(System.Console))
            return false;

        // We're only interested in replacing Console.WriteLine()
        var targetMethod = info.TargetMethod;
        return targetMethod.Name == "WriteLine";
    }

    public IInterceptor GetMethodReplacement(object host, IInvocationInfo info)
    {
        return new ConsoleInterceptor();
    }
}

Choosing which methods to intercept

As you can see from the example above, this class ensures that only calls to Console.WriteLine() are ever intercepted. The ConsoleInterceptor class, in turn, is responsible for replacing and intercepting the Console.WriteLine() method:

public class ConsoleInterceptor : IInterceptor
{
    public object Intercept(IInvocationInfo info)
    {
        var targetType = info.TargetMethod.DeclaringType;
        var target = info.Target;
        var targetMethod = info.TargetMethod;
        var arguments = info.Arguments;

        Console.WriteLine("Intercepted method named '{0}'", targetMethod.Name);

        // Call the original WriteLine method
        targetMethod.Invoke(null, arguments);

        // Console.WriteLine doesn't have a return value, so it's OK to return null
        return null;
    }
}
The most interesting part about the code example is how LinFu.AOP adds the method call hooks without touching a single line of the source code. All of the IL rewriting is done behind the scenes so you won’t have to worry about the gory details of using an AOP framework in your legacy code. The beauty of this approach is that it allows you to intercept any method call, even if that method call is a part of the .NET base class libraries.
You can find the LinFu.AOP.Examples library here at Github.
NOTE: Please intercept BCL method calls responsibly.

Wednesday, April 27, 2011

Beyond Duck Typing with LinFu.DynamicObject: Creating Types that can Change at Runtime

A Post-Easter Egg

One of the hidden features that LinFu.DynamicObject has is the ability to dynamically add properties and methods to itself using a shared type definition at runtime. In other words, you can have two or more LinFu.DynamicObject instances share the same DynamicType, and any changes you make to that type will be propagated to all LinFu.DynamicObject instances that share that same type:
using LinFu.Reflection.Extensions;
using NUnit.Framework;

namespace LinFu.Reflection.Tests
{
    [TestFixture]
    public class DynamicTypeTests
    {
        [Test]
        public void ShouldBeAbleToShareTheSameDynamicType()
        {
            var typeSpec = new TypeSpec() { Name = "Person" };

            // Add an age property
            typeSpec.AddProperty("Age", typeof(int));

            // Attach the DynamicType named 'Person' to a bunch of dynamic objects
            var personType = new DynamicType(typeSpec);
            var first = new DynamicObject();
            var second = new DynamicObject();

            first += personType;
            second += personType;

            // Use both objects as persons
            IPerson firstPerson = first.CreateDuck<IPerson>();
            IPerson secondPerson = second.CreateDuck<IPerson>();

            firstPerson.Age = 18;
            secondPerson.Age = 21;

            Assert.AreEqual(18, firstPerson.Age);
            Assert.AreEqual(21, secondPerson.Age);

            // Change the type so that it supports the INameable interface
            typeSpec.AddProperty("Name", typeof(string));
            INameable firstNameable = first.CreateDuck<INameable>();
            INameable secondNameable = second.CreateDuck<INameable>();

            firstNameable.Name = "Foo";
            secondNameable.Name = "Bar";

            Assert.AreEqual("Foo", firstNameable.Name);
            Assert.AreEqual("Bar", secondNameable.Name);
        }
    }
}

Evolving Ducks

Most of the code above is self-explanatory, and the most interesting part about this code is the fact that it has two DynamicObject instances that share the same DynamicType instance. Once the Age property was added to the DynamicType definition, both first and second DynamicObjects automatically ‘inherited’ the additional Age property that was added to the DynamicType at runtime. Another interesting piece of code was the duck typing call to the IPerson interface, which wasn’t possible until after the Age property was added:

// Use both objects as persons
IPerson firstPerson = first.CreateDuck<IPerson>();
IPerson secondPerson = second.CreateDuck<IPerson>();

firstPerson.Age = 18;
secondPerson.Age = 21;

Assert.AreEqual(18, firstPerson.Age);
Assert.AreEqual(21, secondPerson.Age);


As you can see from the example above, LinFu.DynamicObject is smart enough to change its definition every time the attached DynamicType definition changes, and that’s why it was also able to duck type itself to the INameable interface:

// Change the type so that it supports the INameable interface
typeSpec.AddProperty("Name", typeof(string));
INameable firstNameable = first.CreateDuck<INameable>();
INameable secondNameable = second.CreateDuck<INameable>();

firstNameable.Name = "Foo";
secondNameable.Name = "Bar";

Assert.AreEqual("Foo", firstNameable.Name);
Assert.AreEqual("Bar", secondNameable.Name);
Pretty straightforward, isn't it? (LinFu.DynamicObject has had this feature for well over 4 years, but I never got around to publishing it until now). Enjoy!

Tuesday, April 26, 2011

Duck Typing with LinFu & C# 4.0’s Dynamic Keyword

The Lame Duck

When C# 4.0 came out with the dynamic keyword, I was pretty excited about the prospect of Ruby-like features finally being baked into the C# language itself. But as one of my twitter followers and friends pointed out in a post from a few years ago, C# 4.0 still lacks duck typing support, and the .NET BCL doesn’t seem to have anything that can do something like the following code:

public interface ICanAdd
{
    int Add(int a, int b);
}

public class SomethingThatAdds
{
    private ICanAdd _adder;

    public SomethingThatAdds(ICanAdd adder)
    {
        _adder = adder;
    }

    public int FirstNumber { get; set; }
    public int SecondNumber { get; set; }

    public int AddNumbers()
    {
        return _adder.Add(FirstNumber, SecondNumber);
    }
}

There has to be some way to construct an object at runtime and map it to the ICanAdd interface, but the current .NET Base Class Libraries don’t seem to offer a solution. As Dave Tchepak pointed out in his post, the following dynamic code will fail miserably at runtime:

public class Dynamic : DynamicObject
{
    Dictionary<string, object> members = new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        members[binder.Name] = value;
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return members.TryGetValue(binder.Name, out result);
    }
}

// ..and the test would look like:
[Test]
public void CannotUseDynamicAdderForAnythingUseful()
{
    dynamic adder = new Dynamic();
    adder.Add = new Func<int, int, int>((first, second) => first + second);
    var somethingThatCanAdd = new SomethingThatAdds(adder); /* Fails here at runtime */
    somethingThatCanAdd.FirstNumber = 10;
    somethingThatCanAdd.SecondNumber = 20;
    Assert.That(somethingThatCanAdd.AddNumbers(), Is.EqualTo(30));
}



The problem is that the runtime isn’t smart enough to figure out that there has to be a duck-typing cast to the ICanAdd interface in order to use the SomethingThatAdds class, and that’s where LinFu’s DynamicObject comes in handy.


If it walks and quacks like a duck, then it’s all good


LinFu.DynamicObject is flexible enough that it can let you build object instances at runtime and then ‘strongly’ duck type those object instances to any interface that matches the intended duck type. In this case, we need to find a way to build up something that can map to an ICanAdd interface instance so that it can be used by the SomethingThatAdds class:

[Test]
public void CanCreateADynamicAdder()
{
    CustomDelegate addBody = delegate(object[] args)
    {
        int a = (int)args[0];
        int b = (int)args[1];
        return a + b;
    };

    // Map LinFu's DynamicObject to an ICanAdd interface
    var linfuDynamicObject = new DynamicObject(new object());
    var returnType = typeof(int);
    var parameterTypes = new Type[] { typeof(int), typeof(int) };
    linfuDynamicObject.AddMethod("Add", addBody, returnType, parameterTypes);

    // If it looks like a duck...
    Assert.IsTrue(linfuDynamicObject.LooksLike<ICanAdd>());

    // ...then it must be a duck, right?
    var somethingThatCanAdd = new SomethingThatAdds(linfuDynamicObject.CreateDuck<ICanAdd>());
    somethingThatCanAdd.FirstNumber = 10;
    somethingThatCanAdd.SecondNumber = 20;
    Assert.AreEqual(30, somethingThatCanAdd.AddNumbers());
}
 The call to the DynamicObject.CreateDuck() method does all the heavy lifting for you so you don’t have to worry about the details of how to make the object behave like a duck. It just works, and that’s the power that LinFu offers.
(EDIT: You can grab the source code and examples for LinFu.DynamicObject here at Github)
 
