Wednesday, 1 October 2008
Dublin - it's not a city, it's a new Microsoft codename :)
Friday, 29 August 2008
27 stack frames on 32 bit OS and 22 stack frames on 64 bit OS
The 64-bit JIT and the 32-bit JIT are two very different beasts, which is what one would suspect. Still, I was rather surprised that the 64-bit JIT was able to reduce the call stack of a call by 20% in comparison with the 32-bit JIT. A piece of my code crawls the call stack to gather some runtime characteristics, and it broke on a 64-bit machine. That's how I started a small investigation and found this difference. In most cases it's not a good idea to rely on the way the JIT works, as this is subject to change, but from time to time there is simply no other way. If you are looking for more information about the 32/64-bit JITs, check this blog post by Scott Hanselman, which is an overview with links to other Microsoft bloggers who dive much deeper into the details.
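For reference, crawling the managed call stack can be done with System.Diagnostics.StackTrace; a minimal sketch (the class name is mine, and the frame count you get will differ between the 32-bit and 64-bit JIT, which is exactly what bit me):

```csharp
using System;
using System.Diagnostics;

public static class StackCrawler
{
    // Counts the managed frames above (and including) the current method.
    // The result depends on the JIT (32 vs 64 bit) and on inlining,
    // so never hard-code an expected value.
    public static int CountFrames()
    {
        StackTrace trace = new StackTrace(false); // no file info needed
        return trace.FrameCount;
    }

    public static void Main()
    {
        Console.WriteLine("Frames: " + CountFrames());
    }
}
```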
Thursday, 28 August 2008
Henrik Kniberg and his list of top 10 ways of screwing Scrum
Watch this video to see Henrik talk in a compelling way about what you should avoid when you practice Scrum. Every second slide is accompanied by a real-life story, which makes this 1.5-hour presentation fly by in no time.
Tuesday, 22 July 2008
Monday, 30 June 2008
A setting that can boost performance of any heavily network-dependent application
And the setting is (it raises the default limit of two concurrent HTTP connections per host):
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="96"/>
  </connectionManagement>
</system.net>
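If you can't touch the config file, the same limit can be raised programmatically; a sketch (96 is just the value that worked for me, not a magic number):

```csharp
using System;
using System.Net;

public static class ConnectionTuning
{
    public static void Main()
    {
        // Equivalent to <add address="*" maxconnection="96"/> in app.config.
        // Must run before the first request to take effect for new ServicePoints.
        ServicePointManager.DefaultConnectionLimit = 96;
        Console.WriteLine(ServicePointManager.DefaultConnectionLimit);
    }
}
```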
WCF service hangs - the pool of available sessions might have been exhausted
But this is going to work only as long as you control all the clients. If you don't, then some of them might not close their sessions properly, which in turn might lead to a resource leak on the service side. This is not an easy problem to solve unless you are ready and able to abandon sessions.
From my perspective it's much more important to know that a service is about to reach its limit of sessions, or that it is hung because it has already reached it. Unfortunately, as far as I know, there is no out-of-the-box way of monitoring the number of active sessions per WCF service using Performance Counters. This leaves us with only one option: we have to write a custom performance counter on our own. This can be done as a WCF extension that implements the IChannelInitializer and IInputSessionShutdown interfaces.
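A minimal sketch of such an extension (the class and member names are mine; a real implementation would publish the value through a custom PerformanceCounter and guard against double-decrements when a channel both faults and closes):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Threading;

// Hypothetical names; wire an instance up via a service behavior that adds it to
// ChannelDispatcher.ChannelInitializers and DispatchRuntime.InputSessionShutdownHandlers.
public class SessionCounter : IChannelInitializer, IInputSessionShutdown
{
    private static int activeSessions;

    public static int ActiveSessions
    {
        get { return activeSessions; }
    }

    // called by WCF when a new channel (session) is accepted
    public void Initialize(IClientChannel channel)
    {
        Interlocked.Increment(ref activeSessions);
        channel.Closed += delegate { Interlocked.Decrement(ref activeSessions); };
    }

    // called when the client shuts down an input session
    public void DoneReceiving(IDuplexContextChannel channel)
    {
    }

    // called when a session channel faults
    public void ChannelFaulted(IDuplexContextChannel channel)
    {
    }
}
```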
When a service seems to be frozen the story is not that simple, as there might be dozens of reasons why it is in such a state. The only way I know to prove or disprove that the problem is a lack of available sessions is to create a memory snapshot of the process the service is hosted in and use Debugging Tools for Windows to check the state of all ServiceThrottle objects.
The following steps show how to carry out such an investigation :)
0. Install Debugging Tools for Windows and copy C:\Windows\Microsoft.NET\Framework\v2.0.50727\sos.dll (the SOS extension for .NET debugging) to the folder where Debugging Tools for Windows is installed.
1. Find the id of the process that is hosting the WCF service we are examining. In my case it is one of the IIS worker processes.
2. Create a memory snapshot using the adplus script, which is part of Debugging Tools for Windows. By default adplus creates all snapshots under the folder where Debugging Tools for Windows is installed.
3. Launch windbg.exe (the same location as adplus) and open the memory snapshot.
4. Type .load sos and press enter to load sos.dll into windbg.
5. Type !dumpheap -type ServiceThrottle -short in the command line to list all objects of type ServiceThrottle that exist on the managed heap. By the list of all objects I mean a list of their addresses in memory.
6. For each address on the output list carry out steps 7-8.
7. Type !do <address of the object> to see what's inside of it.
8. The ServiceThrottle object has a bunch of fields, but only one of them, called sessions, is interesting from our perspective. Type !do <address of the sessions field> to see what's inside of it.
If you find a sessions object whose count and capacity fields are set to the same value, then you know that the pool of available sessions has been exhausted. If you can't find one, then at least you know that something else is wrong with your service.
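Put together, a windbg session looks roughly like this (the addresses and values below are made up purely for illustration):

```
0:000> .load sos
0:000> !dumpheap -type ServiceThrottle -short
02a41b3c
0:000> !do 02a41b3c
...
    02a41b88  sessions  ...
0:000> !do 02a41b88
...
    count     10
    capacity  10     <-- count == capacity: the session pool is exhausted
```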
Happy debugging :)
Saturday, 7 June 2008
Windows XP 64 bit - a relatively unknown but great operating system
- XP 64 bit can leverage all 4 GB of RAM, which allows me to run 4-5 instances of VS.NET, SQL Server, Outlook, Firefox and a dozen background applications without any noticeable performance degradation.
- XP 64 bit shares core components with Windows Server 2003 which would explain its great performance and reliability.
- XP 64 bit comes with IIS 6.0. I suppose I don't have to explain why this is big. If my point is not clear to you, just compare IIS 5.1 and IIS 6.0 and you will immediately understand what I'm talking about.
Thursday, 29 May 2008
WeakEvent - you wish it was here
When you define an event you don't have to write the add/remove methods on your own because the C# compiler generates them automatically. Basically, the following code snippet:
public event EventHandler<EventArgs> MyEvent;
is just "syntactic sugar" that the C# compiler transforms into a much more verbose form. You can find a detailed description of this process in CLR via C# by Jeffrey Richter. From our perspective the most important thing is that we can override the default behavior of the compiler and inject our own implementation of the add and remove methods in a way that is completely transparent to subscribers. The SomeClass and Subscriber classes below show how it can be done. Don't worry about the WeakEvent<T> class as it will be explained later.
public class SomeClass
{
    // the field must be initialized, otherwise Add/Remove/Invoke
    // would throw a NullReferenceException
    private readonly WeakEvent<EventArgs> myWeakEvent = new WeakEvent<EventArgs>();

    public event EventHandler<EventArgs> MyWeakEvent
    {
        add
        {
            myWeakEvent.Add(value);
        }
        remove
        {
            myWeakEvent.Remove(value);
        }
    }

    private void SomeMethodThatNeedsToRaiseMyWeakEvent()
    {
        OnMyWeakEvent(new EventArgs());
    }

    protected void OnMyWeakEvent(EventArgs args)
    {
        myWeakEvent.Invoke(args);
    }
}
public class Subscriber
{
    private SomeClass someClass;

    public Subscriber()
    {
        someClass = new SomeClass();
        someClass.MyWeakEvent += Method;
    }

    private void Method(object sender, EventArgs e)
    {
    }
}
The Add and Remove methods take a delegate as their input parameter. Every .NET delegate is an object with two key properties. One of them is a reference to the target of the delegate (the object the delegate will be called on) and the second one is a description of the method, provided as an instance of the System.Reflection.MethodInfo class. Static delegates have the target property set to null. The target field is the root of all evil, as it keeps the subscriber alive (it is a strong reference to the object the delegate will be called on). Fortunately the .NET Framework provides a class that can act as a man in the middle between the method and its target, which lets us break the direct link between them.
The class that makes it possible is called (no surprise) System.WeakReference. An instance of the System.WeakReference class keeps a weak reference (instead of a strong one) to the object that is passed to its constructor. The weak reference can be turned into a strong reference by accessing its Target property and storing the value in an ordinary variable; in this way we resurrect the object. If the object has already been garbage-collected, the property returns null. All the aforementioned functionality is encapsulated in a custom class that I called WeakDelegate.
internal class WeakDelegate
{
    private WeakReference target;
    private MethodInfo method;

    public object Target
    {
        get
        {
            return target.Target;
        }
        set
        {
            target = new WeakReference(value);
        }
    }

    public MethodInfo Method
    {
        get { return method; }
        set { method = value; }
    }
}
WeakEvent<T> is a class that takes advantage of the WeakDelegate class to solve the problem outlined in the first paragraph. The implementation below is rather straightforward, but two pieces of code might need some explanation. The first one is inside the Invoke method. Internally we store instances of the WeakDelegate class, which means that we cannot invoke them directly; every time one of them needs to be executed we have to assemble an instance of the System.Delegate class. I don't know if the way the code creates delegates is the fastest one, but I measured the execution time of that statement and the average was 0.005384 ms per delegate, which is fast enough for me. The second one is related to the fact that the locking is done in a way that prevents threads from waiting forever: if a thread can't enter the critical section within 15 seconds, it throws an exception. The rationale behind that approach is explained here.
public class WeakEvent<T> where T : EventArgs
{
    // the delegate type used by ExecuteExclusively
    // (this declaration was missing from the original listing)
    private delegate void Operation();

    private readonly List<WeakDelegate> eventHandlers;
    private readonly object eventLock;

    public WeakEvent()
    {
        eventHandlers = new List<WeakDelegate>();
        eventLock = new object();
    }

    public void Invoke(T args)
    {
        ExecuteExclusively(delegate
        {
            for (int i = 0; i < eventHandlers.Count; i++)
            {
                WeakDelegate weakDelegate = eventHandlers[i];
                // don't move this line to the ELSE block
                // as the object needs to be resurrected
                object target = weakDelegate.Target;
                if (IsWeakDelegateInvalid(target, weakDelegate.Method))
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
                else
                {
                    Delegate realDelegate = Delegate.CreateDelegate(typeof(EventHandler<T>),
                                                                    target, weakDelegate.Method);
                    EventHandler<T> eventHandler = (EventHandler<T>)realDelegate;
                    // note: the sender passed to handlers is the WeakEvent<T> instance,
                    // not the class that owns the event
                    eventHandler(this, args);
                }
            }
        });
    }

    public void Remove(EventHandler<T> value)
    {
        ExecuteExclusively(delegate
        {
            for (int i = 0; i < eventHandlers.Count; i++)
            {
                WeakDelegate weakDelegate = eventHandlers[i];
                object target = weakDelegate.Target;
                if (IsWeakDelegateInvalid(target, weakDelegate.Method))
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
                else if (value.Target == target && value.Method == weakDelegate.Method)
                {
                    eventHandlers.RemoveAt(i);
                    i--;
                }
            }
        });
    }

    public void Add(EventHandler<T> value)
    {
        ExecuteExclusively(delegate
        {
            RemoveInvalidDelegates();
            WeakDelegate weakDelegate = new WeakDelegate();
            weakDelegate.Target = value.Target;
            weakDelegate.Method = value.Method;
            eventHandlers.Add(weakDelegate);
        });
    }

    private void RemoveInvalidDelegates()
    {
        for (int i = 0; i < eventHandlers.Count; i++)
        {
            WeakDelegate weakDelegate = eventHandlers[i];
            if (IsWeakDelegateInvalid(weakDelegate))
            {
                eventHandlers.RemoveAt(i);
                i--;
            }
        }
    }

    private void ExecuteExclusively(Operation operation)
    {
        bool result = Monitor.TryEnter(eventLock, TimeSpan.FromSeconds(15));
        if (!result)
        {
            throw new TimeoutException("Couldn't acquire a lock");
        }
        try
        {
            operation();
        }
        finally
        {
            Monitor.Exit(eventLock);
        }
    }

    private bool IsWeakDelegateInvalid(WeakDelegate weakDelegate)
    {
        return IsWeakDelegateInvalid(weakDelegate.Target, weakDelegate.Method);
    }

    private bool IsWeakDelegateInvalid(object target, MethodInfo method)
    {
        return target == null && !method.IsStatic;
    }
}
You might have noticed that there is some housekeeping going on whenever one of the Add, Remove or Invoke methods is called. The reason why we need to do this is that WeakEvent<T> keeps a collection of WeakDelegate objects that might contain methods bound to objects (targets) that have been garbage-collected. In other words, we need to take care of getting rid of invalid delegates on our own. Solutions to this problem can vary from very simple to very sophisticated. The one that works in my case basically scans the collection of delegates and removes invalid ones every time a delegate is added, removed or the event is invoked. It might sound like overkill, but it works fine for events that have around 1000-5000 subscribers and it's very simple. You might want to have a background thread that checks the collection every X seconds, but then you need to figure out what the value of X is in your case. You can go even further and keep the value adaptive, but then your solution gets even more complicated. In my case the simplest solution works perfectly fine.
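To see the weak behaviour in action, here is a hypothetical demo that uses the WeakEvent<T> class above (whether the subscriber is collected on the first GC.Collect can vary between runtimes and build configurations, so treat it as a sketch rather than a deterministic test):

```csharp
using System;

public static class WeakEventDemo
{
    private class Counter
    {
        public static int Calls;
        public void Handle(object sender, EventArgs e) { Calls++; }
    }

    private static void Subscribe(WeakEvent<EventArgs> evt)
    {
        // the subscriber is only rooted inside this method;
        // once we return, nothing but the WeakDelegate refers to it
        Counter counter = new Counter();
        evt.Add(counter.Handle);
    }

    public static void Main()
    {
        WeakEvent<EventArgs> evt = new WeakEvent<EventArgs>();
        Subscribe(evt);
        evt.Invoke(EventArgs.Empty);   // the subscriber is most likely still alive here

        GC.Collect();
        GC.WaitForPendingFinalizers();

        evt.Invoke(EventArgs.Empty);   // a collected handler is silently dropped
        Console.WriteLine("Calls: " + Counter.Calls);
    }
}
```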
Hopefully this post will save someone an evening or two :).
Monday, 28 April 2008
Machines are predictable, people are not
I suppose we would all agree with that and that's why smart people try to develop processes to make us more predictable. On the other hand nobody likes being constrained by anything and especially a process. Some people call this kind of lack of structure freedom, some call it chaos :). From my experience a bit of process might actually help a lot whereas a complete lack of it leads sooner or later to a disaster. Scrum is one of the approaches that let people develop software in a predictable way and that's the topic of the next MTUG event (29th April) that I'm not going to miss. See you there.
Wednesday, 16 April 2008
Never ever synchronize threads without specifying a timeout value
- CLR, concurrent programming: Joe Duffy (Microsoft)
- CLR: CLR via C#, Jeffrey Richter (Wintellect)
- Debugging: Tess (Microsoft)
- Minimize locking - Basically, lock as little as possible and never execute code that is unrelated to a given shared resource inside its critical section. Most problems I've seen were caused by code in a critical section doing more than was absolutely needed.
- Always use a timeout - Surprisingly, all synchronization primitives tend to encourage developers to use overloads that never time out. One of the drawbacks of this approach is that if there is a problem with a piece of code, the application hangs and nobody has any idea why. The only way to figure it out is to create a dump of the process (if you are lucky enough and the process is still hanging around) and debug it using Debugging Tools for Windows. I can tell you that this is not the best way of tackling production issues when every minute matters. But if you only use APIs that let you specify a timeout, then whenever a thread fails to acquire a critical section within a given period of time it can throw an exception and it's immediately obvious what went wrong.

Default                 Preferred
Monitor.Enter(obj)      Monitor.TryEnter(obj, timeout)
WaitHandle.WaitOne()    WaitHandle.WaitOne(timeout, context)
The same logic applies to all classes that derive from WaitHandle: Semaphore, Mutex, AutoResetEvent, ManualResetEvent.
- Never call external code when in a critical section - Calling a piece of code that was passed into a critical section from outside is a big risk, because there is a good chance that at the time the code was designed nobody even thought it might be run in a critical section. Such code might try to execute a long-running task or acquire another critical section. If you do something like that, you are simply asking for trouble :)
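The preferred column above can be wrapped once and reused everywhere; a minimal sketch (the helper name is mine):

```csharp
using System;
using System.Threading;

public static class Locking
{
    // Acquire lockObj within the timeout or fail loudly,
    // instead of letting the application hang forever.
    public static void WithLock(object lockObj, TimeSpan timeout, Action body)
    {
        if (!Monitor.TryEnter(lockObj, timeout))
        {
            throw new TimeoutException("Couldn't acquire the lock within " + timeout);
        }
        try
        {
            body();
        }
        finally
        {
            Monitor.Exit(lockObj);
        }
    }
}
```

When the lock cannot be acquired, the TimeoutException (ideally with a descriptive message) shows up in your logs immediately, which is exactly the diagnosability the post argues for.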
Wednesday, 26 March 2008
MIX summary in Dublin
It looks like there will be a micro MIX like event in Dublin in May - http://visitmix.com/2008/worldwide/. It might be interesting.
Sunday, 24 February 2008
There is no perfect job
I suppose we all know that there are always some "ifs" and "buts". Edge Pereira wrote a blog post about a few of them related to human-human interaction. If I had to choose a single sentence from his post, I would go for this one: "if an employee does not know the reason of his daily work, he will never wear the company's jersey". Needless to say, I totally agree with the whole post.
Friday, 15 February 2008
ReSharper 4 - nightly builds available at last
At this stage I almost refuse to write code without ReSharper. I know it's bad, but it's not the worst addiction ever :). Fortunately, JetBrains decided to release nightly builds of ReSharper 4 to the public. Sweet.
Tuesday, 12 February 2008
C# generics - parameter variance, its constraints and how it affects WCF
List<string> stringList = null;
List<object> objectList = stringList; // this line causes a compilation error
Error 1 Cannot implicitly convert type 'System.Collections.Generic.List<string>' to 'System.Collections.Generic.List<object>'
Generics are all over the place in WCF, and you would think that this is always beneficial to all of us. Well, it depends. One of the problems I noticed is that you cannot easily handle generic types in a generic way. I know it doesn't sound good :) but that's what I wanted to say. The best example is ClientBase<T>, the base class for auto-generated proxies. VS.NET generates a proxy type per contract (interface), which might lead to a situation where you need to manage quite a few different proxies. Let's assume that we use username and password as our authentication method and we want a single place where the credentials are set. The method might look like the one below:

public void ConfigureProxy(ClientBase<Object> proxy)
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}

Unfortunately we can't pass a proxy of type ClientBase<IMyContract> to that method because of the invariant nature of C# generics. I can see at least two options for getting around the issue. The first one requires you to clutter the method with a generic parameter despite the fact that there is no use for it.
public void ConfigureProxy<T>(ClientBase<T> proxy) where T : class
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}

You can imagine I'm not a big fan of this solution. The second one is based on the idea that the non-generic part of the public interface of the ClientBase class is exposed as either a non-generic ClientBase class or a non-generic IClientBase interface. Approach based on a non-generic class:
public abstract class ClientBase : ICommunicationObject, IDisposable
{
    public ClientCredentials ClientCredentials
    {
        //some code goes here
    }
}

public abstract class ClientBase<T> : ClientBase where T : class
{
}
Approach based on a non-generic interface:
public interface IClientBase : ICommunicationObject, IDisposable
{
    ClientCredentials ClientCredentials { get; }
}

public abstract class ClientBase<T> : IClientBase where T : class
{
}
Having that hierarchy in place we could define our method in the following way:
public void ConfigureProxy(ClientBase proxy) // or IClientBase
{
    proxy.ClientCredentials.UserName.UserName = "u";
    proxy.ClientCredentials.UserName.Password = "p";
}
Unfortunately the WCF architects didn't think of that, and a non-generic ClientBase class (or IClientBase interface) doesn't exist. The interesting part of this story is that the FaultException<T> class does not suffer from the same problem, because there is a non-generic FaultException class that exposes all the non-generic members; FaultException<T> basically adds a single property that returns the detail of the fault, and that's it. I can find more classes that are implemented in exactly the same way as FaultException<T>. It looks like ClientBase<T> is the only widely used class that breaks that rule. I would love to see this inconvenience fixed as an extension of C# parameter variance.
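The pattern the post praises in FaultException<T> can be sketched generically: a non-generic base carries everything that doesn't depend on T, so callers can handle any instantiation uniformly. The names below are illustrative, not the real WCF types:

```csharp
using System;

// Non-generic base: everything that doesn't depend on the detail type.
public class Fault
{
    private readonly string reason;
    public Fault(string reason) { this.reason = reason; }
    public string Reason { get { return reason; } }
}

// Generic derived class: adds only the strongly typed detail.
public class Fault<TDetail> : Fault
{
    private readonly TDetail detail;
    public Fault(string reason, TDetail detail) : base(reason) { this.detail = detail; }
    public TDetail Detail { get { return detail; } }
}

// Any Fault<T> can be processed through the non-generic base,
// which is exactly what ClientBase<T> doesn't allow.
public static class FaultLogger
{
    public static string Describe(Fault fault) { return "Fault: " + fault.Reason; }
}
```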
Saturday, 9 February 2008
Dublin Bus Executives, see how far behind you are
Saturday, 2 February 2008
Looking for Quality not Quantity
When I started blogging I wanted to have a blog on weblogs.asp.net, but the admin replied that they didn't have any "free slots". At the time I was a little bit upset about that, but now I know his decision was right. Basically, in most cases, before you are able to provide interesting content you need to familiarize yourself with all the stuff that's already out there. Reinventing the wheel is not interesting, and learning (climbing onto predecessors' shoulders) is not easy.
Current status: Climbing :).
Wednesday, 30 January 2008
AccuRev - another story how to screw UI
A child stream inherits content from its parent which makes branching/merging a non-issue. Let's say you have the production version of a product in the main stream and all the new features are being implemented as child streams. Whenever there is a code change (bug fix, small improvement) in the main stream all child streams get it automatically. Developers don't even have to think explicitly about merging. They just update their workspace, get changes from the parent stream and resolve simple conflicts and that's it. I said simple conflicts because the more often you merge the less time you spend on it. Additionally, in most cases all inherited changes are incorporated seamlessly because AccuRev does a really good job at merging.
If you need to include an external dependency (which is exposed as a stream as well) you just need to create a link between your stream and the stream where the dependency is. This means that if you have a data access component then the New Feature 1 stream can consume version 1.0 of it whereas the New Feature 2 stream can upgrade to version 2.0 in complete isolation. As you can see AccuRev is powerful and there is no doubt about that.
The only problem is that all the described features are exposed to the end user through a crappy UI. The UI has been written in a way that makes it useless in so many cases that it's not even funny. I'm going to show a few examples of how not to write a UI. All of them made the AccuRev adoption in my company very hard. Fortunately we have a brilliant build team which basically created a new web-based UI that sits on top of AccuRev and hides many of the "innovative" ideas you will see.
The order of the list does not matter because different people hate passionately different AccuRev UI "features". The list is based on version 4.5.4 of AccuRev.
- The UI is based on Eclipse, which is written in Java. I have nothing against Java, but I haven't seen a really good UI written in that language. The AccuRev UI is not as responsive as the average native Windows app, and from time to time it starts rendering everything in gray. Click on the image to see more details.
- AccuRev comes with its own vocabulary to describe source code management:

Common term        AccuRev term
update             update & populate
check in/commit    promote
delete             defunct
move               cut & paste
conflict           overlap

- In order to make sure that you've got all recent changes you need to both update and populate your workspace. Of course there is no single button that does both in one go. What's more, the two actions are launched in two completely different ways. The update button is the best-hidden button I've ever seen; nearly every developer that starts using AccuRev cannot find it. Note that the update action is accessible neither from the main menu nor from the context menu.
- Whenever AccuRev cannot update your workspace because there is something wrong with it, you get a cryptic error message - in most cases the same one. Click on the image to see more details.
- When someone promotes changes to the parent stream, all children get them by inheritance. From time to time the changes introduce conflicts that need to be resolved. The problem is that in order to find them you need to run two different searches - overlap and deep overlap. I know why there are two types, but I don't get why this is so explicitly exposed to the end user.
- As I said before, you can include other streams into your stream. The problem is that the window that lists all available streams is just a list view without any way of filtering the items. I can spend 15 minutes trying to find the stream I'm interested in. Click on the image to see more details.
- Whenever you edit a text box the Ctrl-Z shortcut does not work.
- Let's say you are done with your task and you want to promote all your changes. In order to do this you need to remember to execute 3 different searches (external, modified, pending) to find them all. Again, there is no single search that can do this for you, and needless to say you cannot define your own searches.
- There are always some files that are part of your workspace but that you never want to add to the repository; the obj and bin folders are a good example. Unfortunately the AccuRev UI doesn't let you exclude either files or folders. Instead you need to create an environment variable where you specify patterns for all the files and folders AccuRev should ignore. What a user-friendly solution. Even the free TortoiseSVN has that feature built in.
- When you want to move a file that is already in the repository you need to right click on it, choose cut, navigate to the new location, right click and choose paste. Why is there no explicit move command?
- The AccuRev integration with Visual Studio has been very poor; only recently has AccuRev released a plugin that more or less works with VS 2005 (what about 2008?). My biggest problem with the plugin is that from time to time, when I compile my solution, it does something in the background that causes the compilation to freeze for a few seconds. From my perspective this is unacceptable behaviour. I don't mind the plugin doing its housekeeping when the IDE is idle, but it must not interfere with the compilation process.
Since recently, a happy AccuRev user :).