Differences in memory management between .NET versions

While writing tests for memory issues, I came across some strange behavior. Here is a minimal piece of code reproducing the case:

    public void LocalVariableTest()
    {
      var foo = new Foo();
      foo = null;

      // assert that there are no Foo instances in memory
      dotMemory.Check(memory =>
        Assert.That(memory.GetObjects(where => where.Type.Is<Foo>()).ObjectsCount, Is.EqualTo(0)));
    }

Under .NET Framework 3.5 the test passes in both debug and release modes, which looks reasonable: if I explicitly tell the runtime to release a reference, it should do so. But starting from .NET 4.0, the test passes in release mode and fails in debug mode.

I looked at the IL code and found nothing weird, so this lifetime extension must happen during JIT compilation: in a debug build, the JIT keeps local references reachable until the end of the method to support debugging.
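The behavior can be observed without dotMemory at all, using a WeakReference (a sketch with C# 9 top-level statements; Foo and AllocateFoo are illustrative names, not from the original test). Moving the allocation into a separate, non-inlined method guarantees the strong reference is gone once that method returns, in debug and release builds alike, which is a common workaround for this kind of test:

```csharp
using System;
using System.Runtime.CompilerServices;

// Confining the allocation to a non-inlined method guarantees that no local
// in the calling frame keeps the instance alive, regardless of build mode
// or JIT lifetime extension.
[MethodImpl(MethodImplOptions.NoInlining)]
static WeakReference AllocateFoo() => new WeakReference(new Foo());

WeakReference weak = AllocateFoo();

GC.Collect();                    // blocking full collection
GC.WaitForPendingFinalizers();
GC.Collect();

Console.WriteLine(weak.IsAlive); // False: the Foo instance was collected

class Foo { }
```

Had `new Foo()` been assigned to a local in this frame instead, `weak.IsAlive` could still be true in a debug build on .NET 4.0+ even after nulling the local, which is exactly the difference the test above runs into.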

Why would you need to release a local variable reference in a test at all? I know of only one case, but an important one: an integration test which checks that when the “main” application instance is collected, none of “our” objects remain in memory. For such a test, the assert should check that there are no instances of any class declared in the “MyAppRootNamespace” namespace:

    dotMemory.Check(memory =>
      Assert.That(memory.GetObjects(
        where => where.Namespace.Like("MyAppRootNamespace.*")).ObjectsCount, Is.EqualTo(0)));