Igor Ostrovsky on August 13th, 2008

Performance of parallel programs is an interesting – but also tricky – issue. I put together an article for our team blog that talks about the most common reasons why a parallel program may not scale as desired:

Developers ask why one program shows a parallel speedup but another one does not, or how to modify a program so that it scales better on multi-core machines.

The answers tend to vary. In some cases, the problem observed is a clear issue in our code base that is relatively easy to fix. In other cases, the problem is that Parallel Extensions does not adapt well to a particular workload. This may be harder to fix on our side, but understanding real-world workloads is the first step toward doing that, so please continue to send us your feedback. In yet another class of issues, the problem is in the way the program uses Parallel Extensions, and there is little we can do on our side to resolve it.

Interestingly, it turns out that there are common patterns that underlie most performance issues, regardless of whether the source of the problem is in our code or the user code. In this blog posting, we will walk through the common performance issues to take into consideration while developing applications using Parallel Extensions.

Read the rest of the article here.


Igor Ostrovsky on August 11th, 2008

Big-Oh notation is a simple and powerful way to express how the running time of a particular algorithm depends on the size of the input. When you say that a particular algorithm runs in O(N²) time, you mean that the number of steps the algorithm takes is proportional to the input size squared. Or, in mathematical terms, there is some fixed constant C such that to process input of size N, the algorithm needs at most C × N² steps. One interesting question is how to define a “step”. The beauty of the Big-Oh notation is that any somewhat reasonable definition of a step will do. The step could be a clock cycle, a hardware instruction, or an expression in source code. If an algorithm takes O(N) steps according to one definition of a “step”, it takes O(N) steps according to all reasonable definitions.
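For instance, here is a small illustration of my own (the method and its name are made up for this example): the inner comparison below executes N·(N-1)/2 times, which is at most C × N² for C = ½, so the method runs in O(N²) time regardless of which definition of a “step” you pick.

// Counts pairs (i, j) with i < j whose elements are equal.
// The comparison in the inner loop runs N*(N-1)/2 times, so the
// method takes O(N^2) steps for any reasonable notion of a "step".
static int CountEqualPairs(int[] values)
{
    int count = 0;
    for (int i = 0; i < values.Length; i++)
        for (int j = i + 1; j < values.Length; j++)
            if (values[i] == values[j])
                count++;
    return count;
}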

While this is great, you have probably heard it a thousand times already. But what about the complexity of parallel programs? A major departure from the sequential case is that the number of steps an algorithm takes and the actual real-world time it needs may not be proportional. In the parallel case, we may have potentially many cores executing the computation steps!

Read the rest of this entry »


Igor Ostrovsky on July 21st, 2008

Skip lists are a fascinating data structure: very simple, and yet they have the same asymptotic efficiency as the much more complicated AVL trees and red-black trees. While many standard libraries for various programming languages provide a sorted set data structure, there are numerous problems that require more control over the internal data structure than a sorted set exposes. In this article, I will discuss the asymptotic efficiency of operations on skip lists, the ideas that make them work, and their interesting use cases. And, of course, I will give you the source code for a skip list in C#.

Read the rest of this entry »


Igor Ostrovsky on June 16th, 2008

There is a lot of information available on MSDN, blogs, and other websites about the concurrency primitives and concepts exposed by the .NET Framework 3.5. The goal of this post is to distill that information into an easy-to-digest, high-level summary: what the different pieces are, how they differ, and how they relate. If you want to know the difference between a Thread and a BackgroundWorker, or what the point of interlocked operations is, you are reading the right article.

Read the rest of this entry »


I realized that there is a very clean way to express a multi-clause if statement by composing ternary conditional operators like this:

var result =
    condition1 ? result1
    : condition2 ? result2
    : condition3 ? result3
    ...
    : conditionN ? resultN
    : default;
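As a concrete illustration (the score-to-grade mapping below is my own made-up example):

int score = 87;
string grade =
    score >= 90 ? "A"
    : score >= 80 ? "B"
    : score >= 70 ? "C"
    : score >= 60 ? "D"
    : "F";

Each condition is tested in order, and the first one that is true selects the result, exactly like an if / else-if chain, but written as a single expression.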

Read the rest of this entry »


Igor Ostrovsky on May 26th, 2008

In response to last week’s post, several readers mentioned LINQ-like operators they had implemented themselves. I also had ideas for operators that would lead to neat solutions for some problems, so I decided to give it some thought and collect the most useful operators into a reusable library.

My goal was to include operators that are simple to use, but applicable to a broad range of problems. I left out operators that I thought were either too complicated to use, or too specific to a particular problem domain.

You can download the full source code of the library here (rename the file to ExtendedEnumerable.cs). Read on to find out what it contains.

Read the rest of this entry »


Igor Ostrovsky on May 18th, 2008

Ever since I learned about LINQ, I keep discovering new ways to use it to improve my code. Every trick makes my code a little bit faster to write, and a little bit easier to read.

This posting summarizes some of the tricks that I came across. I will show you how to use LINQ to:

1. Initialize an array
2. Iterate over multiple arrays in a single loop
3. Generate a random sequence
4. Generate a string
5. Convert sequences or collections
6. Convert a value to a sequence of length 1
7. Iterate over all subsets of a sequence

If you have your own bag of LINQ tricks, please share them in the comments! Also, if you like this article, you may like my next article, Extended LINQ: additional operators for LINQ to objects.

1. Initialize an array 

Often, you need to initialize the elements of an array to the same value, to an increasing sequence of values, or to a sequence that increases or decreases by a step other than one. With LINQ, you can do all of this within the array initializer – no for loops necessary!

In the following code sample, the first line initializes a to an array of length 10 with all elements set to -1, the second line initializes b to (0, 1, …, 9), and the third line initializes c to (100, 110, …, 190):

int[] a = Enumerable.Repeat(-1, 10).ToArray();
int[] b = Enumerable.Range(0, 10).ToArray();
int[] c = Enumerable.Range(0, 10).Select(i => 100 + 10 * i).ToArray();

A word of caution: if you are initializing large arrays, you may want to forego the elegance and use the old-fashioned for loop instead. The LINQ solution will grow the array dynamically, so garbage arrays will need to be collected by the runtime. That said, I use this trick all the time when initializing small arrays, or in testing/debugging code.
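For reference, the plain-loop equivalents of the first and third lines above might look like this (my own sketch of the old-fashioned approach):

// Each array is allocated once at its final size, so no intermediate
// garbage is produced.
int[] a = new int[10];
for (int i = 0; i < a.Length; i++) a[i] = -1;

int[] c = new int[10];
for (int i = 0; i < c.Length; i++) c[i] = 100 + 10 * i;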

2. Iterate over multiple arrays in a single loop 

A friend asked me a C# question: is there a way to iterate over multiple collections with the same loop? His code looked something like this:

foreach (var x in array1) {
    DoSomething(x);
}

foreach (var x in array2) {
    DoSomething(x);
}

In his case, the loop body was larger, and he did not like the duplicated code. But, he also did not want to allocate a new array to hold elements from both array1 and array2.

LINQ provides an elegant solution to this problem: the Concat operator. You can rewrite the above two loops with a single loop as follows:

foreach (var x in array1.Concat(array2)) {
    DoSomething(x);
}

Note that since LINQ operates at the enumerator level, it will not allocate a new array to hold elements of array1 and array2. So, on top of being rather elegant, this solution is also space-efficient.

3. Generate a random sequence 

This is a simple trick to generate a random sequence of length N:

Random rand = new Random();
var randomSeq = Enumerable.Repeat(0, N).Select(i => rand.Next());

Thanks to the lazy nature of LINQ, the sequence is not pre-computed and stored in an array, but instead random numbers are generated on-demand, as you iterate over randomSeq.
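One consequence worth noting (an observation of my own): nothing is cached, so enumerating randomSeq a second time produces a different set of numbers. If you need a fixed snapshot, call ToArray():

int[] snapshot = randomSeq.ToArray(); // a fixed array of N random numbers
// Enumerating randomSeq again would generate N new, different numbers.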

4. Generate a string 

LINQ is also a nice tool to generate various kinds of strings. I found this quite useful to generate strings for testing and debugging purposes.

Let’s say that you want to generate a string with the repeating pattern "ABCABCABC…" of length N. Using LINQ, the solution is quite elegant:

string str = new string(
    Enumerable.Range(0, N)
    .Select(i => (char)('A' + i % 3))
    .ToArray());

[EDIT] Petar Petrov suggested another interesting way to generate strings with LINQ. Rather than computing the string character by character, his approach repeats an entire pattern string N times, so it applies to somewhat different scenarios than my solution above:

string values = string.Join(string.Empty, Enumerable.Repeat(pattern, N).ToArray());

5. Convert sequences or collections 

One thing you cannot do in C# or VB is cast a sequence of type T to a sequence of type U, even if T is a class derived from U. So, you cannot simply cast List<string> to List<object>. (For an explanation of why, see Rick Byers’ posting.)
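As an aside of my own (not from that posting): arrays, which do allow this kind of covariant conversion, show why it would be unsafe for lists. The following compiles, but fails at run time:

object[] objects = new string[1]; // array covariance: allowed by the compiler
objects[0] = new object();        // throws ArrayTypeMismatchException at run time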

But, if you are trying to convert IEnumerable<T> to IEnumerable<U>, LINQ has a simple and efficient solution for you:

IEnumerable<string> strEnumerable = ...;
IEnumerable<object> objEnumerable = strEnumerable.Cast<object>();

If you need to convert List<T> to List<U>, there is also a simple LINQ solution, but it involves copying the list:

List<string> strList = ...;
List<object> objList = new List<object>(strList.Cast<object>());

[EDIT] Chris Cavanagh suggested an alternate solution:

var objList = strList.Cast<object>().ToList();

6. Convert a value to a sequence of length 1 

When you need to convert a single value to a sequence of length 1, what do you do? You could construct an array of length 1, but I prefer the LINQ Repeat operator:

IEnumerable<int> seq = Enumerable.Repeat(myValue, 1);

7. Iterate over all subsets of a sequence 

Sometimes it is useful to iterate over all subsets of an array. This situation arises quite frequently in brute-force solutions to hard problems. For small inputs, subset sum, boolean satisfiability and the knapsack problem can all be solved easily by iterating over all subsets of some sequence.

In LINQ, we can generate all subsets of array arr as follows:

T[] arr = ...;
var subsets = from m in Enumerable.Range(0, 1 << arr.Length)
              select
                  from i in Enumerable.Range(0, arr.Length)
                  where (m & (1 << i)) != 0
                  select arr[i];

Note that if the number of subsets overflows an int, the above code will not work. So, only use it if you know that the length of arr is at most 30. If the length of arr is greater than 30, chances are that you don’t want to iterate over all of its subsets anyway because it is going to take minutes or more.
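As a usage sketch (the helper method and its name are my own, shown only to illustrate the query above), brute-force subset sum becomes a one-liner on top of it:

// Requires: using System.Linq;
// Returns true if some subset of arr sums to target.
// Practical only for small arrays (arr.Length <= ~30).
static bool HasSubsetWithSum(int[] arr, int target)
{
    var subsets = from m in Enumerable.Range(0, 1 << arr.Length)
                  select
                      from i in Enumerable.Range(0, arr.Length)
                      where (m & (1 << i)) != 0
                      select arr[i];

    return subsets.Any(subset => subset.Sum() == target);
}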

Comments and Conclusion

I hope you find these tricks useful and applicable to your programs.

The code samples in this posting are all implemented in C#, but they can be easily adapted to just about any other .NET language. However, LINQ is most conveniently used from .NET languages that support extension methods, lambda expressions, and type inference, such as C# and Visual Basic.

To the best of my knowledge, each code sample in this posting works, but – as is common on the web – I don’t make any guarantees. As always, double check any code before using it.



Igor Ostrovsky on May 9th, 2008

The folks at Dev 102 posted a programming job interview challenge. It is rather easy, but I figured that it may be a nice change of pace, since my last posting was fairly involved.

The challenge is to reverse the bits in each byte, given a large array of bytes. What is the fastest possible solution?
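To make the problem concrete, here is one straightforward approach of my own (a lookup-table sketch, not necessarily the fastest, and not the solution discussed in the entry):

// Builds a 256-entry table mapping each byte to its bit-reversed value.
static byte[] BuildReverseTable()
{
    var table = new byte[256];
    for (int b = 0; b < 256; b++)
    {
        int reversed = 0;
        for (int bit = 0; bit < 8; bit++)
            if ((b & (1 << bit)) != 0)
                reversed |= 1 << (7 - bit);
        table[b] = (byte)reversed;
    }
    return table;
}

// Reverses the bits of every byte in the array, in place.
static void ReverseBits(byte[] data, byte[] table)
{
    for (int i = 0; i < data.Length; i++)
        data[i] = table[data[i]];
}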

Read the rest of this entry »


Igor Ostrovsky on May 4th, 2008

What is the time complexity of quicksort? The answer that first pops into my head is O(N log N). That answer is only partly right: the worst case is in fact O(N²). However, since very few inputs take anywhere near that long, a reasonable quicksort implementation will almost never encounter the quadratic case in real life.

I came across a very cool paper that describes how to easily defeat just about any quicksort implementation. The paper describes a simple comparer that decides the ordering of elements lazily, as the sort executes, and arranges the order so that the sort takes quadratic time. This works even if the quicksort is randomized! Furthermore, if the quicksort is deterministic (not randomized), the same technique also reveals an input that reliably triggers quadratic behavior in that particular quicksort implementation.

Read the rest of this entry »


Igor Ostrovsky on September 6th, 2007

Today, I am writing about a design problem related to C# generics that I’ve seen arise a few times. The problem occurs when we need to manipulate a generic class given a reference to its non-generic base class. For example, if a generic class Node<T> inherits from a non-generic class Node, and we are holding a Node reference to a Node<T> object, we cannot just cast the object to Node<T> because we do not have access to T.

I realize that the description is a bit abstract; let’s look at an example right away! It may look like a bit of code, but the classes are very simple and do just what you’d expect:

Read the rest of this entry »
