Microsoft Enters the Parallel Programming Fray

If ever there was a sign that a particular area of development was going mainstream, it is the entry of Microsoft into the space. Rik Myslewski at The Register has a somewhat breathless write-up of Microsoft’s answer to OpenCL and CUDA: C++ AMP.

Microsoft principal native-languages architect Herb Sutter unveiled the technology Wednesday morning at AMD’s Fusion Developer Summit. Initially, C++ AMP will help devs take advantage of general purpose GPU computing (GPGPU), but in the future, Microsoft will extend the technology to multi-core architectures and beyond.

Those future plans include opening up the specification, if not the implementation, of C++ AMP. Sutter also explains the reasons for choosing C++ over C, though I suspect those have more to do with Microsoft’s proprietary tooling investments in the form of Visual C++. Given both of those bits, I don’t really expect to see this particular approach employed anywhere outside of Windows. The siloing of approaches in this area is more tragic, in my mind, than in existing application development, since building usable and effective parallel programming techniques and tools is a vastly greater challenge.

Microsoft juices C++ for massively parallel computing, The Register

A New, Java-Based Parallel Language

The EE Times links to the announcement from the Universal Parallel Computing Research Center (UPCRC) at the University of Illinois of the Deterministic Parallel Java project.

The broad goal of our project is to provide deterministic-by-default semantics for an object-oriented, imperative parallel language, using primarily compile-time checking. “Deterministic” means that the program produces the same visible output for a given input, in all executions. “By default” means that deterministic behavior is guaranteed unless the programmer explicitly requests nondeterminism. This is in contrast to today’s shared-memory programming models (e.g., threads and locks), which are inherently nondeterministic and can even have undetected data races.
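The quoted contrast with today’s threads-and-locks models can be made concrete with a small standard-Java sketch (not DPJ syntax; class and field names here are illustrative): an unsynchronized counter updated from two threads can lose increments, so its "visible output" varies from run to run, while an atomic counter behaves deterministically for this pattern.

```java
// Illustrates the nondeterminism of plain shared-memory updates versus a
// deterministic alternative. A sketch in standard Java, not DPJ.
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int racy = 0;                               // plain field: increments may race
    static final AtomicInteger safe = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                racy++;                                // read-modify-write, not atomic
                safe.incrementAndGet();                // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safe is always 200000; racy is often less, and can differ across runs
        System.out.println("racy=" + racy + " safe=" + safe.get());
    }
}
```

DPJ’s aim, as described above, is to reject the racy version at compile time unless the programmer explicitly asks for nondeterminism.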

UPCRC is a cross-disciplinary project spanning several departments at the university, in partnership with Microsoft and Intel. The DPJ project is led by Professor Vikram Adve and Ph.D. student Robert Bocchino. The emphasis is on ease of use rather than exploring beyond the current conceptual horizon of parallel programming research.

My first thought on reading that this was based on Java was to dismiss it as a minor step forward. Looking through the tutorial, though, I think it is worthy of more attention. The choice of Java was driven more by ease of implementation than by that language’s current approach to parallelism. The fork-join model described reminds me, at least conceptually, of Go, Google’s C-like concurrent programming language. The UPCRC is also working, with Intel’s help, on a set of extensions for C++ that would make their approach available to even more programmers.
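For readers unfamiliar with the model, the fork-join idea is easy to see in standard Java’s own `ForkJoinPool` (available since Java 7): a task splits its work in two, forks one half, computes the other, and joins the results. This is a minimal sketch of the general technique, not DPJ code; the class name and threshold are my own.

```java
// Minimal fork-join example: recursively summing an array in parallel.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;    // below this, just sum serially
    private final long[] data;
    private final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {                // base case: sequential sum
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                               // run the left half in parallel
        return right.compute() + left.join();      // compute right here, then join
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);
    }
}
```

What DPJ adds on top of this style is the compile-time effect checking that guarantees the forked halves cannot race on the same data.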

The real value of efforts like these is getting concepts like the fork-join approach to task parallelism out and into the hands of working programmers. The work at UPCRC was presented at last year’s OOPSLA, but this announcement is the first I’ve heard of it. The open source license (GPLv2), available code, tutorials, and other documentation are very encouraging for those who simply want to grab the fruits of this team’s research and see what it makes possible.

University releases parallel programming language, EE Times