Tim Bray: Hard Open Problems in Network Computing

[ 2005-November-16 21:10 ]

One month ago, Tim Bray gave a talk at my university about some hard problems that he thinks are important, and that he thinks might be a good fit for academic research. I thought it was worth sharing with the world. He presented five problems: Concurrency, Dynamic Languages, Web Services, Syndication, and Storage.

Concurrency

This is a familiar topic for anyone who reads Bray's excellent blog. The issue is that multithreaded architectures will be everywhere very soon, from laptops through to servers, but we don't really know how to take advantage of them. This is a huge, very difficult, and very old problem. For example, there have been decades of work on parallelizing compilers, and so far they only really work well on the kinds of nested loops that appear in scientific applications. I think that novel and pragmatic approaches are required to deliver useful solutions in the next five years. This is an ideal space for a creative and ambitious PhD student to make a big impact.
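To make concrete why "taking advantage" of threads is so hard, here is a minimal Python sketch of the classic shared-state pitfall: an unsynchronized increment is a read-modify-write that threads can interleave, silently losing updates. The lock makes the result deterministic; the example and its numbers are illustrative, not from the talk.

```python
import threading

# A shared counter incremented by many threads. Without the lock,
# "counter += 1" is a read-modify-write that threads can interleave,
# losing updates; with the lock, the result is deterministic.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 80000 -- remove the lock and the total may come up short
```

The hard research problem is exactly that programmers should not have to reason about interleavings like this by hand.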

Dynamic Languages

Again, Bray has covered this on his blog before. In the presentation he stated that he believes dynamic languages are the future for a very large fraction of applications. The motivation is simple: programmers are more productive. However, he feels two big pieces are missing. First, there is a lack of IDE support. Second, the performance of these languages is generally very poor compared with statically typed languages like C/C++, C#, and Java. Both of these problems have been investigated by academic research before, but I think there is still lots of work to be done in the specific context of dynamic languages.

The IDE problem is a big one for many programmers who rely on tools like Visual Studio, NetBeans, or Eclipse. These developers may not be comfortable with the command line, or with manually setting up build systems, version control, and test suites. The research challenge in this space is code completion, a very useful and addictive feature. Code completion is difficult for dynamic languages since the exact type of a variable generally cannot be determined before the program runs. There is previous work on type inference for compilers, but I think there is still work to be done in the context of IDEs.
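A tiny Python sketch of why completion is hard: the same name can be bound to values of different types depending on runtime data, so an IDE cannot statically decide which completions to offer. The function and values here are made up for illustration.

```python
# In a dynamic language the same name can hold values of different
# types depending on runtime data, so an IDE cannot statically decide
# which completions to offer after "value.".
def load(flag):
    if flag:
        return "a string"    # str completions: .upper(), .split(), ...
    return ["a", "list"]     # list completions: .append(), .sort(), ...

# Both calls are legal; the static "type" of value is the union of both.
for flag in (True, False):
    value = load(flag)
    print(type(value).__name__)  # prints "str", then "list"
```

An IDE would have to either infer a conservative union of types or guess from usage, which is exactly the open research question.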

The performance problem is more interesting. In reality, performance does not matter for a huge class of applications that are primarily I/O bound. However, for some of these applications response time may be critical, and for others, scaling up to support large numbers of users can be an issue. Hence, performance is important. To work around the issue, a very large fraction of the Python standard library is written in C. This is a workable solution, but it makes portability, development, and debugging more difficult. In my opinion, the performance problem is partly real and partly just bad public relations. One related area that is critical is concurrency: neither Python nor Ruby supports true concurrent execution (although Jython and JRuby do). I think there is significant potential for run-time code generation to greatly improve the performance of dynamic languages.

Web Services and Syndication

Bray's opinion is that the "official" web services stack is far too complicated, and that it is basically a recreation of CORBA with angle brackets. I tend to agree. I think that web companies like Google, Amazon, and Yahoo are the clear leaders in this space with their REST-like APIs. Bray believes that Atom and syndication can be useful tools in this space. One interesting idea he mentioned is that Atom might be useful as a standardized "list" container for XML. The open problem here is another old one: what is a good model for building distributed applications, and what infrastructure is required to make it easier for application developers? A combination of practical experience and radical ideas will be required to find a better way.
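To illustrate the "Atom as a standardized list container" idea, here is a hedged sketch using Python's standard XML library: the feed acts as the list, and each entry is an item. The element values are invented for the example; only the Atom namespace and element names are real.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

# A minimal Atom feed used as a generic, ordered "list" container:
# the <feed> is the list, each <entry> is an item. The titles here
# are illustrative placeholders.
feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "search results"
for item in ["first result", "second result", "third result"]:
    entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = item

# Any Atom-aware client can iterate the list without knowing anything
# about the application that produced it.
titles = [e.findtext(f"{{{ATOM}}}title") for e in feed.findall(f"{{{ATOM}}}entry")]
print(titles)  # ['first result', 'second result', 'third result']
```

The appeal is that generic tools (aggregators, indexers) already understand this container, so a REST-like API gets interoperability for free.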

Storage

This is one subject that Bray has not discussed on his blog. In the past few years, an interesting performance shift has occurred: it is now faster to reach across the network and get data from the memory of another machine than it is to read it from local disk. Additionally, while CPUs have become much faster, RAM has not. This leads to the expression Tim used: "RAM is the new disk, disk is the new tape." The question here is: can we build storage systems with reliability similar to disk's, using RAM and lots of duplication over the network? Of course, at some point you want a copy on disk as well, but maybe that is done at infrequent intervals to keep the performance cost low. This is a very interesting idea, and I think it is ideal for academic research because it is still probably about five years away from the mainstream.
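A toy model of the idea, to make the shape of such a system concrete: writes go to several in-memory replicas (plain dicts standing in for machines on a fast network), durability comes from duplication rather than synchronous disk writes, and the disk copy is only an infrequent snapshot. All names and the class itself are invented for illustration.

```python
import json
import os
import tempfile

# A toy sketch of "RAM is the new disk": duplication across machines'
# RAM provides fault tolerance; disk is only a periodic snapshot.
class ReplicatedRamStore:
    def __init__(self, replicas=3, snapshot_path=None):
        self.replicas = [dict() for _ in range(replicas)]
        self.snapshot_path = snapshot_path

    def put(self, key, value):
        # Durability via duplication, not a synchronous disk write.
        for replica in self.replicas:
            replica[key] = value

    def get(self, key):
        # Any surviving replica can serve the read.
        for replica in self.replicas:
            if key in replica:
                return replica[key]
        raise KeyError(key)

    def snapshot(self):
        # The infrequent, off-the-critical-path disk copy.
        if self.snapshot_path:
            with open(self.snapshot_path, "w") as f:
                json.dump(self.replicas[0], f)

path = os.path.join(tempfile.mkdtemp(), "snapshot.json")
store = ReplicatedRamStore(replicas=3, snapshot_path=path)
store.put("user:1", "alice")
store.snapshot()                 # infrequent disk copy
store.replicas[0].clear()        # simulate losing one machine's RAM
print(store.get("user:1"))       # "alice" -- served from another replica
```

The research questions start where this toy stops: replica placement, consistency during failures, and how stale the disk snapshot is allowed to get.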