Boycotting Amazon and PayPal


After the events of the last week I see no alternative but to boycott Amazon and PayPal. I just wanted to very quickly note here that I have closed my accounts with both, and explain why.

Amazon's decision to stop hosting WikiLeaks is said to have been based on terms-of-use violations; however, those violations didn't stop Amazon from hosting WikiLeaks the previous two times, and there are many other sites hosted on Amazon with similarly questionable content. If anyone were to write a book based on the WikiLeaks diplomatic cables, would Amazon refuse to sell it? I can't help but believe that the real reason Amazon stopped hosting WikiLeaks was pressure applied by Joe Lieberman and the US Government. As far as outside observers can determine, there was no legal process – just a phone call from Lieberman's office. It sets a very dangerous precedent when a large company and de-facto common carrier folds like this under government pressure.

Very quickly after Amazon folded, Tableau Software decided to do the same, without any contact from Lieberman's office at all.

On Saturday, it was announced that PayPal had decided to follow suit and cancel WikiLeaks' donation account.

For many reasons, these three incidents have convinced me that I have to take a moral stand and boycott Amazon and PayPal – I have cancelled my accounts. I urge you to do the same if you feel that the events of the last few days have set a dangerous precedent.



Language features at the Emerging Languages camp


As I've covered in several blog posts, I visited the Emerging Languages camp last week. It was an interesting experience for many reasons, and some of my conclusions are still half-formed. I wanted to talk a little bit about some common themes in the languages presented at this camp, and also where the trends seem to be heading. Now, I'm not entirely sure what my insights will be yet, but hopefully I'll know as I approach the end of this blog post.

There are several different axes along which you can divide the presented languages (about 26 of them). The first is age and maturity. The oldest languages presented were probably Parrot, Frink, Io, Factor and D – which have all been around for eight to ten years. All of these languages are very mature and you wouldn't hesitate to use them for real-life work. The second category is the languages that range from a few years old to quite new. Most of these are still evolving, still not stable, but definitely on the path to getting there. The final set of languages is the most interesting in my view – the new ideas that have just started germinating, or even concepts that aren't actually there yet. From my list of interesting things, Wheeler is definitely a language in that category.

But you can also look at the types and features you find in the new languages (and I will exclude the oldest group of languages when talking about these features and ideas). There is a large group of languages that are evolutionary rather than revolutionary. This is fine, of course. Many of these languages take much of their inspiration from Ruby and Smalltalk. Object orientation is extremely common among them, and several are prototype based. There were also several languages with indentation-based syntax. I was surprised by the number of languages that target native code instead of a virtual machine. AmbientTalk, Clojure, Ioke/Seph, Frink and Mirah target the JVM; Stratified JavaScript, E/Caja and CoffeeScript target JavaScript; and F# targets the CLR. All the other languages can be considered native. I was quite surprised by this – anyone got a good explanation?

There were also several graphical languages presented. These are harder to categorize from a traditional paradigm perspective. Thyrd is more of a proof of concept, very graphical, but backed by a stack language in the style of Forth. Kodu seems to have a quite traditional backend, but the graphical interface hides that in most cases. Both of these languages are optimized to run in situations where you don't always have a keyboard – Thyrd for tablet PCs and Kodu for the Xbox. Subtext/Coherence is based around non-syntactic thinking, but didn't seem to have a graphical interface at this point.

As mentioned above, the trend of using JavaScript as a target language also seems to be on the way up – CoffeeScript, Caja and Stratified JavaScript all follow that approach. Parts of Ur/Web are also compiled to JavaScript, to allow the base language to describe behavior on both the server and the client. Gilad reported that he is very interested in getting Newspeak to run on JavaScript, and there has been a lot of talk about getting Io to work on that platform too. This seems to be an interesting idea for many languages, and the benefits of compiling to a language that can run in any browser are definitely compelling. However, there seem to be a lot of problems with that approach too. Some people create a bytecode machine in JavaScript that then executes generated bytecodes. Some languages have to do lifting of functions because of bugs in several JavaScript implementations. And of course, the JavaScript language doesn't give you good ways of generating code that is easy to debug.

The low-level languages all seem to give interesting capabilities. D is what C++ should have been. BitC allows you to write programs with strong guarantees. ooc brings many high-level benefits to a low-level language. Go makes it possible to handle concurrency in an easy and powerful way.

I’m at the end of my thoughts right now, and there is no grand conclusion to be found. Just some interesting observations. Maybe they are indicative of the future in language development, and maybe not.



The JVM Language Summit 2010


I’ve just come back from three days in Santa Clara, spending time with some of the brightest people in the Java world – the JVM language summit is truly a fantastic collection of great people. And I was there too…

The goal of the JVM Language Summit is to gather the people who work with languages on the JVM and have them share their projects, their experiences and their networks – and let them network with the people in charge of implementing the JVMs at different companies. This year, a lot of discussion about JSR 292 and Project Lambda was on the plate. The presence of hardware and VM people was also more pronounced. I counted principals for at least six different virtual machines in the audience or presenting (HotSpot, JRockit, J9, Azul, Maxine, and Monty).

Among the experienced platform and language people there, some of the notables included Kresten Krab Thorup, Joshua Bloch, Bob Lee, Neal Gafter, John Rose, Brian Goetz, Alex Buckley, Rich Hickey, Charles Nutter, Cliff Click, Doug Lea, Per Bothner and many more. A great collection of people.

As an example of the funny happenstance that can occur in this collection of people, I was sitting rebinding my Java implementations for Mac OS X – and I had to remove lots of links in /usr/bin. A few minutes later the person next to me started asking some questions about my experience with Java on the Mac – and it turns out he's the manager of the Apple JVM team. Or at one point Rich Hickey reported on a quite puzzling problem that causes bad semantics when iterating over data that doesn't fit in memory – and Cliff Click immediately opened up his laptop and said "give me half an hour and I'll see what I can do".

Another funny anecdote was when Doug Lea pointed out that if you use fibonacci to test performance against yourself or others, it's important that the implementations actually agree about the first values of fib. Funnily enough, I saw three implementations of the base case of fib during the summit – all of them different (if n < 2 return 1, if n <= 2 return n, if n < 2 return n).
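
Just to make the point concrete, here is a sketch of my own (not code from the summit) showing the three base-case conventions side by side – the same n gives different values, and different amounts of work:

public class FibBaseCases {
    // Three base case conventions - all plausible-looking, all different.
    static long fibA(int n) { return n < 2  ? 1 : fibA(n - 1) + fibA(n - 2); } // 1, 1, 2, 3, 5, ...
    static long fibB(int n) { return n <= 2 ? n : fibB(n - 1) + fibB(n - 2); } // 1, 2, 3, 5, 8, ... (from n = 1)
    static long fibC(int n) { return n < 2  ? n : fibC(n - 1) + fibC(n - 2); } // 0, 1, 1, 2, 3, ...

    public static void main(String[] args) {
        // fibA(35) makes roughly 1.6 times as many calls as fibC(35), so
        // "fib(35)" timings from two implementations aren't comparable
        // unless the base cases agree.
        System.out.println(fibA(35) + " " + fibB(35) + " " + fibC(35));
    }
}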

There were way too many interesting presentations and discussions for me to be able to talk about all of them – instead I just wanted to give some highlights.

Charles Nutter

Charles gave a quick introduction to JRuby and Mirah, and what kind of optimizations JRuby is currently doing. He also talked about how far he’s gotten in inlining invoke dynamic calls inside of JRuby (and he’s gotten very far – it’s really cool).

Fredrik Öhrström

Fredrik is the JRockit representative on JSR 292, and way too smart. He presented a solution for how you can use method handles integrated with function types to solve many of the current problems in Project Lambda. A very powerful and interesting presentation.

Doug Lea

Doug spent his keynote trying (quite successfully) to convince the room that fork-join is a good solution to concurrency problems. A very good and thought-provoking keynote.

Josh Bloch

Last year at the JVM Language Summit, Josh talked about what he called "the Semantic Gap". This year, after being beaten up by some linguists, he changed the name of this concept to "Performance Anxiety". The basic idea is that in our current infrastructure we have traded predictability for performance. Two examples from his talk of when this happens in Java were pretty interesting. He had one benchmark that consistently showed about the same numbers within a single JVM run, but differed between JVM runs. There was no nondeterminism in the benchmark itself, but the benchmark times continued to oscillate between 0.7 and 0.85 depending on the JVM run. Cliff Click's explanation is that it is probably the compilation planner, which runs in a separate thread. Depending on when that thread runs, the compilation strategy will be different, which makes a difference in times. And it's really hard for the programmer to take this difference into account.

The other example is simpler (and don't change your code because of this). In some circumstances it turns out that & is faster than && in Java, because && short-circuits, which means it branches. The single ampersand will always evaluate both sides, which means the CPU can pipeline both of them and execute them at the same time.
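
To make the difference concrete, here is a small sketch of my own (not Josh's actual benchmark): both methods compute the same thing, but the && version contains a branch while the & version does not.

public class AmpersandDemo {
    // Semantically identical when both operands are side-effect free...
    static boolean inRangeBranching(int x, int lo, int hi) {
        return x >= lo && x <= hi;   // && short-circuits: the second test sits behind a branch
    }

    static boolean inRangeBranchless(int x, int lo, int hi) {
        return x >= lo & x <= hi;    // & always evaluates both sides: no branch to mispredict
    }

    public static void main(String[] args) {
        System.out.println(inRangeBranching(5, 1, 10));  // true
        System.out.println(inRangeBranchless(5, 1, 10)); // true
    }
}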

All the examples he showed come down to the same thing – we can't really reason intuitively about the performance of our language constructs anymore. Our systems have become too complex in order to support better performance, and we give up predictability to get that performance. And at the end of the day it doesn't even matter if you go down to C or assembler – you still can't control exactly what the CPU is doing anymore.

Kresten Krab Thorup

Kresten is the CTO of Trifork, and one of the main organizers of many of my favorite conferences (like JAOO and QCon). For the last nine months he has been working on an Erlang implementation for the JVM, which he talked about. It seems to be a very good implementation, and he's getting surprisingly good performance and context-switching numbers. In fact, several of the ideas in Seph will be stolen from Erjang.

Rémi Forax

Rémi showed off his PHP.reboot project, implemented using JSR 292 and getting quite good performance. His JSR 292 backport seems to be really useful, and I think I'll use it to make sure Seph can run on pre-Java 7 machines. Good stuff.

Rich Hickey

Rich spent some time collecting comments from people in the room about what was problematic with the JVM in its current incarnation. To start us off, he showed one piece of hilarious/horrible code from the Clojure implementation. Anyone want to guess what it does?

static public Object ret1(Object ret, Object nil) {
    return ret;
}

public static int count(Object o){
    if(o instanceof Counted)
        return ((Counted) o).count();
    return countFrom(Util.ret1(o, o = null));
}

We then went on to a few other things (which you can find on the JVM Language Summit wiki). The consensus seemed to be that tail calls are really very important. Last year it wasn't as crucial, but now that we see how powerful method handles and lambdas will be, tail calls turn out to be very nice to have. Hopefully we can make that happen.

JSR 292

The JSR 292 expert group got lots of chances to work on ideas and designs for the future. Lots of interesting results came out of these discussions. Some of the more notable ones are sketches of how method handles and function types can work together, how invoke dynamic and bootstrap methods can be used to implement defender methods, and several other interesting ideas.

All in all it has been a fun few days, going far out in language and implementation geekiness. I hope to come back to this next year.



Life in the time of Java 7


I’m currently in the process of implementing Seph, and I’ve reached an inflection point. This point is the last responsible moment to choose what I will target with my language. Seph will definitely be a JVM language, but after that there is a range of options – some quite unlikely, some more likely. The valid choices are:

  • Target Java 1.4
  • Target Java 5/6
  • Target Java 7
  • Target Java 7 with extensions

Of these, the first option isn't really interesting for Seph, so I'll strike it out right now. The other three choices are, however, still definitely possible – and good ones. I thought I might talk a little bit about why I would choose each one of them. I haven't made a final decision yet, so that will have to be the caveat for this post.

Before talking about the different choices, I wanted to mention a few things about Seph that matter to this decision. The first is that I want Seph to be useful in the real world. That means it should be reasonably fast, and runnable for people without too much friction. I want the implementation to be small and clean, and hopefully as DRY as possible – if I end up with both an interpreter and a just-in-time compiler, I want to be able to share as much of those implementations as possible.

Java 5/6

The easiest way forward would be to only use Java 5 or 6. This would mean no extra nice features, but it would also mean the barrier to entry would be very low. It would make development on Seph much easier and would in general make everything simpler for everyone. The problem with it would mainly be implementation complexity and speed, which would both suffer compared to either of the Java 7 variants.

Java 7

There are many good reasons to go with Java 7, but there are also some horrible consequences of doing this. For Seph, the things that matter from Java 7 are method handles, invoke dynamic and defender methods. Other things would be nice, but those three are the killer features for Seph. Method handles make it possible to write much more succinct code, to avoid generating lots of extra classes for each built-in method, and many other things. It also becomes possible to refer to compiled code using method handles, so the connection between the JIT and the interpreter would be much nicer to represent.
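
As a rough sketch of what that looks like with the JSR 292 API (my own minimal example, not Seph code), a built-in can simply be a looked-up handle instead of a generated wrapper class:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleSketch {
    public static void main(String[] args) throws Throwable {
        // Look up String.length() as a first-class value instead of
        // generating a little wrapper class for it.
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));

        // The receiver is passed as the first argument when invoking.
        int n = (int) length.invokeExact("hello");
        System.out.println(n); // 5
    }
}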

Invoke dynamic is quite obvious – it would allow me to do much nicer compilation to bytecode, and make the result much faster. I could still build the same thing myself, but at much greater cost, and it would also mean inlining wouldn't be as easy to get.
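
The shape of the thing is roughly this (again my own sketch, with hypothetical names): every invokedynamic call site the compiler emits names a bootstrap method, which the JVM calls once and which installs a method handle for all subsequent calls.

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BootstrapSketch {
    // Stand-in for the language runtime - purely hypothetical.
    public static class SephRuntimeStub {
        public static Object say(Object receiver) {
            System.out.println(receiver);
            return receiver;
        }
    }

    // The JVM calls this once, the first time the call site is reached,
    // and then links the returned handle directly into the call site.
    public static CallSite bootstrap(MethodHandles.Lookup lookup, String name, MethodType type)
            throws NoSuchMethodException, IllegalAccessException {
        return new ConstantCallSite(lookup.findStatic(SephRuntimeStub.class, name, type));
    }
}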

Finally, defender methods are a feature of the new lambda proposal that allows you to add new methods to interfaces without breaking backwards compatibility. The way this works is that when you add a new method to an interface, you can specify a static method that should be called when that interface method is invoked and there is no other implementation on the concrete class of a specific object. The interesting side effect of this feature is that you can also use it to specify default implementations for the core language methods without depending on a shared base class. This will make the implementation much smaller and more flexible, and might also be useful for specifying required and optional methods in an API.
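
The exact syntax is still being discussed, so take this as an illustration of the idea only – the interface and the (later Java 8 "default method") syntax below are mine, not the defender-method draft, which points at a separate static method instead:

// Illustration only - hypothetical interface, written with default-method syntax.
interface SephObject {
    Object cell(String name);

    // A default body on the interface itself: every implementor gets it for free,
    // and the language core can add methods without a shared abstract base class.
    default boolean isActivatable() {
        return false;
    }

    default Object activate(Object... args) {
        throw new UnsupportedOperationException("not activatable");
    }
}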

The main problem with Java 7 is that it doesn't exist yet, and the time schedule is uncertain. It is not entirely certain exactly what the design of these things will look like either – so it's definitely a moving target. Finally, it would make it very hard for people to help out on the project, and it also means Seph wouldn't be a possible language for people to use until they upgrade to Java 7.

Java 7 with extensions

It turns out that the interesting features coming in Java 7 are just the tip of the iceberg. There are many other proposed features, with partial implementations in the Da Vinci Machine project (MLVM). These features aren't actually complete, but one way of forcing them to become more complete is to actually use them for something real and give lots of feedback. Some of the more interesting features:

Interface injection

This feature will allow you to say after the fact that a specific class implements an interface, and also specify implementations for the methods on that interface. This is very powerful and would be extremely helpful in certain parts of the language implementation – especially when doing integration with Java. The patch is currently not very complete, though.

Tail calls

Allowing the JVM to perform proper tail calls would make it much easier to implement many recursive functional algorithms. Since Seph will have proper tail calls in the language, I will have to implement them myself if the JVM doesn't do it, which means Seph will be slower because of it. The patch seems to be quite good and possible to merge and harden into the JDK at some point. Of all the things on this list, this is one we can actually envision being added in the Java 7 or Java 8 time frame.

Coroutines/continuations

Both coroutines and continuations seem to be possible to do in a good way, at least partially. Coroutines might be interesting for Seph as an alternative to Kilim, but right now the patch seems to be a bit unstable. Continuations would allow me to expose continuations as first-class citizens, which is never bad – but it wouldn't give me much more than that.

Hotswapping

Hotswapping of code would make it possible to do aggressive JITting and then back out of it when guards fail, and so on. This is less interesting when we have invoke dynamic, but it will give some more flexibility in terms of code generation.

Fixnums, tuples, value types

We all want ways of making numbers faster – but these features might also make it possible to efficiently represent simple composite data structures, and also things like multiple return values. These are fairly simple features, but have no real patch right now (I think).

Light weight code loading (anonymous classes)

It is horrible to load bytecode at runtime in Java at this point. The reason is that to make sure your loaded code can be garbage collected, you have to load each chunk of code as a new class in a new class loader. This becomes very expensive very fast, and also endangers permgen. Anonymous classes make this go away, since they don't have names: you don't have to keep a reference to older classes, because there is no way to get to them again once you've lost the reference. This is a good thing, and makes it possible to avoid generating a class loader every time you load new code. The state of this patch seems to be quite stable, but at this point it's JVM dependent.
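
For reference, this is the workaround being described – a sketch of the one-class-loader-per-chunk pattern, with names of my own invention:

// One throwaway loader per generated class: when the last reference to the
// generated class goes away, the loader - and the class - can be collected.
// Anonymous classes would make this ceremony (and the permgen pressure) unnecessary.
final class OneShotClassLoader extends ClassLoader {
    OneShotClassLoader(ClassLoader parent) {
        super(parent);
    }

    Class<?> define(String name, byte[] bytecode) {
        return defineClass(name, bytecode, 0, bytecode.length);
    }
}

// Usage sketch:
//   Class<?> generated =
//       new OneShotClassLoader(runtimeLoader).define("seph.gen.Invoker1", bytes);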

The price

Of course, all of these lovely features come with a price. Two prices, in fact. The first is that all the above features are incomplete, ranging from working patches to proofs of concept or sketches of ideas. That means the ground will shift under any language using them – which introduces hard version dependencies and complicates building. The other price is that none of these features are part of anything that has been released, and there is no guarantee they will ever be merged into Java. So the only viable way of distributing Seph would be to distribute standard build files with a patched OpenJDK so that anyone can download and use that specific JDK. But that limits interoperability and causes lots of other problems.

Somewhere in between

My current thinking is that all of the above choices are bad. For Seph I want something in between, and my current best approach looks like this. You will need a new build of MLVM with invoke dynamic and method handles to develop and compile Seph. I will utilize invoke dynamic and method handles in the implementation, and allow people to use Rémi Forax's JSR 292 backport to run it on Java 5 and 6. When Java 7 finally arrives, Seph will be more or less ready for it – and Seph can get some of the performance and maintainability benefits of using JSR 292 immediately. At this point I can't actually use defender methods, but if anyone is clever enough to figure out a backport that will allow defender methods to work on Java 5 or 6, I would definitely use them all over the place.

This doesn't actually preclude the possibility of creating alternative research versions of Seph that use some of the other MLVM patches. Charles Nutter has shown how much you can do by using flags to add features that are turned off by default. So Seph could definitely grow the above features, but currently I won't make the core of the language depend on them.



Preannouncing Seph


I've been dropping a few hints and mentions over the last few weeks, and I thought it was about time that I took some time to preannounce a new project I'm working on. It's going to be much easier writing my next few blog posts if people already know about the project, and my reasons for keeping quiet about it have mostly disappeared. It's also a moot point since I talked about it at the Emerging Languages camp last week, and the video will be up fairly soon. I've already put the slides online too, so some of this you might have already seen.

So without further ado, the big announcement is that I’m working on a new language called Seph. Big whoop.

Why?

I already have Ioke and JRuby to care for, so it’s a very valid question to ask why I would want to take on another language project – outside my day job of course. The answer is a bit complicated. I always knew and communicated clearly that Ioke was an experiment in all senses of the word. This means my hope was that some of the quirky features of Ioke would influence other languages. But the other side of it is that if Ioke seems good enough as an idea, there might be value in expanding and refining the concept to make something that can be used in the real world. And that is what Seph is really about. That blog post I wrote a few weeks ago with the Ioke retrospective – that was really a partial design document for Seph.

So the purpose of Seph is to take Ioke into the real world while retaining enough of what made Ioke a very nice language to work with. Of course, being the person I am, I can't actually avoid doing some new experiments in Seph, but they will mostly be a bit safer than the ones in Ioke, and some of the craziest Ioke features have been scaled back a bit.

Some features

So what's the difference? Seph will still be prototype-based object oriented, in the same way as Ioke. It will definitely consider the JVM its home. It will be homoiconic, and allow AST manipulation as a first-class concept – including working with message chains as a way of replacing blocks. It will still have a numerical tower. It will use almost exactly the same syntax as Ioke. It will still allow you to customize operators and precedence rules.

The big difference – the one that basically makes almost all the other design changes design themselves – is small but very important: objects are immutable. The only way to change an object is to create a new object that specifies the difference. This can be done either by creating a new child of the existing object, or by creating a new sibling with the specified attributes changed. In most cases the difference between the strategies isn't actually visible, so it might end up being just an implementation strategy.
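
A deliberately naive Java sketch of that idea – an illustration of the semantics, not Seph's actual representation:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// "Changing" an object means creating a new one that records the difference;
// the original stays untouched and can keep being used.
final class Obj {
    private final Obj parent;                 // the object this one was derived from
    private final Map<String, Object> cells;  // never mutated after construction

    Obj(Obj parent, Map<String, Object> cells) {
        this.parent = parent;
        this.cells = Collections.unmodifiableMap(cells);
    }

    // Create a child that differs in exactly one cell.
    Obj with(String name, Object value) {
        Map<String, Object> diff = new HashMap<String, Object>();
        diff.put(name, value);
        return new Obj(this, diff);
    }

    // Lookup walks up the chain of ancestors.
    Object get(String name) {
        if (cells.containsKey(name)) return cells.get(name);
        return parent == null ? null : parent.get(name);
    }
}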

Now once you have immutable objects but still focus on polymorphic dispatch, that changes quite a lot of things. It changes the core data structures, it changes the way macros work, it changes the flow of data in general. It also changes what kinds of optimizations will be possible.

Another side effect of immutability is that it becomes much more important to have a good module story. Seph will have first-class modules that still end up being simple Seph objects at the same time. It's really a quite beautiful and simple scheme, and it makes total sense.

If you're creating a new object-oriented language, it turns out that proper tail calls are a good idea if you can do them (refer to Steele for more arguments). Seph will include proper TCO for all Seph code and all participating Java code – which means you'll only really grow the stack when crossing Java boundaries. This will currently be done with trampolining, but I deem the cost worth the benefit of a tail-recursive programming style.
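
A rough sketch of what trampolining means in practice (my own illustration, not Seph's implementation): a tail call returns a description of the next step instead of actually calling, and a small driver loop runs the steps, so the Java stack never grows.

public class TrampolineSketch {
    // A step either produces another step or a final value.
    interface Step {
        Object run();
    }

    static Object trampoline(Step step) {
        Object current = step;
        while (current instanceof Step) {
            current = ((Step) current).run();
        }
        return current;
    }

    // "Tail-recursive" countdown expressed as steps; works for any n without
    // a StackOverflowError, at the cost of one small object per call.
    static Step countDown(long n) {
        return () -> n == 0 ? (Object) "done" : countDown(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(trampoline(countDown(1_000_000)));
    }
}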

I mentioned above that objects are immutable. However, local variables will be mutable. It will also be possible to create lexical closures. I'm still undecided whether it's a good idea to leave a big mutable hole in the type system, or whether I should make it impossible for lexical closures to mutate their captured environment. Time will tell what I decide.

Stealing is good

Seph believes in reusing concepts other people have already done a great job with. As such, many pieces of the language implementation will be stolen from other places.

Just like in Ioke, the core numbers will come from gnu.math. This library has served me well, and I'll definitely continue to use it. The big difference compared to Ioke is that the gnu.math values will be first-class Seph objects, and won't have to be wrapped. Seph will also have real floats instead of bigdecimals. This is a concession to reality (which I don't much like, btw).

Seph will incorporate Erlang-style lightweight threads with an implementation based on Kilim (just like Erjang).

As mentioned above, the core data structures will have to change. And the direction of change will be towards Clojure. Specifically, Seph will steal/has stolen Clojure's persistent data structures, all the concurrency primitives and the STM. I don't see any reason not to incorporate fantastic prior art into Seph.

As mentioned above, the module system is also not new – it's in fact heavily inspired by Newspeak. Having no globals forces this kind of thinking, but I can't say I would have been clever enough to come up with it without Gilad's writings.

Basically everything else is copied from or inspired by Ioke.

Isn’t mutability the essence of Ioke?

If you have worked with Ioke, or even heard me talk about it, you might have gotten the impression that mutability is one of the core tenets of Ioke. And your impression would be correct. It wasn't until I started thinking about what a functional/object hybrid version of Ioke would look like that I realized most of the things I like in Ioke could be preserved without mutability. Most of the macros, the core evaluation model and many other pieces will be extremely similar to Ioke. And this is a good thing. I think Ioke has real benefits in terms of both power and readability – something that is not easy to combine. I want Seph to bring the same advantages.

Will you abandon Ioke now?

In one word: no. Ioke is still an experiment and there are still many things that I want to explore with Ioke. Seph will not fill the same niche, it won’t be possible for me to do the same experimentation, and fundamentally they are still quite different languages. In fact, you should expect an Ioke release hopefully within a few weeks.

So will it be useful?

Yes. That's the whole goal. Seph will have an explicit focus on two areas that Ioke totally ignored: concurrency and performance. As seen from the features above, Seph will include several powerful concurrency features. And from a performance standpoint, Ioke was a tarpit – even if you wanted to make it run faster, there wasn't really anything to get a handle on. Seph will be much easier to optimize; it's got a structure that lends itself better to compilation, and I expect it to be possible to get it to real-world language performance. My current idea is that I should be able to get it to JRuby performance for roughly the same operations – but that might be a bit optimistic. I think it's in the right ballpark though. Which means you should be able to use it to implement programs that actually do useful things in the Real World ™.

Is it available?

No. At this point, Seph is still so young that I'm going through lots of rewrites. I would like the core to settle down a little bit before exposing everything. (Don't worry, everything is done in git, and the history will be available, so anyone who wants to see all my gory mistakes will have no trouble doing that.) In a nutshell, that is why this is a preannouncement. I want to get the implementation to the stage where it has some of the interesting features I've been talking about before making it public and releasing a first version.

Don’t worry though, it should be public quite soon. And if I’m right and this is a great language to work in – then how big of a deal is another month of waiting?

I’m very excited about this, and I hope you will be too! This is an adventure.



Questioning the reality of generics


I’ve been meaning to write about this for a while, since I keep saying this and people keep getting surprised. Now maybe I’m totally wrong here, and if that’s the case it would be nice to hear some good arguments for that. Here’s my current point of view on the subject anyway.

A specter is haunting the Java community – the specter of generics.

Java introduced a feature called generics in Java 5 (this feature is generally known under the name parametric polymorphism in the literature). Before Java 5 it wasn't possible to create a reusable collection that would ensure, at compile time, the type safety of what you put into that collection. You could create a collection of, for example, Strings and have that work correctly, but if you wanted a collection of anything, as long as that anything was all of the same type, you were restricted to doing runtime checks, or just having good tests.

Java 5 made it possible to add type parameters to any other type, which means you could create more specific collections. There are still problems with these – they interact badly with native arrays, for example, and wildcards (Java's way of implementing co- and contravariance) have ended up being very hard for Java developers to use correctly.

Java and C# both added generic types at roughly the same time. The C# version of generics differed in a few crucial ways, though. The most important difference in implementation is that C# generics are reified, while Java generics use type erasure. And this is really the gist of this blog post. Because over and over I hear people lament the lack of reified generics in Java, citing how lucky C# and the CLR are to have this feature. But is that really the case? Are reified generics a good thing? Of course, that always depends on who is asking the question. Reification might well be good for one person but not another. Here you will hear my view.

Reified? Huh?

So what do reified generics mean, anyway? It is probably easiest to explain by comparison with the Java implementation, which uses type erasure. Slightly simplified: in Java, generics don't exist at runtime. They are purely a fiction that the compiler uses to handle type checking and make sure you don't do anything bad with your collection. After the generics have been type checked, they are used to generate casts and type checks in the code using them, some metadata is inserted into the class file format, and then the generic information is thrown away.
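
A small illustration of what erasure means in practice (my own example):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();

        // At runtime both are just ArrayList - the type arguments are gone.
        System.out.println(strings.getClass() == numbers.getClass()); // true

        // Which is also why an unchecked cast "succeeds" immediately...
        @SuppressWarnings("unchecked")
        List<Integer> smuggled = (List<Integer>) (List<?>) strings;
        smuggled.add(42);

        // ...and the failure only shows up later, at the cast the compiler
        // inserted on our behalf:
        // String s = strings.get(0); // ClassCastException at runtime
    }
}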

In contrast, on the CLR, generic classes exist as specific versions of their class. The same class with different generic type arguments really is a different class. There are no casts happening at the implementation level, and the CLR will as a result generate more specific code for the generic code. Reflection and dynamic type checks on the type arguments are also possible on the CLR. Having reified generics basically means that they exist at runtime – the virtual machine knows about them and handles them correctly.

Multi-language virtual machines

Over the last twenty years something interesting has happened. Both our hardware and our software have gotten mature enough that a new generation of virtual machines has entered the market. Traditionally, virtual machines were made for specific languages, such as Pascal, Lisp and Smalltalk, and possibly excepting SECD and the Warren machine, there haven't really been any virtual machines optimized for running more than one language well. The JVM didn't start that way either, but it turned out to be better suited for it than expected, and there are lots of efforts to make it an even better platform. The CLR, Parrot, LLVM and Rubinius are other examples of things that seem to be becoming environments rather than just implementation strategies for languages.

This is very exciting, and I think it's a really good thing. We are solving very complex problems where the component problems are best solved in different ways. It seems like a weird assumption that one programming language is the best way of solving all problems. But there is also a cost associated with using more than one language. So having virtual machines act as platforms, where a shared chunk of libraries is available and the cost of implementation is low, makes a lot of sense.

In summary, I feel that the JVM was the first step towards a real viable multi-language virtual machine, and we are currently in the middle of the evolution towards that point.

Solving the problems

So why not add reified generics to the JVM at this point? It could definitely be done, and using an approach similar to the CLR's, where libraries are divided into pre- and post-reification versions, makes the path quite simple from an implementation standpoint. On the user side, there would be a new proliferation of libraries to learn – but maybe that's a good thing. There is a lot of cruft in the Java standard libraries that could be cleaned up. There are some sticky details, like how to handle the APIs that were designed for erased generics, but those problems could definitely be solved. It would also solve some other problems, such as making it possible for Scala to pattern match on type parameters, and solving part of the problem of abstracting over primitive types. And it's absolutely possible to do. It would probably make Java a better language.

But is it the only solution? At this point, making this kind of change would complicate the APIs to a large degree. The reflection libraries would have to be completely redesigned (but still kept around for backwards compatibility). The most probable result would be a parallel hierarchy of classes and interfaces, just like in the CLR.

Reified generics are generally proposed in discussions about three different things: first, performance; second, making some features easier for Scala and other statically typed languages on the JVM; and third, handling primitives and primitive arrays a bit better. Of these, the first is the least common argument, and the least interesting by far – JVM performance is already nothing short of amazing. The second point I'll come back to in the last section. The third point is the most interesting, since there are other solutions here, including unifying primitives with objects inside the JVM by creating value types. This would solve many other problems for language implementors on the JVM, and enable lots of interesting features.

The short stick

I believe in a multi-language future, and I believe that the JVM will be a core part of that future. Interoperability is just too expensive over OS boundaries – you want to be on the same platform if possible. But for the JVM to be a good environment for more than one language, it's really important that decisions are made with that in mind. The last few years of fantastic progress from languages like Rhino, Jython, JRuby, Groovy, Scala, Fantom and Clojure have shown that it's not only possible but beneficial for everyone involved to focus on JVM languages. JSRs 223, 292 and several others also mean the JVM is more and more being viewed as a platform. This is good.

Generics is a complicated language feature. It becomes even more complicated when added to an existing language that already has subtyping. These two features don't play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language – and that language uses the same generics. But generics aren't done. It isn't completely understood how to handle them correctly, and new breakthroughs are still happening (Scala is a good example of this). At this point, generics can't be considered "done right". There isn't only one kind of generics – they vary in implementation strategy, features and corner cases.

What this all means is that if you want to add reified generics to the JVM, you should be very certain that the implementation can encompass both all the static languages that want to innovate in their own versions of generics, and all the dynamic languages that want a good implementation and a nice facility for interfacing with Java libraries. Because if you add reified generics that don't fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi-language VM.

I'm increasingly coming to the conclusion that multi-language VMs benefit from being as dynamic as possible. Runtime properties can be extracted to get performance, while static properties can be used to prove interesting things about the static pieces of the language.

Just let generics be a compile-time feature. If you don't, there are two alternatives – either you are an egoist who only cares about the needs of your own language, or you think you have a generic type system that can express all other generic type systems. I know which one I think is more likely.



Emerging Languages camp – day 2


The second day of Emerging Languages camp was at least as good as the first day. We also managed to squeeze in four more talks, since everybody agreed that the afternoon break was too long and ineffective during day one. At the end of the day my brain was so thoroughly melted that I didn't even contemplate finishing these comments. But after some sleep I think I have a fresh perspective.

The sessions were a bit more varied compared to the first day – both in quality and in how far out the ideas were. Because my interest in the various subjects varies, there might be some inconsistency in the length of the reporting on the different languages.

Anyway, here goes:

Kodu

Kodu is a language from Microsoft for creating games. It's specifically aimed at kids, to see if they can learn programming in a better way using something like this. The language uses icons and a backend text-based syntax to make it easy for someone to program using structure instead of syntax. You get a basic 3D environment where you can modify and edit things in various ways. Another important part of the design is to get the game to quickly do something, so you get immediate feedback. Everything added to the language is user tested first – including gender testing. They thought long and hard about whether they should add conjunctions – and ended up deciding to do it. You work with an Xbox when programming and running the game. It's also free. Overall, Kodu looks like a really nice and innovative initiative, probably going back as far as Logo in terms of inspiration. Very nice.

Clojure

Rich didn't actually talk much about Clojure in general, but decided to focus on a specific problem he is working on solving. His talk title doesn't really say much about this, though: "Persistent, Transience, Persistents, Transients and Pods – invasion of the value snatchers". It was a great talk with lots of information coming extremely fast. I found myself concentrating harder during this talk than during any other at the conference, just to follow all the threads of thought.

Rich spent some time giving an introduction to persistent data structures so everyone knew how Clojure works with them – including how they are turned into transients – since that’s where the new feature comes in.

An important part of persistent data structures is that you preserve the performance guarantees of the mutable equivalent of that data structure. Clojure uses bit-partitioned hash tries, originally described by Phil Bagwell. This allows Clojure to have structural sharing, which means it's safe to "update" something – the old version is retained. It uses path copying to make it possible to update at low cost. There is definitely a cost to doing it, but it works well in a concurrent environment where other solutions would be more costly to get correct.
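
Clojure's tries are much more involved, but the basic idea of structural sharing can be sketched with something as simple as a persistent cons list (my own illustration, in Java):

// "Updating" returns a new head that points at the old, unchanged list.
// Nothing is copied and nothing is mutated, so old versions stay valid.
// Clojure's bit-partitioned tries apply the same idea with path copying,
// which keeps the cost of an update logarithmic rather than linear.
final class PList<T> {
    final T head;
    final PList<T> tail;

    private PList(T head, PList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    static <T> PList<T> cons(T value, PList<T> rest) {
        return new PList<T>(value, rest);   // rest is shared, never copied
    }
}

// PList<Integer> one = PList.cons(1, null);
// PList<Integer> two = PList.cons(2, one); // 'one' is still valid and unchanged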

Clojure has an epochal time model that allows it to view things as values in between being "modified". State sits one step above that, so you can see mutable change as the creation of a new value from an existing value, which is then put into the same reference the original value lived in. Clojure has four different reference types with various semantics for coordination.

To get good performance, some Clojure functions will actually mutate state that is invisible to anyone else in order to efficiently create new data structures. To get performance that is acceptable to Rich, Clojure's data structures are not implemented as purely immutable data structures (Okasaki style) on the Java side. Persistent data structures also don't scale to larger changes – specifically multiple collections, several steps, or other situations where you want functional end points but efficient mutation in between.

Transients are a feature that allows Clojure to give birth to a data structure. Clojure's transients accumulate changes in a safe way and can then finally give you a persistent value. Only vectors and hash-maps are currently supported. Lists are not, since there is no benefit in doing that. Transients also enforce thread isolation. Composite operations are OK, and so is multi-collection work, and you don't need any locks for this. This is already in Clojure, but transients might be doing too much: they both handle editing and enforce the constraints on it, such as single-threadedness. Transients can also sometimes return new values, even on mutating operations.

Pods allow you to split the policy out from transients. Values go in, values go out. The process goes through the pod. Different policies are possible, such as single-threadedness or mutexes. A pod knows how to make a transient version of a value. Functions that modify a pod have to return a new thing (or the same thing). Dereferencing the pod gives you a new value from the pod at that point. This gives you the possibility to apply recipes to ordinary Java objects too. A good example is String vs StringBuilder. Pods can ensure lock acquisition order, but not lock composition – although pods can at least detect it. There are still a few details in the design that Rich hasn't decided on yet.

All in all, a very interesting talk, about the kind of concurrency problems you wish your language had.

E/Caja

Mark Miller recapped the interaction models of the Web, starting with static frames and going to the current mess of JavaScript fragments going back and forth using JSONP, AJAX and Comet. He also talked a bit about the adoption curves of languages and why some languages get adopted, positing that a mess of features may be easier to get adopted – which means many languages succeed by adding complexity.

E is an experiment in expressing actors in a persistent way. He used some of the lessons from E combined with AJAX/JavaScript to create Caja, a secure language. Some of the features from Caja were then used to start work on ECMAScript 5. They are currently working on a standard for SES, secure JavaScript. Dr. SES is an extension of this; it stands for Distributed, Resilient, Secure JavaScript. Object capabilities involve two additions to a regular memory-safety and encapsulation model: effects only on held references, and no powerful references by default. This means a reference graph becomes an access graph. Caja can sanitize JavaScript to prevent malicious behavior, but preserve the semantic meaning of the program outside of that.

He showed some examples of how Caja can be used to sanitize regular JavaScript and have it running securely. Very interesting stuff, although the generated code didn’t look as amenable to debugging as something like CoffeeScript.

Fancy

Fancy is a language that tries to be friendly to newcomers, with good documentation, a clean implementation and so on. It's inspired by several languages: Smalltalk (pure message passing, everything's an object, dynamic, class-based OO, metaprogramming, reflective), Ruby (file based, embraces UNIX, literal syntax, class definition is executable script, fixed some inconsistencies with procs/lambdas/blocks), and Erlang (message-passing concurrency, lightweight processes – not implemented yet). Fancy takes the opinion that first class is good; classes, methods, documentation and tests should all be first class. FancySpec is a simple version of RSpec. Tests for all built-in classes and methods are there, and these tests are not dependent on the implementation. There are plans to port Fancy to a VM. Methods marked with NATIVE will have an equivalent method in Fancy and in the interpreter, to improve performance.

It’s got dynamic scoping and method caching. Logic can be defined based on the sender of a message, which makes it possible to do things like private and public.

Exceptions are taken directly from the implementation language (i.e. C++).

The language seems to be pretty similar to Ruby in semantics, but with a more Smalltalk-like syntax.

BitC

BitC is geared towards critical systems code. Resource constrained – CPU, memory, those kinds of areas. One cache miss sometimes counts. Abstraction is fine, but only if it's the right one. Variance constrained too. Predictability is very important, so something like a JIT can be a problem. Statically exception free. "Zero" runtime footprint. Non-actuarial risk model. Mean time between failures in decades. The problem is to establish confidence. After other failures in this area, the conclusion has been that BitC shouldn't be a prover.

The language is an imperative functional language with an HM-style parametric type system. You have explicit control of representation. State is handled in a first-class manner. Inferencing actually infers mutability in lots of cases. Dependent range checking isn't there yet, but is coming soon. "The power of ML/Haskell", "The low-level expressiveness of C", "Near-zero innovation".

Trylon

Trylon is a small language, indentation based, that compiles through C. It's object oriented, with prototypes under the class-based system. According to the author, there is nothing really new in the language – he just did it for his own sake. There are no users so far except for the author.

ooc

The language tries to be a high-level low-level language. It mixes paradigms quite substantially and has some nice features. It's class based, and mostly statically typed.

Coherence/Subtext

Jonathan Edwards started this presentation by showing a small example where the ordering of statements in an implementation depends on what representation you use for the data, and showed that it's impossible to handle this case in a general way. From that point he claimed that there is a fundamental tension between styles in a language, and that you can only get two of these three: declarative programming, mutable state and data structures. I'm not sure I agree with his conclusions, and the initial example didn't feel like anything I've ever had trouble with.

Based on the conclusion that you can only have two of the three, he went on to claim that the thing that causes all these problems is aliasing. So in order to avoid aliasing, his system uses objects whose instances are always physically contained within another object. This means you can refer to these objects without having actual pointers – and thus cannot do aliasing either. From that point on, his system allows declarative programming of the flow, where updates never oscillate back out to create more updates.

Lots of interesting ideas in this talk, but I’m not sure I agree with either the premise or the conclusions.

Finch

Finch is a small programming language, bytecode compiled, with fibers, blocks, TCO, objects, prototypes, a REPL and Smalltalk-style message selectors. In the future, the author aims to add metaprogramming, some self-hosting, continuations and concurrency features.

Circa

Circa is a small programming language that gives you immediate feedback. It's aimed at game programming, and achieves this by running the script many times (once for every frame, as far as I understood it). You then specify what state you have in your program, and this state will be automatically persisted between invocations, so that a specific invocation of a specific function will always get access to the same state it started out with. This was a very interesting but weird model. It seems to work really well for smaller prototyping of games and graphics, but I'm wondering what can be done to expand it.

Wheeler

Wheeler is a proof of concept presented by Matt Youell. It's pretty hard to describe, and I'm not even sure there's a computational model there yet. The project is apparently just a few weeks old, and the ideas are still in progress. The basic tenets of the language seem to be that you work with categories of things and establish transitions between them. A transition pattern matches the things it looks for, which means that things like syntax and ordering don't mean as much. The author calls it mutual dispatch, because it uses the types/categories of everything involved to establish which transitions to use. At this point there is no time model, so everything happens in one sweep, but once a time model gets in there it might be very interesting. To me it looked a bit like a cross between neural networks and cellular automata.

Interval arithmetic

Alan (Mr Frink) gave a talk about the problems with floating-point numbers, and one way of handling them. Floating-point numbers cause problems by making it easy to introduce small errors.

An interval is a new kind of number. It represents a specific number by giving two end points and saying the real number is somewhere within that interval. You can see it in two different ways: "the right value's in there somewhere but I'm not sure where" or "the variable takes on ALL values in the interval simultaneously".

This was a very interesting discussion, and you can find out more about it from Frink’s web page (just search for Frink and interval arithmetic). At the end of the presentation, Alan gave this challenge to other languages:

for:

x=77617

y=33096

calculate:

((333 + 3/4) – x^2) y^6 + x^2 (11x^2 y^2 – 121 y^4 – 2) + (5 + 1/2) y^8 + x/(2y)

Ioke handles it correctly, both using ratios and using decimals.
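
For anyone who wants to try it from Java, here is my own transcription of the expression using plain doubles. The mathematically correct value is roughly -0.827396 (this is Rump's classic example), but double precision loses essentially every significant digit to cancellation and prints something wildly different:

public class RumpExample {
    public static void main(String[] args) {
        double x = 77617.0;
        double y = 33096.0;

        double result = (333.75 - x * x) * Math.pow(y, 6)
                + x * x * (11.0 * x * x * y * y - 121.0 * Math.pow(y, 4) - 2.0)
                + 5.5 * Math.pow(y, 8)
                + x / (2.0 * y);

        // The correct answer is about -0.827396; the value printed here is not
        // even close, and gives no hint that anything went wrong.
        System.out.println(result);
    }
}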

Stratified Javascript

Stratified JavaScript adds some concurrency features to JavaScript based on Strata. It looked like a very principled approach to giving JS concurrency primitives that are easy to use and at the same time very powerful. The presenter showed several examples of communication, blocking and coordination working really well.

Factor

Factor is a very high-level stack-based language created by Slava Pestov. He went through some of the things that Factor does well and other dynamic programming languages handle less well, like reloading code from the REPL. Lots of other small tidbits of how powerful Factor is and how expressive a stack language can be. At the end of the day I still think it's interesting how much Ioke code sometimes resembles Factor, even though the underlying semantics are vastly different.

D

Walter Bright showed D, his systems-level programming language. He focused on showing that it can do several different paradigms in the same language – all of it looked very, very clean, but from these examples I got the impression that D is an extremely big language. To summarize, D can do inline assembler, class-based OO, generative programming, RAII, procedural, functional and concurrent programming (and I probably missed a few). I liked the approach to immutability, but I must admit I'm scared of the sheer size of the language. It's impressive that such a big language can get such good compile times.

AmbientTalk

AmbientTalk is a language built on top of Java that puts communication at the center. It is supposed to be used in areas where you have bad network connectivity and want to communicate between different devices in a flexible way. Things like network outages aren't exceptions, because they will happen all the time in the environments AmbientTalk is built for. The language embraces futures to a large degree and also takes a principled approach to how Java integration works – so that if you send an AmbientTalk object into Java, it will work as if you had sent it to a remote device, and the only way Java can interact with that object is by sending messages to it. Much interesting stuff in this talk.

And that was it. I obviously can't capture all the interesting hallway and pub conversations that were had, but hopefully this summary will be helpful until the videos come along in two to four weeks. I would call this conference a total success, and I really look forward to next year.



Emerging Languages camp – day 1


Yesterday was the first day of the Emerging Languages Camp, a part of OSCON specifically organized for language creators and designers. You can read more about it at www.emerginglangs.com. The first day was fantastic – lots of very interesting talks and great conversations. The amount of brain power in this room is really humbling.

The format of the camp is that there are about 20 speakers and each speaker gets 20 minutes. This is a fairly limiting format and means the speakers will have to focus their talks quite substantially. I expected a few talks (including my own) to bomb completely because of this, but it didn’t happen during the whole day. All of the talks were very different but good in many ways.

All of the presentations were filmed by Confreaks and will be available within a few weeks.

I’ll try to write a few sentences about each presentation, with thoughts and impressions baked in.

Go

Rob Pike started out the day by talking about the history of CSP (communicating sequential processes) and the lineage of languages that led to Go. Most of the talk was based on using channels/goroutines to handle concurrency. It was definitely a good talk, but it didn’t get me more interested in using Go for anything.

Ioke/Seph

I had the second slot. I had twenty minutes to cover both Ioke and a new language I’m working on, called Seph. Against all odds, my talk went quite well and I managed to communicate the things I wanted to get said. Hopefully the audience wasn’t too bored.

Thyrd

Thyrd is a proof-of-concept visual language, focused on using tablets for programming – so it's distinctly non-textual. In many cases you drag and drop operations instead of typing them. The actual development happens in a recursive grid of cells. I'm wondering what the audience for this language would be – it definitely looks intriguing though, and I like how some algorithms ended up being very easily readable and understandable.

Parrot

Allison Randal gave a talk about what's currently happening with Parrot. It seems they are going for a new rewrite of most of the subsystems. One of the changes is going from a CISC-style opcode system to a RISC style. Parrot apparently has over 1200 opcodes at this point, and they want to scale everything back to about 20-30 bytecodes instead. In preparation for this, they have ripped out the JIT and will revisit most of the subsystems in Parrot to see what can be done. Allison also gave the audience the distinct impression that Parrot is still quite slow for user programs.

Ur

Of all the talks during the day, I think I understood the least of the Ur/Web talk. Ur is a limited functional programming language focused specifically on building web applications. It's got dependent types inspired by Agda and allows you to statically check your whole program. The example shown was a simple CRUD app, and I didn't get any impression of how complicated it would be to actually use it for a real-world application. The speaker said the only real-world web app he knows about is a hosting application for Ur applications that he is building himself.

Frink

I don’t think I can do this presentation justice. Frink is just incredibly cool and you should check it out. It’s a general purpose programming language, but it’s got units of measure and several other features built in that make it very easy to calculate all kinds of interesting facts. As an example, he showed that if all the people in China jumped at the same time, it would be equivalent to a 4.7 on the Richter scale.

Newspeak

Gilad Bracha talked a bit about the basic ideas and principles behind Newspeak and what its current status is. Gilad focused on the absence of global state, and on all names being late bound (including class names). The first feature falls quite naturally out of prototype-based OO, so it’s something Io, Ioke and Seph all have (and it’s really nice). The second feature is a bit more obscure, and I’m not sure it gives as many benefits as the first one.

F#

Joe Pamer talked about what they had to do to take F# from a research language to something Microsoft could ship in Visual Studio 2010. Not something most of us really think about, but there are lots of challenges in doing that kind of transition. Joe covered this quite well and also gave us an insight into the current state of F#.

CoffeeScript

CoffeeScript is a language that compiles down to JavaScript. Compared with GWT, for example, it stays pretty close to JavaScript semantics, and the generated code can be debugged and read without wanting to stab your eyes out. The syntax of CoffeeScript is very pleasant and looks very nice to work with (it’s indentation based, and focuses on making lambdas as small as possible). Next time I’m reaching for JavaScript, I think I might just go for CoffeeScript instead. Good stuff.

Mirah

Charles Nutter covered Mirah (the language formerly known as Duby). It looks more and more complete and useful, and sooner or later I’m going to try switching most of my Java development to Mirah. The extensibility features make it possible to do metaprogramming tricks in Mirah that you wouldn’t even try in Ruby.

Io

Steve jumped in at the last minute to cover for the Objective-J speaker, who couldn’t make it. He covered the basics of Io, talking about concurrency and the other core features.

It’s been a great first day, and now day two begins – so I’ll have to focus on that.



Patterns of method missing


One of the more dynamic features of Ruby is method_missing, a way of intercepting method calls that would otherwise cause a NoMethodError. This feature is by no means unique to Ruby – it exists in Smalltalk, Python, Groovy, some JavaScripts, and most CLOS extensions have it too. But Ruby being what it is, for some reason this feature seems to be more heavily used in Ruby than anywhere else. It’s also a feature most Ruby developers know about. Is this because Ruby people are power hungry, crazy monkey patchers? Maybe, but method_missing is also potentially very useful, if used correctly. Of course, it’s also exceedingly easy to misuse. In almost all cases where you think you need method_missing, you actually don’t.

The purpose of this post is to take a look at a few ways people are using method_missing in the wild, what the consequences are, and what you can do to mitigate them. I’m bound to have missed a few use cases here, so please feel free to add more in the comments.

Adding better debug information on failure

One of the simplest but still very powerful uses of method_missing is to include more information in the error message than you would usually get. A simple example could look like this:

require "yaml"   # needed for to_yaml below; forgetting this causes recursive method_missing calls

class MyFoo
  def method_missing(method, *args, &block)
    # Include the method name, the arguments and the state of the
    # receiver in the error, to make debugging easier.
    raise NoMethodError, <<ERRORINFO
method: #{method}
args: #{args.inspect}
on: #{self.to_yaml}
ERRORINFO
  end
end

This usage is pretty common – and is, in my opinion, a very valid use of the functionality. The only thing you have to be careful about is not to introduce any recursive calls to method_missing. If you forget to require YAML in the above example, for instance, the error would be a stack overflow.

One of the places where you’ve almost certainly seen this used is in Rails, where the feature is called whiny nils. The idea is that nil gets a method_missing that provides some extra information – it can guess, based on the method name, which object you were probably expecting. This is a typical message from Rails’ whiny nil:

Loading development environment (Rails 2.2.2)
>> nil.last
NoMethodError: You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.last
	from (irb):2

This functionality is exceedingly simple to implement, but gives you a lot of leverage to find and debug problems more quickly and easily.
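To give a feel for how little is needed, here is a rough sketch of a whiny-nil-style method_missing. This is not Rails’ actual implementation – the class list and messages are just illustrative:

# Hypothetical sketch, not the real Rails whiny nil.
class NilClass
  # Classes whose instances nil commonly stands in for.
  EXPECTED = [Array, Hash, String]

  # Map method names (such as :last) to the class that defines them,
  # so we can guess what the caller probably expected.
  OWNERS = EXPECTED.each_with_object({}) do |klass, acc|
    klass.instance_methods(false).each { |m| acc[m] ||= klass }
  end

  def method_missing(method, *args, &block)
    message = "You have a nil object when you didn't expect it!"
    if (expected = OWNERS[method])
      message << "\nYou might have expected an instance of #{expected}."
    end
    message << "\nThe error occurred while evaluating nil.#{method}"
    raise NoMethodError, message
  end
end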

Encode parameters in method name

Another common pattern is to use the name of the method to encode parameters, instead of passing them in as explicit arguments. In some cases this can be used to good effect, but where possible it is better to either define the possible names up front or pass the parameters as actual arguments. Contrast a Rails-style find expression:

Person.find_by_name_and_age("Ola", 28)

With another way of creating the same API:

Person.find_by(:name => "Ola", :age => 28)

The difference here isn’t that large, and in the case of Rails I do think these finders are harmless – but creating this kind of API can make an application much harder to debug and maintain, so care should be taken.
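As a rough sketch – not the actual Rails implementation – a dynamic finder like that could be built on top of an explicit one with something along these lines (Person and find_by are hypothetical stand-ins):

class Person
  # The explicit finder that the dynamic one translates into.
  def self.find_by(conditions)
    # ... look up a person matching the conditions hash ...
  end

  # Turn find_by_name_and_age("Ola", 28) into
  # find_by(:name => "Ola", :age => 28).
  def self.method_missing(method, *args, &block)
    if method.to_s =~ /\Afind_by_(.+)\z/
      attributes = $1.split("_and_").map { |a| a.to_sym }
      find_by(Hash[attributes.zip(args)])
    else
      super
    end
  end
end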

Builders

Creating XML, HTML, graphical UIs and other hierarchical data structures lends itself very well to the builder pattern. The idea of a builder is to use Ruby’s blocks and method_missing to make it easy to create any kind of output structure. The canonical example in Ruby is Jim Weirich’s Builder, which can be used to easily create complicated XML structures. A small example:

require "builder"

builder = Builder::XmlMarkup.new
xml = builder.books { |b|
  b.book :isbn => "124" do
    b.title "The Prefect"
    b.author "Alastair Reynolds"
  end

  b.book :isbn => "65565" do
    b.title "Against a Dark Background"
    b.author "Iain M Banks"
  end
}

The result of this code will be a properly formatted and escaped XML document. Most notably, all the finicky details of closing tags and escaping rules are taken care of for us.

In general, this approach is very pleasant to work with. It’s easy to test (since you don’t even have to generate the real XML to make sure it’s correct), and it works well with your existing Ruby tools. It’s also quite easy to implement a basic version of it – although for the fully general case you need a blank slate object.
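To show how little is needed for the basic case, here is a toy sketch – nowhere near the real Builder library, with no escaping, attribute support or blank slate handling:

# A toy builder: every unknown method becomes an element with that name.
class TinyBuilder
  def initialize
    @out = ""
  end

  def method_missing(name, content = nil, &block)
    @out << "<#{name}>"
    if block
      block.call(self)     # nested elements append to the same output
    else
      @out << content.to_s
    end
    @out << "</#{name}>"
  end

  def to_s
    @out
  end
end

b = TinyBuilder.new
b.books do |x|
  x.book do |y|
    y.title "The Prefect"
    y.author "Alastair Reynolds"
  end
end
puts b
# <books><book><title>The Prefect</title><author>Alastair Reynolds</author></book></books>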

Accessors

The inversion of the builder pattern is a parser that slurps in an XML document (or YAML, a database, or anything else really) and then lets you access its elements through regular Ruby method calls – intercepting those calls with method_missing and looking them up. Usage could look something like this:

slurper = Slurp <<XML
<books>
  <book isbn="14134">
    <title>Revelation Space</title>
    <author>Alastair Reynolds</author>
  </book>
  <book isbn="53534">
    <title>Accelerando</title>
    <author>Charles Stross</author>
  </book>
</books>
XML

puts slurper.books.book[1].author

I’m not much of a fan of this approach. In almost all cases there are better ways of doing it than using method_missing. The only valid use case for something like this would be a really hacky, throwaway one-off. In general, Ruby lets you define methods dynamically anyway, so you can do that instead.
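For instance, if you know the structure once the document is parsed, you can define real methods up front instead of trapping calls. A hypothetical sketch (not a real library), using define_method and REXML:

require "rexml/document"

# Define an accessor method for each child element at parse time,
# instead of intercepting unknown calls with method_missing.
class Slurped
  def initialize(element)
    @text = element.text
    element.elements.each do |child|
      singleton = class << self; self; end
      singleton.send(:define_method, child.name) { Slurped.new(child) }
    end
  end

  def to_s
    @text.to_s.strip
  end
end

doc = REXML::Document.new("<book><title>Revelation Space</title></book>")
book = Slurped.new(doc.root)
puts book.title   # => Revelation Space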

Proxy/delegation

When you want to insert a proxy that resends method calls somewhere else, method_missing can be an easy way to get that working. You can resend method calls to another object, resend them to several objects, or send them over the wire to implement a crude RMI system. You can also record method calls and write them to disk. All of these can be achieved with just a few lines of code. But in many cases there are better options – especially if you want to do delegation. One of the dangers (and also part of the power, of course) of method_missing is that it will accept any method call whatsoever – so if you misspell something, method_missing will happily treat it the same way.

But when delegating, you generally want to be explicit about what you delegate, precisely to avoid this problem. There are several classes and modules in the standard library that let you say explicitly which methods to delegate and where to delegate them – and if you can, use those instead. Proxying and delegation should be explicit if possible.
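As an illustration, contrast a catch-all method_missing proxy with explicit delegation via the standard library’s Forwardable module (the class names here are made up for the example):

# Catch-all proxy: forwards every unknown call, misspellings included.
class LoggingProxy
  def initialize(target)
    @target = target
  end

  def method_missing(method, *args, &block)
    puts "calling #{method}"
    @target.send(method, *args, &block)
  end

  def respond_to?(method, include_private = false)
    @target.respond_to?(method, include_private) || super
  end
end

# Explicit delegation: only the listed methods are forwarded,
# so a misspelled call still fails loudly.
require "forwardable"

class AuditedAccount
  extend Forwardable
  def_delegators :@account, :deposit, :withdraw, :balance

  def initialize(account)
    @account = account
  end
end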

Making parts of an API extensible and optional

In some cases you might want to create a base class for an API, but allow subclasses to add additional API methods. Sometimes it makes sense to simply ignore calls to these subclass-specific methods when they are made on an object that doesn’t support them. By definition, the superclass can’t know which API methods its subclasses might add, so it makes sense to use method_missing to open up the API and make it more convenient. This is not very common – and in most cases it should probably not be done – but sometimes it can be a useful technique.
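A small sketch of the idea, with made-up class names:

# Base class: optional hooks (anything starting with render_) are
# silently ignored unless a subclass actually defines them.
class Renderer
  def method_missing(method, *args, &block)
    return nil if method.to_s.start_with?("render_")
    super
  end
end

class FancyRenderer < Renderer
  def render_footer
    puts "-- footer --"
  end
end

Renderer.new.render_footer       # quietly ignored
FancyRenderer.new.render_footer  # prints the footer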

Test helpers

All kinds of test helpers can be created using method_missing. They can be used to implement factories, delegate calls, and do all sorts of other things. If you take a look at any open source Ruby project, the tests are the place where you are most likely to find implementations of method_missing. I can’t say that these implementations follow any specific patterns, either.

Summary

Finally, remember: method_missing is a powerful, powerful feature – and in almost all cases it should not be used. But if you do want to use it, don’t forget to implement respond_to? correctly as well. And if you’re designing your class for subclassing, it’s also important to design your method_missing usage for inheritance. Liskov’s Substitution Principle applies here.
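As a final, minimal sketch of pairing the two (a hypothetical class, just to show the shape of it):

class Document
  def method_missing(method, *args, &block)
    if method.to_s.start_with?("chapter_")
      "contents of #{method}"
    else
      super
    end
  end

  # Keep respond_to? consistent with what method_missing accepts,
  # so code that checks before calling is not lied to.
  def respond_to?(method, include_private = false)
    method.to_s.start_with?("chapter_") || super
  end
end

doc = Document.new
doc.respond_to?(:chapter_one)   # => true
doc.chapter_one                 # => "contents of chapter_one"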



RubyConf India was a great success!


This weekend ThoughtWorks, in collaboration with RubyCentral and several sponsors, arranged the first ever RubyConf India. I was there as a keynote speaker and wanted to take a few minutes to tell you about the experience, since I think this was an event that showed how much pent-up interest there really is for Ruby in India.

When planning the conference we aimed for about 150 delegates – but that sold out in a few days, so we rearranged the event to accommodate about 400 people, and we managed to fill that too. At the end of the day, there were about 420 people there, including both delegates and speakers.

Unfortunately there had been some problems with visas for Chad Fowler and Ivan Porto Carrero. We solved that by having Ivan present over Skype, rearranging some of the talks, and giving Brian Guthrie a last-minute spot.

The conference started with Roy Singham, the founder and chairman of ThoughtWorks, setting the tone for the rest of the event by getting people to think about how India can start innovating for real using technologies such as Ruby.

After that I did a keynote about programming languages, quite similar to the one I did at RailsWayCon in Berlin last year. Hopefully people found it interesting. I tried to discuss why we need different languages, where some Ruby language features come from, a small taxonomy of languages, and some ideas about what might happen in the future.

Obie Fernandez was next up with a controversial keynote called Blood, Sweat and Rails. The core of the keynote was a description of different lessons Obie has learned from running HashRocket. The controversy came from two different factors, the first being Obie’s heavy use of profanity – heavy enough that one of the organizers went up on stage halfway through the presentation and asked him to tone it down. The other controversial part was that Obie used his keynote spot to spend quite a lot of time promoting HashRocket. Later in the day, a representative from ThoughtWorks and one from Castle Rock (another sponsor) went on stage and pointed out that the point of sponsorship was not to push their respective companies but to push Ruby in India.

After Obie’s keynote it was lunch time. I was tired and jetlagged, so I walked around in a bit of a daze once my keynote was done. I did catch some of Aman’s talk about doing Ruby OO with objects instead of classes. What I saw sounded intriguing, but I didn’t get the full picture since I walked in and out of the presentation.

The highlight of the day was definitely Matz’s keynote, where he called in over Skype and did a presentation and some Q&A. Matz talked a bit about the history of Ruby and then mentioned some things about its future. The most interesting concrete piece of information was that 1.9.2 will come this summer, and after that work on 2.0 will start. That version will also make some heavier changes, some of which I’m not sure I like (such as requiring parentheses for invocation).

The second day started with Nick Sieger from EngineYard talking about the next version of Rails, with lots of useful information about what we can expect from the next major revision. Compared to my keynote and Obie’s, this one was chock full of technical information rather than high-level concepts. Good stuff.

After that, Pradeep Elankumaran from Intridea did a very interesting session about startups. His talk ended up being a long discussion with much of the audience, and that discussion kept people interested enough to stay long into lunch time. Very good session.

After lunch I had a major conflict of interest, since two colleagues of mine had sessions running at the same time: Sarah Taraporewalla talking about Ruby view technology, and Sidu Ponnappa and Niranjan Paranjape talking about entropy in long-running Ruby projects. I ended up choosing Sarah’s talk – which was brilliant. She did a great job explaining why current view technologies are generally too permissive and make it harder to test the behavior of your views correctly.

Of course, Sidu and Niranjan also got good reviews – and from what I heard, their session sounded like it was full of controversial ideas. Sounds like fun – I wish I’d been there too.

The next session was about building a Ruby application server. Sadly, I’d somewhat misunderstood what the session was about, since I assumed that “application server” was meant in the Java sense. That was not the case – it was about implementing a Ruby web server. I lost interest fairly quickly and ended up getting some work done instead.

Brian Guthrie’s replacement session was called “Advanced Ruby Idioms So Clean You Can Eat Off of Them”, and it was both hilarious and very much on point. The room was standing-room only, and Brian’s presentation sparked lots of debate. The gist of it was that there is basically no such thing as magic in programming – everything is a function of your understanding of what’s going on in the language. You can have good code or bad code, clean code or dirty code, but “magic code” just means you don’t understand the language, and it isn’t really an argument for or against an implementation. Brian expanded on this with loads of tips and tricks on what to do and what not to do.

I didn’t catch the final session, since I was very tired at that point. After it, Roy Singham came back with a keynote covering a number of different things: the current state of agile in the world, the Ruby and Rails culture in the US and how it should inform the Ruby culture in India, what Unicef has been doing lately with Rails and other technology, and how we can use new technology to be more socially responsible. The keynote sparked a lot of debate, and as usual Roy made quite a few controversial statements in soundbite form. Take a look at my Twitter stream from this weekend for some quotes.

All in all, I think this was a total success: lots of interesting talks, fantastic networking opportunities and a great vibe in the air. Judging from the tweets, RubyConf India seems to have been greatly appreciated by a large majority of the attendees. Here’s hoping for an even better one next year!