Use presence


One of the things you quite often see in Rails code bases is code like this:

do_something if !foo.blank?

or

unless foo.blank?
  do_something
end

Sometimes it’s useful to check for blankness, but in my experience it’s much more useful to check for presence. It reads better and avoids negations – with blank? it’s far too easy to end up with complicated negative conditions.

For some reason it seems to have escaped people that Rails already defines the opposite of blank? as present?:

do_something if foo.present?

There is also a very common pattern that you see when working with parameters. The first iteration of it looks like this:

name = params[:name] || "Unknown"

This is actually almost always wrong, since it will accept a blank string as a name. In most cases what you really want is something like this:

name = !params[:name].blank? ? params[:name] : "Unknown"

Using our newly learnt trick, it instead becomes:

name = params[:name].present? ? params[:name] : "Unknown"

Rails 3 introduces a new method to deal with this, and you should backport it to any Rails 2 application. It’s called presence, and the definition looks like this:

def presence
  self if present?
end

With this in place, we can finally say

name = params[:name].presence || "Unknown"

These kinds of style details make a huge difference in the small. Once you have idiomatic patterns like these in place in your code base, it’s easier to refactor the larger parts. So any time you reach for blank?, make sure you don’t really mean present? or even presence.



Blog about the future of the Internet


I have decided I have several things related to WikiLeaks, the future of the Internet and other related things that I want to write about. However, that content seems like it will be a bit far from what I usually write about in this blog (however seldom I actually post here). So I’ve decided to create a new, parallel blog for these postings. I will not make any notice of new blog posts on that blog here, and the posts to that blog will not be syndicated through the ThoughtWorks blog syndication. If you are interested in what I have to say on these issues, the new blog can be found here. That’s all.



The End of the Free Internet – Panel tomorrow


Tomorrow evening (Wednesday, Dec 8, 2010) at 8:15pm EST, Rebecca Parsons, Tim Bray and Marc Rotenberg will participate in a panel discussion hosted by The Real News and moderated by Paul Jay. The discussion will focus on these questions:

  • What does the cut off of service to WikiLeaks mean for the future of the Internet?
  • Will digital journalism be less protected?
  • Was WikiLeaks afforded procedural protections before its website and DNS entries were shut down? What process should be required?
  • The Internet is vulnerable to internal threats. What technical innovations are needed?
  • Can leaks ever be stopped? Is it worth the price?
If you have an interest in the impact of the recent events on the future of a free and open Internet, I urge you to come watch!
For more information and a link to the actual broadcast, go here.


Boycotting Amazon and PayPal


After the events of the last week I see no other alternative but to boycott Amazon and PayPal. I wanted to just very quickly make a note here that I have closed my accounts with both, and explain why.

Amazon’s decision to stop hosting WikiLeaks is said to have been based on terms-of-use violations. However, these violations didn’t stop Amazon from hosting WikiLeaks the last two times, and there are many other web sites on Amazon with similarly questionable content. If anyone were to write a book based on the WikiLeaks diplomatic cables, would Amazon not sell it then? I can’t help but believe that the real reason Amazon stopped hosting WikiLeaks had to do with pressure applied by Joe Lieberman and the US Government. As far as outside observers can determine, there seems to have been no legal process – just a phone call from Lieberman’s department. It sets a very dangerous precedent when a large company and a de-facto common carrier folds like this under Government pressure.

Very quickly after Amazon folded, Tableau Software decided to do the same, without any contact from Lieberman’s department.

On Saturday, it was announced that PayPal had decided to follow suit and cancel WikiLeaks’ donation account.

For many reasons, these three incidents have convinced me that I have to make a moral stand and boycott Amazon and PayPal – I have cancelled my accounts. I urge you to do the same if you feel that the events of the last few days have set a dangerous precedent.



Language features at the Emerging Languages camp


As I’ve covered in several blog posts, I visited the Emerging Languages camp last week. It was an interesting experience for many reasons, and some of my conclusions are still half formed. I wanted to talk a little bit about some common themes in several languages presented at this camp, and also what the trends are leaning towards. Now, I’m not entirely sure what my insights will be yet, but hopefully I’ll know as I approach the end of this blog post.

There are several different axes along which you can divide the – roughly 26 – languages presented. The first one is in terms of age and maturity. The oldest languages presented were probably Parrot, Frink, Io, Factor and D – all of which have been around for eight to ten years. All of these languages are very mature and you wouldn’t hesitate to use them for real-life work. The second category is languages that range from a few years old to quite new. Most of these languages are still evolving, still not stable, but definitely on the path to getting there. The final set of languages are the most interesting in my view – the new ideas that have just started germinating, or even concepts that aren’t actually there yet. From my list of interesting things, Wheeler is definitely a language in that category.

But you can also look at the types and features you find in the new languages (and I will exclude the oldest group of languages when talking about these features and ideas). There is a large group of languages that are evolutionary rather than revolutionary. This is fine, of course. Many of these languages take much inspiration from Ruby and Smalltalk. Object orientation is extremely common among these languages, and several of them are prototype based. There were also several languages with indentation-based syntax. I was surprised by the number of languages that target native code instead of a virtual machine. AmbientTalk, Clojure, Ioke/Seph, Frink and Mirah target the JVM, Stratified JavaScript, E/Caja and CoffeeScript target JavaScript, and F# targets the CLR. All the other languages can be considered native. I was quite surprised about this – anyone got a good explanation?

There were also several graphical languages presented. These are harder to categorize from a traditional paradigm perspective. Thyrd is more of a proof of concept, very graphical, but backed by a stack language in the style of Forth. Kodu seems to have a quite traditional backend, but the graphical interface hides that in most cases. Both of these languages are optimized to run in situations where you don’t always have a keyboard – Thyrd for tablet PCs and Kodu for the Xbox. Subtext/Coherence is based around non-syntactic thinking, but didn’t seem to have a graphical interface at this point either.

As mentioned above, the trend of using JavaScript as a target language also seems to be on the way up – CoffeeScript, Caja and Stratified JavaScript all follow that approach. Parts of Ur/Web are also compiled to JavaScript to allow the base language to describe behavior on both the server and the client. Gilad reported that he is very interested in getting Newspeak to run on JavaScript too, and there has been a lot of talk about getting Io to work on that platform as well. This seems to be an interesting idea for many languages, and the benefits of compiling to a language that can run in any browser are definitely compelling. However, there seem to be a lot of problems with that approach too. Some people create a bytecode machine in JavaScript that then executes generated bytecodes. Some languages have to do lifting of functions because of bugs in several JavaScript implementations. And of course, the JavaScript language doesn’t give you good ways of generating code that is easy to debug.

The low level languages all seem to offer interesting capabilities. D is what C++ should have been. BitC allows you to write programs with strong guarantees. ooc gives many of the high level benefits to a low level language. Go makes it possible to handle concurrency in an easy and powerful way.

I’m at the end of my thoughts right now, and there is no grand conclusion to be found. Just some interesting observations. Maybe they are indicative of the future in language development, and maybe not.



The JVM Language Summit 2010


I’ve just come back from three days in Santa Clara, spending time with some of the brightest people in the Java world – the JVM language summit is truly a fantastic collection of great people. And I was there too…

The goal of the JVM language summit is to collect the people who work with languages on the JVM and have them share their projects, their experiences and their networks – and let them network with the people in charge of implementing the JVMs for different companies. This year, a lot of discussion about JSR 292 and Project Lambda was on the plate. The presence of hardware and VM people was also more pronounced. I counted principals for at least six different virtual machines in the audience or presenting (HotSpot, JRockit, J9, Azul, Maxine and Monty).

Among the experienced platform and language people there, some of the notables included Kresten Krab Thorup, Joshua Bloch, Bob Lee, Neal Gafter, John Rose, Brian Goetz, Alex Buckley, Rich Hickey, Charles Nutter, Cliff Click, Doug Lea, Per Bothner and many more. A great collection of people.

As an example of the funny happenstance that can occur in this collection of people: I was sitting rebinding my Java implementations for Mac OS X – and had removed lots of links in /usr/bin. A few minutes later the person next to me started asking some questions about my experience with Java on the Mac – and it turns out he’s the manager for the Apple JVM team. At another point Rich Hickey reported on a quite puzzling problem that causes bad semantics when iterating over data that doesn’t fit in memory – and Cliff Click immediately opened up his laptop and said “give me half an hour and I’ll see what I can do”.

Another funny anecdote was when Doug Lea pointed out that if you use fibonacci to test performance against yourself or others, it’s important that the implementations actually agree about the first values of fib. Funnily enough, I saw three different implementations of the fib base case during the summit (if n < 2 return 1, if n <= 2 return n, if n < 2 return n).
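
To make the disagreement concrete, here is a quick sketch of the three variants side by side (my illustration, not Doug’s – the class and method names are made up):

// The three base-case variants seen at the summit. Printing the first few
// values makes the disagreement visible: at n = 0 they give 1, 0, 0 and at
// n = 2 they give 2, 2, 1.
public class FibBaseCases {
    static long fib1(int n) { return n < 2 ? 1 : fib1(n - 1) + fib1(n - 2); }
    static long fib2(int n) { return n <= 2 ? n : fib2(n - 1) + fib2(n - 2); }
    static long fib3(int n) { return n < 2 ? n : fib3(n - 1) + fib3(n - 2); }

    public static void main(String[] args) {
        for (int n = 0; n <= 5; n++) {
            System.out.printf("fib(%d): %d %d %d%n", n, fib1(n), fib2(n), fib3(n));
        }
    }
}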

There were way too many interesting presentations and discussions for me to be able to talk about all of them – instead I just wanted to give some highlights.

Charles Nutter

Charles gave a quick introduction to JRuby and Mirah, and what kind of optimizations JRuby is currently doing. He also talked about how far he’s gotten in inlining invoke dynamic calls inside of JRuby (and he’s gotten very far – it’s really cool).

Fredrik Öhrström

Fredrik is the JRockit representative on JSR 292, and way too smart. He presented a solution to how you can use method handles integrated with function types to solve many of the current problems in project lambda. A very powerful and interesting presentation.

Doug Lea

Doug spent his keynote trying (quite successfully) to convince the room of the hegemony of fork-join as a good solution to concurrency problems. A very good and thought provoking keynote.

Josh Bloch

Last year at the JVM language summit, Josh talked about what he called “the Semantic Gap”. This year, after being beaten up by some linguists, he changed the name of this concept to “Performance Anxiety”. The basic idea is that in our current infrastructure we have traded performance for predictability. Two examples from his talk of when this happens in Java were pretty interesting. He had one benchmark that consistently showed about the same numbers within a single JVM run, but differed between JVM runs. There was no nondeterminism in the benchmark itself, but the benchmark times continued to oscillate between 0.7 and 0.85 depending on the JVM run. Cliff Click’s explanation for this is that it is probably the compilation planner, which is a separate thread. Depending on when that thread runs, the compilation strategy will be different, and that makes a difference in the times. And it’s really hard for the programmer to take this difference into account.

The other example is simpler (and don’t change your code because of this). In some circumstances it turns out that & is faster than && in Java, because && will short circuit, which means it will branch. The single ampersand will always execute both sides, which means the CPU can pipeline both of them to execute at the same time.
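
A tiny sketch of the difference (my example, not one from the talk):

static boolean bothNonNull(String a, String b) {
    // && short-circuits: if a == null, the right-hand side is never
    // evaluated, which means a branch in the generated code.
    return a != null && b != null;
}

static boolean bothNonNullNoBranch(String a, String b) {
    // & always evaluates both sides, so there is no branch and the CPU can
    // evaluate the two comparisons in parallel. Only safe when the right
    // side is cheap and has no side effects.
    return a != null & b != null;
}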

All the examples he showed come down to the same thing – we can’t really reason intuitively about the performance of our language constructs anymore. Our systems have become too complex in order to support better performance, and we give up predictability to get that performance. And at the end of the day it doesn’t even matter if you go down to C or assembler – you still cannot control exactly what the CPU is doing anymore.

Kresten Krab Thorup

Kresten is the CTO of Trifork, and one of the main organizers of many of my favorite conferences (like JAOO and QCon). For the last nine months he has worked on an Erlang implementation for the JVM, which he talked about. It seems to be a very good implementation, and he’s getting surprisingly good performance and context switching numbers. In fact, several of the ideas in Seph will be stolen from Erjang.

Rémi Forax

Rémi showed off his PHP.reboot project, implemented using JSR 292 and getting quite good performance. His JSR 292 backport seems to be really useful and I think I’ll use it to make sure Seph can run on pre-Java 7 machines. Good stuff.

Rich Hickey

Rich spent some time collecting comments from people in the room about what was problematic with the JVM in its current incarnation. To start us off, he showed one piece of hilarious/horrible code from the Clojure implementation. Anyone want to guess what it does?

static public Object ret1(Object ret, Object nil) {
    return ret;
}

public static int count(Object o){
    if(o instanceof Counted)
        return ((Counted) o).count();
    return countFrom(Util.ret1(o, o = null));
}

We then went on to a few other things (which you can find on the JVM Language Summit wiki). The consensus seemed to be that tail calls are really very important. Last year it wasn’t as crucial, but now that we see how powerful method handles and lambda will be, tail calls turn out to be very nice to have. Hopefully we can make that happen.

JSR 292

The JSR 292 expert group got lots of chances to work on ideas and designs for the future. Lots of interesting results came out of these discussions. Some of the more notable ones are sketches of how method handles and function types can work together, how invoke dynamic and bootstrap methods can be used to implement defender methods, and several other interesting ideas.

All in all it has been a fun few days, going far out in language and implementation geekiness. I hope to come back to this next year.



Life in the time of Java 7


I’m currently in the process of implementing Seph, and I’ve reached an inflection point. This point is the last responsible moment to choose what I will target with my language. Seph will definitely be a JVM language, but after that there is a range of options – some quite unlikely, some more likely. The valid choices are:

  • Target Java 1.4
  • Target Java 5/6
  • Target Java 7
  • Target Java 7 with extensions

Of these, the first option isn’t really interesting for Seph, so I’ll strike it out right now. The other three choices are, however, still definitely possible – and good choices. I thought I might talk a little bit about why I would choose each one of them. I haven’t made a final decision yet, so that will have to be the caveat for this post.

Before talking about the different choices, I wanted to mention a few things about Seph that matter to this decision. The first one is that I want Seph to be useful in the real world. That means it should be reasonably fast, and runnable for people without too much friction. I want the implementation to be small and clean, and hopefully as DRY as possible – if I end up with both an interpreter and a just-in-time compiler, I want to be able to share as much of these implementations as possible.

Java 5/6

The easiest way to go forward would be to only use Java 5 or 6. This would mean no extra nice features, but it would also mean the barrier to entry would be very low. It would make development on Seph much easier and would in general make everything simpler for everyone. The problem with it would mainly be implementation complexity and speed, which would both suffer compared to any of the Java 7 variants.

Java 7

There are many good reasons to go with Java 7, but there are also some horrible consequences of doing this. For Seph, the things that matter in Java 7 are method handles, invoke dynamic and defender methods. Other things would be nice, but those three are the killer features for Seph. Method handles make it possible to write much more succinct code, to avoid generating lots of extra classes for each built-in method, and many other things. It also becomes possible to refer to compiled code using method handles, so the connection between the JIT and the interpreter would be much nicer to represent.
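
As a rough illustration of the succinctness argument, a built-in can be represented as a looked-up method handle instead of a generated adapter class. This is just a sketch against the java.lang.invoke API, not Seph’s actual code:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class BuiltinAsHandle {
    public static void main(String[] args) throws Throwable {
        // Look up String.length() as a method handle instead of wrapping it
        // in a hand-written or generated adapter class.
        MethodHandle length = MethodHandles.lookup()
            .findVirtual(String.class, "length", MethodType.methodType(int.class));
        // The handle can be stored in a table of built-ins and invoked directly.
        int result = (int) length.invokeExact("seph");
        System.out.println(result); // prints 4
    }
}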

Invoke dynamic is quite obvious – it would allow me to do much nicer compilation to bytecode, and much faster. I could still build the same thing myself, but at much greater cost, and it would also mean inlining wouldn’t be as easy to get.
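
Very roughly, each invokedynamic call site in generated bytecode points at a bootstrap method that decides, on first invocation, what the call site should do. A minimal sketch (not Seph’s dispatch logic – the class name and the binding choice are made up for illustration):

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class Bootstrap {
    // Called by the JVM the first time a given invokedynamic instruction runs.
    public static CallSite bootstrap(MethodHandles.Lookup lookup, String name, MethodType type)
            throws ReflectiveOperationException {
        // A real language runtime would pick a target based on its own dispatch
        // rules; here every call site is simply bound to a static method with
        // the same name, purely for illustration.
        return new ConstantCallSite(lookup.findStatic(Bootstrap.class, name, type));
    }
}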

Finally, defender methods are a feature of the new lambda proposal that allows you to add new methods to interfaces without breaking backwards compatibility. The way this works is that when you add a new method to an interface, you can specify a static method that should be called when that interface method is invoked and there is no other implementation on the concrete classes for a specific object. But the interesting side effect of this feature is that you can also use it to specify default implementations for the core language methods without depending on a shared base class. This will make the implementation much smaller and more flexible, and might also be useful to specify required and optional methods in an API.
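
A hedged sketch of that “core methods without a shared base class” idea. The defender-method proposal at the time expressed the fallback through a separate static method; the sketch below uses the default-method syntax that eventually shipped in Java 8, and SephObject/isActivatable are made-up names:

interface SephObject {
    Object get(String name);

    // A defender/default method: concrete implementations only have to
    // provide get(); everything else gets a shared fallback without any
    // common base class.
    default boolean isActivatable() {
        return get("activate") != null;
    }
}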

The main problem with Java 7 is that it doesn’t exist yet, and the time schedule is uncertain. It is not entirely certain exactly what the design of the things will look like either – so it’s definitely a moving target. Finally, it will make it very hard for people to help out on the project, and also it won’t make Seph a possible language for people to use until they upgrade to Java 7.

Java 7 with extensions

It turns out that the interesting features coming in Java 7 are just the tip of the iceberg. There are many other proposed features, with partial implementations in the DaVinci project (MLVM). These features aren’t actually complete, but one way of forcing them to become more complete is to actually use them for something real and give lots of feedback on the features. Some of the more interesting ones:

Interface injection

This feature will allow you to say after the fact that a specific class implements an interface, and also specify implementations for the methods on that interface. This is very powerful and would be extremely helpful in certain parts of the language implementation – especially when doing integration with Java. The patch is currently not very complete, though.

Tail calls

Allowing the JVM to perform proper tail calls would make it much easier to implement many recursive functional algorithms. Since Seph will have proper tail calls in the language, I will have to implement them myself if the JVM doesn’t do it, which means Seph will be slower because of it. The patch seems to be quite good and possible to merge and harden into the JDK at some point. Of all the things on this list, this seems to be one of the things we can actually envision being added in the Java 7 or Java 8 time frame.

Coroutines/continuations

Both coroutines and continuations seem to be possible to do in a good way, at least partially. Coroutines might be interesting for Seph as an alternative to Kilim, but right now the support seems to be a bit unstable. Continuations would allow me to expose continuations as first class citizens, which is never bad – but it wouldn’t give me much more than that.

Hotswapping

Hotswapping of code would make it possible to do aggressive JITting and then back out of it when guards fail and so on. This is less interesting when we have invoke dynamic, but will give some more flexibility in terms of code generation.

Fixnums, tuples, value types

We all want ways of making numbers faster – but these features might also make it possible to efficiently represent simple composite data structures, and also things like multiple return values. These are fairly simple features, but have no real patch right now (I think).

Light weight code loading (anonymous classes)

It is horrible to load bytecode at runtime in Java at this point. The reason is that to make sure your loaded code gets garbage collected, you have to load each chunk of code in a new class in a new classloader. This becomes very expensive very fast, and also endangers permgen. Anonymous classes make this go away, since they don’t have names. This means you don’t actually have to keep a reference to older classes, since there is no way to get to them again if you lose the reference to them. This is a good thing, and makes it possible not to generate a class loader every time you load new code. The state of this seems to be quite stable, but at this point it is JVM dependent.
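
For contrast, this is roughly the dance you have to do today – one throwaway classloader per generated chunk of code, just so the chunk can be collected independently (a sketch; the names are made up):

// A throwaway loader: one instance per generated chunk of code, so that
// dropping the last reference to the generated class also drops the loader
// and lets the bytecode be garbage collected.
class OneShotLoader extends ClassLoader {
    OneShotLoader(ClassLoader parent) {
        super(parent);
    }

    Class<?> define(String name, byte[] bytecode) {
        return defineClass(name, bytecode, 0, bytecode.length);
    }
}

// Usage: new OneShotLoader(parent).define("seph.generated.Code42", bytes)
// Anonymous classes remove the need for this per-chunk loader entirely.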

The price

Of course, all of these lovely features come with a price. Two prices, in fact. The first price is that all the above features are incomplete, ranging from working patches to proofs of concept or sketches of ideas. That means the ground will change under any language using them – which introduces hard version dependencies and complicates building. The other price is that none of these features are part of anything that has been released, and there are no guarantees that they will ever be merged into Java at any point. So the only viable way of distributing Seph would be to distribute standard build files with a patched OpenJDK so that anyone can download and use that specific JDK. But that limits interoperability and causes lots of other problems.

Somewhere in between

My current thinking is that all of the above choices are bad. For Seph I want something in between, and my current best approach looks like this. You will need a new build of MLVM with invoke dynamic and method handles to develop and compile Seph. I will utilize invoke dynamic and method handles in the implementation, and allow people to use Rémi Forax’ JSR 292 backport to run it on Java 5 and 6. When Java 7 finally arrives, Seph will be more or less ready for it – and Seph can get some of the performance and maintainability benefits of using JSR 292 immediately. At this point I can’t actually use defender methods, but if anyone is clever enough to figure out a backport that allows defender methods to work on Java 5 or 6, I would definitely use them all over the place.

This doesn’t actually preclude the possibility of creating alternative research versions of Seph that use some of the other MLVM patches. Charles Nutter has shown how much you can do by using flags to add features that are turned off by default. So Seph could definitely grow the above features, but for now I won’t make the core of the language depend on them.



Preannouncing Seph


I’ve been dropping a few hints and mentions the last few weeks, and I thought it was about time that I took some time to preannounce a new project I’m working on. It’s going to be much easier writing my next few blog posts if people already know about the project, and my reasons for keeping quiet about it have mostly disappeared. It’s also a moot point since I talked about it at the Emerging Languages camp last week, and the video will be up fairly soon. And I already put the slides online for it, so some things you might have already seen.

So without further ado, the big announcement is that I’m working on a new language called Seph. Big whoop.

Why?

I already have Ioke and JRuby to care for, so it’s a very valid question to ask why I would want to take on another language project – outside my day job of course. The answer is a bit complicated. I always knew and communicated clearly that Ioke was an experiment in all senses of the word. This means my hope was that some of the quirky features of Ioke would influence other languages. But the other side of it is that if Ioke seems good enough as an idea, there might be value in expanding and refining the concept to make something that can be used in the real world. And that is what Seph is really about. That blog post I wrote a few weeks ago with the Ioke retrospective – that was really a partial design document for Seph.

So the purpose of Seph is to take Ioke into the real world while retaining enough of what made Ioke a very nice language to work with. Of course, being the person I am, I can’t actually avoid doing some new experimentation in Seph, but they will be mostly a bit safer than the ones in Ioke, and some of the craziest Ioke features have been scaled back a bit.

Some features

So what’s the difference? Seph will still be prototype based object oriented, in the same way as Ioke. It will definitely consider the JVM its home. It will be homoiconic, and allow AST manipulation as a first class concept – including working with message chains as a way of replacing blocks. It will still have a numerical tower. It will use almost exactly the same syntax as Ioke. It will still allow you to customize operators and precedence rules.

The big difference – the one that basically makes almost all other design changes design themselves – is small but very important: objects are immutable. The only way you can create new objects is by creating a new object that specifies the difference. This can be done either by creating a new child of the existing object, or by creating a new sibling with the specified attributes changed. In most cases the difference between the strategies isn’t actually visible, so it might end up being an implementation detail.
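
A rough Java analogy for the “new sibling with the specified attributes changed” strategy (the class is made up, and Seph’s real object model is prototype based rather than class based):

// Immutable: "changing" the name gives you a new object that shares
// everything else with the original, which stays untouched.
final class Person {
    final String name;
    final int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    Person withName(String newName) { return new Person(newName, age); }
    Person withAge(int newAge)      { return new Person(name, newAge); }
}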

Now once you have immutable objects but still focus on polymorphic dispatch, that changes quite a lot of things. It changes the core data structures, it changes the way macros work, it changes the flow of data in general. It also changes what kinds of optimizations will be possible.

Another side effect of immutability is that it becomes much more important to have a good module story. Seph will have first class modules that still end up being simple Seph objects at the same time. It’s really a quite beautiful and simple scheme, and it makes total sense.

If you’re creating a new object oriented language, it turns out that proper tail calls are a good idea if you can do them (refer to Steele for more arguments). Seph will include proper TCO for all Seph code and all participating Java code – which means you’ll only really grow the stack when crossing Java boundaries. This will currently be done with trampolining, but I deem the cost worth the benefit of a tail recursive programming style.
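
A minimal sketch of the trampolining idea, assuming nothing about Seph’s actual implementation: instead of performing a tail call directly, the callee returns a thunk describing the next step, and a small driver loop bounces through the thunks so the Java stack never grows:

// A tail call either carries a finished result or a thunk for the next step.
abstract class Trampoline<T> {
    abstract boolean isDone();
    abstract T result();
    abstract Trampoline<T> next();

    // The driver: bounce until a value is produced; the stack stays flat.
    T run() {
        Trampoline<T> current = this;
        while (!current.isDone()) {
            current = current.next();
        }
        return current.result();
    }

    static <T> Trampoline<T> finished(final T value) {
        return new Trampoline<T>() {
            boolean isDone() { return true; }
            T result() { return value; }
            Trampoline<T> next() { throw new IllegalStateException("done"); }
        };
    }
}

// A tail-recursive countdown expressed as bounces instead of real calls;
// Countdown.count(1000000).run() completes without overflowing the stack.
class Countdown {
    static Trampoline<Integer> count(final int n) {
        if (n == 0) return Trampoline.finished(0);
        return new Trampoline<Integer>() {
            boolean isDone() { return false; }
            Integer result() { throw new IllegalStateException("not done"); }
            Trampoline<Integer> next() { return count(n - 1); }
        };
    }
}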

I mentioned above that objects are immutable. However, local variables will be mutable. It will also be possible to create lexical closures. I’m still undecided whether it’s a good idea to leave a big mutable hole in the type system, or whether I should make it impossible for lexical closures to mutate their captured environment. Time will tell what I decide.

Stealing is good

Seph believes in reusing concepts other people have already done a great job with. As such, many pieces of the language implementation will be stolen from other places.

Just like in Ioke, the core numbers will come from gnu.math. This library has served me well, and I’ll definitely continue to use it. The big difference compared to Ioke is that the gnu.math values will be first class Seph objects, and won’t have to be wrapped. Seph will also have real floats instead of bigdecimals. This is a concession to reality (which I don’t much like, btw).

Seph will incorporate Erlang style light weight threads with an implementation based on Kilim (just like Erjang).

As mentioned above, the core data structures will have to change. And the direction of change will be towards Clojure. Specifically, Seph will steal/has stolen Clojure’s persistent data structures, all the concurrency primitives and the STM. I don’t see any reason not to incorporate fantastic prior art into Seph.

As mentioned above, the module system is also not new – it’s in fact heavily inspired by Newspeak. Having no globals forces this kind of thinking, but I can’t say I would have been clever enough to think of it without Gilad’s writings.

Basically everything else is copied from or inspired by Ioke.

Isn’t mutability the essence of Ioke?

If you have worked with Ioke, or even heard me talk about it, you might have gotten the impression that mutability is one of the core tenets of Ioke. And your impression would be correct. It wasn’t until I started thinking about what a functional object hybrid version of Ioke would look like that I realized most of the things I like in Ioke could be preserved without mutability. Most of the macros, the core evaluation model and many other pieces will be extremely similar to Ioke. And this is a good thing. I think Ioke has real benefits in terms of both power and readability – something that is not easy to combine. I want Seph to bring the same advantages.

Will you abandon Ioke now?

In one word: no. Ioke is still an experiment and there are still many things that I want to explore with Ioke. Seph will not fill the same niche, it won’t be possible for me to do the same experimentation, and fundamentally they are still quite different languages. In fact, you should expect an Ioke release hopefully within a few weeks.

So will it be useful?

Yes. That’s the whole goal. Seph will have an explicit focus on two areas that Ioke totally ignored. These areas are concurrency and performance. As seen from the features above, Seph will include several powerful concurrency features. And from a performance standpoint, Ioke was a tarpit – even if you wanted to make it run faster, there wasn’t really anything to get a handle on. Seph will be much easier to optimize, it’s got a structure that lends itself better to compilation and I expect it to be possible to get it to real world language performance. My current idea is that I should be able to get it to JRuby performance for roughly the same operations – but that might be a bit optimistic. I think it’s in the right ballpark though. Which means you should be able to use it to implement programs that actually do useful things in the Real World ™.

Is it available?

No. At the current point, Seph is still so young I’m going through lots of rewrites. I would like the core to settle down a little bit before exposing everything. (Don’t worry, everything is done in git, and the history will be available, so anyone who wants to see all my gory mistakes will have no trouble doing that). But in a nutshell, this is why this is a preannouncement. I want to get the implementation to the stage where it has some of the interesting features I’ve been talking about before making it public and releasing a first version.

Don’t worry though, it should be public quite soon. And if I’m right and this is a great language to work in – then how big of a deal is another month of waiting?

I’m very excited about this, and I hope you will be too! This is an adventure.



Questioning the reality of generics


I’ve been meaning to write about this for a while, since I keep saying this and people keep getting surprised. Now maybe I’m totally wrong here, and if that’s the case it would be nice to hear some good arguments for that. Here’s my current point of view on the subject anyway.

A specter is haunting the Java community – the specter of generics.

Java introduced a feature called generics in Java 5 (this feature is generally known under the name of parametric polymorphism in the literature). Before Java 5 it wasn’t possible to create a reusable collection that would ensure, at compile time, the type safety of what you put into that collection. You could create a collection of, for example, Strings and have that working correctly, but if you wanted to have a collection of anything, as long as that anything was the same type, you were restricted to doing runtime checks, or just having good tests.

Java 5 made it possible to add type parameters to any other type, which means you could create more specific collections. There are still problems with these – they interact badly with native arrays for example, and wildcards (Java’s way of implementing co- and contravariance) have ended up being very hard for Java developers to use correctly.

Java and C# both added generic types at roughly the same time. The C# version of generics differed in a few crucial ways, though. The most important difference in implementation is that C# generics are reified, while Java generics use type erasure. And this is really the gist of this blog post. Because over and over I hear people lament the lack of reified generics in Java, citing how good it is that C# and the CLR have this feature. But is that really the case? Are reified generics a good thing? Of course, that always depends on who is asking the question. Reified generics might well be good for one person but not another. Here you will hear my view.

Reified? Huh?

So what does “reified generics” mean, anyway? It is probably easiest to explain by comparison with the Java implementation, which uses type erasure. Slightly simplified: in Java, generics don’t exist at runtime. They are purely a fiction that the compiler uses to handle type checking and make sure you don’t do anything bad with your collection. After the generics have been type checked, they are used to generate casts and type checks in the code using generics, some metadata is inserted into the class file format, and then the generic information is thrown away.

In contrast, on the CLR, generic classes exist as specific versions of their class. The same class with different generic type arguments really becomes different classes. There are no casts happening at the implementation level, and the CLR will as a result generate more specific code for the generic code. Reflection and dynamic type checks are also possible on the CLR. Having reified generics basically means that they exist at runtime, and that the virtual machine knows about them and handles them correctly.
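
The erasure side is easy to demonstrate directly from Java (a tiny example of mine, not from the discussion above):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();
        // With erasure there is only one runtime class, so this prints true.
        // On the CLR the two would be genuinely different types at runtime.
        System.out.println(strings.getClass() == numbers.getClass());
    }
}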

Multi-language virtual machines

Over the last twenty years something interesting has happened. Both our hardware and our software have gotten mature enough that a new generation of virtual machines has entered the market. Traditionally, virtual machines were made for specific languages, such as Pascal, Lisp and Smalltalk, and possibly excepting SECD and the Warren machine, there haven’t really been any virtual machines optimized for running more than one language well. The JVM didn’t start that way either, but it turned out to be better suited for it than expected, and there are lots of efforts to make it an even better platform. The CLR, Parrot, LLVM and Rubinius are other examples of things that seem to be becoming environments rather than just implementation strategies for languages.

This is very exciting, and I think it’s a really good thing. We are solving very complex problems where the component problems are best solved in different ways. It seems like a weird assumption that one programming language is the best way of solving all problems. But there is also a cost associated with using more than one language. So having virtual machines act as platforms, where a shared chunk of libraries is available and the cost of implementation is low, makes a lot of sense.

In summary, I feel that the JVM was the first step towards a real viable multi-language virtual machine, and we are currently in the middle of the evolution towards that point.

Solving the problems

So why not add reified generics to the JVM at this point? It could definitely be done, and using an approach similar to the CLR, where libraries are divided into pre- and post-reified, makes the path quite simple from an implementation standpoint. On the user side, there would be a new proliferation of libraries to learn – but maybe that’s a good thing. There is a lot of cruft in the Java standard libraries that could be cleaned up. There are some sticky details, like how to handle the APIs that were designed for erased generics, but those problems could definitely be solved. It would also solve some other problems, such as making it possible for Scala to pattern match on type parameters, and solving part of the problem with abstracting over primitive types. And it’s absolutely possible to do. It would probably make the Java language into a better language.

But is it the only solution? At this point, making this kind of change would complicate the APIs to a large degree. The reflection libraries would have to be completely redesigned (but still kept around for backwards compatibility). The most probable result would be a parallel hierarchy of classes and interfaces, just like in the CLR.

Reified generics are generally proposed in discussions about three different things: first, performance; second, making some features easier for Scala and other statically typed languages on the JVM; and third, handling primitives and primitive arrays a bit better. Of these, the first one is the least common, and the least interesting by far. JVM performance is already nothing short of amazing. The second point I’ll come back to in the last section. The third point is the most interesting, since there are other solutions here, including unifying primitives with objects inside the JVM by creating value types. This would solve many other problems for language implementors on the JVM, and enable lots of interesting features.

The short stick

I believe in a multi language future, and I believe that the JVM will be a core part of that future. Interoperability is just too expensive over OS boundaries – you want to be on the same platform if possible. But for the JVM to be a good environment for more than one language, it’s really important that decisions are made with that in mind. The last few years of fantastic progress from languages like Rhino, Jython, JRuby, Groovy, Scala, Fantom and Clojure have shown that it’s not only possible, but beneficial for everyone involved to focus on JVM languages. JSR 223, 292 and several others also mean the JVM is more and more being viewed as a platform. This is good.

Generics is a complicated language feature. It becomes even more complicated when added to an existing language that already has subtyping. These two features don’t play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language – and that language uses the same generics. But generics isn’t done. It isn’t completely understood how to handle generics correctly, and new breakthroughs are still happening (Scala is a good example of this). At this point, generics can’t be considered “done right”. There isn’t only one type of generics – they vary in implementation strategies, features and corner cases.

What this all means is that if you want to add reified generics to the JVM, you should be very certain that the implementation can encompass both all static languages that want to innovate in their own version of generics, and all dynamic languages that want to create a good implementation and a nice interfacing facility with Java libraries. Because if you add reified generics that don’t fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi language VM.

I’m increasingly coming to the conclusion that multi language VMs benefit from being as dynamic as possible. Runtime properties can be extracted to get performance, while static properties can be used to prove interesting things about the static pieces of the language.

Just let generics be a compile time feature. If you don’t, there are two alternatives – either you are an egoist who only cares about the needs of your own language, or you think you have a generic type system that can express all other generic type systems. I know which one I think is more likely.



Emerging Languages camp – day 2


The second day of Emerging Languages camp was at least as good as the first day. We also managed to squeeze in four more talks, since everybody agreed that the afternoon break during day one was too long and ineffective. At the end of the day my brain was so thoroughly melted that I didn’t even contemplate finishing these comments. But after some sleep I think I have a fresh perspective.

The sessions were a bit more varied compared to the first day – both in quality and in how far out the ideas were. Because my interest in the various subjects varies, there might be some inconsistency in the length of reporting on the different languages.

Anyway, here goes:

Kodu

Kodu is a language from Microsoft for creating games. It’s specifically aimed at kids, to see if they can learn programming in a better way using something like this. The language uses icons and a backend text based syntax to make it easy for someone to program using structure instead of syntax. You get a basic 3D environment where you can modify and edit things in various ways. Another important part of the design is to get the game to quickly do something, so you get immediate feedback. Everything added to the language is user tested before it goes in – including gender testing. They thought long and hard about whether they should add conjunctions or not – but ended up deciding to do it. You work with an Xbox when programming and running the game. It’s also free. Overall, Kodu looks like a really nice and innovative initiative, probably going back as far as Logo in terms of inspiration. Very nice.

Clojure

Rich didn’t actually talk much about Clojure in general, but decided to focus on a specific problem he is working on solving. His talk title doesn’t really say much about this, though: “Persistent, Transience, Persistents, Transients and Pods – invasion of the value snatchers”. It was a great talk with lots of information coming extremely fast. I found myself focusing more during this talk than during any other during the conference, just to follow all threads of thought.

Rich spent some time giving an introduction to persistent data structures so everyone knew how Clojure works with them – including how they are turned into transients – since that’s where the new feature comes in.

An important part of persistent data structures is that you preserve the performance guarantees of a mutable equivalent of that data structure. Clojure uses bit-partitioned hash tries, originally described by Phil Bagwell. This allows Clojure to have structural sharing, which means it’s safe to “update” something – the old version is retained. It uses path copying to make it possible to update with a low cost. There is definitely a cost to doing it, but it works well in a concurrent environment where other solutions would be more costly to get correct results.
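
The simplest possible illustration of structural sharing is a cons list – far simpler than Clojure’s bit-partitioned tries, but the same principle (my sketch, not Clojure code):

// "Adding" to the front returns a new list whose tail is the old list.
// The old list is untouched, so both versions can be read safely from
// any number of threads without locking.
final class PersistentStack<T> {
    final T head;
    final PersistentStack<T> tail;

    PersistentStack(T head, PersistentStack<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    PersistentStack<T> push(T value) {
        return new PersistentStack<T>(value, this);
    }
}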

Clojure has an epochal time model that allows Clojure to view things as values in between being “modified”. State is one step higher than that, so you can see mutable change as the creation of a new value from an existing value that is then put into the same reference the original value existed in. Clojure has four different types of references with various semantics for coordination.

To get good performance, some Clojure functions will actually mutate state that is invisible to anyone else to efficiently create new data structures. To get performance that is acceptable to Rich, Clojure’s data structures are not implemented using purely immutable data structures (Okasaki style) on the Java side. Persistent data structures also don’t scale to larger changes – specifically multiple collections, several steps, or other situations where you want to have functional end points but efficient mutation in between.

Transients are a feature that allows Clojure to give birth to a data structure incrementally. Clojure’s transients will accumulate changes in a safe way and can then finally give you a persistent value. Only vectors and hash-maps are currently supported. Lists are not, since there is no benefit in doing that. Transients also enforce thread isolation. Composite operations are OK, and so is multi-collection work, and you don’t need any locks for this. This is already in Clojure, but they might be doing too much: they both handle editing and enforce the constraints on it, such as single-threadedness. Transients can sometimes return new values too, even on mutating operations.

Pods allow you to split the policy out from transients. Values go in, values go out. The process goes through the pod. Different policies are possible, such as single-threadedness or mutexes. A pod knows how to make a transient version of a value. Functions that modify a pod have to return a new thing (or the same thing). Dereferencing the pod gives you a new value from the pod at that point. This gives you the possibility to apply recipes to ordinary Java objects too. A good example is String vs StringBuilder. Pods can ensure lock acquisition order, but not lock composition – although pods can at least detect it. There are still a few details in the design that Rich hasn’t decided on yet.

All in all, a very interesting talk, about the kind of concurrency problems you wish your language had.

E/Caja

Mark Miller recapped the interaction models of the Web, starting with static frames and going to the current mess of JavaScript fragments going back and forth, using JSONP, AJAX and Comet. He also talked a bit about the adoption curves of languages and why some languages get adopted, positing that a mess of features may be easier to get adopted. This means many languages succeed by adding complexity.

E is an experiment in expressing actors in a persistent way. He used some of the lessons from E combined with AJAX/JavaScript to create Caja, a secure language. Some of the features from Caja were then used to start work on ECMAScript 5. They are currently working on a standard for SES, secure JavaScript. Dr. SES is an extension of this, which stands for Distributed, Resilient, Secure JavaScript. Object capabilities involve two additions to a regular memory safety and encapsulation model: effects only on held references, and no powerful references by default. This means a reference graph becomes an access graph. Caja can sanitize JavaScript to prevent malicious behavior, but preserve the semantic meaning of the program outside of that.

He showed some examples of how Caja can be used to sanitize regular JavaScript and have it running securely. Very interesting stuff, although the generated code didn’t look as amenable to debugging as something like CoffeeScript.

Fancy

Fancy is a language that tries to be friendly to newcomers, with good documentation, a clean implementation and so on. It’s inspired by several languages: Smalltalk (pure message passing, everything’s an object, dynamic, class based OO, metaprogramming, reflective), Ruby (file based, embraces UNIX, literal syntax, class definition is executable script, fixed some inconsistencies with procs/lambdas/blocks) and Erlang (message passing concurrency, light weight processes – not implemented yet). Fancy takes the opinion that first class is good: classes, methods, documentation and tests should all be first class. FancySpec is a simple version of RSpec. Tests for all built in classes and methods are there. These tests are not dependent on the implementation. There are plans to port Fancy to a VM. Methods marked with NATIVE will have an equivalent method in Fancy and in the interpreter, to improve performance.

It’s got dynamic scoping and method caching. Logic can be defined based on the sender of a message, which makes it possible to do things like private and public.

Exceptions are taken directly from the implementation language (i.e. C++).

The language seems to be pretty similar to Ruby in semantics, but with a more Smalltalk-like syntax.

BitC

BitC is geared towards critical systems code. Resource constrained – CPU, memory, those kinds of areas. One cache miss sometimes counts. Abstraction is fine, but only if it’s the right one. Variance constrained too. Predictability is very important, so something like a JIT can be a problem. Statically exception free. “Zero” runtime footprint. Non-actuarial risk model. Mean time between failures in decades. The problem is to establish confidence. After other failures in this area, the conclusion has been that BitC shouldn’t be a prover.

The language is an imperative functional language with an HM-style parametric type system. You have explicit control of representation. State is handled in a first class manner. Inference actually infers mutability in lots of cases. Dependent range checking isn’t there yet, but is coming soon. “The power of ML/Haskell”, “The low-level expressiveness of C”, “Near-zero innovation”.

Trylon

Trylon is a small language, indentation based, that compiles through C. It’s object oriented, with prototypes under the class based system. According to the author, there is nothing really new in the language – he just did it for his own sake. There are no users so far except for the author.

ooc

The language tries to be a high level low level language. It mixes paradigms quite substantially and has some nice features. It’s class based, and mostly statically typed.

Coherence/Subtext

Jonathan Edwards started this presentation by showing a small example where the ordering of statements in an implementation is dependent on what representation you use for data, and showed that it’s impossible to handle this case in a general way. From that point he claimed that there is a fundamental tension between styles in a language, and that you can only get two of these three: declarative programming, mutable state and data structures. I’m not sure if I agree with his conclusions, and the initial example didn’t feel like anything I’ve ever had trouble with.

Based on the conclusion that you can only have two of the three, he went on to claim that the thing that causes all these problems is aliasing. So in order to avoid aliasing, his system uses objects whose instances are always physically contained within another object. This means you can refer to these objects without having actual pointers – and thus cannot do aliasing either. From that point on, his system allows declarative programming of the flow, where updates never oscillate back out to create more updates.

Lots of interesting ideas in this talk, but I’m not sure I agree with either the premise or the conclusions.

Finch

Finch is a small programming language, bytecode compiled, with fibers, blocks, TCO, objects, prototypes, a REPL and Smalltalk style message selectors. In the future, the author aims to add metaprogramming, some self-hosting, continuations and concurrency features.

Circa

Circa is a small programming language that allows you to get immediate feedback. It’s aimed at game programming, and achieves this by running the script many times (one time for every frame as far as I understood it). You then specify what state you have in your program, and this state will be automatically persisted between invocations, so that a specific invocation of a specific function will always get access to the same state it started out with. This was a very interesting but weird model. It seems to work really well for smaller prototyping of games and graphics but I’m wondering what can be done to expand it.

Wheeler

Wheeler is a proof of concept presented by Matt Youell. It’s pretty hard to describe, and I’m not even sure if there’s a computational model there yet. The project is apparently just a few weeks old, and the ideas are still in progress. The basic tenets of the language seem to be that you work with categories of things and establish transitions between them. A transition pattern matches the things it looks for, which means that things like syntax and ordering don’t mean as much. The author calls it mutual dispatch, because it uses the types/categories of everything involved to establish which transitions to use. At this point there is no time model, so everything happens in one sweep, but once a time model gets in there it might be very interesting. To me it looked a bit like a cross between neural networks and cellular automata.

Interval arithmetic

Alan (Mr Frink) gave a talk about the problems with floating point numbers, and one way of handling that. Floating point numbers cause problems by making it possible to introduce small errors.

Intervals are a new kind of number. An interval represents a specific number by giving two end points and saying the real number is somewhere within that interval. You can see it in two different ways: “the right value’s in there somewhere but I’m not sure where” or “the variable takes on ALL values in the interval simultaneously”.

This was a very interesting discussion, and you can find out more about it from Frink’s web page (just search for Frink and interval arithmetic). At the end of the presentation, Alan gave this challenge to other languages:

for:

x = 77617
y = 33096

calculate:

((333 + 3/4) - x^2) y^6 + x^2 (11x^2 y^2 - 121 y^4 - 2) + (5 + 1/2) y^8 + x/(2y)
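
For reference, here is a naive double-precision version of the challenge in Java. The printed result is completely wrong – the true value, known from the interval arithmetic literature, is roughly -0.827 – and exactly which wrong number you get depends on how the expression is associated:

public class NaiveDoubles {
    public static void main(String[] args) {
        double x = 77617.0;
        double y = 33096.0;
        // Naive double-precision evaluation of the challenge expression.
        double result = (333.75 - x * x) * Math.pow(y, 6)
            + x * x * (11 * x * x * y * y - 121 * Math.pow(y, 4) - 2)
            + 5.5 * Math.pow(y, 8)
            + x / (2 * y);
        // Prints a number that is wildly off, thanks to catastrophic
        // cancellation between huge intermediate terms; interval arithmetic
        // (or exact rationals) reveals the true value of roughly -0.827.
        System.out.println(result);
    }
}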

Ioke handles it correctly, both using ratios and using decimals.

Stratified Javascript

Stratified JavaScript adds some concurrency features to JavaScript based on Strata. It looked like a very principled approach to giving JS concurrency primitives that are easy to use at the same time as they are very powerful. The presenter showed several examples of communication, blocking and coordination working really well.

Factor

Factor is a very high level stack based language created by Slava Pestov. He went through some of the things that Factor does well and other dynamic programming languages handle less well, like reloading code from the REPL. There were lots of other small tidbits of how powerful Factor is and how expressive a stack language can be. At the end of the day I still think it’s interesting how much Ioke code sometimes resembles Factor, even though the underlying semantics are vastly different.

D

Walter Bright showed D, his systems level programming language. He focused on showing that it can do several different paradigms in the same language – all of it looked very, very clean, but from these examples I got the impression that D is an extremely big language. To summarize, D can do inline assembler, class based OO, generative programming, RAII, procedural, functional and concurrent programming (and I probably missed a few). I liked the approach to immutability, but I must admit I’m scared of the sheer size of the language. It’s impressive how such a big language can get such good compile times.

AmbientTalk

AmbientTalk is a language built on top of Java that puts communication at the center. It is supposed to be used in areas where you have bad network connectivity and want to communicate between different devices in a flexible way. Things like network outages aren’t exceptions, because they will happen all the time in the environments AmbientTalk is built for. The language embraces futures to a large degree and also takes a principled approach to how Java integration works – so that if you send an AmbientTalk object into Java, it will work as if you had sent it to a remote device, and the only way Java can interact with that object is by sending messages to it. Much interesting stuff in this talk.

And that was it. I can obviously not capture all the interesting hallway and pub conversations that were had, but hopefully this summary will be helpful until the videos come along in two to four weeks. I would call this conference a total success and I really look forward to next year.