When programming, we in some sense try to model the world, or model our understanding of an aspect of the world. Or maybe even model a piece of the world specifically in a way that makes it easier for us to work with. But the world isn’t everything. We are also modeling ideas and abstractions that have no physical counterpart. We aren’t just modeling physical objects, “things”; we also have to model processes, relations, workflows and interactions.
How do we model these things? In most cases we use either object orientation or functional programming. Structured programming, without explicit object orientation support in the language, can still be divided into actions that operate on data, in something that looks a bit like object orientation. I won’t touch on the functional approach here, since it’s too tangential to what I want to discuss.
When I talk about object oriented languages here, many will probably think mostly of languages like Java, C#, C++ or maybe Python or Ruby. These are all quite firmly in the category of class based object oriented languages. Is that all there is to object orientation? Not really. In fact, there are several ways to look at OO, and depending on which part you emphasize, you will get object systems that vary quite drastically.
For example, Scheme was originally an attempt to understand the Actor model, where the way actors work will look like OO (at least with tinted glasses). Common Lisp has CLOS, where generic methods aren’t associated with a specific class, but instead can be specialized on zero or more arguments. (Of course, you can mimic regular class based OO in CLOS, by only ever specializing on the first argument, but that diminishes the power a lot).
And then there is Self. Self was a programming language and environment based on Smalltalk that introduced prototype based object orientation. This model basically removes the distinction between classes and other objects. Even in really open systems, where classes are instances just like other objects, classes are still the only things you can make instances of in a class based OO language. In Self you instead clone an object to create a new one. Both values and methods are slots, and all slots of an object are copied when it is cloned.
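The Self model can be sketched in JavaScript (itself a Self descendant), assuming a hypothetical `selfClone` helper that does what Self’s original copy-based cloning did – copy every slot, data and methods alike:

```javascript
// Hypothetical helper: a Self-style clone copies every slot,
// values and methods alike, into a brand new object.
function selfClone(obj) {
  const copy = {};
  for (const slot of Object.keys(obj)) {
    copy[slot] = obj[slot]; // methods are just slots holding functions
  }
  return copy;
}

const point = {
  x: 1,
  y: 2,
  sum() { return this.x + this.y; } // a method slot
};

const other = selfClone(point);
other.x = 10; // changing a copied slot leaves the original alone
```

After the clone, the two objects share nothing: `other.x` becomes 10 while `point.x` stays 1, because every slot was copied rather than shared.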
Self influenced several other languages, most notably JavaScript, NewtonScript, Io and REBOL. All of these have their own special take on prototyping, but the general distinction between prototype based and class based languages is easy to make.
I’m currently working on Ioke, and just like Io, Ioke will be a prototype based language. It will use differential delegation, which means that a new clone of something doesn’t get any copies of slots, but instead has a pointer back to the object it was cloned from. If you try to call a method on the object and that method isn’t found, the search will continue to the object it was created from, and so on. Since Ioke will allow you to have more than one prototype, I’ve decided to not use the word prototype, but instead call them mimics (an object mimics zero or more other objects).
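Differential delegation is roughly what JavaScript’s `Object.create` gives you (with the restriction of a single parent, where Ioke allows several mimics); this is a sketch of the lookup-chain idea, not Ioke syntax:

```javascript
// Differential delegation: the clone starts empty and only holds
// a link back to the object it was cloned from.
const account = {
  balance: 0,
  deposit(amount) { this.balance += amount; }
};

const mine = Object.create(account); // no slots copied at all
mine.deposit(100); // 'deposit' is found by walking the parent link;
                   // the assignment creates a local 'balance' slot on 'mine'
```

The behavior is shared through the lookup chain rather than through copying: `account.balance` is untouched, while `mine` ends up with its own `balance` slot.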
We, as programmers, need to have powerful ways to model the world. But as I mentioned in the beginning, we also need to be able to model abstract concepts. I’m getting more and more convinced that prototype based OO is easier to work with and reason around than class based OO. The world around is can definitely be categorized, but the that is a side effect of the way our minds work. There is no real world equivalent of Plato’s Theory of Forms, but yet we insist in programming like it was true.
Working with ideas categorized as classes constrains us. As an example, most programs start out as some vaguely formed thoughts, and these thoughts get clearer and clearer as we continue developing the system. In most cases we end up refining the class definition in a source file and recompiling. In a prototype based system, this isn’t the only choice: you can instead create an initial set of objects and then continue to refine them with new objects. This is especially powerful when doing exploratory programming.
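As a JavaScript sketch of that exploratory style (the names here are made up for illustration): each refinement is a new object delegating to the previous one, so the earlier sketches keep working unchanged:

```javascript
// Exploratory refinement: instead of editing a class definition and
// recompiling, each refinement is a new object layered on the last one.
const draft = { greet() { return "hello"; } };

// First refinement: keep everything, override one slot.
const politer = Object.create(draft);
politer.greet = function () { return "good day"; };

// Second refinement: add behavior without disturbing earlier objects.
const named = Object.create(politer);
named.name = "Ola";
named.greetByName = function () { return this.greet() + ", " + this.name; };
```

The original `draft` still answers “hello” while the latest refinement answers “good day, Ola” – nothing had to be redefined in place.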
Is the world really class oriented? No. Is it prototype based? Not really. (Although a case could definitely be made for calling evolution, DNA and genetics prototype based.) We can see the world through class oriented glasses, or we can see it through prototype based lenses. Both choices work well for certain things, and when modeling it’s nice to have both. So the question is this – can you express the one naturally in the other?
Class based systems generally have a hard time looking like prototype based languages (although Ruby actually comes very close). Prototype based systems, on the other hand, can very naturally congeal some of the core concepts into classes, and it’s easy to work with these as if they were real classes. (Some of the proposals for EcmaScript 4 added class oriented OO on top of the prototype based system.)
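A JavaScript sketch of how class-like structures can congeal out of prototypes – `Point` here is just an ordinary object that happens to play the role of a class (the names are illustrative, not from any particular library):

```javascript
// One object plays "the class": it holds the shared behavior and a
// maker method; its clones play the role of instances.
const Point = {
  init(x, y) { this.x = x; this.y = y; return this; },
  magnitude() { return Math.sqrt(this.x * this.x + this.y * this.y); },
  make(x, y) { return Object.create(this).init(x, y); } // 'new', prototype style
};

const p = Point.make(3, 4);
// p behaves like an instance of Point, but Point is just another object
```

Nothing in the language distinguishes `Point` from `p`; the class-like role is a convention that emerged from ordinary objects and delegation.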
I choose to make Ioke prototype based, since this gives me more power and flexibility.

12 Comments
I think Io looks really neat, but exploratory programming aside, it seems to me that the big win of a class-based system is that it creates an overt structure that makes it easier:
a) for others to understand the idea and intent of a larger system, and
b) for yourself to understand how you wrote the code when you come back to it a long time later.
These are actually similar to the issues I experience with the lack of a pre-existing structured module system in Ruby.
Of course, none of this is a problem in a small scale system, or if the system is built using smaller, separately deployed, pieces.
But as the system grows, my experience is that structure becomes more and more important. I am working on a Java product with 1800+ classes. That project would be significantly smaller in Ruby, but even clocking in at one tenth of that, I would expect the Ruby version to be harder to organize. I may very well be wrong though, but in that case I’d be very interested in knowing how to overcome those problems.
P.S. Nice word play/joke with the language name, but it might be a bit easy to mistake “Ioke” with “loke”.
October 13th, 2008
Hi Ola. Interesting reading. I’d love to see some more examples (and explanations) of Ioke code, to get a flavour of what it will look like. Is the syntax going to be JavaScript-ish?
My 2c: I understand why you don’t want to use “prototype”, but “mimic” is a bad alternative, as it would more accurately be applied to the DERIVED object, not the original. Perhaps “model” would be a better word? Or just stick with “prototype” – I don’t think having multiple prototypes for the same object necessarily stretches the meaning of “prototype”.
October 14th, 2008
I can see where prototypes over classes can be an improvement, but maybe it’s just a nicer flavour of (inheritance) spaghetti? I think it’s more important to replace as much of the inheritance hierarchy as possible with delegation and composition. So I’d like to hear how Ioke supports doing that.
October 14th, 2008
Christoffer:
That is definitely a valid concern, although for several reasons I don’t think it will be a real world problem. First, just like with Lisp, code in these languages ends up much more compact. This is because there are ways of creating new abstractions that a language like Java really can’t match (you have to have classes for everything in Java, for example).
Another reason I don’t think it will be that much of a problem is that projects in these kinds of languages tend not to grow big in the same way. They stay quite small for some reason.
And third, as I also mentioned in the post, in a prototype based language you will eventually end up with class like structures for many of the core concepts. These help in building the rest of the system on top of them.
The difference I see is that you can get to that point in an evolutionary style, which means you have a better shot at actually getting the project done. Creating static hierarchies up front is almost as bad as waterfall project management – and I believe that prototype based languages help avoid this.
Regarding the name – yeah, that was actually by design. I’m fine with that duality. For all you know it might actually be called loke. =)
Mike:
You’ll see more examples in the blog, and as the language progresses there will be more and more code examples in the Git repository too. And no, the language doesn’t really look at all like JavaScript – it resembles both Self and Io quite closely. Whitespace separates message sending, and so on.
And your concern about the name “mimic” is valid, from your perspective. I haven’t shown many examples of how it’s actually used yet, but it turns out to read well in practice. At least I think so. =)
Greg:
To a degree it can turn into inheritance spaghetti, although the associated problems are different. As you say, delegation and composition are extremely important, and Ioke can easily handle both of these tasks. I’m thinking about adding some macros that make it easy to do structural composition and so on, but in the simple case an object can always delegate to any other using equivalents of method_missing. And regular message sends too, of course.
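Ioke’s actual mechanism isn’t shown here, but method_missing-style delegation can be sketched with a JavaScript `Proxy` – the `delegatingTo` helper is hypothetical, standing in for “forward any message I don’t understand to that other object”:

```javascript
// A method_missing-style forwarder: messages the wrapper doesn't
// understand itself are handed to a delegate object.
function delegatingTo(delegate, own = {}) {
  return new Proxy(own, {
    get(target, name) {
      if (name in target) return target[name]; // handled locally
      const value = delegate[name];
      // rebind methods so 'this' stays the delegate
      return typeof value === "function" ? value.bind(delegate) : value;
    }
  });
}

const logger = { log(msg) { return "LOG: " + msg; } };
const service = delegatingTo(logger, { run() { return "running"; } });
// service.run() is handled locally; service.log(...) is forwarded
```

This gives composition-style reuse without inheritance: the wrapper object and its delegate remain two separate objects, and the delegate can be swapped at runtime.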
October 14th, 2008
Responding to your post:
Dismissing classes because everything in the real world is unique is a specious argument because a map that is fully representative of the terrain is no smaller than the terrain. Information must be lost in the model, since you can’t fit whole people etc. into the machine.
Classes are useful not because the real world is classful, but because we want to create abstractions (i.e. lose information) such that we can treat certain sets of objects in a common way according to a set of criteria. It’s all very well to say that everyone is a unique individual flower, but at the end of the day, the in-machine concepts we use to model them need to fit into the Employee table if they want to get paid.
In a prototype-based language, this commonality could be modelled along the chain of prototypes, a la isPrototypeOf (or clone-chain in your mimic objects), but that typically isn’t how programmers do it. They usually end up using duck typing, probably because dynamic languages obscure the flow of types through the program, and hence it is less likely to be to the forefront of the programmer’s mind.
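In JavaScript terms, the two styles of checking commonality the comment contrasts look like this – the chain-based test versus the duck-typed test:

```javascript
// Commonality modelled along the prototype chain: shared behavior
// lives in a common ancestor.
const shape = { describe() { return "a " + this.kind; } };
const circle = Object.create(shape);
circle.kind = "circle";

// Chain-based check (the isPrototypeOf style):
const byChain = shape.isPrototypeOf(circle);
// Duck-typed check (what programmers usually reach for instead):
const byDuck = typeof circle.describe === "function";
```

Both checks succeed here, but only the first says anything about where the behavior comes from; the second only asks whether the object happens to respond to the message.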
Unfortunately, the unannotated nature of the type flow through the source hurts maintainability for many kinds of programming brains out there. So, some function f() applies some set of functions {a, b, c} to its argument; if these functions are very specific, not much information is lost, but if these functions are ambiguous – particularly if multiple dispatch is in effect – the result is a lot harder to reason about.
Consider e.g. that one does not want to invoke polymorphic dispatch sites while holding a lock, for example, due to their lack of transparency (=> deadlock).
On refining an initial set of objects, Smalltalk style: this is likely to fail to catch on, just as Smalltalk failed to. Text simply works better for a larger slice of the brain types out there.
Responding to your comment reply to Christoffer:
“Getting the project done” is a non-sequitur, and frankly, irrelevant. Most projects are already “done”. The work is in maintaining and extending them over the years, and most of the effort here is expended on understanding the original code so that it can be modified correctly.
About code being more compact: there’s only so much shrinkage common to all programs that you can apply to codebases in the millions of lines before you start hitting the irreducible complexity of the problem domain(s) you’re trying to model. The *layered* abstractions that need to be used to get multiplicative effects on the code reduction fraction will be specific to the problem domain and require quite a cognitive adjustment for new programmers.
In the showdown between very indirected code using novel domain-specific nested abstractions, and verbose, plodding straight-line code, short code isn’t necessarily the slam-dunk one might hope for. Just as most people prefer prose sentences that tell a story from start to finish, rather than the dry academic style with nested subjunctive clauses, many programmers have a hard time with dense code that needs uncompressing, and would rather follow the story linearly with a debugger, rather than weaving in and out of parameterized abstractions.
I could have written that sentence differently to make a point; but, maybe, you already see it.
October 14th, 2008
I’ve mostly used Java/C++/Obj-C and Ruby. I always felt that Objective-C typically worked quite differently when it came to hierarchy size etc. Especially in C++/Java, the solution to any problem tends to be to create a new class, but even my Ruby has quite a few classes (might be due to mind-pollution from Java though).
So what I mean is that I can see what you could mean when you say “projects in these kind of languages tend to not grow big in the same way”, even if it might not have the same reason as with Obj-C.
Still, I can’t shake the feeling that there might be a significant advantage to a language that imposes a structured way of organizing the code.
How does one keep the code easy to follow if the code ever should grow “large”? Do you feel that the “class-like structures” created from prototypes would be as easy to read as the corresponding class-based implementations?
October 14th, 2008
Christoffer – the set of messages that an object responds to *defines* its class; with dynamic languages that use duck-typing, the word can’t really have any other meaning. Thus, even in classless languages all the objects have a class, and the instances outnumber the classes. Any time you want to work with values polymorphically, you cannot escape the abstraction lens of the class.
Meanwhile, if we want to minimize the number of classes, we must turn to shape, but then unfortunately we lose our names. No longer do we ask for Age, but rather cadddr, etc.; if we have moved beyond the magic numbers of programming childhood, are we to return to ordinal nirvana?
October 14th, 2008
Barry:
OK, let’s see here. Your first paragraph puts up a straw man argument that really has nothing to do with what I said. I do not dismiss classes because of their lack of mapping to the real world. To use your map analogy, I totally get that the value of a map is that it removes information, but the question with classes vs prototypes is how you arrive at what information to take away. As you mention, abstractions are what’s interesting. Classes and prototypes are different ways of abstracting data.
I don’t understand the next paragraph where you say “They usually end up using duck typing”. Duck typing has nothing to do with classes vs prototypes. It’s a style of programming more than anything else, and it can be done in both class-based and prototype-based languages.
You say “the unannotated nature of the type flow”. I don’t see how this is different for class-based and prototype-based languages. My comparison here is between dynamic languages – they are all unannotated.
The next paragraph: “Consider e.g. that one does not want to invoke polymorphic dispatch sites while holding a lock, for example, due to their lack of transparency (=> deadlock).”
Not sure what to respond to this. Could you expand on what you mean, since written like this it doesn’t make sense at all.
There are many ideas about why Smalltalk failed. But Smalltalk is definitely workable from text.
““Getting the project done” is a non-sequitur, and frankly, irrelevant. Most projects are already “done”.”: yes, but when designing a new language, I’m probably not targeting the projects that are done, right?
With regards to shrinking of code, yes, you’re right. There are limits. But we are nowhere close to it.
You say “… require quite a cognitive adjustment for new programmers.” This is completely true. And this has been true for all significant paradigm shifts within our field. In fact, I think it’s time for a new shift.
In summary, I don’t agree with your points. At least not my understanding of your points.
October 14th, 2008
Christoffer:
Of course, there are advantages to very structured languages. There are domains and systems where extreme structure is exactly what you want. But I believe there needs to be more discussion about when these structured languages are actually necessary, and when they get in the way.
To the other question – you generally try to get your abstractions right, you work with them and form them into a form that is as obvious as possible. This is no different between different languages.
But yes, I do think that as a system grows larger, some prototypes have a tendency to take on more structure, giving structure to the system. The difference is that there will still be a large number of prototype objects floating around that are not class-like, but not exactly instance-like either. That’s where much of the power comes in.
October 14th, 2008
Barry:
The implicit understanding in this blog post should be that when I talk about classes, I talk about classes in the sense of the class abstraction found in languages like Java, C++ and Smalltalk. These are classes that are apparent in some way in the source code. Talking about another meaning of classes, that which is defined based on the messages an object can receive is not relevant.
Not sure what kind of definition you have of dynamic languages, but many dynamic languages have explicit classes (such as Smalltalk, Python and Ruby).
Structural and nominal typing doesn’t have much to do with the discussion either. And what’s the point of minimizing the amount of classes? The amount of classes in a system written in a class based OO language should be exactly the right amount of classes, neither more nor less.
October 14th, 2008
Hey Ola Bini,
I think there is a language that you will love because you are a Lisp lover. Check it out: Clojure is a Lisp-1 for the JVM. It has a lot of potential, and there is already a book for it in the Pragmatic Bookshelf. It has actor-based concurrency features borrowed from Erlang, and much more.
I’ve been doing a lot of Clojure lately and it is very impressive with Swing, Servlets etc. There is an article showing that with less than 100 lines of code you can have a complete web server in Clojure: http://robert.zubek.net/blog/2008/04/26/clojure-web-server/
The syntax is also impressive, very elegant and clean, and the import statement is very elegant. Really, check it out.
Clojure is a beauty.
October 15th, 2008
a few rambling thoughts on your interesting blog entry…
physical things are composed of atoms and other physical things. their composition often changes as a result of a process. the amount of change is often a relation between the process, change agents, and original participants. the composition of the physical thing (or participant) also acts as a constraint on interactions. all of this runs on top of a set of laws that both constrain and describe the behavior of interactions.
today’s languages fail at providing some of these modeling constructs. “a process”? there is no process construct, at least not one in a “modeling” sense. why don’t processes get their own first class citizenry offerings?
and look at the rise of dependency injection in languages – a real cry for help imo. the need for a “process” construct so that at runtime we can provide a behavior without cluttering the concepts our code is trying to model.
and then there are “properties” – often neglected in OO languages, but first class citizens in some modeling languages such as OWL. i believe properties are very important citizens; just look how they flourish in ruby in the form of modules used via mix_ins, and at higher levels as plugins in rails (acts_as_taggable).
i can’t think of anything that would be easier to model or abstract in a prototype based language over ruby, although ruby is arguably much more than a class based language.
October 15th, 2008
Reply to “Is the world Class oriented?”