The Secret Diary of Han, Aged 0x29

Archive for December 2007

Building readline 5.2 on OS X Leopard

Readline 5.2 does not build properly on OS X Leopard. It fails with a "-compatibility_version only allowed with -dynamiclib" error. I ran into this problem when trying to build Ruby using GNU readline instead of the default editline. The problem is easily fixed, though: readline explicitly checks for the Darwin version, but does not include 9 (Leopard) in this check. Patch support/shobj-conf as follows:

 --- support/shobj-conf	2007-12-26 18:30:46.000000000 +0900
 +++ support/shobj-conf	2007-12-26 18:30:39.000000000 +0900
 @@ -142,7 +142,7 @@
  # Darwin/MacOS X
 @@ -171,7 +171,7 @@
  case "${host_os}" in
 -	darwin[78]*)	SHOBJ_LDFLAGS=''
 +	darwin[789]*)	SHOBJ_LDFLAGS=''
 		SHLIB_XLDFLAGS='-dynamiclib -arch_only `/usr/bin/arch` -install_name $(libdir)/$@ -current_version $(SHLIB_MAJOR)$(SHLIB_MINOR) -compatibility_version $(SHLIB_MAJOR) -v'
 	*)		SHOBJ_LDFLAGS='-dynamic'

Then rerun configure and make.


Written by Han

December 26, 2007 at 18:29

Posted in Uncategorized


Ruby 1.9.0 released

Ruby 1.9.0 was released a few minutes ago.

From the Changelog:

Tue Dec 25 23:33:55 2007 Yukihiro Matsumoto

* development version 1.9.0 released.

As promised at Ruby Kaigi last summer, it was released on Christmas day (with almost half an hour to spare).

Get it from subversion at

Congratulations to Matz, Ko1, and all others who worked hard to make this happen!

Written by Han

December 25, 2007 at 23:38

Posted in Uncategorized

Y combinator in Ruby 1.9

Just for fun, Tom Moertel’s Y combinator in Ruby 1.9’s new lambda syntax:

def y
  ->(x) { x.(x) }.(
    ->(f) { ->(*args) { yield(f.(f)).(*args) } })
end

fac = y { |rec| ->(n) { n < 2 ? 1 : n * rec.(n-1) } }

puts fac.(5)   # ==> 120

Written by Han

December 20, 2007 at 20:56

Posted in Uncategorized

Code size, code complexity and constraints

Steve Yegge discusses code size, or “bloat”, as the ultimate bad property of code.


Size and bloat work against a basic requirement: a programmer must be able to get a piece of code fully into his or her head, to fully understand it, and to grasp the consequences of changes to that code.

Steve’s focus is on how a language (Java in his case) promotes increasing code size, and how choosing a different language can reduce that size by perhaps a factor of 2 or 3. Using a more expressive language frees a programmer from writing many lines of tedious, repetitive code. By being able to focus more on the problem at hand, the code’s intention can stand out more clearly as well, which further improves understandability.

Language is certainly a factor, but I’m fairly sure Steve would concur that there is more to it.


For one thing, it is not code size per se, but something that is often strongly related to size that wreaks havoc on the understandability of a code base: code complexity.

Code complexity is often taken to mean the ease or difficulty of understanding a simple piece of code that implements some common algorithm, like an implementation of Quicksort in Haskell versus Java. That is not the complexity I am talking about. The complexity that makes programs hard to understand plays out at a somewhat higher level. In order to achieve its goals, the various parts of a program have to interact. All but the most trivial programs contain many parts that must communicate. A lot of the complexity of code is holed up in the countless ways the parts of a program talk to each other and influence each other. It is this dynamic complexity that makes things hard to understand.

Over the years, many methodologies were invented to better cope with the complexities of larger code bases. None have proven to be the silver bullet.

Object orientation, for example, is in essence an attempt to better organize code in order to rein in some of its increasing complexity, by modeling entities as objects and encapsulating their implementation while showing only a public interface to the rest of the program. It is beyond doubt that OO helps somewhat to keep a tab on complexity. But it only works up to a certain level, and there are certain characteristics of OO that, at some point, can be seen as making things worse. It is not always a good idea, for instance, to hide an implementation behind an interface: it is well established by now that it is bad practice to hide whether an object is local or at the far side of a network.

More fundamentally, objects can, and very often do, keep state. Objects interact, and calling an object’s methods often changes its state. By giving an object public methods that can change its state, a programmer is effectively creating a protocol for interacting with that object. Sometimes certain methods have to be called in a certain order, or only after some other method has been called at some point beforehand. A program, then, consists of numerous objects of different classes, calling each other and following many different protocols, whose outcome and effect may differ depending on the state the objects were in to begin with.

A programmer reading through a code base must try to grasp these protocols, and try to reconstruct how objects interact. This is hard. OO allows us to create boundless implicit protocols and stateful objects, but gives us few tools to cope with the dynamism of a running program. It’s hard to see how this promotes understandability.
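Such an implicit protocol can be made concrete with a small sketch. The Connection class and its method names below are invented for illustration; the point is that nothing in the public interface itself tells a reader that open must be called before send_data:

```ruby
# A hypothetical stateful object with a hidden protocol:
# open -> send_data (any number of times) -> close.
class Connection
  def initialize
    @state = :new
  end

  def open
    raise "protocol violation: already opened" unless @state == :new
    @state = :open
    self
  end

  def send_data(msg)
    raise "protocol violation: not open" unless @state == :open
    "sent: #{msg}"
  end

  def close
    @state = :closed
    self
  end
end

conn = Connection.new
conn.open
puts conn.send_data("hello")   # ==> sent: hello

# Calling send_data on a fresh object breaks the implicit protocol:
Connection.new.send_data("oops") rescue puts "protocol violation"
```

The protocol lives only in the guard clauses and in the heads of the programmers; a reader must reconstruct it from the implementation, which is exactly the kind of hidden dynamism discussed above.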


So, what can be done to reduce complexity?

When the complexity of a program gets out of hand, it may be a sign that there are too many degrees of freedom in its design. And the answer to that is to add constraints. Constraints can be added at various levels and in various ways. Below are a few examples.

  • For example, we can drastically reduce the number of protocols allowed on objects. An example of this is REST. But even within a single executable, having only a few well-defined ways for objects to communicate can be a great benefit. At first, this architectural style may seem limiting. However, putting in place the right set of constraints forces a developer to think a design through more deeply and may ultimately lead to a better solution: a solution that has fewer surprises and a less steep learning curve for newcomers to the code, as it adheres to a set of well-defined expectations. In addition, constraints at this level can lead to substantial architectural benefits at higher levels.
  • As another example, without any constraints in place, arbitrary objects in a large program may call a public method on other objects far away in a different corner of the program. Having unbounded links between the various parts of a large program greatly increases the difficulty of understanding it. However, depending on the nature of the program, it may be possible to establish, for example, a simple publish and subscribe mechanism for events, so that certain objects are constrained to raise events when their state changes, instead of directly calling methods on other objects, while others, quite independently, react to the occurrence of such an event. The objects are decoupled, and a programmer trying to understand the dynamics will have a much easier job.
  • Avoiding storing state in objects or systems can also increase understandability. Functional programming languages have seen an amazing rise in popularity in the last few years, even though they have been around since the earliest days of computing. This can be understood, in a way, as a reaction to object orientation. Side-effect-free functions have many benefits, not the least of which is understandability. Explicitly passing state around, instead of hiding it in objects and changing it as a side effect of method calls, can make things far easier to comprehend. In addition, it comes with fringe benefits involving concurrency and scalability.
  • Breaking up programs into smaller parts makes it harder for those parts to communicate. Certain things that would be easy if those parts shared the same memory space suddenly become much harder to accomplish. In general, this will lead to communication becoming less fine-grained. If the boundaries at which to break up a big program into smaller ones are chosen well, this may again lead to less complex, less subtle code. As a result, the now independent parts can often be simplified considerably, since each can focus on a more restricted, specialized subset. Understanding the broken-up parts may be a lot easier than trying to grasp one big monolithic block of code.
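The publish and subscribe idea can be sketched in a few lines of Ruby. The EventBus class and the event names are invented for illustration; the point is that the publisher never holds a reference to the objects that react:

```ruby
# A minimal publish/subscribe sketch: objects raise named events
# instead of calling methods on distant objects directly.
class EventBus
  def initialize
    @subscribers = Hash.new { |hash, event| hash[event] = [] }
  end

  # Register a block to be run whenever `event` is published.
  def subscribe(event, &handler)
    @subscribers[event] << handler
  end

  # Notify every subscriber of `event`; the publisher needs no
  # knowledge of who is listening.
  def publish(event, payload)
    @subscribers[event].each { |handler| handler.call(payload) }
  end
end

bus = EventBus.new
log = []

# Two independent parts of the program react to the same event...
bus.subscribe(:order_placed) { |order| log << "ship #{order}" }
bus.subscribe(:order_placed) { |order| log << "bill #{order}" }

# ...without the publisher knowing either of them exists.
bus.publish(:order_placed, "order-42")
# log ==> ["ship order-42", "bill order-42"]
```

The shipping and billing code never refer to each other or to the publisher; adding a third reaction later requires no change to any existing object.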
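Explicit state passing, the functional style mentioned above, can be sketched like this (apply_event and the event hashes are invented for illustration):

```ruby
# Each step is a pure function: it takes the current balance and an
# event, and returns a new balance without mutating anything.
def apply_event(balance, event)
  case event[:type]
  when :deposit  then balance + event[:amount]
  when :withdraw then balance - event[:amount]
  else balance
  end
end

events = [
  { type: :deposit,  amount: 50 },
  { type: :withdraw, amount: 30 }
]

# State is threaded through explicitly, so every intermediate value
# can be inspected, replayed, or tested in isolation.
final = events.inject(100) { |balance, event| apply_event(balance, event) }
puts final   # ==> 120
```

Because nothing is hidden in an object, the outcome depends only on the starting value and the sequence of events, which makes the dynamics of the code much easier to reason about.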

Written by Han

December 20, 2007 at 19:54

Posted in Uncategorized

Perl 5.10 released on its 20th birthday

Perl turned 20 on December 18th. Happy birthday!

More than five years have passed between 5.8 and 5.10. I’m pretty sure 5.10 was never planned, though. A huge amount of effort has gone into the still elusive Perl 6, but who knows when that will be released*. Meanwhile, time has passed, other languages have caught up, and an evolutionary step was in order. Some of the new features (see here for much more):

  • Regular expression improvements
    • Named matching groups in regexes (from C#)
    • Recursive patterns (In an attempt to make them Turing complete?)
    • Possessive quantifiers (/C++/ takes on a wholly different meaning…)
  • A switch like statement (“given”) with a smart match operator (a la Ruby)
  • Static variables (C style)
  • The new // (defined-or) operator

*When Matz was asked, at the 2007 Ruby Kaigi, when Ruby 2 will be released, his answer was “Two years after Perl 6”.

Written by Han

December 20, 2007 at 00:37

Posted in Uncategorized

Ginza super cluster

Michelin recently handed out stars to 150 Tokyo restaurants. Now, Tokyo is a big city, but around 30 or so of the chosen places are in Ginza, in an area that is barely half a square km (0.2 sq miles). That makes for an interesting density map (via). Note the three 3-star restaurants within spitting distance at the left.

[Map of Ginza with Michelin star overlay]

Written by Han

December 13, 2007 at 20:30

Posted in Uncategorized