Macro Design Patterns

I was reading the thread What are some fun or useful macros? on reddit and it reminded me of another thread that appeared in the Pro mailing list. These kinds of threads are always enjoyable because each time you learn something new and see really interesting things. While reading them, it crossed my mind that another variant of this question would be: what are some fun or useful macro design patterns? Instead of examples of specific macros (general-purpose or not), it would be nice to see common programming practices using macros. So, for lack of a better name, let's call them macro design patterns.

My favorite one is configuring an algorithm, especially when we want to use the correct types. Essentially, we write an algorithm using a macro that takes types or configuration arguments and then expand it to the desired configurations. For example, if you have an algorithm that operates on different types of sequences, instead of writing several duplicate functions with the same algorithm but different type declarations, just apply this pattern. A simple but contrived example: we want to compute the mean of a vector but use its proper type. In addition, we might also want the possibility of using a key to access the vector elements. We can write the following macro:

(defmacro mean-body (vector vector-type vector-ref key)
  ;; SIZE and I are gensyms so the expansion does not capture
  ;; variables from the caller's code.
  (let ((size (gensym)) (i (gensym)))
    `(locally
         (declare (type ,vector-type ,vector))
       (let ((,size (length ,vector)))
         (/ (loop for ,i from 0 below ,size
                  sum ,(if key
                           `(funcall ,key (,vector-ref ,vector ,i))
                           `(,vector-ref ,vector ,i)))
            ,size)))))

The macro contains the algorithm (in this simple case, the mean) and the arguments allow us to configure the multiple versions we need. If we want a simple-vector, the macro will expand to use the correct type declaration and svref. If a key function is needed, it will be included as well. Then, we can call the macro with the several configuration values inside the main function:

(defun mean (vector &optional key)
  (typecase vector
    (simple-vector
     (if key
         (mean-body vector simple-vector svref key)
         (mean-body vector simple-vector svref nil)))
    (vector
     (if key
         (mean-body vector vector aref key)
         (mean-body vector vector aref nil)))
    (otherwise
     (if key
         (mean-body vector sequence elt key)
         (mean-body vector sequence elt nil)))))
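
For concreteness, here is roughly what the simple-vector branch with a key expands into (gensyms renamed for readability):

(locally
    (declare (type simple-vector vector))
  (let ((size (length vector)))
    (/ (loop for i from 0 below size
             sum (funcall key (svref vector i)))
       size)))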

This can be very useful in situations where we want to optimize code, since it becomes easy to add the proper type declarations to the input arguments of an algorithm. Moreover, we keep the algorithm in a single place, making it easier to maintain. Depending on the situation, we can also define a function for each configuration. In the example we could have a mean-simple-vector and a mean-vector.
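
For instance, mean-simple-vector could be just a thin wrapper around the same macro; a minimal sketch following the pattern above:

(defun mean-simple-vector (vector &optional key)
  ;; Same algorithm, fixed to simple vectors; the macro supplies the
  ;; type declaration and the accessor.
  (if key
      (mean-body vector simple-vector svref key)
      (mean-body vector simple-vector svref nil)))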

I don't know if it already has a specific name, but I like to call it the configurable algorithm pattern. I find it very useful. And thinking back to the reddit thread: what are your favorite macro design patterns? Which ones do you find useful and use regularly? If you want to share, feel free to drop a line. I am interested in seeing and learning other patterns!

Sorting algorithms used in the CL implementations

Which sorting algorithm should one implement when developing a program? The best answer is probably none. Use the sort provided by your system/library/etc. Unless you know your input data has some special properties that you can take advantage of, the provided sort should be enough for your needs and is probably implemented more efficiently.

However, I think it is important to know which sorting algorithm is implemented. If one knows the properties of the data, it is possible to understand whether the provided sort can or will pose a problem. In the same way a programmer shouldn't implement a sorting algorithm every time they need to sort something, the programmer should also be aware of the limitations/advantages of the system sort. That way one can decide whether a special sort is needed or not.

Common Lisp provides the functions sort and stable-sort. The HyperSpec describes their operation well but it does not define the sorting algorithm. That decision is left to the implementations. In addition, the two functions don't necessarily share the same algorithm. The difference between them is that stable-sort guarantees stability, i.e., two elements that compare equal keep their relative order after sorting is completed. The use of sort and stable-sort requires some care (see the section sort pitfalls) but let's focus on the algorithms and not on their usage.
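
To make the stability guarantee concrete, here is a small illustration (the pair data is made up for the example):

;; Sorting pairs by their CAR only: (1 . A) and (1 . B) compare equal.
(stable-sort (list (cons 1 'a) (cons 1 'b) (cons 0 'c)) #'< :key #'car)
;; => ((0 . C) (1 . A) (1 . B)) ; equal elements keep their relative order
;; With SORT, the final order of (1 . A) and (1 . B) is unspecified.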

What sorting algorithms do the major open source CL implementations actually use? I was curious about it and went to check the sources of ABCL, CCL, CLISP, CMUCL, ECL and SBCL. Not surprisingly, we find some differences between the implementations. What was more unexpected to discover is that some implementations also use different sorting algorithms depending on the sequence type. A quick survey of the findings is summarized in the following table (if anything is incorrect, please tell me). The links to the source code are in the implementation names (careful, for CCL and SBCL there are two links).

Implementation | sort                            | stable-sort
ABCL           | merge sort (lists) / quicksort  | merge sort
CCL            | merge sort (lists) / quicksort  | merge sort
CLISP          | tree sort                       | tree sort
CMUCL          | heapsort                        | merge sort
ECL            | merge sort (lists) / quicksort  | quicksort (strings + bit vectors) / merge sort
SBCL           | merge sort (lists) / heapsort   | merge sort
In terms of the implementation of sort, quicksort is the most used algorithm, followed by heapsort. The choice of these algorithms is expected. Both have an average-case performance of O(n lg n) and heapsort guarantees a worst-case performance of O(n lg n) too. Quicksort has a worst-case performance of O(n²) but it can be optimized in several ways so that it also gives an expected worst-case performance of O(n lg n). However, it seems that the quicksort implementations are not completely optimized. In ECL (and ABCL) quicksort implements a partition scheme which deals better with duplicate elements (although it is not three-way partitioning) but it always picks the first element as pivot. CCL chooses the pivot with a median-of-3 method and always recurses on the smaller partition to ensure a worst-case stack depth of O(lg n).
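
For reference, median-of-3 pivot selection looks roughly like this (a generic sketch, not CCL's actual code): pick whichever of the first, middle and last elements is neither the smallest nor the largest.

;; Generic median-of-3 sketch (not taken from any implementation):
;; return the median of the first, middle and last elements of
;; VECTOR[LO..HI] under PREDICATE.
(defun median-of-3-pivot (vector lo hi predicate)
  (let ((a (aref vector lo))
        (b (aref vector (floor (+ lo hi) 2)))
        (c (aref vector hi)))
    (if (funcall predicate a b)
        (cond ((funcall predicate b c) b)
              ((funcall predicate a c) c)
              (t a))
        (cond ((funcall predicate a c) a)
              ((funcall predicate b c) c)
              (t b)))))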

As for CLISP, I think it uses a tree sort but I am not entirely sure. The only source file I could find with a sort implementation was sort.d and it looks like it contains an implementation of tree sort with a self-balancing binary tree, which gives this algorithm an average and worst-case performance of O(n lg n).

As expected, most of the implementations use merge sort to implement stable-sort since it is a stable sort with average and worst-case performance of O(n lg n). Apparently, all implementations use bottom-up merge sorts, with the exception of CCL and ECL. Another interesting thing is that merge sort is also used for lists in sort in most of the implementations. However, I was surprised to find quicksort in the stable-sort column because it is not a stable algorithm. Since it is only used for strings and bit vectors, where equal elements are indistinguishable, it is not really an issue. While reading the source code of the implementations, I realized that ABCL was using quicksort in stable-sort for all non-list sequences. This problem exists in the current 1.0.1 release, but I've sent a bug report with a quick fix to the maintainers. The next release should have stable-sort fixed.

This exploration of the sorting algorithms used in the open source implementations was very educational and interesting to me. I've learned which algorithms are actually used and enjoyed seeing how they were implemented. Just spotting the issue in ABCL's stable-sort made this review worthwhile. I think there is still room for improvement in some implementations, but knowing the strengths and weaknesses of the sorts in CL is already good enough. On a final note, I just wonder what algorithms are used in ACL and LW.

Package organization and exporting symbols

I've started to re-design my main library for evolutionary computation. One of the main things I did for the new version was a completely new organization of the packages (and respective files/modules). Before, I had essentially two main packages: the library itself and the examples. Although simple, this model became a pain to use when I extended the library heavily with more algorithms and related utilities. I hope I am not now going in the opposite direction (too complicated), but so far I like the new organization.

In short, there is a package for each main branch of algorithms (e.g., GA, GP) with everything specific to that kind of algorithm, which imports from a core package with the common components. These "sub-packages" are gathered together in a single package (the main library package). This way, it is possible to use everything in a project or just the desired component (e.g., if you just want GP). Furthermore, an extra package is provided for users, to allow REPL experimentation without being in the library's main package.
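
As a rough sketch of this layout (all package and symbol names here are hypothetical):

;; Core package with the common components.
(defpackage :evo-core
  (:use :common-lisp)
  (:export #:individual #:evaluate))

;; One package per branch of algorithms, built on the core.
(defpackage :evo-gp
  (:use :common-lisp :evo-core)
  (:export #:run-gp))

;; Main library package gathering the sub-packages; re-exporting
;; their symbols is covered below.
(defpackage :evo
  (:use :common-lisp :evo-core :evo-gp))

;; Extra package for REPL experimentation.
(defpackage :evo-user
  (:use :common-lisp :evo))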

However, while implementing this scheme, I realized that I wanted all the exported symbols from the packages that compose the library to also be exported by the main library package. This way, all the symbols that compose the library are easily seen in the main package. For me this is very useful since it allows exploration of a library, especially if it has many things. Since I had never done something like this before, I went and searched for an easy way to solve this minor problem.

The answer is basically to use do-external-symbols. With this macro you iterate over the exported symbols of a given package and then export them again from the package you want. Do this inside an eval-when form, and when the library is loaded the main package will contain all the symbols. If *library-sub-packages* is a list with the names of the packages that compose your library:

(eval-when (:compile-toplevel :load-toplevel :execute)
  (dolist (package *library-sub-packages*)
    (do-external-symbols (symbol (find-package package))
      ;; Re-export each sub-package's external symbols from the
      ;; main library package.
      (export symbol (find-package *library-main-package*)))))

Making all the exported symbols of the internal packages also exported by the main package turned out to be an easy thing to do. I don't recall having come across do-external-symbols (or the related macros) before, but I'm glad such a macro is provided. As always, the HyperSpec is your friend :-)

ECLM 2011 Notes

This last weekend I was in Amsterdam to attend the European Common Lisp Meeting. This was my third participation in an organized Lisp meeting (after the first ZSLUG in Zurich and ELS 2011 in Hamburg) and I am happy I decided to go. I was only present at the meeting itself, since going to the dinners and city tour would have been way out of my budget. Anyway, the ECLM was a nice venue. I enjoyed most of the talks and still had the opportunity to talk with fellow lispers. I enjoyed talking with Luís Oliveira and meeting Zach Beane.

The first talk was given by Nick Levine and it can be viewed as two parts. In the first one, he talked about his experience of trying to write a CL book for O'Reilly. It was quite interesting to see how hard it can be to prepare a book, especially for a publisher who was (is?) not very lisp-friendly. The second part was mostly about the community, although presented with a rant on libraries. This is a topic that has been debated several times. Thanks to Quicklisp, the problem now is not installing libraries but finding them and knowing which ones are good. I am not sure if creating another site, as suggested, would be a good thing since resources are already scarce. Perhaps more thought should be put into how to improve the current ones. CLiki still seems to me the best starting point. Still, Nick Levine's talk was good and entertaining. One of the best of the meeting.

The following talks were mostly about companies that use CL as their main programming language. Jack Harper talked about the company he recently started, Secure Outcomes, which produces a unique portable fingerprint scanner. I must say his talk was quite inspiring! He talked about how to get a startup running and the decisions that led him to choose Lisp as the main development language. In addition, he also explained why he prefers LispWorks to any other implementation.

Next was the talk given by Dave Cooper. I must confess his talk was the weakest of the day, mainly because he talked about two different subjects without any connection. He started by talking about GDL, the main product of his company, Genworks. I'm sure GDL can be a great thing but I didn't get much from his talk. About halfway through, the talk suddenly changed to the Common Lisp Foundation. This was the interesting part of the talk, since he explained the aims of the CLF, the people behind it, etc. However, it was not clear how it will distinguish itself from the ALU in terms of operation (in terms of purpose, the CLF focuses only on Common Lisp while the ALU covers all Lisp dialects) and this was the main concern expressed during question time. After presenting the CLF, and since there was still some time left before the next presenter, he went back to GDL.

Afterwards, it was the turn of Luke Gorrie to present his lisp-hacker startup, Teclo Networks. His talk was an expanded/updated version of the one given in Zurich. Still, it was quite interesting. He started by telling how a group of hackers with a Lisp and/or Erlang background got together to improve TCP/IP communications on mobile networks. Then, he showed us how badly TCP misbehaves in a mobile network and how their product, Sambal, can give 10% to 27% improvements. Another interesting point of the talk was that CL is used as their main development language. In short, it is used to develop and study all their algorithms. They have a TCP/IP stack fully implemented in CL! Moreover, all their analysis and maintenance tools are also in CL. However, in the actual product boxes they have reimplemented the algorithms in C. The reason: extreme pragmatism. Luke concluded by hinting that sales of their product are going very well!

In the afternoon, the talks started with Paul Miller from Xanalys. The talk was dedicated to Link Explorer, a Windows desktop tool to analyze data. The application is quite impressive and was developed using just CL. Paul also gave us a demonstration of the tool as well as some notes on future development.

The best and most awaited talk, Quicklisp, technically and socially, was given by Zach Beane. The talk focused on several aspects of Quicklisp. Zach started by giving an overview of the famous library problem of CL and the solutions that existed before QL, explaining their advantages and disadvantages. Also, and very importantly, what people were actually using and what difficulties they were facing. In a survey he did, most CL programmers were installing libraries by hand, including Zach! Then he proceeded to how Quicklisp was developed, some technical issues, what the role of Quicklisp is and what the reception has been after one year. The talk then focused on the social impact of Quicklisp in the community. One of the things that makes Zach happy is the number of emails he gets saying that people are back to using CL and contributing more to the community (i.e., making libraries available) because of QL. Finally, he gave some indications of what is to come. My perception is that the possibility to enable hacking as was possible with clbuild is one of the most exciting future features for Quicklisp. Zach's talk was excellent from all points of view!

The last talk of the day was by Hans Hübner. This was my second favorite presentation. Although the topic, code style and conventions, can start some heated discussions, I must say that I agree with almost everything Hans Hübner mentioned. However, like everything, some common sense is always necessary. One of the main points was that lispers should not use constructions which are not part of the standard language when the standard provides options, just to save some typing. It is more important for another programmer to quickly understand what is written than to force them to look up the definition of unusual constructs. if* and bind were examples given. Hans also talked about the 80-column rule, style guides, etc. In the end, it always depends on the project, the people, etc., but code style is important and should not be ignored.

The meeting ended with several lightning talks. The most interesting bits were: Marco Antoniotti announced ELS 2012, to be held in Zadar, Croatia, around April-May; Christophe Rhodes talked again about swankr, a swank backend and slime for R; and Erik Huelsmann announced ABCL 1.0.0.

Some words on the organization. Organizing a meeting of this kind is not easy, and Edi Weitz and Arthur Lemmens must be congratulated for putting on a great event. Not everything was perfect, but things went smoothly. I hope it continues to happen in the coming years!

Some thoughts on The Book of Ruby

Disclaimer: No Starch Press provided me a free copy for review.

Ruby is a programming language that I have always liked and somehow prefer to Python. I used Ruby for some prototyping, but when you have Common Lisp, it becomes hard to use any other language. When No Starch Press offered me the opportunity to review The Book of Ruby, I was curious because the two previous books I had read from them were simply excellent. I already have four books on Ruby, so I was wondering how this one would compare to those but, most importantly, whether it would follow the same style as Land of Lisp and Learn You a Haskell for Great Good!. After reading the book, unfortunately, my feelings are mixed. Let's see.

The book is well-written, with a good structure, covering beginner to advanced topics. It contains 20 chapters (not counting the introduction) and 4 appendixes. The initial chapters focus on the basics of the Ruby language. The later ones focus on more advanced parts of Ruby and more specific topics, for example, debugging and Ruby on Rails. This is a positive aspect of the book, since someone starting with Ruby has access to several important topics in a single source. The chapters also have a Digging Deeper section at the end, presenting interesting discussions of the topic at hand. Also a nice read was the last chapter, since it deals with the dynamic aspects of the language (use of eval, etc.).

However, the book has some issues. The most important one is the coding style, or the lack of it. The book is not consistent and does not follow Ruby conventions, and it shows quite easily. I believe this is bad for a programmer new to the language, since it makes the examples harder to understand, not to mention other things. Second, the book does not have the same fun style as the other No Starch Press books. This is not a problem per se, but since the book's subtitle is A hands-on guide for the adventurous, the reader is more or less misled into thinking it follows the fun style of the other books. Third, the examples are too contrived and project ideas are missing. Ruby is a very nice language and with it you can do lots of things without writing many lines. So, it is a little disappointing that a book aimed at someone who wants to learn the language (but not programming from scratch) does not offer some pointers on how to expand on what is being learned.

To conclude, the book is nice, but it is probably not the best book for a complete novice and not the best Ruby book.

GECCO in Dublin

Last week I was in Dublin for GECCO. It was good to return to a conference I hadn't been to for quite some time, especially because it was in Europe. I presented my work at the new Self-* Search track and the feedback I got was good. The organization was also great, at least from what I experienced (I was not at the poster session, which I was told was not that good because of some karaoke!?). The weather was also nice, which made my stay in Dublin a pleasant one :-)

NXT 2.0

Yesterday I finally dedicated some time to assembling my newest toy: a Lego Mindstorms NXT 2.0! I just built the basic drive rover with some sounds. Now I need to explore the capabilities of the NXT further and see what other options exist for programming the brick.

As before, the internal language is a bit limited. I played with the original Mindstorms a few years ago. Already back then, I used a set of Lisp macros in a few experiments to generate commands for the rover. There is XS in Lisp but I'm not sure if that is what I want. Something more like leJOS would be nicer (but in Common Lisp). Anyway, regardless of that, a Mindstorms is always a cool toy :-)