I’ve had two observations floating around in my head, looking for a way to connect with each other.

Many “architecture patterns” are scar tissue around the absence of higher-level language features.

and a criterion for choosing languages and designing APIs

Write down the simplest syntactically valid expression of what you want to do. That expression should be a program.

First, let me clarify that there are all sorts of wonderful patterns in software–things like “functions”, “iteration”, “monads”, “concurrent execution”, “laziness”, “memoization”, and “parametric polymorphism”. Sometimes, though, we write the same combination of symbols over and over again, in a nontrivial way. Maybe it takes ten or twenty lines to encapsulate an idea, and you have to type those lines every time you want to use the idea, because the language cannot express it directly. It’s not that the underlying concept is wrong–it’s that the expression of it in a particular domain is unwieldy, and has taken on a life of its own. Things like Builders and, in this post, Factories.

Every language emphasizes some of these ideas. Erlang, for instance, emphasizes concurrency, and makes it easy to write concurrent code by introducing special syntax for actors and sending messages. Ruby considers lexical closures important, and so it has special syntax for writing blocks concisely. However, languages must balance the expressiveness of special syntax against the complexity of managing it. Scala, for instance, includes special syntactic rules for a broad variety of constructs (XML literals, lexical closures, keyword arguments, implicit scope, variable declaration, types)—and often several syntaxes for the same construct (method invocation, arguments, code blocks). When there are many syntax rules, understanding how those rules interact with each other can be difficult.

I argue that defining new syntax should be a language feature: one of Lisp’s strengths is that its syntax is both highly regular and semantically fluid. Variable definition, iteration, concurrency, and even evaluation rules themselves can be defined as libraries—in a controlled, predictable way. In this article, I’d like to give some pragmatic examples of why I think this way.

Netty

There’s a Java library called Netty, which helps you write network servers. In Netty each connection is called a channel, and bytes which come from the network flow through a pipeline of handlers. Each handler transforms incoming messages in some way, and typically forwards a different kind of message to the next handler down the pipeline.

Now, some handlers are safe to re-use across different channels–perhaps because they don’t store any mutable state. For instance, it’s OK to use a ProtobufDecoder to decode several Protocol Buffer messages at the same time. It’s not safe, however, to use a LengthFieldBasedFrameDecoder to decode two channels at once, because this kind of decoder reads a length header, then saves that state and uses it to figure out how many more bytes it needs to accept from that channel. We need a new LengthFieldBasedFrameDecoder every time we accept a new connection.

In languages which have first-class functions, the easiest way to get a new, say, Pipeline is to write down a function which makes a new Pipeline, and then call it whenever you need one. Here’s one for Riemann.

(fn []
  (doto (Channels/pipeline)
    (.addLast "integer-header-decoder"
              (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4))
    (.addLast "protobuf-decoder"
              (ProtobufDecoder. (Proto$Msg/getDefaultInstance)))))

Doto is an example of redefinable syntax. It’s a macro—a function which rewrites code at compile time. Doto transforms code like (doto obj (function1 arg1) (function2)) into (let [x obj] (function1 x arg1) (function2 x) x), where x is a unique variable which will not conflict with the surrounding scope. In short, it simplifies a common pattern: performing a series of operations on the same object, but eliminates the need to explicitly name the object with a variable, or to write the variable in each expression.

Every time you call this function, it creates a new pipeline (with Channels.pipeline()), and adds a new LengthFieldBasedFrameDecoder to it, then adds a new protobuf decoder to it, then returns the pipeline.

Java doesn’t have first-class functions. It has something called Callable, which is a generic interface for zero-arity computations, but since there are no arguments you’re stuck writing a new class and explicitly closing over the variables you need every time you want a function. Java works around these gaps by creating a new class for every function it might need, and giving that class a single method. These classes are called “Factories”. Netty has a factory specifically for generating pipelines, so to build new Pipelines, you have to write a new class.

public class RiemannTcpChannelPipelineFactory implements ChannelPipelineFactory {
  public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = Channels.pipeline();
    p.addLast("integer-header-decoder",
      new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
    p.addLast("protobuf-decoder",
      new ProtobufDecoder(Proto.Msg.getDefaultInstance()));
    return p;
  }
}

new RiemannTcpChannelPipelineFactory();

The class (and the interface it implements) is basically irrelevant–this class only has one method, and its type is inferrable. This is a first-class function, in Java. We can shorten it a bit by writing an anonymous class:

new ChannelPipelineFactory() {
  public ChannelPipeline getPipeline() throws Exception {

… which saves us from having to name our factory, but we still have to talk about ChannelPipelineFactory, remember its method signature and constructor, etc–and the implementer still needs to write a class or interface.
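To see the shape of this pattern without Netty on the classpath, here’s a minimal, self-contained sketch of the same anonymous-class idiom, using java.util.concurrent.Callable as a stand-in for ChannelPipelineFactory (the returned string is just a placeholder for a real pipeline):

```java
import java.util.concurrent.Callable;

public class Main {
  public static void main(String[] args) throws Exception {
    // The anonymous class spares us naming the factory itself, but we
    // still name the interface and recall its method signature.
    Callable<String> factory = new Callable<String>() {
      public String call() {
        return "pipeline";
      }
    };
    System.out.println(factory.call());
  }
}
```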

Since Netty expects a ChannelPipelineFactory, we can’t just feed it a Clojure function. Instead, we can use (reify) to create a new instance of a dynamically compiled class which implements any number of interfaces, and has final local variables closed over from the local environment. So if we wanted to reuse the same protobuf decoder in every pipeline…

(let [pb (ProtobufDecoder. (Proto$Msg/getDefaultInstance))]
  (reify ChannelPipelineFactory
     (getPipeline [this]
       (doto (Channels/pipeline)
         (.addLast "integer-header-decoder"
                   (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4))
         (.addLast "protobuf-decoder" pb)))))

In Java, you’d store the shared decoder in a final instance field, like so. Note that if you wanted to change pb you’d have to write some plumbing–getters, setters, constructors, or whatever–or use an anonymous class and close over a reference object.

public class RiemannTcpChannelPipelineFactory implements ChannelPipelineFactory {
  final ProtobufDecoder pb = new ProtobufDecoder(Proto.Msg.getDefaultInstance());

  ...
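As an aside, the anonymous-class route can avoid that plumbing by closing over a final local, much as reify does. A self-contained sketch, with Callable again standing in for Netty’s factory interface and a string standing in for a real handler:

```java
import java.util.concurrent.Callable;

public class Main {
  public static void main(String[] args) throws Exception {
    // Like reify's closed-over binding: an anonymous class may capture
    // a final local from the enclosing scope, no field plumbing needed.
    final String pb = "shared-protobuf-decoder";
    Callable<String> factory = new Callable<String>() {
      public String call() {
        return "pipeline using " + pb;
      }
    };
    System.out.println(factory.call());
  }
}
```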

Now… these two create basically identical objects. Same logical flow. But notice what’s missing in the Clojure code.

There’s no name for the factory. We don’t need one because it’s a meaningless object–its sole purpose is to act like a partially applied function. It disappears into the bowels of Netty and we never think of it again. This is an entire object we didn’t have to name, keep consistent with the rest of the codebase, put in a new file, or check into source control. The architecture pattern of “Factory”, and its associated single-serving packets of one verb each, has disappeared.

(let [adder (partial + 1 2)]
  (adder 3 4)) ; => 1 + 2 + 3 + 4 = 10

public class AdderFactory {
  public final int addend1;
  public final int addend2;
  ...

  public AdderFactory(final int addend1) {
    this.addend1 = addend1;
  }

  public AdderFactory(final int addend1, final int addend2) {
    this.addend1 = addend1;
    this.addend2 = addend2;
  }

  ...

  public int add(final int anotherAddend1, final int anotherAddend2) {
    return addend1 + addend2 + anotherAddend1 + anotherAddend2;
  }
}

AdderFactory adder = new AdderFactory(1, 2);
adder.add(3, 4);

Factories are just awkward ways to express partial application of functions.

Back to Netty.

So far we’ve talked about a single ChannelPipelineFactory. What happens if you want to make more than one? Riemann has at least three–and I don’t want to write down three classes for three almost-identical pipelines. I just want to write down their names, and the handlers themselves, and have a function take care of the rest of the plumbing.

Enter our sinister friend, the macro, stage left:

(defmacro channel-pipeline-factory
  "Constructs an instance of a Netty ChannelPipelineFactory from a list of
  names and expressions which evaluate to handlers. Names with metadata 
  :shared are evaluated once and re-used in every invocation of 
  getPipeline(), other handlers will be evaluated each time.

  (channel-pipeline-factory
             frame-decoder    (make-an-int32-frame-decoder)
    ^:shared protobuf-decoder (ProtobufDecoder. (Proto$Msg/getDefaultInstance))
    ^:shared msg-decoder      msg-decoder)"
  [& names-and-exprs]
  (assert (even? (count names-and-exprs)))
  (let [handlers (partition 2 names-and-exprs)
        shared (filter (comp :shared meta first) handlers)
        forms (map (fn [[h-name h-expr] ]
                        `(.addLast ~(str h-name) 
                                   ~(if (:shared (meta h-name))
                                     h-name
                                     h-expr)))
                   handlers)]
    `(let [~@(apply concat shared)]
       (reify ChannelPipelineFactory
         (getPipeline [this]
                      (doto (org.jboss.netty.channel.Channels/pipeline)
                        ~@forms))))))

What the hell is this thing?

Well first, it’s a macro. That means it’s Clojure code which runs at compile time. It’s going to receive Clojure source code as its arguments, and return other code to replace itself. Since Clojure is homoiconic, its source code looks like the data structure that it is. We can use the same language to manipulate data and code. Macros define new syntax.

First comes the docstring. If we say (doc channel-pipeline-factory) at a REPL, it’ll show us the documentation written here, including an example of how to use the macro. ^:shared foo is metadata–the symbol foo will have a special key called :shared set on its metadata map. We use that to discriminate between handlers that can be shared safely, and those which can’t.

  [& names-and-exprs]

These are the arguments: a list like [name1 handler1 name2 handler2].

  (assert (even? (count names-and-exprs)))

This check runs at compile time, and verifies that we passed an even number of forms to the macro. This is a simple way to validate the new syntax we’re inventing.

(let [handlers (partition 2 names-and-exprs)
      shared (filter (comp :shared meta first) handlers)

Now we assign a new variable: handlers. (partition 2) splits up the list of handlers into [name, handler] pairs, to make it easier to work with. Then we find all the handlers which are sharable between pipelines. (comp :shared meta first) composes three functions into one. Take the first part of the handler (the name), get its metadata, and tell me if it’s :shared.

(let [handlers (partition 2 names-and-exprs)
      shared (filter (comp :shared meta first) handlers)
      forms (map (fn [[h-name h-expr] ]
                      `(.addLast ~(str h-name) 
                                 ~(if (:shared (meta h-name))
                                   h-name
                                   h-expr)))
                 handlers)]

Now we turn these pairs like [protobuf-decoder (ProtobufDecoder...)] into code like (.addLast "protobuf-decoder" protobuf-decoder) if it’s shared, and (.addLast "protobuf-decoder" (ProtobufDecoder...)) otherwise. Where does the variable protobuf-decoder come from?

  `(let [~@(apply concat shared)]

Ah, there it is. We take all the shared name/handler pairs and bind their names to their values as local variables. But wait–what’s that backtick just before let? That’s a special symbol for writing macros, and it means “Don’t run this code–just construct it”. ~@ means “run this code now, and splice the sequence it returns into place”. So the first part of the code we return will be the (let) expression binding shared names to handlers.

(reify ChannelPipelineFactory
       (getPipeline [this]
                    (doto (org.jboss.netty.channel.Channels/pipeline)
                    ~@forms))))))

And there’s the pipeline factory itself. We construct a new pipeline, and… insert new code–the forms we generated before.

Macros give us control of syntax, and allow us to solve problems at compilation time. You don’t have access to the values behind the code, but you can manipulate the symbols of the code itself absent meaning. Syntax without semantics. At compile time, Clojure invokes our macro and generates this bulky code we had before…

(let [protobuf-decoder (ProtobufDecoder. (Proto$Msg/getDefaultInstance))]
  (reify ChannelPipelineFactory
     (getPipeline [this]
       (doto (Channels/pipeline)
         (.addLast "integer-header-decoder"
                   (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4))
         (.addLast "protobuf-decoder" protobuf-decoder)))))

… from a much simpler expression:

(channel-pipeline-factory
           integer-header-decoder (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4)
  ^:shared protobuf-decoder       (ProtobufDecoder. (Proto$Msg/getDefaultInstance)))

Notice what’s missing. We don’t need to think about the pipeline class, or the name of its method. We don’t have to name and manipulate variables. .addLast disappeared entirely. The protobuf handler is reused, and the length decoder is created anew every time–but they’re expressed exactly the same way. We’ve fundamentally altered the syntax of the language–its execution order–in a controlled way. This expression is symmetric, compact, reusable, and efficient.

We’ve reduced the problem to a simple, minimal expression–and made that into code.

Tradeoffs

I didn’t start out with this macro. Originally, Riemann used plain functions to compose pipelines. As the pipelines evolved and split into related variants, the code did too. When it came time to debug performance problems, I had a difficult time understanding what the pipelines actually looked like—composing a pipeline involved three to four layers of indirect functions across three namespaces. In order to understand the problem—and develop a solution—I needed a clear way to express pipelines themselves.

(channel-pipeline-factory
           int32-frame-decoder (int32-frame-decoder)
  ^:shared int32-frame-encoder (int32-frame-encoder)
  ^:shared executor            shared-execution-handler
  ^:shared protobuf-decoder    (protobuf-decoder)
  ^:shared protobuf-encoder    (protobuf-encoder)
  ^:shared msg-decoder         (msg-decoder)
  ^:shared msg-encoder         (msg-encoder)
  ^:shared handler             (gen-tcp-handler 
                                 core
                                 channel-group
                                 tcp-handler))

In this code, the relationships between handlers are easy to understand, and making changes is simple. However, this isn’t the only way to express the problem. We could provide exactly the same semantics with a plain old function taking other functions. Note that #(foo bar) is Clojure shorthand for (fn [] (foo bar)).

(channel-pipeline-factory
  :unshared :int32-frame-decoder #(int32-frame-decoder)
  :shared   :int32-frame-encoder (int32-frame-encoder)
  :shared   :executor            shared-execution-handler
  :shared   :protobuf-decoder    (protobuf-decoder)
  :shared   :protobuf-encoder    (protobuf-encoder)
  :shared   :msg-decoder         (msg-decoder)
  :shared   :msg-encoder         (msg-encoder)
  :shared   :handler             (gen-tcp-handler 
                                 core
                                 channel-group
                                 tcp-handler))

In this code we’ve replaced bare symbols for handler names with :keywords, since symbols in normal code are resolved in the current scope. Keywords can’t take metadata, so we’ve introduced a :shared keyword to indicate that a handler is sharable. Non-shared handlers, like int32-frame-decoder, are written as functions which are invoked every time we generate a new pipeline. And to parse the list into distinct handlers, we could either wrap each handler in a list or vector, or (as shown here), introduce a mandatory :unshared keyword such that every handler has three parts.

This is still a clean way to express a pipeline factory—and it has distinct tradeoffs. First, the macro runs at compile time. That means you can do an expensive operation once at compile time, and generate code which is quick to execute at runtime. The naive function version, by contrast, has to iterate over the handler forms every time it’s invoked, identify whether each is shared or unshared, and may invoke additional functions to generate unshared handlers. If this code is performance-critical, the iteration and function invocation may not be in a form the JIT can efficiently optimize.

Macros can simplify expressing the same terms over and over again, and many library authors use them to provide domain-specific languages. For example, Riemann has a compact query syntax built on macros, which cuts out much of the boilerplate required in filtering events with functions. This expressiveness comes at a cost; macros can make it hard to reason about when code is evaluated, and break the substitution rule that a variable is equivalent to its value. This means that macros are typically more suitable for end users than for library code—and you should typically provide function equivalents to macro expressions where possible.

As a consequence of violating the substitution rule (and evaluation order in general), macros sacrifice runtime composition. Since macros operate on expressions, and not the runtime-evaluated value of those expressions, they’re difficult to use whenever you want to bind a form to a variable, or pass a value at runtime. For instance, (map future ['(+ 1 2) '(+ 3 4)]) will throw a CompilerException, informing you that the compiler can’t take the value of a macro. This gives rise to macro contagion: anywhere you want to invoke a macro without literal code, the calling expression must also be a macro. The power afforded by the macro system comes with a real cost: we can no longer enjoy the freedom of dynamic evaluation.

In Riemann’s particular case, the performance characteristics of the (channel-pipeline-factory) macro outweigh the reusability costs—but I don’t recommend making this choice lightly. Wherever possible, use a function.

Further examples

In general, any control flow can be expressed as a function which takes first-class functions as arguments. Javascript, for instance, uses explicit callback functions to express futures:

var a = 1;
var f = future(function() { return a + 2; });
f.await(); // returns 3

And equivalently, in Clojure one might write:

(let [a 1
      f (future-call (fn [] (+ a 2)))] ; Or alternatively, #(+ a 2)
  (deref f)) ; returns 3

But we can erase the need for an anonymous function entirely by using a macro—like the one built into Clojure for futures:

(let [a 1
      f (future (+ a 2))]
  (deref f)) ; returns 3
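For contrast, the closest standard-Java spelling of this future uses an ExecutorService and an explicit Callable. There’s no macro to elide the callback, so the closure must be spelled out as an anonymous class:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Main {
  public static void main(String[] args) throws Exception {
    final int a = 1;
    ExecutorService exec = Executors.newSingleThreadExecutor();
    // The anonymous Callable plays the role of the explicit callback:
    // Java has no syntax to hide it.
    Future<Integer> f = exec.submit(new Callable<Integer>() {
      public Integer call() {
        return a + 2;
      }
    });
    System.out.println(f.get()); // prints 3
    exec.shutdown();
  }
}
```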

The Clojure standard library uses macros extensively for control flow. Short-circuiting (and) and (or) are macros, as are the more complex conditionals (cond) and (condp). Java’s special syntax for synchronized (…) { … } is written as the (locking) macro—and the concurrency expressions (dosync) for STM transactions, (future) for futures, (delay) for laziness, and (lazy-seq) for sequence construction are macros as well. You can write your own try/catch by using the macro system, as Slingshot does to great effect. In short, language features which would be a part of the compiler in other languages can be written and used by anyone.

Summary

Macros are a powerful tool to express complex ideas in very little code; and where used judiciously, help us reason about difficult problems in a clear way. But—just as language designers do—we must balance the expressiveness of new syntax with the complexity of its interactions. In general, I recommend you:

  • Write simple macros which are as easy to reason about as possible.
  • Use macros to express purely syntactic transformations, like control flow.
  • Choose a macro to simplify writing efficient, but awkward, code which the runtime cannot optimize for you.
  • In most other cases, prefer normal functions.
Aphyr on

As an aside, I want to note that my use of a macro here makes sense in Riemann’s context–where pipeline factories are fixed at compile time–but if I were writing an API for dynamic use (e.g. Netty) you might not have all the necessary pieces at compile time, and a macro would just get in the way. A better way to express this API, with a slight performance cost, is simply to use a function:

(channel-pipeline-factory
  integer-header-decoder #(LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4)
  protobuf-decoder       (ProtobufDecoder. (Proto$Msg/getDefaultInstance)))

… where the function assumes any functions it receives are non-shared, and calls them every time getPipeline() is invoked to generate new handlers.

Since this is in a performance-critical path in Riemann’s infrastructure, I’m trying to eke out every last ounce I can, and avoiding the extra function lookup+invocation (plus some awkward type hints) is one of the things I’m trying. Plus this application of implicitly controlling/delaying expression invocation was too interesting to pass up in a post. :)

Alex Redington

I like your conclusion, your exemplar macro, and overall your line of reasoning, but as a person who suffered through an age where working in LISP was even more difficult to achieve than today, when working in Java was something many of us, including me, had to swallow, I’d like to object to some of your distinctions with Java.

Primo: Java has a concept of an Anonymous Class. Generally these classes are implemented against interfaces, and you place them within the definition of some other class that will be using the Anonymous Class. These give you a (verbose, awkward, and still inferior) mechanism for defining fns inline as in Clojure.

Secundo: Java Anonymous Classes can close over their local scope, removing the necessity of all the overhead of get/set, constructor variables propagating scope, etc. However, as Anonymous Classes are able to access those variables, and not just their values, this means that the mechanisms for mutability (and reasoning about its consequences) expand rapidly when you start using Anonymous Classes which have mutating behavior.

These two points are not terribly important, but useful to keep in mind if you find yourself in the unfortunate position of having to write Java source code. Maybe Android development, for example.

Vedang

the macro will actually expand to

(let protobuf-decoder (ProtobufDecoder. (Proto$Msg/getDefaultInstance)) (.addLast "integer-header-decoder" (LengthFieldBasedFrameDecoder. Integer/MAX_VALUE 0 4 0 4)) (.addLast "protobuf-decoder" protobuf-decoder)))))

yes? (Minor nitpick, it’s confusing if someone is trying to follow along and learn about macros)

Vedang

Sorry about the formatting, what I wanted to point out was that it shouldn’t be pb but protobuf-decoder in the final example.

Aphyr on

Ah, thanks for catching that, Vedang!

Redington: you’re right, anonymous classes mean we no longer have to name our own instance of the factory. I’ve updated the post to show an example. We still need to look up, import, and implement the factory type–and the library author still needs to write the type in the first place.

When I write Java, I do try to take advantage of anonymous classes closing over final method variables as much as possible. Access to mutable values is… well, dangerous, as you note, but one does at least have a choice not to use them. ;-)

ryan king

This article reminds me of a similar comparison between Java and Scala:

http://robey.lag.net/2011/04/30/dissolving-patterns.html

AndreasS on

Couple of things here:

  • it’s kind of hard to see what the purpose of the blog post is; a different title could help, like “what I like about clojure macros” or “dissolving syntax patterns using (clojure) macros”
  • when explaining the doto form I was reminded of combinators: https://github.com/raganwald/homoiconic/blob/master/2008-10-29/kestrel.markdown I think a side step to explain how the general concept is named would have added value to your post
  • you state: “Scala, for instance, includes special syntactic rules for a broad variety of constructs (XML literals, lexical closures, keyword arguments, implicit scope, variable declaration, types)—and often several syntaxes for the same construct (method invocation, arguments, code blocks). When there are many syntax rules, understanding how those rules interact with each other can be difficult.”

your examples of macros do not solve this issue for me in a clear way

==> All together I liked the post but it seemed a little rushed. Also when you speak of “power” I like it when the axes on which that is measured are described in a clear and general way.

Andreas on

Hello Kyle,

thanks for this deep and insightful article! I do like the title, too :-) In the end, the title clearly states what it comes down to, IMHO.

Greetings, Andreas
