# Understanding function composition and partial application

Independent consultant and developer Chip Camden uses practical examples to explain two principles of Functional Programming.

Let's say that we have a list (or array) of temperatures in Celsius, and we want to create a new list containing the same temperatures expressed in Fahrenheit. In most imperative programming languages, this kind of operation calls for a loop. For instance, in C# you might do something like this:

``````
List<float> ctemps = new List<float>() {37, -5, 0, 48};
List<float> ftemps = new List<float>();
foreach (float temp in ctemps)
{
    ftemps.Add(temp * 1.8f + 32);
}
``````

Languages that support a Functional Programming style provide a higher-order function, usually called "map," that returns a new list generated by applying a function to each element of an existing list. Thus, for example, in Ruby we could rewrite the above as:

``````
ctemps = [37, -5, 0, 48]
ftemps = ctemps.map {|temp| temp * 1.8 + 32}
``````

The map function requires two parameters: the list on which to operate (ctemps in the above example) and the function to map over each of its elements (the code block in braces above). This arrangement is more obvious in a pure functional language like Haskell:

``````
let ctemps = [37.0, -5, 0, 48]
in map (\ x -> x * 1.8 + 32) ctemps
``````

The expression in parentheses is a lambda, which corresponds to the code block given in the Ruby example. The backslash stands for the Greek letter lambda (λ), and x is the name of its parameter. We could have passed a named function instead of the lambda; the only reason we used a lambda is that we don't have a named function that multiplies its parameter by 1.8 and then adds 32. We could write one:

``````
let ctemps = [37.0, -5, 0, 48]
    toF x = x * 1.8 + 32
in map toF ctemps
``````
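The same move works in Ruby: store the conversion in a named lambda and pass it to map. A minimal sketch (the name `to_f_deg` is hypothetical, chosen here for illustration):

```ruby
# A Ruby analogue of the named toF function above.
to_f_deg = ->(x) { x * 1.8 + 32 }

ctemps = [37, -5, 0, 48]
ftemps = ctemps.map(&to_f_deg)  # apply to_f_deg to each element
```

The `&` prefix converts the lambda into a block, which is how Ruby's map expects to receive its function.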

But in pure functional languages every operation is a function, including multiplication and addition. All the toF function is really doing is combining these functions, along with some specific parameters, into one function. The rest of this post will explain that statement.

## Partial function application

As I mentioned, in Haskell multiplication is a function, like everything else; it takes two numeric arguments and returns their product. But in Haskell, no function really has more than one parameter.

Those readers who are unfamiliar with Functional Programming are now trying to discern the sound of one hand clapping.

All functions in Haskell are curried: a function that appears to take two parameters is really a function that takes the first parameter and returns another function that takes the second. For example, let's say I have the following function:

``````
mymult :: Int -> Int -> Int
mymult x y = x * y
``````

This little function is a simplified version of multiplication, but it will serve to illustrate the point. The first line is our type signature; it says that this function takes an Int (integer) argument and returns a function that takes an Int argument and returns an Int. If we invoke it:

``````
mymult 2 3
``````

The expression `mymult 2` returns a function, and that function then takes 3 as its argument to complete the multiplication. Why is this important? Because we can break that application apart:

``````
let mymult2 = mymult 2
in map mymult2 somelist
``````
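Ruby can mimic Haskell's currying explicitly with `Proc#curry`. A sketch, reusing the hypothetical names `mymult` and `mymult2` from the Haskell example:

```ruby
# curry turns a two-argument lambda into a chain of one-argument
# functions, so we can supply the arguments one at a time.
mymult = ->(x, y) { x * y }.curry

mymult2 = mymult[2]       # partially apply: fix x = 2
mymult2[3]                # => 6

somelist = [1, 2, 3]
somelist.map(&mymult2)    # => [2, 4, 6]
```

The difference is that Ruby makes you ask for currying, while Haskell gives it to you by default.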

The map function expects as a parameter a function that takes only one parameter. Since mymult takes two parameters, we can partially apply it by supplying one of the arguments in order to create a function that we can pass to map. We don't even need to name it:

``````
map (mymult 2) somelist
``````

Better yet, we can ditch our custom function and just partially apply the multiplication operator:

``````
map (2*) somelist
``````

Now, why did I put the '*' after the 2? In Haskell, functions that are named with special symbols are infix by default. Haskell has a syntax for calling them in prefix fashion, as well as for calling prefix functions with infix notation, but we don't need to explore that here. Since multiplication is commutative, I could have said:

``````
map (*2) somelist
``````

If the operation were division or subtraction, though, then the order would matter.
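In Ruby terms, the two orderings are simply two different one-argument functions (the names below are hypothetical, chosen for illustration):

```ruby
# Order matters for non-commutative operators. These lambdas play the
# roles of the Haskell sections (/2) and (2/) respectively.
div_by_two = ->(x) { x / 2.0 }  # like (/2): divide x by 2
two_div_by = ->(x) { 2.0 / x }  # like (2/): divide 2 by x

div_by_two[8]  # => 4.0
two_div_by[8]  # => 0.25
```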

## Function composition

How does this help our little toF function? We've seen how we can pass a partially-applied binary operation to map, but in our case we need two operations: multiplication by 1.8 and addition of 32. We want to take the output of the first operation, and pipe it to the second one to get our result.

Well, that's what the "function composition" operator (.) is for. Joining two functions with a "." creates a new function that first invokes the right-hand function, then passes its result to the left-hand function. Thus, we can eliminate our toF function altogether:

``````
let ctemps = [37.0, -5, 0, 48]
in map ((+32) . (*1.8)) ctemps
``````
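Ruby (2.6 and later) offers the same idea through `Proc#>>`, which pipes left to right: `f >> g` applies f first and then g, mirroring Haskell's `g . f`. A sketch, with the hypothetical name `to_f_deg`:

```ruby
# Function composition in Ruby: multiply by 1.8 first, then add 32.
to_f_deg = ->(x) { x * 1.8 } >> ->(x) { x + 32 }

ctemps = [37.0, -5, 0, 48]
ftemps = ctemps.map(&to_f_deg)
```

Ruby also has `Proc#<<`, which composes in the same right-to-left order as Haskell's (.).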

We can also compose two other functions, show (convert to String) and putStrLn (output to the terminal), to see our result:

``````
main = do
  putStrLn . show $
    let ctemps = [37.0, -5, 0, 48]
    in map ((+32) . (*1.8)) ctemps
``````

We could have written that as `putStrLn (show (...))`. That's effectively the same thing, because we're evaluating it right away. In the case of map, though, as with all higher-order functions, we're building a pipeline that will be applied to values supplied at a later point. In that case, directly applying one function to the other won't work:

``````
map ((+32) (*1.8)) ctemps
``````

That's wrong, because (*1.8) returns a function, and (+32) is expecting a number. You have to use function composition to get this to work.

## Conclusion

If you've never been exposed to Functional Programming before, you may be experiencing a bit of vertigo, nausea, and numbness in the extremities that's a normal part of learning Haskell. There will be more, if you continue. But hopefully these disorienting episodes will lead to moments of enlightenment that will not only enable you to become proficient in Functional Programming, but will also make you a better programmer in any language.

Chip Camden has been programming since 1978, and he's still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology. Besides writing for TechRepublic's IT Consultant b...

aikimark

Chip Shouldn't that earlier example have been: [code] let ftemps = [37.0, -5, 0, 48] in map (\ x -> x * 1.8 + 32) ctemps [/code]

seanferd

For me, the examples really don't produce more disorientation than any other code examples, though. I found this to be rather clearly explained, in fact. The vertigo, well, VEBKAC in my case.

Sterling chip Camden

The Haskell "print" function could be used in place of "putStrLn . show", but I wanted to illustrate functional composition using non-symbolic functions.

Sterling chip Camden

The "let" is just setting up the list of Celsius temperatures that are then mapped in the "in" clause. If I wanted it assigned to ftemps, I could have included a let ftemps = around that, but then that would also need an "in" clause with some code that uses ftemps.

Sterling chip Camden

... that at bottom, most computers are essentially imperative with state modifications. Those of us who have worked in assembly languages tend to think in those terms, and knowing that the stateless functional/declarative abstraction is built on top of that, we keep on trying to see through the abstraction to what it's "really doing."

Sterling chip Camden

I think functional programming might be easier to understand for someone who doesn't have a long history with imperative programming. With more than 30 years in the latter, I found the former rather a shock at first -- although it elucidates and confirms some suspicions about programming that I've had for a very long time.

Charles Bundy

We're still using Von Neumann architecture... I swear FP ties me in knots! Your article on the other hand did not (knots that is..) great read!

Sterling chip Camden

There's so much to potentially learn about that we have to apply a focus to it. Otherwise, I might stop in the middle of writing some code to spend a few hours exploring the history of the shape of the letter 'l' in my DejaVu Sans Mono font (not that I haven't done that before).

Charles Bundy

" i am not sure that I agree that a language is JUST a means of modelling an idea." http://plato.stanford.edu/entries/language-thought/ I think Mark and Chip's point is that the professor in question was trying to get across a language paradigm. By focusing on the underlying machine (ala Wizard of Oz) the students were veering off into the weeds instead of focusing on the paradigm at hand e.g. do I solve this problem using functions, procedures, logic et-al. Good that they are curious, but in this case not certain it's perceptive on their part (unless of course the compiler is producing buggy code. Took a course using Modula-2, and spent more time debugging the professor's damn compiler than learning object based methods!) Speaking of computer architecture, on the way home from a soccer game last night my daughter wanted to stop off at the bookstore to get a magazine and something hot to drink. Lo and Behold I found the following book by Jane Smiley - http://en.wikipedia.org/wiki/The_Man_Who_Invented_The_Computer Treasure turns up at the oddest time :)

Mark Miller

I was complaining about a disparity in educational focus. It's not that important things are not being taught and explored. It's that a whole discipline that could contribute to this process has been marginalized for a long time. From a scientific perspective, yes, the model is the idea. As Dr. Dijkstra once said, we have the capability where we no longer have to worry about getting the computer to do what we want. We can describe our ideas formally, in whatever form we deem appropriate, and we can have it be the computer's job to execute the ideas. If the focus is on modeling how a car accelerates, or how we hear sound and what makes up sound waves, then yes, focusing on "How is the language doing this?" is a distraction, because the important thing to learn in this case is how programming helps us understand something else. If, on the other hand, the focus is on how the medium (the language) is constructed, and how this informs the construction of other media (languages), then the student's question would be an important thing to explore. I realize I'm coming at this from a different perspective, and I understand that the reason such questions might be asked, and accepted, is because students are curious to understand the "whole stack." They're more fascinated with the machine. They'd like to feel a full mastery of the system, rather than being at the mercy of somebody else's rules. I still maintain, though, that this is an engineering focus, not a scientific focus. Scientific thinking is different in that the goal, in this case, is to "construct to know," rather than "construct to use." The professor I spoke of specializes in media computation, so the focus was to get students looking at how they can use computation to understand the different elements that go into media. It's an introductory course. Now, will his students someday need to understand data processing, and get into the guts of the medium they're using? Yeah, probably. 
The point is that we don't have to make the underlying machine the only thing that's important. In fact that can be a distraction. It just depends on what the object of real study is at the moment.

Sterling chip Camden

https://secure.wikimedia.org/wikipedia/en/wiki/Lisp_machine The question about what's going on behind the scenes is a perceptive one, but it's not necessarily germane to learning a programming language. A programming language is supposed to be an abstraction above the hardware, and the compiler should optimize the best use cases for the language so that the programmer doesn't need to know how it's implemented under the covers. When performance constraints intrude, then the language implementation should document that a particular strategy has unavoidable performance penalties. But the programmer using the language should be thinking in terms of the language, not in terms of its underlying implementation. Exceptions for poor implementations granted.

VBJackson

Mark, Granted that I was a Computer Engineering major, not a CS major, but i am not sure that I agree that a language is JUST a means of modelling an idea. When the students asked "what was going on?", I would consider that not only a valid, but a PERCEPTIVE question. They are asking, how does the this language that we expect to model a process actually perform the modeling? Is it an acurate representation of the process? How is the compiler/interpreter going from the representation embodied in the language construct to an actual set of instructions that a CPU can execute? As far as developing alternate architectures, I am only aware of 3 basic categories of processor: Analog, Von Neuman, and Harvard or split bus. I would be very interested to find out how they designed an architecture to actually process Lisp statements. Do you have any references of it?

Sterling chip Camden

Very early in my programming career I sensed the conflict between mathematics and imperative programming. The BASIC statement `x = x + 1` offended my mathematical sensibilities. "That can't be true!" thought I. Soon, I came to understand the imperative nature of the statement, and the idea of state changes. Little did I know that I would need to undo all of that years later in order to understand FP.