# Effects

A pure function looks the same in almost any programming language. Where today's programming languages differ is in their handling of *effects*: the parts of the program that consist of something other than just substituting a function call with its return value.

[There is an equivalence](http://conal.net/blog/posts/the-c-language-is-purely-functional) between these things: any effect can be captured in a value that "lazily" represents performing that effect, and any manipulation of values could be modelled as an effect. [Assembly language](http://wall.org/~lewis/2013/10/15/asm-monad.html) (or rather machine code) is, paradoxically, the most extreme example of both: a machine-code program consists entirely of effects, because there's not even anywhere to put a value. But equivalently, a machine-code program is a value in the most obvious possible way: it's nothing but a sequence of bytes.

Modern languages make finer-grained distinctions; in Haskell-style functional languages it's common to have a value (e.g. `GenerateRandomNumber`) that's "interpreted" first in some specific context (e.g. `Rng[Int]`), then in a more general context (e.g. `IO[Int]`), with several layers before we finally reach the stage of a general effect.

To talk of "effects" is inherently to adopt a certain perspective on these things: that they do have something in common and can be treated similarly; in a sense I'm already assuming the Haskell worldview, and the biases that come with it. But hopefully this is enlightening for programmers of other schools, and will maybe shed some new light on certain language design decisions.

Unsurprisingly, different programming languages take different approaches to managing effects. What's more interesting is when the same language adopts quite different attitudes to two effects that seem to me quite similar. Mostly I'll use Java, Python and Scala as examples, because they're the languages I'm most familiar with. When I'm aware of a language with a distinctive position on some area, I'll mention it. Apologies if I misunderstand a feature of a less familiar language.

Compiling this list of examples, a few axes emerged:

* How an effect appears locally in the source code. Some effects are *magic* (that is, completely invisible) - the language simply manages that effect for you (so it will almost always also be implicit and safe). Some are *concise* and some are *verbose*.
* How an effect appears more globally, from the function that calls the function in which the effect occurs. Some are *implicit* - a call to a function that performs that effect looks like any other function call. Some are *single-function* - the effect must be "resolved" in a single function (usually the "parent" function that might call child functions), and can't be split across multiple function scopes (i.e. starting in one function and being finished in another, sibling function). Some are *[special-cased](http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/)* - functions with this effect must be handled differently from other functions, perhaps with language support or perhaps not. And some are *generic* - the effect is visible in the function call, but "effect-ness" can be abstracted over in the language.
* Whether the use of the effect is *safe* - the language enforces that the effect is handled correctly - or *unsafe* - it's up to the programmer to do that.

I also found a distinct class of *semi-safe* effect handling, where there are constructs and simple rules that can be followed to ensure the effect is handled safely, but the language also permits invoking the effect unsafely (perhaps e.g. in legacy libraries).
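To make the "effect as a value" idea concrete before diving into the examples, here's a minimal Scala sketch (the `Rng` type and the names in it are my own illustration, not any particular library): a random-number effect is an ordinary value that does nothing until an interpreter runs it.

```scala
import java.util.Random

// An effect captured as a value: Rng[A] describes a computation that needs a
// random-number generator to produce an A, but performs nothing by itself.
final case class Rng[A](run: Random => A) {
  def map[B](f: A => B): Rng[B]           = Rng(r => f(run(r)))
  def flatMap[B](f: A => Rng[B]): Rng[B]  = Rng(r => f(run(r)).run(r))
}

object RngExample {
  // The primitive effect, as a value we can store, pass around and compose.
  val generateRandomNumber: Rng[Int] = Rng(_.nextInt())

  // A larger program built purely by combining values; nothing has run yet.
  val twoDice: Rng[Int] = for {
    a <- generateRandomNumber.map(n => math.abs(n % 6) + 1)
    b <- generateRandomNumber.map(n => math.abs(n % 6) + 1)
  } yield a + b

  // Only interpretation actually performs the effect:
  // twoDice.run(new Random(42L))
}
```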
# The examples

## Memory management

Java, Python, Scala: magic, implicit, safe. Most languages these days have automatic garbage collection.

C: verbose, implicit, unsafe. C is the canonical example of a language with manual memory management: the programmer explicitly calls `malloc` and `free`, and is responsible for ensuring they're called correctly. Allocation and ownership are implicit beyond function scope: the only way to know whether a function will allocate memory for new data structures, or free that of existing ones, is with out-of-band information like comments.

C++: semi-concise, semi-generic, semi-safe. C++ can often feel like several languages at once (and so some now-common constructs are more verbose than they should be), but provided you use the right constructs (smart pointers, stack allocation with RAII) and follow a few rules then memory safety is enforced. How memory management appears globally is an interesting mix: memory management information is sometimes visible in a function definition (smart pointers, references) but not always. Abstracting memory management generically is interesting: in theory RAII means memory and non-memory resources can be managed in the same way (so e.g. templates can be polymorphic in what kind of resource they're using), and templates can handle smart pointers or in theory even abstract over reference-ness. But I've seen limited use of these techniques in practice, in part because of the complexity of the template system.

Rust: concise, single-function?, safe. Rust's big selling point is that memory is manually managed, but safety is enforced (via the borrow checker), with the lifecycle of any function parameter visible in the function signature. I'm not clear on exactly how "lifecycle-polymorphic" functions can be - Rust doesn't have the abstractions you need to treat effects completely generically (i.e. higher-kinded types), but some level of abstraction over borrowed vs. owned vs. shared resources might be possible.

## Non-memory resources

Java <=6, Python <=2.4 using manual resources: verbose, implicit, unsafe. The programmer must explicitly free resources, and it usually requires a `try`/`finally` block to do correctly.

Python <=2.4 using destructors: magic, implicit, safe-ish. Python uses reference counting so has deterministic(ish) destruction, so it's possible to rely on object destructors to free resources (but at the risk of turning relatively benign memory leaks into more perilous resource leaks). I've even seen code that does this in early Java, but garbage collection makes it very unsafe: it's very possible to e.g. hit the OS file handle limit because your program never runs out of memory so never closes its files.

Python 2.5+ using `with` statements: concise, single-function, semi-safe. A specific statement gives a resource block scope, safely, but this relies on the programmer using that statement. Resources can't cleanly be passed across function boundaries - they have to belong to a particular block.

Java 7+ using try-with-resources: concise, single-function, safe. As Python, but the compiler will warn if the programmer forgets to use the statement with an `AutoCloseable` object.

Scala using scala-arm: concise, generic, semi-safe. The `Resource` abstraction makes it possible to pass a resource safely across function boundaries, and since it's an ordinary parameterized type we can abstract over it like any other effect. But as in the Python case, the user has to explicitly call `managed` on their resources.
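As a sketch of what that looks like (assuming the scala-arm library and its `resource._` import; `firstLine` is just an illustrative helper of mine), a managed resource is an ordinary value until something acquires it:

```scala
import resource._
import java.io.{BufferedReader, FileReader}

object ResourceExample {
  // A ManagedResource is an ordinary value: it can be returned from a function,
  // stored, or composed with other resources before anything is opened.
  def reader(path: String): ManagedResource[BufferedReader] =
    managed(new BufferedReader(new FileReader(path)))

  // Acquiring the resource runs the block and closes the reader afterwards,
  // even if readLine throws.
  def firstLine(path: String): String =
    reader(path).acquireAndGet(_.readLine())
}
```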
C++ using RAII: concise, implicit, safe. In C++ resources are usually managed via object construction and destruction; since object lifecycles are already managed carefully for the sake of memory management, we get management of other resources "for free".

Go with "dispose": concise, ???, ??? - I've heard extreme claims about how nice this is. Must investigate further.

## Error handling

Java using checked exceptions: verbose, special-cased, safe. Java enforces that exceptions are caught or propagated explicitly at the language level. Exceptions are distinct from the rest of the type system and it's impossible to abstract over whether a function throws. Handling the possibility of error in a single function call requires a slightly clunky `try`/`catch` block.

Python, Java using unchecked exceptions, Scala or C++ using exceptions: concise, implicit, unsafe. Any function might throw, and exceptions might not be caught.

Scala using `Either` or `\/`: concise, generic, semi-safe. Possible errors are an ordinary parameterized type we can abstract over like any other effect, but the user must ensure their code returns errors rather than throwing (and use the `catching` construct to wrap legacy libraries).
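A minimal sketch of this style, using only the standard library (`parsePort` is an illustrative example of mine): the possibility of failure is part of the return type, and `catching` wraps a legacy call that might throw.

```scala
import scala.util.control.Exception.catching

object PortParser {
  // The possible error is part of the return type, so callers must handle it
  // (or pass it along) rather than being surprised by an exception.
  def parsePort(s: String): Either[String, Int] =
    catching(classOf[NumberFormatException])
      .either(s.trim.toInt)                      // wrap the legacy, throwing call
      .left.map(_ => s"'$s' is not a number")
      .right.flatMap {
        case p if p >= 1 && p <= 65535 => Right(p)
        case p                         => Left(s"$p is out of range")
      }

  // Being an ordinary value, the error channel composes like any other effect:
  // parsePort("8080").right.map(p => s"listening on port $p")
}
```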
C using error codes: verbose, special-cased, unsafe (though gcc does have `warn_unused_result`). Error codes are passed as values of the same type as the actual value, meaning the language can't enforce that they're checked correctly. Propagating errors usually requires manual code and can't be abstracted over, since there is no standardization of what different error codes might mean.

Go: verbose, special-cased, safe. As C, but the `res, err =` syntax enforces that errors are handled.

Rust: concise, semi-generic, safe. As Scala, but this is a language built-in mechanism so there is no risk of legacy code throwing exceptions. Error-ness [can be sort of abstracted over via a macro](http://lucumr.pocoo.org/2014/11/6/error-handling-in-rust/), but the language lacks higher-kinded types so can't express fully effect-generic functions.

## File I/O

Java, Python, mainstream Scala: magic, implicit, "unsafe". In most languages any function might read or write files. Reading or writing files is usually inherently safe, but in the presence of multithreading it can require very careful manual steps to ensure correctness.

Scala using `IO`: concise, generic, semi-safe. Again this is using an ordinary value to encapsulate I/O, so we can abstract over it and any other effect. The language enforces that `IO` values are composed correctly, but the programmer needs to take care to wrap legacy library calls if they perform I/O.

Haskell: concise, generic, safe. As Scala, but this is a built-in language feature so any library will be explicit about I/O.

## Random number generation

Interestingly, these are exactly as for file I/O:

* Java, Python, mainstream Scala: magic, implicit, "unsafe".
* Scala using `Rng`: concise, generic, semi-safe.
* Haskell: concise, generic, safe.

## Database access

Java/Scala JDBC, Python mysql module: verbose, implicit, unsafe. Any function might access the database, and the programmer must manually manage transaction boundaries.

Java/Scala with Spring JDBC template: concise-ish, single-function, semi-safe. Since Java doesn't have first-class functions it can sometimes be hard to abstract away boilerplate; often the best approach is to define a superclass that can be extended by user code, which is what Spring JDBC does. It's safe if the database is only accessed within that class, but it does force database accesses to happen in a single block (we can't e.g. pass an open transaction between functions).

Java/Scala with Hibernate/JPA, Python with Django: magic, single-function or implicit, unsafe. These approaches offer two possibilities for transaction boundaries: either method-scoped via `@Transactional`, in which case database access has to happen in a single block and the programmer must ensure that only `@Transactional` methods access the database, or session-in-view (i.e. one transaction per web request), in which case the transaction is implicitly threaded between functions but not controlled, which can cause problems with async/multithreaded code or scheduled tasks that run outside the context of any particular web request.

Scala with Squeryl: concise, single-function, unsafe. Squeryl's `inTransaction` blocks are a very similar abstraction to the above; they're slightly friendlier to automated refactoring (since they're ordinary method calls rather than annotations) but otherwise behave the same.

Scala with Doobie: verbose, generic, safe. This follows the parameterized type approach, so database code can be used fully generically and passed between functions. Code is verbose because it only offers a JDBC-like row interface (no ORM).

## Async

Java with callbacks: verbose, single-function, unsafe. Most languages "support" callback-based async (it doesn't require any language-level support); it's inherently single-function (callbacks can't be composed/chained) and it's up to the programmer to ensure that the correct things happen when code runs on different threads.

Java with Quasar, Python with various async libraries, Go: concise, implicit, safe. A relatively new model of green threading, with thread transitions automatically and implicitly managed by the runtime. Relies on the runtime to ensure correctness of thread-dependent code; can make e.g. FFI calls (i.e. calling C libraries) difficult. The programmer has no control over thread transitions.

Erlang: concise, special-cased, safe. As above, but Erlang makes an explicit distinction between messages and function calls, giving the programmer more control over when context switches happen (perhaps because it's used for telecoms, i.e. realtime?) but making it harder to abstract over sync vs. async.

Scala using `Future`: concise, generic, safe. Once again, in Scala this is "just" a parameterized type.

## Custom constructs (e.g. audit logging)

A custom construct by its nature can't be handled magically; this section is about the mechanisms languages offer for adding your own custom effects.

Any language: verbose, implicit, unsafe. It's always possible to implement an effect "by hand", even if a language doesn't offer anything more.

Java using Spring AOP or AspectJ, Python using decorators: concise, single-function, unsafe. We saw these approaches in the database section, but it's possible to use them for custom effects - it's very easy in Python since a decorator is just a function. In Java the implementation is more verbose, but at the point of use it's a simple annotation.

Scala using a custom monad: concise, generic, semi-safe. A custom effect is just another effect, and provided we define the "primitives" effectfully then the type system will enforce that they're composed correctly.
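As an illustration (my own sketch, not a library API; `Audited`, `audit` and `withdraw` are illustrative names), an audit-logging effect can be a small Writer-style type: the only way to emit an audit record is through the `audit` primitive, and the type system ensures the log is threaded through wherever the effect is used.

```scala
// A tiny Writer-style monad for an audit-logging effect: a value paired with
// the audit records produced while computing it.
final case class Audited[A](log: List[String], value: A) {
  def map[B](f: A => B): Audited[B] = Audited(log, f(value))
  def flatMap[B](f: A => Audited[B]): Audited[B] = {
    val next = f(value)
    Audited(log ++ next.log, next.value)
  }
}

object Audited {
  def pure[A](a: A): Audited[A] = Audited(Nil, a)
  // The single "primitive": the only sanctioned way to emit an audit record.
  def audit(message: String): Audited[Unit] = Audited(List(message), ())
}

object AuditExample {
  // Business logic declares the effect in its type and composes like any other value.
  def withdraw(account: String, amount: BigDecimal): Audited[BigDecimal] =
    for {
      _          <- Audited.audit(s"withdraw $amount from $account")
      newBalance <- Audited.pure(BigDecimal(100) - amount) // stand-in for the real lookup
    } yield newBalance

  // withdraw("acct-1", BigDecimal(30)).log == List("withdraw 30 from acct-1")
}
```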
# Thoughts

I started thinking about these issues after reading a few pieces about async in Javascript, particularly [this call for a magic approach](http://jlongster.com/Stop-Trying-to-Catch-Me). I was struck by the fact that Go favours a magic/implicit style for async despite taking a very verbose/special-cased approach to error handling. But on further thought the same contradiction is present in the Haskell tradition, which goes as far as managing I/O explicitly, but has long advocated a magic/implicit approach to memory management.

Partly these languages are just playing to their strengths. In the pre-Java 7/Python 2.5 days, C++'s resource management was looked on with envy, but it was often observed that this was the product of necessity - since C++ programmers have to manage memory as an explicit resource, they made sure there were great tools for explicitly managing resources. Similarly, Haskell has great tools for concisely and generically operating on explicit effects (monads), which was surely a factor in the decision to manage I/O as an explicit effect. Conversely, with no unmanaged mutable state, the costs of garbage collection are lower than in more conventional languages. No doubt similar considerations apply to Go's design decisions, as alien as they seem to me.

Perhaps the most important case is error handling in Java. Checked exceptions seem like a language-appropriate approach: they're verbose but safe, which could be Java's motto. But in practice users have turned away from them, partly because of that verbosity, partly because they're hard to abstract over (e.g. in a callback context). I can't help wondering if there's a lesson for the Haskell tradition here. Managing many things explicitly improves safety but robs functional languages of their greatest strength, conciseness. OCaml "should" be a worse language than Haskell - if only for the lack of higher-kinded types - but for many use cases, I'm starting to think it's the best option available.

On the other hand, my Scala experience has convinced me that retrofitting effects like `IO` is not an option; even one incorrectly annotated library destroys most of the value of such an approach, and there are a lot of libraries out there.

Maybe the takeaway is that there is no perfect language, that Haskell will always be a better choice for financial computations and Python a better one for quick and dirty scripts. But as a Scala fan even that's unsatisfying; while it makes compromises at both ends, I'd rather use Scala for a whole system than write parts of it in multiple languages.