Imperative vs. Declarative (latentflip.com)
66 points by philip_roberts on April 2, 2013 | hide | past | favorite | 51 comments


I'm not sure I agree with some of the examples in the article. The examples used paint declarative programming as basically abstractions over details. The problem is that there is no line where an abstraction crosses the boundary into declarative programming. It's not really about abstractions but about control flow. If your code has a specific order it has to run in, then it's imperative: you're still describing the steps needed to perform the action. SQL is declarative because you're describing the output set rather than the steps to generate it. Functional languages are considered declarative because pure functions can be rewritten, optimized, lazily evaluated, etc. by the runtime. I have a hard time considering map/reduce/etc. in isolation as examples of declarative programming, as they're usually used in conjunction with an algorithm that most definitely has a defined execution order.
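To put the distinction in Python terms (a sketch of my own, not from the article): even a comprehension, for all its abstraction, still implies a defined left-to-right evaluation order, unlike SQL.

```python
data = [3, 1, 4, 1, 5]

# imperative: we spell out each step and the order it runs in
evens = []
for x in data:
    if x % 2 == 0:
        evens.append(x)

# more abstract, but still not declarative in the SQL sense: the
# comprehension has a defined left-to-right evaluation order, so
# the runtime is not free to reorder or parallelize it
evens_too = [x for x in data if x % 2 == 0]

assert evens == evens_too == [4]
```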


I agree. I don't recognize the examples here as declarative programming.

SQL and Prolog are both examples of declarative programming, so is Make to an extent. Using a map function doesn't make JavaScript a declarative programming language - it's a functional programming concept, not a declarative one.


it's a functional programming concept

That doesn't keep it from being a declarative thing.


So what would be declarative programming? Does it even exist? I mean, at some point you need to write some logic, and logic is imperative. Take an HTML file. It is declarative, but the underlying logic is written somewhere else. So you can't really have pure declarative programming? If it is possible, how?


I think the important distinction is at what level of detail the logic is being passed off to the environment for translation into machine instructions. If the environment has freedom to decide exactly how to fulfill your request, then it's declarative. Imperative is when your logic is specified such that there is little room for the environment to make implementation decisions.

In the case of map/reduce, if these functions are implemented in your language (say in jQuery) then it's not declarative, as your implementation is still being specified. If you are actually talking to your environment in terms of map/reduce, then those methods are declarative. On the other hand, if we consider jQuery as a part of our environment, perhaps it does make sense to consider it declarative?


I think you're mistaking the program for the implementation. A program can be purely declarative; the fact that we need to create an imperative representation of it to run it on a computer doesn't change that.


An evaluation strategy is imperative but the logic that the strategy operates on doesn't have to be.


Great point!


Yes. Read up on Prolog.


I really like SQL. Sure, the language has warts but the ability to concisely represent WHAT you want, not HOW you want it, makes it very readable once you understand the simple constructs and how to properly design tables and indexes (not very hard).

For example, consider the problem of finding the second largest value in a set.

In SQL, I'd do something like:

  SELECT MAX( col )
    FROM table
   WHERE col < ( SELECT MAX( col )
                   FROM table )
  
It's pretty readable, and can almost be read in plain english: "Get the next biggest value from the table where it's smaller than the biggest value."

How might you do this in Java? http://stackoverflow.com/questions/2615712/finding-the-secon...

But look at all the other ways you can do it in that thread. None of them are very readable. And, they can hide subtle bugs that you won't find just by reviewing the code.

Ruby has a pretty concise example if you happen to know the trick, and that the sort performance isn't miserable (kind of a gotcha question): http://stackoverflow.com/questions/8544429/find-second-large...

This is a very simple example, but as you scale up to more complex problems I almost always find SQL is fewer lines of code, more readable, and far less buggy (SQL tends to either work or not work - I find much more subtle bugs in a 30 line Java algo than a 10 line SQL command).


FWIW here's a very similar Python version:

    max(col for col in table if col < max(table))
a fun one is this:

    from itertools import permutations
    first, second = max(permutations(table, 2))
although its semantics are slightly different (if the maximum of the table is duplicated, it'll be returned for both slots)

or using heapq which notaddicted mentioned.


I think the similitude is deceptive. Since declarative programming is side-effect free, the runtime can perform the operation in the most efficient way it knows. Python isn't, so it has to ensure some execution order that breaks possible optimizations.

In your example, if you replace max() with a function that prints something, you'll see it's executed for each value in 'table', which is extremely inefficient. This happens because Python can't guarantee that max() will return the same each time.
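A minimal sketch of that point (noisy_max is a made-up name standing in for a max() with a side effect):

```python
table = [3, 1, 4]
calls = []

def noisy_max(xs):
    # stand-in for max() with a side effect, so we can count calls
    calls.append(1)
    return max(xs)

# the condition is re-evaluated for every element, because Python
# cannot assume noisy_max is pure
result = max(col for col in table if col < noisy_max(table))

assert result == 3
assert len(calls) == 3  # once per element of 'table'
```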

Similarly, while in a side-effect free context the runtime could, for example, slice 'table', perform the work using multiple concurrent threads and then join the result, Python has to guarantee that the execution is done sequentially, since a change in order could affect the result.

Unlike in SQL, in Python you're always telling it HOW you want it done.


> I think the similitude is deceptive.

It's not.

> Since declarative programming is side-effect free, the runtime can perform the operation in the most efficient way it knows.

The point of declarative programming is to declare what you want done. The runtime may turn it into greatness or crap, that's not really relevant.

> In your example, if you replace max() with a function that prints something, you'll see it's executed for each value in 'table', which is extremely inefficient.

It's also not relevant and an implementation detail, with a known max() on a known type the implementation would be free to lift the computation out since this expression doesn't have side-effects. Just as a crappy SQL DB would be free to run the subquery once for each result.

> This happens because Python can't guarantee that max() will return the same each time.

The runtime can know that all types and functions involved are their native side-effect-less versions and is free to optimize them if it wishes, that it does not is irrelevant.

> Unlike in SQL, in Python you're always telling it HOW you want it done.

Nope, sorry, you're wrong.


> The point of declarative programming is to declare what you want done. The runtime may turn it into greatness or crap, that's not really relevant.

I didn't say the quality of the runtime was relevant. The point is that the runtime is free to achieve the goal you give it in any way it feels fit. In Python, the runtime isn't; it needs to ensure specific runtime guarantees.

>> In your example, if you replace max() with a function that prints something, you'll see it's executed for each value in 'table', which is extremely inefficient.

> It's also not relevant and an implementation detail, with a known max() on a known type the implementation would be free to lift the computation out since this expression doesn't have side-effects.

No, it can't. The Python reference specifies the semantics of generator expressions, and it says "Only the outermost for-expression is evaluated immediately, the other expressions are deferred until the generator is run"[1].

An implementation that lifted the computation out would not be implementing Python, but a derivative language.

Furthermore, your example didn't specify the type of either 'table' or 'max', so I still think it's deceptive.

> The runtime can know that all types and functions involved are their native side-effect-less versions and is free to optimize them if it wishes, that it does not is irrelevant.

See above. According to the language spec, it can't.

[1]: http://www.python.org/dev/peps/pep-0289/#the-details


Aside: Python is similar to Ruby, albeit using the sorted() function rather than the sort() method.

    sorted(vals)[-2]


Also available in python:

  import heapq
  n = 2 
  heapq.nlargest(n, vals)[n-1]
Edit: and out of morbid curiosity, here is the same implemented by a scan. It could be arranged as a single statement.

  from functools import reduce  # reduce was a builtin in Python 2
  swapIfGt = lambda xs, q: xs if xs[0] >= q else sorted([q]+xs[1:])
  n = 2
  reduce(swapIfGt, vals, [float('-inf')]*n)[0]  # [None]*n only sorts on Python 2
Edit2: rewrote swapIfGt


The pedant in me squirms at seeing a quadratic-time solution to something so linear. Use a heap or a scan, sir!


Oh, now I have to yell at myself. No doubt Python's sorted is some n*logn quicksort, and not actually in quadratic time. Crow for all!


It's an adaptive mergesort-based sort: http://en.wikipedia.org/wiki/Timsort


Still, I'd rather have a loop and be linear than use a sort and be linearithmic.
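For what it's worth, the loop version in Python, matching the SQL query's "largest value below the max" semantics (a sketch; second_largest is a made-up name):

```python
def second_largest(xs):
    # one pass, O(n): track the two largest distinct values seen so far
    first = second = float('-inf')
    for x in xs:
        if x > first:
            first, second = x, first
        elif first > x > second:
            second = x
    return second

assert second_largest([3, 1, 4, 1, 5]) == 4
assert second_largest([5, 5, 3]) == 3  # duplicates of the max are skipped
```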


    maximum $ filter (< maximum xs) xs
or like the Ruby one, but with better error semantics

    (`atMay` 1) . reverse . sort
since we don't know that there exists such a column.


The important part is that by abstracting the implementation from the declaration, the DBMS is free to compute this using whatever indexes, sorts and memory allocation it has to. So I think the way you do this in other languages doesn't even compare, because it's not really the same thing.

The effect is you end up with a declaration that is highly intelligible, exactly because you don't have to write the implementation.


> The important part is that by abstracting the implementation from the declaration, the DBMS is free to compute this using whatever indexes, sorts and memory allocation it has to. So I think the way you do this in other languages doesn't even compare, because it's not really the same thing.

The Ruby method-based form really is the same thing, because the semantics of the result are constant across different enumerable objects (conventionally, of course; nothing about the language enforces this), but different enumerables may well implement the behavior differently under the hood -- including using internal indexes -- and may, in fact, apply the same type of adaptive techniques used by an SQL-based RDBMS (or, in extreme cases, actually defer all the work to an SQL-based RDBMS where the data presented with an enumerable interface in Ruby actually lives.)

This is less true, perhaps, of some of the other examples, in that the implementation will be the same for all objects in the same system (but you still get some of the conceptual clarity advantages of declarative programming, even if it's not leveraged as effectively behind the scenes by the runtime adapting the actual methods used to the data it is working on.)


Prolog is declarative programming taken to the maximum (excluding things like Answer Set Programming/clingo etc).

In Prolog you ask questions. For example: subset([1,2],[2]).

then it goes away and says "yes". Or you want to know if any subsets exist: subset([1,2],B).

  B = []
  B = [1]
  B = [2]

This makes it really, really nice for some surprising tasks (Windows NT used to ship with a Prolog interpreter for setting up the network).


I would disagree - in a sense, Prolog is less declarative than Haskell. For example, the order of "procedure calls" matters in Prolog, a sign of imperative programming. There is no such thing in Haskell (unless imperative behavior is being simulated with Monads).


If you're familiar with or interested in Prolog, I would definitely recommend checking out Mercury. The language home page was just migrated and they're having broken link issues, but here's a link: http://www.mercurylang.org/ Also, you can check out the wikipedia page for a quick summary: http://en.wikipedia.org/wiki/Mercury_programming_language

The language has a lot of functionality that Prolog doesn't have and (thanks to a strong typing system) performs much better. It just needs a bigger community to support it.


Ooh, I forgot all about prolog!

I worked through the 7 languages in 7 weeks book, and solving a sudoku with prolog blew my mind. I think the first "real" programming I did was a sudoku solver in Excel and VBScript (yeuch).


Prolog is really awesome (and impressive) for solvers where there is an "optimum" solution for what you want.

It's fairly trivial to write a checkers AI in Prolog if you can define what you want + need.


Or to take his doubling example...

    double([], []).
    double([H | T], [H2 | T2]) :- H2 is H * 2, double(T, T2).


That's a lower-level version of map in a language with destructuring: Haskell would let you say

    double numbers = map (\x -> x * 2) numbers
or

    double [] = []
    double (x:xs) = x * 2 : double xs
or

    double = map (*2)
I don't think #2 is more declarative than #1, let alone #3. Then again, I don't think any of these versions is very declarative.

Declarative would be numpy:

    doubled = numbers * 2


Map and other functional constructs may be declarative, but I only "feel" like I'm programming declaratively when I'm coding in a language like Prolog.

The fact that, with unification and backtracking, you can not only get a result for a query, but also "pass a variable" as an argument and get a possible value makes it seem much more like a mathematical expression and less like a computation.

For example, I can define some relations:

  parent_of(john, mary).
  parent_of(mary, eve).

  grandparent_of(X, Y) :- parent_of(X, Z), parent_of(Z, Y).
And then I can simply run a query:

  ?- grandparent_of(john, eve).
  Yes
But I can also make it fill in the value for me:

  ?- grandparent_of(john, X).
  X = eve
'grandparent_of' is not some piece of code, it's an actual declaration of a relation between the terms.

Of course, you can do unification and backtracking in other languages, but Prolog is designed for it.
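You can fake a sliver of this in Python with generators standing in for backtracking (a sketch; parents, parent_of and grandparent_of are names I made up, and it only handles the facts above, not full unification):

```python
# the facts from the Prolog example, as (parent, child) pairs
parents = [("john", "mary"), ("mary", "eve")]

def parent_of(x=None, y=None):
    # None plays the role of an unbound Prolog variable
    for p, c in parents:
        if (x is None or x == p) and (y is None or y == c):
            yield (p, c)

def grandparent_of(x=None, y=None):
    # conjunction of two goals; the nested generators give us
    # a crude form of backtracking over the fact base
    for p, z in parent_of(x, None):
        for _, gc in parent_of(z, y):
            yield (p, gc)

assert any(grandparent_of("john", "eve"))                       # Yes
assert list(grandparent_of("john", None)) == [("john", "eve")]  # X = eve
```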


On the flip side, it also drastically changes the typical errors.

In imperative style, most of your mistakes or carelessness will usually mean that the machine makes a wrong result or crashes in the process - a bad 'what'.

In declarative style, most of your mistakes or carelessness will usually mean that the machine takes a bazillion-times-less-efficient route to that result, possibly taking 'forever' or running out of memory - i.e. a bad 'how'.


I've found in declarative style, most mistakes just turn into compilation errors.

That said, the "why did it choose that terrible implementation?" problem does occasionally come up in declarative programming, and inherently never comes up in imperative programming.


I don't think the author gets declarative right. It feels like he bolts a cool word onto some things he uses. Call me old fashioned, but I think Prolog is declarative, map() and reduce() are not.


Lately, when I code in C#, I write the code I wish was possible with the goal of trying to code to the problem as stated in the requirements. This way the code that solves the problem looks almost exactly like the description of the problem. That is step #1.

Step #2 is doing whatever is necessary to make that code work. Sometimes this means using the more interesting stuff like reflection, dynamic objects, expression tree visitors, etc. but I find that subsequent issues keep getting easier to deal with. This is because step #1 is naturally building a DSL for your problem domain and you start to find that what you did in step #2 is quite reusable.
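A tiny sketch of the same two-step style in Python (Report and its methods are hypothetical names for illustration, not from the parent's codebase):

```python
# step 1: the code we wish existed -- it reads like the requirement
class Report:
    def __init__(self, title):
        self.title = title
        self.steps = []

    # step 2: whatever is needed to make step 1 work; here each
    # method just records the request, building a tiny DSL
    def group_by(self, column):
        self.steps.append(("group_by", column))
        return self

    def total(self, column):
        self.steps.append(("total", column))
        return self

report = Report("monthly sales").group_by("region").total("revenue")
assert report.steps == [("group_by", "region"), ("total", "revenue")]
```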

I've been programming for a while, so I have experience with the imperative, "write the code that solves the problem" approach and it works too, but I am having fun with the "write the code that describes the problem" approach more.

Just my two cents.


This is what I love about C#: it really provides all the necessary components to do this style of "wishful development". I've started doing this everywhere and the results are great. Like you said, the code itself literally reads like a specification. It's allowed me to turn what otherwise would have been an extremely tedious web application into something I enjoy working on.

As an example, I turned what otherwise would have been an extremely tedious exercise in writing tons of obscure SQL (creating reports from a very non-standard database layout) into an API for creating reports that is literally like reading the specification for the report. And all of it was done in about 250 lines of C#. And to top it off we still have complete static type checking! I really cannot sing the praises of C# enough.


It is not just us programmers. Consider most cookbooks, then consider the directions that come with Ikea furniture. Of course, the real beauty of both of those examples is that they are a mix of declarative and imperative instructions.

For some reason, it seems we programmers are adamant that it must be one or the other. Consider all of the examples here: either purely imperative or purely declarative. Why not both?


Ideally, we would all write programs by assembling declarations, imperative code would be limited to internal implementations. That's largely the reason it's good practice to abstract away implementation behind APIs - what you have left is almost a pure declarative language, or DSL, that maps 1:1 your problem domain, without looping or branching or I/O (which are computation details).

Taking the example from the original article, it would be more akin to:

    // Implementation
    function double(n) {
      return n * 2;
    }

    // Declaration
    [1,2,3,4,5].map(double)
    => [2,4,6,8,10]


And, see, I would actually invert further. The declaration should be:

    elements = [1,2,3,4];
    doubledElements = doubleElements(elements)
Basically, if you see the words map, fold, reduce in your code, you are probably not as easy to understand as you'd like to think.

Of course, in the cooking metaphor, I'm ok with mutating elements and just doing:

    elements = [1,2,3,4];
    doubleElements(elements);
This clearly has issues if multiple "cooks" are working with elements. But is ridiculously easy to intuit regardless. (Precisely because in real life so many things are changed by imperative commands.)


> And, see, I would actually invert further. The declaration should be:

But then you lose pureness, right? The whole point of using higher-order functions is allowing you to be as declarative as mathematics, so you can just compose functions together.

Consider that in the first example, I only need to write the implementation for doubling a number n, while the `doubleElements` implementation is too specific and would throw the other half of the code back into imperative land.


Only in my example that relied on mutations. The first can all be implemented with pure functions just fine. Indeed, I was assuming it would be, hence the assignment to a new variable.

I suppose I should have said that the doubleElements implementation would likely be that map one liner. (Though, it needn't be. One could exploit custom knowledge of the domain there to do crazy crap like memoize the calls.)

That make sense?


> Only in my example that relied on mutations. The first can all be implemented with pure functions just fine. Indeed, I was assuming it would be, hence the assignment to a new variable.

Yes, the first example uses pure functions, but I guess you're confusing pure functions with HOFs [1]. The point is only having to write the implementation to double one number, and extrapolating it by composition. Consider that in your example, for instance, you would need a `doubleHash` function for hashes, and so on.

[1]: http://en.wikipedia.org/wiki/Higher-order_function


I did not realize we were debating HOF versus pure functions.

That is, I'm fine with using both the pure and HO functions. I just think hiding the HOF ones behind a normal function call is usually a big win for readability.

So, to do the full example:

    elements = [1,2,3,4]
    doubledElements = doubleElements(elements)
    function doubleElements(e) {
        function double(n) { return n*2; }
        return e.map(double);
    }
Where I would assume the "reader" code would only have the first two lines. The rest would be behind the implementation layer. If the double function would be used elsewhere, no need to scope it to doubleElements.


well one declares variables:

    - given a pan
    - given 3 eggs
    - given a little olive oil
then one executes orders :

    - add the oil in the pan
    - break the eggs and add them in the pan
    - cook the eggs for 5 mins
    - serve the food hot
so I guess one is never really doing either purely declarative or purely imperative cooking/programming?

I like the cookbook metaphor for programming.


It is actually even more interesting in some respects. You consider "break the eggs and add them in the pan" an imperative command; yet it has to be learned. Just going off of that, there are so many ways it can go wrong that it is frightening. Consider: how do you break the eggs, and do you add the entirety to the pan? (Yes, I just watched my 3 year old try this recently.)

So, don't get me wrong. I'm all for declarative actions. I just think the "purely declarative" approach that many languages try and impose is a useful aberration when you consider how the vast majority of "programming" is done.


Great article, but one thing that's sort of glossed over here, and that I half-disagree with, is this:

>But we also get to think and operate at a higher level, up in the clouds of what we want to happen, and not down in the dirty of how it should happen.

The author mentions this at the end, but I feel it should be stressed more strongly: The dirty of how is important. The author presents a big "if" here, which is: if the function we've built to abstract away some ugliness performs in the best, most efficient way possible, with no drawbacks, then, yes, abstracting that functionality away and forgetting it is okay.

But to me that's a big if. It is just as important to me to understand and recognize that map is fast, efficient, and to understand why it's fast and efficient, so that someday, if you come across a situation where map does not apply, you will know why, and you'll be able to use something better.

Being up in the clouds all the time is, to me, a pipe dream -- we must always be cognisant of the ground on which we built this tower of abstraction layers.


The fact that map is fast and efficient isn't really its selling point to me, though. It's that it's a simple concept, so I use it when the concept applies, not when I need to worry about efficiency. So it doesn't matter to me how it works, as long as it does what I expect.


OP could definitely have used more examples, but I think he's on the right track. Where declarative or functional programming comes in really handy is composition. Underscore has a lot of utilities that make it easy.

  var genericFilter = function(type, value) {
    return function(items) {
      return _.filter(items, function(i) {
        return i[type] === value;
      });
    }
  };

  var sizeFilter = genericFilter('size', selectedSize);
  var brandFilter = genericFilter('brand', selectedBrand);

  var appliedFilters = _.compose(sizeFilter, brandFilter);

  var filteredItems = appliedFilters(items);
  // which ends up doing  sizeFilter(brandFilter(items));
Edit: fixed some sloppy code.
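The same composition trick in Python, for comparison (a sketch with made-up filter values; compose and generic_filter are my own names):

```python
from functools import reduce

def compose(*fns):
    # right-to-left, like _.compose: compose(f, g)(x) == f(g(x))
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

def generic_filter(key, value):
    return lambda items: [i for i in items if i.get(key) == value]

size_filter = generic_filter("size", "M")
brand_filter = generic_filter("brand", "acme")

applied_filters = compose(size_filter, brand_filter)

items = [{"size": "M", "brand": "acme"},
         {"size": "L", "brand": "acme"},
         {"size": "M", "brand": "other"}]
# brand_filter runs first, then size_filter
assert applied_filters(items) == [{"size": "M", "brand": "acme"}]
```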


I work with a bunch of UX designers, and as the only developer here I'm often confronted with their question of "why can't I just describe what I want done?"

Their apprehension of tackling code is one I don't immediately understand, but I do get that they don't want to think about the how, rather the what. It's a funny parallel.

Here's a great video by Bret Victor who saw this problem, and tried to fix it for animation:

https://vimeo.com/36579366#t=1748


I prefer the imperative style personally, I like things done the way I want ... I kid, great write-up though.


What is the result of procedural programming? Functions that can be used declaratively! The purpose of procedural programming is to encapsulate and consequently eliminate "telling the 'machine' how to do something".

PS: What happened to 3GL vs. 4GL?



