I remember taking a course on *combinatorial optimisation* while studying. Combinatorial optimisation problems include the *travelling salesman* and *knapsack* problems. Both are problems where we have a lot of combinations to try — in which *order* do we visit the cities, and which *combination* of items do we put in our bag?

I enjoyed the course a bunch, but the topic has seldom come up since I finished studying, as it’s often seen as a thought experiment with the purpose of making you reason in a different way, with no practical application. Yet every time I meet someone who has studied or attempted some form of combinatorial optimisation, they tell me that they loved it — it’s a very simple, yet different, programming model. With that said, I would like to introduce you to the world of combinatorial optimisation through the lens of *constraint programming* — in my opinion one of the more intuitive ways of doing combinatorial optimisation. I will try to avoid as many unnecessary details as possible, and instead focus on the intuition of the paradigm.

## The Grand Idea 🔗

Constraint programming is a programming paradigm which is very different from the ones you might have heard of before, such as imperative and functional programming. Those paradigms are all based on one premise: you tell the computer *how* to solve a task, and it carries it out for you. Constraint programming is very different in this regard. Like other declarative paradigms, when solving problems with constraint programming we don’t necessarily care about *how* a problem is solved, but rather *what* problem is solved. It might be hard to imagine how this is done, or you might scoff and think that it’s very limited and cannot be used for real problems — hopefully I can convince you otherwise!

## The Execution 🔗

The basic gist of constraint programming is that we’ve got variables which can take certain values (a variable’s *domain*), and we *constrain* these values more and more until we find a solution.
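As a tiny, made-up example in the *MiniZinc* language (which we will use later on): we declare two variables with their domains and pose a couple of constraints — the solver figures out the rest.

```minizinc
% Two variables, each with domain {1..9}.
var 1..9: x;
var 1..9: y;

% Constrain the values: the solver prunes everything that cannot hold.
constraint x + y = 10;
constraint x > y;

solve satisfy;
```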

## Exploration: the n-queens problem 🔗

The n-queens puzzle is the general version of the *eight queens puzzle*, which can loosely be stated as placing *8 queens* on an *8×8* chess board without any queen being able to attack another queen.


## Modelling 🔗

In order to get a grip of how this works, let’s model the puzzle in a naive way. Let the main data structure be our board: a matrix of boolean values, each boolean representing whether a queen has been placed on that cell or not. Each cell can thus be thought of as a variable with domain {0, 1}. This sets up the basics of the board.

In addition to the basics of the board, there are a couple of constraints that must be obeyed in the puzzle for a candidate to be considered a solution, namely:

- There can be a maximum of one queen per row.**[1]**
- There can be a maximum of one queen per column.
- There can be a maximum of one queen per diagonal.

**[1]**: Compare this with the more imperative way of thinking: no more “*if* there’s a queen on a board cell” — we simply pose *what* must be true in our model.

As previously mentioned, each cell of the blank board has the domain {0, 1}. Adding these constraints does not limit a blank board any further — each cell still has the domain {0, 1}. The next step is simple: we *guess* the value of a variable — we pick a cell on the board and set it. That is, we might say *there’s no queen in this cell* (set it to 0), or, conversely, that *there’s a queen in this cell* (set it to 1).

Once we’ve, for example, placed a queen at a certain cell, we can apply each constraint and remove infeasible values. Since we know that a solution *must* have at most one queen per row, we can simply fix every other variable in this queen’s row to *0*. The same goes for the queen’s column and diagonals. Once we are unable to remove any more values, we make another guess — and this goes on until we find a solution. If a guess leads to a dead end, we simply backtrack and try another guess.

For the observant: placing a queen in this model allows us to remove far more values than saying that a queen is *not* on a board cell!

That is essentially the grand idea — we look at the constraints and remove infeasible values from each variable’s domain until we’ve found a solution.

Here is a model in the *MiniZinc* language. Please try running the code at: https://play.minizinc.dev. **Important note**: for the n-queens problem you (usually) want to find *all* solutions – that is, how many ways we can place the queens. Here, we are just getting *one* solution (treating it as a combinatorial *satisfaction* problem!).
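A naive matrix model along the lines described above might look like the following sketch. One assumption worth flagging: the at-most-one constraints alone would be satisfied by an empty board, so this sketch instead requires exactly one queen per row and per column, which forces all n queens onto the board.

```minizinc
int: n = 8;

% board[i, j] = 1 iff there is a queen on row i, column j.
array[1..n, 1..n] of var 0..1: board;

% Exactly one queen per row and per column
% (together these also force exactly n queens in total).
constraint forall(i in 1..n)(sum(j in 1..n)(board[i, j]) = 1);
constraint forall(j in 1..n)(sum(i in 1..n)(board[i, j]) = 1);

% At most one queen per diagonal, in both directions.
constraint forall(k in 1 - n .. n - 1)(
  sum(i, j in 1..n where i - j = k)(board[i, j]) <= 1
);
constraint forall(k in 2 .. 2 * n)(
  sum(i, j in 1..n where i + j = k)(board[i, j]) <= 1
);

solve satisfy;
```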

## Can we do better? 🔗

How far can you get with the previous model? On my computer, I’m able to solve roughly 22 queens in 11 seconds; beyond that it becomes very slow.

With *regular* programming paradigms, when we want to get more juice out of a finished program, we usually roll up our sleeves and try to improve the hit ratio of our caches — we tinker and fiddle, trying to improve *how* we solve the problem. When modelling, we instead lean back in our chair and think about whether we can model the problem in a *better* way — which usually means checking if we can reduce the search space, either by figuring out a tighter *upper* or *lower bound* for a variable in our problem, or by reducing the number of variables.

One such optimisation for the previous example is to reduce the matrix into an array of variables with larger domains: one variable per column, whose value is the row of that column’s queen. Since each column now holds exactly one queen by construction, we also get fewer variables and fewer constraints to check.
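A sketch of that reformulation: one variable per column holding the row of its queen, with pairwise constraints ruling out shared rows and diagonals.

```minizinc
int: n = 150;

% q[i] = the row of the queen placed in column i.
% Each column holds exactly one queen by construction.
array[1..n] of var 1..n: q;

constraint forall(i, j in 1..n where i < j)(
  q[i] != q[j]         /\  % not on the same row
  q[i] + i != q[j] + j /\  % not on the same / diagonal
  q[i] - i != q[j] - j     % not on the same \ diagonal
);

solve satisfy;
```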

With this one, I can run 150 queens (!) in about 5 seconds.

We can take this version and make it more succinct by using built-in generic constraints that often pop up in problems.
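Using MiniZinc’s built-in `all_different` global constraint, a sketch of the same model becomes:

```minizinc
include "globals.mzn";

int: n = 150;

% q[i] = the row of the queen placed in column i.
array[1..n] of var 1..n: q;

constraint all_different(q);                      % distinct rows
constraint all_different([q[i] + i | i in 1..n]); % distinct / diagonals
constraint all_different([q[i] - i | i in 1..n]); % distinct \ diagonals

solve satisfy;
```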

We get similar results for this one — 150 queens in about 5 seconds — but it looks a bit neater, I think.