I’ve told this story in person dozens of times; it’s time to write it down and share it here. I’ve again experimentally recorded a video version (below), which you can view on a non-Flash device here.
The Prolog Story from Kyle Cordes on Vimeo.
I know a little Prolog, which I learned in college – just enough to be dangerous. Armed with that, and some vigorous just-in-time online learning, I used Prolog in a production system a few years ago, with great results. There are two stories about that woven together here; one about the technical reasons for choosing this particular tool, and the other about the business payoff for taking a road less travelled.
In 2004 (or so) I was working on a project for an Oasis Digital customer: a client/server application with SQL Server behind it. This application worked (and still works) very well for the customer, who remains quite happy with it. This is the kind of project where there is an endless series of enhancements and additions, some of them to attack a problem-of-the-moment and some of them to enrich and strengthen the overall application capabilities.
The customer approached us with a very unusual feature request – pardon my generic description here; I don’t want to accidentally reveal any of their business secrets. The feature was described to us declaratively, in terms of a few rules and a bunch of examples of those rules. The wrinkle is that these were not “forward” rules (if X, do Y). Rather, these rules described scenarios, such that if those scenarios occurred, then something else should happen. Moreover, the rules were about complex transitive/recursive relationships, the sort of thing that SQL is not well suited for.
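To give the flavor without giving anything away: the heart of the difficulty was relationships like transitive closure, which takes two clauses in Prolog but calls for recursive queries or an iterative search loop in SQL. The relation names below are purely hypothetical stand-ins, not the customer's:

```prolog
% Hypothetical stand-in relations; in the real system the base facts
% came from the database, not from a source file.
related(a, b).
related(b, c).
related(c, d).

% Transitive closure in two clauses: X reaches Y directly, or via some Z.
reaches(X, Y) :- related(X, Y).
reaches(X, Y) :- related(X, Z), reaches(Z, Y).
```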
An initial analysis found that we would need to implement a complex depth/breadth search algorithm either in the client application or in SQL. This wasn’t a straightforward graph search, though; that part was just the tip of the iceberg. I’m not afraid of algorithmic programming, and Oasis Digital is emphatically not an “OnClick-only” programming shop, so I dug in. After spending a couple of days attacking the problem this way, I concluded that it would be a substantial block of work, at least several person-months to get it working correctly and efficiently. That’s not a lot in the grand scheme of things, but for this particular customer it would use up their reasonable-but-not-lavish budget for months, even ignoring their other feature needs.
We set this problem aside for a few days, and upon more thought I realized that:
- this would be a simple problem to describe in Prolog (see the sketch after this list)
- the Prolog runtime would then solve the problem
- the Prolog runtime would be responsible for doing it correctly and efficiently, i.e. our customer would not foot the bill to achieve those things.
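Continuing the hypothetical reaches/2 sketch from earlier, “describe the problem and let the runtime solve it” looks like this: we state the rules and the facts, then ask. The depth-first search, backtracking, and bookkeeping are the interpreter's job, not ours:

```prolog
?- reaches(a, Y).
Y = b ;
Y = c ;
Y = d.
```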
We proceeded with the Prolog approach.
….
It actually took one day of work to get it working, integrated, and into testing, then a few hours, a few weeks later, to deploy it.
The implementation mechanism is pretty rough:
- The rules (the fixed portions of the Prolog solution) are expressed in a Prolog source file, a page or two in length.
- A batch process runs every N minutes, on a server with spare capacity for this purpose.
- The batch process executes a set of SQL queries (in stored procs), returning a total of tens or hundreds of thousands of rows of data. SQL is used to format that query output as Prolog terms. These stored procs are executed using SQL Server BCP, making it trivial to save the results in files.
- The batch process runs a Prolog interpreter, passing the data and rules (both are code, both are data) as input. This takes up to a few minutes.
- The Prolog rules are set up, with considerable hackery, to emit the output we needed as CSV. This output is directed to a file (a sketch of this step appears after the list).
- SQL Server BCP imports this output data back into the production SQL Server database.
- The result of the computation is thus available in SQL tables for the application to use.
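For the curious, here is a minimal sketch of the Prolog side of that glue. The file names and the result/2 predicate are illustrative stand-ins, not the real system's, and the real output step involved rather more hackery than this:

```prolog
% Minimal sketch with stand-in names. data.pl holds the facts that BCP
% exported, one Prolog term per row, e.g.:  related(1742, 9031).
main :-
    consult('rules.pl'),        % the page or two of fixed rules
    consult('data.pl'),         % the exported facts
    open('output.csv', write, Out),
    % result/2 stands in for whatever the rules derive; each solution
    % becomes one CSV row for BCP to import back into SQL Server.
    forall(result(A, B),
           format(Out, "~w,~w~n", [A, B])),
    close(Out).
```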
This batch process is not an optimal design, but it has the advantages of being quick to implement and robust in operation. The cycle time is very small compared to the business processes being controlled, so practically speaking it is 95% as good as a continuous calculation mechanism, at much less cost.
There are some great lessons here:
- Declarative >>> Imperative. This is among the most important and broad guidelines to follow in system design.
- Thinking Matters. We cut the cost/time of this implementation by 90% or more, not by coding more quickly, but by thinking more clearly. I am a fan of TDD and incremental design, but you’re quite unlikely to ever make it from a hand-coded solution to this simply-add-Prolog solution that way.
- The Right Tool for the Job. Learn a lot of them, don’t be the person who only has a hammer.
- A big problem is a big opportunity. It is quite possible that another firm would not have been able to deliver the functionality our customer needed at a cost they could afford. This problem was an opportunity to win, both for us and for our customer.
That’s all for now; it’s time for LessConf.