
That Time I Wrote a Calculation Engine That No One Knew How To Use

♾️ He talks in maths, he buzzes like a fridge...

I work on software, and I've been doing so professionally for over a decade now. Shortly after starting at my current job, I was pulled into a brand new project building a quoting tool for one of our sales divisions. These quotes had a lot of complexity and cascading effects involving margins and markups, sometimes deriving final amounts from those, sometimes going the other way, and it all had to add up correctly.

This is, just so we're clear, a lot more complex than it sounds. Computers are machines built to do math, but they are very opinionated about the math they do. You see, for humans a number is a number, but to a program running on the JVM, a number is a byte, a short, an int, a long, a float, or a double (and sometimes also a boolean), and these different types don't inherently play nicely together. If you take $1 and add $0.75 to it, the computer may just tell you it can't do that unless you convert the 1 to a 1.0 first. However, any time you introduce a decimal in computer math, you risk a loss of precision, because the value is stored as a binary floating point number that can't represent most decimal fractions exactly. Loss of precision in business accounting is not desirable; ergo, the base unit of money is not the dollar but the penny. But things also need to be human-readable, so you're constantly switching back and forth. $1.00 gets turned into 100 pennies, then manipulated to become 175 pennies, then turned back, and then anything that relies on that amount converts it again. There are libraries to make this simpler, but they don't play nicely with your basic math operators, so you have to do everything in method calls, which ends up feeling like reverse Polish notation, but in a text document where you can't actually see the numbers.
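To make that dance concrete, here's a minimal Java sketch (the amounts are mine, purely illustrative): doubles drift, BigDecimal is exact but operator-less, and pennies-as-integers work fine right up until a human needs to read them.

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Binary floating point can't represent most decimal fractions exactly.
        System.out.println(0.1 + 0.2); // prints 0.30000000000000004, not 0.3

        // BigDecimal is exact, but there are no operators -- everything is a
        // method call, which is where the reverse-Polish-notation feel comes in.
        BigDecimal one = new BigDecimal("1.00");
        BigDecimal total = one.add(new BigDecimal("0.75")); // 1.75

        // Or keep the books in pennies (integers) and shift the decimal point
        // only at the edges, when a human needs to read the number.
        long cents = 100;                                  // $1.00
        cents += 75;                                       // now 175 pennies
        BigDecimal display = BigDecimal.valueOf(cents, 2); // back to 1.75
        System.out.println(total + " vs " + display);
    }
}
```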

There's all sorts of silly crap like this that is not hard, per se, but it gets to be a lot to keep track of. And if you're not very deliberate with how you wrangle it, you end up with something that's difficult to read and impossible to debug. One of the approaches we chose for this project was to break calculations down into smaller chunks that could be individually unit-tested and maintained. The challenge, then, was one of orchestration. How do you make sure things fire in the right order? How do you avoid circular dependencies?
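Before getting to the ordering problem, here's what one of those small chunks might look like. This is a hypothetical reconstruction, not the project's actual code (MarginCalculator and its fields are invented): one calculator, one derivation, small enough to pin down with an ordinary unit test.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// One tiny calculator, one derivation: margin percentage from price and cost.
class MarginCalculator {
    BigDecimal calculate(BigDecimal price, BigDecimal cost) {
        return price.subtract(cost)
                .divide(price, 4, RoundingMode.HALF_UP) // fraction, 4 places
                .movePointRight(2)                      // to a percentage
                .setScale(2, RoundingMode.HALF_UP);     // 25.00, not 25.0000
    }
}

class MarginCalculatorTest {
    @Test
    void fiftyDollarsOnTwoHundredIsTwentyFivePercent() {
        BigDecimal margin = new MarginCalculator()
                .calculate(new BigDecimal("200.00"), new BigDecimal("150.00"));
        assertEquals(new BigDecimal("25.00"), margin);
    }
}
```

Multiply that by a few dozen interdependent fields, though, and the ordering question becomes the whole ballgame.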

So I had an idea, and the rest of the team was happy to let me run with it, since I'm one of those people who doesn't find math intimidating at all. I wanted to create a dynamic orchestration engine that would take calculators, determine their dependencies, and order them appropriately. First I made sure that every single calculator in the stack had only one output. Then I created a Spy of the data model (a Spy is a type of Mock object used in testing that keeps track of how it's interacted with). I ran the entire calculation stack against it and used reflection to determine which fields each calculator read from (its inputs) and which one it wrote to (its output). From there I was able to build a dependency tree which I could use to order the calculation stack. If you wanted to calculate something for a new field, all you had to do was write a calculator with any number of inputs and only one output and register it with the orchestrator, and it would get picked up and integrated automatically. And for good measure, I had a unit test that would fail if it ever detected a circular dependency. I built it and I tested it and it became the backbone of the entire calculation matrix. And speaking of the matrix...
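To give the idea some shape, here's a toy reconstruction. This is my sketch, not the original engine, and it fakes the spy with a recording map rather than real reflection, but the principle is the same: dry-run each calculator once to learn what it reads and the one field it writes, then topologically sort so inputs are always computed first.

```java
import java.util.*;

// Toy sketch of the orchestration idea (not the original implementation).
class SpyModel {
    final Map<String, Double> values = new HashMap<>();
    final Set<String> reads = new LinkedHashSet<>();
    final Set<String> writes = new LinkedHashSet<>();

    double get(String field) { reads.add(field); return values.getOrDefault(field, 0.0); }
    void put(String field, double value) { writes.add(field); values.put(field, value); }
}

interface Calculator { void calculate(SpyModel model); }

class Orchestrator {
    // Dry-run each calculator against a fresh spy to discover its inputs and
    // its single output, then order them with a depth-first topological sort.
    static List<Calculator> order(List<Calculator> calculators) {
        Map<String, Calculator> producerOf = new HashMap<>();
        Map<Calculator, Set<String>> inputsOf = new HashMap<>();
        for (Calculator c : calculators) {
            SpyModel spy = new SpyModel();
            c.calculate(spy);
            if (spy.writes.size() != 1)
                throw new IllegalStateException("calculators must have exactly one output");
            producerOf.put(spy.writes.iterator().next(), c);
            inputsOf.put(c, spy.reads);
        }
        List<Calculator> ordered = new ArrayList<>();
        Set<Calculator> done = new HashSet<>(), visiting = new HashSet<>();
        for (Calculator c : calculators)
            visit(c, producerOf, inputsOf, done, visiting, ordered);
        return ordered;
    }

    private static void visit(Calculator c, Map<String, Calculator> producerOf,
                              Map<Calculator, Set<String>> inputsOf, Set<Calculator> done,
                              Set<Calculator> visiting, List<Calculator> ordered) {
        if (done.contains(c)) return;
        if (!visiting.add(c)) // already on the current path: a cycle
            throw new IllegalStateException("circular dependency detected");
        for (String input : inputsOf.get(c)) {
            Calculator producer = producerOf.get(input); // null = external input
            if (producer != null)
                visit(producer, producerOf, inputsOf, done, visiting, ordered);
        }
        visiting.remove(c);
        done.add(c);
        ordered.add(c);
    }

    public static void main(String[] args) {
        // Registered in the "wrong" order on purpose: total depends on tax.
        Calculator total = m -> m.put("total", m.get("subtotal") + m.get("tax"));
        Calculator tax   = m -> m.put("tax",   m.get("subtotal") * 0.08);
        List<Calculator> ordered = order(List.of(total, tax)); // tax first, then total
        System.out.println(ordered.size() + " calculators ordered");
    }
}
```

One caveat of this dry-run sketch, and part of why the real thing leaned on reflection: a calculator with conditional branches may not touch every field it can read on a given run, so the toy version above would silently miss those dependency edges.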

If I may paraphrase The Architect: The first calculation engine I designed was quite naturally perfect. It was a work of art, flawless, sublime. A triumph equaled only by its monumental failure.

You see, it's not that it didn't work. It worked spectacularly. The problem was that I had written a framework, and it was opaque. People wanting to edit or add a calculator had to grok the entire framework first. They didn't inherently understand why a calculator could only have one output. Or the circular-dependency test would fail and they would have no idea why. Suddenly I was a gatekeeper, the only person who could interact with the damned thing. So a few months later, I ended up rewriting it entirely.

There's a lesson that any programmer worth their salt eventually learns, and this was the incident that drove it home for me. You spend more time reading code than writing it, and you spend more time working on other people's code than on your own. It doesn't matter how brilliant this thing you've constructed is if the person who comes after you doesn't know what to do with it. Far better to keep things obvious and simple, and if they can't be simple, they should at least be intuitive.


Comments

Ed Skinner said…
I wrote a set of programs that all worked in pennies instead of dollars. Any fractional part of a penny could be safely rounded in the usual way, and after shifting the decimal to make the humans happy, the numbers always came out right. But "the team" said the effort (doing all the conversions so the math could all be done with integers [except for the rounding part]) made the code hard to maintain, and the work was moved to /dev/null.
BTW: The annoying little errors that creep in with floating point are due to the binary conversion to/from decimal. You can express 10.3 exactly in decimal, but it can't be done in binary no matter how many little fiddly bits are included.