Note: This article is part of a larger series.
In recent years, the software industry has made a dramatic shift away from static/compiled programming languages toward dynamic languages, at least in situations where flexibility is valued over raw performance. However, these languages introduce special challenges (such as validation), and additional infrastructure and tooling are needed as the codebase scales.
IT legacy nightmare
The debate of static versus dynamic languages is not new; in fact, it goes back to 1958 and the creation of LISP. However, many CIOs would agree that the recent proliferation of languages is leaving behind an unsustainable pile of technical debt. Each new language provides one or two marginally beneficial features, but within a few months another flashy language seems to come along. Even traditional stalwarts like C# and Java increasingly resemble JavaScript, which raises serious questions about their existing stacks. Worse, current dynamic language tooling is often adapted from compiler-driven "flat file" workflows and offers almost no support at enterprise scale.
Every sufficiently complex application/language/tool will either have to use Lisp or reinvent it the hard way
— Greenspun’s Tenth Rule of Programming
Led by Goldman Sachs' "billion dollar secret" after the 2008 crash, NYC has embarked on a massive buildout of "codebuilder" technology in Eastern Europe to address the lack of dynamic language infrastructure.
Tiny Estonia has been largely unable to participate in this frenzy, as these are labor-intensive mega-projects. Looking at the larger picture, however, we see NYC essentially constructing point solutions (much like the programming language dilemma).
As such, we are questioning the role of the underlying "operating system", e.g. what has it done for us lately? Even back in 1964, the software industry realized that one cannot easily leverage dynamic languages, AI automation, etc. without a minimum level of "smart" programming infrastructure. I realize Seattle/San Francisco have an almost cargo-cult reverence towards the operating system, so perhaps the term "virtual machine" is more acceptable. But I want to take the reader back to the original (and much larger) vision behind UNIX and revisit some of the groundbreaking ideas that were abandoned when the 1970s recession hit.
History buffs will recall we are talking about Multics.
We will now attempt to converge the following concepts: category theory (the notion of "lift"), context, persistent memory, and single-level memory.
Turn back while you still can
Category theory is a branch of mathematics that attempts to recognize patterns and unify concepts. However, it suffers from a bit of a paradox in that one cannot use the same terminology to describe something ‘outside’ of itself, and the Scala and Haskell type systems are somewhat infamous in this regard. This leads to a soup of confusing algebra even when describing fairly simple things. In the end, the reader must "go behind the matrix" in order to visualize what is really going on.
The main takeaway here is the notion of "lift", which simply means we raise a concept to a higher level of abstraction where it can be merged with other concepts that normally don't fit together.
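As a rough illustration (the container and names below are invented for this sketch, not taken from the prototype), "lifting" an ordinary function lets it operate on values living inside a wrapper it knows nothing about:

```js
// A minimal sketch of "lift": take an ordinary function on plain values and
// raise it so it also works on values sitting inside a wrapper (a trivial
// Maybe-style container here; the names are illustrative only).
const Just = (value) => ({ kind: 'just', value });
const Nothing = { kind: 'nothing' };

// lift :: (a -> b) -> (Maybe a -> Maybe b)
const lift = (fn) => (wrapped) =>
  wrapped.kind === 'just' ? Just(fn(wrapped.value)) : Nothing;

const double = (n) => n * 2;        // knows nothing about wrappers
const liftedDouble = lift(double);  // now composes with wrapped values

console.log(liftedDouble(Just(21)));  // { kind: 'just', value: 42 }
console.log(liftedDouble(Nothing));   // { kind: 'nothing' }
```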
Even in a purely functional (stateless) program, the developer must push a rapidly growing number of parameters around the call tree. Lacking a formal way to manage this data, clever devs might materialize the call stack into function instances via closures or currying techniques, or rely on data passing behind the scenes via some sort of built-in monad. The elephant in the room is the lack of proper system support for transient data (to say nothing of error handling), because it is difficult for languages to separate the notion of scope from control flow.
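A small sketch of the two workarounds mentioned above; the configuration fields and function names are hypothetical:

```js
// Threading "ambient" data down a call tree without system support.

// Option 1: explicit threading -- every signature grows with the config.
function fetchRate(config, symbol) {
  return `${config.dbUrl}/rates/${symbol}`;   // stand-in for a real query
}

// Option 2: closures/currying -- materialize the config into function
// instances once, so the rest of the call tree never sees it.
const makeServices = (config) => ({
  fetchRate: (symbol) => `${config.dbUrl}/rates/${symbol}`,
  log: (msg) => console.log(`[${config.region}] ${msg}`),
});

const services = makeServices({ dbUrl: 'db1', region: 'EU' });
services.log(services.fetchRate('EURUSD'));                      // config never reappears
console.log(fetchRate({ dbUrl: 'db1', region: 'EU' }, 'EURUSD')); // config threaded by hand
```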
A ‘monad’ in the programming world usually boils down to the idea of hidden help behind the curtains. Context can be thought of as King of the Monads. Think of it as a "lift" of the Multics/UNIX shell to understand how we would use it to track configuration data across multiple lines of code.
Below we see a context named whitsunday that reminds us of a UNIX shell. Indeed, we can view the browser as a modernized ‘terminal’. When we create the object foo, it appears much like an empty folder, and after we create a member called x we can "cd" into it (like object path navigation) and "list" the contents.
Context
With a nod to the venerable "vi" command, we allow direct manipulation of the function test or the variable str1. This illustrates the larger ambition behind Multics versus the bare-bones dev support we have in UNIX/Linux. Because we have context, there is a notion of spatial (path) location for these granular entities, independent of control flow (or source files). Note that our prototype is not simply resting these objects in a traditional UNIX filesystem but rather in regular program memory. We will return to context later.
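The prototype's actual API is not spelled out in this article, so here is a hypothetical sketch of a shell-like context (the method names mkobj, cd and ls are invented) that conveys the idea:

```js
// Hypothetical sketch of a shell-like context; method names are invented.
class Context {
  constructor() { this.root = {}; this.cwd = this.root; this.path = '/'; }
  mkobj(name) { this.cwd[name] = {}; }                    // like "mkdir"
  set(name, value) { this.cwd[name] = value; }
  cd(name) { this.cwd = this.cwd[name]; this.path += name + '/'; }
  ls() { return Object.keys(this.cwd); }                  // like "list"
}

const ctx = new Context();
ctx.mkobj('foo');               // foo appears much like an empty folder
ctx.cd('foo');                  // navigate "into" the object
ctx.set('x', 42);               // create a member called x
ctx.set('test', (n) => n + 1);  // functions live at paths too
console.log(ctx.path, ctx.ls());   // /foo/ [ 'x', 'test' ]
```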
Intel, HP, Micron and others are developing non-volatile (NVDIMM) memory chips and associated APIs to make persistent memory programming easier. Intuitively, persistent memory behaves much like a laptop that sleeps when you close the lid. In theory, if you arranged things properly ahead of time, you could treat your little program as an in-memory database and make code updates etc. on the fly. NYC in-memory trading systems have operated this way for a while, although they often just treat persistent memory as another type of storage device or ‘distributed persistent memory’, e.g. blockchain.
True "persistent memory" makes little sense to the average programmer coming from a background in static languages. Low-level programs are full of brittle memory "references" to heaven-knows-what, and trying to preserve them in situ is just asking for a hot (loading) mess. Moreover, complex runtime objects such as an HTTP server are usually assembled as a one-off side effect of running a von Neumann machine over a list of build instructions; memory addresses were never intended to be primary keys. On the other hand, dynamic languages employ stable late-binding name references which (by design) tend to be more robust.
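A small sketch of the difference, with an invented registry standing in for program memory:

```js
// Why late-bound name references survive persistence better than raw ones.
const registry = { 'services/pricing': { version: 1 } };

// A raw reference captured once keeps pointing at the old object after a
// reload/replace -- much like a dangling memory address.
const direct = registry['services/pricing'];

// A late-bound reference resolves the name on every use, so it always finds
// whatever currently lives at that path.
const lookup = (path) => registry[path];

// Simulate restoring or upgrading the object (e.g. after a restart).
registry['services/pricing'] = { version: 2 };

console.log(direct.version);                     // 1 (stale)
console.log(lookup('services/pricing').version); // 2 (current)
```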
Below is a simple example of persistent memory. The lv shortcut (list ‘save’ status) shows two variables as green/new (y shows up first because I created it more recently for the example) until x is saved. Later, x shows up red to warn us of unsaved changes.
Persistent memory
The ll command here confirms that x has actually been saved. Note how context performs the vital role of ‘root’ anchor for persistent memory pathing. Python programmers may be familiar with the idea of "pickling" or "application checkpointing" in long-running programs, but the difference here is that state is restored automatically if the computer goes down. Note that this particular implementation also tracks meta information and locking (Multics supported access control lists on all sorts of things).

I should point out that the reactive community has seen non-persistent "transactional memory" before; mobx and Meteor are notable examples for JS process coordination. Erlang goes a step further, with both memory and disk persistence via mnesia. However, a functional programmer might view persistent memory as anathema, and this is where the separation of functional code from configuration comes in (after all, simply loading a program into memory is a state change, and having the ability to roll back is often the larger unspoken goal of FP). The debate therefore comes down to whether it is more appropriate to use a DSL like SQL or a Turing-complete language for persistent storage. Of course, the more typical use case is simply saving function edits (akin to database stored procedures). The more interesting case is when we start creating transient compound objects.
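Returning to the save-status example above, here is a rough sketch of the lv/ll idea. The real system keeps state in (persistent) memory; a JSON file on disk stands in for it here, and the helper names simply mirror the shortcuts above:

```js
// Track which context entries have unsaved changes and flush them on demand.
const fs = require('fs');

const store = { file: 'context.json', saved: {}, dirty: {} };

const set = (name, value) => { store.dirty[name] = value; };  // unsaved change
const lv = () => Object.keys(store.dirty);                    // list unsaved entries
const ll = () => Object.keys(store.saved);                    // list saved entries
const save = (name) => {
  store.saved[name] = store.dirty[name];
  delete store.dirty[name];
  fs.writeFileSync(store.file, JSON.stringify(store.saved, null, 2));
};

set('y', 7);
set('x', 42);
console.log(lv());   // [ 'y', 'x' ] -- both unsaved
save('x');
console.log(lv());   // [ 'y' ]
console.log(ll());   // [ 'x' ]
```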
So far the ideas presented are comparable to a commercial DBMS with advanced support for various language extensions, but now we are going to apply some category theory to bring a number of concepts together. Single-level memory, or SLM (also called single-level store), is another innovative Multics concept, later advanced by IBM in the 1970s, that attempts to extend a single programming model across various devices and operating system capabilities.
We extend the classic notion of Single-Level Memory to mean:
These concepts have been illustrated previously, but essentially language neutrality means no single DSL is "subordinate" to another, e.g. reduced to embedded strings. The example below shows both SQL and JavaScript treated as peers:
Single-level memory
Combining statements across languages is where category theory fits naturally: we need to map SQL datatypes to JavaScript. However, the code was not overtly imperative in that it did not specify which database we are talking to, nor how the FP should be handled (sets vs. scalars, sync vs. async, etc.). Nor did the function foo have to loop explicitly over the set results. These are configuration settings best handled by the context.
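A hypothetical sketch of the shape of such code (the entity and configuration names are invented): the SQL and the JavaScript are stored as peer entities in a context, and the wiring (which database, set vs. scalar, sync vs. async) comes from context configuration rather than from the code itself.

```js
// Hypothetical sketch: SQL and JavaScript as peer entities in a context.
const context = {
  config: { database: 'quotes_db', shape: 'set', mode: 'async' },
  entities: {
    // a SQL entity, stored as a first-class peer of the function below
    top_trades: "SELECT sym, qty FROM trades WHERE qty > 1000",
    // a JS entity written against a single row; the runtime is expected to
    // map it over the whole result set because config.shape is 'set'
    foo: (row) => ({ sym: row.sym, notional: row.qty * 100 }),
  },
};

// A stand-in runtime that consults the context to decide how to run things.
async function run(ctx, queryName, fnName, rows) {
  // a real runtime would send ctx.entities[queryName] to ctx.config.database
  const fn = ctx.entities[fnName];
  return ctx.config.shape === 'set' ? rows.map(fn) : fn(rows[0]);
}

run(context, 'top_trades', 'foo', [{ sym: 'AAPL', qty: 2000 }]).then(console.log);
```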
Note that foo may be a proxy to another underlying language implementation (e.g. for performance or as a legacy bridge). Placing a "virtual" dynamic layer atop static code has been popular in NYC trading systems for a while and is part of a larger enterprise architecture of functional/configuration separation. We believe this will become more mainstream with VMs like WebAssembly (which ironically takes us back to LISP).
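A sketch of that pattern using a standard JavaScript Proxy; the legacy object here is a stand-in, but the idea of intercepting calls by name and delegating to an underlying implementation is the same.

```js
// A "virtual" dynamic layer delegating to an existing implementation.
const legacyLib = {
  price: (qty) => qty * 99.5,   // imagine this bridging into C++/Java/WASM
};

const foo = new Proxy(legacyLib, {
  get(target, prop) {
    if (!(prop in target)) throw new Error(`no such function: ${String(prop)}`);
    return (...args) => {
      console.log(`delegating ${String(prop)} to the legacy layer`);
      return target[prop](...args);   // route the call through
    };
  },
});

console.log(foo.price(1000));   // 99500, via the virtual layer
```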
In the above example, single-level memory handles mapping of the function foo and the variable result1 to a virtual filesystem format, allowing a conventional editor to manipulate them. Although this looks like a normal filesystem, it is really a memory mapping. As we see below, the context cloud simply appears to Atom as a traditional folder and the entities automagically map to files:
The key thing here is that edits can be bi-directional, e.g. a "save" in Atom will be immediately hot-loaded into the runtime, allowing for a more immersive REPL-driven development experience. On the same theme, it is interesting to note that Google, Facebook and others are looking at Filesystem in Userspace (FUSE) technology to improve their code development systems.
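A minimal sketch of the hot-load half of that loop in Node.js, assuming a file foo.js that exports a function; the editor-facing half (exposing memory as files) would sit on top of something like FUSE and is not shown.

```js
// Reload a definition into the running context whenever the editor saves it.
const fs = require('fs');
const path = require('path');

const context = { foo: null };
const file = path.resolve('./foo.js');   // assumed to exist and export a function

function reload() {
  delete require.cache[require.resolve(file)];   // drop the cached module
  context.foo = require(file);
  console.log('hot-loaded foo from', file);
}

reload();
fs.watch(file, (eventType) => {
  if (eventType === 'change') reload();   // an editor "save" becomes a live update
});
```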
For completeness, we show that the result1 variable is automatically rendered as a CSV file:
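For the sake of illustration, rendering a result-set variable as CSV text might look like this (the columns are invented):

```js
// Render an array of row objects as CSV, the way result1 is exposed above.
const result1 = [
  { sym: 'AAPL', qty: 2000 },
  { sym: 'MSFT', qty: 500 },
];

const toCsv = (rows) => {
  const header = Object.keys(rows[0]).join(',');
  const body = rows.map((row) => Object.values(row).join(',')).join('\n');
  return `${header}\n${body}`;
};

console.log(toCsv(result1));
// sym,qty
// AAPL,2000
// MSFT,500
```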
Things get more interesting with mapping of SLM to JSON, which can be useful for managing various configuration files.
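For example, a configuration subtree of the context could round-trip through JSON so that ordinary config tooling can edit what is really live program state (the settings below are made up):

```js
// Round-trip a context subtree through JSON.
const context = {
  settings: { db: { host: 'localhost', port: 5432 }, retries: 3 },
};

// "Export" the subtree as a JSON document a tool (or person) can edit...
const asJson = JSON.stringify(context.settings, null, 2);

// ...then merge an edited copy back into the live context.
const edited = JSON.parse(asJson.replace('5432', '6432'));
Object.assign(context.settings, edited);

console.log(context.settings.db.port);   // 6432
```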
Exploring FP a bit further, a context can optionally bind variable names to column names in the input set as shown below:
All these settings are akin to how environment variables are handled in the UNIX shell, but the motivation is (1) removing noise from the code to make it easier to follow what the developer is trying to do, and (2) trying to be more declarative even in a conventional imperative language.
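A hypothetical sketch of that binding step (the binding map and column names are invented): the context maps the names a function uses to the actual column names in the input set, so the function body stays free of mapping noise.

```js
// Bind code-friendly names to the input set's column names via the context.
const context = {
  bindings: { price: 'px_last', quantity: 'size' },   // code name -> column name
};

// The function is written against the friendly names only.
const notional = ({ price, quantity }) => price * quantity;

// The runtime consults the context to rename columns before each call.
const applyBindings = (ctx, row) =>
  Object.fromEntries(
    Object.entries(ctx.bindings).map(([name, col]) => [name, row[col]])
  );

const rows = [{ px_last: 101.5, size: 200 }, { px_last: 99.0, size: 50 }];
console.log(rows.map((row) => notional(applyBindings(context, row))));  // [ 20300, 4950 ]
```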
I should also point out that contexts have interesting multi-user collaboration properties, all inspired by Multics.
I hope this visual walkthrough shows you how somewhat arcane concepts like category theory and persistent memory can fit together into something more tangible, and provides a roadmap for what programming might look like in the future. IRL, we constantly see examples of the modern world outpacing increasingly antiquated infrastructure laid generations ago, so why should technology be any different? UNIX has influenced the design of operating systems for decades, but we believe the time has come for the software industry to bring forward the grander vision behind it all. Continued here.