What is the true nature of computation? A hundred years ago, humanity answered that question when, in 1936, Alan Turing invented the Turing Machine, which, heavily inspired by the mechanical trends of the 20th century, distilled the common components of early computers into a single universal machine that, despite its simplicity, was capable of performing every computation conceivable. From simple numerical calculations to entire operating systems, this small machine could compute anything. Thanks to its elegance and simplicity, the Turing Machine became the most popular model of computation, and it served as the main inspiration behind every modern processor and programming language. C, Fortran, Java and Python are languages based on a procedural mindset, which is heavily inspired by Turing's invention.

Yet, the Turing Machine wasn't the only model of computation that humanity invented. Though it is a less well-known story, also in 1936, and in a completely independent way, Alonzo Church invented the Lambda Calculus, which distilled the common components of mathematics into a single universal language.

The Church-Turing hypothesis essentially states that computers are capable of emulating each other. If that were the whole story, the choice between these models wouldn't matter. After all, if, for example, every programming language is capable of solving the same set of problems, then what is the point in making a choice?

Yet, while the Church-Turing hypothesis makes a statement about computability, it says nothing about computation. In other words, one model can be inherently less efficient than another. Historically, procedural languages such as C and Fortran have consistently outperformed the fastest functional languages, such as Haskell and OCaml. Yet, languages like Haskell and Agda provide abstractions that make entire classes of bugs unrepresentable, and, historically, the functional paradigm has been more secure. Of course, these factors can vary greatly, but the point is that this notion of equivalence is limited, and there are impactful differences.

In 1983, Stephen Wolfram introduced Rule 110, an elementary cellular automaton that has been shown to be as capable as both the Turing Machine and the Lambda Calculus. Wolfram argues that this model is of fundamental importance for math and physics, and that a new kind of science should emerge from its study. These claims were met with harsh scepticism; after all, if all models are equivalent, what is the point? Yet, we've just established that, while equal in capacity, different models result in different practical outcomes. Perhaps there isn't a new branch of science to emerge from the study of alternative models of computation, but what about the design of processors and programming languages?

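To make the term "elementary cellular automaton" concrete, here is a minimal Haskell sketch (our own illustration, not part of the original text) of Rule 110 on a finite, wrapping row of cells: each cell's next state depends only on itself and its two neighbours, with the number 110 encoding the eight-entry update table.

```haskell
import Data.Bits (testBit)

-- The number 110 encodes the update table: the next state of a cell is
-- bit (4*left + 2*centre + right) of 110.
rule110 :: Bool -> Bool -> Bool -> Bool
rule110 l c r = testBit (110 :: Int) (4 * fromEnum l + 2 * fromEnum c + fromEnum r)

-- One synchronous update of a row of cells, wrapping at the edges.
step :: [Bool] -> [Bool]
step cells = zipWith3 rule110 (rotateR cells) cells (rotateL cells)
  where
    rotateL xs = tail xs ++ [head xs]   -- right neighbours
    rotateR xs = last xs : init xs      -- left neighbours

-- Print a few generations, starting from a single live cell on the right.
main :: IO ()
main = mapM_ (putStrLn . map (\b -> if b then '#' else '.'))
             (take 10 (iterate step start))
  where
    start = replicate 31 False ++ [True]
```

Despite this tiny rule table, Rule 110 has been proven capable of universal computation, which is the equivalence the paragraph above refers to.
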
A model of computation has a significant impact on the way we design our languages, and several of their drawbacks can be traced back to limitations of the underlying model. For example, the parallel primitives of the procedural paradigm are notoriously difficult to use correctly: parallel programming is generally considered hard, and programmers still write sequential code by default. Security is another problem inherent to the procedural paradigm. Global state, mutable arrays and loops generate an explosion of possible execution paths, edge cases and off-by-one errors that make absolute security all but impossible. Even highly audited code, such as OpenSSL, is often compromised by out-of-bounds exploits. The functional paradigm handles both issues much better: there is an enormous amount of inherent parallelism in functional programs, and logic-based type systems make entire classes of bugs unrepresentable. But if that is the case, then why are functional programs still mostly single-threaded and bug-ridden? And why isn't the functional paradigm more prevalent?

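As an illustration of the kind of guarantee the previous paragraph alludes to, the sketch below (our own minimal example, not code from any project mentioned here) uses Haskell GADTs to track the length of a list in its type, so that taking the head of an empty list, a close cousin of the out-of-bounds bug, simply cannot be written.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, used to index vectors by their length.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- 'safeHead' only accepts provably non-empty vectors, so the
-- "head of an empty list" bug is unrepresentable: the bad call is
-- rejected at compile time instead of crashing at runtime.
safeHead :: Vec ('S n) a -> a
safeHead (VCons x _) = x

main :: IO ()
main = print (safeHead (VCons (42 :: Int) VNil))
-- print (safeHead VNil)  -- does not type-check
```

Languages like Agda and Idris push this idea much further, but even this small example shows how a logic-based type system removes a whole class of runtime failures.
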
A strong factor, we argue, is performance. As much as the Turing Machine, and, thus, the procedural paradigm as a whole, is inadequate for parallelism and security, the Lambda Calculus, and the functional paradigm as a whole, is inadequate for efficient computation. The fundamental operation of the Lambda Calculus, substitution, may trigger an unbounded number of copies of an unboundedly large argument. Because of that, it cannot be performed in a bounded number of steps, and, thus, there isn't a direct mapping of substitution to real-world physical processes. In other words, substitution isn't an atomic operation, which prevents us from creating efficient functional processors and runtimes. Attempts to solve the issue only pushed it in other directions, such as the need for shared references, which inhibit parallelism, or garbage collection, which isn't atomic. The failure of the functional paradigm to achieve satisfactory efficiency impacted its popularity, which, in turn, meant that tools like formal proofs never caught up.

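The cost of substitution is easy to see with a toy evaluator. Below is a minimal Haskell sketch (our own illustration, using de Bruijn indices and a closed argument, so no index shifting or capture handling is needed): a single beta step on a term shaped like `(λx. x x x x) BIG` copies the entire argument four times, so the work done by one "step" grows with the size of `BIG`.

```haskell
-- A toy untyped lambda calculus with naive substitution, to show how
-- one beta step can copy a large argument many times over.
data Term = Var Int | Lam Term | App Term Term   -- de Bruijn indices

-- Size of a term, used to observe the blowup.
size :: Term -> Int
size (Var _)   = 1
size (Lam b)   = 1 + size b
size (App f a) = 1 + size f + size a

-- Naive substitution of a closed argument for index 0.
subst :: Term -> Term -> Term
subst arg = go 0
  where
    go d (Var i) | i == d    = arg
                 | otherwise = Var i
    go d (Lam b)   = Lam (go (d + 1) b)
    go d (App f a) = App (go d f) (go d a)

-- One beta step: (\x. body) arg  ~>  body[arg/x]
beta :: Term -> Term
beta (App (Lam b) a) = subst a b
beta t               = t

main :: IO ()
main = do
  let big    = foldr (\_ b -> Lam b) (Var 0) [1 .. 1000 :: Int]  -- a large closed term
      copier = Lam (Var 0 `App` Var 0 `App` Var 0 `App` Var 0)   -- \x. x x x x
  print (size big)                      -- 1001
  print (size (beta (App copier big)))  -- roughly four times larger
```

Real runtimes avoid the copy with shared references or thunks, which is exactly the trade-off mentioned above: sharing reintroduces garbage collection and complicates parallelism.
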
This raises the question: is there a model of computation which, like the Turing Machine, has a reasonable physical implementation, yet, like the Lambda Calculus, has a robust logical interpretation? In 1997, Yves Lafont proposed a new alternative, the Interaction Combinators, in which substitution is broken down into two fundamental laws: commutation, which creates and copies information, and annihilation, which observes and destroys information. In a sense, this may be seen as splitting substitution into smaller, more fundamental operations. Each of these rewrite rules can be performed in a constant amount of steps, and has a clear physical mapping. Not only that, they're inherently parallel, as the Lambda Calculus has been claimed to be in theory, but without the issues that prevented it from being so in practice.

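The sketch below is our own, highly simplified illustration of that claim, not Lafont's formal system: it only classifies an active pair (two agents joined by their principal ports) as an annihilation or a commutation, and counts how many agents the rewrite creates. The point is that this number depends only on the two symbols involved, never on the size of the surrounding net, which is what makes each rewrite a constant-cost operation with a plausible physical mapping.

```haskell
-- The three symbols of the Interaction Combinators, in the usual
-- presentation: a constructor, a duplicator and an eraser.
data Agent = Constructor | Duplicator | Eraser
  deriving (Eq, Show)

-- Number of auxiliary ports of each agent (the principal port is implicit).
arity :: Agent -> Int
arity Constructor = 2
arity Duplicator  = 2
arity Eraser      = 0

-- When two agents meet on their principal ports, exactly one rule applies:
-- annihilation if the symbols are equal, commutation otherwise.
data Rule = Annihilation | Commutation
  deriving Show

rule :: Agent -> Agent -> Rule
rule a b
  | a == b    = Annihilation
  | otherwise = Commutation

-- How many agents appear on the right-hand side of the rewrite:
-- annihilation erases the pair, while commutation duplicates each agent
-- once per auxiliary port of the other. Either way the result is a small
-- constant fixed by the rule table, independent of the rest of the net.
agentsCreated :: Agent -> Agent -> Int
agentsCreated a b = case rule a b of
  Annihilation -> 0
  Commutation  -> arity a + arity b

main :: IO ()
main = mapM_ print
  [ (a, b, rule a b, agentsCreated a b)
  | a <- [Constructor, Duplicator, Eraser]
  , b <- [Constructor, Duplicator, Eraser] ]
```
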
Interestingly, every aspect which is considered good in other models of computation is present in the Interaction Combinators, while the negative aspects are completely absent. Moreover, both the Lambda Calculus and the Turing Machine can be efficiently emulated by the Interaction Combinators, while the opposite isn't true. This suggests that, while the three systems are equivalent in terms of computability, the Interaction Combinators are more capable in terms of computation. From a certain point of view, one could argue that both the Turing Machine and the Lambda Calculus are slight distortions of this fundamental model, caused by human creativity, due to our historical intuitions regarding machines and mathematics. Perhaps machines and substitutions aren't as fundamental as we think, and some alien civilization has developed all its mathematical theories and computers based on annihilation and commutation, with no reference to the Lambda Calculus or the Turing Machine.

We, at the Kindelia Foundation, hold the view that Interaction Combinators are a more fundamental model of computation,
and, consequently, that computers, processors and programming languages inspired by them would offer tangible benefits
