Fexl is a programming language based on the concept of functions, and is an
extension of the notation known as the *lambda calculus*, devised by
Alonzo Church in the 1930s. The name is an acronym for "**F**unction
**EX**pression **L**anguage," and is pronounced *fex'-uhl*,
somewhat similar to "pixel."

A *function* is something that is applied to an input and returns a
corresponding output. In effect, a function is something that answers
questions: the input is the question, and the output is the answer to
that question.

For example, you might have a function which takes a number as input and returns twice that number as output. Apply the function to 1 and you get 2. Apply it to 2 and you get 4. Apply it to 27 and you get 54. When you apply the function to a number, you are asking the question "What is twice the value of this number?", and the function gives you the answer.
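The doubling example can be written down in Python (just an illustration of the input/output idea; Fexl's own notation comes later):

```python
# A function maps each input (the "question") to an output (the "answer").
twice = lambda n: 2 * n

print(twice(1))   # → 2
print(twice(2))   # → 4
print(twice(27))  # → 54
```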

The *lambda calculus* is a notation capable of expressing any computable
function. Although Fexl extends this notation to make it more convenient to
write programs, it is always possible to define the extensions in terms of the
basic lambda calculus.

In the lambda calculus, a function takes one of three forms: (1) a symbol, (2) a function application, or (3) a lambda expression.

The first form of a function is a *symbol*. A symbol is either a
*name* or a *string*. In the original lambda calculus there
are only names, but I take the liberty of including strings right up front
in this presentation. I also pin down the rules of what constitutes a name.

The first form of a symbol is a *name*.
A name is any sequence of characters except for white space or these:

\ ( ) [ ] { } ; " ~ # =

A "white space" character is any byte in the range 0 (NUL) through 32 (SPACE). In other words, it's the actual SPACE character and all the "control" characters below it. So for example if you try to parse /dev/zero as a Fexl program, the parser will run forever skipping all the NUL bytes as white space, instead of buffering them all up as an infinitely large name.
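That scanning rule can be sketched in Python. Here `scan_name` is a hypothetical helper for illustration only, not Fexl's actual parser:

```python
# Characters that can never appear in a name, per the rule above.
SPECIAL = set('\\()[]{};"~#=')

def scan_name(text, pos=0):
    """Skip leading white space, then collect a name.

    Returns (name, new_pos).  An empty name means no name was found.
    """
    # Skip white space: any character with code 0 through 32.
    while pos < len(text) and ord(text[pos]) <= 32:
        pos += 1
    start = pos
    # Collect characters until white space or a special character.
    while pos < len(text) and ord(text[pos]) > 32 and text[pos] not in SPECIAL:
        pos += 1
    return text[start:pos], pos
```

For example, `scan_name("  twice 27")` returns `("twice", 7)`, and `scan_name("(x)")` returns an empty name because `(` can never start a name.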

Because of how UTF-8 encoding works, you can use names that include arbitrary "wide" characters outside the normal ASCII range. Here are some examples of names:

x 37 + twice append åabcüdef 封

The second form of a symbol is a *string*. A string is a way of
expressing a *string value*, which is any sequence of zero or more
arbitrary bytes (NUL is allowed). A *byte* is an 8-bit unsigned
machine number in the range 0 to 255.

There are two ways of specifying string values: a *quote string* or a
*tilde string*.

The first way of expressing a string value is a *quote string*.
A quote string is a string value enclosed within a pair of double
quotes, with the restriction that a double quote cannot appear within the
string value. Examples:

""

"hello world"

"This is a
multi-
line string."

You can also use characters from other languages and notations through the use of UTF-8 encoding. Example:

"hej åabcüdef ghij üä 1≠0 封 "

The second way of expressing a string value is a *tilde string*.
A tilde string is a string value enclosed within a pair of delimiters of your
own choice. You can choose any delimiter which does not appear within the
string value.

The tilde string starts with a tilde (~), then zero or more non white-space
characters. That sequence, including the tilde, constitutes the
*delimiter*. The white space character which terminates the delimiter
is ignored. This is followed by the desired string value, and finally a
repeat occurrence of the original delimiter. Examples:

~ This has "quotes" in it.~

~| This has "quotes" in it.~|

~END Anyone familiar with "here is" documents
should easily understand the rule.
~END

~~ Usually just a single tilde "~" will suffice as a delimiter, or if the
string contains a tilde like this one, then a double tilde will suffice. ~~
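The delimiter rule is simple enough to sketch in Python. Here `scan_tilde_string` is a hypothetical illustration, not Fexl's parser:

```python
def scan_tilde_string(text):
    """Parse a tilde string at the start of text.

    Returns (value, rest), or None if text does not hold a tilde string.
    """
    if not text.startswith("~"):
        return None
    # The delimiter is the tilde plus all following non-white-space characters.
    i = 1
    while i < len(text) and ord(text[i]) > 32:
        i += 1
    delim = text[:i]
    # The single white-space character that ends the delimiter is ignored.
    i += 1
    # The value runs until the next occurrence of the same delimiter.
    end = text.find(delim, i)
    if end < 0:
        return None
    return text[i:end], text[end + len(delim):]
```

For example, `scan_tilde_string('~ hello~')` returns `('hello', '')`: the delimiter is a lone tilde, the space after it is dropped, and the value runs up to the repeated tilde.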

The second form of a function is a *function application*. If **F** and
**X** are arbitrary functions, then the following expression denotes the
application of **F** to **X**:

(F X)

In that expression, **F** is typically called the *function*, and **X** is
called the *argument*.

Examples:

(twice 27)
((add (twice x)) ((mul y) y))
((say "Hello world.") (say "Goodbye."))
(((a b) (c d)) (e (f (g (h i)))))

The third form of a function is a *lambda expression*. If **X** is an
arbitrary name, and **F** is an arbitrary function, then the following is the
lambda expression which introduces **X** as a parameter to **F**:

(\X F)

The backslash character '\' indicates that the name **X** is to be treated as a
parameter name, distinguishing it from a function application. The original
lambda calculus uses the Greek character λ to introduce a parameter, but
I use '\' instead because it is easier to type, and is the ASCII character
which most closely resembles λ. Examples:

(\x 2)
(\x x)
(\x (x x))
(\x (\y ((add (twice x)) ((mul y) y))))
(\x (x (\x ((mul x) x))))

Note that the last expression has two different declarations of parameter x. In the following expression, I have highlighted the outer parameter x along with its sole free occurrence:

(\**x** (**x** (\x ((mul x) x))))

In the next expression I have highlighted the inner parameter x along with its two free occurrences:

(\x (x (\**x** ((mul **x**) **x**))))

A lambda expression *binds* the parameter name, so occurrences outside
the body of the expression are no longer considered free. A lambda expression
is also known as an *abstraction*, because the body of the expression is
defined abstractly, in terms of a parameter name whose value is not yet
specified.

Consider first a lambda expression:

(\X F)

That function is considered to be in *normal form*, meaning that no
further evaluation is possible. The value of the expression is itself.

Now consider a function application:

(F X)

In this case **F** is evaluated first, typically resulting in a lambda
expression. Then the argument **X** is substituted into the body of the
lambda expression, replacing all free occurrences of the lambda parameter.
Evaluation then continues with the resulting expression.

Finally, consider the evaluation of a symbol. The value of a string is always
itself. But note that you will never have to evaluate a *name*,
because a name is always replaced with some other functional value before it is
evaluated.

To illustrate the process of evaluating a function, I use a notation where I put ":" in front of the original expression, and "=" in front of each reduction step. Comments are indicated with "#".

Here's an example which uses three names add, twice, and mul which are not defined within the example itself. Assume that those names have already been replaced with suitable definitions from an outer context.

: (((\x (\y ((add (twice x)) ((mul y) y)))) 3) 7)  # Original expression
= ((\y ((add (twice 3)) ((mul y) y))) 7)           # Replace x with 3.
= ((add (twice 3)) ((mul 7) 7))                    # Replace y with 7.
= ((add 6) ((mul 7) 7))   # The add function evaluates its first argument.
= ((add 6) 49)            # The add function evaluates its second argument.
= 55
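The same reduction can be mirrored with curried Python lambdas, which makes it easy to check the arithmetic (the definitions of add, twice, and mul here are assumptions made for the illustration):

```python
# Curried versions: each function takes one argument and returns a function.
add   = lambda x: lambda y: x + y
mul   = lambda x: lambda y: x * y
twice = lambda x: 2 * x

# (((\x (\y ((add (twice x)) ((mul y) y)))) 3) 7)
expr = (lambda x: lambda y: add(twice(x))(mul(y)(y)))(3)(7)
print(expr)  # → 55
```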

The basic lambda calculus has the minimal syntax needed to express arbitrary computation in an unambiguous form. But minimal is not the same as optimal.

The first problem is there are too many parentheses. Every individual lambda expression or function application must be grouped with a pair of matching parentheses. That is easy to fix.

First consider this expression which adds x and y:

((add x) y)

The first simplification is that the topmost expression need not appear inside parentheses.

(add x) y

The next simplification is that function applications are *left-associative*, meaning that if you omit parentheses, they group toward the left by
default.

add x y

In general, ((f x) y) can be written more simply as (f x y). So these two expressions are equivalent:

(((((a b) c) d) e) f)

a b c d e f

That doesn't mean you can eliminate *all* parentheses though. These
two expressions are equivalent, and that's as far as the rule can be applied:

(((a b) (c d)) (e (f (g (h i)))))

a b (c d) (e (f (g (h i))))

Consider this lambda expression:

(\x (A B))

There is no particular need for the inner parentheses, so you can write it as:

(\x A B)

Now consider this:

(\x (\y A))

Again there's no need for the inner parentheses, so you can write it as:

(\x \y A)

Furthermore, there's no need for a space between the parameters:

(\x\y A)

The rule can be extended to any number of parameters:

(\x\y\z A)

Although the plain lambda syntax is sufficient to express any computation, it is not always convenient. Imagine you are using the arithmetic functions add and mul in several places:

add 4 (add (mul 23 8) (mul 3 5))

Assume for a moment that those are not predefined, and you have to define them yourself. Using only the plain lambda syntax you'd have to define them like this:

(
\add \mul
add 4 (add (mul 23 8) (mul 3 5))
)
definition_of_add
definition_of_mul

In that expression the add parameter is replaced with its definition, then mul is replaced with its definition, and computation proceeds from there.

That's no fun, because putting the
definitions of symbols at the *end* is counterintuitive and
unwieldy. For this reason I introduce a special syntax for definitions, so
the example above can be written more simply as:

\add=definition_of_add
\mul=definition_of_mul
add 4 (add (mul 23 8) (mul 3 5))

That way a symbol and its definition appear together at the top, before the symbol is referenced.

In some cases you don't want the expression to be evaluated at the point of
definition. This is particularly important when the expression has
*side effects* which should only happen later. A side effect might
include printing something to the screen, or even just engaging in a lengthy
calculation which should only happen under certain conditions to be determined
later.

For these direct definitions you use the \\ syntax,
which in effect "escapes" or "guards" the definition. These are also known
as *lazy* definitions. For example:

\\fred=(say "I am Fred.")
\\digit=(nth_digit_of_pi 1000)
# At this point we have definitions of fred and digit, but no output or
# computation has actually happened yet.  Now let's use those symbols ...
fred       # Prints the line "I am Fred." to the output.
say digit  # Prints the 1000th digit of pi, which will take some time.

In many cases you do want the expression to be evaluated at the point of definition. This prevents the program from having to re-evaluate the expression every time it is encountered below the definition. For example:

\\fred=(say "I am Fred.")
\digit=(nth_digit_of_pi 1000)
# At this point the digit has already been calculated in full.
fred  # Prints the line "I am Fred." to the output.
say (* digit digit)  # Prints the square of the 1000th digit of pi.
# ^ Thank goodness I used an eager definition for digit, otherwise it would
# have to calculate the digit TWICE!

Note that the convenient syntax for definitions is just a shorthand for basic lambda expressions. In the case of direct definitions, the following two expressions are equivalent:

\\X=D F

(\\X F) D

In the case of eager definitions, the following two expressions are equivalent:

\X=D F

(\X F) D

You can always rely on those equivalences to be true for the purpose of refactoring your code.
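Both equivalences can be checked in any language with first-class functions. Here is a Python sketch, where an eager definition is an immediately applied lambda, and a direct (lazy) definition is modeled with a thunk (`D`, `effects`, and `fred` are names invented for the illustration):

```python
# Eager definition:  \add=D F  is equivalent to  (\add F) D.
# D (the definition of add) is evaluated once, up front.
result = (lambda add: add(2)(3))(lambda x: lambda y: x + y)
print(result)  # → 5

# Direct (lazy) definition:  \\fred=D F  delays D, modeled here as a thunk.
effects = []
def D():
    effects.append("ran")  # stand-in side effect, like printing
    return 42

fred = (lambda fred: fred)(D)  # binding the definition runs nothing yet
assert effects == []           # no side effect has happened so far
print(fred())                  # → 42; only now does the side effect occur
```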

Consider this example:

(a (b (c (d (e (f g))))))

Is there any way of eliminating all those bunched up parentheses on the right?
Note that you *cannot* do this:

a b c d e f g

Because that is equivalent to:

((((((a b) c) d) e) f) g)

We need some way of indicating that expressions are to be grouped on the
*right* side. For this purpose I introduce the ';' (semicolon)
notation which acts like a *pivot* in the expression, in effect
automatically wrapping everything following the ';' inside a pair of matching
parentheses. In general, the following expression:

A (B C)

Is equivalent to this:

A; B C

So the example above can be properly rewritten as:

a; b; c; d; e; f g

You can even put a ';' before the very last argument g, though it is not strictly necessary:

a; b; c; d; e; f; g

This doesn't mean you can always eliminate all parentheses. Consider:

(((a b) (c d)) (e (f (g (h i)))))

First we can apply the left-associative rule for function application to eliminate some parentheses:

a b (c d) (e (f (g (h i))))

Then in the third expression we can apply the pivot syntax to eliminate some more parentheses:

a b (c d) (e; f; g; h i)

You can even take it one step farther if you like:

a b (c d); e; f; g; h i

But that's as far as you can go. You cannot eliminate the parentheses around the (c d) term.

In the previous section I mentioned the notion of *side effects*. Side
effects include such things as printing output, writing to a file, opening a
window on a screen, reading input from the keyboard, and many other ways of
interacting with the environment in which a function is evaluated. Fexl is
not "purist" in its approach to functional programming: it recognizes that the
entire point of running a computer program is to produce *useful side
effects*. Many functional programming languages go to great lengths to
hide or obscure the production of side effects, but Fexl does not bother.

The classic example for all programming languages is the "Hello world" program. This program simply prints "Hello world" to the output device, followed by a new line, and then stops. In Fexl, that program is:

say "Hello world"

That is as simple as it can get. The **say** function prints its argument
to the screen. But in functional programming, every function evaluation
produces a *value*. So what is the value of (say "Hello world")?
And if all we really care about is the side effect, why does it even need to
return a value? Well, consider this expression:

say "Hello world" say "Goodbye."

Here's what happens. The (say "Hello world") is evaluated, printing that
message to the screen. The *value* of that expression is something
we call **I**, representing the *identity function*. The identity
function is the function which returns the value of any argument you pass into
it, and can be defined in the lambda calculus as:

(\x x)

So after the (say "Hello world") is evaluated, the expression becomes:

I say "Goodbye."

The **I** applied to **say** gives us the value **say**, so now we
have:

say "Goodbye."

Now the message "Goodbye." is written to the screen, and the final value of the whole expression is:

I

If you're familiar with normal procedural languages, think of **I** as the
value returned when all you want to do is continue evaluating the next thing,
in sequence. You shouldn't have to think about it too much. If you're
evaluating functions which have side effects and don't really return a value
as such, you can just string them together in a linear sequence, just as you
would do in an ordinary procedural language. The reason I emphasize that
these actually return a value **I** is to show that all types of programs,
including procedural ones, can ultimately be expressed in the notation of
lambda calculus. In Fexl, all evaluation is done in terms of lambda
substitution and function application, and there is no exception for so-called
"procedural" programs.
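For readers coming from other languages, here is a Python caricature of **say** returning **I** (the list stands in for the screen; the real **say** is a Fexl builtin):

```python
# The identity function: returns whatever you pass in.
I = lambda x: x

output = []  # stands in for the screen

def say(message):
    output.append(message)  # the side effect: "print" the message
    return I                # the value of a say expression is I

# say "Hello world" say "Goodbye."  -- applications group to the left:
result = say("Hello world")(say)("Goodbye.")
assert output == ["Hello world", "Goodbye."]
assert result is I  # the final value of the whole expression is I
```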

Quite simply, the evaluation of any function in Fexl can have side effects.
In fact, if you get right down to it, even the evaluation of a so-called "pure"
function like (add 2 3) has side effects too. You might think all that
does is return the value 5, but in point of fact the evaluation both (1)
consumes time, and (2) produces heat inside the computer: both of which are
side effects. The point here is to emphasize that Fexl is a language for
controlling the behavior of a *physical device*, even though it is
based on the mathematical concept of functions.

The beauty of a language with lambda abstraction is that it allows you to
isolate side effects however you like, making more of your code look "pure".
So instead of a traditional piece of code going through a bunch of data,
processing and filtering the data, *and* printing as it goes, you can
instead very easily do the processing and filtering in an entirely separate
section of code, and do the printing in a very simple section that expects
perfectly groomed input.

The one kind of side effect that can be extremely malignant in a traditional
programming language is *mutable data*, which is data that can be
changed during processing right "under your feet" so to speak. That can lead
to very confusing code. However, Fexl is not dogmatic on this issue either.
Many things in the real world do in fact behave like mutable data: the file
system being a prime example. Another example is any kind of payment API,
where you send transactions to a remote server. In that case the entire
remote database behaves like a giant global variable, albeit with a very
well-defined interface.

Fexl does provide a "var" (variable) data type which you can use to model mutable data. A var is a pointer to a value, and you can change what the var points to. However, there is no way to alter a value itself. In my extensive programming with Fexl, I have found very few cases where I need to use a var. The main use is when I am emulating a database in memory, and I don't often need to do that.
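I haven't shown the actual var functions here; conceptually a var is just a one-slot pointer, which can be sketched in Python as:

```python
class Var:
    """A one-slot mutable cell: a pointer to a value."""
    def __init__(self, value):
        self._value = value
    def get(self):
        return self._value
    def put(self, value):
        # Re-point the var; the old value itself is never altered.
        self._value = value

balance = Var(100)
balance.put(balance.get() + 50)
print(balance.get())  # → 150
```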

In virtually 100% of my code, I use only immutable data — and that
includes cases where you might immediately think to use mutable data in a
normal programming language. A prime example is how I use Fexl to do
accounting for investment partnerships. In a traditional language, you might
implement each event such as security trades, dividends, and partner deposits
or withdrawals using a routine that modifies a data structure such as a nested
hash table. In Fexl, I implement each event as a function which takes in the
original data structure as input, and produces a new structure as output.
An experienced programmer may worry about excessive copying; however, that is
not a problem in Fexl because vast swaths of the input and output structures
are *shared* via direct pointers. And because those structures are
immutable, such sharing is always safe.

Furthermore, Fexl takes care of all memory management for you, using the
technique of *reference counts*, so you never have to worry about
memory leaks, or accidentally using memory that has already been freed.

In this example I have used the name **add** for addition and **mul**
for multiplication. I could also use **sub** for subtraction and **div**
for division. However, I prefer to use the common operator symbols **+**
***** **-** **/**. So here is a function which adds its first
two arguments together, and multiplies that sum by its third argument:

(\x\y\z * (+ x y) z)

Note that I have expressed the function as a *closed form*, i.e. as a
standalone entity. I haven't bothered to give it a name. If I wanted to name
it as "G", I would do this:

\G=(\x\y\z * (+ x y) z)
# ... Then I could use G repeatedly down here.

No doubt at this point a lot of people are going to object to putting **+**
and ***** *in front* of its inputs instead of *between* them.
However, I am aiming for the simplest possible notation, based on the concept
that a function is always applied to an input, expressed as **f x**.
Given that premise, putting the function between its inputs makes no sense. If
you write this:

2 + 3

Then what you're trying to do is take the value **2**, apply that to the
value **+**, and then apply that to the value **3**. It's utter nonsense
to apply the number **2** to the input **+**. You might object that I
should make an exception for "special symbols" such as **+** and *****.
My reply is (1) I don't want any concept of special symbols, and (2) even if I
had them things would get very complicated and irregular. I have been doing
functional programming for a long time and I'm completely comfortable with
expressions like **(+ x y)**. I don't miss the "infix" notation one
single bit. The grammatical and conceptual simplicity is totally worth it.

There is a special value called **void** which a function can return if
there is no sensible output for a given input. For example if the function
does an ordinary square root, it can return void if you give it a negative
number. Another example is trying to open a file that doesn't exist.

The nice thing about void is that it eliminates the need for an "exception"
mechanism present in some programming languages. If a function returns void
and your code goes on to apply *that* to another value, the result is
still void. That way an entire block of computation can just "void out" on a
bad input, and the void result percolates all the way up. At any stage you can
check for void if you like, for example if you wanted to issue an error message
— but you *don't* have to check for void at *every* stage.

The void value behaves like a function which takes any input you give it, and
always returns **void** itself as the output. So for any input **x**, it
is always true that:

void x = void
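In Python terms, void behaves like an object that absorbs every application (a sketch of the concept only, not Fexl's implementation):

```python
class _Void:
    """The void value: applying it to anything yields void itself."""
    def __call__(self, x):
        return self
    def __repr__(self):
        return "void"

void = _Void()

# A whole chain of applications "voids out" once void appears:
result = void(1)("anything")(lambda x: x)
assert result is void
```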

All programming involves logical functions, which return a value of **T**
(true) or **F** (false). For example there is a function **eq** which
decides if its two inputs are equal. The inputs can be either numbers or
strings. Here are a few examples:

eq 1 1 = T
eq 1 2 = F
eq 27 27 = T
eq 27 32 = F
eq "" "" = T
eq "" "a" = F
eq "abc" "abcd" = F
eq "abcd" "abcd" = T
eq "Abcd" "abcd" = F

Now the question is, once you have a logical value **T** or **F**, what
can you *do* with that value? You need to be able to do one thing if
it's true, and another thing if it's false. Most languages have an "if"
statement to do that kind of branching. Fexl takes a different approach. The
logical values **T** and **F** behave like *functions* which give
you either the first or second input. In other words, for any inputs **x**
and **y**, the logical values behave like this:

T x y = x
F x y = y

The true value gives you the first input, and the false value gives you the
second input. There is no need for a special keyword like "if" used in
virtually every other programming language under the sun. In fact, there are
*no* special keywords at all in Fexl. Any time you see a symbol, it
refers to a value or function, period. No symbol has any special
significance outside of that.

So let's say you want to write a function called **is0** that returns "YES"
if its input is 0, and "NO" if its input is not zero. That would be:

\is0=(\n eq n 0 "YES" "NO")
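The selector semantics of **T** and **F** translate directly into Python. This sketch uses a toy eq invented for the illustration; Fexl's real eq is a builtin:

```python
# T and F are selector functions: T keeps the first input, F the second.
T = lambda x: lambda y: x
F = lambda x: lambda y: y

# A toy eq returning the selectors above.
eq = lambda a: lambda b: T if a == b else F

# \is0=(\n eq n 0 "YES" "NO")
is0 = lambda n: eq(n)(0)("YES")("NO")

print(is0(0))  # → YES
print(is0(7))  # → NO
```

Notice there is no `if` anywhere: applying the boolean to the two alternatives does the branching.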

In a more traditional programming language like C, you could say:

const char *is0(int n)
{
	if (n == 0) return "YES";
	return "NO";
}

Or you could get cute and use the "ternary" operator:

const char *is0(int n)
{
	return (n == 0) ? "YES" : "NO";
}

The "fixpoint" function **@** is the foundation of recursion. It allows
you to create a function that can refer back to *itself*. It is defined
in such way that for any function **F**, this is always true:

@ F = F (@ F)

In other words, the fixpoint of **F** passes *its entire self* in as
the input to **F**! This means that **F** now has a "handle" on its own
fixpoint. It's almost like pointing two mirrors at each other, so you get a
reflection of a reflection of a reflection, ad infinitum.

For example, let's say I wanted to write a function **tally** that takes an
input number **n** and adds up all the numbers from 1 through **n**. Of
course, I could use the standard formula for that, which doesn't involve any
recursion:

(\n / (* n (+ n 1)) 2)

But for the purposes of this discussion I want to do it by actually adding up
all the numbers. When the function gets the input **n**, it first decides
if it's less than or equal to 0. If so, it returns 0. If it's greater than
0, it subtracts one from it, calculates that tally, and adds **n** to the
result.

I need to be able to call the tally function from within the function itself.
To do that, I declare **tally** as a parameter to the function, and apply
**@** to that function:

(@\\tally\n
le n 0 0;
\m=(- n 1)
+ n (tally m)
)

Now *that* is a completely closed-form expression for the function that
adds up all the numbers from 1 through the input **n**. But the syntax is
unfamiliar: what does it mean to jam the **@** function right up against the
parameter like that? As it turns out, there's another syntax feature which
makes it possible. Any time you have an expression like this:

F (\\x ...)

You don't need to group the lambda expression on the right, so you can write it more simply like this:

F \\x ...

Furthermore, because "\" cannot be used within a symbol, you can write it:

F\\x ...

Ordinarily that would look a little obscure but for **@** it's perfect:

@\\tally ...

That syntax feature turns out to be incredibly useful for other reasons which
I'll get to later. But it's particularly useful for this example too, because
otherwise I'd have to write the **tally** function more clunkily as:

@ (\\tally\n
le n 0 0;
\m=(- n 1)
+ n (tally m)
)

An experienced programmer might cleverly suggest that I assign a name to the function like this:

\tally=
  (\n
  le n 0 0;
  \m=(- n 1)
  + n (tally m)
  )
tally 6

However, believe it or not, that accomplishes nothing. Recall that the "=" sign used for definitions is just a shorthand for a lambda expression. So the suggestion above is equivalent to:

(\tally
tally 6
)
  (\n
  le n 0 0;
  \m=(- n 1)
  + n (tally m)
  )

The indented expression above still refers to **tally**, which is not
defined within the expression. It has failed to achieve self reference.
The definition of a recursive function must be a *closed-form*
expression which can be lifted out, passed around, and substituted just like
any other value.

In fact, if we continue to follow the evaluation of the ill-conceived expression above, we arrive at:

+ 6 (tally 5)

The **tally** symbol is not defined within that expression, so we have no
idea how to proceed. We must use the **@** function to establish actual
self-reference.
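The rule **@ F = F (@ F)** can be checked in Python. Because Python evaluates arguments eagerly, a literal translation would recurse forever, so this sketch delays the recursive call with a lambda wrapper (the Z-combinator style):

```python
# fix(F) = F(fix(F)), with a lambda wrapper to delay the recursion,
# since Python evaluates arguments eagerly.
def fix(F):
    return F(lambda *args: fix(F)(*args))

# (@\\tally\n le n 0 0; \m=(- n 1) + n (tally m)) in curried Python terms:
tally = fix(lambda tally: lambda n: 0 if n <= 0 else n + tally(n - 1))

print(tally(3))  # → 6
print(tally(6))  # → 21
```

The inner `tally` is a parameter, just as in the Fexl version, and `fix` feeds the function its own fixpoint as that parameter.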

On the other hand, it *is* useful to define the name **tally** for the
closed form definition, so you can use it in multiple places just like the
**G** function in an earlier example. I could then say something like:

\tally=
  (@\\tally\n
  le n 0 0;
  \m=(- n 1)
  + n (tally m)
  )
* (tally 10) (tally 7)

The value of that expression is then:

1540

I do understand all the splendid logic behind the **@** function. But
honestly, I tend to think of it as introducing a *label* in the code.
When I refer to that label, I "jump" to that place in the code. That is a
very old-school way of thinking about it, but I like it because it's very
concrete. The difference is that in Fexl, when you do the "jump", you can
pass along other parameters as you go. In the **tally** function where it
says **tally m**, I am indeed "jumping" to the label called **tally**,
but I'm also carrying in the new value **m** as I go.

What's *really* happening when I refer to **tally** within the code is
that I'm in effect getting an entire copy of the whole function,
*including* the **@** itself. That may seem a little mind-bending
at first, but if you're going to do functional programming, you'd better get
used to it.

All right, we have a definition of **tally**. Now I'll show how the
expression **tally 3** is evaluated, giving the result of 6. In effect, it
will be a mathematical proof that:

tally 3 = 6

I'll show you the reduction sequence for that, but first I'd like to express the definition of tally like this:

\Q=
  (\\tally\n
  le n 0 0;
  \m=(- n 1)
  + n (tally m)
  )
\tally=(@ Q)

The reason is that when you're dealing with fixpoints, it's not convenient to
copy the entire definition every time in the reduction sequence. So I
introduce a new symbol **Q** to refer to the large blob of code there.
**WARNING:**
This reduction sequence will look very long and boring, but it will give you
some insight into how a functional expression, even one involving recursion,
can be evaluated *symbolically*, by a human being, "on paper" so to
speak.

# Here's the original expression
: tally 3

# Replace tally with its definition.
= (@ Q) 3

# Invoke the rule for @.
= Q (@ Q) 3

# Replace the first Q with its definition.
= (\\tally\n le n 0 0; \m=(- n 1) + n (tally m)) (@ Q) 3

# Replace the tally parameter with the input value.
= (\n le n 0 0; \m=(- n 1) + n ((@ Q) m)) 3

# Replace the n parameter with the input value.
= (le 3 0 0; \m=(- 3 1) + 3 ((@ Q) m))

# Evaluate (le 3 0), which returns F (false).
= (F 0; \m=(- 3 1) + 3 ((@ Q) m))

# Invoke the rule (F x y) = y.
= (\m=(- 3 1) + 3 ((@ Q) m))

# Evaluate m.
= (\m=2 + 3 ((@ Q) m))

# Replace m with its value.  Also use the ";" notation here to eliminate some
# parentheses.
= (+ 3; (@ Q) 2)

# Invoke the rule for @.
= (+ 3; Q (@ Q) 2)

# Replace the first Q with its definition.
= (+ 3; (\\tally\n le n 0 0; \m=(- n 1) + n (tally m)) (@ Q) 2)

# Replace the tally parameter with the input value.
= (+ 3; (\n le n 0 0; \m=(- n 1) + n ((@ Q) m)) 2)

# Replace the n parameter with the input value.
= (+ 3; le 2 0 0; \m=(- 2 1) + 2 ((@ Q) m))

# The (le 2 0) returns F, which chooses the second input.
= (+ 3; \m=(- 2 1) + 2 ((@ Q) m))

# Replace m with its value, and use another ";".
= (+ 3; + 2; (@ Q) 1)

# Invoke the rule for @.
= (+ 3; + 2; Q (@ Q) 1)

# Replace the first Q with its definition.
= (+ 3; + 2; (\\tally\n le n 0 0; \m=(- n 1) + n (tally m)) (@ Q) 1)

# Replace the tally parameter with the input value.
= (+ 3; + 2; (\n le n 0 0; \m=(- n 1) + n ((@ Q) m)) 1)

# Replace the n parameter with the input value.
= (+ 3; + 2; le 1 0 0; \m=(- 1 1) + 1 ((@ Q) m))

# The (le 1 0) returns F, which chooses the second input.
= (+ 3; + 2; \m=(- 1 1) + 1 ((@ Q) m))

# Replace m with its value, and use another ";".
= (+ 3; + 2; + 1; (@ Q) 0)

# Invoke the rule for @.
= (+ 3; + 2; + 1; Q (@ Q) 0)

# Replace the first Q with its definition.
= (+ 3; + 2; + 1; (\\tally\n le n 0 0; \m=(- n 1) + n (tally m)) (@ Q) 0)

# Replace the tally parameter with the input value.
= (+ 3; + 2; + 1; (\n le n 0 0; \m=(- n 1) + n ((@ Q) m)) 0)

# Replace the n parameter with the input value.
= (+ 3; + 2; + 1; le 0 0 0; \m=(- 0 1) + 0 ((@ Q) m))

# This time the (le 0 0) returns T, which chooses the first input.
= (+ 3; + 2; + 1; 0)

# Evaluate the (+ 1 0).
= (+ 3; + 2; 1)

# Evaluate the (+ 2 1).
= (+ 3; 3)

# Evaluate the (+ 3 3).
= 6

I have now proven that:

tally 3 = 6

Keep in mind that a computer processor, using direct pointers in memory, can do this kind of thing in a flash, with no need to copy the intermediate values at every step. I only did that here to illustrate the logic of each step.

Experienced programmers will immediately note that the way I defined
**tally** will pile up **n** stack frames as it evaluates. For large
numbers that might be an issue. As it turns out, you can write it differently
so it carries a "running total" as it goes. This makes it entirely "tail
recursive" so it doesn't use any extra stack space regardless of how large the
input number is.

\tally=
  (@\\loop\total\n
  le n 0 total;
  \total=(+ n total)
  \m=(- n 1)
  loop total m
  )
\tally=(tally 0)

Note that I used the name **loop** there. Sometimes I don't bother to name
the recursion parameter the same as the function being defined, and I just use
"loop". It doesn't make any difference. Sometimes it even seems clearer to
me that way.

That first definition of **tally** takes in a running total and the current
number n. It adds n to the total, subtracts 1 from n, and then loops with the
new total and number. The second definition simply redefines tally so it always
starts with a running total of 0.
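The accumulator pattern reads the same way in Python, though since Python does not eliminate tail calls, a loop stands in for the tail recursion (a sketch of the idea, not Fexl's implementation):

```python
def tally(n):
    # loop total m: carry a running total instead of piling up stack frames.
    total = 0
    while n > 0:      # le n 0 total;  -- at 0 or below, return the total
        total += n    # \total=(+ n total)
        n -= 1        # \m=(- n 1), then loop with the new total and m
    return total

print(tally(10))  # → 55
```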

If you prefer, you can write the definition the "minimalist" way:

\tally=
  (
  (@\\loop\total\n
  le n 0 total;
  \total=(+ n total)
  \m=(- n 1)
  loop total m
  ) 0
  )

Or if you'd like to make that loop a bit more explicit, you can say:

\tally=
  (
  \accumulate=
    (@\\loop\total\n
    le n 0 total;
    \total=(+ n total)
    \m=(- n 1)
    loop total m
    )
  accumulate 0
  )

A lot of this is stylistic. As Larry Wall, the author of Perl, likes to say,
"There's More Than One Way to Do It." (**TMTOWTDI**) On the other hand,
the "Zen of Python" argues that "There should be one — and preferably
only one — obvious way to do it." In that case I'd go with the
minimalist way, on the basis that it has fewer visual "moving parts" so to
speak.

Note how the loop takes in the initial running total of 0 and "absorbs" it,
returning a function which then accepts the parameter **n**. In the
functional programming literature this is called a "closure". The function
has the initial running total "baked into" it, and now it's sitting there
waiting for you to give it a number so it can run with it.

An experienced C programmer might object that this is a lot of machinery for
a simple loop. You got me there. In C you could just write the tally function as:

int tally(int n)
{
	int total = 0;
	int i;
	for (i = 0; i <= n; i++)
		total += i;
	return total;
}

So if that's what you need to do, then go ahead and do it. You can write a
function in C and link it into Fexl. I've already done that for several
functions that need the efficiency of bit-bashing in C, e.g. for cryptography
functions. Fexl coexists with C. I designed Fexl to be the *thinnest*
possible functional programming layer on top of C that I could possibly
conceive. That way I can get the best of both worlds, and the worst of
neither.

Modularity is the ability to put code in separate files, allowing the code to
be shared. Most programming languages use specific keywords to implement
modularity. Fexl has no keywords, and is based entirely on functions (lambda
expressions), so it defines modularity in terms of plain functions known as
*contexts*.

Because a context is just a function, this allows you to control and compose a
set of useful definitions *using Fexl itself*, with the full power of
the programming language. With this power, you make an entire suite of useful
functions available instantly and effortlessly anywhere you like. You can
evaluate different programs in different contexts, depending on the problem
domain. You can implement familiar features such as "search paths" using Fexl
itself. You can extend a context for convenience, or restrict a context for
safety, or both. You can write an ultra-portable, "abstracted" program, where
the definitions of key symbols are provided *outside* the program
itself, and can be changed or swapped at will.

For more on this topic, see Modularity.

© 2022 Patrick Chkoreff