In the first part of this series we looked at a conceptual overview of the Abra specification, which describes how data flow is achieved within the Qubic Computation Model. In the second part we used the Abra specification to start implementing Qupla, a higher-level programming language for Qubic. We introduced Qupla’s most basic entities (trit vectors, look-up tables, and constants) and how they map to Abra. The third part explored the way Qupla uses functions and expressions. The fourth part examined Qupla’s basic operations in detail. This fifth part will show how all the components work together by exploring a few examples of Qupla functions.


### Putting it all together through examples

To most programmers, nothing helps to understand a programming language more than exploring some example code. This article will therefore focus on a select few examples of Qupla coding. We will introduce a few more language concepts while doing so.

These concepts are mostly convenience-based and optional, like we saw with function templates and function type specifiers. We did not want to distract too much from the basic working of Qupla before, which is why we delayed introducing them until now.

#### Factorial function

Let’s start with a function template that can be used to calculate factorial numbers with ridiculous precision. Consider this function as it would be defined in any of the C-based languages (C++, C#, Java, Objective-C, Swift, etc.)

```c
long factorial(long n) {
    return (n == 0) ? 1 : n * factorial(n - 1);
}
```

The factorial function expects a non-negative number for n and returns the product of all the numbers from 1 to n (in math expressed as: n!). The size of the results in the factorial sequence grows faster than exponentially. Therefore we defined it here as operating on a long integer value (let’s assume that has 64 bits). That way we can calculate exact factorials with this function for values of n from 1 to 20 (20! equals 2,432,902,008,176,640,000, which is the largest factorial that will fit in a 64-bit integer).

To calculate any larger factorials you could change from long to double precision floating point but in that case you would lose precision at some point. In most cases you will have to resort to special external math libraries to be able to calculate exact factorials in C-based programming languages. Now let’s see what the equivalent Qupla code could look like:
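The 64-bit limit mentioned above is easy to verify. Here is a quick sanity check in Python (chosen because its integers are arbitrary-precision, so the comparison itself cannot overflow):

```python
# Sanity check of the 64-bit claim: 20! fits in a signed 64-bit
# integer, while 21! no longer does.
import math

INT64_MAX = 2 ** 63 - 1  # largest signed 64-bit value

print(math.factorial(20))               # 2432902008176640000
print(math.factorial(20) <= INT64_MAX)  # True
print(math.factorial(21) <= INT64_MAX)  # False
```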

```
import Qupla

template factorial<T> {
  func T factorial<T> (T n) {
    zero = isZero[sign<T>(n)]

    returnOne = nullifyTrue<T>(zero, 1)

    altN = nullifyFalse<T>(zero, n)
    nMin1 = decr<T>(altN)
    nMin1Factorial = factorial<T>(nMin1)
    returnFactorial = mul<T>(altN, nMin1Factorial)

    return returnOne | returnFactorial
  }
}
```

Right off the bat we are introducing a new language concept. The line that says **import** Qupla is used to import *all* types, LUTs, and functions that are defined in the Qupla module.

#### The import statement

The import statement is very convenient because it enables us to use basic entities that were defined elsewhere without having to recreate or copy/paste each one separately into this source file.

Instead, with one import statement we can reference all of the standard types, LUTs, and functions that a module provides at once. In addition, such a module can be pre-compiled once and then stored somewhere for quick retrieval without need for lengthy re-compilation.

A module in Qupla is simply a file system folder that holds a number of Qupla source files. Within the module folder, source files can be grouped using sub-folders. The name of the folder is the module name. If you need to import a module from within another module you only need to use an import statement in one of the source files of the importing module. Multiple source files may reference the same module with an import statement, but the module will be imported only once, so repeating the import is not necessary.

Module Qupla is our standard library for the Qupla language. It contains a number of helper functions, like nullifyTrue() and all(), integer math functions like add(), sub(), mul(), and div(), equivalent floating point math functions, and an ever-growing number of other assorted functions that are used often in code, like the min() and max() functions discussed in part 4 of this series. You will also find the standard types and standard LUTs we covered in part 2 of this series. In fact, you will find that most of the examples we used in the previous parts are present in this module.

#### Exploring the factorial function

Okay, back to the factorial() function. As you can see we decided to create a function template for it. That way we can decide later what the proper trit vector size for our function is, depending on what maximum factorial we want to be able to compute. And because we have quite the eclectic range of integer sizes we should be able to go well beyond the 64-bit long integer from the C-based example.

In fact, with a Huge (81 trits) integer we can go up to 33!, and when using Hash (243 trits) as an integer trit vector we even get to go up to 78!, which turns out to be exactly 11,324,281,178,206,297,831,457,521,158,732,046,228,731,749,579,488,251,990,048,962,825,668,835,325,234,200,766,245,086,213,177,344,000,000,000,000,000,000, or about 1.132428 * 10^115…
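These limits follow from the capacity of a balanced-ternary trit vector: t trits can represent integers up to (3^t − 1) / 2. The following Python sketch (an illustration, not Qupla code) cross-checks the claimed maxima:

```python
# Cross-check of the Huge (81 trits) and Hash (243 trits) limits.
# A balanced-ternary vector of t trits holds integers up to (3**t - 1) // 2.
import math

def largest_factorial(trits):
    """Return the largest n whose factorial still fits in `trits` trits."""
    limit = (3 ** trits - 1) // 2
    n = 0
    while math.factorial(n + 1) <= limit:
        n += 1
    return n

print(largest_factorial(81))   # 33  (Huge)
print(largest_factorial(243))  # 78  (Hash)
```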

First thing we do in the factorial() function is determine whether n equals zero. We do that by using the sign() function, which will return 0 when its parameter is 0, and then use the isZero LUT to see if that is the case.

Next we get to the decision logic, which in Qupla uses parallel execution paths. The if-path, where if n equals zero we return 1, is encoded in the line *returnOne* = nullifyTrue<T>(*zero*, 1). This will set returnOne to 1 when n was zero, and null otherwise. That is all we need for the if-path.

The else-path is encoded in the next 4 lines:

```
altN = nullifyFalse<T>(zero, n)
nMin1 = decr<T>(altN)
nMin1Factorial = factorial<T>(nMin1)
returnFactorial = mul<T>(altN, nMin1Factorial)
```

First we create an alternative for n, called altN, that is set to n when n was non-zero, and null otherwise. That way, if this is not the branch that should be taken, all the following functions will be short-circuited because all their parameters are null and so these functions will each return null as well.

Now we calculate n – 1 by decrementing altN. Next we pass nMin1 to the factorial() function, and then we multiply the result by altN and store that result in returnFactorial. Remember that if n was zero, this will actually end up as null.

Finally, we get to the line that says **return** *returnOne* | *returnFactorial*. Only one of these values will be non-null and the other value will be null, which satisfies the merge condition, and causes the correct value to be returned.
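The null-propagation and merge mechanics described above can be modeled in a few lines of ordinary Python, with None playing the role of Qupla’s null. This is an illustrative sketch of the semantics, not actual Qupla or Abra code:

```python
# Minimal model of Qupla's factorial: nullify selects a branch,
# null (None) short-circuits through every operation, and merge
# picks the single non-null result.

def nullify_true(cond, value):
    return value if cond is True else None    # keep value only when cond is true

def nullify_false(cond, value):
    return value if cond is False else None   # keep value only when cond is false

def merge(a, b):
    assert a is None or b is None             # merge condition: at most one non-null
    return a if a is not None else b

def decr(n):
    return None if n is None else n - 1       # null propagates through decr

def mul(a, b):
    return None if a is None or b is None else a * b

def factorial(n):
    if n is None:                             # a null parameter nulls the whole call
        return None
    zero = (n == 0)
    return_one = nullify_true(zero, 1)        # if-path
    alt_n = nullify_false(zero, n)            # else-path
    return_factorial = mul(alt_n, factorial(decr(alt_n)))
    return merge(return_one, return_factorial)

print(factorial(6))  # 720
```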

Note that we have separated each term into its own expression in this example function for clarity while explaining. We could have expressed the same code more compactly as follows:

```
import Qupla

template factorial<T> {
  func T factorial<T> (T n) {
    zero = isZero[sign<T>(n)]
    altN = nullifyFalse<T>(zero, n)
    return nullifyTrue<T>(zero, 1) | mul<T>(altN, factorial<T>(decr<T>(altN)))
  }
}
```

#### The conditional operator

If you were able to follow along in the examples in this article series you may have noticed that decision logic in Qupla/Abra functions according to certain patterns.

The data flow splits into two paths based on a condition, one path gets nullified based on the condition being true, and the other path gets nullified based on the condition being false. Then finally both paths end up in a merge operation, which selects the non-null value for continuation.

There are a few problems with this approach. It puts the burden of nullifying on the programmer, which means a lot of typing, and the programmer also needs to be clever about where to place the nullify operation.

Basically you want to have the nullify operation as early in the execution path as possible, so that the rest of the path will not do unnecessary calculations. That means you need to analyze carefully, and from experience we can tell you that this is not always easy.

So it is a tedious process in two ways: the typing and the analyzing. That’s why we have been looking for an alternative way of expressing this common operation, and we came up with a construct we borrowed from the C-like languages: the *conditional operator*.

This operator takes the form <condition> **?** <true-value> **:** <false-value>. When the (Bool) <condition> is true, this operator returns <true-value>, and when the <condition> is false it returns the <false-value>. In fact, you could see its usage in the C-based factorial version we showed at the beginning of this article.

We define two forms of the conditional operator in Qupla:

- var = condition **?** true-value **:** **null**

  This is our *nullify* operator. It replaces the more tedious and harder to read var = nullifyTrue<**T**>(condition, true-value); when you use it, Qupla substitutes that exact code behind the scenes. The nullify operator will only return true-value when condition evaluates to true. In all other cases it will return *null*. It’s easier to read and familiar to a lot of programmers.

- var = condition **?** true-value **:** false-value

  This is our *conditional* operator. It goes a bit further than the previous one in that Qupla replaces it with the following code behind the scenes:

```
tmpTrue = nullifyTrue<T>(condition, true-value)
tmpFalse = nullifyFalse<T>(condition, false-value)
var = tmpTrue | tmpFalse
```

The conditional operator will only return true-value when condition evaluates to true. It will only return false-value when condition evaluates to false. In all other cases it will return *null*. Again, it’s easier to read and familiar. The only twist here is the third possible return value, *null*, when condition is neither true nor false. But we will see how that makes sense in a moment.

Note that Qupla will automatically determine the type specification for the nullify functions from the type of the input true-value. And of course when using the conditional variant the type of false-value must match the type of true-value exactly.

In addition, Qupla knows that we intend to nullify here, so it can take another tedious task out of our hands: it will analyze where the nullify operation should take place for optimal effect. Qupla will automatically push the nullify up the execution path as far as possible, which means it may actually end up well before the usage of the conditional operator. For example, in the case of return c ? f(b) : null, Qupla will detect that it is more efficient to nullify b instead of first evaluating f(b) and then nullifying the result. Qupla will therefore automatically transform this into the equivalent return f(c ? b : null).

This automatic optimization gets more interesting with more complex expressions. Imagine this expression: return c1 ? a : (c2 ? b : c). In this case Qupla will move the nullifyFalse for the false-path of the first conditional operator to operate on c2, which means that both b and c will become nullified when c2 becomes null, because c2 will be neither true nor false. Here is the resulting Abra pseudo-code after optimizing (the separate two merges from both operators get merged into a single one by the optimizer):

```
tmp1 = nullifyTrue(c1, a)
tmp2 = nullifyFalse(c1, c2)
tmp3 = nullifyTrue(tmp2, b)
tmp4 = nullifyFalse(tmp2, c)
out ret = merge{tmp1, tmp3, tmp4}
```
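We can model this optimization in Python to convince ourselves it is equivalent. This is a sketch of the semantics only, with conditions being True, False, or None (null):

```python
# Check that the optimized merge behaves exactly like the
# straightforward expansion of: c1 ? a : (c2 ? b : c)

def nullify_true(cond, value):
    return value if cond is True else None

def nullify_false(cond, value):
    return value if cond is False else None

def merge(*vals):
    non_null = [v for v in vals if v is not None]
    assert len(non_null) <= 1           # at most one input may be non-null
    return non_null[0] if non_null else None

def conditional(cond, t, f):
    # unoptimized expansion of: cond ? t : f
    return merge(nullify_true(cond, t), nullify_false(cond, f))

def optimized(c1, c2, a, b, c):
    tmp1 = nullify_true(c1, a)
    tmp2 = nullify_false(c1, c2)        # the nullify is pushed onto c2 itself
    tmp3 = nullify_true(tmp2, b)
    tmp4 = nullify_false(tmp2, c)
    return merge(tmp1, tmp3, tmp4)

# exhaustively compare both forms over all trit-condition combinations
for c1 in (True, False, None):
    for c2 in (True, False, None):
        expected = conditional(c1, 'a', conditional(c2, 'b', 'c'))
        assert optimized(c1, c2, 'a', 'b', 'c') == expected
```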

One advantage of the simplicity of Abra is that it is easy to write all kinds of optimization functions to compact the resulting code. In fact, this works so well that the Qupla-to-Abra converter just focuses on generating the most straightforward correct conversion to Abra and relies entirely on the optimization functions to make the code much more compact and efficient.

This is what the compact form of the factorial function looks like when using the conditional operator:

```
import Qupla

template factorial<T> {
  func T factorial<T> (T n) {
    isZero = isZero[sign<T>(n)]
    return isZero ? 1 : mul<T>(n, factorial<T>(decr<T>(n)))
  }
}
```

We no longer have the nullify functions, and we also no longer need to explicitly create a nullified altN for the calculation. Qupla will do all that automatically. Here’s what the generated Abra pseudo-code looks like when instantiating factorial<Tiny>():

```
branch factorial_9
[ 9] in n
[ 1] p1 = sign_9(n)
[ 1] isZero = isZero_0[p1]
[ 9] p3 = nullifyTrue_9(isZero, n)
[ 9] p4 = const_9_1(p3)
[ 9] p5 = nullifyFalse_9(isZero, n)
[ 9] p6 = decr_9(p5)
[ 9] p7 = factorial_9(p6)
[ 9] p9 = mul_9(p5, p7)
[ 9] out ret = merge{p4, p9}
```

#### Testing Qupla functions

Now, how would we go about testing a Qupla function? It would be nice to have some kind of unit testing capability that we could use to exercise a function and see if it returns the expected results. Well, Qupla provides built-in unit testing functionality by introducing *test* statements:

```
test 1 = factorial<Tiny>(0)
test 1 = factorial<Tiny>(1)
test 2 = factorial<Tiny>(2)
test 6 = factorial<Tiny>(3)
test 24 = factorial<Tiny>(4)
test 120 = factorial<Tiny>(5)
test 720 = factorial<Tiny>(6)
test 8683317618811886495518194401280000000 = factorial<Huge>(33)
test 8683317618811886495518194401280000000 = factorial<Hash>(33)
```

As you can see, a test statement starts with test, followed by the expected result value, an equals sign, and an expression. Qupla is able to process these expressions when its test mode is turned on, and will then automatically verify that each expression evaluates to the corresponding result value. Any test that fails will cause an error message with the exact line where it happened.

Having these tests in the same source file as the function that they test is very handy while creating/debugging the function. When test mode is off, Qupla will simply ignore the tests. Of course you can also put the tests in a separate file or even in a separate module if you like.

#### Array function

Let’s look at a more complex (and more useful) example next. A lot of programming languages allow you to have a generic array that allows you to store a series of values in indexed memory locations and retrieve the value later by specifying the index.

We’re going to design our own array() function that can do exactly that. We will start out by creating this function for a single type of value. Once we have completed that, we will show how to turn the result into a generic function template that can be used to implement such a function for any type of value.

This example will show how to use a state variable to store the value, and how to use a recursive call path to codify the index. It will make for an excellent demonstration of how different the function call paradigm is in Qupla.

Let’s start with the function signature. We will need to pass in three parameters:

First, a command trit cmd that indicates whether we want to set (1) or get (0) the value in the array. And then of course there are the index and the value themselves. The function will always return the previous value that was stored under the index. That way we can determine whether the index was already used to store a value before (remember that a state variable is initialized with all zero trits) or we can use the previous value in case we still need it.

```
// just select some random type
// for the value parameter for now
type Value [Tiny]

func Value array (Trit cmd, Tiny index, Value value)
```

Note how we kind of prepare already for a more generic function by factoring out the type for the value parameter. This will make it easier to turn this into a function template later on. Now we can use Value wherever we need that type in our code.

So how would we go about creating this obviously flexible data structure in a language that has only fixed types? This is where we leverage an idiosyncrasy of function instantiation: every function instance gets its own instance of a state variable.

We just need to make sure that we get to a function instance based on the index. The obvious way to do that is through recursion. Our function will call itself, keeping track of the depth while doing so, and get or set the state variable of the function instance whose depth matches the specified index. The easiest way to do this is to decrement the index with each recursive call until it reaches zero.

Now we only need to add the decision logic that either gets or sets the state variable, and the decision logic to decide whether to return the value or go deeper along the call path.

```
func Value array (Trit cmd, Tiny index, Value value) {
  state Value cell

  // preserve old value of state variable
  oldValue = cell

  // check if we found the desired call depth yet
  found = isZero[sign<Tiny>(index)]

  // when found and cmd is 'set' change the state variable
  // otherwise do nothing
  cell = and[found, isOne[cmd]] ? value : null

  // when found return old value of state variable
  // otherwise decrement the index and go one deeper
  return found ? oldValue : array(cmd, decr<Tiny>(index), value)
}
```

This is still a pretty simple function. Each array() function has a state variable which is the storage cell for the corresponding index. For clarity we preserve the old value, but this is not really necessary because the value of state variables that is used within a function is always the old value anyway. This just makes it a bit more explicit what is happening.

Next we check if we arrived at index zero, and set the found flag if that is the case. When found is true and the cmd trit indicates we want to set the state variable (equals 1), we update the value of cell with the value we passed in. Note that this takes effect only after the function returns. Also note how we set cell to *null* when we don’t want to update it. Remember that a state variable cannot hold null trits, so assigning null to it won’t change it.

Finally, when found is true, we return the old value of the state variable. If not, then we need to go one level deeper in the call hierarchy, so we pass the parameter values to a new instance of the function after decrementing index.

One caveat with this kind of storage usage is that you don’t want the index to become too large because you will run out of circuitry real-estate pretty fast.
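The observable behavior of array() — every call returns the previous value stored under the index, and cmd decides whether to overwrite it — can be mimicked in ordinary Python. This sketch flattens the recursive chain of function instances into a dictionary keyed by index; it models the semantics, not the data-flow implementation:

```python
# Model of the array() semantics: per-index storage cells that start
# at zero, with every call returning the previously stored value.

def make_array():
    cells = {}                            # stands in for the per-instance state variables

    def array(cmd, index, value):
        old_value = cells.get(index, 0)   # state variables start as all zero trits
        if cmd == 1:                      # cmd 1 means 'set'
            cells[index] = value
        return old_value                  # always return the previous value

    return array

a = make_array()
print(a(0, 0, 1))   # 0 : default value of a[0]
print(a(1, 0, 1))   # 0 : set a[0] to 1, previous value is returned
print(a(0, 0, 0))   # 1 : a[0] is now 1
```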

Here is the Abra pseudo-code that will be generated by Qupla:

```
branch array
[ 1] in cmd
[ 9] in index
[ 9] in value
[ 1] p3 = sign_9(index)
[ 1] found = isZero_0[p3]
[ 1] p5 = lut_000000010000000010000000010[p3, cmd]
[ 9] p6 = nullifyTrue_9(found, cell)
[ 1] p7 = lut_T01NNNT01T01NNNT01T01NNNT01[cmd, p3]
[ 9] p8 = nullifyFalse_9(found, index)
[ 9] p9 = decr_9(p8)
[ 9] p10 = nullifyFalse_9(found, value)
[ 9] p11 = array(p7, p9, p10)
[ 9] out ret = merge{p6, p11}
[ 9] latch cell = nullifyTrue_9(p5, value)
```

What’s noteworthy is how the Qupla optimizer will combine successive LUT operations into a single LUT operation if possible. It does this by generating a new LUT that can perform those successive look-up operations with a single equivalent look-up.
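The idea behind this optimization can be sketched in Python with two hypothetical single-trit look-up tables (negate and is_zero below are illustrative stand-ins, not the actual Qupla LUTs): composing them ahead of time yields one table that answers both look-ups at once.

```python
# Pre-compose two successive trit LUTs into a single equivalent LUT.
TRITS = (-1, 0, 1)

negate = {t: -t for t in TRITS}     # hypothetical first LUT
is_zero = {-1: -1, 0: 1, 1: -1}     # hypothetical second LUT (true = 1, false = -1)

# one table that answers "is_zero(negate(x))" with a single look-up
combined = {t: is_zero[negate[t]] for t in TRITS}

for t in TRITS:
    assert combined[t] == is_zero[negate[t]]
```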

You can see how the and and isOne LUTs are combined in the p5 assignment, and it even goes as far as to combine it with the previous isZero LUT as well.

A similar thing happens with p7, where the implicit nullifyFalse LUT and the previous isZero LUT are combined for the false-branch of the conditional operator in the return statement. Note that if the array() function had only had the cmd parameter this strategy of combining LUTs essentially could have optimized away the need for the found assignment entirely.

You can also see how the optimizer optimized away our usage of oldValue and just uses cell directly. And also noteworthy is how cell only gets its new value in the final latch site, after the out site has been created.

#### A generic array() function

The previous example array() function would be very useful if it could operate on any type. To get it to do that, all we need to do is wrap it in a template declaration and replace the Value type with a generic type T.

Because this will potentially create multiple versions of the array() function, we also need to add a type specifier T to that function for things to work correctly.

```
template array<T> {
  func T array<T> (Trit cmd, Tiny index, T value) {
    state T cell

    found = isZero[sign<Tiny>(index)]
    cell = and[found, isOne[cmd]] ? value : null
    return found ? cell : array<T>(cmd, decr<Tiny>(index), value)
  }
}
```

Finally, here is a simple test module that can be used to exercise this code and see it in action:

```
// retrieve default value of a[0]
test 0 = array<Tiny>(0, 0, 1)

// set a[0] to 1
test 0 = array<Tiny>(1, 0, 1)

// verify a[0] is now 1
test 1 = array<Tiny>(0, 0, 0)

// retrieve default value of a[1]
test 0 = array<Tiny>(0, 1, 2)

// set a[1] to 2
test 0 = array<Tiny>(1, 1, 2)

// verify a[1] is now 2
test 2 = array<Tiny>(0, 1, 0)

// verify a[0] is still 1
test 1 = array<Tiny>(0, 0, 0)

// set a[0] to 3
test 1 = array<Tiny>(1, 0, 3)

// verify a[0] is now 3
test 3 = array<Tiny>(0, 0, 0)

// verify a[1] is still 2
test 2 = array<Tiny>(0, 1, 0)
```

#### Conclusion

We went through the steps to implement an example factorial() function and a generic array() function template that demonstrates how to use state variables. Along the way we explored some previously unmentioned language constructs, like the import statement and the conditional operator, and some other convenient features of the Qupla programming language.

In the next part of this series, we will explore the Qubic Supervisor and the way it interacts with Abra functions to be able to create even more complex behavior.

Explaining the Qubic Computation Model: Part 5 was originally published in IOTA on Medium, where people are continuing the conversation by highlighting and responding to this story.

iota-news.com is the author of this content; TheBitcoinNews.com is not responsible for the content of external sites.