# DALI expressions and arithmetic operators

In this example, we will show how to use binary arithmetic operators in a DALI pipeline; they allow for element-wise operations on tensors inside the pipeline. We will list the available operators and show examples of using constant and scalar inputs.

## Supported operators

DALI currently supports the following operators:

- unary arithmetic operators: `+`, `-`;
- binary arithmetic operators: `+`, `-`, `*`, `/`, and `//`;
- comparison operators: `==`, `!=`, `<`, `<=`, `>`, `>=`;
- bitwise binary operators: `&`, `|`, `^`.

Binary operators can be used as an operation between two tensors, between a tensor and a scalar, or between a tensor and a constant. By tensor we mean the output of a DALI operator (either a regular one or another arithmetic operator). Unary operators work only with tensor inputs.

We will focus on binary arithmetic operators with tensor, constant, and scalar operands. The detailed type promotion rules for comparison and bitwise operators, as well as further arithmetic examples, are covered in the **Supported operations** section of the documentation.
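As a preview of the element-wise semantics, here is a plain NumPy sketch (an analogy only, not DALI code; in DALI the same expressions are written on operator outputs inside the pipeline):

```python
import numpy as np

# NumPy analogy of DALI's element-wise binary operators
l = np.int32([[42, 7, 0], [0, 0, 0]])
r = np.int32([[3, 3, 3], [1, 3, 5]])

print(l + r)   # element-wise sum
print(l // r)  # element-wise floor division
```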

### Prepare the test pipeline

First, we will prepare the helper code so we can easily manipulate the types and values that will appear as tensors in the DALI pipeline.

We use `from __future__ import division` to allow `/` and `//` to work as true division and floor division operators. We will use NumPy as the source of the custom data, and we also need to import several things from DALI to create the Pipeline and use the ExternalSource operator.

```
[1]:
```

```
from __future__ import division
import numpy as np
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
from nvidia.dali.types import Constant
```

### Defining the data

As we are dealing with binary operators, we need two inputs. We will create a simple helper function that returns two batches of hardcoded data, stored as `np.int32`. In a real scenario, the data processed by DALI arithmetic operators would be tensors produced by other operators, containing images, video sequences, or other data.

You can experiment by changing those values or adjusting the `get_data()` function to use different input data. Keep in mind that the shapes of both inputs need to match, as these are element-wise operations.

```
[2]:
```

```
left_magic_values = [
    [[42, 7, 0], [0, 0, 0]],
    [[5, 10, 15], [10, 100, 1000]]
]

right_magic_values = [
    [[3, 3, 3], [1, 3, 5]],
    [[1, 5, 5], [1, 1, 1]]
]

batch_size = len(left_magic_values)

def convert_batch(batch):
    return [np.int32(tensor) for tensor in batch]

def get_data():
    return (convert_batch(left_magic_values), convert_batch(right_magic_values))
```
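As a standalone sanity check (plain NumPy, outside of DALI), we can verify that corresponding samples in the two batches have matching shapes and the expected dtype:

```python
import numpy as np

# the same hardcoded batches as above, converted to np.int32 samples
left = [np.int32([[42, 7, 0], [0, 0, 0]]),
        np.int32([[5, 10, 15], [10, 100, 1000]])]
right = [np.int32([[3, 3, 3], [1, 3, 5]]),
         np.int32([[1, 5, 5], [1, 1, 1]])]

for l, r in zip(left, right):
    # element-wise operators require matching shapes per sample
    assert l.shape == r.shape
    assert l.dtype == np.int32
print("all sample shapes match")
```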

## Operating on tensors

### Defining the pipeline

The next step is to define our pipeline. The data will be obtained from the `get_data` function and made available to the pipeline through `ExternalSource`.

Note that we do not need to instantiate any additional operators; we can use regular Python arithmetic expressions on the results of other operators in the `define_graph` step.

Let’s manipulate the source data by adding, multiplying, and dividing it. `define_graph` will return both our data inputs and the results of applying arithmetic operations to them.

```
[3]:
```

```
class ArithmeticPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(ArithmeticPipeline, self).__init__(batch_size, num_threads, device_id)
self.source = ops.ExternalSource(get_data, num_outputs = 2)
def define_graph(self):
l, r = self.source()
sum_result = l + r
mul_result = l * r
div_result = l // r
return l, r, sum_result, mul_result, div_result
```

### Running the pipeline

Let’s build and run our pipeline:

```
[4]:
```

```
pipe = ArithmeticPipeline(batch_size = batch_size, num_threads = 2, device_id = 0)
pipe.build()
out = pipe.run()
```

Now it’s time to display the results:

```
[5]:
```

```
def examine_output(pipe_out):
    l = pipe_out[0].as_array()
    r = pipe_out[1].as_array()
    sum_out = pipe_out[2].as_array()
    mul_out = pipe_out[3].as_array()
    div_out = pipe_out[4].as_array()
    print("{}\n+\n{}\n=\n{}\n\n".format(l, r, sum_out))
    print("{}\n*\n{}\n=\n{}\n\n".format(l, r, mul_out))
    print("{}\n//\n{}\n=\n{}\n\n".format(l, r, div_out))

examine_output(out)
```

```
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
+
[[[3 3 3]
[1 3 5]]
[[1 5 5]
[1 1 1]]]
=
[[[ 45 10 3]
[ 1 3 5]]
[[ 6 15 20]
[ 11 101 1001]]]
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
*
[[[3 3 3]
[1 3 5]]
[[1 5 5]
[1 1 1]]]
=
[[[ 126 21 0]
[ 0 0 0]]
[[ 5 50 75]
[ 10 100 1000]]]
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
//
[[[3 3 3]
[1 3 5]]
[[1 5 5]
[1 1 1]]]
=
[[[ 14 2 0]
[ 0 0 0]]
[[ 5 2 3]
[ 10 100 1000]]]
```

As we can see, each resulting tensor is obtained by applying the arithmetic operation between corresponding elements of its inputs.

The shapes of the arguments to arithmetic operators must match (with an exception for scalar tensor inputs, described in the next section); otherwise we will get an error.
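To see why matching shapes matter, here is a NumPy sketch of the failure mode (an analogy only; DALI raises its own, analogous error when shapes in the graph do not match):

```python
import numpy as np

a = np.int32([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = np.int32([[1, 2], [3, 4]])        # shape (2, 2)

try:
    a + b                             # incompatible shapes
except ValueError as e:
    print("shape mismatch:", e)
```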

## Constant and scalar operands

Until now we considered only tensor inputs of matching shapes as the arguments of arithmetic operators. DALI also allows one of the operands to be a constant or a batch of scalars. They can appear on either side of a binary expression.

### Constants

In the `define_graph` step, a constant operand for an arithmetic operator can be a value of Python’s `int` or `float` type used directly, or such a value wrapped in `nvidia.dali.types.Constant`. An operation between a tensor and a constant results in the constant being broadcast to all elements of the tensor.

*Note: Currently all values of integral constants are passed internally to DALI as int32 and all values of floating point constants are passed to DALI as float32.*

Python `int` values will be treated as `int32` and `float` values as `float32` with regard to type promotions.

The DALI `Constant` can be used to indicate other types. It accepts a `DALIDataType` enum value as its second argument and has convenience member functions like `.uint8()` or `.float32()` that can be used for conversions.
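For instance, `Constant(15).float32() - r` (used in the pipeline below) produces a `float32` result because the constant is explicitly typed. A rough NumPy sketch of that computation; note that NumPy’s own promotion rules differ from DALI’s (NumPy would promote `int32` and `float32` to `float64`), so we cast explicitly to mirror the DALI result:

```python
import numpy as np

r = np.int32([[3, 3, 3], [1, 3, 5]])

# emulate DALI's int32 vs. float32 promotion (result: float32)
# with an explicit cast, since NumPy's own rules would give float64
sub_15 = (np.float32(15) - r).astype(np.float32)
print(sub_15.dtype)
print(sub_15)
```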

### Using the Constants

Let’s adjust the Pipeline to utilize constants first.

```
[6]:
```

```
class ArithmeticConstantsPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(ArithmeticConstantsPipeline, self).__init__(batch_size, num_threads, device_id)
        self.source = ops.ExternalSource(get_data, num_outputs = 2)

    def define_graph(self):
        l, r = self.source()
        add_200 = l + 200
        mul_075 = l * 0.75
        sub_15 = Constant(15).float32() - r
        return l, r, add_200, mul_075, sub_15
```

```
[7]:
```

```
pipe = ArithmeticConstantsPipeline(batch_size = batch_size, num_threads = 2, device_id = 0)
pipe.build()
out = pipe.run()
```

Now it’s time to display the results:

```
[8]:
```

```
def examine_output(pipe_out):
    l = pipe_out[0].as_array()
    r = pipe_out[1].as_array()
    add_200 = pipe_out[2].as_array()
    mul_075 = pipe_out[3].as_array()
    sub_15 = pipe_out[4].as_array()
    print("{}\n+ 200 =\n{}\n\n".format(l, add_200))
    print("{}\n* 0.75 =\n{}\n\n".format(l, mul_075))
    print("15 -\n{}\n=\n{}\n\n".format(r, sub_15))

examine_output(out)
```

```
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
+ 200 =
[[[ 242 207 200]
[ 200 200 200]]
[[ 205 210 215]
[ 210 300 1200]]]
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
* 0.75 =
[[[ 31.5 5.25 0. ]
[ 0. 0. 0. ]]
[[ 3.75 7.5 11.25]
[ 7.5 75. 750. ]]]
15 -
[[[3 3 3]
[1 3 5]]
[[1 5 5]
[1 1 1]]]
=
[[[12. 12. 12.]
[14. 12. 10.]]
[[14. 10. 10.]
[14. 14. 14.]]]
```

As we can see, the constant value is applied to all elements of all tensors in the batch.

### Dynamic scalars

It is sometimes useful to evaluate an expression with one argument being a tensor and the other being a scalar. If the scalar value is constant throughout the execution of the pipeline, `types.Constant` can be used. When dynamic scalar values are needed, they can be constructed as degenerate 1D tensors with shape `{1}`. If DALI encounters such a tensor, it will broadcast it to match the shape of the tensor argument. Note that DALI operates on batches, and as such the scalars are also supplied as batches, with each scalar operand being used with the other operands at the same index in the batch.
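The per-sample broadcast can be sketched in NumPy (an analogy only; in DALI the broadcast happens inside the pipeline):

```python
import numpy as np

batch = [np.int32([[42, 7, 0], [0, 0, 0]]),
         np.int32([[5, 10, 15], [10, 100, 1000]])]
scalars = [np.int32([1]), np.int32([2])]  # one shape-{1} "scalar" per sample

# each scalar is broadcast to the shape of the sample at the same batch index
results = [t + s for t, s in zip(batch, scalars)]
print(results[0])
print(results[1])
```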

### Using scalar tensors

We will use an `ExternalSource` to generate a sequence of numbers, which will then be added to the tensor operands.

```
[9]:
```

```
class ArithmeticScalarsPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(ArithmeticScalarsPipeline, self).__init__(batch_size, num_threads, device_id)
        # we only need one tensor input
        self.tensor_source = ops.ExternalSource(lambda: get_data()[0])
        # a batch of scalars from 1 to batch_size, one shape-{1} tensor per sample
        scalars = np.arange(1, batch_size + 1).reshape([batch_size, 1])
        self.scalar_source = ops.ExternalSource(lambda: scalars)

    def define_graph(self):
        tensors = self.tensor_source()
        scalars = self.scalar_source()
        return tensors, scalars, tensors + scalars
```

Now it’s time to build and run the pipeline. Each tensor in the batch will have its corresponding scalar added to it.

```
[10]:
```

```
pipe = ArithmeticScalarsPipeline(batch_size = batch_size, num_threads = 2, device_id = 0)
pipe.build()
out = pipe.run()
```

```
[11]:
```

```
def examine_output(pipe_out):
    t = pipe_out[0].as_array()
    scalars = pipe_out[1].as_array()
    shifted = pipe_out[2].as_array()
    print("{}\n+\n{}\n=\n{}".format(t, scalars, shifted))

examine_output(out)
```

```
[[[ 42 7 0]
[ 0 0 0]]
[[ 5 10 15]
[ 10 100 1000]]]
+
[[1]
[2]]
=
[[[ 43 8 1]
[ 1 1 1]]
[[ 7 12 17]
[ 12 102 1002]]]
```

Notice how the first scalar in the batch (1) is added to all elements in the first tensor and the second scalar (2) to the second tensor.