{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# DALI Binary Arithmetic Operators - Type Promotions\n", "\n", "This example describes the rules regarding the type promotions for binary arithmetic operators in DALI. See \"DALI expressions and arithmetic operators\" for more information about using arithmetic operators in DALI." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare the Test Pipeline\n", "\n", "1. Prepare the helper code, so we can easily manipulate the types and values that will appear as tensors in the DALI pipeline.\n", "\n", "2. Numpy will be used as the source for the custom provided data, so we need to import several things from DALI, to create the pipeline and use the `ExternalSource` operator. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from nvidia.dali.pipeline import Pipeline\n", "import nvidia.dali.ops as ops\n", "import nvidia.dali.fn as fn\n", "import nvidia.dali.types as types\n", "from nvidia.dali.types import Constant\n", "\n", "batch_size = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining the Pipeline\n", "\n", "To define the data, because there are binary operators, we need two inputs.\n", "\n", "1. Create a simple helper function that returns two numpy arrays of given numpy types with arbitrary selected values.\n", "\n", " This simplifies the manipulation of types. In an actual scenario the data that is processed by the DALI arithmetic operators will be tensors that are produced by another operator that contains some images, video sequences, or other data.\n", "\n", "**Note**: The shapes of both inputs need to match since operations are performed element-wise. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "left_magic_values = [42, 8]\n", "right_magic_values = [9, 2]\n", "\n", "\n", "def get_data(left_type, right_type):\n", " return ([left_type(left_magic_values)], [right_type(right_magic_values)])\n", "\n", "\n", "batch_size = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. To define the pipeline, the data will be obtained from the get_data function and made available to the pipeline through ExternalSource.\n", "\n", "**Note**: You do not need to instantiate any additional operators, we can use regular Python arithmetic expressions on the results of other operators.\n", "\n", "3. For convenience, wrap the arithmetic operations usage in a lambda called `operation`, which is specified when creating the pipeline." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def dali_type(np_type):\n", " return types.to_dali_type(np.dtype(np_type).name)\n", "\n", "\n", "def arithmetic_pipeline(operation, left_type, right_type):\n", " pipe = Pipeline(batch_size=batch_size, num_threads=4, device_id=0)\n", " with pipe:\n", " l, r = fn.external_source(\n", " source=lambda: get_data(left_type, right_type),\n", " num_outputs=2,\n", " dtype=[dali_type(left_type), dali_type(right_type)],\n", " )\n", " pipe.set_outputs(l, r, operation(l, r))\n", "\n", " return pipe" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Type Promotion Rules\n", "\n", "Type promotions for binary operators are described below. The type promotion rules are commutative, and they apply to `+`, `-`, `*`, and `//`. 
"\n", "| Operand Type | Operand Type | Result Type | Additional Conditions |\n", "|:------------:|:------------:|:-----------:| --------------------- |\n", "| T | T | T | |\n", "| floatX | T | floatX | where T is not a float |\n", "| floatX | floatY | float(max(X, Y)) | |\n", "| intX | intY | int(max(X, Y)) | |\n", "| uintX | uintY | uint(max(X, Y)) | |\n", "| intX | uintY | int2Y | if X <= Y |\n", "| intX | uintY | intX | if X > Y |\n", "\n", "The `bool` type is the smallest unsigned integer type and is treated as `uint1` with respect to the table above.\n", "\n", "The bitwise binary `|`, `&`, and `^` operations abide by the same type promotion rules as the arithmetic binary operations, but their inputs are restricted to integral types (`bool` included).\n", "\n", "Only multiplication `*` and bitwise operations `|`, `&`, `^` can accept two `bool` inputs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using the Pipeline\n", "\n", "1. Create a pipeline that adds two tensors of type `uint8`, run it, and check the results:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[42 8]] + [[9 2]] = [[51 10]]; \n", "\twith types uint8 + uint8 -> uint8\n", "\n" ] } ], "source": [ "def build_and_run(pipe, op_name):\n", "    pipe.build()\n", "    pipe_out = pipe.run()\n", "    l = pipe_out[0].as_array()\n", "    r = pipe_out[1].as_array()\n", "    out = pipe_out[2].as_array()\n", "    print(\n", "        \"{} {} {} = {}; \\n\\twith types {} {} {} -> {}\\n\".format(\n", "            l, op_name, r, out, l.dtype, op_name, r.dtype, out.dtype\n", "        )\n", "    )\n", "\n", "\n", "pipe = arithmetic_pipeline((lambda x, y: x + y), np.uint8, np.uint8)\n", "build_and_run(pipe, \"+\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By generalizing the example above, you can check how all of the operators behave with different type combinations.\n", "Only a few combinations are used below; substitute `np_types` in the loops to see all of the possible ones. You can also set additional numpy printing options to make the output better aligned." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "np.set_printoptions(precision=2)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[42 8]] + [[9 2]] = [[51 10]]; \n", "\twith types uint8 + uint8 -> uint8\n", "\n", "[[42 8]] + [[9 2]] = [[51 10]]; \n", "\twith types uint8 + int32 -> int32\n", "\n", "[[42 8]] + [[9. 2.]] = [[51. 10.]]; \n", "\twith types uint8 + float32 -> float32\n", "\n", "[[42 8]] - [[9 2]] = [[33 6]]; \n", "\twith types uint8 - uint8 -> uint8\n", "\n", "[[42 8]] - [[9 2]] = [[33 6]]; \n", "\twith types uint8 - int32 -> int32\n", "\n", "[[42 8]] - [[9. 2.]] = [[33. 6.]]; \n", "\twith types uint8 - float32 -> float32\n", "\n", "[[42 8]] * [[9 2]] = [[122 16]]; \n", "\twith types uint8 * uint8 -> uint8\n", "\n", "[[42 8]] * [[9 2]] = [[378 16]]; \n", "\twith types uint8 * int32 -> int32\n", "\n", "[[42 8]] * [[9. 2.]] = [[378. 16.]]; \n", "\twith types uint8 * float32 -> float32\n", "\n", "[[42 8]] / [[9 2]] = [[4.67 4. ]]; \n", "\twith types uint8 / uint8 -> float32\n", "\n", "[[42 8]] / [[9 2]] = [[4.67 4. ]]; \n",
]]; \n", "\twith types uint8 / int32 -> float32\n", "\n", "[[42 8]] / [[9. 2.]] = [[4.67 4. ]]; \n", "\twith types uint8 / float32 -> float32\n", "\n", "[[42 8]] // [[9 2]] = [[4 4]]; \n", "\twith types uint8 // uint8 -> uint8\n", "\n", "[[42 8]] // [[9 2]] = [[4 4]]; \n", "\twith types uint8 // int32 -> int32\n", "\n", "[[42 8]] // [[9. 2.]] = [[4.67 4. ]]; \n", "\twith types uint8 // float32 -> float32\n", "\n", "[[42 8]] | [[9 2]] = [[43 10]]; \n", "\twith types uint8 | uint8 -> uint8\n", "\n", "[[42 8]] | [[9 2]] = [[43 10]]; \n", "\twith types uint8 | int32 -> int32\n", "\n", "[[42 8]] & [[9 2]] = [[8 0]]; \n", "\twith types uint8 & uint8 -> uint8\n", "\n", "[[42 8]] & [[9 2]] = [[8 0]]; \n", "\twith types uint8 & int32 -> int32\n", "\n", "[[42 8]] ^ [[9 2]] = [[35 10]]; \n", "\twith types uint8 ^ uint8 -> uint8\n", "\n", "[[42 8]] ^ [[9 2]] = [[35 10]]; \n", "\twith types uint8 ^ int32 -> int32\n", "\n" ] } ], "source": [ "arithmetic_operations = [\n", " ((lambda x, y: x + y), \"+\"),\n", " ((lambda x, y: x - y), \"-\"),\n", " ((lambda x, y: x * y), \"*\"),\n", " ((lambda x, y: x / y), \"/\"),\n", " ((lambda x, y: x // y), \"//\"),\n", "]\n", "\n", "bitwise_operations = [\n", " ((lambda x, y: x | y), \"|\"),\n", " ((lambda x, y: x & y), \"&\"),\n", " ((lambda x, y: x ^ y), \"^\"),\n", "]\n", "\n", "np_types = [\n", " np.int8,\n", " np.int16,\n", " np.int32,\n", " np.int64,\n", " np.uint8,\n", " np.uint16,\n", " np.uint32,\n", " np.uint64,\n", " np.float32,\n", " np.float64,\n", "]\n", "\n", "for op, op_name in arithmetic_operations:\n", " for left_type in [np.uint8]:\n", " for right_type in [np.uint8, np.int32, np.float32]:\n", " pipe = arithmetic_pipeline(op, left_type, right_type)\n", " build_and_run(pipe, op_name)\n", "\n", "for op, op_name in bitwise_operations:\n", " for left_type in [np.uint8]:\n", " for right_type in [np.uint8, np.int32]:\n", " pipe = arithmetic_pipeline(op, left_type, right_type)\n", " build_and_run(pipe, op_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Constants\n", "\n", "Instead of operating only on Tensor data, DALI expressions can also work with constants. The constants can be values of the Python int and float types that are used directly, or those values are wrapped in `nvidia.dali.types.Constant`. The operation on the tensor and constant results in the constant that is being broadcasted to all tensor elements. \n", "\n", "**Note**: The same constant is used with all samples in the batch. Currently, the values of integral constants are passed to DALI as `int32`, and the values of the floating point constants are passed to DALI as `float32`. \n", "\n", "Regarding the promotions type, the Python int values will be treated as `int32`, and the float as `float32`. \n", "\n", "The DALI `Constant` can be used to indicate other types. It accepts `DALIDataType` enum values as second argument and has convenience member functions like `.uint8()` or `.float32()` that can be used for conversions.\n", "\n", "1. The expressions in this examples consist of a tensor and a constant, so you can adjust your previous pipeline and the helper functions. These functions need to generate only one tensor." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def arithmetic_constant_pipeline(operation, tensor_data_type):\n", " pipe = Pipeline(batch_size=batch_size, num_threads=4, device_id=0)\n", " with pipe:\n", " t = fn.external_source(\n", " source=lambda: get_data(tensor_data_type, tensor_data_type)[0],\n", " dtype=dali_type(tensor_data_type),\n", " )\n", " pipe.set_outputs(t, operation(t))\n", "\n", " return pipe\n", "\n", "\n", "def build_and_run_with_const(pipe, op_name, constant, is_const_left=False):\n", " pipe.build()\n", " pipe_out = pipe.run()\n", " t_in = pipe_out[0].as_array()\n", " t_out = pipe_out[1].as_array()\n", " if is_const_left:\n", " print(\n", " \"{} {} {} = \\n{}; \\n\\twith types {} {} {} -> {}\\n\".format(\n", " constant, op_name, t_in, t_out, type(constant), op_name, t_in.dtype, t_out.dtype\n", " )\n", " )\n", " else:\n", " print(\n", " \"{} {} {} = \\n{}; \\n\\twith types {} {} {} -> {}\\n\".format(\n", " t_in, op_name, constant, t_out, t_in.dtype, op_name, type(constant), t_out.dtype\n", " )\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `ArithmeticConstantsPipeline` can be parametrized with a function that takes the only tensor and returns the result of arithmetic operation between that tensor and a constant. The print message was also adjusted.\n", "\n", "2. Check the examples that were mentioned at the beginning:\n", " - `int`\n", " - `float`\n", " - `nvidia.dali.types.Constant`." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[42 8]] + 10 = \n", "[[52 18]]; \n", "\twith types uint8 + -> int32\n", "\n", "[[42. 8.]] + 10 = \n", "[[52. 18.]]; \n", "\twith types float32 + -> float32\n", "\n", "[[42 8]] + 42.3 = \n", "[[84.3 50.3]]; \n", "\twith types uint8 + -> float32\n", "\n", "[[42. 8.]] + 42.3 = \n", "[[84.3 50.3]]; \n", "\twith types float32 + -> float32\n", "\n" ] } ], "source": [ "constant = 10\n", "pipe = arithmetic_constant_pipeline((lambda x: x + constant), np.uint8)\n", "build_and_run_with_const(pipe, \"+\", constant)\n", "\n", "constant = 10\n", "pipe = arithmetic_constant_pipeline((lambda x: x + constant), np.float32)\n", "build_and_run_with_const(pipe, \"+\", constant)\n", "\n", "\n", "constant = 42.3\n", "pipe = arithmetic_constant_pipeline((lambda x: x + constant), np.uint8)\n", "build_and_run_with_const(pipe, \"+\", constant)\n", "\n", "constant = 42.3\n", "pipe = arithmetic_constant_pipeline((lambda x: x + constant), np.float32)\n", "build_and_run_with_const(pipe, \"+\", constant)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The value of the constant is applied to all the elements of the tensor to which it is added.\n", "\n", "3. Check how to use the DALI Constant wrapper.\n", "\n", "4. Passing an `int` or `float` to a DALI Constant marks it as `int32` or `float32` respectively" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[42 8]] * 10:DALIDataType.INT32 = \n", "[[420 80]]; \n", "\twith types uint8 * -> int32\n", "\n", "10.0:DALIDataType.FLOAT * [[42 8]] = \n", "[[420. 
80.]]; \n", "\twith types * uint8 -> float32\n", "\n" ] } ], "source": [ "constant = Constant(10)\n", "pipe = arithmetic_constant_pipeline((lambda x: x * constant), np.uint8)\n", "build_and_run_with_const(pipe, \"*\", constant)\n", "\n", "\n", "constant = Constant(10.0)\n", "pipe = arithmetic_constant_pipeline((lambda x: constant * x), np.uint8)\n", "build_and_run_with_const(pipe, \"*\", constant, True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "5. Explicitly specify the type as a second argument, or use convenience conversion member functions." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[42 8]] * 10:DALIDataType.UINT8 = \n", "[[164 80]]; \n", "\twith types uint8 * -> uint8\n", "\n", "10:DALIDataType.UINT8 * [[42 8]] = \n", "[[164 80]]; \n", "\twith types * uint8 -> uint8\n", "\n", "10:DALIDataType.UINT8 * [[42 8]] = \n", "[[164 80]]; \n", "\twith types * uint8 -> uint8\n", "\n" ] } ], "source": [ "constant = Constant(10, types.DALIDataType.UINT8)\n", "pipe = arithmetic_constant_pipeline((lambda x: x * constant), np.uint8)\n", "build_and_run_with_const(pipe, \"*\", constant)\n", "\n", "\n", "constant = Constant(10.0, types.DALIDataType.UINT8)\n", "pipe = arithmetic_constant_pipeline((lambda x: constant * x), np.uint8)\n", "build_and_run_with_const(pipe, \"*\", constant, True)\n", "\n", "\n", "constant = Constant(10).uint8()\n", "pipe = arithmetic_constant_pipeline((lambda x: constant * x), np.uint8)\n", "build_and_run_with_const(pipe, \"*\", constant, True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Treating Tensors as Scalars\n", "\n", "If one of the tensors is considered a scalar input, the same rules apply.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 2 }