{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Serialization\n", "\n", "## Overview\n", "\n", "This sample shows how to serialize the pipeline to a string.\n", "\n", "## Serialization\n", "\n", "In order to use C API or TensorFlow plugin (or just to save the pipeline with a model, so the training process is fully reproducible) we need to serialize the pipeline. \n", "\n", "Let us make a simple pipeline reading from MXNet recordIO format (for example of using other data formats please see other examples in [examples](.) directory." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from nvidia.dali.pipeline import Pipeline\n", "import nvidia.dali.ops as ops\n", "import nvidia.dali.types as types\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "base = \"/data/imagenet/train-480-val-256-recordio/\"\n", "idx_files = [base + \"train.idx\"]\n", "rec_files = [base + \"train.rec\"]\n", "\n", "\n", "class SerializedPipeline(Pipeline):\n", " def __init__(self, batch_size, num_threads, device_id, seed):\n", " super(SerializedPipeline, self).__init__(batch_size,\n", " num_threads,\n", " device_id,\n", " seed = seed)\n", " self.input = ops.MXNetReader(path = rec_files, index_path = idx_files)\n", " self.decode = ops.nvJPEGDecoder(device = \"mixed\", output_type = types.RGB)\n", " self.resize = ops.Resize(device = \"gpu\",\n", " image_type = types.RGB,\n", " interp_type = types.INTERP_LINEAR)\n", " self.cmnp = ops.CropMirrorNormalize(device = \"gpu\",\n", " output_dtype = types.FLOAT,\n", " crop = (224, 224),\n", " image_type = types.RGB,\n", " mean = [0., 0., 0.],\n", " std = [1., 1., 1.])\n", " self.res_uniform = ops.Uniform(range = (256.,480.))\n", "\n", " def define_graph(self):\n", " inputs, labels = self.input(name=\"Reader\")\n", " images = self.decode(inputs)\n", " images = self.resize(images, resize_shorter = self.res_uniform())\n", " output = self.cmnp(images)\n", " return (output, labels)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "batch_size = 16\n", "\n", "pipe = SerializedPipeline(batch_size=batch_size, num_threads=2, device_id = 0, seed = 12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will now serialize this pipeline, using `serialize` function of the `Pipeline` class." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "s = pipe.serialize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to deserialize our pipeline in Python, we need to create another pipeline, this time using the generic `Pipeline` class. We give the same seed to the new pipeline, in order to compare the results." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "pipe2 = Pipeline(batch_size = batch_size, num_threads = 2, device_id = 0, seed = 12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now use the serialized form of `pipe` object to make `pipe2` a copy of it." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "pipe2.deserialize_and_build(s)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can compare the results of the 2 pipelines - original and deserialized." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "pipe.build()\n", "original_pipe_out = pipe.run()\n", "serialized_pipe_out = pipe2.run()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def check_difference(batch_1, batch_2):\n", " return [np.sum(np.abs(batch_1.at(i) - batch_2.at(i))) for i in range(batch_size)]" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "original_images, _ = original_pipe_out\n", "serialized_images, _ = serialized_pipe_out" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0,\n", " 0.0]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "check_difference(original_images.as_cpu(), serialized_images.as_cpu())" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Both pipelines give exactly the same results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 2 }