
DFTS: Deep Feature Transmission Simulator


DFTS is a simulator intended for studying deep feature transmission over unreliable channels. If you use this simulator in your work, please cite the following paper:

H. Unnibhavi, H. Choi, S. R. Alvar, and I. V. Bajić, "DFTS: Deep Feature Transmission Simulator," demo paper at IEEE MMSP'18, Vancouver, BC, Aug. 2018. [link]

Overview

A recent study has shown that the power usage and inference latency of deep AI models can be minimized by splitting the model into two parts:

  • One that runs on the mobile device
  • The other that runs in the cloud

[Figure: DFTS system overview]

Our simulator is developed in Python and runs with Keras models. The user can choose a Keras model and specify the following (a conceptual sketch of the split follows this list):

  • Layer at which the model is split
  • The transmission parameters (currently supported):
    • n-bit quantization
    • channel
    • error concealment techniques
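
To make the split concrete, here is a minimal sketch, assuming TensorFlow's Keras API and the vgg16 / block1_pool example used later in this README, of cutting a model at a named layer into a mobile-side part and a cloud-side part, with a simple n-bit quantizer standing in for the transmission step. This is only an illustration of the idea; DFTS performs the split and the channel simulation internally, driven by the configuration files.

# Illustrative sketch only (not DFTS's internal API): split a Keras model at a
# named layer and pass quantized features from the "mobile" part to the "cloud" part.
import numpy as np
import tensorflow as tf

full = tf.keras.applications.VGG16(weights=None)   # any Keras model
split_name = "block1_pool"                          # the SplitLayer value
split_layer = full.get_layer(split_name)

# Mobile-side sub-model: input image -> feature tensor at the split layer.
mobile = tf.keras.Model(full.input, split_layer.output)

# Cloud-side sub-model: re-apply the remaining layers to a new input whose
# shape matches the split tensor (works for chain-like models such as VGG16).
idx = full.layers.index(split_layer)
cloud_in = tf.keras.Input(shape=tuple(split_layer.output.shape[1:]))
x = cloud_in
for layer in full.layers[idx + 1:]:
    x = layer(x)
cloud = tf.keras.Model(cloud_in, x)

# Stand-in for transmission: simple n-bit uniform quantization of the features
# (illustrative only; DFTS's quantizer and channel models live in the simulator).
features = mobile.predict(np.random.rand(1, 224, 224, 3).astype("float32"))
n_bits = 8
fmin, fmax = features.min(), features.max()
levels = 2 ** n_bits - 1
received = np.round((features - fmin) / (fmax - fmin) * levels) / levels
received = received * (fmax - fmin) + fmin

predictions = cloud.predict(received)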

Creating your environment

First clone this repository onto your local machine.

git clone https://github.com/SFU-Multimedia-Lab/DFTS.git

Create and activate a virtual environment on your machine, and navigate to the directory containing the repository.

Install the required dependencies.

pip install -r requirements.txt

If the installation fails, you can delete the version pins (everything from '==' onward) after each library name in requirements.txt and run the command again.

Usage

The main components the user interacts with are the configuration files:

  • params.yml
  • taskParams.yml

After initializing these with the desired configurations, run

python main.py -p params.yml

The params configuration file specifies the following parameters; each is listed below with its description and an example value:

Task (integer)
  • value: the task the model is designed for (example: 0 for classification, 1 for object detection)
  • epochs: number of times the Monte Carlo simulation is run (example: any integer value)

TestInput
  • dataset: dataset in use (example: imagenet)
  • batch_size: integer denoting the number of samples per batch (example: 8)
  • testdir:
    • images: list containing the path to the directory of test images (example: ['../imageDirs'])
    • annotations: list containing paths to the annotations directory, for object detection only (example: ['../annoDirs'])
    • testNames: list containing paths to text files containing the names of the images (example: ['../test.txt'])

Model
  • kerasmodel: an official Keras model or a path to an h5 file containing both weights and architecture (example: vgg16 or '../model.h5')
  • customObjects: custom modules, classes, and functions used to construct the model (example: keras_layers.example, for each list)

SplitLayer
  • layer at which the model is split; must be one of the layer names used in the model (example: block1_pool, in the case of vgg16)

Transmission
  • rowsperpacket: number of rows of the feature map to be treated as one packet
  • quantization: number of bits and a boolean indicating whether quantization is included in this simulation (example: 8, True)
  • channel: a channel is selected by setting its include parameter to True and providing the corresponding channel parameters (example: to select randomLossChannel: 0, True); a conceptual sketch of such a loss channel follows this list
  • concealment: a packet loss concealment technique is chosen by setting its include parameter to True and providing the corresponding parameters (example: to select linear concealment, include: True)

OutputDir
  • directory where the results of the simulation are stored (example: '../simData')
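
As a rough illustration of what a lossy channel with bursty packet loss does (the example above names randomLossChannel, and a Gilbert channel appears in the output example further below), here is a minimal two-state, Gilbert-style loss model. It is a textbook sketch, not DFTS's own channel code, and the function and parameter names are invented for this example.

# Illustrative two-state (Gilbert-style) packet-loss model: each packet is
# either delivered or dropped, and losses tend to occur in bursts.
import random

def gilbert_loss_trace(num_packets, loss_rate=0.10, avg_burst_len=1.0, seed=0):
    """Return a list of booleans: True means the packet is lost."""
    rng = random.Random(seed)
    # Transition probabilities derived from the target loss rate and the mean
    # burst length (standard Gilbert-model algebra).
    p_bad_to_good = 1.0 / avg_burst_len                       # leave the "bad" state
    p_good_to_bad = loss_rate * p_bad_to_good / (1.0 - loss_rate)
    lost, in_bad = [], False
    for _ in range(num_packets):
        if in_bad:
            in_bad = rng.random() >= p_bad_to_good            # stay in the bad state?
        else:
            in_bad = rng.random() < p_good_to_bad             # enter the bad state?
        lost.append(in_bad)
    return lost

trace = gilbert_loss_trace(1000)
print(f"simulated loss rate: {sum(trace) / len(trace):.3f}")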

The taskParams configuration file consists of the following parameters for each selected task:

  • reshapeDims: list denoting the reshape dimensions of the images
  • num_classes: integer denoting the number of classes in the dataset
  • metrics: a dictionary containing the metrics the model needs to be evaluated against

Currently, only the parameters provided in the configuration files are supported. The simulation will break if any of the parameter names are changed.

Sample configuration files for classification and object detection are provided in the sample folder.
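
Since the parameter names must match exactly, a quick optional sanity check is to parse your edited file with PyYAML (assuming it is installed; it is the usual library for .yml files) before starting a long run, for example:

# Hypothetical check, not part of DFTS: load the config you will pass to
# main.py via -p and print its top-level keys, so you can compare them with
# the sample files before launching a simulation.
import yaml

with open("params.yml") as f:
    params = yaml.safe_load(f)
print(list(params.keys()))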

Examples of how to organize data for input to the simulator can be found as follows:

A small subset of these images can be found in the repository itself. First switch to the test-images branch by executing the following:

git checkout test-images

Navigate to the sampleImages folder contained in the sample directory.

Simulator output

The simulator outputs timing and indexing information to the terminal.

Simulator data

The data produced by the simulator is stored in the specified directory as a NumPy array in a .npy file. The name of the file reflects the parameters of the simulation.

For example, if the following parameters are used:

  • splitlayer: block1_pool
  • Gilbert channel with 10 percent loss and a burst length of 1
  • 8 bit quantization
  • concealment included

The resulting file name is:

block1_pool_8BitQuant_EC.npy
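
To inspect the results afterwards, the file can be read back with NumPy; a minimal sketch, assuming only that the output is a .npy file as described above (its shape and contents depend on the task and metrics you configured):

# Load the simulator output; allow_pickle=True covers the case where the
# saved array contains Python objects rather than plain numbers.
import numpy as np

results = np.load("block1_pool_8BitQuant_EC.npy", allow_pickle=True)
print(results.shape, results.dtype)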

Contributing

We appreciate all contributions. If you are planning to contribute bug fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might result in it being rejected, because we might be taking the core in a different direction than you are aware of.