Step 8: Parallel Computing

Running computations and processing data in parallel.

How to Accelerate Deep Lake Workflows with Parallel Computing

Deep Lake enables you to easily run computations in parallel and significantly accelerate your data processing workflows. This example primarily focuses on parallel dataset uploading.

Other parallel computing use cases, such as dataset transformations, are covered in the Data Processing Using Parallel Computing tutorial.

Parallel compute using Deep Lake has two core steps:

  1. Define a function or pipeline that will run in parallel, and

  2. Evaluate the function using the appropriate inputs and outputs (see the schematic sketch below).
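
In schematic form, the pattern looks like this (a sketch only; my_fn, element_in, and input_iterable are placeholder names, not part of the Deep Lake API):

import deeplake

@deeplake.compute
def my_fn(element_in, sample_out):
    # Step 1: the decorated function maps one input element to an output sample
    # ... populate sample_out from element_in ...
    return sample_out

# Step 2: evaluate it over an input iterable, writing into an output dataset:
# my_fn().eval(input_iterable, ds_out, num_workers=2)

The rest of this page walks through a concrete version of both steps.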

Defining the parallel computing function

The first step is to define a function that will run in parallel by decorating it using @deeplake.compute. In the example below, file_to_deeplake converts data from files into Deep Lake format, just like in Step 2: Creating Deep Lake Datasets. If you have not completed Step 2, please download and unzip the animals.zip example image classification dataset first.

import deeplake
from PIL import Image
import numpy as np
import os

@deeplake.compute
def file_to_deeplake(file_name, sample_out, class_names):
    ## The first two arguments are always required:
    #     1st argument: an element of the input iterable (list, dataset, array, ...)
    #     2nd argument: a dataset sample to write the output into
    # Any further arguments are optional
    
    # Find the label number corresponding to the file
    label_text = os.path.basename(os.path.dirname(file_name))
    label_num = class_names.index(label_text)
    
    # Append the label and image to the output sample
    sample_out.labels.append(np.uint32(label_num))
    sample_out.images.append(deeplake.read(file_name))
    
    return sample_out

In all functions decorated using @deeplake.compute, the first argument must be a single element of the input iterable that is being processed in parallel. In this case, that is a filename file_name, because file_to_deeplake reads image files and populates data in the dataset's tensors.

The second argument is a dataset sample sample_out, which can be operated on with syntax similar to dataset objects, such as sample_out.append(...), sample_out.extend(...), etc.
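
A single input element may also produce several output samples: extending a tensor inside the decorated function appends one sample per element of the passed list. A minimal sketch (pairs_to_deeplake and its pair-of-labels input are hypothetical; the labels tensor matches the dataset created below):

import deeplake
import numpy as np

@deeplake.compute
def pairs_to_deeplake(label_pair, sample_out):
    # Each input element here is a pair of label numbers; extend adds
    # one sample to the 'labels' tensor per element of the list
    sample_out.labels.extend([np.uint32(x) for x in label_pair])
    return sample_out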

The function decorated using @deeplake.compute must return sample_out, which represents the data that is added or modified by that function.

Executing the parallel computation

To execute the parallel computation, you must define the dataset that will be modified.

ds = deeplake.empty('./animals_deeplake_transform') # Creates the dataset
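
If you re-run this step, the path may already exist from a previous run and deeplake.empty will raise an error. Assuming the old data can be discarded, one way to handle this is the overwrite flag:

ds = deeplake.empty('./animals_deeplake_transform', overwrite=True) # Replaces any existing dataset at this path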

Next, you define the input iterable that describes the information that will be operated on in parallel. In this case, that is a list of files files_list:

# Find the class_names and list of files that need to be uploaded
dataset_folder = './animals'

class_names = os.listdir(dataset_folder)

files_list = []
for dirpath, dirnames, filenames in os.walk(dataset_folder):
    for filename in filenames:
        files_list.append(os.path.join(dirpath, filename))
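
An equivalent way to collect the file paths, shown as a sketch (it assumes every file sits exactly one level below a class subfolder):

import glob
import os

files_list = glob.glob(os.path.join(dataset_folder, '*', '*'))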

You can now create the tensors for the dataset and run the parallel computation using the .eval syntax. When calling file_to_deeplake, pass only the optional arguments (here class_names) and skip the first two default arguments, file_name and sample_out, which are supplied automatically.

The input iterable files_list and the output dataset ds are passed to the .eval method as the first and second arguments, respectively.

with ds:
    ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg')
    ds.create_tensor('labels', htype = 'class_label', class_names = class_names)
    
    file_to_deeplake(class_names=class_names).eval(files_list, ds, num_workers = 2)
# Display the first image to verify the upload
Image.fromarray(ds.images[0].numpy())
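
As a quick sanity check (a sketch that only uses the dataset created above), you can confirm that one sample was written per input file:

# Each input file should have produced exactly one image and one label
assert len(ds.images) == len(files_list)
print(len(ds.labels), 'labels written; first label:', ds.labels[0].numpy())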

Congrats! You just created a dataset using parallel computing! 🎈

Additional parallel computing use cases, such as dataset transformations, can be found in the Data Processing Using Parallel Computing tutorial.
