
Managed Credentials

How to manage your credentials with Activeloop Platform


Managing your Credentials with Activeloop

Managing credentials in Deep Lake enables:

  • Access to the Deep Lake UI for datasets stored in your own cloud

  • Simpler access to Deep Lake datasets stored in your own cloud using the Python API

    • No need to continuously specify cloud access keys in Python

Managed Credentials

In order for the Deep Lake UI to access datasets or linked tensors stored in the user's cloud, Deep Lake must be able to authenticate and gain access to the respective cloud resources. Access can be provided using access keys, or using role-based access that grants Deep Lake permissions to specific cloud resources (see Provisioning Role-Based Access). Cloud credentials are created and managed in the Deep Lake UI.
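The sketch below contrasts the two approaches from the user's side. It is an illustration, not a definitive recipe: the key values are placeholders, the credential name 'my_creds_key' is hypothetical, and the commented calls assume the Deep Lake v3 Python API (deeplake.load accepts a creds dict for cloud paths).

```python
# Option A: raw access keys, passed every time the dataset is opened.
# All values below are placeholders, not real credentials.
creds = {
    'aws_access_key_id': 'AKIA...PLACEHOLDER',
    'aws_secret_access_key': 'SECRET...PLACEHOLDER',
}
# ds = deeplake.load('s3://my_bucket/dataset_name', creds=creds)

# Option B: managed credentials. The keys are stored with Activeloop under a
# name you choose, and code references only the Deep Lake path, so no secrets
# appear in source control.
# ds = deeplake.load('hub://my_org/dataset_name')
```

With Option B, rotating the underlying cloud keys requires no code changes, since only the managed-credential name is referenced.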

Connecting Deep Lake Datasets to the UI

Once a dataset is connected to Deep Lake, it is assigned a Deep Lake path hub://org_id/dataset_name, and it can be accessed using API tokens and managed credentials from Deep Lake, without continuously having to specify cloud credentials.

Connecting Datasets in the Python API

# Step 1: Create the dataset directly in the cloud using your own cloud creds
ds = deeplake.empty('s3://my_bucket/dataset_name', creds = {...})

# Step 2: Connect the dataset to Deep Lake and specify the managed credentials
# (creds_key) for accessing the data (see Managed Credentials above)
ds.connect(org_id = 'my_org', creds_key = 'my_creds_key')

# OR

ds.connect(dest_path = 'hub://my_org/dataset_name', creds_key = 'my_creds_key')

Specifying org_id creates the dataset in the specified org using the dataset_name from the cloud path.

Specifying the dest_path creates the dataset at the org_id and dataset_name from the specified path.
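The two calling conventions can be related with a small pure-Python helper (illustrative only, not part of the Deep Lake API): with org_id, the dataset_name is taken from the tail of the cloud path; with dest_path, both parts are spelled out explicitly.

```python
def resolve_hub_path(cloud_path, org_id=None, dest_path=None):
    """Illustrative helper: compute the hub:// path a connected dataset gets.

    Mirrors the two ds.connect() forms described above; not a Deep Lake API.
    """
    if dest_path is not None:
        # dest_path spells out org_id and dataset_name explicitly
        return dest_path
    # org_id form: dataset_name comes from the tail of the cloud path
    dataset_name = cloud_path.rstrip('/').split('/')[-1]
    return f'hub://{org_id}/{dataset_name}'
```

For example, resolve_hub_path('s3://my_bucket/dataset_name', org_id='my_org') and resolve_hub_path('s3://my_bucket/dataset_name', dest_path='hub://my_org/dataset_name') both yield 'hub://my_org/dataset_name'.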

Connecting Datasets in the Deep Lake UI

Datasets in Activeloop storage are automatically connected to Activeloop Platform. Datasets in the user's cloud can be connected to Activeloop Platform using the Python API above or in the UI. In order to visualize data in the Deep Lake browser application, it is also necessary to enable CORS in the bucket containing any source data.

Default Storage

By default, any dataset created using the Deep Lake path hub://org_id/dataset_name is stored in Activeloop storage. You may change the default storage location for Deep Lake paths to a location of your choice in the UI. Subsequently, all datasets created using Deep Lake paths will be stored at the specified location.

Using Managed Credentials with Linked Tensors

Managed credentials can be used for accessing data stored in linked tensors. Simply add the managed credentials to the dataset's creds_keys and assign them to each sample:

ds.create_tensor('images', htype = 'link[image]', sample_compression = 'jpeg')

ds.add_creds_key('my_creds_key', managed = True)

ds.images.append(deeplake.link(link_to_sample, creds_key = 'my_creds_key'))
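The bookkeeping behind creds_keys can be sketched in plain Python. This is a toy model with invented names, independent of the Deep Lake library: the dataset keeps a registry of named managed credentials, and each linked sample records which name to use when the data is fetched.

```python
class LinkedTensorSketch:
    """Toy model of creds_key bookkeeping for linked samples (not Deep Lake code)."""

    def __init__(self):
        self.creds_keys = set()  # names of managed credentials added to the dataset
        self.samples = []        # (url, creds_key) pairs, one per linked sample

    def add_creds_key(self, name):
        # loosely mirrors ds.add_creds_key(name, managed=True)
        self.creds_keys.add(name)

    def append(self, url, creds_key):
        # loosely mirrors ds.images.append(deeplake.link(url, creds_key=...))
        if creds_key not in self.creds_keys:
            raise KeyError(f'unknown creds_key: {creds_key!r}')
        self.samples.append((url, creds_key))
```

The point of the model is that a creds_key must be registered on the dataset before any sample can reference it, which is why ds.add_creds_key(...) precedes the append in the snippet above.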