On Pretraining Data Diversity for Self-Supervised Learning

Code and models will be released upon acceptance.
Hasan Abed Al Kader Hammoud¹*   Tuhin Das²*   Fabio Pizzati²*   Philip Torr²   Adel Bibi²   Bernard Ghanem¹
¹KAUST   ²University of Oxford

Teaser figure


Abstract

We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with exceptionally large pretraining data diversity, achieved through methods such as web crawling or diffusion-generated data, the inherent distribution shift remains a challenge. Our experiments are comprehensive, spanning seven SSL methods and large-scale datasets such as ImageNet and YFCC100M, and amount to over 200 GPU days of compute.

Instructions

Follow the steps below to set up the environment, prepare the dataset, and run the training pipeline:

  1. Create the Conda Environment
    Create a Conda environment named ssl_diversity with Python 3.10:

    conda create -n ssl_diversity python=3.10
    conda activate ssl_diversity
  2. Install Required Packages
    Install the required Python packages specified in the requirements.txt file:

    pip install -r requirements.txt
  3. Install NVIDIA DALI (Optional)
    If you plan to use NVIDIA DALI for augmentations, install it using the following command:

    pip install nvidia-dali-cuda110
  4. Prepare the Dataset
    Run the create_csv.py script to generate a CSV file listing the image paths:

    • Open the script and update the variables as needed:
      root_directory = "some_images"  # Replace with the root directory containing your images
      output_file = "image_paths.csv"  # Specify the desired name of the output CSV file
    • Execute the script:
      python create_csv.py
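    A minimal sketch of what create_csv.py could look like is included after these steps.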
  5. Update Your YAML Configuration File
    Configure the dataset section in your YAML file as follows:

    # Dataset configuration
    data:
      dataset: "custom"  # Using custom dataset type for CSV
      train_path: "/home/hammh0a/new/solo-learn/image_paths.csv"  # Path to the generated CSV file
      format: "csv"  # Specify CSV format
      num_workers: 8
      no_labels: True
      fraction: 1.0  # Adjust between 0.0-1.0 for partial dataset use
      root_dir: "./"  # Root directory for relative image paths
      path_column: "path"  # Name of the column containing image paths in CSV
  6. Control Training Data Fraction
    Set the fraction parameter in the YAML file to control the percentage of data used during training (e.g., 1.0 for full dataset, 0.5 for 50%).
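    An illustrative sketch of how the CSV and fraction options might be consumed by a data loader is also shown after these steps.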

  7. Run the Training Script
    Execute the training process by running the runner.sh script. Ensure the correct YAML file is specified in the script:

    bash runner.sh
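
The following is a minimal sketch of what a create_csv.py script could look like, assuming the root_directory, output_file, and "path" column names shown in step 4; the actual script shipped in this repository may differ.

    # Hypothetical sketch of create_csv.py: walk a root directory and write
    # every image path into a single-column CSV that the "custom" dataset
    # format can read. Variable names follow step 4 above.
    import csv
    import os

    root_directory = "some_images"   # root directory containing your images
    output_file = "image_paths.csv"  # name of the output CSV file
    extensions = (".jpg", ".jpeg", ".png")

    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path"])  # must match path_column in the YAML config
        for dirpath, _, filenames in os.walk(root_directory):
            for name in sorted(filenames):
                if name.lower().endswith(extensions):
                    writer.writerow([os.path.join(dirpath, name)])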
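
For illustration only, the sketch below shows one way the CSV-related options (path_column, root_dir, fraction) could be interpreted by a dataset class; it is not the repository's actual loader.

    # Illustrative CSV-backed dataset: reads image paths from the CSV,
    # prepends root_dir, and keeps only the first `fraction` share of rows.
    import os

    import pandas as pd
    from PIL import Image
    from torch.utils.data import Dataset


    class CSVImageDataset(Dataset):
        def __init__(self, csv_path, root_dir="./", path_column="path",
                     fraction=1.0, transform=None):
            df = pd.read_csv(csv_path)
            keep = int(len(df) * fraction)        # e.g. fraction=0.5 keeps 50% of rows
            self.paths = df[path_column].tolist()[:keep]
            self.root_dir = root_dir
            self.transform = transform

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            path = os.path.join(self.root_dir, self.paths[idx])
            img = Image.open(path).convert("RGB")
            return self.transform(img) if self.transform else img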

📖 Citation

If you find this work useful in your research, please consider citing:

@misc{hammoud2024pretraining,
      title={On Pretraining Data Diversity for Self-Supervised Learning}, 
      author={Hasan Abed Al Kader Hammoud and Tuhin Das and Fabio Pizzati and Philip Torr and Adel Bibi and Bernard Ghanem},
      year={2024},
      eprint={2403.13808},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
