
Arranger

This repository contains the official implementation of "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music" (ISMIR 2021).

Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley
Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021
[homepage] [paper] [video] [slides] [video (long)] [slides (long)] [code]

Content

  • Prerequisites
  • Directory structure
  • Data Collection
  • Data Preprocessing
  • Models
  • Pretrained Models
  • Baseline Algorithms
  • Configuration
  • Citation

Prerequisites

You can install the dependencies by running pipenv install (recommended) or pip install -e . in the repository root. Python > 3.6 is required.

Directory structure

├─ analysis         Notebooks for analysis
├─ scripts          Scripts for running experiments
├─ models           Pretrained models
└─ arranger         Main Python module
   ├─ config.yaml   Configuration file
   ├─ data          Code for collecting and processing data
   ├─ common        Most-common algorithm
   ├─ zone          Zone-based algorithm
   ├─ closest       Closest-pitch algorithm
   ├─ mlp           MLP model
   ├─ lstm          LSTM model
   └─ transformer   Transformer model

Data Collection

Bach Chorales

# Collect Bach chorales from the music21 corpus
from pathlib import Path
import shutil

import music21.corpus

# Make sure the output directory exists
Path("data/bach/raw/").mkdir(parents=True, exist_ok=True)

# Copy all Bach MusicXML files from the music21 corpus
for path in music21.corpus.getComposer("bach"):
    if path.suffix in (".mxl", ".xml"):
        shutil.copyfile(path, "data/bach/raw/" + path.name)

MusicNet

# Download the metadata
wget -O data/musicnet/musicnet_metadata.csv https://homes.cs.washington.edu/~thickstn/media/musicnet_metadata.csv

NES Music Database

# Download the dataset
wget -O data/nes/nesmdb_midi.tar.gz http://deepyeti.ucsd.edu/cdonahue/nesmdb/nesmdb_midi.tar.gz

# Extract the archive
tar zxf data/nes/nesmdb_midi.tar.gz -C data/nes/

# Rename the folder for consistency
mv data/nes/nesmdb_midi/ data/nes/raw/

Lakh MIDI Dataset (LMD)

# Download the dataset
wget -O data/lmd/lmd_matched.tar.gz http://hog.ee.columbia.edu/craffel/lmd/lmd_matched.tar.gz

# Extract the archive
tar zxf data/lmd/lmd_matched.tar.gz -C data/lmd/

# Rename the folder for consistency
mv data/lmd/lmd_matched/ data/lmd/raw/

# Download the filenames
wget -O data/lmd/md5_to_paths.json http://hog.ee.columbia.edu/craffel/lmd/md5_to_paths.json

Data Preprocessing

The following commands assume Bach chorales. Replace the dataset identifier bach with that of another dataset where needed (musicnet for MusicNet, nes for the NES Music Database, and lmd for the Lakh MIDI Dataset).

# Preprocess the data
python3 arranger/data/collect_bach.py -i data/bach/raw/ -o data/bach/json/ -j 1

# Collect training data
python3 arranger/data/collect.py -i data/bach/json/ -o data/bach/s_500_m_10/ -d bach -s 500 -m 10 -j 1
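
You can sanity-check the preprocessed files with muspy, the library also used to generate the input files for inference (see the Pretrained Models section). Below is a minimal sketch, assuming the JSON files under data/bach/json/ follow the MusPy JSON format:

# Inspect a preprocessed JSON file (assumes the MusPy JSON format)
import pathlib

import muspy

# Pick any file produced by the previous step
path = next(pathlib.Path("data/bach/json/").rglob("*.json"))

music = muspy.load(path)  # load a MusPy Music object from JSON
print("Title:", music.metadata.title)
print("Tracks:", [track.name for track in music.tracks])
print("Notes:", sum(len(track.notes) for track in music.tracks))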

Models

  • LSTM model
    • arranger/lstm/train.py: Train the LSTM model
    • arranger/lstm/infer.py: Infer with the LSTM model
  • Transformer model
    • arranger/transformer/train.py: Train the Transformer model
    • arranger/transformer/infer.py: Infer with the Transformer model

Pretrained Models

Pretrained models can be found in the models/ directory.

To run a pretrained model, pass the corresponding command-line options to the infer.py scripts. You may want to follow the commands used in the experiment scripts provided in scripts/infer_*.sh.

For example, use the following commands to run the pretrained BiLSTM model with embeddings.

# Assuming we are at the root of the repository
cp models/bach/lstm/bidirectional_embedding/best_models.hdf5 {OUTPUT_DIRECTORY}
python3 arranger/lstm/infer.py \
  -i {INPUT_DIRECTORY} -o {OUTPUT_DIRECTORY} \
  -d bach -g 0 -bi -pe -bp -be -fi

The input directory ({INPUT_DIRECTORY}) should contain the input JSON files, which can be generated by muspy.save(). The output directory ({OUTPUT_DIRECTORY}) should contain the pretrained model and will receive the output files. The -d bach option selects the Bach chorale dataset, and -g 0 runs the model on the first GPU. The remaining options (-bi -pe -bp -be -fi) specify the model settings; run python3 arranger/lstm/infer.py -h for more information.
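
For reference, here is a minimal sketch of preparing an input JSON file from a MIDI file with muspy (the file paths are hypothetical):

# Convert a MIDI file into a MusPy JSON file for inference
import muspy

music = muspy.read("song.mid")  # parse the MIDI file into a Music object
muspy.save("input/song.json", music)  # save it in the MusPy JSON format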

Baseline Algorithms

  • Most-common algorithm (see the sketch after this list)
    • arranger/common/learn.py: Learn the most common label
    • arranger/common/infer.py: Infer with the most-common algorithm
  • Zone-based algorithm
    • arranger/zone/learn.py: Learn the optimal zone setting
    • arranger/zone/infer.py: Infer with the zone-based algorithm
  • Closest-pitch algorithm
    • arranger/closest/infer.py: Infer with the closest-pitch algorithm
  • MLP model
    • arranger/mlp/train.py: Train the MLP model
    • arranger/mlp/infer.py: Infer with the MLP model
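
To make the most-common baseline concrete, here is a minimal sketch of one way to implement it. This is an illustration, not the repository's implementation, and the data format is assumed:

# Sketch of the most-common baseline: learn the most frequent track label
# in the training data and predict it for every note (data format assumed)
from collections import Counter

def learn_most_common(train_labels):
    """Return the most frequent track label over all training songs."""
    counter = Counter()
    for labels in train_labels:  # one label sequence per song
        counter.update(labels)
    return counter.most_common(1)[0][0]

def infer_most_common(notes, most_common_label):
    """Assign every note to the most common track."""
    return [most_common_label] * len(notes)

# Hypothetical usage with integer track labels
train_labels = [[0, 0, 1, 2], [0, 1, 1, 0]]
label = learn_most_common(train_labels)  # -> 0
print(infer_most_common(["note1", "note2", "note3"], label))  # [0, 0, 0]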

Configuration

In arranger/config.yaml, you can configure the MIDI program numbers used for each track in the generated sample files. You can also configure the colors used in the generated piano-roll visualizations.
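
For example, you can inspect the configuration with PyYAML; a minimal sketch (the actual keys are defined in arranger/config.yaml):

# Load the configuration file and list its top-level keys
import yaml

with open("arranger/config.yaml") as f:
    config = yaml.safe_load(f)

print(list(config.keys()))  # discover the actual structure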

Citation

Please cite the following paper if you use the code provided in this repository.

Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley, "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music," Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021.

@inproceedings{dong2021arranger,
    author = {Hao-Wen Dong and Chris Donahue and Taylor Berg-Kirkpatrick and Julian McAuley},
    title = {Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music},
    booktitle = {Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)},
    year = 2021,
}
