OMReader



About

This project converts music sheets into a machine-readable form. We take a simplified approach and support only the main symbols: an image of a music sheet is converted into a textual representation that can be further processed to produce MIDI files or audio files such as WAV or MP3.

Assumptions

  • The input image should be a scanned music sheet.
  • At most two ledger lines are supported above the stafflines and two below them.
  • Each stave should start with a G clef.
  • Stem height is greater than or equal to 3 * staffSpacing (the vertical space between two stafflines).
  • Note head height equals staffSpacing.
  • The output file is in GUIDO music notation (a short illustrative snippet follows below).
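
As a rough illustration of the output format, a single stave might be rendered as a GUIDO snippet along these lines; the exact tags and note spellings emitted by OMReader are an assumption here, not taken from the repository.

```
[ \clef<"g2"> \meter<"4/4"> c1/4 d1/4 e1/8 f#1/8 _/4 {c1/4, e1/4, g1/4} ]
```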

Supported Symbols

Music sheets contain a very wide variety of symbols, which makes them hard to handle in full, so for simplicity we handle only the symbols listed below.

Notes

We support whole notes, half notes, quarter notes, eighth notes, sixteenth notes, and thirty-second notes, with stems up or stems down.

Beams

We support different kinds of beams combining notes up to thirty-second notes, with stems up or stems down.

Chords

We support all kinds of chords.

Special Symbols


  • Time signatures: we support only 4/2 and 4/4.
  • Accidentals: we support all kinds, namely double sharp, sharp, flat, double flat, and natural.
  • Augmentation dots.
  • Clefs: we support only the G clef.

Pipeline

Preprocessing

  • The input image goes through a series of steps. We first apply filters, namely a hybrid median filter and a Gaussian filter, to remove noise.

  • We then correct the rotation of the image, and adaptive thresholding is used to segment it into symbols and background.

  • We remove the stafflines to make it easier to locate the symbols in the image.

  • We clip the image to remove the brace connecting the staves, if it exists.

  • We partition the image into its composing staves, run contour detection on each stave, and feed each detected symbol to the classifiers (a condensed sketch of these steps follows below).
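
The following is a minimal sketch of this stage, assuming OpenCV and NumPy. It approximates the hybrid median filter with a plain median blur, omits the rotation correction and brace clipping, and all function names and parameter values are illustrative rather than the project's actual code.

```python
import cv2

def preprocess(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # 1. Noise removal (a stand-in for the hybrid median + Gaussian filters).
    denoised = cv2.medianBlur(gray, 3)
    denoised = cv2.GaussianBlur(denoised, (3, 3), 0)

    # 2. Adaptive thresholding: symbols become white on a black background.
    binary = cv2.adaptiveThreshold(denoised, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 10)

    # 3. Staffline removal via a long horizontal morphological opening:
    #    whatever survives the opening is treated as stafflines and subtracted.
    width = binary.shape[1]
    horiz_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (width // 2, 1))
    stafflines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horiz_kernel)
    staff_removed = cv2.subtract(binary, stafflines)

    # 4. Contour detection: each external contour is a candidate symbol
    #    that gets cropped and passed to the classifiers.
    contours, _ = cv2.findContours(staff_removed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    symbols = [staff_removed[y:y + h, x:x + w]
               for x, y, w, h in map(cv2.boundingRect, contours)]
    return symbols
```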

Classification

  • Symbols are first fed into template matching using SIFT features to identify the following: double sharp, sharp, flat, double flat, natural, whole note, and time signatures (see the sketch after this list).

  • If none of the previous symbols is identified, we fall back to our algorithmic approach: after removing the stems, the symbol is classified with a decision tree.
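
A minimal sketch of the template-matching stage is shown below, assuming OpenCV's SIFT implementation is available (opencv-python 4.4+ or opencv-contrib). The template labels, Lowe ratio, and match threshold are illustrative assumptions, not the project's actual values.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def matches_template(symbol_img, template_img, min_good_matches=10):
    # Compute SIFT descriptors for the candidate symbol and the template.
    _, des_sym = sift.detectAndCompute(symbol_img, None)
    _, des_tpl = sift.detectAndCompute(template_img, None)
    if des_sym is None or des_tpl is None:
        return False
    # Lowe's ratio test over the two nearest neighbours of each descriptor.
    pairs = matcher.knnMatch(des_tpl, des_sym, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good_matches

def classify_by_templates(symbol_img, templates):
    # templates: dict mapping labels such as "sharp", "flat", "whole_note",
    # "time_4_4" to grayscale template images.
    for label, template_img in templates.items():
        if matches_template(symbol_img, template_img):
            return label
    return None  # fall through to the algorithmic (decision-tree) classifier
```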

Tools Used

  • Ubuntu 18.04, either native or installed on WSL2
  • VS Code
  • Python 3.8.5
  • NumPy
  • OpenCV
  • scikit-image

How to Run

conda env create -f requirements.yml

conda activate OMReader

python3 main.py <input directory path> <output directory path>

Note: you can also run the project on Windows, but skip creating the environment with the command above; you will need Anaconda and OpenCV installed.
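
If you want to reproduce the environment by hand instead, a conda environment file along these lines should be roughly equivalent; this is an illustrative sketch based on the tools listed above, not the repository's actual requirements.yml.

```yaml
# Illustrative environment file (assumed contents, not the project's file).
name: OMReader
channels:
  - conda-forge
dependencies:
  - python=3.8.5
  - numpy
  - opencv
  - scikit-image
```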

Useful Resources:

Contributors:

License: