dasaemjeong.bsky.social
@dasaemjeong.bsky.social
MIR / Assistant Prof. @ Sogang University, Seoul /
You can refer to the details below:
arXiv: arxiv.org/abs/2505.12863
demo: sakem.in/u-must/

Work done by Jongmin Jung, DongMin Kim, Sihun Lee, Seola Cho, and Dasaem Jeong @ MALer Lab, Sogang Univ; Hyungjoon Soh; and Irmak Bukey and Chris Donahue (@chrisdonahue.com) at CMU🥳
Unified Cross-modal Translation of Score Images, Symbolic Music, and Performance Audio
May 23, 2025 at 1:48 PM
By training a model to generate audio tokens from a given score image, the model learns how to read notes from the score image. This led our model to break the SOTA for OMR! The reverse direction can also help AMT, though the gain was not as significant as for OMR (see the sketch below).
May 23, 2025 at 1:44 PM
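A minimal sketch of the unified idea, assuming a shared encoder-decoder with a target-modality prefix token (the token names, data format, and helper below are illustrative, not the paper's exact scheme):

```python
# Sketch: one seq2seq model serves several translation directions.
# A target-modality token prefixes the decoder input, so the same weights
# learn OMR (image -> symbols) and image -> audio-token generation.
from dataclasses import dataclass

# Hypothetical special tokens marking the requested output modality.
TASK_TOKENS = {
    "symbolic": "<to_symbolic>",  # OMR-style target
    "audio": "<to_audio>",        # neural-codec audio tokens as target
}

@dataclass
class Example:
    source: list[str]       # e.g. patch tokens of a score image
    target: list[str]       # tokens of the target modality
    target_modality: str    # "symbolic" or "audio"

def format_for_training(ex: Example) -> tuple[list[str], list[str]]:
    """Prefix the decoder sequence with a task token so one model can be
    trained on all translation directions at once."""
    return ex.source, [TASK_TOKENS[ex.target_modality]] + ex.target

# Usage: the same formatting serves OMR and image -> audio examples.
omr = Example(["img_12", "img_7"], ["note_C4", "note_E4"], "symbolic")
i2a = Example(["img_12", "img_7"], ["aud_881", "aud_42"], "audio")
print(format_for_training(omr))
print(format_for_training(i2a))
```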
Score videos are slideshows of audio-aligned score images. Although they do not include any machine-readable symbolic data, we thought these score image and audio pairs could be used to understand each modality, because they share the same semantics in the (hidden) symbolic music domain (see the sketch below).
May 23, 2025 at 1:43 PM
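One way such pairs can be mined from a slideshow-style score video, as a rough sketch (the frame-differencing approach and threshold are assumptions, not the paper's pipeline):

```python
# Sketch: detect page turns in a static slideshow by frame differencing,
# then pair each page with the audio span between consecutive turns.
import numpy as np

def segment_slideshow(frames: np.ndarray, fps: float, thresh: float = 10.0):
    """frames: (T, H, W) grayscale frames sampled at `fps`.
    Returns (start_sec, end_sec, page_frame) for each displayed page."""
    # Mean absolute difference between consecutive frames; a spike marks
    # a page turn in an otherwise static video.
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    cuts = [0] + [i + 1 for i, d in enumerate(diffs) if d > thresh] + [len(frames)]
    return [(s / fps, e / fps, frames[s]) for s, e in zip(cuts[:-1], cuts[1:])]

# Usage with synthetic frames: two "pages" shown for 3 seconds each at 1 fps.
page_a = np.zeros((3, 4, 4), dtype=np.uint8)
page_b = np.full((3, 4, 4), 200, dtype=np.uint8)
for start, end, _ in segment_slideshow(np.concatenate([page_a, page_b]), fps=1.0):
    print(f"page shown from {start:.0f}s to {end:.0f}s")  # 0-3s, then 3-6s
```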
Can we unify these tasks into a single framework? And what would be the benefit of that unification?

Answer: we can exploit tons of score videos from YouTube!
We collected about 2k hours of score videos from YouTube and used 1.3k hours after filtering.
May 23, 2025 at 1:43 PM
Music exists in various modalities, and translation between modalities forms a set of core MIR tasks (see the sketch after this list):
Score Image→Symbolic Music: OMR
Audio → MIDI: AMT
MIDI → Audio: Synthesis
Score → Performance MIDI: Performance Rendering
Audio → Music Notation: Complete AMT
May 23, 2025 at 1:42 PM
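A toy way to picture the list above: every task is an edge in one modality graph, which is why a single translation framework can cover them all (the names below are illustrative, not the paper's API):

```python
# Sketch: MIR translation tasks as (source, target) modality pairs.
MIR_TASKS = {
    ("score_image", "symbolic"): "OMR",
    ("audio", "midi"): "AMT",
    ("midi", "audio"): "synthesis",
    ("symbolic", "performance_midi"): "performance rendering",
    ("audio", "notation"): "complete AMT",
}

def task_name(src: str, tgt: str) -> str:
    # Directions without a classic name (e.g. score image -> audio) are
    # exactly where weakly paired data such as score videos can help.
    return MIR_TASKS.get((src, tgt), "new direction (no classic name)")

print(task_name("score_image", "symbolic"))  # -> OMR
print(task_name("score_image", "audio"))     # -> new direction (no classic name)
```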