MusicGenerationFramework
A framework to generate music for video games
Description
Still in progress. In November 2023, I started my research as a research student at the Tokyo University of Technology (TUT). My current research theme is music generation through diffusion. The framework is divided mainly into:
- A runtime part, in C++, to preprocess and postprocess data
- A training part, in Python, to train deep learning models
The goal is then to load the trained model in Unreal Engine, use it to generate MIDI events, and mix them using MetaSounds.
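As a rough sketch of that handoff, the training side could export the network with TorchScript so the C++ runtime loads it through LibTorch; the stub model, dimensions, and file name below are placeholder assumptions, not the framework's actual architecture.

```python
# Hedged sketch: export a (stub) denoiser with TorchScript so a C++
# runtime can load it via torch::jit::load. Everything here is a
# placeholder, not the framework's real model.
import torch
import torch.nn as nn

class DenoiserStub(nn.Module):
    """Stand-in for a trained diffusion denoiser."""
    def __init__(self, dim: int = 128, steps: int = 1000):
        super().__init__()
        self.t_embed = nn.Embedding(steps, dim)  # timestep conditioning
        self.net = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(x + self.t_embed(t))

model = DenoiserStub().eval()
example_x = torch.randn(1, 128)               # dummy latent batch
example_t = torch.zeros(1, dtype=torch.long)  # dummy timestep
traced = torch.jit.trace(model, (example_x, example_t))
traced.save("denoiser.pt")  # the C++ side would call torch::jit::load
```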
Tools
Programming:
- C
- C++
- Python
- PyTorch
- Unreal Engine
- Visual Studio Code
My Work
Programming:
- Creation of a MIDI parser (see the parsing sketch after this list)
- Specifying, implementing, and testing different deep-learning diffusion architectures (see the training-step sketch after this list)
- Development of an Unreal Engine plugin to play MIDI through MetaSounds
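For the MIDI parser, here is a minimal Python illustration of two basics any Standard MIDI File parser needs: reading the MThd header chunk and decoding the variable-length quantities used as delta times. The file name is hypothetical, and a full parser would also walk the MTrk track chunks and their events.

```python
# Minimal MIDI parsing sketch: header chunk + variable-length quantities.
import struct

def read_header(data: bytes):
    """Parse the MThd chunk at the start of a Standard MIDI File."""
    chunk_id, length = struct.unpack(">4sI", data[:8])
    assert chunk_id == b"MThd" and length == 6
    fmt, ntrks, division = struct.unpack(">HHH", data[8:14])
    return fmt, ntrks, division

def read_vlq(data: bytes, pos: int):
    """Decode a variable-length quantity (used for event delta times)."""
    value = 0
    while True:
        byte = data[pos]
        pos += 1
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:  # a clear high bit marks the last byte
            return value, pos

with open("example.mid", "rb") as f:  # hypothetical file
    data = f.read()
print(read_header(data))  # e.g. (1, 2, 480): format 1, 2 tracks, 480 PPQ
```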
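The specific architectures I test are not shown in this repository, so the sketch below covers only the generic DDPM-style training step that diffusion models of this kind build on; `model`, the noise schedule, and the data shapes are assumptions for illustration.

```python
# Generic DDPM-style training step (illustrative, not this project's code).
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0: torch.Tensor) -> torch.Tensor:
    """x0: a batch of clean examples (e.g., encoded MIDI sequences)."""
    t = torch.randint(0, T, (x0.shape[0],))         # random timesteps
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise    # forward noising
    return F.mse_loss(model(x_t, t), noise)         # predict the noise
```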