I think this is far, far beyond any algorithmic composition system ever made before. It displays an impressive understanding of the fundamentals of music theory. Most previous attempts at neural net composition restricted the training set to one style of music or even one composer, which is pretty silly if you understand how neural nets work. It was obvious to me that if you used a very large network, chose the right input representation, and most importantly used a complete dataset of all available music, you would get great results. That's exactly what OpenAI has done here.

It is still lacking some longer-term structure in the music it generates (e.g. …), but I think simply scaling up further (both model and dataset size) could fix that without any breakthroughs. (To be clear, I don't mean to minimize what they've done at all. This seems to be OpenAI's bread and butter now: taking existing techniques and scaling them up with a few tweaks. "Simply" scaling up is not so simple in reality.) What might still need some breakthroughs is applying the same technique to raw audio instead of MIDI. Perhaps what is needed is a more expressive symbolic representation than MIDI.
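For readers wondering what "the right input representation" means here: a common approach in symbolic music models is to flatten MIDI into a flat vocabulary of discrete event tokens, so the network can predict them one at a time the way a language model predicts words. A minimal sketch of that idea (the token ranges and the `encode` helper are made up for illustration, not OpenAI's actual scheme):

```python
# Illustrative event vocabulary: note-on, note-off, and time-shift tokens.
NOTE_ON_BASE = 0        # tokens 0-127: note-on for each MIDI pitch
NOTE_OFF_BASE = 128     # tokens 128-255: note-off for each MIDI pitch
TIME_SHIFT_BASE = 256   # tokens 256-355: advance time by 1-100 ticks

def encode(events):
    """events: list of (kind, value) pairs, e.g. ('on', 60) or ('shift', 24)."""
    tokens = []
    for kind, value in events:
        if kind == 'on':
            tokens.append(NOTE_ON_BASE + value)
        elif kind == 'off':
            tokens.append(NOTE_OFF_BASE + value)
        elif kind == 'shift':
            tokens.append(TIME_SHIFT_BASE + value - 1)
    return tokens

# A C major triad (pitches 60, 64, 67) held for 24 ticks:
events = [('on', 60), ('on', 64), ('on', 67),
          ('shift', 24),
          ('off', 60), ('off', 64), ('off', 67)]
print(encode(events))  # [60, 64, 67, 279, 188, 192, 195]
```

Polyphony falls out naturally (simultaneous notes are just consecutive note-on tokens with no time-shift between them), which is part of why this kind of encoding works better than piano-roll grids.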