Improving controllability, or the ability to manipulate one or more attributes of the generated data, has become a topic of interest in the context of deep generative models of music. Recent attempts in this direction have relied on learning disentangled representations from data such that the underlying factors of variation are well separated. In this paper, we focus on the relationship between disentanglement and controllability by conducting a systematic study using different supervised disentanglement learning algorithms based on the Variational Auto-Encoder (VAE) architecture. Our experiments show that a high degree of disentanglement can be achieved by using different forms of supervision to train a strong discriminative encoder. However, in the absence of a strong generative decoder, disentanglement does not necessarily imply controllability. The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes. To this end, we also propose methods and metrics to help evaluate the quality of a latent space with respect to the degree of controllability it affords.