Hierarchical Disentangled Representations

April 15, 2018 · CreateAMind

https://arxiv.org/abs/1804.02086


Abstract

Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation, often by introducing suitable modifications of the objective function. We synthesize this growing body of literature by formulating a generalization of the evidence lower bound that explicitly represents the trade-offs between sparsity of the latent code, bijectivity of representations, and coverage of the support of the empirical data distribution. Our objective is also suited to learning hierarchical representations that disentangle blocks of variables whilst allowing for some degree of correlations within blocks. Experiments on a range of datasets demonstrate that learned representations contain interpretable features, are able to learn discrete attributes, and generalize to unseen combinations of factors.


A comparison of the various VAE objectives!
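The paper's generalized objective is not reproduced in this post, but as a rough anchor (and not the paper's exact formulation), objectives in this family typically build on a well-known decomposition of the evidence lower bound. For a latent-variable model $p_\theta(x, z)$ with approximate posterior $q_\phi(z \mid x)$, the ELBO is

$$
\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right),
$$

and, for a factorized prior $p(z) = \prod_j p(z_j)$, the expected KL term splits (as in ELBO surgery and β-TCVAE) into index-code mutual information, total correlation, and dimension-wise KL:

$$
\mathbb{E}_{p(x)}\left[D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)\right]
= I_q(x; z)
+ D_{\mathrm{KL}}\Big(q_\phi(z) \,\Big\|\, \prod_j q_\phi(z_j)\Big)
+ \sum_j D_{\mathrm{KL}}\big(q_\phi(z_j) \,\|\, p(z_j)\big),
$$

where $q_\phi(z) = \mathbb{E}_{p(x)}\left[q_\phi(z \mid x)\right]$ is the aggregate posterior. Re-weighting these three terms separately trades off informativeness of the code, statistical independence across dimensions, and how well the aggregate posterior covers the prior; a hierarchical variant of the same split groups the $z_j$ into blocks, penalizing correlations between blocks while tolerating them within blocks, which matches the kind of structure the abstract describes.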

