Machine learning approaches now achieve impressive generation capabilities in numerous domains such as images, audio, and video. However, most training and evaluation frameworks revolve around strictly modelling the original data distribution rather than extrapolating from it. This precludes such models from diverging from the original distribution and, hence, from exhibiting creative traits. In this paper, we propose various perspectives on how this challenging goal could be achieved, and provide preliminary results on our novel training objective called Bounded Adversarial Divergence (BAD).