Bernal, Guillermo, Lily Zhou, Erica Yuen, and Pattie Maes. “Paper Dreams: Real-Time Human and Machine Collaboration for Visual Story Development.” XXII Generative Art Conference - GA2019, December 21, 2019.
Increasing human potential is the underlying incentive for all technological advances. In creativity, technology can be used to facilitate faster design and construction, to improve human creative capability through learning and training, and to enable novel and innovative ways to create. The capacity to express our thoughts through visual mechanisms provides the foundation for meaningful creative practices, including art, design, and science. Here we present Paper Dreams, a system that explores how the real-time generation of ideas and visuals based on multi-modal user input can encourage divergent thinking, specifically in graphical story development, while also providing enough agency for users to feel that they have creative ownership over the final output of the collaboration. The web application interprets user input via sketch recognition [1] and text entry, suggests related elements, and synthesizes colors in real time using Conditional Generative Adversarial Networks [2] to provide inspiration. The result is a dynamic back-and-forth interaction between the user and the system that surfaces new elements for creative output. Results of a qualitative evaluation (N = 26) show that the features of the Paper Dreams interface contribute to the divergence of an original idea for story development for significantly more users than Adobe Sketch does, while maintaining similar perceptions of creative ownership for users who do not self-identify as a creative type.