Visual representation in architecture, urban design and planning is critical to both the design and decision-making processes. Despite major advancements in computer graphics, crafting visual representations remains a complex and costly task, usually carried out by highly trained professionals. This is particularly true during preliminary design stages, such as zoning exercises or schematic design, in which key decisions are made while only partial information about final design details is available. This work attempts to replace common practices of urban-design visualization with a machine-learnt, generative approach. By implementing a Deep Convolutional Generative Adversarial Network (DCGAN) and a Tangible User Interface (TUI), this work aims to allow real-time urban prototyping and visualization. The DCGAN model was trained on Cityscapes, a semantic street-view dataset. A version of CityScope (CS), a rapid urban-prototyping platform, is used as the tangible design interface. After each design iteration on CS, the DCGAN model generates a rendering associated with the selected street view in the design space. A lightweight, web-based and platform-agnostic tool was also created for visualization and user interaction. Unlike traditional rendering techniques, this tool could help designers focus on spatial organization, urban programming and massing exercises without the need for detailed design, complex visualization processes or costly setups. This approach could support early-stage urban-design processes that are enhanced by the visual atmosphere, impression and discussion around 'The Image of the City'.
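The generative step described above relies on a DCGAN generator, which maps a low-resolution input to a higher-resolution image through stacked stride-2 transposed convolutions. The following is a minimal illustrative sketch of that upsampling operation only, using NumPy with a single channel and random weights; it is an assumption-laden toy, not the model or code used in this work:

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Single-channel stride-2 transposed convolution: each input pixel
    'paints' a scaled copy of the kernel onto the (larger) output grid.
    Output size follows (h - 1) * stride + kernel_height."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out

rng = np.random.default_rng(0)

# Toy 4x4 latent feature map standing in for the generator's input.
z = rng.standard_normal((4, 4))

# Two random 4x4 kernels (a real DCGAN learns these during training).
k1 = rng.standard_normal((4, 4)) * 0.1
k2 = rng.standard_normal((4, 4)) * 0.1

# DCGAN-style stack: ReLU between layers, tanh on the output image.
h1 = np.maximum(transposed_conv2d(z, k1), 0.0)   # 4x4 -> 10x10
img = np.tanh(transposed_conv2d(h1, k2))          # 10x10 -> 22x22, values in [-1, 1]
```

Each such layer roughly doubles the spatial resolution, which is how the trained generator turns a compact design state into a full street-view rendering.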