Computational Creativity Articles (Edited by Anna Longo)
-
Art and Language After AI
By ingesting a vast corpus of source material, generative deep learning models are capable of encoding multi-modal data into a shared embedding space, producing synthetic outputs which cannot be decomposed into their constituent parts. These models call into question the relation between conceptualisation and production in creative practices ranging from musical composition to visual art. Moreover, artificial intelligence as a research program poses deeper questions regarding the very nature of aesthetic categories and their constitution. In this essay I will consider the intelligibility of the art object through the lens of a particular family of machine learning models, known as ‘latent diffusion’, extending an aesthetic theory to complement the image of thought these models (re)present to us. This will lead to a discussion of the semantics of computational states, probing the inferential and referential capacities of said models. Throughout, I will endorse a topological view of computation, which will inform the neural turn in computer science, characterised as a shift from the notion of a stored program to that of a cognitive model. Lastly, I will look at the instability of these models by analysing their limitations in terms of compositionality and grounding.
-
Creativity, co-evolution and co-production: The machine as art and as artist
Art and technology continue to undergo a (rapidly escalating) historical rapprochement, yet our comprehension of art and technology has tended either to be constrained by scientific rigour and calculative thinking on one side, or to swing to the opposite, lyrical extreme. The objective of this article is to offer artists, humanists, scientists and engineers a reflective look at these developments from the broader perspective they deserve, while maintaining a focus on what should be the emerging core of this topic: the relationship between art, technology and science. The state of the art in mechatronics and computing today is such that we can now begin to speak comfortably of the machine as artist, and we can begin to hope, too, that an aesthetic sensibility on the part of the machine might help generate an intelligent, more friendly and responsive machine agency overall. The principle of the inhuman emphasises that the questions of ontology are not questions of being as subject, being as consciousness, being as Dasein, being as body, being as language, being as human or being as power, but of being as being. Finally, the ontological principle hypothesises that all beings are ontologically on an equal footing, or that all beings are to the extent that they make a difference. Until now, however, not much has been said about “algorithmic entities”. From the above, it is clear that many questions remain unanswered, for example: How can the question of techno-diversity be raised when intellectuals yearn for a general artificial intelligence? We must go back to history to orient ourselves in our current situation with a sense of distance. Will it be possible to find strategies to free ourselves from this apocalyptic end of technological singularity and reopen the question of the creative future of machines in relation to humans?
-
Expanded Design Creativity, Machine Learning and Urban Design
The introduction of automated algorithmic processes (e.g. machine learning) in creative disciplines such as architecture and urban design has expanded the design space available for creativity and speculation. Contrary to previous algorithmic processes, machine learning models must be trained before they are deployed. The two processes (training and deployment) are separate and, crucially for this paper, the outcome of the training process is not a directly implementable spatial object but rather code. This marks a novelty in the history of spatial design techniques, which has been characterised by design instruments with stable properties determining the bounds of their implementation. Machine learning models, on the other hand, are design instruments resulting from the training they undergo. In short, training a machine learning model has become an act of design.
Besides spatial representation traditionally comprising drawings, physical or CAD models, machine learning introduces an additional representational space: the vast, abstract, stochastic, multi-dimensional space of data and their statistical correlations. This latter domain – broadly referred to as latent space – has received little attention from architects, both in terms of conceptualising its technical organisation and speculating on its impact on design. However, the statistical operations structuring data in latent space offer glimpses of new types of spatial representation that challenge existing creative processes in architectural and urban design. Such spatial representation can include non-human actors, give agency to a range of concerns normally excluded from urban design, expand the scales and temporalities amenable to design manipulation, and offer an abstract representation of spatial features based on statistical correlations rather than spatial proximity. The combined effect of these novelties can elicit new types of organisation, both formally and programmatically. In order to foreground their potential, the paper will discuss the impact of machine learning models in conjunction with larger historical and theoretical questions underpinning spatial design. In so doing, the aim is not to abdicate the specificity of urban design and uncritically absorb computational technologies; rather, the creative process in design will provide a filter through which to critically evaluate machine learning techniques.
The paper sets out to conceptualise the potential of latent space design by framing it through the figure of the paradigm. Paradigms are defined by Thomas Kuhn as special members of a set which they both give rise to and make intelligible. Their ability to relate parts to parts not only resonates with the technical operations of machine learning models, but also provides a conceptual space for designers to speculate on different spatial organisations aided by algorithmic processes. Paradigms are not only helpful for conceptualising the use of machine learning models in urban design; they also suggest an approach to design that privileges perception over structure and curation over process. The creative process that emerges is one in which machine learning models are speculative technical elements that can foreground relations between diverse datasets and engender an urbanism of relations rather than objects.
The application of such algorithmic models to design will be supported by research developed by students of Research Cluster 14, part of the Master in Urban Design at The Bartlett School of Architecture in London.
-
From Continuous to Discrete to Continuous – Text-to-Image Models as Limit to Indeterminate Phantasy
This essay analyses the interplay of indeterminacy and determinacy in the experience of images generated through text-to-image (T2I) models. Through an interdisciplinary approach, it uncovers three layers of indeterminacy: the computational indeterminacy inherent in text-to-image model processes, the indeterminacy of imagination in Husserl’s concept of protean phantasy, and finally the visual indeterminacy that figures in meaning making in all images. Generated images pass through these stages of indeterminacy, transforming indeterminate phantasy into determined visual objects, resulting in a conflict of consciousness between potential and actual. A distinction emerges between artificial phantasy, characterized by quasi-experience, and artificial imagination, grounded in images both as training data and perceptual image objects. As mediators between indeterminacy and determination, T2I images appear as technical media that mediate multiple forms of indeterminacy, showing the circulation between phantasy and imagination, between continuous and discrete. The generated image marks the limit of the unlimited indeterminate imagination.
-
Grand Theft Autoencoder
The implementation of generative models in deep learning, particularly those of Text-to-Image Synthesis (T2I), is essentially an exaptation of the cognitive processes of the transcendental imagination Kant outlined in his notoriously opaque schematism chapter of the CPR. While such engineering feats mirror the liberating force of photography’s invention, they have also proven to be a significant engine for reproducing antediluvian ideologies of art pivoting on claims about what has been stolen by the machine. This paper argues that T2I presents an opportunity to instead reconsider what our models of the procedures of the imagination actually are or could be, and wagers that the interdisciplinary conceptual frameworks supporting machine learning enable us to recuperate from an “incommensurable” synthetic intelligence the necessary resources for revising our understanding of what creativity is and does, with pattern recognition providing the tools for a renewed elaboration of techné to pull a heist upon the transcendental itself.
-
Nonknowledge in Computation. Reflecting on Irrevocable Uncertainty
My paper approaches the theme of computational creativity by looking at uncertainty as an epistemic and aesthetic tool that must be examined to address the challenges brought to critical practice by planetary computation. It positions uncertainty as central to how the encounter of the human practitioner with non-human machines is conceptualized, and as a resource for building speculative-pragmatic paths of resistance against algorithmic capture. It proposes ways to cultivate uncertainty and use it as a design material to produce new types of knowledge that question machines’ pre-emptive manoeuvres and resist their capture of potential. The argument proposed is that uncertainty affords the production of new imaginaries of the human-machine encounter that can resist the foreclosure of futures (what will be) and are sustained instead by the uncertainty of potential (what might be) (Munster 2013). Dwelling in a space of potential (Deleuze’s virtual, or what I call a space of ‘maybes’) requires of the practitioner a repositioning of their epistemic perspective and a reflection on the following questions: how can material knowledge be made by engaging with modes of un-knowing and not-knowing in machine interaction? How can these modes of un-knowing and not-knowing be fostered as a critical and political onto-epistemological project of reinventing critical practice for the algorithmic age? (Horl et al. 2021; Hansen 2021, 2015; Pasquinelli and Joler 2020). The paper argues that the machinic unknown should be engaged with not through the conventional paradigm that pits human against machine creativity and attempts to rank and score them through similarities, but rather through a (paradoxical) deepening of the unknowability at the core of the machine (Parisi) and the machine’s own incommensurability (Fazi 2020). It then proposes the Chinese notion of wu wei (active non-action) (Jullien 2011, 2004, 2000, 1995; Allen 2015, 2011) as a stratagem to experiment with in crafting speculative-pragmatic interventions, and to augment the ‘power of maybes’ as a space of anti-production and resistance to reduction (Ito 2019).