Technophany Indexed by DOAJ
Technophany is pleased to announce that all articles are now being indexed by the Directory of Open Access Journals (DOAJ).
Technophany publishes on an "Online First" basis throughout the year: final-revision articles, prior to their inclusion in the journal's yearly "General Issue" or a guest-edited "Special Issue", are assigned a unique DOI and placed in their appropriate section, allowing them to be cited as soon as they are published.
This article examines the concept of symbiosis as a premise for elucidating the origin of the human-technology relationship. The starting point is the work of the biologist Lynn Margulis, who introduced the concepts of symbiosis and symbiogenesis into the biological sciences. Her idea is that a long-lasting physical association, which is how symbiosis may be defined, will eventually lead, through symbiogenesis, to an evolutionary novelty. From this perspective, the human-technology relationship is explained using the philosophical ideas of Bernard Stiegler and Helmuth Plessner, who both considered this relationship essential for being human. I explain what is typical about the human life form as they conceive it. Basically, the difference between the human and other organisms is that in the human, something is moved outside that in animals stays within. I explicate that this exteriorisation, as Stiegler calls it, is at the same time an interiorisation. This movement should be considered a form of endosymbiogenesis by which the long-lasting use of tools was cognitively internalised in mind and body and eventually became a condition for the origin of an organism with a technological culture: the human.
Paul B. Preciado’s theory of the pharmacopornographic regime provides a radical theoretical analysis of the relationship between gender, technology and capitalism. Firstly, I explicate Preciado’s key concepts and argue that their overarching theoretical project illuminates neoliberal capitalism’s capture and commodification of sexual energies and desire. I contend that contemporary toxic heteronormativity in extreme online communities may be explained as reactionary internalisation/resistance to this process. I conclude by suggesting Preciado’s theoretical insights gesture toward a progressive and emancipatory pathway for rethinking masculinity.
How do we conceive diversification in the milieu of technics? Though the associated milieu is a useful concept, the Indian terrain of techniques impels one to underscore the notion as a milieu of subversion. The paper begins with a reading of the seminal work of Debiprasad Chattopadhyaya analyzing the impediments to writing a history of ancient Indian science and technology. Whether his effort to resuscitate a history of materialism succeeds in interpolating a history of techniques in India is a critique this paper explores, taking cues from Yuk Hui’s incisive reading of the perceived absence of traditions of technical thinking in China, or broadly Asia. The present paper builds on Hui’s concept of “cosmotechnics” alongside the notion of “associated milieus” of the counter-cultural techniques of Yoga and Tantra. In the second half, the paper brings into view how the Indian thought and practice traditions of Tantra and Yoga, as techniques of the cosmic self and the body, are woven into counter-cultures, eliciting a critical framework for thinking these associated milieus as exiled and migrated milieus.
In this article, I use the dialogical ideas of Hans-Georg Gadamer to evaluate whether generative AI is ready to join the ontological conversation that he considers humanity to be. Despite the technical advances of generative AI, Gadamer’s philosophical hermeneutics reveals that it cannot function as a proxy human dialogue partner in pursuit of understanding. Even when free from anthropomorphic projections and reimagined as the “other”, generative AI is found to have a weak epistemology, lack of moral awareness, and no emotions. Even so, it evokes a response in some users that places it on the threshold of being. The most promising dialogical role identified for generative AI is as a digital form of Gadamerian “text” currently constrained by copyright and technical design. Generative AI’s shortcomings risk inhibiting hermeneutical understanding through greater access to summarised knowledge. Nonetheless, the new technology is on the brink of joining the ontological conversation of humanity.
The following paper attempts to articulate a distinctly materialist notion of emergence and the formation of patterns by revisiting two texts that have been considered oddities, if not embarrassments, by the subsequent developments of their respective disciplines: Freud’s Project for a Scientific Psychology and Engels’s Dialectics of Nature. Both texts are strikingly similar in their speculative engagement with the natural sciences and in their potential to inform a renewed engagement with the question of the relation between technology and life. With the concept of “path-breaking” [Bahnung], Freud understands perceptions as inscribing themselves in the structure of the very perceiving apparatus through the repetition of what one could call a “material trace” (Sybille Krämer). This notion of the “material trace” can be connected to the key thrust of Engels’s “objective dialectics” in that it “concerns a model of structural emergence” (Hartmut Winkler). I want to propose that these texts can potentially enrich our understanding of how mental formations such as memory take shape and how subjectivity is constituted in material processes, once Freud and Engels are read through recent philosophical thinking on technology (Bernard Stiegler, Catherine Malabou) and the concept of recursivity (Yuk Hui). This approach can also supply resources for a Marxist notion of ideology—namely by performing a turn from a critique that is primarily concerned with the question of how we can penetrate false appearances towards a materialist account of how (“false”) appearances, something like “real abstractions” (Alfred Sohn-Rethel), can emerge out of the “flat plane” of matter.
Book review of Cybernetics and the Origin of Information by Raymond Ruyer, translated by Amélie Berger-Soraruff, Andrew Iliadis, Daniel W. Smith, and Ashley Woodward.
Review of:
Laura Tripaldi, Parallel Minds: Discovering the Intelligence of Materials. Falmouth, UK: Urbanomic, 2022. 192 pp. $18.95 (Paperback ISBN: 9781913029937)
The present text aims to discuss the proximity between two central concepts, “syntropic agriculture” and the “Neganthropocene,” in the works of Ernst Götsch and Bernard Stiegler respectively. First, a review of the terminology of “syntropy” and “negentropy” is carried out, establishing that the terms are quasi-synonymous. Second, a succinct explanation of syntropic agriculture as an “epistemological key” is offered, one very similar to that of Isabelle Stengers and Ilya Prigogine in Order out of Chaos. Third, an explanation of its origin in indigenous cosmology, as expressed by Viveiros de Castro in Cannibal Metaphysics, is outlined. Syntropic agriculture is one of the ramifications of techniques carried out in agroforestry systems and is therefore also called “successional agroforestry systems” (SAFs). It has been used not only for reforestation, but also for small- and large-scale agricultural production. The big differences between syntropic agriculture and other agroforestry systems lie in how the biodiversity of cultivated species is emphasized and how ecological succession is used as a guide to cultivation. That is, syntropic agriculture mimics the processes carried out by ecological succession. The role of human beings is also considerable, given that the farmer needs to act constantly, intervening in cultivation, sowing, and pruning. Thus, syntropic agriculture reaches the height of syntropy when there is an organization similar to that of a forest, but above all similar to a forest in which the human is properly integrated. At this point, a parallel is drawn between this model of agriculture and that carried out by Amazonian indigenous peoples, which most likely gave rise to the Amazon Forest.
This paper challenges the frequent demonisation of entropy in discourses which attempt to draw a “naturalised” axiology from thermodynamics, information theory, and related sciences. Such discourses include Wiener’s cybernetics, Stiegler’s neganthropology, and Floridi’s information ethics, in each of which entropy designates the evil which must be fought in the name of life, information, or some other notion of “the good.” The perspective the paper develops is Nietzschean. Nietzsche himself rejected the consequences of the Second Law, but I wish to argue that it is possible to affirm entropy, for Nietzschean reasons.
First, the paper argues that the reason Nietzsche rejected the Second Law is that it provides consolation for the pessimist (an argument made by von Hartmann). Eternal return should be affirmed because it is the more difficult position, and so provides the ultimate existential test. However, metaphysical and existential reasons must give way to more recent scientific evidence, especially the dating of the universe, which undermines Nietzsche’s argument against heat death. While this alone is sufficient reason to affirm entropy, the position is supported by two further classes of reasons: first, the oppositions which have supported the traditional ascription of values to negentropy and entropy can be challenged; and second, entropy can be seen as consonant with the characteristics of existence which Nietzsche sought to affirm, especially becoming.
The 19th century saw a dramatic transformation in the basic categories of knowledge: in both metaphysics and physics, the notions of time and matter became intertwined following the industrial revolution. In a crucial sense, both dialectical materialism and entropic physics reflect the changes induced by the industrial revolution. In this paper, I will examine the notion of entropy so as to form a dialectical concept of it. In doing so, I will relate dialectical materialism and entropy to demonstrate their shared ground. The goal of this will be to show what the consequences of the industrial revolution have been for the form of time itself. Rather than the formal, linear time of Newtonian mechanics and Kant’s transcendental idealism, time today has become an unorientable surface, relating back to itself in important ways. Taking some topological ideas from Deleuze’s treatment of the third synthesis of time in Difference and Repetition (1969), as well as Žižek’s concept of dialectical materialism from Sex and the Failed Absolute (2019), I will show how these disparate notions of material time bear on the topology of time after the industrial revolution.
In this paper I argue that the contemporary pathologizing of old age is directly tied to the notion of uselessness, understood entropically as that which cannot contribute energy for useful work. The elderly are configured as socially useless and thus threaten the health of the body politic. As a result, they are marginalized, ignored, and treated as waste to be jettisoned from the system. Because understanding bodies as machines able or unable to perform work accords with the second law of thermodynamics, the first half of this paper discusses entropy as both scientific law and philosophical concept. The second section explores the lived experience of aging and the pathologized position of social uselessness through Simone de Beauvoir’s analysis of old age. As she herself ages, the place of senescence comes to play a more pronounced role in her philosophical inquiry. Writing about the lived experience of aging adds a vital dimension to the scientific and philosophical perspectives, insofar as it foregrounds the various ways that entropy is felt: slowing, dissipating, and ultimately dying. While these should be viewed as part of the normal life cycle, they become markers of abjection which are judged harshly against the standard of social utility.
In this article I will try to suggest a transdisciplinary conception of entropy in order to analyse the contemporary ecological polycrisis, which is usually described as the Anthropocene era. According to the French philosopher Bernard Stiegler, the Anthropocene can be understood as an Entropocene, because the current ecological crisis consists of a process of massive increases of entropy in all its forms: thermodynamic entropy (that is, the dissipation of physical or chemical energy), biological entropy (the destruction of biodiversity), and psycho-social entropy (the reduction of knowledge to data and calculations through digital disruptive technologies). In order to analyse this situation, we need a transversal conception of entropy: from Bergson’s philosophy of life to Stiegler’s philosophy of technics, through Schrödinger’s physics, Wiener’s cybernetics, Lotka’s biology and Lévi-Strauss’s anthropology, I will try to build a conceptual history of entropies from a Stieglerian point of view and to explore its economic and political consequences in Stiegler’s thought, in order to open new paths beyond the Entropocene era.
If, as Canguilhem argues, the emergence of exosomatic life involves the introduction of a new inconstancy into life’s environment, then both juridical and scientific law can be understood as a response that aims at a new constancy or fidelity, but one that always requires interpretation. Today, however, there is a crisis of law, due to a differential between the speed of legal change and the speed of digital network technology. This crisis can be understood in relation to Stiegler’s account of exosomatic life as involving three kinds of memory that struggle against the entropic tendency, but here we argue that there is also a fourth memory: the immune system. Taking this into account can further elucidate Stiegler’s claim that the pharmakon has a third, psychosocial dimension: the pharmakos. By understanding immune function not just as discriminating between proper and foreign elements, or friend and enemy, but rather as a retentional and interpretive system, we can understand phenomena such as the designation of a scapegoat as a fault of interpretation that can be compared with accounts of the onset of paranoia. This in turn makes it possible to understand the crisis of contemporary experience as an anaphylactic reaction resulting from a collapse of resonance that amounts to a loss of the knowledge and desire required to live in tension, where the latter is the only meaningful definition of peace.
In non-modern biocultures, contextual human technicity has played a key role in shaping the behaviors and the morphology of non-human species, which in turn has simultaneously modulated human morphology and behavior: behavior affords behavior. Studies intersecting anthropology and ecology have framed this process as a biological feedback in which species co-evolve through the constitution of biocultural diversification, thus producing negative entropy through technical activities.
Originating from nineteenth century physics, the concept of entropy—a measure of disorder, randomness, and/or the dissipation of useful energy—underlay a cosmology where order and complexity were seen as highly improbable phenomena in a universe tending toward chaos and disorganisation. Nearly a century later, theoretical frameworks were developed for understanding the production of entropy as an enabling feature of self-organized complexity in the natural world. These ideas would contribute to establishing connections between the origins, development, and evolution of life and the principles of a thermodynamic universe. For some, they also supplied the conceptual foundations for theorizing about a universal natural tendency driving the development of increasingly complex and ordered systems which amplify processes of entropy production and energy dissipation and dispersal. In this paper I chart a path through the aforementioned ideas and present their relevance in framing a relationship between our technological civilization and the Earth system. I then speculate about the prospect of a technosphere whose constitution and activity are aligned with thermodynamic principles of dissipation and entropy-production, drawing on theoretical biology and recent developments in bioengineering to envision a paradigm where technology becomes living matter itself.
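As a brief formal aside, not drawn from the paper itself: the thermodynamic, statistical, and informational senses of entropy that this abstract invokes are standardly written as follows (Clausius, Boltzmann, and Shannon respectively).

```latex
% Clausius: entropy change for a reversible exchange of heat \delta Q_{\mathrm{rev}} at temperature T
\[ \mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T} \]
% Boltzmann: statistical entropy, with W the number of microstates compatible with a macrostate
\[ S = k_B \ln W \]
% Shannon: informational entropy of a probability distribution p
\[ H(p) = -\sum_i p_i \log p_i \]
```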
Entropy, often defined negatively as disorder or randomness within a system, is vital for organisation while also posing a threat to cyclical reproduction. Entropy is not equivalent to disorganisation but is rather a source of creativity at the local level, even if the tendency towards entropy persists globally. In this article, we build upon Bernard Stiegler’s understanding of entropy, argue that the interplay of entropy and anti-entropy can be comprehended through Hegel’s notion of negativity, and draw upon the organisational approach to biological systems, which introduces anti-entropy as akin to organisation. We thus address Stiegler’s lopsided criticisms of dialectical accounts and argue that the interplay between entropy and anti-entropy is inherently dialectical. We also employ the concept of habit to understand the dialectic of entropy and anti-entropy in the life of organisms, and the delicate balance between stability and plasticity that must be upheld for the thriving of both organisms and their environments.
This paper aims to explore the concept of entropy in Ivan Illich’s overall thinking and to open a dialogue with other authors. Our goals are twofold. First, we point out how Illich's early work is relevant for critically thinking about entropy in its relationship to forms of social organization and technology usage. Second, we point to how Illich’s later works consider a planetary responsibility. By gathering matter, energy and information, technology is an ambiguous force of both hominization and alienation, world-building and world destruction. For the early Illich, liberation from such new heteronomy was possible. The late Illich, however, warns against the dangers of a collective responsibility. The attempt to “save life” is a necrophiliac manipulation, dependent on a planetary extension of promethean power. Instead, humankind must nurture the return of Epimetheus: a powerless relationship with the future that places hope as the constitutive force of the social fabric.
Appearing to channel the Devil himself, writer Dorion Sagan reports on a deep Earth conference where the former, with technical and philosophical rigor, expands upon Bernard Stiegler’s notion of the Entropocene, the “generalised anthropogenic acceleration in the rate of terrestrial entropization” from which “[m]any of the world’s current politico-ecological crises derive” (White and Moore, 2022). The apparently possessed writer, whose stenography of the deep Earth demon appears to be for self-aggrandizement as part of a suspected Mephistophelean pact, argues that Stiegler’s Entropocene is in fact a specific form of thermodynamic planetary dysfunction. Unlike some other global concerns analyzed by philosophers—e.g., Immanuel Kant’s inquiries into the possibilities of world peace, and speculations, following Fontenelle, on the existence of life on other planets—the analysis of Earth’s planetary condition is not unique; it is an example of thermodynamic dysfunction in general, which has important and investigable precursors: forest ecosystems exposed to heat and radiation from nuclear runoff, and nonliving complex systems (e.g., Bénard cells, Taylor vortices, “multiplying” typhoons, and long-lived autocatalytic Belousov-Zhabotinsky chemical reactions) that exhibit physiological malaise, and ultimately “death,” when the temperature, pressure, or electron potential gradients upon which their organization depends become too steep or insufficient. Among the many interwoven themes discussed in one of the Devil’s “outer dens” are senescence, the checkered history and thermodynamic reality of entropy as a measure of the spread of energy, Nietzschean eternal recurrence, life on other planets, and the mythical heat death of the universe.
Like the Copernican revolution which initiated the Modern project, there has been a thermodynamic revolution in the empirical sciences in the last two centuries. The aim of this paper is to show how we might draw from this revolution to make new and startling metaphysical and ethical claims concerning the nature and value of reality. To this end, this paper employs Aristotle’s account of the relation of the various philosophies and sciences to one another to show how we might assert a new theory of being, moral value, and practical action from the primacy of entropic decay asserted in the contemporary mathematical sciences. This paper proceeds to show how, from what the contemporary sciences have concluded concerning the primacy of entropic decay within reality, unbecoming might be forwarded as a new account of the essence of existence: i.e., the first cause and motivating principle of reality’s formal, material, efficient, and final nature. The paper concludes by arguing that a new and surprising account of universal ethical value and normative duty can be deduced from such a metaphysics of decay.
It is, according to Serres, the ‘greatest discovery of history that entropy and information are connected’ – a line of thought he pursues through epistemological questions, aesthetics, cultural analysis, and a theory of matter. By following Serres’s work, one finds negentropy, entropy, chaos, local orders, the ‘soft’, and the ‘hard’ almost everywhere in his writings. The intellectual context and sources that Serres draws on are an important support for understanding the way in which the coupling of informational and thermodynamic entropy takes place, and how it becomes a key operator of entropic differentiation. This text draws a combinatorial map of how Serres connects understandings of entropy across a range of areas of knowledge. In this specific context, Serres’s path of translation harnesses the so-called ‘hard’ and the ‘soft’ forms of entropy in looking at literature and the arts, yet also to discuss social phenomena and the formations of societies. By drawing attention to the negative spaces in Serres’s connective path of translating entropies, and in the course of reading his work in context with other philosophies of entropy, this contribution aims to explore Serres’s translations in the way they both connect and leave gaps. Approaching Serres’s criticality in this way brings one to the critical, difficult, icy landscapes of the North-West-Passage and the role it plays in his work. The North-West-Passage epitomises a ‘method’ to conceive the difficult path between the natural sciences and the humanities – exactly the kind of path that ‘entropy’ often meanders on. In fact, entropy itself plays an important role with regard to the icy landscape’s ecology, e.g. to the degree to which the passage is melted or frozen, and thus to the possibility of the passage as such. By bringing these multi-layered aspects of entropy as a material, aesthetic, and critical factor together, this contribution places Serres’s take on entropy as an eco-critical path in the face of the melting of icy landscapes.
The implementation of generative models in deep learning, particularly for Text-to-Image Synthesis (T2I), is essentially an exaptation of the cognitive processes of the transcendental imagination Kant outlined in the notoriously opaque schematism chapter of the Critique of Pure Reason. While such engineering feats mirror the liberating force of photography’s invention, they have also proven to be a significant engine for reproducing antediluvian ideologies of art pivoting on claims about what has been stolen by the machine. This paper argues that T2I presents an opportunity to instead reconsider what our models of the procedures of the imagination actually are or could be, and wagers that the interdisciplinary conceptual frameworks supporting machine learning enable us to recuperate from an “incommensurable” synthetic intelligence the necessary resources for revising our understanding of what creativity is and does, with pattern recognition providing the tools for a renewed elaboration of techné to pull a heist upon the transcendental itself.
This essay analyses the interplay of indeterminacy and determinacy in the experience of images generated through text-to-image (T2I) models. Through an interdisciplinary approach, it uncovers three layers of indeterminacy: the computational indeterminacy inherent in text-to-image model processes, the indeterminacy of imagination in Husserl’s concept of protean phantasy, and finally the visual indeterminacy that figures in meaning making in all images. Generated images pass through these stages of indeterminacy, transforming indeterminate phantasy into determined visual objects, resulting in a conflict of consciousness between potential and actual. A distinction emerges between artificial phantasy, characterized by quasi-experience, and artificial imagination, grounded in images both as training data and perceptual image objects. As mediators between indeterminacy and determination, T2I images appear as technical media that mediate multiple forms of indeterminacy, showing the circulation between phantasy and imagination, between continuous and discrete. The generated image marks the limit of the unlimited indeterminate imagination.
Art and technology continue to experience a rapidly escalating historical rapprochement, yet our comprehension of this relationship has tended either to be constrained by scientific rigour and calculative thinking or to swing to the opposite, lyrical extreme. The objective of this article is therefore to offer artists, humanists, scientists and engineers a reflective look at these developments from the broader perspective they deserve, while maintaining a focus on what should be the emerging core of the topic: the relationship between art, technology and science. The state of the art in mechatronics and computing today is such that we can now begin to speak comfortably of the machine as artist, and to hope that an aesthetic sensibility on the part of the machine might help generate a more intelligent, friendly and responsive machine agency overall. The principle of the inhuman emphasises that the questions of ontology are not questions of being as subject, as consciousness, as Dasein, as body, as language, as human or as power, but of being as being. The ontological principle, in turn, hypothesises that all beings are ontologically on an equal footing, or that all beings are to the extent that they make a difference. Until now, however, not much has been said about “algorithmic entities”, and many questions remain unanswered. How can the question of techno-diversity be raised when intellectuals yearn for a general artificial intelligence? We must go back to history to orient ourselves in our current situation with a sense of distance. Will it be possible to find strategies to free ourselves from this apocalyptic end of technological singularity and reopen the question of the creative future in machines in relation to humans?
The introduction of automated algorithmic processes (e.g. machine learning) in creative disciplines such as architecture and urban design has expanded the design space available for creativity and speculation. Contrary to previous algorithmic processes, machine learning models must be trained before they are deployed. The two processes (training and deployment) are separate and, crucially for this paper, the outcome of the training process is not a directly implementable spatial object but rather code. This marks a novelty in the history of spatial design techniques, which has been characterised by design instruments with stable properties determining the bounds of their implementation. Machine learning models, on the other hand, are design instruments resulting from the training they undergo. In short, training a machine learning model has become an act of design.
Besides spatial representation traditionally comprising drawings, physical or CAD models, machine learning introduces an additional representational space: the vast, abstract, stochastic, multi-dimensional space of data and their statistical correlations. This latter domain – broadly referred to as latent space – has received little attention from architects, both in terms of conceptualising its technical organisation and of speculating on its impact on design. However, the statistical operations structuring data in latent space offer glimpses of new types of spatial representation that challenge existing creative processes in architectural and urban design. Such spatial representation can include non-human actors, give agency to a range of concerns that are normally excluded from urban design, expand the scales and temporalities amenable to design manipulation, and offer an abstract representation of spatial features based on statistical correlations rather than spatial proximity. The combined effect of these novelties can elicit new types of organisation, both formally and programmatically. In order to foreground their potential, the paper will discuss the impact of ML models in conjunction with larger historical and theoretical questions underpinning spatial design. In so doing, the aim is not to abdicate the specificity of urban design and uncritically absorb computational technologies; rather, the creative process in design will provide a filter through which to critically evaluate machine learning techniques.
The paper sets out to conceptualise the potential of latent space design by framing it through the figure of the paradigm. Paradigms are defined by Thomas Kuhn as special members of a set which they both give rise to and make intelligible. Their ability to relate parts to parts not only resonates with the technical operations of ML models; they also provide a conceptual space for designers to speculate on different spatial organisations aided by algorithmic processes. Paradigms are not only helpful for conceptualising the use of ML models in urban design; they also suggest an approach to design that privileges perception over structure and curation over process. The creative process that emerges is one in which ML models are speculative technical elements that can foreground relations between diverse datasets and engender an urbanism of relations rather than objects.
The application of such algorithmic models to design will be supported by the research developed by students of Research Cluster 14, part of the Master in Urban Design at The Bartlett School of Architecture in London.
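To make the idea of latent space as a design medium slightly more concrete, the sketch below shows the basic operation such workflows rely on: encoding two designs as latent vectors and interpolating between them before decoding. It is a generic, hypothetical illustration in PyTorch with an untrained stand-in autoencoder and placeholder feature vectors; it is not drawn from the Research Cluster 14 work or from the paper's own models.

```python
# Hypothetical sketch: "designing in latent space" by interpolating between
# the latent codes of two designs. The encoder/decoder are untrained
# stand-ins; in practice they would be trained on a corpus of spatial data.
import torch
import torch.nn as nn

latent_dim = 8
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 64))

design_a = torch.rand(1, 64)  # placeholder feature vector for one spatial design
design_b = torch.rand(1, 64)  # placeholder feature vector for another

with torch.no_grad():
    z_a, z_b = encoder(design_a), encoder(design_b)
    # Each interpolation step yields a candidate design defined only by its
    # position between the two latent codes, i.e. by statistical correlation
    # rather than by any drawing an author produced.
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        z = (1 - t) * z_a + t * z_b
        candidate = decoder(z)
        print(f"t={t}: decoded design tensor of shape {tuple(candidate.shape)}")
```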
My paper approaches the theme of computational creativity by looking at uncertainty as an epistemic and aesthetic tool that must be examined to address the challenges brought to critical practice by planetary computation. It positions uncertainty as central to how the encounter of the human practitioner with non-human machines is conceptualized, and as a resource for building speculative-pragmatic paths of resistance against algorithmic capture. It proposes ways to cultivate uncertainty and use it as a design material to produce new types of knowledge that question machines’ pre-emptive manoeuvres and resist their capture of potential. The argument proposed is that uncertainty affords the production of new imaginaries of the human-machine encounter that can resist the foreclosure of futures (what will be) and are sustained instead by the uncertainty of potential (what might be) (Munster 2013). Dwelling in a space of potential – Deleuze’s virtual, or what I call a space of ‘maybes’ – requires of the practitioner a repositioning of their epistemic perspective and a reflection on the following questions: how can material knowledge be made by engaging with modes of un-knowing and not-knowing in machine interaction? How can these modes of un-knowing and not-knowing be fostered as a critical and political onto-epistemological project of reinventing critical practice for the algorithmic age? (Hörl et al. 2021; Hansen 2021, 2015; Pasquinelli and Joler 2020). The paper argues that the machinic unknown should be engaged with, not through the conventional paradigm that pits human against machine creativity and attempts to rank and score them through similarities, but rather through a (paradoxical) deepening of the unknowability at the core of the machine (Parisi) and of the machine’s own incommensurability (Fazi 2020). It then proposes the Chinese notion of wu wei (active non-action) (Jullien 2011, 2004, 2000, 1995; Allen 2015, 2011) as a stratagem to experiment with in crafting speculative-pragmatic interventions, and to augment the ‘power of maybes’ as a space of anti-production and of resisting reduction (Ito 2019).
By ingesting a vast corpus of source material, generative deep learning models are capable of encoding multi-modal data into a shared embedding space, producing synthetic outputs which cannot be decomposed into their constituent parts. These models call into question the relation of conceptualisation and production in creative practices spanning musical composition to visual art. Moreover, artificial intelligence as a research program poses deeper questions regarding the very nature of aesthetic categories and their constitution. In this essay I will consider the intelligibility of the art object through the lens of a particular family of machine learning models, known as ‘latent diffusion’, extending an aesthetic theory to complement the image of thought the models (re)present to us. This will lead to a discussion on the semantics of computational states, probing the inferential and referential capacities of said models. Throughout I will endorse a topological view of computation, which will inform the neural turn in computer science, characterised as a shift from the notion of a stored program to that of a cognitive model. Lastly, I will look at the instability of these models by analysing their limitations in terms of compositionality and grounding.
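As a minimal illustration of what "encoding multi-modal data into a shared embedding space" looks like in practice, the sketch below uses the open-source Hugging Face transformers library and the public CLIP checkpoint openai/clip-vit-base-patch32; neither is cited in the essay, and the image path is a placeholder, so this is an assumption-laden sketch rather than the author's own apparatus.

```python
# Illustrative sketch: text and an image embedded in one shared space with CLIP,
# the kind of joint embedding that latent diffusion models condition on.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.png")  # placeholder path to any local image
texts = ["an abstract painting", "a technical diagram"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Because captions and image share one embedding space, these scaled cosine
# similarities indicate how well each caption matches the image.
print(outputs.logits_per_image)  # tensor of shape [1, 2]
```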
Technophany is a journal of the Research Network for Philosophy and Technology, dedicated to the philosophical and historical studies of technologies.
E-ISSN: 2773-0875 | Published by Radboud University Press, supported by Openjournals.
Supported by the City University of Hong Kong and Hanart Forum.
This work is licensed under a Creative Commons Attribution 4.0 International License.