Entangled Others on Nature, Ecology and AI
Written by arltcollector
In their creative pursuits, Entangled Others, an artistic duo, delve into the complex and expansive realm of nature, ecology, and the more-than-human world, utilizing cutting-edge techniques such as 3D modeling and machine learning to bring their visions to life. Feileacan McCormick explains that Entangled Others' intention is to “unveil, and make tangible as an experience, the innate entangled nature of the world”.
We’re proud to present an in-depth interview with Entangled Others as they take us through their practice, opinions on the current state of AI, and their overall mission as a studio.
Can you expand on the creative focus of Entangled Others?
FEILEACAN: We're an artist duo that began collaborating in 2020. We now share a studio practice where the focus is on the natural (more-than-human) world - nature, ecology, and finding ways to use technology (though not exclusively) to create experiences that explore the wonders of the more-than-human world, as well as our relationship to it.
A lot of our focus has been on artificial life. Not as a way of exploring the replacement, or mere recreation of nature, but rather to examine the world around us through it. Very often the digital is seen as a separate, platonic realm from the world we interact with. But in reality, the digital is as much a part of the physical ecosystems of the world as other, biological inhabitants. We've been exploring how to bring experiences of the more-than-human world into the digital and start from that point of view. There is a rich tradition of artificial life going back many decades and we've seen time and time again how research applications of artificial life are relevant to helping us understand the physical world, and this is a very interesting tension to work with (and from). We try to find ways to nurture the understanding that everything is interconnected and that we have a responsibility as equal coexistent entities to care for ourselves and others, to find a better way forward.
SOFIA: We are trying to create interactions that speak about ecology or make us question our relationship with the non-human or more-than-human world. The mission is also to start conversations or ask questions about what it means to create representations of the more-than-human. At the same time, we're talking about bias in AI. For example, we have projects specifically around the limits of the available data about the natural world. And so, a lot of our work is asking questions about what it means to digitize nature, and to take data and create richer datasets that have a more accurate representation of creatures. So, yeah, we talk about nature, but in relation to technology.
Exhibition at MAXXI National Museum of XXI Century Arts
I am curious to know more about your collaborative partnership. Do you have individual roles?
SOFIA: We don't really have assigned roles. The idea is that we are both fully involved in the artistic process and that we both have a say, because art can be deeply personal. We do mind maps to help align our interests in a project and to orient ourselves. Of course, we can't both be doing everything together all the time. Sometimes we have to be like, 'okay, you take care of this. I'll take care of that.' We discuss a lot of our projects whilst walking. So that's kind of our modus operandi in a way: having a walk in a park, or taking a day to hike in the countryside. Feileacan always carries a little sketchbook that he uses for taking notes and sketching in; he's really good at that.
FEILEACAN: We often see things slightly differently, which is a great strength as we have a back-and-forth stemming from our different skill sets and perspectives. We've often been quite complementary in that regard. Sofia's an avid diver, so she'll often come back from a dive with new ideas or experiences that she wants to bring into our works. Or I might sit down and sketch, or test a new way of doing something, and that turns into a new work we develop together. It's this natural back and forth that allows us to work both individually and together.
Sofia diving in the Adriatic Sea
While it may seem paradoxical, you work digitally to deepen perception of the physical world. Can you speak more about this?
FEILEACAN: Generative systems and machine learning allow us to work at a different scale when it comes to data and intuition, an affordance that suits us well creatively. By working with machine learning, we can distill great quantities of data. When Sofia dives, she takes countless images. We can then train a neural network to extract essential patterns or qualities, and we find different ways of seeing the more-than-human. When you walk or swim around, you might not necessarily be able to pick out underlying patterns, or the visual essence.
SOFIA: There’s a divide between what's considered natural and what's considered artificial or unnatural. I really like the intersection where it's not clear anymore what is natural and what isn't. If you make a 3D scan of a tree, for example, it's not clear because it's data, but at the same time, that data comes from something we consider natural. Our digital and non-digital experiences are entangled. They are in a feedback loop and don't exist separately from each other. Likewise, the artificial and the natural don't exist separately from each other.
When did you first come across AI, and how has your relationship with it developed over time?
SOFIA: I got interested in AI because I wanted to see what patterns it could extract from my browser history. I was interested in looking at myself and the content I was browsing, and thinking about the idea of mental health in that context. Could an AI learn from that? What does that mean for mental health issues? That was back in 2017, and then my interest really shifted: I got less interested in the brain or mental health, and more fascinated with creativity and what artificial creativity means. For example, what kind of forms can we imagine using it? That's what initiated my first artistic series in this new direction, called Neural Zoo. This was also when Feileacan and I first met, as it was this series that Feileacan saw, and retweeted.
FEILEACAN: I knew AI existed and had seen images of DeepDream online, but I had never really seen any personal relevance to it. When I came across Sofia’s work online, I had no idea how it was done, but I found it amazing; the images had a certain tactile sensibility that I really liked. As she started introducing me to the world of AI, things started to click for me, and generative tools and systems suddenly unfolded as relevant. What I found especially interesting was how GANs have this quality of visual distillation where patterns are extracted from a dataset. For example, if you train a neural network on images of jellyfish, what you end up with isn't actually a jellyfish, but something that has an essential jellyfish-ness to it.
If you're sorting through a thousand, ten thousand, or a hundred thousand images that you've created for a dataset, it's very hard to get a sense of the essential patterns or tendencies that define it. That changes once you've trained your network: you're suddenly confronted with a tangible distillation of the dataset's features. For example, in the case of ‘This Jellyfish Does Not Exist’, we hadn't realized it during the process, but once we had the final model we discovered we hadn’t created a model of jellyfish. Rather, we had captured only the one stage of the life cycle that we commonly associate with jellyfish, whereas the jellyfish actually goes through more than seven life-cycle stages, a metamorphosis that makes a butterfly pale in comparison. Not to mention the fact that some species can regress to a previous stage, making them theoretically immortal. A rather mind-blowing fact, honestly.
The understanding that these tools allow for feedback loops where we are confronted with ourselves in a very different way, and allow us to see the world a little more clearly, was important to me. A parallel is the microscope, one of Sofia’s life-long passions: it is a piece of technology that brings us closer to the world because it allows us to see what the naked eye cannot, features we would otherwise be entirely ignorant of. So even though it's a piece of technology, it actually brings us closer to the natural world. It is this open-ended potential of technology that we attempt to utilize in our practice.
Decohering Delineation, REALTIME at Nxt Museum
Can you articulate the process of working with a neural network?
FEILEACAN: There are many different types of architecture, especially nowadays, so the way one works with them can vary because of that. But with all neural networks, you have this initial, and entirely crucial, aspect of creating a dataset. You have to create and curate the dataset, which takes time and focus because you're saying, “this is relevant, this is not relevant”; “this is interesting, this isn't interesting”, etc. Then you have the process of training, which can take anything from hours to months depending on the type of neural network architecture, the size of the dataset, and, of course, your hardware. The training for Critically Extant, for example, took more than a couple of months. What you are doing is letting the algorithm attempt to infer patterns and features from the dataset you created. The process of training results in a model which you can run inference upon. By this we mean you can use the model, in the case of generative models (which are the type we most commonly work with), to generate a series of outputs based on what the algorithm has 'learned' from the datasets. One of the benefits of using neural networks is that you can generate a single output, or thousands.
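For readers curious how that dataset-training-inference loop can look in code, here is a minimal, purely illustrative sketch using PyTorch and a toy DCGAN-style generator. It is not the studio's actual pipeline or architecture; the folder path, image size, and hyperparameters are all assumptions, and a real training run would be far longer and more carefully tuned.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import save_image

# 1. Dataset: a curated folder of images (the curation step described above).
#    "curated_dataset/" is a hypothetical path; ImageFolder expects class subfolders.
transform = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])
dataset = datasets.ImageFolder("curated_dataset/", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

# 2. Models: a tiny generator (noise -> 64x64 image) and discriminator (image -> logit).
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 1, 16, 1, 0), nn.Flatten(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

# 3. Training: let the networks infer patterns from the curated dataset.
for epoch in range(10):  # real trainings run far longer (hours to months)
    for real, _ in loader:
        z = torch.randn(real.size(0), 100, 1, 1)
        fake = generator(z)
        ones, zeros = torch.ones(real.size(0), 1), torch.zeros(real.size(0), 1)

        # Discriminator step: distinguish dataset images from generated ones.
        d_loss = bce(discriminator(real), ones) + bce(discriminator(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make generated images pass as "real".
        g_loss = bce(discriminator(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# 4. Inference: the trained model can generate a single output, or thousands.
with torch.no_grad():
    samples = generator(torch.randn(16, 100, 1, 1))
save_image(samples, "outputs.png", normalize=True)
```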
There is an aspect of meditation to exploring outputs. You start to ask questions such as, 'is there something about the visual qualities that are especially interesting in this cluster of outputs, as opposed to another?' Or, as often happens, you discover that a different pattern than what you expected emerges, which might lead you in a new direction, or require that one adjusts the dataset or creates a new one. This is a practice of cyclical refinement, a feedback loop. Very often, what we work with is a daisy chain from the curation of the dataset to the final outputs. Often we work with multiple architectures, models, and pipelines to further refine the process.
Based on your experience working with neural networks, do you find that one network is enough to satisfy your artistic practice or do you create a series of them?
FEILEACAN: It depends on the work. With Chimerical Stories we wanted to highlight the life-cycle of jellyfish, which few have any real understanding of. For this, we only used GANs. But we developed a way of cross-breeding visual qualities from different models, allowing us to mix specific traits of different specimens. That was just for that project, though. For our practice in general, it varies.
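The exact cross-breeding method isn't detailed in the interview. One common way to approximate mixing traits between two trained generators that share an architecture (it tends to work best when one model was fine-tuned from the other) is to interpolate their weights. Below is a hedged sketch of that idea only, with hypothetical checkpoint names, reusing the toy generator from the earlier example.

```python
import torch
import torch.nn as nn

def blend_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate the floating-point weights of two identically-shaped
    checkpoints; alpha=0.0 reproduces model A, alpha=1.0 reproduces model B."""
    return {
        k: torch.lerp(sd_a[k], sd_b[k], alpha) if sd_a[k].is_floating_point() else sd_a[k]
        for k in sd_a
    }

# Hypothetical checkpoints: two generators trained on different jellyfish life-cycle stages.
sd_a = torch.load("generator_medusa.pt")
sd_b = torch.load("generator_polyp.pt")

# Must match the architecture the checkpoints were trained with (same as the sketch above).
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),
)
generator.load_state_dict(blend_state_dicts(sd_a, sd_b, alpha=0.3))

with torch.no_grad():
    hybrids = generator(torch.randn(8, 100, 1, 1))  # outputs mixing traits of both models
```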
SOFIA: We don't focus specifically on one. The concept and how it's executed, whether it's physical, digital, or a hybrid between the two, really becomes the deciding factor for the tools used. Tools are secondary to the execution of the concept in that regard.
How do you decide on the datasets for your work?
SOFIA: It depends on the work, of course, but for the aquatic works, for example, we use both data we have filmed – where we record underwater dives with an aquatic drone – and artificial data generated using generative 3D models. I do a lot of diving and snorkeling, which means that, thanks to underwater cameras, I am continuously creating and expanding our datasets.
FEILEACAN: For Critically Extant, we used the biggest open, permissible dataset of life on earth, containing some ten thousand species across nearly three million images. The idea was to enact a future recreation of currently critically endangered species as if they were extinct. We wanted to see how well we could recreate these species using cutting-edge technology, while showing that even the latest technology has very clear limitations to what it can do, especially when we lack data. These critically endangered species often have only a scientific name and the date of their last evaluation. With the data available to us, we used neural networks to ask the question: "how well do we know the natural world?" It turns out, not very well. Approximately three million images across 10,000 species is basically a drop in the bucket compared to the some 1.2 million species cataloged so far. This means that even the biggest open dataset is entirely inadequate for a realistic recreation of what these species look like. That was part of the project: a performative aspect showing that the hype around big data and AI is very much inflated, and that we actually know far too little about the more-than-human.
What would you say is the biggest misconception about working with AI?
FEILEACAN: That AI is conscious. Theoretically speaking, there's always the potential for future software architectures to have the capacity for consciousness. But for now, this is still science fiction. As humans, we tend to imbue things with qualities that aren't there because they match a familiar pattern from another context. For example, large language models provide a textual interface, a kind of interaction you would normally only have with other humans. We don't tend to think about text as coming from a generative system. Rather, we see it as something innately human. From our experience working with these algorithms, they might not be conscious or have any concept of what the data they learned from is (we can generate incredibly realistic images of cats, but the neural network has no concept of what a cat is in the manner we do), but as tools, they enable us to do some pretty amazing things.
The popularity and hype around tools like DALL-E cause a lot of misconceptions as they falsely give the impression that all AI is about writing a text prompt and hitting a button. It's nice that there's been so much focus on AI, but it's become a very monotonous perspective on what AI tools are in terms of artistic practice, what they can do, and what they require. Luckily, we're not the only artists working in varied ways with these tools. With time, there will be a more balanced image of what these tools are and can be.
SOFIA: There's a lot of vocabulary around collaboration with AI that I think is misleading. I know a lot of artists like to talk about their AI practice as collaborative. I respect that, as it's often part of their artistic statement, but I personally think it's misleading because it feeds this larger narrative that AI is conscious, or has the capacity to become conscious. There's another common misconception that all AI is made with corporate tools like Midjourney or DALL-E. In reality, there are a lot of AI tools, and there are a lot of artists who write their own software. I've had the experience of giving talks where I went through my artistic process, the tools I use, and the way I work with a dataset, and people still assumed I work with Midjourney.
Can you talk about the meaning behind your Hybrid Ecosystems collection?
SOFIA: The way I see the series is largely a combination of our digital reality with the things we see outside when we explore nature. It started when I saw Feileacan create an image that looked like a field. We used to go on a lot of bike rides to nearby forests back then. The image he generated looked kind of like a circuit board with trees. This concept fit perfectly with the ideas we had been talking about for a while – the connection between the natural and the unnatural (or artificial).
FEILEACAN: We wanted to try something different. A lot of our previous work focused on mapping the boundaries of our knowledge and the representation of the modern human world around us. But with Hybrid Ecosystems, we cast our vision forwards, toward a more ecological future.
Often visions of a more ecological future involve some shiny yet non-existent technology with lots of greenery on top. These (popular) visions of what a more ecological world can be are worryingly monotonous and not very creative. This series was meant to try something different. The starting point, as Sofia always likes to point out, is that you can't really imagine a color that you've never seen before, which entails that we can't really imagine futures we haven't seen before. But that doesn't mean we can't try, or try to build stepping stones towards something we don't know and haven’t yet experienced. We wanted to combine and recombine the familiar in ways that touch upon a harmonious form of coexistence between the digital and physical, in a positive and aesthetic way. It's not a concrete proposal for the future, but an attempt to start the process of stimulating our imagination of what a better ecological future could look like.
Your work has been exhibited in various, experimental ways; what has been your favorite form of display?
SOFIA: I think two of the most exciting moments were, first, Critically Extant being displayed in Times Square, which was incredible because it's a public intervention in a way, taking over screens that are normally used for advertisements and that are extremely expensive for brands to get a spot on. It's just incredible to have the chance of using 96 screens at one time, even if it's just for three minutes a day, and to use that to talk about the very specific topic of bias in datasets, lack of data, and critically endangered species. It's an incredible way of exhibiting the work.
Another one that was really exciting for us was the inflatable sculptures in the show in Shanghai. It was cool because insects are tiny things, but these sculptures were about eleven meters tall. A size that makes humans feel pretty small by comparison.
Inflatable Sculptures, Shanghai, China
In March 2023, we did a takeover of the botanical garden in Lisbon called Estufa Fría, with the help of curator Joana Seguro, and collaborated with an artist called Ana Quiroga, who composed a soundtrack for the whole experience. It was a 20-minute journey where you walk around the botanical garden listening to the soundtrack, with screens placed all around.
Presentation is super important because a lot of our work has a very strong connection to the physical. Often, our work isn't just a digital piece to be seen in a browser. We really enjoy creating a site-specific, physical presence for a work, creating sculptures, and considering the journey that people will have exploring the space where it's displayed. It becomes a part of the work in the end.
What are some methods for displaying your work that you'd like to experiment with in the future?
FEILEACAN: I have a notebook full of them. There's a lot of research still to be done, and we’re slowly building towards more physical things, but not in a primarily static sense.
SOFIA: I would like to have an exhibition at the Natural History Museum in London or at Kew Gardens. Those are two places I first saw in David Attenborough's documentaries, and I think they would make for incredible exhibition locations. I would also love to make an installation that's more of an intervention in the middle of a forest, someplace that's not so accessible, but in return becomes uniquely context-specific. With that approach there’s a return to where the original inspiration came from, connected to land art in a way.
If you had to choose one collection that embodies Entangled Others the best, which would it be?
SOFIA: I think Artificial Remnants, though I'm very attached to it because it was our first bigger project. Beneath the Neural Waves is also a good example of what we do. It speaks about data and the interconnectedness of things. It was one of the hardest works we've done. It was difficult to develop because we wanted to create an artificial coral reef. We had to figure out where to get 3D data of a reef to train our models, and how to get the inhabitants of the coral reef equally represented. There were a lot of problems to think about; we had to create artificial datasets in the end. Interestingly, the results are more a distillation of what we know about coral morphologies than of the data collected.
FEILEACAN: Artificial Remnants is definitely a strong candidate. Sediment Nodes also has some embodied qualities, but maybe more on a conceptual level. They embody different aspects of our overall practice. Beneath the Neural Waves is actually, in some ways, our most unrefined series because it's one of the most experimental and still in progress. It's still brimming with unexplored potential for growth. Traveling to coral reefs and attempting underwater 3D scanning was not feasible for us in terms of resources and complexity. So, we partnered with Joel Simon, an artist who had written a genetic algorithm for 3D coral growth. We used his algorithm to generate coral 3D models, which became the dataset. Working around the limitations of available data became part of the artistic statement, but it also goes to show how experimental and challenging the process was.
Are there other tools outside of AI that you use or would like to use in the future?
SOFIA: Biomaterials are materials that come from organic matter and can be used for sculptural purposes. That's something that we've been really into as a line of experimentation over the past few years, but they're extremely challenging to work with. We have, for example, grown mycelium to make a lamp (in 2019).
FEILEACAN: It depends on the medium. We've built tools around Blender for more three-dimensional work, which is a delightfully open and hackable foundation. We also work with p5.js for more generative stuff, and as Sofia mentioned, biomaterials. In general, we’re moving more and more into the physical. Not just working purely digitally, but trying to work in a hybrid manner. It's a hybrid in the sense it's both digital and physical, but we're also trying to find ways to articulate sculpturally, as well as conceptually, this entanglement of the digital and physical. We’re constantly exploring and playing and seeing what happens.
For inquiries, please contact hello@artxcode.io.