"_" \\Extended Intelligences 2
It was an immersive, really intense course focused on AI: how it works and its possible applications. The lessons were led by two very knowledgeable AI teachers, Chris and Pietro. Our first lesson began with their portfolio, and two projects surprised me a lot.
They showed their collective, called DOTTOD, and I really enjoyed documenting two of their projects.
I loved the first project. Basically, the AI system automatically selected the most popular tweet of each politician, and a generative AI translated the tweet from Italian to English. The translated output was fed to a text-to-image AI to generate an image from the text. This process was repeated for 31 days for the six most influential Italian politicians. This project definitely resonated with me because I'm from Italy, and being outside my country reveals some important details about il Bel Paese (xD); but it's also a project that, through AI, creates social commentary, something we need in this historical moment, and with AI it can be more eye-catching than through other means.
Here is the link: https://www.dottod.net/
The second one was unbelievable for me, because two years ago a classmate from my bachelor's degree and I were talking about an idea just like it; honestly, to me this feels quite close to science fiction. I'm glad to be here and see all these projects. The second one is an exploration of furniture design with AI. Basically, the methodology is: choose an iconic object, find the designer's original description, feed that description into a text-to-image AI, and then try to replicate the object using the AI image as a reference. I think this is a perfect way to speculate on the dualism between HUMAN CRAFTING/CREATIVITY and AI.
Here is the link: https://furnitures.dottod.net/
We experimented with the collective's AI camera, an amazing tool that captures a "reality snapshot" and modifies it based on text prompts. We were split into groups and had some fun with this tool; I think it was a good way to get in touch with AI technology.
The activity's goal was to capture 4 different reality snapshots and transform them with AI; the transformation is driven by a prompt written by each team. It was really interesting to see what the other groups did and how their creative processes worked. Here are some examples:
TRANSFORM JAVI and AUXI into a human-motorbike
Settings: you see two person in the picture, plase create a motorbike from this two person. The motorbike has to look like a real motorbike, the all meccanism and engeeniring part.
Prompt: My prompt has full detail so no need to add more. DO NOT add any detail, just use it AS-IS. Use the following information as the base details to generate the image: you see two person in the picture, plase create a motorbike from this two person. The motorbike has to look like a real motorbike, the all meccanism and engeeniring part. Scene description: The image depicts a street scene with two individuals kneeling down on a city sidewalk. One person is wearing a green hoodie while the other is dressed in a light-colored top and olive-green pants. Both appear to be in a position where they have their hands on their heads, and they kneel closely together. Behind them, there is a row of parked scooters and motorcycles, which are mostly in shades of black and grey, with one green scooter standing out in the lineup. The street is lined with trees that have lost most of their leaves, suggesting it might be winter or early spring. In the background, you can see a road with vehicles, some buildings, and a street sign indicating directional questions or possibly an advertisement. The general atmosphere gives the impression of a typical urban area.
Revised Prompt picture 2: The image is a creative urban street scene where two individuals have been creatively reimagined as a realistic motorbike with all the mechanical and engineering details. The 'motorbike' individuals are kneeling on the city sidewalk, one person dressed in a green hoodie and the other in a light-colored top with olive-green pants. Their positioning hints that their bodies form the various parts of the bike. Behind them, a lineup of parked scooters and motorcycles add to the biker ambiance, their shades of black, grey and green blend with the urban environment. Visible are trees in winter or early spring, an image of a road filled with vehicles, a few surrounding buildings, and a street sign that seems to carry an advert or directional guidelines.
TRANSFORM ANDRE SLIDING into a waterpark frame
Settings: create a little funny boy in a water slide having fun and beeing very joyful. The context is an futuristic acquatic part, with a lot of joyful people families
Prompt: My prompt has full detail so no need to add more. DO NOT add any detail, just use it AS-IS. Use the following information as the base details to generate the image: create a little funny boy in a water slide having fun and beeing very joyful. The context is an futuristic acquatic part, with a lot of joyful people families Scene description: The image depicts a playground slide set in a park-like area. It features a metal slide that curves gently downward. A person is lying belly-first on a red and orange buoy-like mat, facing downward, ready to slide down. The surrounding environment includes several trees with bare branches, indicating it may be late in the year or early spring. The ground is covered with a mix of dirt and scattered leaves, suggesting the presence of the trees nearby. In the background, there are some partially visible modern buildings, hinting at an urban setting. The slide itself is positioned at a moderate incline, and its surface shows some scattered leaves, suggesting it hasn’t been cleaned recently. Overall, the scene captures a moment of playful fun in a tranquil outdoor space.
Revised Prompt picture 1: In a playful and vibrant scene, a young boy radiates joy as he prepares to slide down a water slide in an ultramodern aquatic park filled with equally merry people and their families. The image marries the simplicity of childhood pleasure with the grandeur of a futuristic setting. The slide gently curves downward, preparing the boy for his exciting descent. An inflatable red and orange buoy-like mat cushions his body as he lays belly-first, eyes filled with anticipation and full of smiles. The park's layout is overflowing with lush greenery and next-gen buildings dot the horizon, lending an urban feel to the place. Leaf-covered ground and bare-branched trees hint at the changing seasons, adding an interesting contrast to the water park's vigorous ambiance. A scene that is not just about fun but also about the beauty of nature intertwined with advanced city life.
Modmatrix website
For the hands-on session, we experimented with Modmatrix, a tool for AI-driven project prototyping inspired by synthesizer modulation matrices. The experiment was about creating an image with AI from a given prompt.
This image looks like "La Divina Commedia" by Dante Alighieri, but it was actually generated from a huge prompt about a boxing fight between the Pope and Elon Musk.
During the Explore AI Systems sessions we had a task: build an AI system with the Arduino library. I didn't have a specific field to work in, so I joined a group with a very interesting concept. Basically, we wanted to load StreamDiffusion into TouchDesigner and analyze how AI handles beauty standards. The system works like this:
The camera captures a real-time video feed
The AI system generates a new image from each frame, based on a prompt
So, for example, if I record my face in real time, another face appears, based on my face and on our prompt. The prompts were "European standard beauty", "Chinese standard beauty", or "American standard beauty".
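The capture-and-transform loop described above can be sketched in Python. This is only a minimal sketch of the idea, not the real TouchDesigner/StreamDiffusion setup: the `Frame` type and the `stylize` function are hypothetical stand-ins for the camera frames and the img2img model call.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """Stand-in for one camera frame (a real frame would be a pixel array)."""
    pixels: list
    prompt: str = ""

def stylize(frame: Frame, prompt: str) -> Frame:
    """Stub for the img2img model call.

    In the actual project this is where StreamDiffusion would generate
    a new face conditioned on the incoming frame and the text prompt.
    """
    return Frame(pixels=frame.pixels, prompt=prompt)

def run_loop(camera_feed, prompt):
    """Transform every incoming frame with the currently active prompt."""
    return [stylize(frame, prompt) for frame in camera_feed]

# Swapping the prompt applies a different "beauty standard" to the same feed.
feed = [Frame(pixels=[0, 1, 2]) for _ in range(3)]
for p in ("European standard beauty", "Chinese standard beauty"):
    out = run_loop(feed, p)
    print(p, "->", len(out), "frames")
```

The point of the sketch is the structure: the prompt is a parameter of the loop, so the same live video produces different outputs depending only on which standard-of-beauty prompt is active.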
At the beginning we just used Midjourney or ChatGPT to analyze biases and find details to write into our prompt. For example, if you just write "European standard beauty", a woman appears (but why not a man?). That showed us the limits of this technology: to get a specific output you need a specific input.
Honestly, we didn't reach the goal we were supposed to, mainly because we followed a YouTube tutorial that was too old and didn't fit the new StreamDiffusion version. At least we understood how the .toe file works, how to load StreamDiffusion into TouchDesigner, and how to create a virtual environment to run it. I didn't make a real personal contribution to this project, and I feel bad about that, but I understand that this field is not my specialty.
Here you can find the GitHub repository with all the files and a step-by-step explanation of the process.
StreamDiffusion may involve methods like continuous-time diffusion, online diffusion processing, or adaptive sampling to generate results faster for applications such as video synthesis, real-time AI art, or interactive AI assistants. Basically, you can use this system for image2image or prompt2image, and you can run StreamDiffusion in different software such as Adobe Illustrator, Photoshop, or TouchDesigner.
For me AI is an amazing speculative tool: you can make various projects that reveal details about our society. For example, the gallery of cybernetic interpretations is a project about social issues and political problems that should be made clearer to the masses. I think AI is really functional for these things.
At the same time, I think that as a tool we have to know how to use it, and use it in a conscious way, instead of using it like Donald Trump and making stupid posts for stupid people.
https://www.instagram.com/reel/DGhfpgHsOg6/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==

I don't think this design approach will be relevant for my research, but we don't know the future, so let's see. With this workshop I understood what StreamDiffusion is, and probably in the future, with a powerful laptop, I will be able to achieve the goal. It was an intense week, because I had both this workshop and the Elisava workshop. I didn't reach a real output, but I definitely learned something.