Reflection 3

The AI product image that I ended up with after LOTS of trial and error.

This week my partner and I presented our Praxis Inquiry 1: FlexiBot! The process of creating the presentation and consolidating all of our research really snuck up on me, and I wish I had more time at the end to focus a bit more on branding and design. Because our toy was so heavily research-backed, I wanted to make sure our presentation was solid in explaining all of the choices that went into the creation and development of the toy. Admittedly, between that and spending hours messing around with AI, I didn't get to flex my design muscles as much as I wanted. I now know to start the research, and the consolidation of that information, much earlier in the process so that I can dedicate more time to design at the end. I did learn a lot about utilizing AI; NotebookLM was my favorite tool for this project, and I am excited to try it out more on the next projects. Julius wasn't necessarily a huge asset for the kind of research we did, but I can see how it would be extremely useful with more quantitative data. I went to war with Krea in particular, really struggling to create a product image based on our illustrations and prototype. I think going into the project with a less rigid vision of what I wanted the toy to look like would have been useful. I noticed that AI really struggled to create faces or characters on products; aspects of the faces were often way out of place or skewed disproportionately. I thought this project was extremely beneficial for learning how to use AI in research, but I definitely want to learn more about how I can use it in graphic design during the next project.

Our readings this week focused a lot on designing ethical AI and incorporating transparency into our designs. Having talked about our guardrails in class, I found these readings very timely and relevant to our learning process with AI. One interesting perspective was that design is an intentional act that shapes the future by fostering conditions for meaningful conversations and thoughtful change. Instead of focusing solely on transformation, we must also consider what values and elements are worth preserving; human values could essentially be erased unless humans come together and decide which values they want to keep, rather than just pushing forward with innovation. This leads into my next point of interest: the conversation surrounding interdependence. I thought Anab Jain's point of view was quite insightful: humans have never lived in isolation, even though we like to act as though we do. We are deeply entangled with nonhuman participants as well (animals, plants, computers, etc.), to the point that they shape our emotionality, economics, and morals. Because we are all so connected, we have to stop using AI and design for quick fixes, because doing so will lead us to an irreversible future, as seen in her exhibits depicting a possible future lifestyle.

One of the images I had to prompt to get any sort of inclusion in my presentation.

A third topic of discussion I want to touch on is the current lack of inclusion in AI. I noticed this personally while working on FlexiBot: if I wanted any sort of inclusion in my imagery, I had to prompt for it explicitly, saying "a Black child" or "a Latino therapist." As a white woman, this felt inherently wrong, like I was dancing on a weird line of forced diversity. On the other hand, I am not designing the toy only for white children, which was the only imagery I was receiving throughout all of my prompting. The readings backed up my skepticism, stating that AI systems must be built to reflect complex social dynamics rather than reinforcing biased narratives. I was trying to figure out the root cause of this, why it kept happening. I had never considered that the datasets themselves could be encoded with bias; I thought it was simply the designers of AI being white and not thinking about inclusion. It made me realize that designers must educate themselves on the potential for bias in datasets, questioning who is left out of data collection. This goes as far as the development of the AI itself: a diverse group of stakeholders must be involved in AI development to ensure fairness and mitigate harmful consequences.

Lastly, a couple of things in the weekly AI newsletters felt very relevant to my life right now. Having just discussed the Coca-Cola ad and its weak use of AI in class, I read that the US Copyright Office finds that art produced with the help of AI should be eligible for copyright protection under existing law in most cases, but wholly AI-generated works probably are not. The thought that AI-assisted work could now be copyrighted feels weird to me, because how do you make AI respect that? If it's just trained on data, that won't quite stop much… right? However, I do hope it protects artists who are being ripped off by AI producing art based on their style. Another piece of news is the release of OpenAI's deep research. As a college senior currently completing her thesis, this blows my mind. The claim that "you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst" is insane to me, because we spend entire semesters doing what this model can do in 5–30 minutes. Other professionals, like my thesis director, spend years writing research reports. I think this could significantly change the education system as well as the STEM field, because it could speed up the timeline on a lot of projects and research.
