Reflection 9

This week we revisited The Design of Everyday Things as well as Co-Intelligence. I honestly resonated with a lot of the content in TDET, as I often caught myself drawing connections to my life and jotting down little notes in the margins. Norman first discussed the Gulf of Execution and the Gulf of Evaluation, two things that I have always been aware of in design but had never labeled. I began to think of all the times in my life that I blamed myself instead of the technology in front of me. I have literally talked myself into a spiral over not being able to figure something out, feeling hopeless and like an imposter in my own life. Just today I couldn’t figure out how to change a tire; I couldn’t even figure out how to detach the spare from the trunk. There was no sort of cue, no signifier, and I literally drowned in the Gulf of Execution. I absolutely hated that I had to call my guy friend to come do it for me, but I didn’t have time to fiddle with it until I figured it out, potentially damaging the car in the process. The conceptual model seemed to assume you had experience changing a tire before; luckily I was close to my friend and not stranded on the side of the highway or in an unfamiliar area. The design neglected any sort of instruction, so my interpretation was pure frustration, since I couldn’t even figure out what actions to take. I know I could have looked up a YouTube video, but it’s the principle of it. Why pay so much money for a car when you have to do external research on how to use it? Even a simple diagram of the tools, or a label telling me what the tool even was, could have helped in what felt like a lost-cause situation.

The next section talked about goal-driven behavior, something I know all too well. Every day I write out a list of goals for what I would like to accomplish and an ideal schedule for the day. You know how many times my day ends up looking like that schedule and I manage to complete all of my goals on the list? Literally never. I enjoy living a life of opportunistic actions, always allowing myself to be in the present: taking that impromptu walk because the weather is nice, buying a coffee out because I’ve had a hard week, calling my sister because I miss her, or going out with my friends since I won’t be living with them in five weeks. Yes, school and deadlines have always been important to me, but I am not ashamed to admit that my happiness, enjoying my youth, and maintaining relationships are more important. Design and work aren’t going away, but my last semester in college, living in an old house with my best friends, and my healthy young body, just to name a few things, are slipping through my fingers. Now that I’ve slightly depressed myself writing about this, I’ll admit that I struggle a lot with being goal-driven at times. Sure, I’ll write down things I would like to get done, but I rarely feel a strong sense of motivation to actually sit down and dedicate time to work that isn’t imminent. Self-triggering external drivers has not consistently worked for me in the past; these days I need something at stake in order to sit down and finish my work. I’m not sure if it’s senioritis, the fear of a challenge, or a weakness as a designer, but I hope to continue learning about the human mind through these readings so maybe I can get it together!

Norman delved into the human mind and its integration with design and technology. I am specifically interested in his question, “Do we count our technology as an extension of our memory systems? Of our thought processes? Of our mind?” Here is where I think AI will begin to work in unprecedented ways. Co-Intelligence covers this in the readings for this week: are we going to use AI as a crutch? Will it begin to weaken our minds to a point of reliance that harms humanity? Mollick suggests a human-in-the-loop approach, using AI as a tool to push our thinking further. Although I like that, and think that is how I and my peers in this course use AI, I have very little trust and faith in other people to do the same. For the sake of ease and the desire to succeed, I think many people will not care about becoming reliant on it, just as a large percentage of my generation does not care about data security, phone addiction, social media dark patterns, etc. How do we get everyone to care so that we don’t lose society?!

My last thought on TDET concerns the conversation around emotion and cognition. To be transparent, I am a highly emotional person who has always struggled to separate my feelings from anything I do. “Emotion is highly underrated,” Norman writes. I have to get that framed. I have been told my entire life to strip emotion from my professional and academic lives, two worlds containing creativity and design. If emotion is what provides value judgments and processes information, why are we always pushing to take it out of the equation? I feel this part of the reading connects to Co-Intelligence as well. It is almost as if AI is built to serve as subconscious thought: quick, automatic, good at generalizing and making predictions based on trends. But conscious thought, no matter how much the learned personalities try to convince us otherwise, is still lacking. And as Norman says, conscious thought drives emotions, decisions, informed judgment, and deep understanding, qualities that are quintessential to human life and happiness. Being the human in the loop and providing your own insight will keep humans in control, remembering that AI does not have conscious thought. Thinking that AI can feel like a human or experience emotions of its own puts us on a very slippery slope; humans can be manipulated into exploiting themselves or causing harm. At the end of the day, education about AI and how these models work is extremely important so that users can experiment in an informed way and discover how AI can stay a tool, not a friend or confidant.

A piece of information I read about in This Week in AI is a study finding that emotional engagement with ChatGPT is rare in real-world usage. However, personal conversations that did include more emotional expression were associated with higher levels of loneliness “but lower emotional dependence and problematic use at moderate usage levels.” On the other hand, “non-personal conversations tended to increase emotional dependence, especially with heavy usage.” I think this is extremely relevant to our readings, because we are seeing in real time the effects of combining emotion and AI. The research showed that viewing Chat as a friend carried negative effects into the user’s life, something I think is important to consider as we bring AI into our daily lives. Establishing a boundary with yourself on how you are going to use it is an important first step before you create a relationship with a Chat.
