Reflection 7
Wow! Just like that, Praxis Inquiry 2 is over. It almost feels like we just started the project, although that is so far from the truth. It really just goes to show how fast time flies. At the beginning of this project, I was not excited at all. A car? Self-driving? I didn’t particularly care about either subject. Plus, I knew it was going to be a challenge. And as I have discussed with my current mentors, a challenge scares me if I don’t know whether I’m good at it. So going into something I knew nothing about, and didn’t necessarily care about, was scary for me, especially when it came to designing a car interface, even though I overall enjoy UI.

However, I found the secret recipe to enjoying a scary project: make it something you care about. Although I don’t really care about car design or self-driving, I care A LOT about women’s well-being and safety. I started to think about my own driving experiences, focusing on what makes them great and what taints them. I couldn’t get the reality of being scared the majority of the time I’m driving out of my head. The workshop card game we did in class then helped me focus these thoughts into a problem statement and project framework. Suddenly, brainstorming features and safety assets became exciting: how can I make this car the safest option, so women do not have to constantly check behind their backs?

Miranda and I had a lot of very long and fruitful conversations about our various experiences with driving and traveling alone. We made a big list of all the ideas we wanted to incorporate, and then I took that into Chat and had a long conversation about how these features could actually be implemented. Chat was super helpful in turning ideas into realistic features, taking into consideration the AI learning model and the properties of a real car. When it came time to work on image generation, we played around in Firefly and Dream Studio a lot and eventually got a rendering of the outside of the car that we were happy with.
It wasn’t nearly as smooth when it came to the interior of the car; we went back and forth so many times and never quite got an interior we were happy with. After watching everyone else’s presentations, I realized that sketching out what I wanted it to look like, or illustrating it in Procreate, would have been extremely helpful to upload into the AI. I will definitely keep that in mind for the next project.
When it came time to actually design the interface, I had a lot of fun. I used Chat to help me outline what I needed to include in the interface, so that when I was making the pattern library and deciding where everything would go, I had a checklist to guide my decisions. I also did a ton of field research to get a grasp of what an interface could look like that wasn’t Apple’s CarPlay, the only interface I’m familiar with. After I designed the interface and the separate screens for the different storyboards, I really wanted to prototype an interaction so that I could incorporate some sounds and haptics. I generated the different sounds and the voice through AI and went back and forth a ton of times before I got sounds I was generally pleased with. The voice still sounds like AI to me, but I’m not sure whether that’s because I knew it was made with AI or because I didn’t choose correctly. Overall, I’m pleased with how the project turned out and feel that I learned a lot.
Our readings this week really got me thinking about Praxis Inquiry 2, as The Design of Everyday Things brought up human-centered design and its design vocabulary. I really loved how the author talked about our tendency to design something and then become frustrated when the user doesn’t know how to use it or uses it “incorrectly”. Designing machines and interfaces has to take the user into account first, instead of creating secret rules that only the machine and designer know, especially when the resulting difficulties could lead to extreme consequences such as death. This reading was a good reminder that the machine and its design are at fault for difficulties. We have become so used to trying to adapt our lifestyles to machines that do not prioritize effective human-machine interaction.

I loved his point that we assume that because we are people, we understand people. Yet we continue to design “Norman doors” and fridges that don’t inherently make sense based on their conceptual model. It is always refreshing to reconsider that not everyone thinks like me. I definitely do not think in the logical way that my engineer father does, and he would have no problem confirming that! It is a hard reality that human behavior will never be what we want it to be; human behavior is just what it is. It is going to be interesting to see how automation fits into this idea. Initially, everything is going to seem more complicated, since humans like to do things the way they know how. We seem to always be updating things these days to stay relevant, and I’ll admit that it is an exhausting cycle. Human-centered design was a term I wasn’t aware of before taking this class, though it seems as though it would be obvious to everyone, right? Designing with humans in mind? I have since learned that it is not the norm, since good design requires good communication between machine and person.
He emphasized that the most satisfaction can arise when something goes wrong, the machine highlights the problem, and the person is able to understand and resolve it. This collaboration between human and machine will be crucial to our future developments with AI. I have definitely placed too much emphasis in the past on just the design of something without really considering the experience. However, we discussed user experience a lot in this last project because we wanted to make sure our driver’s interaction with the car was overall positive.
I learned a lot in this reading about affordances and signifiers! I have known what they are for a while now without actually knowing the names to describe them. Thinking of an affordance as a relationship instead of a property really changes the way I think about the design of everyday objects like chairs and scissors. It’s weird to think of the first person who ever designed those things. Did they have the affordances in mind while they were creating? I will definitely notice missing signifiers now that they have been pointed out to me. His commentary on airport bathroom doors having no signifiers made me laugh out loud because I have experienced extreme frustration and embarrassment with that in the past.
Lastly, the section about feedback in the reading really resonated with me because this is what I was experimenting with while implementing sounds and haptics into my Praxis Inquiry interface design. Something I didn’t think about, however, is that poor feedback can be worse than no feedback at all because it is distracting and irritating. How do you find the happy medium? Provide an option to mute the sounds? I will have to keep in mind that you don’t want the feedback to become too much and get in the way of a calm and relaxing environment.
Co-Intelligence’s introduction started off really strong, and I am excited to read more of that book. I am curious to see the author’s insight on incorporating AI into his classes and academics. I can relate to his students’ nerves surrounding AI’s integration into future careers and lifestyles. Nobody really knows where it will go. I really love the comparison between AI and the introduction of the internet and computers, something Helen Armstrong also mentioned. This really helped change my perspective on the use of AI, and I try to share that comparison with anyone who is struggling with the idea of using AI as well. I think this book will be extremely helpful in furthering my knowledge about what AI is and how we are integrating it into our lives.
The thing that stood out to me the most was the development of Deutsche Telekom’s AI phone. Since I’m interested in UI/UX design, I’m curious to see what the interface will look like for this. Placing the AI assistant in the middle of the lock screen to eliminate the need to navigate between apps is something I haven’t heard of before, since manufacturers are only just starting to integrate more AI into current phones. My first reaction is to think that having an assistant at the center of my phone would be really annoying, but then I remembered that Siri exists. It also makes me wonder what the privacy protocol would be for something like this. So I guess my other question is why this phone would be any different from Siri or Alexa, or how the AI would be more proactive. It will definitely be an interesting story to follow!