Reflection 6
The articles we read this week started off by discussing algorithms' inability to take action based on context-specific human deduction. We allow AI and technology to become the controller while humans allow themselves to be the passive recipients of outcomes. We change our own lifestyles to fit the technology instead of editing or correcting the technology to serve us. Dr. Chowdhury discussed the biases present in these systems, emphasizing the need for human input as more than just an afterthought. By making sure humans are part of the training process and design, systems become more human-centric and provide better user experiences. I never really considered that power dynamics is a conversation that must be had while designing and developing systems, because as humans, we naturally assume that humans are always in control. Yet that has proven not always to be true. Research is showing that people already trust automated systems more than humans. Mistakes and bias are more easily forgiven when they are made by AI than when humans make them, so we even take that power away from humans, especially those in low-power jobs. In this article I also learned about technochauvinism, the belief that computers are always superior to humans. All of these ideas are extremely intriguing to me because we are so quick to both trust and lose trust, something Helen Armstrong touched on.
The second article this week brought up the question of privacy and the direction we are headed in with corporations owning our DNA. This really didn't sit right with me, especially when she described an eerie dystopian world that doesn't feel too far-fetched to actually happen. Our DNA being used to identify us feels really invasive to me, and like a quick way for someone evil to go on a power trip. Keeping tabs on us is creepy enough, something that's already happening, but using it to gain control is something that truly scares me. With the creation of technology and our reliance on it for everything these days, it shouldn't come as a surprise that privacy is completely dead. I know personally I am much too lax about joining random WiFi networks, signing on to public devices, making an account on any social media platform, etc. And I suppose it's never bothered me that much to think about not having a lot of privacy in those spaces because I willingly chose that. But now, with something like biometric scanning, I would not be choosing to give up my data, much less my DNA. It's weird to hear of corporations, like Kia and Amazon, already incorporating this into their products, though they don't seem to be doing it with any evil intent. However, I can't escape the thought of your data being collected so often that you become extremely predictable to the system, so predictable that you can be easily influenced by the device you have built trust with and therefore become vulnerable to losing control. It really freaks me out to hear that Saudi Arabia is investigating creating universal genetic databases through genetic recognition. That's just too far; that's invading your bodily rights. But we all know big corporations don't care about users and people when an opportunity to make money presents itself, so I wouldn't really trust any of them with my DNA.
Her optimistic vision of a future with a lot of transparency surrounding data and genetic usage doesn't seem likely to her, or to me, because that would require large-scale cooperation and a lack of greed. I'm not sure that is attainable. Her questions about privacy really got me thinking about whether we are entitled to the rights to our own bodies, which I obviously believe we are. It's weird to think about living in a time where we might have to trademark our own DNA!
The AI update that really stood out to me is the creation of the cyborg cockroach being used for search and rescue missions. This stood out to me because it really felt like something we would create during a Praxis Inquiry project. It shows that sometimes you have to think of out-of-the-box, weirder ideas in order to come up with one that actually works. It seems inherently simple, but good ideas don't always need a lot of money or super fancy AI. It's refreshing to keep seeing designs built off of real-world applications. I love that, since the cockroaches are so small, they have very low power consumption, making them cost-effective and ethical. However, I do worry that they could be misused for spying, breaching privacy like our prior discussion.