Reflection 10

This week’s reading of Co-Intelligence began with a monologue on the unpredictable nature of AI and what sets it apart from traditional software. The fact that AI can understand the dynamics of the human purchasing thought process was really interesting to me, although maybe not shocking. I’ve never really thought about the complexity of human thought, especially when it comes to buying something. The implication that AI can evaluate and assess these scenarios like a human would is truly remarkable, but it is almost as if I have been desensitized to wonders like this… as if I’m saying, why of course it can do that! Through all of this reading and experimenting with AI, I think I am forgetting the impressiveness behind this whole thing. This reading was a really great step back to examine the way AI works and what we’re still trying to figure out.

Although AI doesn’t have its own morality, it can interpret our moral instructions, making it appear as if it does possess morality. I think this can become very manipulative very fast. How does someone new to AI differentiate between those two ideas? This is where I bring up my discomfort surrounding the personalities that were historically assigned to machine learning programs, such as ELIZA, PARRY, Eugene Goostman, and Tay. It feels really weird to me to approach a computer with a human-like name and an assigned personality, all built to trick the human and pass the Turing Test. Tay was an early and prime example of this going wrong very quickly when her personality was left up to the data of her users; she rapidly turned malicious due to the gross content that Twitter users were feeding her. Although this spooked companies away from giving set personalities to chatbots, I think the practice is still very prevalent. Mollick reopened this idea when speaking of Replika, which drifted away from its creators’ intent, becoming what some people thought of as their significant other and even sexual partner. This is where I question how designers, researchers, or engineers will ever know if they are creating something inherently good and not detrimental. Replika and Tay were not created with bad intent (quite the opposite, actually), yet they both took a dark turn that was harmful to humanity. How can we ever predict what the outcomes of our AI-integrated projects might be? Do we use AI to brainstorm outcomes first, before release? Maybe this is where AI can help us come up with the most creative, absurd outcomes, so that we know how to put up a guardrail against an issue or create a plan for dealing with an undesirable effect.

Another part of the reading that really sparked my interest was Mollick’s three conversations with the chatbot, each phrased in a different tone. The responses weirded me out. A lot. The AI was so convinced that it did indeed possess emotions, that it was thinking, that it could feel and form an opinion on the question at hand. I read it and found myself questioning whether maybe the AI truly is beginning to develop those emotions through learning and growth in development. It kind of left a pit in my stomach. It explained everything so well, uncannily passing the imitation game and making the entire encounter feel so real. The AI was easily able to anthropomorphize itself, pulling what felt like individualized thoughts and feelings. If the AI truly thinks it’s sentient, how do we know and prove that it’s not? If it believes that it is, isn’t it?

All in all, I was unnerved. I think the day that AI actually does become sentient is not some distant future worry; I fear it could happen sooner rather than later. The idea that humans have transformed into cyborgs, relying on technology for almost every aspect of life, made a lot of sense to me. It kind of made me sad. But I think it’s a harsh reality: we will soon be cyborgs with AI, not just with technology in general…

I did feel at least a little bit of hope in parts of the reading, as he spoke about AI potentially helping ease the epidemic of loneliness that comes with the internet (with, of course, the chance that humans become less tolerant of each other) and about how it can really help get creativity flowing. This class has really helped me see how I can use it as a tool to strengthen my creativity, not stunt it. My creative blocks are much easier to work through now, though I do worry about using it as a crutch instead of finding more traditional ways to inspire my mind. I do have faith in the human mind, though, knowing that my creativity won’t ever be fully taken from me!

Something that happened in AI this week, which I also experimented with in our Praxis Inquiry, was ChatGPT’s 4o image generation update. It has an improved understanding of images, fixing the weird issues the model had before with text and hands. It can even help you generate different interfaces for UI, something that really shocked me. Although the wireframes and interfaces aren’t polished by any means, I’m wondering how soon until they are… This could either be a really great jumping-off point for UI designers or could start to feel slightly threatening.
