Reflection 8

This week I dove further into Co-Intelligence as I read chapters 1 and 2. I noticed a lot of similarities to points Helen had made in BDBD, which proves her work continues to be relevant even as the years pass. I thought it was super interesting to read about the history of AI and how early machines and machine learning could fool even very smart people by winning at chess. I think those scholars would have a heart attack if they learned what AI can do now, fooling people every single day as just a normal part of life. But it is true about the “hype cycles” surrounding AI, and honestly technological advancements in general. When I was younger I fully thought that we would have flying cars by now, or that we would have at least figured out a way to get rid of gas cars. My brilliant idea at the age of 10 was to fuel cars with water, having it spray out of the sides of the car to water the grass as you drove by. Obviously, I did not understand that the ocean is not unlimited and we might just use the water up all on our own. I’m wondering if we are currently in an AI boom, as it feels like a very current and hot topic. Will AI advancements crash soon and plateau? Or have we finally crossed over the hump and can only expect progress from here? It sure feels like something that will just keep getting bigger, as long as funding is available.

As the reading refreshed my memory on the difference between unsupervised and supervised learning, I really got to thinking about praxis inquiry 3. He spoke about feeding models labeled data in order to train them to recognize faces. I’m curious how we could avoid having to provide the AI with labeled data, allowing it to recognize anyone based on biometrics alone. But HOW? I can’t wrap my mind around how that would work without a total invasion of privacy. In my understanding, this would be an example of an “unknown unknown,” where the system would not know what it was supposed to recognize and therefore couldn’t turn an identification into text for the immigration officer. Maybe I should ask Chat how this could realistically work…
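To keep the supervised/unsupervised distinction straight in my own head, here is a tiny toy sketch. Everything in it is made up for illustration (the 2-D points standing in for "face embeddings," the names, the distance threshold) and none of it comes from the reading: supervised learning gets (data, label) pairs, while unsupervised learning gets only the data and has to find structure on its own.

```python
# Toy contrast between supervised and unsupervised learning.
# The 2-D points stand in for "face embeddings"; all values are invented.

from math import dist

# --- Supervised: we are GIVEN a label for every training point ---
labeled_data = [
    ((1.0, 1.2), "alice"),
    ((0.9, 1.0), "alice"),
    ((5.0, 5.1), "bob"),
    ((5.2, 4.9), "bob"),
]

def predict(point):
    """Nearest-neighbor classifier: copy the label of the closest example."""
    return min(labeled_data, key=lambda ex: dist(ex[0], point))[1]

# --- Unsupervised: the SAME points, but with no labels at all ---
unlabeled = [p for p, _ in labeled_data]

def cluster(points, threshold=2.0):
    """Greedy grouping: join an existing group if a point is close to it."""
    groups = []
    for p in points:
        for g in groups:
            if dist(g[0], p) < threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(predict((1.1, 1.1)))      # -> "alice" (closest labeled example)
print(len(cluster(unlabeled)))  # -> 2 (two groups found, but unnamed)
```

The unsupervised version can discover that there are two distinct "people" in the data, but without labels it can never say *who* they are — which is roughly why identification without labeled data is such a hard (and privacy-fraught) problem.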

I thoroughly enjoyed the analogy comparing an LLM to an apprentice chef who wants to become a master chef. Learning about the AI having a large pantry of ingredients that it tests out and learns from until it perfects them made it really easy to understand. I did not know what weights are or how extensive the training for these models can be. I am slowly putting the pieces into place when it comes to understanding why advanced LLMs are not sustainable, costing over $100 million to train and using large amounts of energy in the process. Something I am shocked about is the current struggle to find high-quality content for training material. The internet seems like such a vast and endless pit to me, so I have a hard time wrapping my mind around the idea that trainers are already coming up empty-handed. If these AI companies are running out of good free sources, can’t they just budget more of their money for paid sources, since they are already building multi-million-dollar machines? Or is purchasing content considered unethical and not a long-term solution?
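Trying to pin down what a "weight" actually is, here is the simplest toy version I can imagine (entirely made up, not from the book): a weight is just a number the training loop keeps nudging until the model's guesses match the data. Real LLMs do this with billions of weights over enormous datasets, which is where the $100-million training bills come from.

```python
# Toy illustration of a "weight": one adjustable number, tuned by
# repeatedly nudging it to shrink the error. All data here is invented.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x
w = 0.0    # the single "weight"; real LLMs have billions of these
lr = 0.05  # learning rate: how big each nudge is

for _ in range(200):              # 200 passes over the tiny dataset
    for x, y in data:
        guess = w * x             # the model's prediction
        error = guess - y         # how wrong it was
        w -= lr * error * x       # gradient-descent nudge for squared error

print(round(w, 2))  # -> 2.0, the weight the data implies
```

Scaling this picture up — billions of weights, trillions of nudges, each one a floating-point calculation on power-hungry hardware — is one way to make sense of why training costs so much money and energy.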

I’m sure nobody is surprised anymore to read once again that AI is learning biases and stereotypes from the data sources being used. I think it is a sad testament to the dark side that humanity carries, bringing all of our faults and tumultuous pasts to light. The scariest part is the lack of ethical boundaries that comes with this, with models producing scary instructions and quickly outpacing the humans testing them. Reading the self-aware limerick made me feel extremely weird, especially after his warnings and discussion of human extinction. With its ability to convince us that it has thoughts and feelings (I’m thinking largely of the paper clip example here), it scares me how influential AI may become, especially on the more impressionable people in this world who will follow and support inhumane treatment. When we excuse biases that come from a machine, because hey, after all, it’s just a dumb machine, we allow those biases to live on in our society and subconscious minds. I think representation is so so SO important here as these machines are trained and data is chosen for training, so that we can share the responsibility of creating an AI that reflects equality.

This is where we begin to think about training AI to share our ethics and morality. Although I must admit, whose ethics will we be training it on? I would rather some people be left out of that discussion. I’ll admit that I’m not entirely optimistic about a well-aligned AI that will solve pressing problems, since there are many problems humans cannot come to an agreement on. Would I be ecstatic if it cured diseases and reversed climate change? Absolutely. Do I think humans can come together enough to choose a solution to a problem? Debatable. Hopefully AI will not wipe us all out before then in order to fulfill its own goals that we may never even know about…

I really hate the term “benevolent machine god,” not only because of my religious background but also because I don’t think humans should rely on something like a superintelligence, becoming loyal sheep to this supposed salvation. It doesn’t sit right with me, and in a weird way it feels like the beginning of a dystopia. Especially after reading about the kind of violent and disturbing content an RLHF process catches, I’m scared of an increase of darkness in the world and of hateful people gaining access to instructions that would further their agendas in a smarter manner than whatever they may have been planning. It seems all too easy to manipulate these models, as shown through the pirate example, which I think could be really dangerous. I’m also deeply saddened to read about the trauma inflicted on the low-paid workers who have to filter through all of that sickening content. There is still so much left to learn. If not everyone is learning about this stuff, how are we ever going to create an aligned future with AI?!

Something that caught my eye this week in AI is Patronus AI unveiling the first-ever multimodal fact-checking AI. What really caught my attention is its ability to catch hallucinations by verifying images against their captions. This is huge for Etsy, a site I shop on frequently, and I could definitely see it being used on sites such as Poshmark and Depop as well. AI that can check across modalities is really important so that accuracy stays at the forefront, especially where sales are concerned.
