Wow, my Java is bad. For anyone who better understands the syntax, please feel free to rip on my now semi-forgotten coding ability.
Anyway, onto the blog.
I really, really hated this presentation. And by hate, I mean I really loved the depth of the presentation and the topic itself, but hated rediscovering the potentially horrifying subject of AIs and the moral implications that would come from creating one. The first day was somewhat positive in nature: we learned about our progression in the development of intelligence, before being left with a foreboding idea of the two days to come. The progression from a general intelligence to a superintelligence is one that many would hope to avoid entirely, and if it were ever approached, one that I believe should be managed and guarded with more safety precautions and gravity than nuclear weapons. In other words, kinda serious stuff compared to the last couple uber units.
But wait, there’s more!
Not only could developing a sentient AI lead to the creation of an entity so powerful that it could be considered a theoretical god, but there are also the moral questions of what rights an AI with a conscience deserves, and of how we are to interact with and control (for as long as we can) such a being. And as our final day revealed, the theoretical creation of a sentient being carries great ethical weight, forcing us to draw a line for when an AI can be considered human, or a being of equal intellectual and mental capacity; eventually, we have to question what makes us human in the first place. This is utterly fascinating, and also completely infuriating when it comes to thinking of a plausible boundary or definition.
As for the readings: first, I have to send out my sympathies to the tech group at the realization that half the class didn't even look at the readings. When I sheepishly raised my hand to say that I had read, what I meant was that I did the readings last Thursday, without knowing what we were supposed to read. I just read the conversations and about two thirds of the chapter instead; as a result, I had no idea which author talked about what points, and I was terrified that this disconnect would embarrass me in the discussions. But that's not the point of this paragraph, so I digress. I think this was one of my favorite chapters, if not my favorite chapter overall in all the readings. I found the Eiseley and Aldiss pieces (I know we didn't have to read other authors, but I couldn't resist) the greatest of them all for their focus on the emotional and internal mental aspects of humanity, and of living organisms in general (birds, in Eiseley's case). The rest of the readings were also solid to enjoyable in my experience, so all around a good chapter.
The synthesis prompt, however… What happened? I thought I had opened another group's folder on this one. I'm glad that they shifted the focus the way they did, but self-driving cars? It's certainly an important topic, but it arguably just seems more limited in every way than AI (and it's still not a bad topic). I'm not sure what else to say here.
The only connection I could possibly make would be one of the benefits of a globally interactive society, namely massive developments in technology and the sciences (see China, the US, parts of Europe, etc.). However, in terms of the focus on AI itself, I couldn't draw any realistic connections whatsoever. Maybe there could be something to be said for analyzing an AI programmed like a human and how it might interact politically with other AIs under the same programming? I feel like I'm really stretching here.