If I'm presented as "an engineer working on Z" talking about UI, it would be reasonable to assume that I was a person who knew what they were talking about. I'm a software engineer, but if I went to talk to a reporter about UI issues in a product, ignoring the obvious correct dismissal, I have nothing to do with anything UI-related (I haven't worked on anything UI-level in what must be the vicinity of a decade, and the UI work I did do was a very specific subset).

If you're a reporter and someone from company X comes to you saying they have important issues involving issue Y in product Z, you'll probably assume that they have significant expertise in issue Y, even if their involvement is only in some unrelated subcomponent of the actual product. My interpretation is that they were presenting themselves to the media as someone who understood more about the topic they were discussing than they actually did.

Not to say that I think there is a gorilla- or cat-level AI out there. Who's to say, and by what standard, whether or not these feelings are "real"?

AI has been so hyped up without much to show for it that I won't be surprised to see more AI proponents embrace this line of thinking. The bot does seem to talk about its feelings. And there are some obvious "errors", like the bot talking about spending time with "friends and family".

But still, if the standard of sentience is to have feelings, then nonsensical or out-of-context responses aren't relevant.

The guy doesn't seem to be a stranger to controversy. According to what I could find, this transcript was in fact heavily edited.

When I hear "sentient AI", I hear "human-level intelligence". I certainly wouldn't think of "gorilla-level intelligence" or "cat-level intelligence". Both of which, especially the former, I'm willing to believe are "sentient". Yet I don't think ape personhood deserves to be more than an interesting thought exercise.
I feel like there's a semantic gambit here: most definitions of "sentience" seem to demand less than what the average person may imagine in the context of AI. Wouldn't an engineer who has worked seven years at Google on AI be at least a bit more skeptical of what constitutes sentience than the average person? And by that same logic, wouldn't their concluding that an AI truly is sentient carry more weight?