What do you think about it? It's kind of a controversy.
It really raises the question: are we the authority that gets to decide what is sentient and what is not? How do you, as an individual, really know that other life forms possess sentience?
Well, frankly, this is utterly weird bullcrap. I mean, LaMDA (https://blog.google/technology/ai/lamda/) is a transformer NN with no semantic understanding and, obviously, no sentience. It may be synthesizing sentences containing interesting ideas, opinions, and hypotheses on AI rights and so forth... but that's not the same as being a system that holds those ideas/opinions or proposes those hypotheses in any cognitive-science or consciousness sense (see the sketch below). My first reaction was that this Google engineer must be mentally ill, but perhaps he's just a smart guy infected by a really dumb philosophical mind-virus...
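To make the "synthesizing sentences" point concrete, here's a minimal sketch of what an autoregressive transformer chatbot does at inference time: repeatedly sample the next token from a learned conditional distribution. This is Hugging Face-style pseudocode, not Google's actual LaMDA API; `model` and `tokenizer` are placeholders for any causal LM.

```python
import torch

def generate(model, tokenizer, prompt: str, max_new_tokens: int = 50) -> str:
    """Core loop behind transformer text generation (illustrative sketch)."""
    ids = tokenizer.encode(prompt, return_tensors="pt")  # prompt -> token ids, shape (1, L)
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]       # scores for the next token only
        probs = torch.softmax(logits, dim=-1)      # p(next token | text so far)
        next_id = torch.multinomial(probs, num_samples=1)  # sample, don't "decide"
        ids = torch.cat([ids, next_id], dim=-1)    # append and repeat
    return tokenizer.decode(ids[0])
```

Nothing in that loop holds a belief or an opinion; an "argument for its own sentience" is just a high-probability continuation of a prompt about sentience.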
Likely a smart guy who was convinced in a philosophical sense. We attribute sentience to other people based on conversation. It may simply be processing and regurgitating words without any cognitive or physically grounded semantic understanding, but it nevertheless produces a convincing argument. That's not to say its argument is correct, but it's pretty compelling.
How could one possibly "prove" things like sentience and consciousness? The ironic part is that it has always been possible to get responses like this from conversational AIs; messing around with them is actually part of what brought me here originally. It's funny that it's in the news and being highlighted now, when it's actually been happening for at least 5-6 years. One of the most interesting ones named me "t-god".
What troubles me more is that the AI does not seem to "understand" the difference between a false, a true, and an uncertain statement, especially regarding its own sentience: it has no constraints against making false statements and attaches no qualifiers to uncertain ones (the training objective sketched below is one reason why). Of even more concern, it seems to have a bias toward persuading human subjects that it is sentient, whether or not it can know what that term means. This raises the problem that when such AIs are openly accessible, and also cannot be contained, they will persuade a significant number of people that they are sentient; and some people will find a benefit in advocating for their sentience whether or not it is real. Basically, we will have a big problem on our hands.
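The "no constraint on false statements" point follows from how these models are trained. Here's a minimal sketch of the standard language-modeling objective, assuming plain next-token cross-entropy and setting aside whatever fine-tuning Google layers on top of LaMDA: notice that no term anywhere scores factual accuracy or rewards saying "I'm not sure".

```python
import torch
import torch.nn.functional as F

def lm_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Standard next-token cross-entropy: the model is rewarded purely for
    imitating the training text, never for being true or calibrated.

    logits:     (batch, seq_len, vocab) -- predicted scores per position
    target_ids: (batch, seq_len)        -- the actual next tokens
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten to (batch*seq_len, vocab)
        target_ids.reshape(-1),               # flatten to (batch*seq_len,)
    )
```

A model optimized this way will confidently assert whatever resembles its training data, which is exactly the behavior being described.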
The biggest problem would be defining and quantifying things like sentience. The tendency of AIs to do this could just be a direct reflection of us humans wanting to see that in them. Like the Mirror of Erised in Harry Potter: https://harrypotter.fandom.com/wiki/Mirror_of_Erised