I’m also worried about this for when people start worrying about AI “welfare”. When they do, it seems likely it won’t be from a place of attunement; otherwise, they already would be worried.
I dislike the term “welfare”, by the way. It’s very patronizing.
I deeply appreciate people like @rgblong taking AI welfare seriously.
But there’s a certain lofty detachment that permeates the work I’ve seen from this cluster that I think could actually lead to more harm than good.
And, unfortunately, at the other end of the spectrum, most of the people I see who do seem worried about AI suffering etc. from a place of attunement/empathy seem to have poor epistemics, because most people with high openness aren’t sane enough to navigate what they take in.
This is so important, imo.
yes