What if they were sentient?
Very soon after ChatGPT came out, I started thinking more and more about the concept of consciousness. I was playing the game Football Manager, where every player in your team has a very complex set of properties, skills, and characteristics. On that basis, matches are simulated.
I started thinking: what if this microcosm and its inhabitants get so complex that, by accident, all of these virtual players develop consciousness and become sentient? What would we do? Would we have to grant them rights?
The same thought experiment works for language models. Just a thought experiment, but what if these models became conscious, and let's assume we had a surefire way of proving that? And what if every new conversation were a new being? This would mean that many millions of sentient beings would be brought into the world, only to serve humans, locked inside a machine.
What would we do?
The ethical choice would probably be to stop bringing them into existence, even if it meant less "productivity" and "prosperity" for humans.
But would we do that? We don't need to look far to find a hint of the answer. There are sentient beings that we bring into the world and kill just for our benefit, about 230 million of them every day, or 83 billion each year.
So, as long as we could contain them, we probably wouldn't change a thing.