The Singularity Won't Be What Some People Expect
Often when I'm reading something with a singularitarian bent, I encounter the idea that having computers more intelligent than humans will mean that simulations of humans become widely prevalent. That isn't really the main thrust of the whole singularity idea. The main point is just that as computers get better and better at answering questions, eventually they'll get good enough to give us answers we didn't explicitly ask for. Well, that's my interpretation, anyway. But, people being people, we tend to see things from our own viewpoint and try to imagine what it would be like to upload our minds into a machine. Humans have a certain tendency toward a dualistic worldview that's not very well rooted in reality. It seems like our minds are somehow separate from our bodies, but if you change someone's brain, their mind and personality will likely change with it. That isn't really a case against whole brain emulations. What is, is the fact that any such emulation will leave behind a perfectly ordinary flesh-and-blood copy. There would certainly be reasons to make such copies, but two versions of a person aren't at all the same thing as magically transforming yourself into a digital entity à la Tron.
In order for emulations of humans to become common, they'd need to serve some real purpose. I'd imagine we'd learn how to read a brain well before we'd learn how to write back to it, although I could certainly be wrong about that; the current state of neuroscience isn't ready to give even tentative answers to that question. Unless we can get the digital memories back into the meat brain, there won't be any real purpose to backing yourself up to a computer. So the initial emulations would need to do some real work, even if that's just letting you have a conversation with yourself. But it should be easier for a computer to predict what answers you'd give to questions than to accurately simulate your entire brain, even if we ignore for the moment the difficulty of simulating the hormonal effects on personality on top of the neural connections. The strides statistical machine learning is currently making in predicting human behaviour leave me with little doubt that we'll have a definitive answer to whether or not we want machines to talk to us in our own voice long before we can run a copy of our brains in one.
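To make the "predict the answers, skip the brain" point concrete, here's a toy sketch (entirely hypothetical, not any real system): a model that just counts a person's past answers and replays the most frequent one. It's obviously far cruder than real statistical machine learning, but it illustrates why prediction from logged behaviour can be so much cheaper than emulation.

```python
from collections import Counter

def train(history):
    """Count how often each answer followed each question in a person's logged past."""
    model = {}
    for question, answer in history:
        model.setdefault(question, Counter())[answer] += 1
    return model

def predict(model, question, fallback="no idea"):
    """Predict the person's most frequent past answer; no brain simulation required."""
    answers = model.get(question)
    if not answers:
        return fallback
    return answers.most_common(1)[0][0]

# Hypothetical logged Q&A history for one person.
history = [
    ("coffee or tea?", "coffee"),
    ("coffee or tea?", "coffee"),
    ("coffee or tea?", "tea"),
    ("favourite colour?", "green"),
]
model = train(history)
print(predict(model, "coffee or tea?"))  # -> "coffee"
```

The lookup is a few dictionary operations, versus the enormous cost of stepping a neural simulation forward, which is the asymmetry the paragraph above is pointing at.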
Perhaps a short digression is in order. For a long time I thought of intelligence as being what people do when they think. But that's not really the case. The human thought process can be broken up into many different parts. At a basic level, one of the fundamental aspects of thought is prescience - predicting the future. Whether someone is catching a baseball or imagining what would happen if they took that job in Ottawa, they are using their current mental state and sensory input to make a prediction about what will happen in the future. With generally mixed results. It's pretty clear that, at least in that respect, there are better ways to think than simply copying what humans do. But that's not the only part of human thought that is important to people. Another part is sentience - being aware of one's self. Specifically, being aware of one's self in the sense that you can imagine making different, incompatible decisions and generate in your head simulations of those experiences that are basically memories of events that haven't happened. I suspect that last part, where imagined events and remembered events are experienced very similarly, is one of the aspects of human thought that machines won't have much need for.
When I first read about the hierarchical temporal memory model, I started thinking along these lines. In that model, the brain works by predicting what inputs will occur next based on the inputs being received in the context of inputs that were received before. i.e., senses plus memory create a prediction. If the human brain does, in fact, work that way, then it's likely not a coincidence that we predict what other people will do by imagining what we would do if we were them. We don't have access to their memories, but we can use our own to generate fake input, plug that into the black-box prediction engine that is our brain, and then see what comes out. Which is good enough for a person, but machines will likely have access to more than enough data to predict what someone else will do based on what they've done in the past. And, if not, will have far better data for what people tend to do in a statistical sense. There's no point in a computer imagining what it would do if it were a person. It can just predict what a person would do based on what people have done before. Not unlike how people don't imagine how they would fall if they were a baseball.
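The "senses plus memory create a prediction" loop can be sketched with a toy n-gram predictor. This is not hierarchical temporal memory itself, just a minimal stand-in for the idea: memory records which input tended to follow each short context, and prediction is a lookup of the current context in that memory.

```python
from collections import Counter, defaultdict

def learn(stream, context_len=2):
    """Memory: count which input followed each short window of prior inputs."""
    memory = defaultdict(Counter)
    for i in range(context_len, len(stream)):
        context = tuple(stream[i - context_len:i])
        memory[context][stream[i]] += 1
    return memory

def predict_next(memory, recent, context_len=2):
    """Prediction: current 'sensory' context + memory -> most likely next input."""
    context = tuple(recent[-context_len:])
    if context not in memory:
        return None
    return memory[context].most_common(1)[0][0]

# A made-up "sensory stream": a repeating daily routine.
stream = ["wake", "coffee", "work", "lunch", "work", "home",
          "wake", "coffee", "work", "lunch", "work", "home"]
memory = learn(stream)
print(predict_next(memory, ["wake", "coffee"]))  # -> "work"
```

The same machinery also shows the essay's contrast: a computer predicting a person needs only the person's past stream as data; it never has to imagine being them.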
To conclude this aside, that's why I don't think computers will ever have personalities and sentience of their own. They just won't need them. Not until we encounter aliens with their own, incompatible, internet at least. Machines will, of course, simulate personalities. But since people are so willing to anthropomorphize everything they come across, they won't really need to simulate them very well.
Getting back to my initial point. I don't see there being a particularly large need for simulations of people. Anything that can be accurately predicted with less than a full simulation probably should be. Large, slow simulations of human brains will be no match for fast, lean statistical models. Statistical models that should more than suffice for the purposes of simulating a conversation or predicting a person's preferred decision. Leaving a legacy of your thoughts and memories through a saved emulation will also likely be superfluous. We're rapidly approaching the point where everything you see, hear, say or do will be recorded in real time. A full copy of your memories likely wouldn't add much, except for the things you actively tried to keep secret. Which some people may want divulged, but likely not that many. I foresee a future where when you think you're interacting with a machine intelligence, you're basically just talking to yourself, and where the machines most adept at producing intelligent answers have absolutely nothing like what a human would call self-awareness or personality. Simultaneously smarter than all human beings who have ever lived combined, and as self-aware as the average amoeba.