Note: Most of what I have written here has also been written by HowStuffWorks founder Marshall Brain. I am writing it anyway because, basically, I think he’s got it right. It bears repeating and expanding.
In describing the virtual space system and all of its benefits, I never actually detailed how it is going to be accomplished. It’s not as if the human brain has a ready-made port to plug a computer into. Actually connecting a computer and a human brain together is the main hurdle to overcome in the creation of virtual space.
The first thing we need to understand is how the brain gathers information. We have sensory perceptions. Our brains sit at the top of the central nervous system, which carries sensory feeds from our various sense organs (eyes, ears, nose, etc.) to the brain. The brain, in turn, sends signals back down through these nerve bundles to control nearly all the functions of the body, both conscious and unconscious.
What a virtual space system will do is sever these nerve bundles and connect them directly to a computer system. The system will then deliver all necessary sensory information. What the person “sees” will be what the virtual space system delivers through their optic nerves. What they “hear” will be what the virtual space system delivers through their auditory nerves. The sensation will be indistinguishable from any other sensation.
The person will be able to communicate with this system just like they would with another person. They will talk to it. The signals used to move the mouth and vocal cords will be intercepted before they reach their destination and the computer will understand what is being asked of it. From the outside, this communication between the person and the computer will be completely invisible (no talking to thin air like a madman or Bluetooth user).
Going the other way, the computer system will be able to take control of the body. It will feed it, exercise it, wash it, and do all the necessary upkeep that we do today. In this way, the body will remain healthy (likely much healthier than the bodies of today) and will thus live much longer, keeping the brain alive.
Getting out of virtual space then becomes trivial. The computer system simply routes all incoming sensory perceptions from the body straight through to the brain. It becomes as if the system isn’t even there.
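As a purely illustrative sketch (every name here is hypothetical; no such system exists), this routing can be pictured as a simple dispatcher that either passes the body’s own sensory feed through untouched or substitutes a generated one:

```python
# Hypothetical sketch of the sensory router described above.
# "physical" mode relays the body's own senses unchanged, as if the
# system weren't there; "virtual" mode substitutes a simulated feed.

def route_senses(mode, body_feed, virtual_feed):
    """Return the sensory frame the brain should receive."""
    if mode == "physical":
        # Exiting virtual space: pass the real feed straight through.
        return body_feed
    if mode == "virtual":
        # Inside virtual space: deliver the simulated world instead.
        return virtual_feed
    raise ValueError(f"unknown mode: {mode}")

real = {"sight": "office", "sound": "traffic"}
sim = {"sight": "beach", "sound": "waves"}

print(route_senses("physical", real, sim))  # the body's own senses
print(route_senses("virtual", real, sim))   # the simulated world
```

The point of the sketch is only that “getting out” is a mode switch, not a physical disconnection.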
Completely replacing the body’s natural sensory inputs isn’t the only option. It is also possible to simply add to them, like a Heads-Up Display. Let’s say you’re in an unfamiliar building and you need to find a specific room. You ask your computer system where the room is. The computer system looks up the information and relays it to you through your senses. It might be through audio, telling you to turn left/right, saying which floor to go to in the elevator, etc. It might be visual, with arrows appearing on the floors and walls pointing the way, or a beam of light which you can follow like a road. The choice is up to the user.
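To make the “choice is up to the user” idea concrete, here is another hypothetical sketch: the same route through the building rendered either as spoken directions or as visual overlays, depending on a preference setting. The function and cue names are invented for illustration:

```python
# Hypothetical sketch of the HUD-style navigation described above:
# the same route is rendered as audio cues or visual arrow overlays,
# depending on user preference.

def render_directions(route, preference):
    """Turn a list of navigation steps into sensory cues."""
    if preference == "audio":
        # Speak each step through the auditory feed.
        return [f"say: {step}" for step in route]
    # Otherwise overlay arrows on the visual feed.
    return [f"overlay arrow: {step}" for step in route]

route = ["turn left", "turn right", "elevator to floor 3"]
print(render_directions(route, "audio"))
print(render_directions(route, "visual"))
```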
Imagine placing a telephone call this way. You call someone, and talk to them through this system. Anything you say is again intercepted before being said out loud and transmitted to the other person through their auditory nerves. Anything they say is delivered directly through your auditory nerves. From the outside, it seems as though you are communicating telepathically, even though you’re not. Listening to music works the same way. An additional benefit is that you won’t disturb anyone else around you. You may be listening to ultra-loud gangster rap, while everyone else hears nothing.
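The call above can be sketched the same way (again, purely hypothetical names): would-be speech is intercepted before any sound is made and appended directly to the other party’s auditory feed:

```python
# Hypothetical sketch of the "silent" call described above: speech-motor
# signals are intercepted before they reach the mouth and delivered
# straight into the listener's auditory feed. No sound is ever vocalized.

def silent_call(outgoing_speech, listener_auditory_feed):
    """Deliver intercepted phrases as sound only the listener perceives."""
    for phrase in outgoing_speech:
        # Nothing audible happens in the room; the phrase goes
        # directly onto the listener's auditory feed.
        listener_auditory_feed.append(phrase)
    return listener_auditory_feed

feed = []
silent_call(["hello", "can you hear me?"], feed)
print(feed)
```

From a bystander’s point of view, neither side of this exchange produces any observable behavior, which is the “telepathy” effect described above.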
The next step, obviously, would be to also see the person through a fully-3D image created by the computer system. It will seem as though the person were actually standing right in front of you, and you could hear them speak as if the sound were actually coming from their vicinity. If you’ve ever seen the show Quantum Leap, this is essentially identical to what Al is.
The only difference is that you can extend the interaction between the projected person and the environment even further. The computer system watches the environment and the person’s actions very carefully. It could likely also supply other sensory perceptions, such as touch, if the person wanted to feel something. They could slam their hand down on a table and both feel it and make noise, at least from the user’s perspective. The difference between the projection and actually having someone in the room begins to blur considerably.
Of course, the person really isn’t there. They can’t actually interact with the objects, or do anything permanent. They might be able to reach down and pick something up, but at that point, the “real” object would simply be edited out of the sensory feed and the object in the person’s hand would be a simulation. By deactivating this, the object would seem to “jump” back to its original location.
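The edit-and-substitute trick can be sketched as a filter over the visual feed (hypothetical names again): the real object is hidden from the frame and, while the simulation is active, a stand-in appears in its place:

```python
# Hypothetical sketch of editing a real object out of the visual feed
# and substituting a simulated stand-in the projected person can "hold".

def edit_feed(visual_feed, obj, simulate=True):
    """Hide a real object from the feed; optionally show a simulated copy."""
    frame = [o for o in visual_feed if o != obj]
    if simulate:
        # While the simulation runs, a virtual copy appears in its place.
        frame.append(f"simulated {obj}")
    return frame

scene = ["table", "cup", "lamp"]
print(edit_feed(scene, "cup"))                  # cup replaced by a simulation
print(edit_feed(scene, "cup", simulate=False))  # simulation off: cup hidden
```

Deactivating the simulation and restoring the unedited feed is what makes the real object appear to “jump” back to where it always was.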
Another possibility is removing the brain and placing it in a hermetically-sealed jar with life-support. That way, the brain stays much safer, since most causes of death come from some part of your body failing (e.g. your heart), subsequently killing you. By doing this, you would likely live decades longer, for however long the brain itself can hold on. The only downside is that moving from the virtual world to the physical world would be much harder. It would take a little while to get a robotic body ready for you to use.
Likely an uploaded brain will work the same way. It is a piece of computing matter with sensory feeds in and out. That will ultimately be what civilization becomes: uploaded brains in robotic bodies scattered all around the world, all connected wirelessly through virtual space.