A thought about VR.
There are two types of images VR can display right now: video and realtime rendering. Video sucks and makes less and less sense. It is limited by compression and resolution: a 3840x2160 (4K) hemispheric video viewed on a headset with a 2560x1440 display only shows part of the video frame at a time, and that part ends up lower resolution than the headset's own panel. On top of that, the image needs a projection transform that distorts it further. Bad times for those with good eyes.
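As a rough sanity check of that claim, here is the pixels-per-degree math in Python. The side-by-side stereo layout and the ~100 degree field of view are my assumptions, not numbers from the post:

```python
# Rough pixels-per-degree comparison. Assumptions: side-by-side stereo
# packed into a 3840x2160 hemispheric (180 degree) video, and a headset
# with a ~100 degree FOV sharing a 2560x1440 panel (1280x1440 per eye).
video_px_per_deg = (3840 / 2) / 180   # ~10.7 px/deg per eye
panel_px_per_deg = (2560 / 2) / 100   # ~12.8 px/deg per eye

print(f"video: {video_px_per_deg:.1f} px/deg")
print(f"panel: {panel_px_per_deg:.1f} px/deg")
# The video delivers fewer pixels per degree than the panel can show,
# and that's before any resampling/warping losses.
```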
It also requires a high bitrate, because compression artifacts are a hundred times more visible that close to the eye, and a high framerate on top of that, resulting in a file that is insanely large yet still not all that good, while playback is limited to head tracking (rotation only, no moving around).
Ok, I'm actually going somewhere with this.
Realtime rendering, on the other hand, can render directly at the native resolution and handle the lens transform/warping in the same pass. The frames are also uncompressed: at 1440p and 60fps that's roughly 660MB of pixel data per second, far beyond any practical video bitrate.
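The ~660MB figure is just the frame arithmetic, assuming 24-bit RGB with no alpha channel:

```python
# Back-of-the-envelope bandwidth for uncompressed 1440p at 60 fps.
width, height = 2560, 1440   # panel resolution from the post
bytes_per_pixel = 3          # 24-bit RGB, no alpha (assumption)
fps = 60

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps
print(f"{bytes_per_frame / 1e6:.1f} MB per frame")   # ~11.1 MB
print(f"{bytes_per_second / 1e6:.0f} MB per second") # ~664 MB/s
```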
So my thought is: what if the two were combined, video and realtime rendering? Like AR inside a headset, or inverted AR in a way. A person in a hallway can be broken into two parts: Unreal renders a really good-looking hallway, while the person is filmed with a stereoscopic VR camera and composited into that scene. The video gets much smaller (it only has to cover the person), the overall quality goes up, and the player/viewer can still move around.
All the usual rendering tricks can then be applied to both the video character and the environment, or the footage can be projected onto a mesh, something like the tech behind L.A. Noire but in VR.
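To make the compositing idea concrete, here is a minimal per-eye blend sketch in Python/NumPy. It assumes the filmed person comes with an alpha matte (e.g. from a chroma key) and is already reprojected to match the eye camera; `render_left`, `video_left`, etc. are hypothetical stand-ins for the engine and the video decoder:

```python
import numpy as np

def composite_eye(env_frame: np.ndarray, person_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a matted video layer over a rendered environment.

    env_frame:   HxWx3 uint8, the realtime-rendered hallway for one eye.
    person_rgba: HxWx4 uint8, one eye of the stereo video with an alpha
                 matte, already reprojected to match the eye's camera.
    """
    rgb = person_rgba[..., :3].astype(np.float32)
    alpha = person_rgba[..., 3:4].astype(np.float32) / 255.0
    env = env_frame.astype(np.float32)
    out = rgb * alpha + env * (1.0 - alpha)  # standard "over" blend
    return out.astype(np.uint8)

# Per displayed frame: render both eyes, pull the matching stereo video
# frame, and blend. The video only needs to cover the person, so it can
# be cropped far smaller than a full hemispheric frame.
# left  = composite_eye(render_left(), video_left())    # hypothetical calls
# right = composite_eye(render_right(), video_right())  # hypothetical calls
```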
A bit like this, except not made in Unity half a decade ago.
[Attachment 897875]
Why would it be done this way? I don't know, but I would like to see someone try it.