(Don't / Do) Replicate the Real World in VR?

Guideline: Don’t Replicate the Real World
Source: https://sites.umiacs.umd.edu/elm/2017/10/03/3d-visualization-for-nonspatial-data-guidelines-and-challenges/

Question:
The overall message of the guideline in the above blog post (rule no. 7) is clear: for 3D visualizations of nonspatial data, it is unfavorable to replicate real-world controls. However, this seems to cover only 3D visualizations on a screen, not fully immersive visualization environments in VR. I think that in such environments, at least for some applications, the exact opposite would be helpful: try to replicate the real world as far as possible and extend/improve/adapt it to smooth away the difficulties of real environments (e.g., the limitations of screen size: the entire wall of the virtual environment could be used for display).
This would probably help the analyst quickly orient themselves in the virtual environment and interact with it naturally, without a steep learning curve. Of course, the new technology enables new ways to interact with the environment and we should exploit those possibilities, but I also think that overall usability would increase if we created an environment familiar from the real world, i.e., used a "normal" room as the "base" of the virtual environment and implemented visual metaphors within it.
What is your opinion on that?

Hi Matthias,

I think Niklas' rule #7 includes VR (and AR) environments. The problems he raises are general problems, attributable to the fact that humans live in a 3D environment and evolved to (mostly) cope with it; hence we think it's the best of all worlds. However, Niklas' entry and your question have many levels, and I want to try to answer some of them one by one (not saying these answers will be satisfying, as the topic is enormous!).

First, it depends on what you mean by "try to replicate the real world as far as possible". Do you have a concrete case in mind? What is your 'world'? What is your data? And which interactions do you want to support?

Perception in 3D is still hard, even in the real world. If you see a chair, that's easy to understand, because you have seen so many chairs in your life. If you see an unknown object, it's quite hard to guess its shape. The same holds for virtual 3D, even if stereoscopic. 3D can be a great aid in understanding a visualization metaphor such as a 3D scatterplot. Plus, you might be able to understand some 3D relations such as clusters, and perhaps their size, overall density, and distances in 3D space. For other tasks you may want proper 2D visualizations (e.g., projections), as sketched below.
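To make the "projections" point concrete, here is a minimal sketch (Python/NumPy; illustrative only, not from the original discussion) of reducing a 3D scatterplot to a 2D view via PCA, so that cluster structure can be judged on a flat view without 3D perception issues:

```python
# Minimal sketch (illustrative): project 3D scatterplot data to 2D with PCA.
import numpy as np

rng = np.random.default_rng(seed=0)
# Two synthetic clusters in 3D space.
points = np.vstack([
    rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(100, 3)),
    rng.normal(loc=[3.0, 3.0, 3.0], scale=0.5, size=(100, 3)),
])

# PCA via SVD: keep the two directions of largest variance.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T   # (200, 2) coordinates for a 2D scatterplot
```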

Then there is the very specific point you're making: interaction. Again, I think the answer is not so easy. Many 'natural user interfaces' have drawbacks compared to mouse and keyboard (if that's what you're comparing them to). Interacting with the mouse is very precise and doesn't cause a lot of fatigue; moving your arms for a long time to perform 3D movements can be very tiring. Keyboard shortcuts are another example: they replace more 'natural' interactions such as clicking through a menu and selecting a command.

On the other hand, there are studies that suggest that more complicated interactions, e.g., those involving many degrees of freedom, are better performed in 3D space (e.g., [1,2]).

I think the interesting question is how we can combine the best of both worlds, 2D and 3D, and how we can adapt our environment (visualizations and interactions) to the task or data at hand. I think this is a major design challenge.

What do you think?

[1] http://www.aviz.fr/~bbach/hololens/Bach2018holostudy.pdf
[2] https://hal.inria.fr/hal-01436206/document

The intention was to cover 3D visualization in general, regardless of whether it is shown on a flat screen or in an immersive environment. Perhaps there is a difference in understanding of what “replicating the real world” means, however. As I state in the blog post, I have seen far too many naive 3D user interface environments where people create a 3D depiction of an office in VR to somehow represent a 3D version of a desktop: to print something, just send it to the 3D printer in the corner of your office! It’s a bad idea, because it just adds back all of the physical limitations of the real world that are not necessary and do not contribute anything other than familiarity.

Your argument seems mainly to be about decreasing the learning curve while smoothing away the difficulties of the real world. I would argue that replicating the real world “as far as possible” is going to yield far too many of these difficulties to be useful in the first place. Familiarity will only get you so far. After all, we can’t rest our feet on our virtual Windows or OSX desktops, yet we are perfectly capable of understanding them as virtual workspaces after only a little instruction and training. The only exception is really when the data is truly representational of the real world, like for an architectural previsualization, in which case it does make sense to do this.

Hi Benjamin,
Thank you for your detailed answer!

I was thinking about visualizations in VREs in general, without a specific scenario in mind. And it was meant to be not only about the visualization itself but also about its surroundings. The idea was to use familiar surroundings (and controls) to lower the learning curve and possibly increase the impression of immersion. I assume that could have positive effects on the effectiveness of the visualization itself: e.g., create a floor for the analyst to stand on instead of having them float through outer space while observing the visualization; build rooms similar to the ones we know from real life (if this is irrelevant to the task and no additional space is needed). I suggest this is reasonable at least for aspects that do not obviously disadvantage any task directly.

As your nice example suggests, the question of to what degree the real world should be replicated cannot be answered with one general solution. For instance, if the interaction with a 3D visualization is "most natural", i.e., as if the visualization were a physical 3D object, one would have to walk up to the visualization and touch it somehow to interact. This would be very intuitive, but it comes at the high cost of fatigue, inefficiency, and probably inaccuracy compared to a tiny pointer. I fully agree with your suggestion and think it is necessary to weigh, for every single case, what is more important and whether there are compromises that combine the advantages of both.

Thank you for your answer!
I fully agree with your point of view! I also realize the ambiguity of my question. By "replicating the real world" I was actually referring to making the virtual reality as real/plausible/familiar as possible, including photorealistic rendering and "real"-looking scenarios (e.g., a laboratory room). In the context of visual analytics, this means creating an environment that could be real (e.g., a virtual office environment with a line chart on the wall, instead of a line chart floating in empty space). Disclaimer: of course, this depends largely on the task at hand. In some cases, it might be useful to enter a different world without any similarity to the real one. I was not referring to installing a 3D trash can that the user has to walk to in order to discard part of the visualization. That is, I mainly mean the aspects that do not impair the user's actions (e.g., overall surroundings = office environment; walls, ceiling, and floor available, etc.). I think in this respect VREs differ slightly from 3D visualizations on a screen, as they define the entire surroundings of the user without impairing the visualization itself.

At least for some visualizations, it might hold that the more similar the VRE is to the real world, the more familiar it is, and the easier it is to start working with it. Therefore, if it is not disadvantageous to imitate the surroundings realistically, why not do it? For controls and interactions, good tradeoffs might be optimal to exploit the advantages of both (intuitive and practicable): e.g., instead of touching the visualization at a certain position (intuitive), pointing at the respective position with a laser beam.
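To illustrate, such laser-beam selection essentially boils down to picking the data point nearest to a ray cast from the controller. Below is a minimal, hypothetical sketch in Python/NumPy (the controller pose would come from the VR toolkit; function and parameter names are invented for illustration):

```python
import numpy as np

def pick_point(origin, direction, points, max_dist=0.1):
    """Return the index of the data point closest to the beam, or None."""
    d = direction / np.linalg.norm(direction)
    rel = points - origin                 # vectors from ray origin to each point
    t = np.clip(rel @ d, 0.0, None)       # projection length along the beam
    nearest_on_ray = origin + np.outer(t, d)
    dist = np.linalg.norm(points - nearest_on_ray, axis=1)
    i = int(np.argmin(dist))
    return i if dist[i] <= max_dist else None
```

For example, pick_point(controller_pos, controller_dir, data_points) would return the index of the point the beam is aimed at, or None if the beam misses everything by more than max_dist.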

Don't you think that if there is no obvious disadvantage to making the visualization environment more realistic, we should do it? I.e., replicate the world as much as possible, under the premise that interactions, etc., are not impaired by doing so?

I agree with the central point (or the essence) of this guideline, but would like to explore its boundary in more detail, where researchers, authors, and reviewers often do not agree with one another. (Note: metaphorically, "central point" and "boundary" are spatial.)

The guideline is for non-spatial data, and we may consider the following categorization of spatial and non-spatial data:

(a) physically spatial (e.g., geographical maps, medical volume datasets);
(b) inferably spatial (e.g., molecular geometry), whose spatialization users are familiar with;
(c) artificially spatial (e.g., color space), whose spatialization users are familiar with;
(d) associated with physically-spatial attributes that are often not used or unimportant to visualization tasks (e.g., metro maps, communication network architectures);
(e) non-spatial but commonly spatialized (e.g., height for price, length for time), which users can easily and quickly learn;
(f) non-spatial and seldom spatialized (e.g., a list of courses in a department);
(g) non-spatial and should not be spatialized in any circumstance (this is a placeholder as I have not found an example).

A dataset may feature components in multiple categories, e.g., a building on a map may have its x-y dimensions in (a) and its z dimension in (d).

The boundary is blurred by its user- and task-dependent nature. Categories (b), (c), and (e) naturally make us ask: "Can users become accustomed to the visualization if some data in (f) is spatialized?" Category (d) makes us ask: "In what scenarios might bringing back the omitted physically-spatial attributes be beneficial?" This seems to be matthias.kraus's question. The benefits may include ease of learning and remembering, context-awareness, connection to other variables not in the data (e.g., rain cover conditions), visual consistency with related visualizations that show physically-spatial data, etc.

I guess that we also need to define "replicate the real world" more precisely. What counts as replicating the real world, and what does not? Here is a tentative categorization:

(1) faithfully map all obtainable physically spatial attributes
(2) faithfully map some physically spatial attributes
(3) map some physically spatial attributes at a lower resolution (a common practice)
(4) map some physically spatial attributes with deformation (e.g., metro map)
(5) map some physically spatial attributes to non-spatial visual channels

I guess that semantically "replicate the real world" does not apply to categories (c) and (e).

In addition, the tasks of disseminative visualization may present different scenarios from those of observational, analytical, and model-developmental visualization. For dissemination, one may justifiably "replicate the real world" to provide users with a novel experience and to grab their attention. There is also a fair amount of evidence supporting "replicate the real world" in developing human models (e.g., in sports and medicine).

In my opinion, while this depends on the data and the task, trying to replicate the real world is going to restrict you far too much. For example, if we can agree that a command line is optimal for some tasks (e.g. complex file management), how would you suggest a user in a 3D replica of the world would perform such file management operations? Trying to align themselves with a shell window represented by a 3D surface (like a monitor in a virtual 3D office) would certainly not be the optimal way to do it.

As for familiarity, as I said in my last reply, that is mostly useful when transferring prior knowledge to a new setting, not once you have learned the environment.

Hi, Elm,
I understand and to a large extent agree with your view of "replicating the real world". However, I am not sure about your example of file management. Note that some file management commands, including the old Unix commands (ls, mv, find, rm, ...), are now mostly performed in a GUI with "drag and drop" or "click and show/input". Could some of these commands feature more metaphoric realism in the future? Perhaps we should not close the door. Many years ago, I worked on the design of a VE application for managing a large number of files associated with a type of temporal record (not suitable to say here; just imagine project files, student files, etc.). The basic design used virtual rooms for grouping (e.g., projects, classes, etc.). Some tasks required users to visit these rooms regularly. Files were to be displayed as pieces of paper on the wall. Each piece would automatically slip down a bit each day; once it reached the floor, it would be archived away automatically. Users could move a file up if they wished to keep it directly accessible. Although the VE was never implemented and tested, I have not found a mechanism in an ordinary UI that makes it as easy to observe and decide what should be archived away.
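For illustration, the aging mechanic described above could be sketched roughly as follows (Python; all names and constants are invented here, since the VE was never implemented):

```python
# Rough sketch of the "paper on the wall" aging mechanic: each file's wall
# position drops a little every day, and files that reach the floor are
# archived automatically.
from dataclasses import dataclass

WALL_HEIGHT = 2.0   # metres; fresh files start at the top of the wall
DAILY_DROP = 0.02   # how far a sheet slips down per day

@dataclass
class WallFile:
    name: str
    height: float = WALL_HEIGHT
    archived: bool = False

def advance_day(files):
    """Slide every sheet down a bit; archive those that touch the floor."""
    for f in files:
        if f.archived:
            continue
        f.height = max(0.0, f.height - DAILY_DROP)
        if f.height == 0.0:
            f.archived = True

def pin_up(f):
    """The user moves a file back up to keep it directly accessible."""
    f.height = WALL_HEIGHT
```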

Hi again,
To follow up on this interesting discussion, we conducted a small user study. In this study, we examined the impact of a more realistic-looking environment, compared to a quite abstract one, on a simple identification task in heightmaps.
For that purpose, we created two scenarios: first, a virtual space containing nothing but a floor for the user to stand on and a cube in the center on which the visualization was displayed; second, a virtual environment resembling a realistic office, with a table in the center.
As a visualization, 3D heightmaps were displayed.
Participants were asked to find common peaks in two superimposed heightmaps, i.e., locations at which both 3D heightmaps had a peak of the same height.
We allowed users to walk around in the virtual environment and to move each of the two heightmaps individually up and down to better distinguish them.
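For clarity, the task criterion (both heightmaps have a peak of roughly the same height at the same location) can be stated programmatically. Here is a minimal sketch in Python/NumPy, with an assumed grid representation and tolerance; this is not the actual study code:

```python
import numpy as np

def local_peaks(h):
    """Boolean mask of cells strictly higher than their 4 neighbours."""
    p = np.pad(h, 1, constant_values=-np.inf)
    return ((h > p[:-2, 1:-1]) & (h > p[2:, 1:-1]) &
            (h > p[1:-1, :-2]) & (h > p[1:-1, 2:]))

def common_peaks(h1, h2, tol=0.05):
    """Cells where both maps peak and their heights differ by at most tol."""
    mask = local_peaks(h1) & local_peaks(h2) & (np.abs(h1 - h2) <= tol)
    return np.argwhere(mask)   # array of (row, col) grid locations
```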

We measured task completion time, precision, and the overall number of layer movements.
Results indicate higher task completion times when the office environment was present. Also, participants tended to move the layers up and down more in the plain scenario than in the office scenario.

This first, basic study supports the guideline of not replicating the real environment; however, the reasons for this need to be examined in more detail. A possible explanation is that the office environment distracted participants from the actual task, impairing the visual comparison of the heightmaps.