The perception and recognition of the spatial layout of objects in a three-dimensional scene was studied using a virtual reality (VR) simulation. Subjects were asked to detect the movement of one of several objects across the surface of a tabletop after a retention interval during which all objects were occluded from view. Previous experiments have contrasted performance in this task after rotations of the observer's viewpoint with rotations of only the objects themselves, finding that subjects who walk or otherwise move to a new viewpoint perform better than those whose viewpoint remains fixed. This advantage for mobile observers has been attributed to non-visual information derived from the proprioceptive and vestibular systems. Our results show that purely visual information derived from simulated movement can also improve subjects' performance, although the improvement appeared primarily as faster response times rather than greater response accuracy.