Goal-directed approaches to perception generally hold that distance perception is shaped by the body and its potential for interaction. Although this phenomenon has been extensively investigated in the field of perception, little is known about the effect of motor interactions on memory, and about how they shape the global representation of large-scale spaces. To investigate this question, we designed an immersive virtual reality environment in which participants had to learn the positions of several items. Half of the participants had to physically grab the items with their (virtual) hand and drop them at specified locations (active condition). The other half were simply shown the items, which appeared at the specified locations without any interaction (passive condition). Half of the items used during learning were images of manipulable objects, and the other half were non-manipulable objects. Participants were subsequently asked to draw a map of the virtual environment from memory and to position all the items in it. Results show that active participants recalled the global shape of the spatial layout less precisely and made larger absolute distance errors than passive participants. Moreover, the global scaling compression bias was stronger for active than for passive participants. Interestingly, manipulable items showed a greater compression bias than non-manipulable items, yet item manipulability had no effect on correlation scores or on absolute non-directional distance errors. These results are discussed in light of grounded approaches to spatial cognition, emphasizing motor simulation as a possible mechanism for retrieving positions from memory.