Volume 22, article 1246, pages 828-834

Published: Dec 17, 2024

DOI: 10.5937/jaes0-54575

THE STUDY OF ACCURACY OF AN OPERATOR’S PERCEPTION OF GEOMETRICAL OBJECT SIZES AND SHAPES IN THE VIRTUAL ENVIRONMENTS

Igor Petukhov, Pavel Kurasov, Ilya Tanryverdiev, Luydmila Steshina, Ilya Steshin, Daniil Galkin

Abstract

This paper presents an experimental comparison of the accuracy of an operator's perception of geometrical object sizes and shapes under different conditions of information presentation: in virtual environments and from electronic displays. The experiments used a psychophysiological test of the accuracy with which an operator perceives geometrical object sizes and shapes, administered both in virtual environments and under perception of information from an electronic display. As a common metric of this accuracy, the operator was asked to visually determine the object's center of gravity. No significant differences were found, either in the accuracy of perceiving geometrical object sizes and shapes or in the speed of this process, between the different methods of displaying visual information to the operator.
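For illustration only (not taken from the paper): a minimal Python sketch of how a single trial of such a test could be scored, assuming the stimuli are planar polygons and that accuracy is the Euclidean distance between the point indicated by the operator and the shape's true centroid. Both assumptions, as well as the function names, are ours and may not match the authors' procedure; the same score would be computed identically for the virtual-environment and electronic-display conditions, which is what makes the comparison described above possible.

```python
# Minimal sketch (not the authors' code) of scoring one trial of the
# center-of-gravity perception test described in the abstract.
# Assumptions: stimuli are simple planar polygons, and accuracy is the
# Euclidean distance between the operator's indicated point and the
# polygon's true centroid; the paper's exact stimuli and metric may differ.
import numpy as np

def polygon_centroid(vertices: np.ndarray) -> np.ndarray:
    """True center of gravity of a simple (non-self-intersecting) polygon."""
    x, y = vertices[:, 0], vertices[:, 1]
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)
    cross = x * y_next - x_next * y            # shoelace terms
    area = cross.sum() / 2.0
    cx = ((x + x_next) * cross).sum() / (6.0 * area)
    cy = ((y + y_next) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

def trial_error(vertices: np.ndarray, indicated_point: np.ndarray) -> float:
    """Accuracy score for one trial: distance from the response to the true centroid."""
    return float(np.linalg.norm(indicated_point - polygon_centroid(vertices)))

if __name__ == "__main__":
    # Hypothetical trial: a right triangle shown either in VR or on a display.
    triangle = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
    response = np.array([1.2, 0.9])            # point the operator indicated
    print(f"error = {trial_error(triangle, response):.3f}")  # true centroid is (1, 1)
```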

Keywords

operator, perception, sizes, shapes, virtual reality

Acknowledgements

These results were obtained with the support of the Russian Science Foundation Grant No. 23-19-00568 “Methods and intelligent system for supporting dynamic stability of operators of ergatic systems”, https://rscf.ru/project/23-19-00568/.

References

1.      Lapointe, J. F., Jean-Marc, R. (2000). Using VR for efficient training of forestry machine operators. Education and Information Technologies, no. 5, p. 237-250.

2.      Zheng, Yili. (2018). Research on virtual driving system of a forestry logging harvester. Wireless Personal Communications, no. 102, p. 667-682.

3.      Pereira, J. J. (2019). Development of a Harvester Machine Simulator in Virtual Reality. MS thesis, from https://trepo.tuni.fi/bitstream/handle/10024/115646/Pereira.pdf?sequence=2, accessed on 01.11.2024.

4.      Hartsch, F. (2022). Influence of Loading Distance, Loading Angle and Log Orientation on Time Consumption of Forwarder Loading Cycles: A Pilot Case Study. Forests, no. 13(3), p. 384.

5.      Steshina, L., Petukhov, I., Tanryerdiev, I., Kurasov, P., Glazyrin, A. (2019). Training of high-skilled workers using exercisers and simulators. 2019 3rd European Conference on Electrical Engineering and Computer Science (EECS), IEEE, p. 134-139.

6.      Bachman, P., Milecki, A. (2019). Safety improvement of industrial drives manual control by application of haptic joystick. Intelligent Systems in Production Engineering and Maintenance, Springer International Publishing, p. 563-573.

7.      Hornsey, R. L., Hibbard, P. B., Scarfe, P. (2020). Size and shape constancy in consumer virtual reality. Behavior research methods, no. 52, p. 1587-1598.

8.      Creem-Regehr, S. H., Stefanucci, J. K., Bodenheimer, B. (2023). Perceiving distance in virtual reality: theoretical insights from contemporary technologies. Philosophical Transactions of the Royal Society B, no. 378(1869), 20210456.

9.      Ping, J., Weng, D., Liu, Y., Wang, Y. (2020). Depth perception in shuffleboard: Depth cues effect on depth perception in virtual and augmented reality system. Journal of the Society for Information Display, no. 28(2), p. 164-176.

10.   Lebreton, P., Raake, A., Barkowsky, M., Le Callet, P. (2014). Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues. In Stereoscopic Displays and Applications XXV, SPIE, no. 9011, p. 82-92.

11.   Watt, S. J., Akeley, K., Ernst, M. O., Banks, M. S. (2005). Focus cues affect perceived depth. Journal of vision, no. 5(10), p. 7-7.

12.   Kim, J. J. J., McManus, M. E., Harris, L. R. (2022). Body orientation affects the perceived size of objects. Perception, no. 51(1), p. 25-36.

13.   Harris, L. R., Mander, C. (2014). Perceived distance depends on the orientation of both the body and the visual environment. Journal of vision, no. 14(12), p. 17-17.

14.   Holden, J., Francisco, E., Lensch, R., Tommerdahl, A., Kirsch, B., Zai, L., Tommerdahl, M. (2019). Accuracy of different modalities of reaction time testing: Implications for online cognitive assessment tools. BioRXIV, 726364.

15.   Efron, B. (2003). Second thoughts on the bootstrap. Statistical science, no. 18(2), p. 135-140.

16.   Jolliffe, I. T., Cadima, J. (2016). Principal component analysis: a review and recent developments. Philosophical transactions of the royal society A: Mathematical, Physical and Engineering Sciences, no. 374(2065), 20150202.

17.   McInnes, L., Healy, J., Melville, J. (2018). Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.

18.   MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of 5-th Berkeley Symposium on Mathematical Statistics and Probability/University of California Press.

19.   Thorndike, R. L. (1953). Who belongs in the family? Psychometrika, no. 18(4), p. 267-276.

20.   Deza, E., Deza, M. M. (2009). Encyclopedia of distances. Springer Berlin Heidelberg.

21.   Harris, C. R., Millman, K. J., Van Der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Oliphant, T. E. (2020). Array programming with NumPy. Nature, no. 585(7825), p. 357-362.

22.   McKinney, W. (2010). Data structures for statistical computing in Python. SciPy, vol. 445, no. 1, p. 51-56.

23.   Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Van Mulbregt, P. (2020). SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature methods, no. 17(3), p. 261-272.

24.   McInnes, L., Healy, J., Melville, J. (2018). Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.

25.   Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Duchesnay, É. (2011). Scikit-learn: Machine learning in Python. the Journal of machine Learning research, no. 12, p. 2825-2830.