Browsing by Author "Yildiz Z.C."
Now showing 1 - 3 of 3
Item: A perceptual quality metric for dynamic triangle meshes (Springer International Publishing, 2017) Yildiz Z.C.; Capin T.
A measure for assessing the quality of a 3D mesh is necessary in order to determine whether an operation on the mesh, such as watermarking or compression, affects its perceived quality. Studies in this field are limited compared to those for 2D content. In this work, we aim to develop a full-reference perceptual quality metric for animated meshes that predicts the visibility of local distortions on the mesh surface. The proposed visual quality metric is independent of connectivity and material attributes. Thus, it is not tied to a specific application and can be used for evaluating the effect of an arbitrary mesh processing method. We use a bottom-up approach incorporating both the spatial and temporal sensitivity of the human visual system (HVS). In this approach, the mesh sequences go through a pipeline which models the contrast sensitivity and channel decomposition mechanisms of the HVS. As the output of the method, a 3D probability map representing the visibility of distortions is generated (a simplified sketch of this type of visibility pipeline follows the listing below). We have validated our method with a formal user experiment and obtained a promising correlation between the user responses and the proposed metric. Finally, we provide a dataset consisting of subjective user evaluations of the quality of public animation datasets. © 2017, The Author(s).

Item: A Fully Object-space Approach for Full-reference Visual Quality Assessment of Static and Animated 3D Meshes (SciTePress, 2019) Yildiz Z.C.; Capin T.
3D mesh models are exposed to several geometric operations such as simplification and compression. Several metrics for evaluating the perceived quality of 3D meshes have already been developed. However, most of these metrics do not handle animation, and they measure only global quality. Therefore, a full-reference perceptual error metric is proposed to estimate the detectability of local artifacts on animated meshes. This is a bottom-up approach in which spatial and temporal sensitivity models of the human visual system are integrated. The proposed method operates directly in 3D model space and generates a 3D probability map that estimates the visibility of distortions at each vertex throughout the animation sequence. We have also tested the performance of our metric on public datasets and compared the results to other metrics. These results reveal a promising correlation between our metric and human perception. Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.

Item: A machine learning framework for full-reference 3D shape quality assessment (Springer, 2020) Yildiz Z.C.; Oztireli A.C.; Capin T.
To decide whether the perceived quality of a mesh is influenced by a certain modification such as compression or simplification, a metric for estimating the visual quality of 3D meshes is required. Today, machine learning and deep learning techniques are becoming increasingly popular, since they offer efficient solutions to many complex problems. However, these techniques are not widely utilized in the field of 3D shape perception. We propose a novel machine learning-based approach for evaluating the visual quality of static 3D meshes. The novelty of our study lies in incorporating crowdsourcing into a machine learning framework for visual quality evaluation. We argue that this is an elegant approach, since modeling human visual system processes is a tedious task that requires tuning many parameters. We employ a crowdsourcing methodology to collect quality evaluation data, and metric learning to derive the parameters that best correlate with human perception (a simplified sketch of this metric-learning idea also follows the listing). Experimental validation of the proposed metric reveals a promising correlation between the metric output and human perception. Results of our crowdsourcing experiments are publicly available to the community. © 2018, Springer-Verlag GmbH Germany, part of Springer Nature.
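The first two items above describe a bottom-up pipeline that models the contrast sensitivity and channel decomposition mechanisms of the HVS and outputs a per-vertex probability map of distortion visibility. The Python sketch below (assuming NumPy) is only a loose illustration of that idea, not the authors' actual pipeline: the roughness-based contrast proxy, the placeholder CSF weight, and the Weibull-style psychometric mapping are all assumptions introduced here for demonstration.

import numpy as np

def vertex_contrast(normals, neighbors):
    # Crude per-vertex contrast proxy (an assumption, not the papers' measure):
    # mean angle between a vertex normal and its neighbors' normals, as a
    # stand-in for local surface roughness. `normals` is an (N, 3) array of
    # unit normals; `neighbors` maps a vertex index to adjacent vertex indices.
    contrast = np.zeros(len(normals))
    for v, nbrs in neighbors.items():
        if not nbrs:
            continue
        cosines = np.clip(normals[list(nbrs)] @ normals[v], -1.0, 1.0)
        contrast[v] = np.arccos(cosines).mean()
    return contrast

def csf_weight(spatial_freq_cpd, temporal_freq_hz):
    # Placeholder spatio-temporal contrast-sensitivity weight. Real CSF models
    # are considerably more involved; this one merely attenuates very low and
    # very high spatial frequencies and fast temporal change.
    return spatial_freq_cpd * np.exp(-0.5 * spatial_freq_cpd) * np.exp(-0.25 * temporal_freq_hz)

def visibility_map(ref_normals_per_frame, dist_normals_per_frame, neighbors,
                   spatial_freq_cpd=4.0, temporal_freq_hz=30.0, slope=8.0):
    # Per-vertex probability that the distortion is visible, aggregated over
    # the animation by taking the maximum over frames (illustrative only).
    prob = np.zeros(len(ref_normals_per_frame[0]))
    w = csf_weight(spatial_freq_cpd, temporal_freq_hz)
    for ref_n, dist_n in zip(ref_normals_per_frame, dist_normals_per_frame):
        diff = w * np.abs(vertex_contrast(dist_n, neighbors) -
                          vertex_contrast(ref_n, neighbors))
        # Weibull-style psychometric function: a larger weighted contrast
        # change maps to a higher detection probability.
        prob = np.maximum(prob, 1.0 - np.exp(-(slope * diff) ** 2))
    return prob

A full implementation in the spirit of these papers would decompose the signal into several spatio-temporal channels and pool their responses rather than applying a single weight per frame; the sketch collapses that step for brevity.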
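The third item pairs crowdsourced quality judgments with metric learning. The sketch below is a hypothetical reduction of that idea, not the authors' framework: it assumes per-pair feature differences (feature names such as curvature or roughness are chosen here purely for illustration) and crowdsourced mean opinion scores are already available, and it fits non-negative weights for a weighted feature distance by projected gradient descent.

import numpy as np

def learn_quality_weights(feature_diffs, subjective_scores, l2=1e-3, iters=2000):
    # Fit non-negative weights w so that the weighted feature distance
    # d(x) = sqrt(sum_k w_k * x_k^2) approximates crowdsourced quality scores.
    # `feature_diffs` is an (M, K) array of per-pair absolute feature
    # differences; `subjective_scores` is an (M,) array of mean opinion scores
    # (higher = more visible distortion). Illustrative only.
    X = np.asarray(feature_diffs, dtype=float) ** 2          # (M, K)
    y = np.asarray(subjective_scores, dtype=float)           # (M,)
    w = np.full(X.shape[1], 1.0 / X.shape[1])                # uniform start
    lr = 1.0 / (np.linalg.norm(X, ord=2) ** 2 + l2)          # small, safe step
    for _ in range(iters):
        pred = np.sqrt(X @ w + 1e-12)                        # weighted distances
        grad_pred = 0.5 * (pred - y) / (pred + 1e-12)        # d(loss)/d(X @ w)
        grad = X.T @ grad_pred + l2 * w                      # ridge gradient
        w = np.maximum(w - lr * grad, 0.0)                   # keep weights >= 0
    return w

def predicted_quality(feature_diff, w):
    # Score an unseen reference/distorted pair with the learned weights.
    return float(np.sqrt(np.dot(w, np.asarray(feature_diff, dtype=float) ** 2)))

In this hypothetical setup, learn_quality_weights would be run once on the crowdsourced training pairs, and predicted_quality would then score new reference/distorted mesh pairs with the learned weights.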