IMEKO Event Proceedings Search

Page 21 of 977 Results 201 - 210 of 9762

Jan Dittmann, Andreas Breitbarth, Gunther Notni
Evaluation of an approach to determine the diameter at breast height of forest trees using the goSCOUT3D handheld scanner in comparison to an iPhone

Forest monitoring has become increasingly important in recent decades, and the diameter at breast height is one of the most relevant parameters of forest trees. A diameter tape measure is currently used to determine this parameter. In this study we digitise this measurement and evaluate its accuracy and applicability using a photogrammetric approach. The goSCOUT3D is used as a representative of high-resolution 3D scanners with a photogrammetric approach. An everyday consumer product with a suitable camera is used for comparison; specifically, the main camera of the iPhone 14 Pro Max was selected because it corresponds to the state of the art. In total, 30 spruce trees were measured, and five recordings were made with both sensors for every tree. As a reference for the extracted parameter, we used a tape measure. For each recording, 100 images were taken by walking around the tree (360°) at a distance of about one metre. After measuring the trees, a 3D point cloud was generated for every recording using photogrammetric evaluation software. A Python program based on convex hull algorithms was implemented; these algorithms make it possible to determine the diameter at breast height robustly and efficiently. Additionally, the program automatically determines the required height of 130 cm in the generated point cloud. In this way, the diameter at breast height was extracted for each recording. The final results for both sensors were then evaluated and compared with the reference value of the tape measure. Compared to the iPhone 14 Pro Max, the goSCOUT3D delivers quantitatively better results for the diameter at breast height.
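
The convex-hull step described above can be sketched as follows: slice the point cloud at breast height, project the slice onto the ground plane, and estimate the diameter from the hull perimeter. This is a minimal illustration of the general idea under stated assumptions, not the authors' implementation; the function names, the slice thickness, and the perimeter-based diameter estimate are our own choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

def breast_height_diameter(points, ground_z=0.0, slice_height=1.30, slice_thickness=0.05):
    """Estimate the diameter at breast height (DBH) from a tree point cloud.

    points: (N, 3) array of x, y, z coordinates in metres.
    A thin horizontal slice at 1.30 m above ground is projected onto the
    XY plane; the convex hull perimeter then approximates the stem
    circumference, and the diameter follows as C / pi.
    """
    z = points[:, 2] - ground_z
    mask = np.abs(z - slice_height) <= slice_thickness / 2
    xy = points[mask, :2]
    hull = ConvexHull(xy)
    circumference = hull.area  # for 2D input, 'area' is the hull perimeter
    return circumference / np.pi

# Synthetic stem: a noisy cylinder of radius 0.15 m (true DBH = 0.30 m)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5000)
z = rng.uniform(0.0, 2.0, 5000)
r = 0.15 + rng.normal(0, 0.002, 5000)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
print(round(breast_height_diameter(cloud), 3))  # close to 0.30
```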

Valentina Bello, Irene Bassi, Cristina Nuzzi, Simone Pasinetti, Sabina Merlo
AI-Enhanced Speckle Pattern Imaging for Assessing Milk Authenticity

Milk is consumed daily by billions of people worldwide, and it is also one of the foods most commonly subject to adulteration and counterfeiting. This paper presents a novel low-cost approach to assessing milk authenticity by combining speckle pattern imaging and artificial intelligence. From the collected images, 11 statistical parameters were extracted and used to train and test a machine learning model, with the aim of distinguishing whole cow milk from 8 adulterated samples. The performance achieved by the AI-enhanced speckle pattern imaging approach (accuracy of 80%) represents a promising starting point for the development of this label-free, low-cost, easy-to-use technique to support efforts in fighting food fraud.
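
The pipeline of statistical feature extraction followed by a classifier can be illustrated as below. The abstract does not list the 11 parameters used, so the descriptors here (mean, speckle contrast, skewness, and so on) are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
from scipy import stats

def speckle_features(img):
    """Illustrative statistical descriptors of a speckle image
    (the paper's exact 11 parameters are not listed in the abstract)."""
    i = img.astype(float).ravel()
    return np.array([
        i.mean(),
        i.std(),
        stats.skew(i),
        stats.kurtosis(i),
        i.std() / i.mean(),  # speckle contrast (~1 for fully developed speckle)
        np.median(i),
        i.min(),
        i.max(),
    ])

# Feature vectors from many labelled images would then train a standard
# classifier, e.g. scikit-learn's RandomForestClassifier.
rng = np.random.default_rng(1)
img = rng.exponential(scale=50.0, size=(64, 64))  # intensity of fully developed speckle
f = speckle_features(img)
print(f.shape)  # (8,)
```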

Christina Junger, Benjamin Simon, Gunther Notni
Is AI superior to multimodal 3D sensor technology for transparent objects?

Transparent objects challenge 3D perception in robotics, especially in navigation and human-robot collaboration. Conventional 3D sensors in the visible or near-infrared spectrum often fail to detect transparent materials due to their optical properties. Collecting real-world datasets for deep learning is difficult and time-consuming because ground truth acquisition requires complex preparation. Multimodal 3D sensors like thermal 3D cameras can automate dataset creation but are costly and need restrictive safety setups. Combining standard 3D sensors or RGB cameras with zero-shot deep learning models offers a promising alternative, enabling recognition of unseen transparent objects without task-specific training. However, the accuracy and feasibility of such zero-shot methods for transparent object perception remain underexplored. This paper presents an initial investigation into their potential and limitations.

Massimo Olivero, Giuseppe Rizzelli, Chiara Bellezza Prinsi, Saverio Pellegrini, Antonino Quattrone, Francesco Tondolo, Donato Sabia, Guido Perrone, Roberto Gaudino
Real-framework comparison of optical fiber distributed sensing techniques for Building Information Modeling

This work explores the application of Distributed Optical Fiber Sensing (DOFS) for structural health monitoring within the context of Building Information Modeling (BIM). The study addresses concerns about reliability and implementation of optical sensing techniques by providing, for the first time to the authors' knowledge, a technical validation of two DOFS technologies in a real framework. Strain measurements are carried out on a small reinforced concrete beam using Optical Frequency Domain Reflectometry (OFDR) and Brillouin Optical Frequency Domain Analysis (BOFDA). Both methods successfully detect induced strain, with OFDR demonstrating superior resolution (3 cm) and faster measurement times (under 30 s), but exhibiting noisy results and requiring post-processing filtering. On the other hand, BOFDA offers a coarser 40 cm resolution and a 2-minute measurement time, but can measure over several km and has the potential for monitoring large infrastructures. A larger-scale experiment is then performed on a decommissioned bridge beam to simulate real infrastructure monitoring for BIM development. BOFDA provides consistent results and accurately predicts the location of failure during destructive testing. Although OFDR again shows higher resolution, its limited sensing range and installation constraints demonstrate that it is unsuitable for large structures. Ultimately, the study proves the potential of both DOFS techniques to replace extensive conventional sensor networks with single optical fibers, offering a unique benchmark for future applications in civil engineering. Ongoing data analysis promises further insights into sensor calibration and data interpretation.

Daniel Regner, João Facco, Moacir Wendhausen, Alice Bilbao, Tiago Loureiro Figaro da Costa Pinto, Armando Albertazzi G. Jr.
Inverse Triangulation with Spatiotemporal Correlation using a Variable Pseudorandom Pattern Projector for 3D Stereo Measurement

In this paper, we propose an innovative 3D reconstruction approach based on stereo vision that combines inverse triangulation with a spatiotemporal correlation algorithm. The inverse triangulation technique enables the correspondence search to be performed systematically in the object space, using a structured 3D grid centered on each point of interest (X, Y, Z). These 3D points are projected onto the image planes, and subpixel-interpolated intensities are extracted from a sequence of temporal images to compute the correlation values. For each (X, Y) tuple, the Z value is determined by the correlation peak. The images capture a pseudo-random laser pattern that changes over time. The proposed approach is intended for future applications in the underwater inspection of offshore structures in the oil and gas industry.
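
Schematically, the correlation-peak search over object-space depth works as follows. This toy sketch stands in for the full method: the projection into both image planes and the subpixel interpolation are abstracted into sampling callbacks that return temporal intensity sequences, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two temporal intensity sequences."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def find_depth(z_candidates, sample_left, sample_right):
    """Inverse-triangulation search (schematic): for a fixed (X, Y), each
    candidate Z is projected into both cameras; sample_left/right return the
    temporal intensity sequence interpolated at that projection. The Z whose
    sequences correlate best is taken as the surface depth."""
    scores = [zncc(sample_left(z), sample_right(z)) for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]

# Toy example: the true surface at z = 0.40 yields matching temporal
# signatures in both views; elsewhere the sequences decorrelate.
rng = np.random.default_rng(2)
true_signal = rng.normal(size=20)
def left(z):
    return true_signal + rng.normal(0, 0.05, 20)
def right(z):
    return true_signal + rng.normal(0, 0.05, 20) if abs(z - 0.40) < 1e-9 else rng.normal(size=20)

zs = np.round(np.arange(0.30, 0.51, 0.01), 2)
print(find_depth(zs, left, right))  # → 0.4
```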

Martin Richter, Maik Rosenberger, Gunther Notni
Resin traces as indicators of bark beetle activity: A hyperspectral approach using extended SWIR analysis

This work extends a novel method for detecting resin traces on forest surfaces, which is crucial for the early identification of bark beetle outbreaks in spruce and pine forests amidst climate change. The Normalized Difference Resin Index (NDRI), an index analogous to the Normalized Difference Vegetation Index (NDVI), has been developed for the purpose of identifying resin from beetle damage. Employing a hyperspectral system with a 1000-2500 nm range, it was found that detection performance could be improved with wavelengths beyond 1700 nm, with 1720 nm and 2300 nm identified as critical for the identification of resin in low-density conditions. Determining the age of the resin presented a challenge due to the inconsistent spectral changes observed. However, the enhanced detection capability at longer wavelengths suggests a promising approach for early pest detection. Advancements in short-wave infrared technology, such as colloidal quantum dot sensors, have the potential to further enhance automated forest health monitoring.
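
An NDVI-style normalized difference index has the form (a − b)/(a + b). Pairing the two bands the abstract highlights, 1720 nm and 2300 nm, gives a minimal illustration; the paper's exact NDRI band choice and sign convention may differ.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """NDVI-style normalized difference index, (a - b) / (a + b).
    The NDRI follows this form; 1720 nm and 2300 nm are paired here
    only because the abstract names them as critical bands."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / (a + b)

# Reflectance at 1720 nm and 2300 nm for two hypothetical pixels
r1720 = np.array([0.30, 0.10])
r2300 = np.array([0.10, 0.30])
print(normalized_difference(r1720, r2300))  # → [ 0.5 -0.5]
```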

Valentina Di Pinto, Giovanni Gibertoni, Luigi Rovati
Investigation of a High-Reflectance Coating for Wide-Spectrum Visual Stimulation

This study presents a performance comparison of low-cost, spray-applied reflective coatings for use on custom 3D-printed substrates, targeting visual stimulation applications such as ophthalmic prototypes. BaSO4 and TiO2 mixtures were evaluated against commercial references under broadband illumination to assess their spectral uniformity and practical applicability. While TiO2 achieved higher reflectance in the red-NIR region, BaSO4 demonstrated more balanced performance across the visible spectrum and superior adhesion. Standard white paints showed significantly lower and inconsistent reflectance, highlighting their inability to serve as reflectance standards.

Paolo Diotti, Daniele Caltabiano, Anna Angela Pomarico, Giuditta Roselli, Michele Norgia
Real-Time 3D-Camera based on LiDAR and MEMS Mirrors

In this study, we describe a Time-of-Flight scanning LiDAR (Light Detection and Ranging) prototype that leverages MEMS mirrors for agile beam steering and an FPGA-based processing unit for real-time 3D image reconstruction. The proposed 3D-LiDAR system is designed to operate within a range of up to 1 meter with a spatial resolution of 400 × 300 pixels at a frame rate of 30 Hz. The LiDAR prototype architecture consists of three main parts: an optomechanical system, an FPGA-based digital processing unit, and an analog front-end. Processed 3D depth maps are rendered in real time via an HDMI interface, providing immediate visual feedback. The integration of MEMS mirrors, an FPGA-based Time-to-Digital Converter, and an optimized analog front-end resulted in a highly efficient, compact, and real-time depth-sensing platform.

Zheng Liu, Patrick Hunhold, Ziran He, Galina Polte, Elske Linß, Janice Kielbassa, Maik Rosenberger, Gunther Notni
Investigation on Hyperspectral Augmentation to Construction Materials Classification

The recycling of construction waste faces challenges in material identification due to class imbalance in hyperspectral datasets. To address this, we propose integrating a data augmentation module into the classification workflow for construction materials using short-wavelength infrared (SWIR) reflectance spectra. Experiments were conducted with Random Forest (RF) and 1D-CNN classifiers across multi-class and binary classification tasks, where the latter targeted classes commonly confused with minority categories. Various augmentation methods were tested, with the self-attention-based WGAN (SA-WGAN) showing the most notable improvement. It increased the recall of the minority class by up to 60 and 48 percentage points in the multi-class and binary classification tasks, respectively, while maintaining stable performance on the majority classes.
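
For context, a classical baseline augmentation for minority-class spectra (random multiplicative scaling plus additive noise) can be sketched as below; the paper's best-performing method, the self-attention WGAN, instead learns a generator from data rather than using fixed perturbations. All parameter values here are illustrative assumptions.

```python
import numpy as np

def augment_spectra(spectra, n_new, noise_sd=0.01, scale_sd=0.02, seed=0):
    """Generate synthetic minority-class reflectance spectra by resampling
    existing ones and applying random multiplicative scaling plus additive
    noise, clipped to the physical reflectance range [0, 1]."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(spectra), n_new)
    base = spectra[idx]
    scale = rng.normal(1.0, scale_sd, (n_new, 1))
    noise = rng.normal(0.0, noise_sd, base.shape)
    return np.clip(base * scale + noise, 0.0, 1.0)

minority = np.random.default_rng(3).uniform(0.2, 0.6, (5, 100))  # 5 spectra, 100 SWIR bands
synthetic = augment_spectra(minority, n_new=50)
print(synthetic.shape)  # (50, 100)
```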

Wang Liao, Chen Zhang, Shiyao Gao, Hongyu Chen, Hao Chen, Maik Rosenberger, Gunther Notni
Metrological Evaluation of Multimodal 3D Camera System for Reliable Dynamic Facial Motion Analysis

Traditional 2D-based dynamic facial motion analysis methods often rely on texture features and lack depth information, raising the fundamental question: can facial motions be measured rather than inferred? This work proposes a multimodal 3D measurement-based concept for dynamic facial motion analysis, leveraging a camera system that combines a GOBO-based active stereo 3D unit with a synchronized RGB camera. A pretrained neural network extracts 2D facial landmarks, which are mapped into 3D metric space. As a proof of concept, 19 geometric features (e.g., distances and angles) are defined from the 3D metric facial landmarks to correspond to selected Action Units (AUs). The concept is evaluated from a metrological perspective. Under static conditions, where facial expressions remain unchanged, 90 frames were captured per video across 30 recordings for each of six participants. The average standard deviations of the defined feature measurements were 0.85° (angles) and 0.37 mm (distances), yielding expanded uncertainties of 2.6° and 1.1 mm (99.73% confidence). The feasible measurement capability of the camera system is further supported by low coefficients of variation (1.33% and 1.61%) of the measured feature values. To further evaluate dynamic capability, facial motions were induced, producing feature changes much greater than the uncertainty and confirming statistical significance. The proposed concept and camera system provide a robust and precise foundation for dynamic facial motion analysis, with potential applications in scenarios like micro-expression recognition, lie detection, and healthcare monitoring.
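
The quoted expanded uncertainties follow from the standard relation U = k·s, with coverage factor k = 3 for a 99.73% confidence level under a normal distribution; a quick check reproduces the abstract's figures.

```python
# Expanded uncertainty U = k * s, with coverage factor k = 3 for a
# 99.73% confidence level (normal distribution).
k = 3.0
s_angle, s_dist = 0.85, 0.37  # average standard deviations (deg, mm)
print(f"U_angle = {k * s_angle:.2f} deg")  # 2.55 deg -> quoted as 2.6 deg
print(f"U_dist  = {k * s_dist:.2f} mm")    # 1.11 mm  -> quoted as 1.1 mm
```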
