Abstract Pain is a transient physical reaction that manifests on the human face. Automatic pain intensity estimation is of great importance in clinical and health-care applications. Pain expression is characterized by a set of deformations of facial features; hence, features are essential for pain estimation. In this paper, we propose a novel method that encodes low-level descriptors and powerful high-level deep features through a weighting process to form an efficient representation of facial images. To obtain a powerful and compact low-level representation, we explore the use of second-order pooling over the local descriptors. Instead of direct concatenation, we develop an efficient fusion approach that unites the low-level local descriptors and the high-level deep features. To the best of our knowledge, this is the first approach that incorporates low-level local statistics together with high-level deep features for pain intensity estimation. Experiments are conducted on benchmark pain databases. The results demonstrate that the proposed low-to-high-level representation outperforms other methods and achieves promising results.
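The abstract describes two building blocks: second-order pooling of local descriptors and a weighted fusion of the pooled low-level representation with deep features. The sketch below illustrates how such a pipeline might look; the covariance-style pooling, the log-Euclidean mapping, and the fixed fusion weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def second_order_pool(descriptors, eps=1e-6):
    """Second-order (covariance-style) pooling of local descriptors.

    descriptors: (N, D) array of N local descriptors from one face image.
    Returns a flattened log-Euclidean representation of the pooled matrix.
    This is a generic sketch of second-order pooling, not necessarily the
    exact variant used in the paper.
    """
    X = descriptors - descriptors.mean(axis=0, keepdims=True)
    cov = (X.T @ X) / max(len(X) - 1, 1) + eps * np.eye(X.shape[1])
    # Log-Euclidean mapping keeps the pooled SPD matrix in a vector space.
    w, V = np.linalg.eigh(cov)
    log_cov = (V * np.log(w)) @ V.T
    iu = np.triu_indices(log_cov.shape[0])
    return log_cov[iu]

def fuse_low_high(low_level, deep_feat, alpha=0.5):
    """Weighted fusion of L2-normalised low-level and deep features.

    alpha is a hypothetical scalar weight; the paper's weighting process
    is learned/tuned rather than fixed like this.
    """
    low = low_level / (np.linalg.norm(low_level) + 1e-12)
    high = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    return np.concatenate([alpha * low, (1 - alpha) * high])
```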
Abstract Automatic pain recognition is paramount for medical diagnosis and treatment. Existing works fall into three categories: assessing facial appearance changes, exploiting physiological cues, or fusing them in a multi-modal manner. However, (1) appearance changes are easily affected by subjective factors, which impedes objective pain recognition; moreover, appearance-based approaches ignore long-range spatial-temporal dependencies that are important for modeling expressions over time; and (2) physiological cues are obtained by attaching sensors to the human body, which is inconvenient and uncomfortable. In this paper, we present a novel multi-task learning framework that encodes both appearance changes and physiological cues in a non-contact manner for pain recognition. The framework captures both local and long-range dependencies via the proposed attention mechanism for the learned appearance representations, which are further enriched by temporally attended physiological cues (remote photoplethysmography, rPPG) recovered from videos in the auxiliary task. This framework, dubbed the rPPG-enriched Spatio-Temporal Attention Network (rSTAN), allows us to establish state-of-the-art performance in non-contact pain recognition on publicly available pain databases and demonstrates that rPPG prediction can serve as an auxiliary task to facilitate non-contact automatic pain recognition.
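The abstract outlines a shared video encoder with spatio-temporal attention, a pain-recognition head, and an auxiliary rPPG head. The toy PyTorch module below sketches that multi-task layout only; the layer sizes, names, and pooling choices are hypothetical illustrations of the general idea, not the authors' rSTAN architecture.

```python
import torch
import torch.nn as nn

class TinySTAN(nn.Module):
    """Minimal sketch of a multi-task spatio-temporal attention network."""

    def __init__(self, dim=64, n_classes=2):
        super().__init__()
        # Shared 3D-CNN encoder over video clips: (B, 3, T, H, W) -> features.
        self.encoder = nn.Sequential(
            nn.Conv3d(3, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Self-attention over spatio-temporal tokens captures both local
        # and long-range dependencies.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.pain_head = nn.Linear(dim, n_classes)   # main task: pain recognition
        self.rppg_head = nn.Linear(dim, 1)           # auxiliary task: rPPG signal

    def forward(self, clip):
        feat = self.encoder(clip)                    # (B, C, T', H', W')
        b, c, t, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)     # (B, T'*H'*W', C)
        attended, _ = self.attn(tokens, tokens, tokens)
        pooled = attended.mean(dim=1)                # clip-level representation
        pain_logits = self.pain_head(pooled)
        # Per-time-step rPPG estimate from spatially pooled attended features.
        frame_feat = attended.reshape(b, t, h * w, c).mean(dim=2)   # (B, T', C)
        rppg = self.rppg_head(frame_feat).squeeze(-1)               # (B, T')
        return pain_logits, rppg
```

In a multi-task setup like this, the total loss would combine a classification loss on `pain_logits` with a regression loss between `rppg` and the recovered photoplethysmography signal, so the auxiliary task regularizes the shared encoder.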
Abstract Sulfur isotope measurements in three sulfide (two pyrite and one pyrrhotite) samples on two epoxy mounts showed that the mount‐to‐mount variation of raw δ34S values was negligible when secondary ion mass spectrometry (SIMS) analytical settings remained stable. In consequence, an off‐mount calibration procedure for SIMS sulfur isotope analysis was applied in this study. YP136 is a pyrrhotite sample collected from northern Finland. Examination of thin sections with a polarising microscope, backscattered electron image analyses and wavelength dispersive spectrometry mapping showed that the sample grains display no internal growth or other zoning. A total of 318 sulfur isotope (spot) measurements conducted on more than 100 randomly selected grains yielded highly consistent sulfur isotope ratios. The repeatability of all the analytical results of 34S/32S was 0.3‰ (2s, n = 318), which is the same as that of the well‐characterised pyrite reference materials PPP‐1 and UWPy‐1. Its δ34S value determined by gas mass spectrometry was 1.5 ± 0.1‰ (2s, n = 11), which agrees with the SIMS data (1.5 ± 0.3‰, 2s) calibrated by pyrrhotite reference material Po‐10. Therefore, YP136 pyrrhotite is considered a candidate reference material for in situ sulfur isotope determination.
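For readers unfamiliar with the notation, δ34S values are conventionally reported in per mil relative to a reference standard (Vienna Cañon Diablo Troilite, V-CDT, for sulfur); the standard definition is:

$$\delta^{34}\mathrm{S} = \left(\frac{({}^{34}\mathrm{S}/{}^{32}\mathrm{S})_{\text{sample}}}{({}^{34}\mathrm{S}/{}^{32}\mathrm{S})_{\text{V-CDT}}} - 1\right) \times 1000\ \text{‰}$$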
Abstract Automatically recognizing pain from spontaneous facial expressions has attracted increasing attention, since it can provide a direct and relatively objective indication of the pain experience. Until now, most existing works have focused on analyzing pain from individual images or video frames, thereby discarding the spatio-temporal information that can be useful in the continuous assessment of pain. In this context, this paper investigates and quantifies for the first time the role of spatio-temporal information in pain assessment by comparing the performance of several baseline local descriptors used in their traditional spatial form against their spatio-temporal counterparts, which take into account the video dynamics. For this purpose, we perform extensive experiments on two benchmark datasets. Our results indicate that using spatio-temporal information to classify video sequences consistently yields superior performance compared with that obtained using only static information.
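To make the spatial-versus-spatio-temporal distinction concrete, the sketch below contrasts a static per-frame LBP histogram with a simplified LBP-TOP-style descriptor that also encodes texture along the time axis. It is only an illustration under assumed settings (uniform LBP, single central slices per plane), not the full set of descriptors benchmarked in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(plane, P=8, R=1, bins=59):
    """Histogram of non-rotation-invariant uniform LBP codes for a 2D plane."""
    codes = local_binary_pattern(plane, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

def spatial_descriptor(video):
    """Static baseline: average per-frame LBP histogram (XY plane only)."""
    return np.mean([lbp_hist(frame) for frame in video], axis=0)

def spatiotemporal_descriptor(video):
    """LBP-TOP-style descriptor: concatenate histograms from the XY, XT and
    YT planes so that facial dynamics along time are also encoded.
    Simplified sketch using a single central slice per plane.
    """
    video = np.asarray(video, dtype=float)   # (T, H, W) grayscale clip
    t, h, w = video.shape
    xy = lbp_hist(video[t // 2])             # spatial texture
    xt = lbp_hist(video[:, h // 2, :])       # horizontal-temporal texture
    yt = lbp_hist(video[:, :, w // 2])       # vertical-temporal texture
    return np.concatenate([xy, xt, yt])
```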