Results 1 - 20 of 244
1.
Sensors (Basel) ; 24(2)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276360

ABSTRACT

Human violence recognition is an area of great interest to the scientific community due to its broad spectrum of applications, especially in video surveillance systems, because detecting violence in real time can prevent criminal acts and save lives. Most existing proposals and studies focus on precision of results, neglecting efficiency and practical implementation. In this work, we therefore propose a model that is both effective and efficient at recognizing human violence in real time. The proposed model consists of three modules: the Spatial Motion Extractor (SME) module, which extracts regions of interest from a frame; the Short Temporal Extractor (STE) module, which extracts temporal characteristics of rapid movements; and the Global Temporal Extractor (GTE) module, which identifies long-lasting temporal features and fine-tunes the model. The proposal was evaluated for efficiency, effectiveness, and ability to operate in real time. Results on the Hockey, Movies, and RWF-2000 datasets demonstrate that this approach is highly efficient compared to various alternatives. In addition, the VioPeru dataset, containing violent and non-violent videos captured by real video surveillance cameras in Peru, was created to validate the real-time applicability of the model. When tested on this dataset, our model outperformed the best existing models.
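The abstract does not specify how the SME module extracts regions of interest; as a rough illustration of the general idea only, a thresholded inter-frame difference can isolate moving regions. The function below is a hypothetical sketch, not the paper's implementation:

```python
def spatial_motion_extractor(prev_frame, frame, thresh=25):
    """Hypothetical sketch of an SME-style step: keep only pixels whose
    absolute inter-frame difference exceeds a threshold, zeroing out the
    static background. Frames are lists of rows of grayscale values."""
    roi, mask = [], []
    for prev_row, row in zip(prev_frame, frame):
        mask_row = [abs(p - q) > thresh for p, q in zip(prev_row, row)]
        roi.append([v if m else 0 for v, m in zip(row, mask_row)])
        mask.append(mask_row)
    return roi, mask
```

Pixels that changed between frames survive into the region of interest; everything static is suppressed before any temporal modeling.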


Subjects
Movement, Violence, Humans, Motion, Recognition (Psychology), Videotape Recording
2.
MHSalud ; 20(2): 43-62, Jul.-Dec. 2023. tab, graf
Article in Spanish | LILACS, SaludCR | ID: biblio-1558374

ABSTRACT



Abstract: Introduction: In the context of a professional dance company, the pandemic generated changes in the existing work dynamics in both its professional and managerial areas. Purpose: To show the organization and management actions taken by a professional dance company to position its work at the service of society through Lines of work in health and primordial prevention of Danza Universitaria, a project that arose in response to the COVID-19 pandemic. Methodology: Systematization of an exploratory experience conducted between August 2020 and October 2021, whose reported findings result from a process of data collection and organization. Results: Detailed and precise information was obtained on each aspect of the organization and management of the project. Systematization: The dance company, its human resources, visions, and dance practices are contextualized and analyzed, positioning dance as a movement skill for integral health and encompassing the role of the dance professional, leading into the administrative and management processes of the company. Conclusions: The dimensions of the project as a system integrated into the community are clarified, and its capacity to respond to the needs of its environment, as well as the contribution of dance and movement to integral health, is made visible. Recommendations: To value this project as a unique space that has served as an object of study and as part of an academic experience contributing to the development and management of the arts, human movement, education, and health.




Subjects
Organization and Administration, Dance Therapy, Dance/education, COVID-19, Intersectoral Collaboration, Motion
3.
Article in English | MEDLINE | ID: mdl-37681796

ABSTRACT

New technologies based on virtual and augmented reality offer promising perspectives for improving the assessment of human kinematics. The aim of this work was to develop a markerless 3D motion analysis capture system (MOVA3D) and to test it against the Qualisys Track Manager (QTM). A digital camera was used to capture the data, and proprietary software capable of automatically inferring the joint centers in 3D and performing the angular kinematic calculations of interest was developed for the analysis. In the experiment, 10 subjects (22 to 50 years old), 5 men and 5 women, with a body mass index between 18.5 and 29.9 kg/m2, performed squatting, hip flexion, and hip abduction movements, while both systems simultaneously measured the hip abduction/adduction and hip flexion/extension angles. The mean difference between the QTM and MOVA3D systems across all frames for each joint angle was analyzed with Pearson's correlation coefficient (r). The MOVA3D system reached good (above 0.75) or excellent (above 0.90) correlations in 6 out of 8 variables. The average error remained below 12° in 20 out of 24 variables analyzed. The MOVA3D system is therefore promising for use in telerehabilitation or other applications where this level of error is acceptable. Future studies should continue to validate the MOVA3D as updated versions of its software are developed.
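The agreement analysis above relies on Pearson's correlation coefficient; a minimal Python version of that statistic (the standard formula, not the study's analysis code) is:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two paired series,
    e.g. per-frame joint angles from two capture systems."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values above 0.75 and 0.90 correspond to the "good" and "excellent" thresholds quoted in the abstract.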


Subjects
Augmented Reality, Movement, Male, Humans, Adult, Female, Young Adult, Middle Aged, Posture, Motion, Lower Extremity
4.
Sensors (Basel) ; 23(14)2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37514677

ABSTRACT

Due to its capacity to gather vast, high-level data about human activity from wearable or stationary sensors, human activity recognition substantially impacts people's day-to-day lives. Multiple people and objects may appear in a video, dispersed across the frame in various places, so visual reasoning in the action recognition task requires modeling the interactions between many entities in the spatial dimensions. The main aim of this paper is to evaluate and map the current scenario of human action recognition in RGB (red, green, and blue) videos based on deep learning models. A residual network (ResNet) and a vision transformer (ViT) architecture with a semi-supervised learning approach are evaluated. DINO (self-DIstillation with NO labels) is used to enhance the potential of the ResNet and the ViT. The evaluation benchmark is the human motion database (HMDB51), which aims to capture the richness and complexity of human actions. The results obtained for video classification with the proposed ViT are promising based on performance metrics and results from the recent literature. A bi-dimensional ViT with long short-term memory demonstrated strong performance in human action recognition on the HMDB51 dataset, reaching 96.7 ± 0.35% and 41.0 ± 0.27% accuracy (mean ± standard deviation) in the train and test phases, respectively.
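Video classifiers such as the ViT evaluated here typically operate on a fixed number of frames sampled from each clip; a common uniform-sampling scheme (an assumption for illustration, not the paper's exact preprocessing) looks like:

```python
def sample_frame_indices(num_frames, clip_len):
    """Uniformly sample clip_len frame indices from a video with
    num_frames frames, taking the midpoint of each equal segment."""
    step = num_frames / clip_len
    return [min(int(i * step + step / 2), num_frames - 1)
            for i in range(clip_len)]
```

The sampled frames are then stacked into a clip tensor and fed to the spatial backbone, with the temporal model (here, an LSTM) aggregating the per-frame features.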


Subjects
Deep Learning, Humans, Neural Networks (Computer), Supervised Machine Learning, Human Activities, Motion
5.
Sensors (Basel) ; 23(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36772650

ABSTRACT

Medical thermography provides an overview of the human body with two-dimensional (2D) information that assists in identifying temperature changes, based on the analysis of surface distribution. However, this approach lacks spatial depth information, which can be supplied by adding multiple images or three-dimensional (3D) systems. The methodology applied in this paper therefore generates a 3D point cloud (from thermal infrared images), a 3D geometry model (from CT images), and the segmented inner anatomical structures. The following computational processing was employed: Structure from Motion (SfM), image registration, and alignment (affine transformation) between the 3D models obtained, in order to combine and unify them. This paper presents the 3D reconstruction and visualization of the geometry of the neck/bust and inner anatomical structures (thyroid, trachea, veins, and arteries). Additionally, it shows the whole 3D thermal geometry in different anatomical sections (i.e., coronal, sagittal, and axial), allowing it to be further examined by a medical team and improving pathological assessments. The generation of 3D thermal anatomy models allows for a combined visualization of functional and anatomical images of the neck region, achieving encouraging results. These 3D models correlate the inner and outer regions, which could benefit biomedical applications and future diagnosis using such a methodology.


Subjects
Three-Dimensional Imaging, Anatomic Models, Humans, Three-Dimensional Imaging/methods, Motion, Arteries, Computer-Assisted Image Processing
6.
PeerJ ; 11: e14558, 2023.
Article in English | MEDLINE | ID: mdl-36718456

ABSTRACT

Background: We investigated the concurrent validity and test-retest reliability of the Jumpo 2 and MyJump 2 apps for estimating jump height and the mean values of force, velocity, and power produced during countermovement jumps (CMJ) and squat jumps (SJ). Methods: Physically active university-aged men (n = 10, 20 ± 3 years, 176 ± 6 cm, 68 ± 9 kg) jumped on a force plate (i.e., the criterion) while being recorded by a smartphone slow-motion camera. The videos were analyzed with Jumpo 2 and MyJump 2 on a Samsung Galaxy S7 running Android. Validity and reliability were determined by regression analysis, typical errors of estimates and measurements, and intraclass correlation coefficients. Results: Both apps provided reliable estimates of jump height and of the mean values of force, velocity, and power. Furthermore, the estimates of jump height for the CMJ and SJ and of the mean force of the CMJ were valid. However, the apps presented impractical or poor validity correlations for velocity and power and, compared with the criterion, underestimated the velocity of the CMJ. Conclusions: Jumpo 2 and MyJump 2 both provide a valid measure of jump height, but the remaining variables provided by these apps must be viewed with caution, since the validity of force depends on jump type, while velocity (and, as a consequence, power) could not be well estimated by the apps.
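Video-based jump apps of this kind commonly derive jump height from flight time via the standard relation h = g·t²/8, with t measured between the takeoff and landing frames. A sketch under that assumption (not the apps' actual code):

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(takeoff_frame, landing_frame, fps):
    """Estimate jump height (m) from the flight time between the takeoff
    and landing video frames, using h = g * t^2 / 8."""
    t = (landing_frame - takeoff_frame) / fps
    return G * t * t / 8.0
```

At 240 fps slow motion, a one-frame error in the takeoff or landing detection shifts the flight time by about 4 ms, which is one reason jump-height estimates are more robust than the derived velocity and power values.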


Subjects
Posture, Smartphone, Male, Humans, Aged, Reproducibility of Results, Motion, Videotape Recording
7.
Sensors (Basel) ; 22(20)2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36298088

ABSTRACT

Several methods exist for human-robot physical interaction (HRpI) to provide physical therapy to patients. Haptics has become an option to display forces along a given path so as to guide the physiotherapy protocol. Critical in this regard is the motion control for haptic guidance that conveys the specifications of the clinical protocol. Given the inherent variability among patients, a key demand on these HRpI methods is the ability to modify their response online, neither rejecting nor neglecting interaction forces but processing them as patient interaction. In this paper, considering the nonlinear dynamics of a robot interacting bilaterally with a patient, we propose a novel adaptive control scheme that guarantees stable haptic guidance by processing the causality of patient interaction forces, despite unknown robot dynamics and uncertainties. The controller implements a radial basis neural network with daughter RASP1 wavelet activation functions to identify the coupled interaction dynamics. For an efficient online implementation, an output infinite impulse response filter prunes negligible signals and nodes to deal with overparametrization. This helps adapt online the feedback gains of a globally stable discrete PID regulator to yield stiffness control, so the user is guided within a perceptual force field. The effectiveness of the proposed method is verified in real-time bimanual human-in-the-loop experiments.
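The stiffness-control backbone mentioned above is a discrete PID regulator with adaptively tuned gains; the minimal fixed-gain version below illustrates the discrete PID building block only (the paper's neural-network-driven gain adaptation is not reproduced):

```python
class DiscretePID:
    """Minimal discrete PID regulator: rectangular integration and a
    backward-difference derivative over sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In the adaptive scheme described, kp, ki, and kd would be updated online from the identified interaction dynamics rather than fixed at construction.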


Subjects
Neurological Rehabilitation, Robotics, Humans, Robotics/methods, Motion, Neural Networks (Computer), Feedback
8.
Sensors (Basel) ; 22(20)2022 Oct 19.
Article in English | MEDLINE | ID: mdl-36298314

ABSTRACT

Computer vision techniques can monitor the rotational speed of rotating equipment or machines to understand their working condition and prevent failures. Such techniques are highly precise, contactless, and potentially suitable for applications without massive setup changes. However, traditional vision sensors collect a significant amount of data to process when measuring the rotation of high-speed systems, and they are susceptible to motion blur. This work proposes a new method for measuring the rotational speed of high-speed systems by processing event-based data from a neuromorphic sensor. This sensor produces event-based data and is designed to operate with high temporal resolution and high dynamic range. The main advantages of the Event-Based Angular Speed Measurement (EB-ASM) method are the high dynamic range, the absence of motion blur, and the possibility of measuring multiple rotations simultaneously with a single device. The proposed method uses the time difference between spikes within a kernel, or window, selected in the sensor's frame range. It is evaluated in two experimental scenarios: measuring the rotational speed of a fan and of a Computer Numerical Control (CNC) router spindle. The measurements were compared against a calibrated digital photo-tachometer. Based on the tests performed, the EB-ASM can measure rotational speed with a mean absolute error below 0.2% in both scenarios.
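The core of the spike-timing idea reduces to converting the mean inter-event period within the chosen window into a rotational speed. A simplified sketch of that conversion (assuming a known number of event-triggering marks per revolution, not the EB-ASM implementation itself):

```python
def rpm_from_event_times(event_times_us, marks_per_rev=1):
    """Estimate rotational speed (RPM) from timestamps (in microseconds)
    of successive events fired as marks on the rotor pass a fixed
    sensor window."""
    periods_us = [b - a for a, b in zip(event_times_us, event_times_us[1:])]
    mean_period_s = sum(periods_us) / len(periods_us) / 1e6
    rev_time_s = mean_period_s * marks_per_rev
    return 60.0 / rev_time_s
```

Because event timestamps carry microsecond resolution, the period estimate stays accurate at speeds where frame-based cameras would blur.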


Subjects
Movement, Motion, Physiologic Monitoring
9.
Sensors (Basel) ; 22(19)2022 Sep 27.
Article in English | MEDLINE | ID: mdl-36236421

ABSTRACT

Tracking objects that move along unknown trajectories is a challenging task. Conventional model-based controllers require detailed knowledge of a robot's kinematics and of the target's trajectory, and tracking precision relies heavily on the kinematics used to infer the trajectory. Control implementation in parallel robots is especially difficult due to their complex kinematics. Vision-based controllers are robust to uncertainties in a robot's kinematic model, since they can correct the end-point trajectory as error estimates become available; robustness is guaranteed by taking the vision sensor's model into account when designing the control law. All camera space manipulation (CSM) models in the literature are position-based, establishing a mapping between the end-effector position in Cartesian space and in sensor space. Such models are not appropriate for tracking moving targets, because the relationship between the target and the end effector is a fixed point. The present work builds upon the literature by presenting a novel velocity-based CSM control that establishes a relationship between a movable trajectory and the end-effector position. Its efficacy is shown on a Delta-type parallel robot in three types of experiments: (a) static tracking (average error of 1.09 mm); (b) constant-speed linear trajectory tracking at 7, 9.5, and 12 cm/s (tracking errors of 8.89, 11.76, and 18.65 mm, respectively); and (c) freehand trajectory tracking (maximum tracking error of 11.79 mm during motion and maximum static positioning error of 1.44 mm once the object stopped). The resulting control cycle time was 48 ms. These results show a reduction in tracking errors for this robot with respect to previously published control strategies.
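A velocity-based tracking law of the kind described maps a camera-space error to an end-effector velocity command. The toy proportional version below, with saturation, is purely illustrative; the paper's CSM formulation is more elaborate, and the gain and speed limit are made-up values:

```python
def velocity_command(target_xy, effector_xy, gain=2.0, v_max=0.12):
    """Toy velocity-based tracking law: command an end-effector velocity
    proportional to the camera-space position error, saturated at v_max
    (m/s) so fast targets do not demand unbounded speeds."""
    ex = target_xy[0] - effector_xy[0]
    ey = target_xy[1] - effector_xy[1]
    norm = (ex * ex + ey * ey) ** 0.5
    if norm == 0.0:
        return (0.0, 0.0)
    speed = min(gain * norm, v_max)
    return (speed * ex / norm, speed * ey / norm)
```

Commanding velocity rather than position lets the controller keep chasing a movable trajectory instead of converging to a fixed point.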


Subjects
Robotics, Biomechanical Phenomena, Motion, Robotics/methods, Ocular Vision
10.
Sensors (Basel) ; 22(17)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36080999

ABSTRACT

Object location is a crucial computer vision method, often used as a stage prior to object classification. Object-location algorithms require substantial computational and memory resources, which poses a difficult challenge for portable and low-power devices, even when the algorithm is implemented in dedicated digital hardware. Moving part of the computation to the imager may reduce the memory requirements of the digital post-processor and exploit the parallelism available in the algorithm. This paper presents the architecture of a Smart Imaging Sensor (SIS) that performs object location using pixel-level parallelism. The SIS is based on a custom smart pixel, capable of computing frame differences in the analog domain, and a digital coprocessor that performs morphological operations and connected-component analysis to determine the bounding boxes of the detected objects. The smart-pixel array implements on-pixel temporal difference computation using analog memories to detect motion between consecutive frames. The SIS can operate in two modes: (1) as a conventional image sensor, or (2) as a smart sensor that delivers a binary image highlighting the pixels in which movement was detected between consecutive frames, together with the object bounding boxes. We present the design of the smart pixel and evaluate its performance using post-layout parasitic extraction on a 0.35 µm mixed-signal CMOS process. With a pixel pitch of 32 µm × 32 µm, we achieved a fill factor of 28%. To evaluate the scalability of the design, we ported the layout to a 0.18 µm process, achieving a fill factor of 74%. On an array of 320 × 240 smart pixels, the circuit operates at a maximum frame rate of 3846 frames per second. The digital coprocessor was implemented and validated on a Xilinx Artix-7 XC7A35T field-programmable gate array that runs at 125 MHz, locates objects in a video frame in 0.614 µs, and consumes 58 mW.
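The coprocessor's connected-components step turns the binary motion image into bounding boxes; a compact software equivalent (illustrative only, not the FPGA implementation) using 4-connected flood fill:

```python
def bounding_boxes(binary):
    """Label 4-connected components in a binary image (list of rows of
    0/1) and return each component's bounding box as
    (min_row, min_col, max_row, max_col), in scan order."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                stack = [(r, c)]
                seen[r][c] = True
                r0 = r1 = r
                c0 = c1 = c
                while stack:  # iterative flood fill over the component
                    y, x = stack.pop()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

In the SIS, the binary input to this step is the motion image produced on-pixel in the analog domain, so only one bit per pixel reaches the digital side.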


Subjects
Algorithms, Computers, Motion