Results 1 - 20 of 422
1.
J Environ Manage; 367: 121996, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39088905

ABSTRACT

Monitoring forest canopies is vital for ecological studies, particularly for assessing epiphytes in rain forest ecosystems. Traditional methods for studying epiphytes, such as climbing trees and building observation structures, are labor-intensive, costly, and risky. Unmanned Aerial Vehicles (UAVs) have emerged as a valuable tool in this domain, offering botanists a safer and more cost-effective means to collect data. This study leverages AI-assisted techniques to enhance the identification and mapping of epiphytes using UAV imagery. The primary objective of this research is to evaluate the effectiveness of AI-assisted methods compared to traditional approaches in segmenting and identifying epiphytes from UAV images collected in a reserve forest in Costa Rica. Specifically, the study investigates whether Deep Learning (DL) models can accurately identify epiphytes against complex backgrounds, even with a limited dataset of varying image quality. Systematically, this study compares three traditional image segmentation methods (Auto Cluster, Watershed, and Level Set) with two DL-based segmentation networks: UNet and the Vision Transformer-based TransUNet. Results obtained from this study indicate that traditional methods struggle with the complexity of vegetation backgrounds and variability in target characteristics. Epiphyte identification results were quantitatively evaluated using the Jaccard score. Among traditional methods, Watershed scored 0.10, Auto Cluster 0.13, and Level Set failed to identify the target. In contrast, AI-assisted models performed better, with UNet scoring 0.60 and TransUNet 0.65. These results highlight the potential of DL approaches to improve the accuracy and efficiency of epiphyte identification and mapping, advancing ecological research and conservation.


Subjects
Unmanned Aerial Devices, Costa Rica, Ecosystem, Environmental Monitoring/methods, Deep Learning, Artificial Intelligence, Forests, Plants, Rainforest, Trees
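As a point of reference for the Jaccard scores reported above, here is a minimal sketch of how the index can be computed for binary segmentation masks (NumPy assumed; the toy masks stand in for predicted and ground-truth epiphyte regions):

```python
import numpy as np

def jaccard_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Jaccard index (intersection over union) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return intersection / union

# Toy example: a predicted epiphyte mask partially overlapping the ground truth.
pred = np.zeros((100, 100), dtype=np.uint8)
true = np.zeros((100, 100), dtype=np.uint8)
pred[20:60, 20:60] = 1
true[30:70, 30:70] = 1
print(f"Jaccard score: {jaccard_score(pred, true):.2f}")
```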
2.
Food Res Int; 192: 114836, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39147524

ABSTRACT

The classification of carambola, also known as starfruit, according to quality parameters is usually conducted by trained human evaluators through visual inspection. This is a costly and subjective method that can generate high variability in results. As an alternative, computer vision systems (CVS) combined with deep learning (DCVS) techniques have been introduced in the industry as a powerful and innovative tool for the rapid and non-invasive classification of fruits. However, validating the learning capability and trustworthiness of a deep learning model, often treated as a black box, in order to obtain insights can be challenging. To address this gap, we propose an integrated eXplainable Artificial Intelligence (XAI) method for the classification of carambolas at different maturity stages. We compared two architectures, Residual Neural Networks (ResNet) and Vision Transformers (ViT), to identify the image regions driving classification, complemented by a Random Forest (RF) model, with the aim of providing more detailed feature-level information for classifying the maturity stage. Changes in fruit colour and physicochemical data throughout the maturity stages were analysed, and the influence of these parameters on the maturity stages was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM) and attention maps, together with RF feature importance. The proposed approach provides a visualization and description of the most important image regions behind each model decision, complemented by the feature importances obtained from the RF model. Our approach has promising potential for standardized and rapid carambola classification, achieving 91% accuracy with ResNet and 95% with ViT, with potential application to other fruits.


Subjects
Averrhoa, Fruit, Neural Networks, Computer, Fruit/growth & development, Fruit/classification, Averrhoa/chemistry, Deep Learning, Artificial Intelligence, Color
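A hedged sketch of the Grad-CAM technique mentioned above, assuming PyTorch and torchvision (≥ 0.13); the ResNet-18 backbone, the three maturity classes, and the random input are placeholders rather than the study's actual models or data:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=3)  # three hypothetical maturity stages
model.eval()

activations, gradients = {}, {}
target_layer = model.layer4  # last convolutional block

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed fruit image
logits = model(x)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()          # gradients of the predicted class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # heat map normalised to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```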
3.
Parasit Vectors; 17(1): 329, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095920

ABSTRACT

BACKGROUND: Identifying mosquito vectors is crucial for controlling diseases. Automated identification studies using convolutional neural networks (CNNs) have been conducted for some urban mosquito vectors but not yet for the sylvatic mosquito vectors that transmit yellow fever. We evaluated the ability of the AlexNet CNN to identify four mosquito species (Aedes serratus, Aedes scapularis, Haemagogus leucocelaenus and Sabethes albiprivus) and whether there is variation in AlexNet's ability to classify mosquitoes based on pictures of four different body regions. METHODS: The specimens were photographed using a cell phone connected to a stereoscope. Photographs were taken of the full body, the pronotum and a lateral view of the thorax, and were pre-processed to train the AlexNet algorithm. The evaluation was based on the confusion matrix, the accuracy (ten pseudo-replicates) and the confidence interval for each experiment. RESULTS: Our study found that AlexNet can accurately identify mosquito pictures of the genera Aedes, Sabethes and Haemagogus with over 90% accuracy. Furthermore, the algorithm's performance did not change according to the body region submitted. It is worth noting that the state of preservation of the mosquitoes, which were often damaged, may have affected the network's ability to differentiate between these species, and thus accuracy rates could have been even higher. CONCLUSIONS: Our results support the idea of applying CNNs for artificial intelligence (AI)-driven identification of mosquito vectors of tropical diseases. This approach can potentially be used in the surveillance of yellow fever vectors by health services and the population as well.


Subjects
Aedes, Mosquito Vectors, Neural Networks, Computer, Yellow Fever, Animals, Mosquito Vectors/classification, Yellow Fever/transmission, Aedes/classification, Aedes/physiology, Algorithms, Image Processing, Computer-Assisted/methods, Culicidae/classification, Artificial Intelligence
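A minimal sketch of the kind of evaluation described above (per-replicate accuracy, a 95% confidence interval, and a confusion matrix), assuming scikit-learn; labels and predictions are simulated rather than taken from the study:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
species = ["Ae. serratus", "Ae. scapularis", "Hg. leucocelaenus", "Sa. albiprivus"]

accuracies = []
for _ in range(10):                               # ten pseudo-replicates
    y_true = rng.integers(0, 4, size=200)
    # simulated classifier that is right about 92% of the time
    y_pred = np.where(rng.random(200) < 0.92, y_true, rng.integers(0, 4, size=200))
    accuracies.append(accuracy_score(y_true, y_pred))

acc = np.array(accuracies)
mean, sem = acc.mean(), acc.std(ddof=1) / np.sqrt(len(acc))
print(f"accuracy: {mean:.3f} ± {1.96 * sem:.3f} (95% CI)")
print(confusion_matrix(y_true, y_pred))           # confusion matrix of the last replicate
```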
4.
Curr Med Chem; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39092736

ABSTRACT

BACKGROUND: Computational assessment of the energetics of protein-ligand complexes is a challenge in the early stages of drug discovery. Previous comparative studies on computational methods to calculate binding affinity showed that targeted scoring functions outperform universal models. OBJECTIVE: The goal here is to review the application of a simple physics-based model to estimate binding affinity. The focus is on a mass-spring system developed to predict binding affinity against cyclin-dependent kinase. METHOD: Publications in PubMed were searched to find mass-spring models to predict binding affinity. Crystal structures of cyclin-dependent kinases available in the Protein Data Bank and two web servers that calculate affinity from atomic coordinates were employed. RESULTS: One recent study showed how a simple physics-based scoring function (named Taba) could contribute to the analysis of protein-ligand interactions. The Taba methodology outperforms robust physics-based models implemented in docking programs such as AutoDock4 and Molegro Virtual Docker. Predictive metrics for 27 scoring functions and energy terms highlight the superior performance of the Taba scoring function for cyclin-dependent kinase. CONCLUSION: The recent progress of machine learning methods and the availability of these techniques through free libraries boosted the development of more accurate models to address protein-ligand interactions. Combining a naïve mass-spring system with machine-learning techniques generated a targeted scoring function with superior predictive performance to estimate pKi.
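To make the mass-spring idea concrete, the toy sketch below treats protein-ligand atom pairs within a cutoff as distance-based features and fits a simple regressor to pKi. It is an illustration under those assumptions only, not the published Taba scoring function, and all coordinates and affinities are synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge

def spring_features(protein_xyz, ligand_xyz, cutoff=6.0, n_bins=12):
    """Histogram of protein-ligand interatomic distances below a cutoff (in Å)."""
    d = np.linalg.norm(protein_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    d = d[d < cutoff]
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, cutoff))
    return hist.astype(float)

rng = np.random.default_rng(1)
# Simulated complexes: random coordinates and random pKi values, purely illustrative.
X = np.stack([spring_features(rng.normal(size=(300, 3)) * 8,
                              rng.normal(size=(30, 3)) * 4) for _ in range(50)])
y = rng.uniform(4.0, 9.0, size=50)          # placeholder pKi values

model = Ridge(alpha=1.0).fit(X, y)          # simple regressor standing in for the ML step
print("predicted pKi for first complex:", round(model.predict(X[:1])[0], 2))
```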

5.
Ann Hepatol; 29(5): 101528, 2024.
Article in English | MEDLINE | ID: mdl-38971372

ABSTRACT

INTRODUCTION AND OBJECTIVES: Despite the huge clinical burden of MASLD, validated tools for early risk stratification are lacking, and heterogeneous disease expression and a highly variable rate of progression to clinical outcomes result in prognostic uncertainty. We aimed to investigate longitudinal electronic health record-based outcome prediction in MASLD using a state-of-the-art machine learning model. PATIENTS AND METHODS: A cohort of n = 940 patients with histologically defined MASLD was used to develop a deep learning model for all-cause mortality prediction. Patient timelines, spanning 12 years, were fully annotated with demographic/clinical characteristics, ICD-9 and -10 codes, blood test results, prescribing data, and secondary care activity. A Transformer neural network (TNN) was trained to output concomitant probabilities of 12-, 24-, and 36-month all-cause mortality. In-sample performance was assessed using 5-fold cross-validation. Out-of-sample performance was assessed in an independent set of n = 528 MASLD patients. RESULTS: In-sample model performance achieved an AUROC of 0.74-0.90 (95% CI: 0.72-0.94), sensitivity 64%-82%, specificity 75%-92% and positive predictive value (PPV) 94%-98%. Out-of-sample model validation achieved an AUROC of 0.70-0.86 (95% CI: 0.67-0.90), sensitivity 69%-70%, specificity 96%-97% and PPV 75%-77%. Key predictive factors, identified using coefficients of determination, were age, presence of type 2 diabetes, and history of hospital admissions with length of stay >14 days. CONCLUSIONS: A TNN, applied to routinely collected longitudinal electronic health records, achieved good performance in prediction of 12-, 24-, and 36-month all-cause mortality in patients with MASLD. Extrapolation of our technique to population-level data will enable scalable and accurate risk stratification to identify people most likely to benefit from anticipatory health care and personalized interventions.


Subjects
Electronic Health Records, Humans, Male, Female, Middle Aged, Risk Assessment, Aged, Prognosis, Cause of Death, Deep Learning, Risk Factors, Predictive Value of Tests, Non-alcoholic Fatty Liver Disease/mortality, Non-alcoholic Fatty Liver Disease/diagnosis, Adult, Neural Networks, Computer, Retrospective Studies
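A minimal sketch of the reported metrics (AUROC, sensitivity, specificity, PPV) for a binary mortality label, assuming scikit-learn; the probabilities are simulated rather than produced by the study's Transformer model:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                          # toy 36-month mortality labels
y_prob = np.clip(0.3 * y_true + rng.random(500) * 0.7, 0, 1)   # toy predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                           # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUROC:       {roc_auc_score(y_true, y_prob):.2f}")
print(f"Sensitivity: {tp / (tp + fn):.2f}")
print(f"Specificity: {tn / (tn + fp):.2f}")
print(f"PPV:         {tp / (tp + fp):.2f}")
```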
6.
Radiol Bras; 57: e20230096en, 2024.
Article in English | MEDLINE | ID: mdl-38993952

ABSTRACT

Objective: To develop a natural language processing application capable of automatically identifying, from radiology reports, benign gallbladder diseases that require surgery. Materials and Methods: We developed a text classifier to classify reports as describing benign diseases of the gallbladder that do or do not require surgery. We randomly selected 1,200 reports describing the gallbladder from our database, including different modalities. Four radiologists classified the reports as describing benign disease that should or should not be treated surgically. Two deep learning architectures were trained for classification: a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. In order to represent words in vector form, the models included a Word2Vec representation, with dimensions of 300 or 1,000. The models were trained and evaluated by dividing the dataset into training, validation, and test subsets (80/10/10). Results: The CNN and BiLSTM performed well in both dimensional spaces. For the 300- and 1,000-dimensional spaces, respectively, the F1-scores were 0.95945 and 0.95302 for the CNN model, compared with 0.96732 and 0.96732 for the BiLSTM model. Conclusion: Our models achieved high performance, regardless of the architecture and dimensional space employed.
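A hedged sketch of the general Word2Vec-plus-BiLSTM pattern described above, assuming gensim and PyTorch; the tokenized reports, labels, and hyperparameters are placeholders, not the authors' pipeline:

```python
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Toy "reports" (already tokenized) with surgical (1.0) / non-surgical (0.0) labels.
reports = [["vesicula", "biliar", "com", "calculos"],
           ["vesicula", "biliar", "normal"]]
labels = torch.tensor([1.0, 0.0])

w2v = Word2Vec(sentences=reports, vector_size=300, window=5, min_count=1, epochs=50)

def encode(tokens, max_len=16):
    """Look up Word2Vec vectors and zero-pad to a fixed length."""
    vecs = [torch.tensor(w2v.wv[t]) for t in tokens][:max_len]
    pad = [torch.zeros(300)] * (max_len - len(vecs))
    return torch.stack(vecs + pad)

class BiLSTMClassifier(nn.Module):
    def __init__(self, emb_dim=300, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, seq, 2 * hidden)
        return self.head(out[:, -1])   # logit from the last time step

model = BiLSTMClassifier()
x = torch.stack([encode(r) for r in reports])
loss = nn.BCEWithLogitsLoss()(model(x).squeeze(1), labels)
loss.backward()
print("toy training loss:", round(loss.item(), 3))
```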



7.
Sensors (Basel); 24(14), 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066062

ABSTRACT

Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, present difficulties for this purpose due to their incomplete correspondence of projected coordinates. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time in markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors and compare the results to a baseline based on traditional checkerboard markers.
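A minimal sketch of how translation and orientation errors between a predicted and a reference camera-to-robot transform can be measured, assuming NumPy and SciPy; the 4x4 matrices here are synthetic stand-ins for the registration output described above:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def transform_errors(T_pred: np.ndarray, T_ref: np.ndarray):
    """Return translation error (input units) and rotation error in degrees."""
    t_err = np.linalg.norm(T_pred[:3, 3] - T_ref[:3, 3])
    R_delta = T_pred[:3, :3].T @ T_ref[:3, :3]              # relative rotation
    ang_err = np.degrees(np.linalg.norm(R.from_matrix(R_delta).as_rotvec()))
    return t_err, ang_err

# Synthetic "ground truth" hand-eye transform.
T_ref = np.eye(4)
T_ref[:3, :3] = R.from_euler("xyz", [10, -5, 30], degrees=True).as_matrix()
T_ref[:3, 3] = [0.50, 0.10, 0.30]                           # metres

# Synthetic prediction with a small pose offset.
T_pred = T_ref.copy()
T_pred[:3, :3] = R.from_euler("xyz", [11, -5, 29.5], degrees=True).as_matrix()
T_pred[:3, 3] += [0.010, -0.005, 0.002]

t_err, ang_err = transform_errors(T_pred, T_ref)
print(f"translation error: {t_err * 1000:.1f} mm, rotation error: {ang_err:.2f} deg")
```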

8.
Bioengineering (Basel); 11(7), 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39061753

ABSTRACT

Signal processing is a very useful field of study for the interpretation of signals in many everyday applications. In the case of applications with time-varying signals, one possibility is to represent them as graphs, drawing on graph theory to extend classical methods to the non-Euclidean domain. In addition, machine learning techniques have been widely used for pattern recognition in a wide variety of tasks, including the health sciences. The objective of this work is to identify and analyze the papers in the literature that address the use of machine learning applied to graph signal processing in the health sciences. A search was performed in four databases (Science Direct, IEEE Xplore, ACM, and MDPI), using search strings to identify papers within the scope of this review. Finally, 45 papers were included in the analysis, the first having been published in 2015, which indicates an emerging area. Among the gaps found, we can mention the need for better clinical interpretability of the results obtained in the papers, that is, not restricting results or conclusions simply to performance metrics. In addition, a possible research direction is the use of new transforms. It is also important to make new public datasets available that can be used to train the models.
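For context on the graph-signal-processing framing discussed above, a minimal sketch of the graph Fourier transform (projecting a node signal onto the eigenvectors of the graph Laplacian), assuming NumPy; the ring graph and signal are illustrative:

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                        # adjacency of a 6-node ring graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

D = np.diag(A.sum(axis=1))                # degree matrix
L = D - A                                 # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)      # graph "frequencies" and Fourier basis
signal = np.array([1.0, 0.8, 0.2, -0.1, 0.1, 0.9])   # e.g. one sensor reading per node

signal_hat = eigvecs.T @ signal           # graph Fourier transform
reconstructed = eigvecs @ signal_hat      # inverse transform
print("spectral coefficients:", np.round(signal_hat, 3))
print("reconstruction error:", np.abs(signal - reconstructed).max())
```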

9.
BMC Bioinformatics; 25(1): 231, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969970

ABSTRACT

PURPOSE: In this study, we present DeepVirusClassifier, a tool capable of accurately classifying Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) viral sequences among other subtypes of the Coronaviridae family. This classification is achieved through a deep neural network model that relies on convolutional neural networks (CNNs). Since viruses within the same family share similar genetic and structural characteristics, the classification process becomes more challenging, necessitating more robust models. With the rapid evolution of viral genomes and the increasing need for timely classification, we aimed to provide a robust and efficient tool that could increase the accuracy of viral identification and classification, contribute to advancing research in viral genomics, and assist in the surveillance of emerging viral strains. METHODS: Based on a one-dimensional deep CNN, the proposed tool is capable of training and testing on the Coronaviridae family, including SARS-CoV-2. Our model's performance was assessed using various metrics, including F1-score and AUROC. Additionally, artificial mutation tests were conducted to evaluate the model's generalization ability across sequence variations. We also used the BLAST algorithm and conducted comprehensive processing time analyses for comparison. RESULTS: DeepVirusClassifier demonstrated exceptional performance across several evaluation metrics in the training and testing phases, indicating its robust learning capacity. Notably, during testing on more than 10,000 viral sequences, the model exhibited a sensitivity of more than 99% for sequences with fewer than 2,000 mutations. The tool achieves superior accuracy and significantly reduced processing times compared to the Basic Local Alignment Search Tool (BLAST) algorithm. Furthermore, the results appear more reliable than those of the compared approaches, indicating that the tool has great potential to revolutionize viral genomic research. CONCLUSION: DeepVirusClassifier is a powerful tool for accurately classifying viral sequences, specifically focusing on SARS-CoV-2 and other subtypes within the Coronaviridae family. The superiority of our model becomes evident through rigorous evaluation and comparison with existing methods. Introducing artificial mutations into the sequences demonstrates the tool's ability to identify variations and significantly contributes to viral classification and genomic research. As viral surveillance becomes increasingly critical, our model holds promise in aiding rapid and accurate identification of emerging viral strains.


Subjects
COVID-19, Deep Learning, Genome, Viral, SARS-CoV-2, SARS-CoV-2/genetics, SARS-CoV-2/classification, Genome, Viral/genetics, COVID-19/virology, Coronaviridae/genetics, Coronaviridae/classification, Humans, Neural Networks, Computer
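A hedged sketch of the general approach of one-hot encoding nucleotide sequences and classifying them with a one-dimensional CNN, assuming PyTorch; the architecture, sequence length, and five classes are placeholders, not the published DeepVirusClassifier:

```python
import torch
import torch.nn as nn

NUCLEOTIDES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str, length: int = 1024) -> torch.Tensor:
    """Encode a DNA string as a (4, length) one-hot tensor, truncating or zero-padding."""
    x = torch.zeros(4, length)
    for i, base in enumerate(seq[:length]):
        if base in NUCLEOTIDES:
            x[NUCLEOTIDES[base], i] = 1.0
    return x

class SeqCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.conv(x).squeeze(-1))

model = SeqCNN()
batch = torch.stack([one_hot("ATGGCGTACGTTAGC" * 40), one_hot("ATGCCCGGGAAATTT" * 40)])
print(model(batch).shape)   # torch.Size([2, 5]) -> per-class logits
```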
10.
J Imaging; 10(7), 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39057732

ABSTRACT

Producing precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground-truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of 0.68 ± 0.08 on the unseen dataset, demonstrating commendable qualitative results.
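A minimal sketch of the 2.5D pseudo-RGB idea described above (stacking three adjacent axial slices so a 2D network still sees some through-plane context), assuming NumPy; the synthetic volume stands in for a T1-weighted scan:

```python
import numpy as np

def pseudo_rgb(volume: np.ndarray, z: int) -> np.ndarray:
    """Build an (H, W, 3) pseudo-RGB image from slices z-1, z, z+1 of a (Z, H, W) volume."""
    z0, z1 = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    stack = np.stack([volume[z0], volume[z], volume[z1]], axis=-1).astype(np.float32)
    lo, hi = stack.min(), stack.max()
    return (stack - lo) / (hi - lo + 1e-8)   # normalise to [0, 1] for the network

volume = np.random.rand(42, 256, 256)        # stand-in for a volumetric brain scan
img = pseudo_rgb(volume, z=20)
print(img.shape, img.min(), img.max())       # (256, 256, 3), values in [0, 1]
```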
