Results 1 - 8 of 8
1.
Front Physiol ; 13: 780917, 2022.
Article in English | MEDLINE | ID: mdl-35615677

ABSTRACT

Background: We evaluated the implications of different approaches to characterizing the uncertainty of calibrated parameters of microsimulation decision models (DMs) and quantified the value of such uncertainty in decision making. Methods: We calibrated the natural history model of colorectal cancer (CRC) to simulated epidemiological data with different degrees of uncertainty and obtained the joint posterior distribution of the parameters using a Bayesian approach. We conducted a probabilistic sensitivity analysis (PSA) on all model parameters under different characterizations of the uncertainty of the calibrated parameters, and estimated the value of uncertainty of each characterization with a value-of-information analysis. All analyses were run on high-performance computing resources using the Extreme-scale Model Exploration with Swift (EMEWS) framework. Results: The posterior distribution showed high correlation among some parameters; the parameters of the Weibull hazard function for the age of onset of adenomas had the strongest posterior correlation, at -0.958. When comparing the full posterior distribution with the maximum-a-posteriori estimate of the calibrated parameters, there was little difference in the spread of the distribution of cost-effectiveness analysis (CEA) outcomes, with similar expected values of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000 per quality-adjusted life year (QALY). Ignoring correlation in the calibrated parameters' posterior distribution produced the broadest distribution of CEA outcomes and the highest EVPI, $809, at the same WTP threshold. Conclusion: Different characterizations of the uncertainty of calibrated parameters affect the expected value of eliminating parametric uncertainty in the CEA. Ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of uncertainty.
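
As a rough illustration of the value-of-information quantity reported above, the sketch below estimates EVPI from PSA samples of net monetary benefit for two strategies. The cost and QALY distributions, the strategy set, and the sample size are invented placeholders; only the WTP threshold of $66,000/QALY comes from the abstract. This is not the authors' EMEWS-based pipeline.

```python
import numpy as np

# Sketch only: EVPI from PSA samples of net monetary benefit (NMB).
# Cost/QALY distributions and strategy set are hypothetical; the WTP
# threshold matches the $66,000/QALY used in the abstract.
rng = np.random.default_rng(0)
wtp = 66_000                      # willingness-to-pay threshold ($/QALY)
n = 10_000                        # number of PSA samples

# Columns = strategies, rows = PSA parameter draws (placeholder values).
costs = np.column_stack([rng.normal(12_000, 1_500, n),
                         rng.normal(14_500, 2_000, n)])
qalys = np.column_stack([rng.normal(9.00, 0.40, n),
                         rng.normal(9.05, 0.40, n)])

nmb = wtp * qalys - costs         # net monetary benefit per draw and strategy

# EVPI = E[max_d NMB(d, theta)] - max_d E[NMB(d, theta)]
evpi = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()
print(f"per-person EVPI at ${wtp:,}/QALY: ${evpi:,.0f}")
```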

2.
Electrophoresis ; 42(16): 1543-1551, 2021 08.
Article in English | MEDLINE | ID: mdl-33991437

ABSTRACT

A new tool for simulating electromigrative separations in paper-based microfluidic devices is presented. The implementation is based on a recently published complete mathematical model for this type of separation and was developed on top of the open-source toolbox electroMicroTransport, which is built on OpenFOAM®, inheriting features such as native 3D problem handling, support for parallel computation, and a GNU GPL license. The tool includes full support for paper-based electromigrative separations (including EOF and the recently described mechanical and electrical dispersion effects), compatibility with a well-recognized electrolyte database, and a novel algorithm for computing and controlling the electric current in arbitrary geometries. In addition, it can be installed on any operating system through a new installation option in the form of a Docker image. A validation example with data from the literature is included, and two further application examples are provided, including a 2D free-flow IEF problem that demonstrates the toolbox's ability to handle computational and physicochemical modeling challenges simultaneously. This tool will enable efficient and reliable numerical prototyping of paper-based electrophoretic devices, accompanying the rapid contemporary growth of paper-based microfluidics.
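
The abstract describes a solver for electromigrative transport; as a hedged, stand-alone sketch (not the electroMicroTransport/OpenFOAM® implementation), the snippet below integrates a one-dimensional electromigration-plus-diffusion equation for a single analyte zone with an explicit finite-difference scheme. The mobility, field strength, diffusion coefficient, and geometry are assumed values chosen only for illustration.

```python
import numpy as np

# 1D sketch of electromigration + diffusion of one analyte zone, solved with
# an explicit upwind/central finite-difference scheme on a periodic grid.
# Mobility, field, diffusivity, and geometry below are assumed values.
L, nx = 0.05, 500                      # channel length (m), grid points
dx = L / nx
x = np.linspace(0.0, L, nx)

mu = 3.0e-8                            # electrophoretic mobility (m^2 V^-1 s^-1)
E = 5.0e3                              # applied electric field (V/m)
D = 1.0e-9                             # diffusion coefficient (m^2/s)
v = mu * E                             # electromigration velocity (m/s)

dt = 0.4 * min(dx / v, dx**2 / (2 * D))               # stable explicit time step
c = np.exp(-((x - 0.01) ** 2) / (2 * (5.0e-4) ** 2))  # initial Gaussian zone

for _ in range(800):
    adv = -v * (c - np.roll(c, 1)) / dx                       # upwind (v > 0)
    dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    c += dt * (adv + dif)

print(f"zone peak migrated to x ≈ {x[np.argmax(c)] * 1e3:.1f} mm")
```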


Subjects
Microfluidics, Algorithms, Lab-on-a-Chip Devices, Theoretical Models, Software
3.
Biomimetics (Basel) ; 4(1)2019 Jan 25.
Article in English | MEDLINE | ID: mdl-31105195

ABSTRACT

The International Work Conference on Bioinspired Intelligence (IWOBI) is an annual event comprising an international peer-reviewed scientific conference together with workshops and other activities intended to foster the research abilities and expertise of young researchers in the field of bioinspired intelligence. IWOBI 2018 was characterized by a strong transdisciplinary component. The main conference themes sat at the intersection of classical engineering disciplines, computer science, and the life and health sciences, motivated by the scientific environment that shapes research conducted in Costa Rica. Even though IWOBI is an international event, it was very important for the local organizing committee to focus on knowledge areas of special interest to Costa Rican researchers and to students looking to start their scientific careers. With these expectations, IWOBI 2018 became the first IWOBI conference to run parallel tracks: in addition to the regular track, one track was devoted to biocomputation and related techniques and another to high-performance computing (HPC) applications in the life and health sciences. Workshops were another important element of IWOBI 2018. They were considered a valuable tool for fostering and training young researchers within the country and a valuable opportunity to establish direct networking with leading researchers from different countries and research areas. IWOBI 2018 was also the first IWOBI conference to implement dedicated workshops: there were two, one devoted to the COPASI software and the other focused on the use of the Message Passing Interface (MPI) parallel programming library.

4.
BMC Syst Biol ; 12(Suppl 5): 96, 2018 11 20.
Article in English | MEDLINE | ID: mdl-30458766

ABSTRACT

BACKGROUND: The Smith-Waterman (SW) algorithm is the best choice for finding similar regions between two DNA or protein sequences. However, it may become impracticable in some contexts due to its high computational demands. Consequently, the computer science community has focused on modern parallel architectures such as Graphics Processing Units (GPUs), Xeon Phi accelerators, and Field Programmable Gate Arrays (FPGAs) to speed up large-scale workloads. RESULTS: This paper presents and evaluates SWIFOLD: a Smith-Waterman parallel Implementation on FPGA with OpenCL for Long DNA sequences. First, we evaluate its performance and resource usage for different kernel configurations. Next, we compare the performance of our tool with other state-of-the-art implementations on three different datasets. SWIFOLD offers the best average performance for small and medium test sets, achieving a performance that is independent of input size and sequence similarity. In addition, SWIFOLD provides performance rates on the large dataset that are competitive with GPU-based implementations on the latest GPU generation. CONCLUSIONS: The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
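
For context on the kernel that SWIFOLD accelerates, here is a plain-Python reference of the Smith-Waterman local-alignment recurrence. The match/mismatch scores and the linear gap penalty are assumptions chosen for illustration, not SWIFOLD's parameters; the snippet only shows the dynamic-programming structure that the FPGA kernel parallelizes.

```python
# Plain-Python reference of the Smith-Waterman recurrence (linear gap penalty;
# scoring values are assumptions for illustration, not SWIFOLD's parameters).
def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Return the optimal local-alignment score between sequences a and b."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])   # local alignment: track the best cell
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```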


Subjects
Algorithms, Base Sequence, Sequence Alignment/methods, Software, Computational Biology, DNA/chemistry
5.
Front Physiol ; 9: 292, 2018.
Article in English | MEDLINE | ID: mdl-29643815

ABSTRACT

Atherosclerotic plaque rupture and erosion are the most important mechanisms underlying sudden plaque growth, responsible for acute coronary syndromes and even fatal cardiac events. Advances in the understanding of culprit plaque structure and composition have been reported in the literature; however, much work remains toward in-vivo plaque visualization and mechanical characterization to assess plaque stability, patient risk, diagnosis, and treatment prognosis. In this work, a methodology for the mechanical characterization of the vessel wall plaque and tissues is proposed, combining intravascular ultrasound (IVUS) image processing, data assimilation, and continuum mechanics models within a high-performance computing (HPC) environment. First, the IVUS study is gated to obtain volumes of image sequences corresponding to the vessel of interest at different cardiac phases. These sequences are registered against the sequence of the end-diastolic phase to remove the transversal and longitudinal rigid motions imposed by the heartbeat. Then, optical flow between the image sequences is computed to obtain the displacement fields of the vessel, each associated with a certain pressure level. The resulting displacement fields are regarded as observations within a data assimilation paradigm that aims to estimate the material parameters of the tissues within the vessel wall. Specifically, a reduced-order unscented Kalman filter is employed, endowed with a forward operator that amounts to solving a hyperelastic solid mechanics model in the finite-strain regime, taking into account the axially stretched state of the vessel as well as the internal and external forces acting on the arterial wall. Because of the computational burden, an HPC approach is mandatory; hence, the data assimilation and computational solid mechanics computations are parallelized at three levels: (i) the Kalman filter level; (ii) the cardiac phase level; and (iii) the mesh partitioning level. To illustrate the capabilities of this methodology toward the in-vivo analysis of patient-specific vessel constituents, mechanical material parameters are estimated using in-silico and in-vivo data retrieved from IVUS studies. Limitations and potentials of the approach are exposed and discussed.
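
To make the data assimilation step more concrete, the sketch below runs a single unscented Kalman measurement update that corrects a stiffness parameter from noisy displacement "observations". The thin-walled-tube forward model, the load steps, and all numerical values are illustrative assumptions; the paper's method uses a reduced-order UKF coupled to a hyperelastic finite-strain model and iterates over cardiac phases.

```python
import numpy as np

# Toy unscented Kalman measurement update for one material parameter.
# The thin-walled-tube forward model and all numbers are assumptions made
# for illustration; they are not the paper's hyperelastic FEM setup.
def forward(E, pressures, r=2.0e-3, t=0.5e-3):
    """Radial displacement of a thin-walled tube: u = p r^2 / (E t)."""
    return pressures * r**2 / (E * t)

rng = np.random.default_rng(1)
pressures = np.array([8e3, 10e3, 12e3, 14e3])     # Pa, assumed load levels
E_true = 80e3                                      # Pa, synthetic ground truth
obs = forward(E_true, pressures) + rng.normal(0.0, 2e-6, pressures.size)

m, P = 120e3, (40e3) ** 2                          # prior mean and variance of E
R = (2e-6) ** 2 * np.eye(pressures.size)           # observation noise covariance

# Unscented transform for a scalar parameter: 3 sigma points.
kappa = 2.0
spread = np.sqrt((1.0 + kappa) * P)
sig = np.array([m, m + spread, m - spread])
w = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])

Y = np.array([forward(s, pressures) for s in sig])  # propagate through model
y_mean = w @ Y
Pyy = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, Y)) + R
Pxy = sum(wi * (si - m) * (yi - y_mean) for wi, si, yi in zip(w, sig, Y))

K = Pxy @ np.linalg.inv(Pyy)                        # Kalman gain
m_post = m + K @ (obs - y_mean)                     # corrected stiffness estimate
P_post = P - K @ Pyy @ K                            # reduced parameter variance

print(f"prior E ≈ {m/1e3:.0f} kPa, posterior E ≈ {m_post/1e3:.1f} kPa "
      f"(synthetic truth {E_true/1e3:.0f} kPa)")
```

In the paper's setting each forward evaluation is a full solid-mechanics solve, which is why the filter, cardiac-phase, and mesh-partitioning levels are each parallelized in an HPC environment.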

6.
J. health inform ; 8(2): 73-79, Apr.-Jun. 2016. graf
Article in Portuguese | LILACS | ID: biblio-1113

ABSTRACT

Big Data is a term used to describe data sets whose capture, storage, distribution, and analysis require advanced methods and technologies because of some combination of their size (volume), update frequency (velocity), and diversity (heterogeneity). This paper presents a literature review of Big Data applications in public health and genomics. Several examples are described, and some technological challenges related to the analysis of these data are identified. The use of computational clouds for Big Data processing is also discussed. In our view, the cloud is an appropriate platform for processing large volumes of data and can be used in several applications related to public health and genomics. Several studies available in the literature and cited in this paper corroborate this view.


Subjects
Public Health, Databases, Genomics, Cloud Computing
7.
Adv Appl Bioinform Chem ; 8: 23-35, 2015.
Article in English | MEDLINE | ID: mdl-26604801

ABSTRACT

Today's genomic experiments have to process so-called "biological big data", which now reaches the size of terabytes and petabytes. Processing this huge amount of data on scientists' own workstations may require weeks or months. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of these data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic literature review surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities.
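
As a minimal example of the parallelism pattern this review surveys, the following sketch distributes a simple per-read computation (GC content) across worker processes with Python's multiprocessing module. The in-memory "reads" are placeholders; a real pipeline would stream FASTA/FASTQ records and typically scale out across cluster nodes rather than a single machine.

```python
from multiprocessing import Pool

# Minimal illustration of the embarrassingly parallel pattern common in
# genomics pipelines: fan a per-read computation out over worker processes.
# The in-memory "reads" are placeholders for a streamed FASTA/FASTQ input.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

if __name__ == "__main__":
    reads = ["ACGTACGGGC", "TTTTAACG", "GGGCCCGGA", "ATATATCG"] * 250_000

    with Pool(processes=4) as pool:
        gc = pool.map(gc_content, reads, chunksize=10_000)

    print(f"mean GC over {len(reads):,} reads: {sum(gc) / len(gc):.3f}")
```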

8.
SHB12 (2012) ; 2012: 25-32, 2012 Oct 29.
Article in English | MEDLINE | ID: mdl-28967001

ABSTRACT

Drug-related adverse events pose substantial risks to patients who consume post-market or investigational drugs. Early detection of adverse events benefits not only drug regulators but also manufacturers for pharmacovigilance. Existing methods rely on patients' "spontaneous" self-reports that attest to problems. The increasing popularity of social media platforms such as Twitter presents a new information source for finding potential adverse events. Given the high frequency of user updates, mining Twitter messages can enable near real-time pharmacovigilance. In this paper, we describe an approach to finding drug users and potential adverse events by analyzing the content of Twitter messages using Natural Language Processing (NLP) and building Support Vector Machine (SVM) classifiers. Due to the size of the dataset (i.e., 2 billion tweets), the experiments were conducted on a High-Performance Computing (HPC) platform using MapReduce, reflecting the trend toward big data analytics. The results suggest that everyday social networking data could help early detection of important patient-safety issues.
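
As a hedged sketch of the classification step described above (not the paper's MapReduce/HPC pipeline or its Twitter corpus), the snippet below trains a TF-IDF plus linear SVM model with scikit-learn to flag messages that may mention an adverse drug event. The example texts, labels, and the drug name "drug X" are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Sketch of the NLP + SVM classification step: TF-IDF features feeding a
# linear SVM that flags possible adverse-event mentions. The messages and
# labels below are invented placeholders, not the paper's Twitter corpus.
train_texts = [
    "started drug X last week and now I have a terrible rash",
    "this medication makes me so dizzy I can barely stand up",
    "picked up my prescription today, feeling hopeful",
    "great news, the new pharmacy finally opened downtown",
]
train_labels = [1, 1, 0, 0]   # 1 = possible adverse event mention, 0 = other

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["two days on drug X and my headache will not stop"]))
```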
