Second Annual Texas A&M Research Computing Symposium
Last Updated: June 19, 2018
Using Machine Learning Based Surrogate Models, Nonlinear Finite Element Analysis and Optimization Techniques to Design Road Safety Hardware
Akram Abu Odeh, Texas Transportation Institute, A-Abu-Odeh@tti.tamu.edu
Autonomous Computational Materials Discovery
Raymundo Arroyave, Anjana Talapatra, Shahin Boluki, Xiaoning Qian, Ed Dougherty
Microstructure-based Homogenization: FFT vs FEM
A. Cruzado 1,2*, J. Segurado 4, and A. Benzerga 1,2,3
1 Department of Aerospace Engineering, Texas A&M University. R. Bright Building, 701 Ross St, College Station, TX 77840, USA.
2 Center for Intelligent Multifunctional Materials and Structures (CiMMS), TEES, Texas A&M University, College Station, TX 77840, USA.
3 Department of Materials Science & Engineering, Texas A&M University, College Station, TX 77840, USA.
4 Department of Materials Science, Technical University of Madrid/Universidad Politécnica de Madrid, E.T.S. de Ingenieros de Caminos, 28040 Madrid, Spain.
* email@example.com, firstname.lastname@example.org, email@example.com
Advances in computational materials science and computing resources have shifted the paradigm in materials discovery from empirical trial-and-error to virtual testing. In a range of applications where thermo-mechanical performance is key, virtual testing implies resolving microstructural features and carrying out computational homogenization. This problem becomes increasingly challenging when complex non-linear material models are used. The effective behavior of materials with microstructures is affected by various microstructural features (grains, phases, precipitates, etc.) and thus depends on the attributes of these features. Mean-field models typically reduce the microstructure to its volume fractions. In many cases, however, spatial fluctuations in the thermo-mechanical fields are critical to obtaining the average response, especially for phenomena driven by localized fields (fracture, fatigue, etc.). The finite element method (FEM) is commonly used to compute the effective response of a representative volume element (RVE). However, high-fidelity simulations are computationally demanding, both in terms of CPU power and memory. Alternatively, spectral formulations based on the fast Fourier transform (FFT) provide efficient algorithms that improve computational performance by orders of magnitude. Here, we report on systematic comparisons between FEM and FFT analyses of various RVEs in a matrix-particle system in which the matrix obeys a nonlinear transformation-induced inelastic behavior and the precipitates are elastic, within a small-strain formulation. The FFT calculations were carried out on a single processor, whereas the FEM analyses were performed using up to 200 CPUs. This provides a systematic way to assess the computational efficiency of the FFT-based code.
In turn, a future parallelized FFT code would offer an unprecedented capability for dealing with very large RVE sizes that will remain out of reach of FEM-type methods for a long time. Alternatively, a serial FFT code can be used to conduct high-throughput analyses that guide microstructure-based design of materials.
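As a minimal sketch of the spectral approach, here is the basic fixed-point FFT scheme of Moulinec and Suquet applied to a scalar analogue (periodic thermal conduction on a pixel grid) rather than to the small-strain inelastic system of the abstract; all function and variable names are illustrative only:

```python
import numpy as np

def fft_homogenize(k, E=(1.0, 0.0), k0=None, tol=1e-8, maxit=500):
    """Effective conductivity k_eff_xx of a periodic n-by-n microstructure
    via the basic fixed-point (Moulinec-Suquet) FFT scheme."""
    n = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())       # reference medium
    xi = np.fft.fftfreq(n)
    X0, X1 = np.meshgrid(xi, xi, indexing="ij")
    xi2 = X0**2 + X1**2
    xi2[0, 0] = 1.0                          # zero frequency handled separately
    e = np.stack([np.full((n, n), E[0]), np.full((n, n), E[1])])
    for _ in range(maxit):
        tau = (k - k0) * e                   # polarization field
        th = np.fft.fft2(tau)                # transforms the last two axes
        # Green operator of the reference medium:
        # e_hat = -xi (xi . tau_hat) / (k0 |xi|^2) for xi != 0
        proj = (X0 * th[0] + X1 * th[1]) / (k0 * xi2)
        eh = np.stack([-X0 * proj, -X1 * proj])
        eh[:, 0, 0] = np.array(E) * n * n    # enforce the mean gradient E
        e_new = np.real(np.fft.ifft2(eh))
        if np.linalg.norm(e_new - e) < tol * max(np.linalg.norm(e_new), 1e-30):
            e = e_new
            break
        e = e_new
    q = k * e                                # local flux
    return q[0].mean() / E[0]
```

For a microstructure layered parallel to the applied gradient, the scheme reproduces the arithmetic (Voigt) average; one serial call per microstructure is what makes FFT solvers attractive for high-throughput studies.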
Computational Chemistry: Guiding the Design of Molecular Devices
Andreas Ehnbom, Michael B. Hall§, John A. Gladysz§, Department of Chemistry (§Faculty Advisors)
Recent efforts to synthesize molecular gyroscopes containing a platinum core have unveiled several competing molecular architectures (see abstract graphics, compounds a-d).1 We sought to understand whether there exists any thermodynamic preference for one of these species and, moreover, to probe how the linker length (n) and ligand set (L = F, Cl, Br, I, H, Me, Ph) would impact the relative stability of a-d, as this could help steer experimental endeavors in a clear direction. A combination of density functional theory and molecular dynamics was used to probe these issues, using both gas-phase calculations and solvent-corrected models. Dispersion corrections were also implemented to account for the dispersion interactions that were of particular importance for compounds with large linker lengths (n=18–22). Calculated data were compared to experimentally obtained X-ray structures and showed good structural agreement. It was concluded that for short to medium linker lengths (n=10–14), the gyroscope architecture (a) proved to be the most stable isomer in the presence of small to medium-sized ligands (L=H, halides). However, the "parachute" structure (b) was favored for larger ligands (L=Me, Ph) with short linker lengths. The "ear-type" structures (c, d) were high in energy relative to a-b for short-linker species (n=10–14) due to steric effects exerted in two of the smaller macrocycles, or "ears". Interestingly, c and d are closer in energy to b when the linker length increases (n≥16), and this tendency has also been observed experimentally. These investigations will help us direct the synthesis to specifically target a-d by providing a rationale for the stereoelectronic underpinnings of how components such as ligand type and linker length impact the relative stabilities. (1) Joshi, H.; Kharel, S.; Ehnbom, A.; Skopek, K.; Hess, G. D.; Fiedler, T.; Hampel, F.; Bhuvanesh, N.; Gladysz, J. A. J. Am. Chem. Soc. 2018, XX, XXX (in press). DOI: 10.1021/jacs.8b02846
Mapping the stiffness distribution of solids using digital cameras, mechanics-based inverse algorithms, and high-performance computing
Maulik Kotecha, Mechanical Engineering Department, Texas A&M University, firstname.lastname@example.org
In this talk, the feasibility of recovering the material property distribution of a heterogeneous sample is presented. This is done using inverse algorithms with only measured surface displacements for a three-dimensional sample, or just the boundary displacements for a two-dimensional sample, without making any assumptions about local homogeneities or the material property distribution. The findings of this research could benefit breast cancer detection and other medical imaging applications. The inverse approach also finds application in manufacturing and materials science, for non-destructive testing and material characterization. To better represent actual scenarios, the measured displacements used in simulated experiments are augmented with noise significantly higher than the anticipated measurement noise. Two-dimensional problems in plane strain and three-dimensional problems with multiple stiff inclusions are tested in simulated experiments. The inverse method recovers the shear modulus values in the inclusions and the background well, and clearly reveals the shape and location of the inclusions. Actual experimental data from a digital image correlation system are utilized in the second part of the talk to demonstrate the feasibility of solving the inverse problem with full-field measurements. The finite element method is used in solving the inverse problem, which requires high computational power and, consequently, long computation times. With efficient incorporation of OpenMP and MPI parallelization at various stages of the computation, the required computational time is significantly reduced.
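The core idea of inferring stiffness from measured displacements can be shown with a drastically simplified 1D toy, nothing like the regularized 3D FEM inverse problem of the talk; the function name and setup are hypothetical:

```python
def recover_stiffness(u, F):
    """1D toy inverse problem: a serial chain of springs loaded by an end
    force F. At equilibrium every spring carries the same force F, so each
    element stiffness follows directly from the measured nodal
    displacements u (u[0] is the fixed end): k_i = F / (u[i+1] - u[i])."""
    return [F / (u[i + 1] - u[i]) for i in range(len(u) - 1)]
```

In higher dimensions no such direct formula exists, which is why the talk resorts to iterative FEM-based inversion; the toy only illustrates that displacement data encode the stiffness distribution.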
Parallel Finite Element Methods to Simulate Dynamics of Earthquakes and Fracturing
Dunyu Liu (email@example.com), Benchun Duan (firstname.lastname@example.org), Bin Luo (email@example.com), Department of Geology and Geophysics
We focus on numerical modeling of the dynamics of earthquakes and fracturing, involving physics-based spontaneous rupture propagation, ground shaking induced by earthquakes, hydraulic fracturing, and earthquake cycles spanning hundreds of years. We develop and maintain EQdyna and EQquasi, two finite element codes written in Fortran and parallelized with MPI and OpenMP. EQdyna features an under-integrated hexahedral element stabilized by hourglass control. With explicit central-difference time integration, the method has proven efficient and accurate in simulating spontaneous rupture and wave propagation. It adopts the traction-at-split-node technique to model ruptures and fractures. In recent years, we have incorporated a perfectly matched layer absorbing boundary to reduce model sizes, and a coarse-grained Q model to simulate the anelastic attenuation of seismic waves by realistic earth materials. Efforts are also devoted to implementing experimentally derived rate- and state-dependent friction. Modeling the physical processes of earthquake cycles requires temporal scales from seconds to tens of years, spatial discretization on the order of a hundred meters, and earthquake faults tens of kilometers in size. We have developed the implicit finite element code EQquasi to meet this challenge. EQquasi features fully integrated hexahedral elements and highly efficient parallel sparse linear solvers such as PARDISO and MUMPS. Further efforts to scale up the model size using domain decomposition with MPI are in progress. Because of the intrinsic complexities of earthquake cycle simulation, we alternately invoke EQdyna for co-seismic spontaneous ruptures lasting seconds and EQquasi for inter-seismic deformation over tens of years. A batch script enables automation of the process to simulate many earthquake cycles on Ada. Typically, our simulations on Ada use tens to hundreds of threads for several hours.
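The explicit central-difference update at the heart of codes like EQdyna can be sketched for a single degree of freedom; this is a generic textbook illustration, not EQdyna's implementation, and the names are hypothetical:

```python
import math

def central_difference(omega, dt, t_end, u0=1.0, v0=0.0):
    """Explicit central-difference time stepping for u'' = -omega^2 * u,
    the single-degree-of-freedom analogue of the update used in explicit
    lumped-mass FEM codes: u_{n+1} = 2 u_n - u_{n-1} + dt^2 * a_n.
    Conditionally stable (dt < 2/omega) and second-order accurate."""
    a0 = -omega**2 * u0                        # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious step u_{-1}
    u = u0
    for _ in range(round(t_end / dt)):
        u_next = 2.0 * u - u_prev + dt**2 * (-omega**2 * u)
        u_prev, u = u, u_next
    return u
```

In a full explicit FEM code, the scalar acceleration term is replaced by the lumped-mass solve M_lumped^{-1} (f_ext - f_int), which keeps each step embarrassingly parallel; the conditional stability limit is what ties the time step to the smallest element.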
Experimental Serial and Parallel Algorithms for the Maximum Clique Problem
Yang Liu, HPRC, firstname.lastname@example.org
Given a graph, the Maximum Clique problem is to find a clique with the largest number of vertices in the graph. This is an important NP-hard problem with numerous real-world applications: coding theory, distributed fault diagnosis, geometry, etc. In this talk we will trace the development of our experimental serial algorithms and their performance speedups, and discuss our investigations into, and the challenges of, MPI parallelization of those serial algorithms.
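For readers unfamiliar with the problem, a textbook Carraghan-Pardalos-style branch-and-bound (not the experimental algorithms of the talk) can be sketched as follows; names are illustrative:

```python
def max_clique(adj):
    """Exact maximum clique by branch and bound.
    adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]               # new incumbent
        for v in list(candidates):
            # bound: even taking every remaining candidate cannot beat best
            if len(clique) + len(candidates) <= len(best):
                return
            candidates.remove(v)           # branches after this exclude v
            expand(clique + [v], candidates & adj[v])

    expand([], set(adj))
    return best
```

Serial improvements in the literature mostly sharpen the bound (e.g. with greedy coloring) and the vertex ordering; parallelizing the irregular search tree over MPI ranks is the hard part the talk addresses.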
TAMU HPRC OnDemand – A Gateway for Scientific and Big Data Discoveries
Ping Luo, HPRC, email@example.com
High performance computing has become increasingly important for scientific and big data research. Open OnDemand, an open source web portal system developed at the Ohio Supercomputing Center, makes it easy and convenient to utilize HPC for research. TAMU HPRC OnDemand is a local installation and customization of Open OnDemand on the Ada cluster. In this presentation, we will discuss how to perform traditional HPC tasks through TAMU HPRC OnDemand, and also demonstrate the power of its interactive apps. Finally, we will present current user statistics and user feedback on TAMU HPRC OnDemand.
Issues with near real-time non-linear analysis of bridge data
John M. Nichols, Department of Construction Science, TAMU, firstname.lastname@example.org; Adrienn K. Tomor, Department of Geography and Building Management, University of the West of England, Bristol
There is an eternal conflict between the reality of experimental data and the available theory to explain the results. Massive acceleration data sets for large bridges provide high-precision data across broad temperature and loading ranges. Two significant issues arise: the first is analyzing the data in near real time; the second is the non-linear elasticity arising from high axial loads combined with bending moments, which directly affects the measured natural frequency response. The objective of this paper is to outline the methods used to deal with these issues on bridges in Europe and the USA.
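To illustrate why axial load shifts the measured frequencies, consider the textbook linear Euler-Bernoulli result for a simply supported beam under axial force, a linearized analogue of the non-linear behavior discussed above; the function name is hypothetical:

```python
import math

def beam_frequency(n, L, EI, rho_A, N=0.0):
    """Mode-n natural frequency (Hz) of a simply supported Euler-Bernoulli
    beam of length L, bending stiffness EI, and mass per length rho_A,
    under axial force N (positive in tension). Tension raises the
    frequency, compression lowers it, and f -> 0 as N approaches the
    Euler buckling load -n^2 pi^2 EI / L^2."""
    k = n * math.pi / L                                 # mode wavenumber
    f0 = (k**2 / (2.0 * math.pi)) * math.sqrt(EI / rho_A)
    return f0 * math.sqrt(1.0 + N / (k**2 * EI))
```

Because the frequency depends on the (temperature- and load-dependent) axial force, a naive constant-stiffness model misreads monitoring data, which motivates the non-linear treatment in the paper.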
Planting spin-glass solutions for algorithm benchmarking
Dilina Perera, Department of Physics and Astronomy, Texas A&M University; Firas Hamze, D-Wave Systems Inc.; Helmut G. Katzgraber, Department of Physics and Astronomy, Texas A&M University
Many optimization problems belong to the NP-hard complexity class, and, in practice, can only be tackled with heuristic algorithms. Evaluating the performance of such algorithms requires hard benchmark problems with known solutions. Here we introduce an efficient, tunable approach for generating spin-glass instances of arbitrary size for which the optimal solutions are known by construction. The underlying principle of this method is to decompose the model graph into edge-disjoint subgraphs, and use elementary building blocks with different levels of frustration. The building blocks are constructed such that they share a common local ground state, which guarantees that the ground state of the entire problem is known a priori. Using population annealing Monte Carlo and simulated annealing, we compare the computational hardness of the planted problems across various instance classes that result from the numerous ways in which the building blocks can be chosen. Our method offers significant advantages over previous approaches for planting solutions in that it is easy to implement, scales to arbitrary sizes, requires no computational overhead, and the problems have highly tunable typical complexity.
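The general idea of planting a known ground state can be illustrated with a toy frustrated-loop generator; this is not the edge-disjoint building-block construction of the abstract, only a minimal sketch of the planting principle, with hypothetical names:

```python
import random

def plant_frustrated_loops(n, num_loops, loop_len=4, seed=0):
    """Toy frustrated-loop planting: pick a random state s, then build the
    Ising Hamiltonian H = -sum_ij J_ij sigma_i sigma_j from random cycles
    in which exactly one edge is frustrated by s. Frustration forces every
    configuration to violate at least one edge per cycle, while s violates
    exactly one, so s is a ground state by construction."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    J = {}
    for _ in range(num_loops):
        loop = rng.sample(range(n), loop_len)   # a random simple cycle
        bad = rng.randrange(loop_len)           # the one frustrated edge
        for i in range(loop_len):
            a, b = loop[i], loop[(i + 1) % loop_len]
            sign = -1 if i == bad else 1        # satisfy s except on `bad`
            key = (min(a, b), max(a, b))
            J[key] = J.get(key, 0) + sign * s[a] * s[b]
    e0 = -sum(w * s[a] * s[b] for (a, b), w in J.items())  # energy of s
    return J, s, e0
```

Because each cycle's minimum is attained simultaneously by s, the sum of the per-cycle minima is achieved globally, so the planted energy e0 is provably optimal; this "sum of locally minimized blocks" argument is the same principle the abstract's construction exploits with richer building blocks.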
Using Computers as Tools to Design Therapeutics, Materials and Elucidate the Structures of Key Biological Axes
Phanourios Tamamis, Artie McFerrin Department of Chemical Engineering, Texas A&M University
The Tamamis computational lab utilizes state-of-the-art tools in molecular dynamics simulations and free energy calculations, and develops innovative computational protocols for the design of novel protein-based therapeutics, the design of functional amyloid biomaterials, the study of RNA modifications interacting with proteins, and the structural and energetic elucidation of biomolecular complexes involving small compounds, peptides, and RNA binding to proteins. In this talk, we will present an overview demonstrating the importance of using computers as tools to study the inhibition and disassembly of the amyloid fibril formation associated with diabetes and with Alzheimer's and Parkinson's diseases, and to design compounds and proteins to serve as therapeutics for these amyloid diseases; to design novel functional amyloid biomaterials with advanced applications and understand peptide self-assembly at the molecular level; and to study and structurally delineate the interactions formed in biomolecular complexes involving RNAs, proteins, and small compounds.
Big Computing in High Energy Physics
David Toback, Texas A&M Department of Physics, email@example.com
High energy particle physicists are, and have been, leaders in Big Data and Big Computing for decades. In this talk we will focus on the big collaborations (including the Large Hadron Collider, which recently discovered the Higgs boson) and their needs, as well as how we work with the rest of our collaborators doing dark matter searches, astronomy, and large-scale theoretical calculations and simulations. We will discuss our use of the Brazos cluster for the bulk of our computing needs, because it has allowed us both to meet our high-throughput requirements and to handle the challenges of working with collaborators, data, and software from around the world in a grid computing environment. Finally, we will present some results on how well things have worked, along with some comments about what has worked and what would be helpful in the future.
High performance computing enriches understanding of chromosome segregation
Qi Zheng, TAMU School of Public Health (firstname.lastname@example.org)
A cell's ploidy value is the number of chromosomes (genomes) residing in that cell. Biologists now believe that bacterial polyploidy is more common than previously thought. For a given mutation, a cell is called homozygous if all genomes in that cell carry the same mutation of interest. A long-standing challenge is to explain how homozygosity arises in a highly polyploid cell population. Let a cell having c genomes carry just one mutated genome; simulation suggests that, if chromosome segregation is random, one would wait on average c cell generations to see a homozygous cell arise. With c=100, one would then see about 10^30 cell divisions, which is roughly the total number of microbial cell divisions occurring annually on Earth. To escape this dilemma, biologists in 1980 proposed a model of nonrandom segregation. More recently, biologists have favored the gene conversion hypothesis. The random segregation model, despite its conceptual simplicity and intuitive appeal, fell out of favor before its elementary properties were ever understood. The advent of high performance computing has freed the random segregation model from the shackles of intractable mathematics. Using an agent-based simulation approach, I have gained glimpses into the joint effects of mutation and selection on the formation of homozygosity. For example, if cells carrying relatively more mutated genomes are selected every 20 generations, a succession of three rounds of selection is sufficient to produce a large number of homozygous cells having a ploidy value of 100. This and other computational findings shed new light on the hypothesis of random chromosome segregation and may have important evolutionary implications.
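A stripped-down sketch of the random segregation model (a single tracked lineage, no mutation or selection, hypothetical names, not the author's agent-based code) shows the basic dynamics:

```python
import random

def segregate_lineage(c, m0, rng):
    """Follow one cell lineage under random segregation: a cell with c
    genome copies (m of them mutated) duplicates every copy (2c copies,
    2m mutated) and passes a uniformly random half to the tracked
    daughter, i.e. a hypergeometric draw. Absorbs at m = 0 or m = c
    (homozygous wild-type or homozygous mutant)."""
    m, gens = m0, 0
    while 0 < m < c:
        pool = [1] * (2 * m) + [0] * (2 * (c - m))
        m = sum(rng.sample(pool, c))   # daughter's mutated copies
        gens += 1
    return m == c, gens

def fixation_probability(c, m0, trials=10000, seed=1):
    """Monte Carlo estimate of the chance a lineage ends homozygous-mutant.
    The mutant fraction is a martingale under random segregation, so the
    neutral expectation is m0/c."""
    rng = random.Random(seed)
    fixed = sum(segregate_lineage(c, m0, rng)[0] for _ in range(trials))
    return fixed / trials
```

Starting from a single mutated copy, only about 1 in c lineages ever fixes, which conveys why homozygosity is so slow to emerge under pure random segregation and why selection every few generations changes the picture so dramatically.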