9:00 | BREAKFAST |
---|---|
9:30 | Lucas Bouck (Carnegie Mellon University): Discontinuous Galerkin Method for Advection Diffusion with Nonsmooth Velocity |
In this talk, we study the advection-diffusion equation with a nonsmooth velocity field. Often in fluid problems or PDE-constrained optimization problems, the velocity field lies in a Sobolev space weaker than the space of Lipschitz functions, which poses challenges for numerics and analysis. To solve the problem, we propose a hybridizable discontinuous Galerkin (HDG) method combined with classical upwinding. We prove that our discrete solution converges strongly to a renormalized solution of the transport equation as the mesh size and the diffusion coefficient go to 0, even in the presence of boundary layers. A key feature of our analysis is that it imposes no condition on the mesh size or the diffusion coefficient. | |
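As orientation for the setting above, a minimal sketch of the model problem (the precise equation, data, and boundary conditions used in the talk may differ):
\[
\partial_t u + \operatorname{div}(\mathbf{b}\,u) - \varepsilon\,\Delta u = 0,
\]
where \(\varepsilon > 0\) is the diffusion coefficient and the velocity \(\mathbf{b}\) has only Sobolev (say \(W^{1,p}\)-type) rather than Lipschitz regularity. The limit object referred to in the abstract is a renormalized (DiPerna-Lions type) solution of the pure transport equation \(\partial_t u + \operatorname{div}(\mathbf{b}\,u) = 0\), reached as both the mesh size \(h\) and \(\varepsilon\) tend to zero.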
9:50 | Sarswati Shah (George Mason University): Weakly compressible two-layer shallow-water flows with friction along channels |
We present a weakly compressible approach to describe two-layer shallow-water flows in channels with arbitrary cross sections. The standard approach for these flows results in a conditionally hyperbolic balance law with non-conservative products, whereas the current model is unconditionally hyperbolic. A detailed description of the properties of the model is provided, including entropy inequalities and entropy stability. Furthermore, a high-resolution, non-oscillatory, semi-discrete, path-consistent central-upwind scheme is presented. The scheme extends existing central-upwind semi-discrete numerical methods for hyperbolic balance laws. Properties of the model, such as positivity and well-balancedness, will be discussed. Along with the description of the scheme and proofs of these properties, we present several numerical experiments that demonstrate the robustness of the numerical algorithm. | |
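For context on the hyperbolicity statement above, a generic sketch (not the specific two-layer model of the talk): one-dimensional balance laws with non-conservative products take the quasilinear form
\[
\partial_t U + A(U)\,\partial_x U = S(U, x),
\]
which is hyperbolic when \(A(U)\) has real eigenvalues and a complete set of eigenvectors. The classical two-layer shallow-water system satisfies this only conditionally (hyperbolicity can be lost when the shear between the layers becomes large), which is the deficiency the weakly compressible formulation is designed to remove.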
10:10 | Mansur Shakipov (University of Maryland, College Park): Inf-Sup Stability of Evolving Surface Parametric FEM |
We develop a parabolic inf-sup theory for the continuous-in-time, discrete-in-space evolving surface parametric FEM. We discuss sufficient conditions for the inf-sup stability (in the sense of Banach-Nečas) to be uniform and how geometric errors pollute the quasi-best approximation result. As a byproduct of the analysis, both results hold under minimal assumptions on data regularity. We conclude with numerical simulations. | |
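For reference, the Banach-Nečas (Banach-Nečas-Babuška) well-posedness conditions alluded to above read, for a bounded bilinear form \(b\) on \(U \times V\),
\[
\inf_{0 \neq u \in U}\,\sup_{0 \neq v \in V}\,\frac{b(u,v)}{\|u\|_U\,\|v\|_V} \;\geq\; \beta > 0,
\qquad
\sup_{u \in U} b(u,v) > 0 \quad \text{for every } v \neq 0.
\]
In the evolving-surface setting the relevant point is that the discrete constant \(\beta\) be uniform with respect to the mesh size and time; the specific parabolic spaces and norms are those of the talk and are not reproduced here.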
11:00 | Keegan Kirk (George Mason University): How to Insulate Optimally |
Given a fixed amount of insulating material, how should one coat a heat-conducting body to optimize its insulating properties? A rigorous asymptotic analysis reveals that this problem can be cast as a convex variational problem with a non-smooth boundary term. As this boundary term is difficult to treat numerically, we consider an equivalent (Fenchel) dual variational formulation more amenable to discretization. We propose a numerical scheme to solve this dual formulation on the basis of a discrete duality theory inherited by the Raviart-Thomas and Crouzeix-Raviart finite elements, and show that the solution of the original primal problem can be reconstructed locally from the discrete dual solution. We discuss the a posteriori and a priori error analysis of our scheme, derive a posteriori estimators based on convex optimality conditions, and present numerical examples to verify the theory. As an application, we consider the design of an optimally insulated home. | |
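The (Fenchel) duality invoked above follows the standard pattern; as a generic statement (not the talk's specific functionals), for convex \(F\), \(G\) and a bounded linear operator \(\Lambda\), and under a suitable constraint qualification,
\[
\inf_{u}\,\big[\,F(u) + G(\Lambda u)\,\big] \;=\; \sup_{p}\,\big[-F^*(\Lambda^* p) - G^*(-p)\big],
\]
where \(F^*\), \(G^*\) denote convex conjugates. In discrete dualities of this kind the flux-like dual variable is typically approximated with Raviart-Thomas elements and the primal one with Crouzeix-Raviart elements, consistent with the local reconstruction described above.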
11:20 | Xuenan Li (Columbia University): A Constrained Optimization Approach for Constructing Rigid Bar Frameworks with Higher-order Rigidity |
We present a systematic approach for constructing bar frameworks with higher-order rigidity using constrained optimization, where each local minimum corresponds to a prestress stable configuration that is not first-order rigid. By allowing certain edge lengths to vary and optimizing the length of specific free edges, we demonstrate that the resulting frameworks are prestress stable but not first-order rigid under certain weak conditions. Our approach applies to both 2D and 3D bar frameworks, producing a wide range of prestress stable but not first-order rigid examples that have not been presented in the existing literature. Additionally, we present a bifurcation method to obtain rigid but not second-order rigid bar frameworks. Our results highlight connections between rigidity properties and constrained optimization, offering new insights into the construction and analysis of bar frameworks with higher-order rigidity. This is joint work with Miranda Holmes-Cerfon and Christian Santangelo. | |
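For orientation, the standard notions referenced above, stated in their usual form (the talk's precise optimization formulation is not reproduced here): a bar framework with vertex positions \(p_i\) and edge set \(E\) carries a self-stress \(\omega\) if \(\sum_{j:\,ij \in E} \omega_{ij}(p_j - p_i) = 0\) at every vertex \(i\), and it is prestress stable if some self-stress blocks every nontrivial first-order flex \(u\), i.e.
\[
\sum_{ij \in E} \omega_{ij}\,\|u_i - u_j\|^2 \;>\; 0 .
\]
First-order rigidity means no nontrivial first-order flex exists at all, so the configurations sought above are those with nontrivial flexes that are nonetheless all blocked by a single stress.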
11:40 | Rohit Khandelwal (George Mason University): Variational Problems with Gradient Constraints: A priori and A posteriori error identities |
Nonsmooth variational problems are ubiquitous in science and engineering, e.g., fracture modeling and contact mechanics. This talk presents a generic primal-dual framework to tackle these types of nonsmooth problems. Special attention is given to variational problems with gradient constraints. The key challenge here is how to project onto the constraint set both at the continuous and discrete levels. In fact, both a priori and a posteriori error analysis for such nonsmooth problems has remained open. On the basis of a (Fenchel) duality theory at the continuous level, an a posteriori error identity for arbitrary conforming approximations of primal-dual formulations is derived. In addition, on the basis of a (Fenchel) duality theory at the discrete level, an a priori error identity for primal (Crouzeix–Raviart) and dual (Raviart–Thomas) formulations is established. The talk concludes by deriving the optimal a priori error decay rates. | |
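A canonical instance of the problem class described above, given as an illustrative example rather than the talk's specific formulation, is the elastoplastic-torsion-type problem
\[
\min_{u \in H^1_0(\Omega)}\ \int_\Omega \Big(\tfrac12\,|\nabla u|^2 - f\,u\Big)\,dx
\qquad \text{subject to}\quad |\nabla u| \leq 1 \ \text{a.e. in } \Omega,
\]
where the nonsmoothness enters precisely through the pointwise constraint on the gradient; the primal-dual error identities mentioned above are designed for constraint sets of this type.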
12:00 | Gonzalo Benavides (University of Maryland, College Park): Liquid Crystal Networks bending energy and its approximation via a Local Discontinuous Galerkin method |
Liquid Crystal Networks (LCN) are materials made of elastomeric polymer networks densely cross-linked with liquid crystal molecules. They can develop complex shapes upon actuation, and find applications in the design of biomedical devices and robotics. Inspired by the works of Ozenda-Sonnet-Virga (2020), we start from the classical Bladon-Warner-Terentjev (1994) 3D trace formula and utilize the Kirchhoff-Love assumption to derive a 2D membrane elastic energy. Under the immersibility assumption for the First Fundamental Form, said energy reduces to a pure 2D bending energy in terms of, among other geometric quantities, the Second Fundamental Form (SFF). We deduce an equivalent energy that replaces the SFF by the Hessian of the deformation, which makes it amenable to computation. Along the lines of Bonito-Guignard-Nochetto-Yang (2020), we propose a Local Discontinuous Galerkin (LDG) \(H^2\)-gradient flow to minimize the nonconvex bending energy, and develop a suitable numerical analysis. We finish the talk by presenting several numerical experiments. This joint work with Lucas Bouck, Ricardo Nochetto, and Shuo Yang builds upon Lucas' PhD dissertation. | |
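The geometric fact behind replacing the second fundamental form by the Hessian is the Gauss formula for a smooth deformation \(y : \Omega \subset \mathbb{R}^2 \to \mathbb{R}^3\) with unit normal \(\nu\) (stated here in general; the membrane-specific reduction in the talk is more involved):
\[
\partial_{ij}\, y \;=\; \Gamma^k_{ij}\,\partial_k y \;+\; \mathrm{II}_{ij}\,\nu,
\qquad \mathrm{II}_{ij} = \partial_{ij}\, y \cdot \nu,
\]
so once the First Fundamental Form, and hence the Christoffel symbols \(\Gamma^k_{ij}\), is prescribed by the immersibility assumption, the tangential part of the Hessian carries no independent information and \(\nabla^2 y\) can stand in for the SFF in the bending energy.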
3:00 | Talhah Ansari (Technical University of Munich): Developments in meshless model reduction |
An accurate description of the constitutive relation and the material parameters is necessary for constructing accurate digital twins of structures. Model-updating-based parameter identification is a popular approach that employs numerical models such as FEM to build and validate the accuracy of the digital twins based on real measurements. However, the real-world structure's material parameters and constitutive relation might differ from the ideal properties used in the model. Common structural health monitoring (SHM) approaches focus on identifying material parameters such as the Young's modulus of the structure to detect an anomaly [1,2]. While this information represents the current state of the structure accurately, predicting the remaining life of the structure is challenging, as the underlying constitutive behaviour has likely changed, thereby highlighting the necessity of material-model-accurate predictive digital twins. In this work, we address material model identification by first decoupling it into two manageable sub-problems, material parameter identification and constitutive model identification, to gain a deeper understanding of each component's characteristics. The parameter identification problem determines the material constants given a constitutive law, while the constitutive problem discovers the constitutive model influencing the material behaviour given a set of material parameters. Both sub-problems are formulated as adjoint-based, sensitivity-driven optimization tasks with the objective of minimizing the weighted differences between the model and the deformation measurements obtained from displacement or strain sensors. The parameter identification follows the continuous optimization technique proposed by the authors in [1,2]. For the constitutive model identification, a continuous relaxation approach is used to convert the discrete-variable optimization into a continuous problem, enabling smoother exploration. Different fidelities are investigated for both sub-problems. To tackle the ill-conditioning of high-fidelity problems, techniques such as Vertex Morphing are explored. The material parameter and constitutive problems are subsequently combined, and the comprehensive material model identification problem is formulated. The results and challenges arising from this combined optimization are examined in detail. Various structural problems and material models are analysed to demonstrate the effectiveness of the approach. [1] Facundo N. Airaudo, Rainald Löhner, Roland Wüchner, and Harbir Antil. "Adjoint-based determination of weaknesses in structures". In: Computer Methods in Applied Mechanics and Engineering 417 (2023), p. 116471. [2] Rainald Löhner, Facundo Airaudo, Harbir Antil, Roland Wüchner, Fabian Meister, and Suneth Warnakulasuriya. "High-Fidelity Digital Twins: Detecting and Localizing Weaknesses in Structures". In: AIAA SCITECH 2024 Forum. 2024, p. 2621. | |
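A compact statement of the adjoint-based sensitivities driving both sub-problems, in generic notation (the specific discretization, observation operators, and weights are those of [1,2]): for a discretized state equation \(R(u, m) = 0\) in the state \(u\) and parameters \(m\), and a weighted misfit \(J(u, m)\) between model predictions and sensor data,
\[
\frac{dJ}{dm} \;=\; \frac{\partial J}{\partial m} \;-\; \lambda^{\top}\,\frac{\partial R}{\partial m},
\qquad
\Big(\frac{\partial R}{\partial u}\Big)^{\!\top} \lambda \;=\; \Big(\frac{\partial J}{\partial u}\Big)^{\!\top},
\]
so a single adjoint solve per objective evaluation yields the gradient with respect to all material parameters at once, regardless of their number.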
3:20 | Pierre Amenoagbadji (Columbia University): Time-harmonic wave propagation in junctions of two periodic half-spaces |
In this work, we consider the 2D linear Helmholtz equation in the presence of a periodic half-space. A numerical method was proposed by Fliss, Cassan, and Bernier (2010) to solve this equation under the critical assumption that the medium stays periodic in the direction of the interface. In fact, in this case, a Floquet-Bloch transform can be applied with respect to the variable along the interface, thus leading to a family of closed waveguide problems. The purpose of this work is to deal with the case where the medium is no longer periodic in the direction of the interface, that is, when the periodic half-space is not cut along a direction of periodicity. Following Gérard-Varet and Masmoudi (2015), we use the crucial (but non-obvious) observation that the medium has a quasiperiodic structure along the interface, namely, it is the restriction of a higher-dimensional periodic structure. Accordingly, the idea is to interpret the studied PDE as the "restriction" of an augmented PDE in higher dimensions, where periodicity along the interface is recovered. This so-called lifting approach allows one to extend the ideas of Fliss, Cassan, and Bernier (2010), but comes at the price that the augmented equation is non-elliptic (in the sense of the principal part of the differential operator), and therefore more complicated to analyse and to solve numerically. | |
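The lifting idea can be sketched in a simplified form (a one-dimensional picture along the interface variable \(x_1\); the construction in the talk is more detailed): a coefficient that is quasiperiodic along the interface is the trace of a periodic function of one more variable,
\[
\rho(x_1) \;=\; P(x_1,\,\theta x_1), \qquad P \ \text{periodic in both arguments},\quad \theta \notin \mathbb{Q},
\]
and one seeks a lifted solution \(U\) with \(u(x) = U(x, s)\big|_{s = \theta x_1}\), so that \(\partial_{x_1}\) acting on \(u\) becomes \(\partial_{x_1} + \theta\,\partial_s\) acting on \(U\). The augmented problem is periodic in the extra variable, which restores the Floquet-Bloch machinery, but its principal part is no longer elliptic, as noted above.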
3:40 | Tongtong Li (University of Maryland, Baltimore County): A structurally informed data assimilation approach for discontinuous state variables |
Data assimilation is a scientific process that combines available observations with numerical simulations to obtain statistically accurate and reliable state representations in dynamical systems. However, it is well known that the commonly used Gaussian distribution assumption introduces biases for state variables that admit discontinuous profiles, which are prevalent in nonlinear partial differential equations. In this talk, we focus on the design of a new structurally informed prior that exploits statistical information from the simulated state variables. In particular, we construct a new weighting matrix based on the second moment of the gradient information of the state variable to replace the prior covariance matrix used for model/data compromise in the data assimilation framework. We further adapt our weighting matrix to include information in discontinuity regions via a clustering technique. Our numerical experiments demonstrate that this new approach yields more accurate estimates than those obtained using the ensemble transform Kalman filter (ETKF) on the shallow water equations. | |
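For context, the standard Kalman-type analysis step that the proposed prior modifies is, in generic notation (the structurally informed weighting itself is described only at the level of the abstract),
\[
x^{a} \;=\; x^{f} + K\big(y - H x^{f}\big),
\qquad
K \;=\; P^{f} H^{\top}\big(H P^{f} H^{\top} + R\big)^{-1},
\]
where \(P^{f}\) is the forecast (prior) covariance, \(H\) the observation operator, and \(R\) the observation-error covariance. The approach above replaces the role of \(P^{f}\) with a weighting matrix built from second moments of the gradient of the simulated state, further adapted near discontinuities via clustering.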
4:00 | Mohammadhossein Mohammadisiahroudi (Lehigh University): Quantum Computing-based Sensitivity Analysis for PDE-constrained Optimization |
Quantum computing is an emerging paradigm offering significant speed-ups for solving specific mathematical problems. In recent years, optimization and scientific computing researchers have developed quantum algorithms that demonstrate a complexity advantage for large-scale problems. A key area of focus has been to leverage quantum linear algebra techniques to solve linear systems that arise in optimization and scientific computing applications. We propose quantum computing-based direct and adjoint methods for implicit sensitivity analysis in PDE-constrained optimization. The proposed quantum approaches achieve an exponential speed-up in complexity with respect to the problem dimension, i.e., the number of state variables, compared to classical methods. Notably, in the quantum computing framework, both the direct and adjoint methods exhibit similar computational complexity, a departure from their classical counterparts. | |
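The two classical sensitivity routes being accelerated can be summarized as follows, in standard notation (the quantum linear-system subroutines used in the talk are not reproduced here): for a state equation \(R(u, m) = 0\) with \(n_m\) design parameters and objective \(J(u, m)\),
\[
\text{direct:}\quad \frac{\partial R}{\partial u}\,\frac{du}{dm_i} = -\frac{\partial R}{\partial m_i}\ \ (i = 1, \dots, n_m),
\qquad
\text{adjoint:}\quad \Big(\frac{\partial R}{\partial u}\Big)^{\!\top}\lambda = \Big(\frac{\partial J}{\partial u}\Big)^{\!\top},
\]
so classically the direct route costs \(n_m\) linear solves against a single adjoint solve. The observation above is that once these solves are performed with quantum linear-system algorithms, whose dependence on the number of state variables is polylogarithmic under the usual sparsity and conditioning assumptions, the two routes end up with comparable complexity.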
4:50 | William Sands (University of Delaware): Adaptive-Rank Methods for Multi-Scale BGK Equations via Greedy Sampling |
We present a novel sampling-based adaptive-rank method for simulating multi-scale BGK equations. Our approach extends the semi-Lagrangian adaptive-rank (SLAR) framework developed for the Vlasov-Poisson system and introduces a greedy sampling strategy that requires only entry-wise access to the solution. Unlike traditional low-rank integrators, this method avoids explicit low-rank decompositions of nonlinear terms, such as the local Maxwellian in the BGK collision operator. To enforce mass, momentum, and energy conservation, we incorporate a locally macroscopic conservative (LoMaC) technique, which couples the kinetic equation to its macroscopic counterpart. We show that the method is asymptotic-preserving under suitable singular value decay conditions and benefits from a local semi-Lagrangian solver, allowing large time steps. The macroscopic system is advanced using high-order stiffly-accurate DIRK methods, with nonlinear solves handled efficiently via a Jacobian-free Newton-Krylov (JFNK) approach. This iterative correction procedure produces a dynamic closure, improving accuracy across different regimes. Numerical results demonstrate the method's ability to capture shocks and discontinuities while maintaining efficiency in mixed-regime problems where the Knudsen number varies across several orders of magnitude. | |
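For reference, the multi-scale BGK model referred to above reads, in a standard nondimensional form (with unit gas constant; discretization details are as described in the abstract),
\[
\partial_t f + v\cdot\nabla_x f \;=\; \frac{1}{\varepsilon}\,\big(\mathcal{M}[f] - f\big),
\qquad
\mathcal{M}[f] \;=\; \frac{\rho}{(2\pi T)^{d/2}}\,\exp\!\Big(-\frac{|v - u|^2}{2T}\Big),
\]
where \((\rho, u, T)\) are the density, bulk velocity, and temperature computed from the moments of \(f\), \(d\) is the velocity dimension, and \(\varepsilon\) is the Knudsen number. The nonlinear dependence of \(\mathcal{M}[f]\) on these moments is precisely what makes explicit low-rank decompositions awkward and motivates the entry-wise (sampling) access exploited by the greedy strategy.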
5:10 | Nanyi Zheng (University of Delaware): Sampling-Based Adaptive Rank Integrators for Multi-scale Kinetic Models |
In this talk, we introduce a sampling-based semi-Lagrangian adaptive rank (SLAR) method, which leverages a cross approximation strategy (also known as CUR or pseudo-skeleton decomposition) to efficiently represent low-rank structures in kinetic solutions. The method dynamically adapts the rank of the solution while ensuring numerical stability through singular value truncation and mass-conservative projections. By combining the advantages of semi-Lagrangian integration with low-rank approximations, SLAR enables significantly larger time steps compared to conventional methods and is extended to nonlinear systems such as the Vlasov-Poisson equations using a Runge-Kutta exponential integrator. Building on this framework, we further develop the SLAR method for the multi-scale BGK equation, introducing an asymptotically accurate approach that eliminates the need for low-rank decompositions of the local Maxwellian in the collision operator. To enforce conservation of mass, momentum, and energy, we propose a novel locally macroscopic conservative (LoMaC) technique, which discretizes the macroscopic system using high-order DIRK methods. Additionally, a dynamic closure strategy is employed to self-consistently adjust macroscopic moments, enabling robust simulations across both kinetic and hydrodynamic regimes, even in the presence of shocks and discontinuities. We validate our method through extensive benchmark tests on linear advection, nonlinear Vlasov-Poisson problems up to 3D3V, and multi-scale kinetic problems, demonstrating its accuracy, stability, and computational efficiency. The sampling-based adaptive rank framework is shown to be an effective approach to overcoming the curse of dimensionality for high-dimensional multi-scale kinetic problems. | |
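The cross (CUR, or pseudo-skeleton) approximation mentioned above has the generic form, independent of the talk's specific index-selection strategy: given row indices \(I\) and column indices \(J\),
\[
A \;\approx\; C\,U\,R,
\qquad C = A(:, J),\quad R = A(I, :),\quad U = A(I, J)^{-1}\ \text{(or a pseudoinverse)},
\]
so the approximation is assembled entirely from sampled rows and columns of \(A\). This is what allows the solver to work with entry-wise access to the kinetic solution rather than with full low-rank factorizations of nonlinear terms.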
5:30 | Sanghoon Na (University of Maryland, College Park): Curse of Dimensionality in Neural Network Optimization |
The curse of dimensionality in neural network optimization under the mean-field regime is studied. It is demonstrated that when a shallow neural network with a Lipschitz continuous activation function is trained using either empirical or population risk to approximate a target function that is \(r\) times continuously differentiable on \([0, 1]^d\), the population risk may not decay at a rate faster than \(t^{-4r/(d-2r)}\), where \(t\) is an analog of the total number of optimization iterations. This result highlights the presence of the curse of dimensionality in the optimization computation required to achieve a desired accuracy. Instead of analyzing parameter evolution directly, the training dynamics are examined through the evolution of the parameter distribution under the 2-Wasserstein gradient flow. Furthermore, it is established that the curse of dimensionality persists when a locally Lipschitz continuous activation function is employed, where the Lipschitz constant on \([-x,x]\) is bounded by \(O(x^\delta)\) for any \(x > 0\). In this scenario, the population risk is shown to decay at a rate no faster than \(t^{-(4+2\delta)r/(d-2r)}\). To the best of our knowledge, this work is the first to analyze the impact of function smoothness on the curse of dimensionality in neural network optimization theory. | |
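The mean-field setup underlying the analysis can be sketched as follows, in standard form (the talk's precise assumptions on the activation and the target are as stated above): a shallow network is identified with a distribution \(\mu\) over parameters \(\theta\) via
\[
f_\mu(x) \;=\; \int \varphi(x; \theta)\, d\mu(\theta),
\qquad
\partial_t \mu_t \;=\; \nabla_\theta \cdot \Big(\mu_t\, \nabla_\theta\, \frac{\delta \mathcal{R}}{\delta \mu}[\mu_t]\Big),
\]
where \(\mathcal{R}(\mu)\) is the (empirical or population) risk and the second equation is its 2-Wasserstein gradient flow. The lower bounds quoted above concern the decay of \(\mathcal{R}(\mu_t)\) in the flow time \(t\), which plays the role of the number of optimization iterations.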
5:50 | END |