
Stochastic Analysis


DMS Stochastic Analysis Seminar
Mar 29, 2023 01:10 PM
352 Parker Hall



Speaker: Yu Gu, University of Maryland

Title: KPZ on a large torus

Abstract: I will present recent work with Tomasz Komorowski and Alex Dunlap in which we derived optimal variance bounds on the solution to the KPZ equation on a large torus, in certain regimes where the size of the torus increases with time. We mostly use tools from stochastic calculus, and I will also try to give a heuristic explanation of the 2/3 and 1/3 exponents in the (1+1)-dimensional KPZ universality class.
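For reference (this display is not part of the abstract), the (1+1)-dimensional KPZ equation is usually written, up to normalization constants, as

        \( \partial_t h(t,x) = \tfrac{1}{2}\,\partial_x^2 h(t,x) + \tfrac{1}{2}\big(\partial_x h(t,x)\big)^2 + \dot{W}(t,x), \)

where \(\dot{W}\) is space-time white noise; the exponents 2/3 and 1/3 refer to the characteristic scaling of the height fluctuations (of order \(t^{1/3}\)) and of the spatial correlation length (of order \(t^{2/3}\)).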



DMS Stochastic Analysis Seminar
Mar 22, 2023 01:10 PM
352 Parker Hall


Speaker: Cheuk-Yin Lee, National Tsing Hua University, Taiwan


Title: Parabolic stochastic PDEs on bounded domains with rough initial conditions: moment and correlation bounds

Abstract: In this talk, I will present my joint work with David Candil and Le Chen on nonlinear parabolic SPDEs on a bounded Lipschitz domain, driven by a Gaussian noise that is white in time and colored in space, with Dirichlet or Neumann boundary conditions. We establish explicit bounds for the moments and the correlation function of the solutions under a rough initial condition given by a locally finite signed measure. Our focus is on how the moment bounds and related properties of the solutions depend on the rough initial data and on the smoothness and geometric properties of the domain. For \(C^{1,\alpha}\)-domains with Dirichlet boundary condition, we obtain moment bounds under a weak integrability condition on the initial data, which need not be a finite measure. Our results also imply intermittency properties of the solutions.
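Schematically (in notation added here, not the speakers'), the equations in question are of the form

        \( \partial_t u(t,x) = \Delta u(t,x) + \sigma\big(u(t,x)\big)\,\dot{W}(t,x), \qquad x \in D, \quad u(0,\cdot) = \mu, \)

posed on a bounded Lipschitz domain \(D\) with Dirichlet or Neumann boundary conditions, where \(\dot{W}\) is white in time and colored in space and the rough initial datum \(\mu\) is a locally finite signed measure.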
DMS Stochastic Analysis Seminar
Mar 15, 2023 01:10 PM
352 Parker Hall


 
Speaker: Dr. Michael Salins, Boston University
 
Title: The stochastic heat equation with superlinear forcing
 
Abstract: I outline some recent results about the stochastic heat equation defined on an unbounded spatial domain. In general, solutions to these equations are unbounded in space, which can make the analysis of their behavior difficult. I present global existence and uniqueness results when the equation is exposed to superlinear forcing terms, both in the case where the forcing term is dissipative (pushing away from infinity) and in the case where it is accretive (pushing toward infinity). I also present a result proving that the law of the solution has a density in the superlinear dissipative case.
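Schematically (notation added here), the equations considered are of the form

        \( \partial_t u(t,x) = \Delta u(t,x) + f\big(u(t,x)\big) + \sigma\big(u(t,x)\big)\,\dot{W}(t,x), \qquad x \in \mathbb{R}^d, \)

where the forcing term \(f\) grows superlinearly; dissipative means, roughly, that \(f(u)\) pushes the solution back toward zero for large \(|u|\), while accretive means it pushes the solution further away.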
 
Joint work with Samy Tindel.

DMS Stochastic Analysis Seminar
Mar 01, 2023 01:10 PM
352 Parker Hall


 
 
Speaker: Dr. Erkan Nane (Auburn)
 
Title: Continuity with respect to fractional order for a family of time fractional stochastic heat equations
 
Abstract: In this talk we present continuity with respect to fractional order of the solution to a certain class of space-time fractional stochastic equations. Our results extend the main results in both [1] and [2].
 
[1] M. Foondun. Remarks on a fractional-time stochastic equation, Proc. Amer. Math. Soc. 149 (2021), 2235-2247.
[2] D.D. Trong, E. Nane, N.D. Minh, and N.H. Tuan. Continuity of solutions of a class of fractional equations, Potential Anal. 49 (2018), no. 3, 423-478.
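For orientation, a representative member of this family (stated here only in rough form, following the type of equation studied in [1]) is

        \( \partial_t^{\beta} u(t,x) = -\nu(-\Delta)^{\alpha/2} u(t,x) + I_t^{1-\beta}\big[\sigma(u)\,\dot{W}\big](t,x), \)

where \(\partial_t^{\beta}\) is a Caputo time derivative of order \(\beta \in (0,1)\) and \(I_t^{1-\beta}\) is a fractional integral in time; continuity of the solution is studied as these fractional orders vary.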
 

DMS Stochastic Analysis Seminar
Jan 25, 2023 01:10 PM
352 Parker Hall


 
Speaker: Dr. Le Chen (Auburn)
 
Title: Superlinear stochastic heat equation
 
Abstract: In this talk, we will discuss the superlinear stochastic heat equation. It is known that when the forcing term and the diffusion coefficient are Lipschitz continuous, there exists a unique random field solution for all time, called a global solution. We explore the existence of a global solution when the Lipschitz conditions are replaced by certain superlinear growth conditions. This gives another instance of the delicate balance between the smoothing effect of the heat kernel and the roughening effect of the multiplicative noise.
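To give a flavor of the kind of conditions involved (an illustration added here, not a statement of the results in this talk): for the stochastic heat equation

        \( \partial_t u(t,x) = \tfrac{1}{2}\Delta u(t,x) + b\big(u(t,x)\big) + \sigma\big(u(t,x)\big)\,\dot{W}(t,x), \)

the Lipschitz assumptions on \(b\) and \(\sigma\) are replaced by superlinear growth bounds of logarithmic type, roughly \(|b(u)| \lesssim |u|\log|u|\) and \(|\sigma(u)| \lesssim |u|(\log|u|)^{a}\) for a suitably small exponent \(a\), for large \(|u|\).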
 
This talk will be based on recent work with Jingyu Huang and an ongoing project with Mohammud Foondun, Jingyu Huang, and Mickey Salins.

DMS Stochastic Analysis Seminar
Nov 08, 2022 02:30 PM
356 Parker Hall



Speaker: Jingyu Huang, University of Birmingham, UK

Title: Fourier transform method in stochastic partial differential equations (SPDEs)

Abstract: We consider the Fourier transform method for the stochastic heat equation on \(\mathbb{R}^d\)

        \(  \frac{\partial \theta}{\partial t} = \frac{1}{2} \Delta \theta(t,x) + \theta(t,x) \dot{W}(t,x).\)

We study the existence and uniqueness of the solution at the level of Fourier modes.
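Concretely (a formal paraphrase, with normalization constants suppressed): taking the spatial Fourier transform \(\hat{\theta}(t,\xi)\) turns the Laplacian into multiplication by \(-|\xi|^2/2\), while the multiplicative noise becomes a convolution in the frequency variable, so that the equation formally reads

        \( d\hat{\theta}(t,\xi) = -\tfrac{|\xi|^2}{2}\,\hat{\theta}(t,\xi)\,dt + \int_{\mathbb{R}^d} \hat{\theta}(t,\xi-\eta)\,\widehat{W}(dt,d\eta), \)

and existence and uniqueness are studied for this coupled system of Fourier modes.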

Then we apply a similar approach to the turbulent transport of a passive scalar quantity in a stratified, 2-D random velocity field. It is described by the stochastic partial differential equation
        \(  \partial_t \theta(t,x,y) = \nu \Delta \theta(t,x,y) + \dot{V}(t,x) \partial_y \theta(t,x,y), \quad t\ge 0\:\: \text{and}\:\: x,y\in \mathbb{R},\)
where \(\dot{V}\) is some Gaussian noise. We show via a priori bounds that, typically, the solution decays with time. The detailed analysis is based on a probabilistic representation of the solution, which is likely to have other applications as well. This is based on joint work with Davar Khoshnevisan from the University of Utah.


DMS Stochastic Analysis Seminar
Nov 01, 2022 02:30 PM
326 Parker Hall



Speaker: Prof. Lingjiong Zhu, Florida State University

Title: Langevin algorithms

 

Abstract: Langevin algorithms are core Markov Chain Monte Carlo methods for solving machine learning problems. These methods arise in several contexts in machine learning and data science, including Bayesian learning and inference problems with high-dimensional models, and stochastic non-convex optimization problems, including the challenging problems arising in deep learning. In this talk, we illustrate the applications of Langevin algorithms through three examples: (1) Langevin algorithms for non-convex optimization; (2) decentralized Langevin algorithms; (3) constrained sampling via penalized Langevin algorithms.
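As a concrete illustration (added here, not taken from the talk), the basic unadjusted Langevin algorithm targets a density proportional to \(\exp(-U(\theta))\) via the update \(\theta_{k+1} = \theta_k - \eta \nabla U(\theta_k) + \sqrt{2\eta}\,\xi_k\) with i.i.d. Gaussian noise \(\xi_k\). A minimal NumPy sketch, with the quadratic target and step size chosen purely for illustration:

import numpy as np

def grad_U(theta):
    # Gradient of U(theta) = 0.5 * ||theta||^2, i.e. a standard Gaussian target
    # (an illustrative choice; any smooth potential could be substituted).
    return theta

def ula(theta0, step=1e-2, n_iter=10000, seed=0):
    # Unadjusted Langevin algorithm: gradient step on U plus injected noise.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        noise = rng.standard_normal(theta.shape)
        theta = theta - step * grad_U(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples

samples = ula(np.zeros(2))
# Empirical mean and variance should be close to 0 and 1 per coordinate.
print(samples.mean(axis=0), samples.var(axis=0))

The variants discussed in the talk modify this basic update, e.g., by communicating over a network (decentralized algorithms) or by adding a penalty term to \(U\) (constrained sampling).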


DMS Stochastic Analysis Seminar
Oct 11, 2022 02:30 PM
326 Parker Hall


Speaker: Dr. Panqiu Xia (Auburn)

Title: The moment asymptotics of super-Brownian motions

Abstract: The super-Brownian motion (sBm), or Dawson-Watanabe superprocess, is a typical example of a measure-valued Markov process. In spatial dimension one, the sBm, viewed as a measure on \(\mathbb{R}\), is absolutely continuous with respect to the Lebesgue measure. Moreover, the density of this measure is the unique solution to the stochastic heat equation whose diffusion coefficient is a square root. This equation is one of the most important examples of a stochastic partial differential equation with non-Lipschitz coefficients. In this talk, I will first give a brief introduction to sBm's and then present some recent results on the moment formula and on the large-time and high-order moment asymptotics of sBm's.
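In display form (with normalization constants suppressed), the density \(u(t,x)\) mentioned above solves

        \( \partial_t u(t,x) = \tfrac{1}{2}\,\partial_x^2 u(t,x) + \sqrt{u(t,x)}\,\dot{W}(t,x), \)

where \(\dot{W}\) is space-time white noise; the square-root diffusion coefficient is what places this equation outside the classical Lipschitz theory.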
DMS Stochastic Analysis Seminar
Sep 27, 2022 02:30 PM
326 Parker Hall


Speaker: Prof. Lingjiong Zhu, Florida State University

Title: The Heavy-Tail Phenomenon in stochastic gradient descent (SGD)

Abstract: In recent years, various notions of capacity and complexity have been proposed for characterizing the generalization properties of stochastic gradient descent (SGD) in deep learning. Some of the popular notions that correlate well with performance on unseen data are (i) the flatness of the local minimum found by SGD, which is related to the eigenvalues of the Hessian, (ii) the ratio of the stepsize to the batch size, which essentially controls the magnitude of the stochastic gradient noise, and (iii) the tail-index, which measures the heaviness of the tails of the network weights at convergence. In this work, we argue that these three seemingly unrelated perspectives on generalization are deeply linked to each other. We claim that, depending on the structure of the Hessian of the loss at the minimum and the choices of the algorithm parameters, the distribution of the SGD iterates converges to a heavy-tailed stationary distribution. We rigorously prove this claim in the setting of quadratic optimization: we show that even in a simple linear regression problem with independent and identically distributed data whose distribution has finite moments of all orders, the iterates can be heavy-tailed with infinite variance. We further characterize the behavior of the tails with respect to the algorithm parameters, the dimension, and the curvature. We then translate our results into insights about the behavior of SGD in deep learning. We support our theory with experiments conducted on synthetic data and on fully connected and convolutional neural networks.
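As a pointer to the mechanism (a gloss added here): in the linear regression setting, the SGD iterates form a random linear recursion, roughly

        \( \theta_{k+1} = \big(I - \eta H_k\big)\,\theta_k + \eta\, q_k, \)

with \(H_k\) and \(q_k\) built from the random mini-batch at step \(k\); recursions of this multiplicative (Kesten) type are known to produce heavy-tailed stationary distributions even from light-tailed data, with the tail index governed by the step size \(\eta\), the batch size, and the curvature.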

This is based on joint work with Mert Gurbuzbalaban and Umut Simsekli.


DMS Stochastic Analysis Seminar
Aug 23, 2022 02:30 PM
326 Parker Hall


Speaker: Le Chen

Title: Matching moment lower bounds for stochastic wave equation

Abstract: The one-dimensional stochastic wave equation with multiplicative space-time white noise has been studied since at least Walsh's lecture notes in the 1980s. Upper bounds for the moment Lyapunov exponents have long been known in the literature, while obtaining matching lower bounds had remained an open problem. In this talk, we will present a recent joint work (arXiv:2206.10069) with Yuhui Guo and Jian Song from Shandong University, China, in which we obtain these matching lower bounds.
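In display form (with constants suppressed), the equation is

        \( \partial_t^2 u(t,x) = \partial_x^2 u(t,x) + u(t,x)\,\dot{W}(t,x), \qquad t > 0,\ x \in \mathbb{R}, \)

with \(\dot{W}\) space-time white noise, and the moment Lyapunov exponents in question are \(\limsup_{t\to\infty} \tfrac{1}{t}\log \mathbb{E}\big[|u(t,x)|^p\big]\); the new contribution is lower bounds of the same order as the known upper bounds.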


DMS Stochastic Analysis Seminar
Apr 26, 2022 12:00 PM
352 Parker Hall


Speaker: Antony Pearson

Title: Adaptive and hybrid classification with domain-dependent digraphs

Abstract: Class cover catch digraph (CCCD) classifiers are a family of nonparametric prototype selection learners. Previous work has demonstrated that CCCD classifiers perform well under class imbalance, whereas state-of-the-art classifiers require resampling or ensemble schemes to achieve similar performance. It is also known that one of the two well-known types of CCCD classifier, the random walk (RW-CCCD), performs better than the pure (P-CCCD) classifier under class overlap, i.e., when two classes have substantial similarity. Unfortunately, RW-CCCD classifiers suffer from long training times and are less accurate when there is no class overlap. In this work we describe an adaptive decision framework for choosing between pure and random walk classifiers, which may offer superior classification accuracy and sub-cubic computational complexity. We propose a hybrid classifier borrowing the strengths of both types of CCCD classifier: it partitions the sample space into a region of high class overlap, where an RW-CCCD is trained, and a region in which the class supports are separated, where a P-CCCD is trained. The hybrid strategy offers superior classification accuracy compared to P-CCCD or RW-CCCD classifiers trained individually, and improved computational complexity over RW-CCCD classifiers.
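A rough Python sketch of the hybrid idea (added here purely as an illustration; the hypothetical HybridRegionClassifier below uses plain nearest-neighbour learners as stand-ins for the P-CCCD and RW-CCCD components, which are not implemented):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

class HybridRegionClassifier:
    """Route points in a high-overlap region to one learner and the rest to another."""

    def __init__(self, k=10, purity_threshold=0.9):
        self.k = k
        self.purity_threshold = purity_threshold

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        # Local class purity: fraction of the k nearest neighbours sharing a
        # point's own label. Low purity marks the class-overlap region.
        _, idx = NearestNeighbors(n_neighbors=self.k + 1).fit(X).kneighbors(X)
        purity = (y[idx[:, 1:]] == y[:, None]).mean(axis=1)
        overlap = purity < self.purity_threshold
        if overlap.all() or not overlap.any():
            # Degenerate split: fall back to a single learner on all the data.
            self.region_model_ = None
            self.single_clf_ = KNeighborsClassifier(n_neighbors=1).fit(X, y)
            return self
        # 1-NN model deciding which region a new point falls into.
        self.region_model_ = KNeighborsClassifier(n_neighbors=1).fit(X, overlap)
        # Stand-in for RW-CCCD on the overlap region, P-CCCD elsewhere.
        k_ov = min(self.k, int(overlap.sum()))
        self.overlap_clf_ = KNeighborsClassifier(n_neighbors=k_ov).fit(X[overlap], y[overlap])
        self.clean_clf_ = KNeighborsClassifier(n_neighbors=1).fit(X[~overlap], y[~overlap])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        if self.region_model_ is None:
            return self.single_clf_.predict(X)
        in_overlap = self.region_model_.predict(X).astype(bool)
        # Assumes the label dtype is shared across both regions (e.g. integers).
        preds = np.empty(len(X), dtype=self.clean_clf_.classes_.dtype)
        if in_overlap.any():
            preds[in_overlap] = self.overlap_clf_.predict(X[in_overlap])
        if (~in_overlap).any():
            preds[~in_overlap] = self.clean_clf_.predict(X[~in_overlap])
        return preds

# Hypothetical usage on numeric labels:
# clf = HybridRegionClassifier(k=10).fit(X_train, y_train)
# y_pred = clf.predict(X_test)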



