# Research

## Brief research summary:

My research is in the area of analysis and geometry in several complex variables, with a focus on \(L^2\)-techniques and their applications. In particular, I have been working on a “twisted” adaptation of Bo Berndtsson’s complex Brunn-Minkowski theory: I retain the same results under weaker assumptions on the weight functions defining the \(L^2\) spaces of holomorphic functions. Berndtsson’s Nakano-positivity result possesses several applications in complex analysis and geometry: notably plurisubharmonic variation results for Bergman kernels (which can be interpreted in terms of the Bergman metric), a proof of the Ohsawa-Takegoshi theorem with sharp estimates, and applications to Kähler-Einstein geometry.

Although Berndtsson presents his results for pseudoconvex domains in \(\mathbb{C}^n\), it is known that the same results hold for Stein manifolds, with the \(L^2\) spaces now consisting of sections of a holomorphic line bundle \(L\) over the Stein manifold \(X\) equipped with a Hermitian metric \(e^{-\varphi}\). The “twist” comes from twisting the line bundle by a trivial bundle and rewriting a metric \(e^{-\psi}\) for \(L\) as \(\tau e^{-\varphi}\) with \(\tau > 0\). Using this, Donnelly and Fefferman obtained a basic estimate for the \(\bar{\partial}\)-operator that differs from the classical one, and which yields a theorem for the \(\bar{\partial}\)-operator with \(L^2\)-estimates on complete Kähler manifolds differing from Hörmander’s classical theorem on \(L^2\)-estimates for the \(\bar{\partial}\)-operator. Indeed, in contrast with Hörmander’s classical result (which Berndtsson uses to prove his Nakano-positivity result), the twisted \(\bar{\partial}\)-theorem with \(L^2\)-estimates does not require the (family of) Hermitian metric(s) on the line bundle to have positive curvature. In fact, when the Stein manifold possesses a negative plurisubharmonic function \(-e^{-\eta}\), the (family of) Hermitian metric(s) may be chosen to have some amount of negative curvature: for instance, along the base \(X\), the curvature of the Hermitian metric on the line bundle can be as negative as \(-2e^{\eta}\partial_X \bar{\partial}_X (-e^{-\eta})\).
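This lower bound can be made explicit in terms of \(\eta\). A direct computation with the chain rule (a routine verification, included for concreteness) gives

\[\partial_X \bar{\partial}_X \big(-e^{-\eta}\big) = e^{-\eta}\big(\partial_X \bar{\partial}_X \eta - \partial_X \eta \wedge \bar{\partial}_X \eta\big),\]

so that

\[-2e^{\eta}\partial_X \bar{\partial}_X \big(-e^{-\eta}\big) = -2\big(\partial_X \bar{\partial}_X \eta - \partial_X \eta \wedge \bar{\partial}_X \eta\big).\]

Since the plurisubharmonicity of \(-e^{-\eta}\) is equivalent to \(\partial_X \bar{\partial}_X \eta \geq \partial_X \eta \wedge \bar{\partial}_X \eta\), this expression is negative semi-definite; that is, the bound genuinely permits some negative curvature.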

This allows us to directly extend Berndtsson’s Nakano-positivity result to weights that are not necessarily plurisubharmonic. In particular, in the case of Griffiths-positivity, I have extended Berndtsson’s result to general trivial families of Stein manifolds. We are then able to recover Berndtsson’s plurisubharmonic variation results under our reduced positivity assumptions. When the families are non-trivial, I have obtained more restricted extensions of the plurisubharmonic variation results.

Two further questions of interest are:

- How big can the lower curvature bound \(-2e^{\eta}\partial_X \bar{\partial}_X (-e^{-\eta})\) be in the case of the unit ball, for example?
- How can these results be used to prove \(L^2\)-extension theorems for non-plurisubharmonic weights?

In particular, answers to the second question in the case of the unit ball could lead to novel \(L^2\) interpolation theorems there.
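As a first data point for the first question, consider the unit ball \(\mathbb{B}^n\) with the admissible choice \(-e^{-\eta} = |z|^2 - 1\), i.e. \(\eta = -\log(1-|z|^2)\) (a sketch for this one choice only, not a sharp answer). A direct computation gives

\[\partial_X \bar{\partial}_X \eta - \partial_X \eta \wedge \bar{\partial}_X \eta = \frac{1}{1-|z|^2}\sum_{j=1}^{n} dz_j \wedge d\bar{z}_j,\]

so the allowed negativity \(-2e^{\eta}\partial_X \bar{\partial}_X\big(-e^{-\eta}\big) = -2\big(\partial_X \bar{\partial}_X \eta - \partial_X \eta \wedge \bar{\partial}_X \eta\big)\) is \(-2/(1-|z|^2)\) times the Euclidean Kähler form, and in particular blows up at the boundary of the ball.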

### Talks about my research work:

- **October 2020:** Contributed talk at the Second Mid-Atlantic Analysis Meeting (MAAM 2020), with slides.
- **April 2020:** Short departmental talk about my thesis progress, with slides.

### Related talks:

- **October 2020:** Talk about the applications of Berndtsson’s Nakano-positivity theorem in Kähler-Einstein geometry, with slides, given at the Geometric Analysis Learning Seminar (Mathematics Department, Stony Brook University).
- **April 2020:** Talk about optimal \(L^2\) extension theory via the Berndtsson-Lempert technique, and applications, with slides, given at the Student Differential Geometry Seminar (Mathematics Department, Stony Brook University).

## Other research interests:

In addition to my research in several complex variables, I also have interests in probability theory and statistical theory. I’m particularly interested in two main themes.

### Information geometry:

Information geometry is the study of families of probability distributions using differential geometry. The work of Amari has shown the power of this approach in the context of statistical inference. In particular, Komaki’s *Annals of Statistics* paper (2006) shows that the existence of positive superharmonic functions (equivalently, negative subharmonic functions, which are connected to my thesis work) implies the existence of shrinkage priors asymptotically dominating the Jeffreys prior. Moreover, introducing complex variables allows for considerable computational simplicity, as can be seen in the work of Choi and Mullhaupt, and more recently in the work of Komaki and Oda.

Interestingly, natural connections between information geometry and several complex variables were discovered early on by Burbea and Rao, who offered numerous examples of well-known families of probability distributions whose Fisher information metric coincides with the Poincaré metric after complex variables are introduced. They also showed, more generally, that when the parameter space is complex, the Fisher information metric coincides with the Bergman metric under certain conditions. My own observations have led me to formulate Komaki’s conditions on shrinkage priors in terms of scalar curvature when the Fisher information metric is Kähler. Further work in this direction could lead to interesting results shedding light on more natural connections between Bergman geometry and the Fisher information metric.
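A standard example of the Burbea-Rao phenomenon (a well-known computation, recalled here for concreteness) is the family of normal distributions \(N(\mu, \sigma^2)\): in the parameters \((\mu, \sigma)\), the Fisher information metric is

\[ds^2 = \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2},\]

and after the change of variables \(z = \mu/\sqrt{2} + i\sigma\) this becomes \(2\,|dz|^2/(\operatorname{Im} z)^2\), a constant multiple of the Poincaré metric on the upper half-plane.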

One approach to information geometry is to define the statistical manifold \(S(X)\) of a measurable space \(X\) as the infinite-dimensional space of all probability measures on \(X\). One can then fix a dominating measure \(\mu \in S(X)\) and define an \(L^2\)-inner product on the tangent space \(T_{\mu}S(X)\) at the point \(\mu\) as follows:

\[g(\sigma_1,\sigma_2) = \int_X \frac{d\sigma_1}{d\mu}\cdot\frac{d\sigma_2}{d\mu}\,d\mu,\]

where \(\sigma_1, \sigma_2 \in T_{\mu}S(X)\) and \(d/d\mu\) denotes the Radon-Nikodym derivative. This makes the statistical (or information) manifold into a Hilbert manifold, in general. This approach is adopted by a number of authors, notably Itoh and Satoh, Shishido, and Bauer, Bruveris and Michor. The particularity of this point of view is that it allows for weaker regularity assumptions and for infinite-dimensionality, both of which arise naturally in the context of probability and statistics. More importantly, the earlier work of Shishido shows clear connections to Ebin’s work (1968) on the space of Riemannian metrics. In fact, the two settings are essentially the same up to some modifications, with the \(L^2\)-metric on the space of Riemannian metrics corresponding to the \(L^2\)-metric on the space of probability measures. I therefore believe that a clear understanding of these connections would lead to a systematic understanding of this infinite-dimensional approach to information geometry, and to numerous new results in probability theory and statistical theory.
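On a finite sample space, the inner product above is elementary and easy to verify numerically. The following sketch (an illustration under the assumption that \(X\) is finite and \(\mu\) has full support; the function name is my own) represents tangent vectors at \(\mu\) as signed measures with zero total mass:

```python
def fisher_inner_product(mu, sigma1, sigma2):
    """The L^2 inner product g(sigma1, sigma2) at the dominating measure mu,
    specialized to a finite sample space X = {1, ..., n}.

    mu     : probability vector with strictly positive entries
    sigma1 : tangent vector at mu, i.e. a signed measure with zero total mass
    sigma2 : same

    The Radon-Nikodym derivative d(sigma)/d(mu) is the componentwise ratio, so
        g = sum_i (sigma1_i / mu_i) * (sigma2_i / mu_i) * mu_i
          = sum_i sigma1_i * sigma2_i / mu_i,
    which is exactly the Fisher information metric on the probability simplex.
    """
    return sum(s * t / m for m, s, t in zip(mu, sigma1, sigma2))

# Example on a three-point sample space (both tangent vectors sum to zero):
mu = [0.5, 0.25, 0.25]
s1 = [0.1, -0.05, -0.05]
s2 = [0.2, -0.1, -0.1]
g = fisher_inner_product(mu, s1, s2)  # = 0.04 + 0.02 + 0.02 ≈ 0.08
```

The zero-total-mass constraint on `s1` and `s2` encodes the fact that tangent vectors to the space of probability measures are signed measures of total mass zero.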

### Related talks:

- **November 2020:** Talk about strong symplectic structures and Poisson structures on the space of probability measures, with slides, given at the Information Geometry Seminar (Department of Applied Mathematics & Statistics, Stony Brook University).
- **May 2020:** Talk about the analysis and geometry of Bergman kernels, and the Fisher information metric, with slides, given at the Information Geometry Seminar (Department of Applied Mathematics & Statistics, Stony Brook University).

### Statistical learning:

In machine learning, a major problem is overfitting, and “regularization” (e.g. Tikhonov regularization), the process of restricting the hypothesis space \(\mathcal{H}\), can address this problem. Reproducing kernel Hilbert spaces are a useful choice for \(\mathcal{H}\). The more general setting of my research to date is that of vector bundles, in which case Bergman kernels can be thought of as the reproducing kernels of Hilbert spaces of vector-valued or function-valued functions. By considering various learning problems in a functional setting, one can use the theory of reproducing kernel Hilbert spaces to widen the scope of applications of Hilbert space methods to machine learning. Reproducing kernel Hilbert spaces of vector-valued functions thus play a central role in machine learning.
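As a concrete instance of this kind of regularization (a minimal sketch with toy data; the Gaussian kernel, bandwidth, and function names are my own choices, not tied to any particular result above), kernel ridge regression uses the representer theorem to reduce Tikhonov-regularized learning over an RKHS \(\mathcal{H}\) to a finite linear system:

```python
import numpy as np

def gaussian_kernel_matrix(xs, ys, bandwidth=1.0):
    """Gram matrix K[i, j] = exp(-(xs[i] - ys[j])^2 / (2 * bandwidth^2))."""
    d = xs[:, None] - ys[None, :]
    return np.exp(-d ** 2 / (2.0 * bandwidth ** 2))

def fit_kernel_ridge(xs, y, lam=1e-3, bandwidth=1.0):
    """Minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 over the RKHS.

    By the representer theorem the minimizer is f = sum_i alpha_i k(x_i, .),
    with alpha solving the n x n linear system (K + lam * n * I) alpha = y.
    """
    n = len(xs)
    K = gaussian_kernel_matrix(xs, xs, bandwidth)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(xs_train, alpha, xs_new, bandwidth=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x) at the new points."""
    return gaussian_kernel_matrix(xs_new, xs_train, bandwidth) @ alpha

# With a tiny penalty the fit nearly interpolates the toy data; increasing
# lam shrinks the RKHS norm of f and smooths the fit, combating overfitting.
xs = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, -1.0])
alpha = fit_kernel_ridge(xs, y, lam=1e-8)
```

The penalty parameter `lam` is exactly the Tikhonov regularization weight: it trades data fidelity against the RKHS norm of the fitted function.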

More generally, H. Zhang, Y. Xu and J. Zhang established a theory of reproducing kernel Banach spaces for machine learning. I am currently working on a project with Pawel Polak on high-frequency volatility estimation using an \(\ell_1\)-penalty for breakpoint detection. Further research using functional-analytic methods along the lines of the work of H. Zhang, Y. Xu and J. Zhang could very well lead to more general methods and applications adapted to settings that are naturally less regular, such as time series.

I am interested in exploring such methods, given my previous experience with sophisticated versions of reproducing kernel Hilbert space theory.