Although I have interests in other areas of mathematics, my main areas of interest on which my work has focused are numerical algorithms and condition-based complexity, computational algebraic geometry, and random geometry.

Numerical algorithms are algorithms and, as such, discrete in nature. They are specified by a finite *program* and they perform a computation composed of a discrete sequence of logical and arithmetic operations. They differ from classical algorithms in that they deal with *continuous problems*, where the input (and output) are partially described by sequences of real (and complex) numbers. Because of this, numerical algorithms can only work with approximations and not the real numbers themselves. In other words, numerical algorithms are doomed to run on inexact inputs, must round numbers along the way, and can only produce approximate answers to the continuous problems they deal with (whenever these answers involve real numbers). Despite all of this, numerical algorithms manage to solve problems both in theory and in practice, which makes them the oxymoron by which the discrete explores the continuous.

Analyzing the complexity of numerical algorithms is a challenge. The familiar worst-case complexity is hopeless for numerical algorithms. No matter how small the errors allowed in the approximations, there will always be inputs for which these errors make the algorithm produce a completely incorrect answer or loop forever. There are two common ways of solving this issue: worst-case input-restricted complexity and condition-based complexity.

In *worst-case input-restricted complexity*, one restricts the inputs allowed for the numerical algorithm. This is done by admitting only rational numbers of bounded bit-size in the input and by considering only *well-posed* inputs, i.e., inputs for which “reasonably” small errors do not affect the answer to the considered problem “dramatically”. Although often successful, this kind of complexity analysis tends to give bounds that are far from the behavior of the algorithm observed in practice, and it fails to explain why not all inputs take the same time.

In *condition-based complexity*, one performs a parameterized complexity analysis in terms of a property of the input called its *condition number*. The condition number is a quantitative measure of the “numerical sensitivity” of the input for the given continuous problem: it tells us how much the output of the problem varies with respect to variations of the input. Condition numbers may vary depending on how we measure variations (e.g., maximum vs. Euclidean norm) and on the nature of the variations we consider (e.g., worst-case vs. average). However, condition numbers are independent of the analyzed algorithm: they are a geometric property of the input with respect to the considered continuous problem, with which one can explain the behavior of numerical algorithms in practice and why some inputs take less time than others.
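As a toy illustration outside any particular algorithm, linear system solving already exhibits this sensitivity: the spectral condition number of a matrix bounds how much the solution can move under a perturbation of the data. A minimal sketch in Python/NumPy (the matrix and the perturbation are illustrative choices):

```python
import numpy as np

# Illustrative nearly singular matrix: small changes in the data can
# produce large changes in the solution of A x = b.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

kappa = np.linalg.cond(A)          # spectral condition number of A
x = np.linalg.solve(A, b)          # exact solution is (1, 1)

# Perturb the right-hand side slightly and solve again.
db = np.array([1e-8, 0.0])
x_pert = np.linalg.solve(A, b + db)

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

# Classical perturbation bound: rel_out <= kappa * rel_in.
# Here kappa is about 4e4, so roughly 4-5 decimal digits can be lost.
```

Note that `kappa` depends only on `A`, not on any particular solution method, in line with condition numbers being a property of the input rather than of the algorithm.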

Further, condition-based complexity allows us to understand how much precision a numerical algorithm needs to guarantee a correct output and how the algorithm behaves probabilistically when a sufficiently reasonable probability distribution is assumed on the input. In the first case, the precision needed is usually controlled by the logarithm of the condition number. In the second case, the study of the distribution of the condition number on a random input allows one to obtain complexity estimates that hold with high probability (weak complexity) and in expectation (average complexity).
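One can observe this probabilistic behavior experimentally in the linear-algebra setting: the condition numbers of random Gaussian matrices are moderate with high probability, even though they are unbounded in the worst case. A small Monte Carlo sketch (the size and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 500

# Sample condition numbers of n-by-n matrices with i.i.d. Gaussian entries.
kappas = np.array([np.linalg.cond(rng.standard_normal((n, n)))
                   for _ in range(trials)])

# The worst case is unbounded, but the tail of the distribution decays
# polynomially, so typical values (e.g., the median) stay moderate.
median_kappa = np.median(kappas)
```

Estimates on the tail of this distribution are exactly what turns a condition-based complexity bound into a weak or average complexity bound.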

My interests focus on the development and analysis of numerical algorithms for a variety of problems (mainly in algebraic geometry) within the framework of condition-based complexity. In this, I place a special focus on developing and analyzing numerical algorithms based on adaptive subdivision methods and on improving existing complexity analyses by exploring, for example, alternative measures of error and more general distributions.
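To give the flavor of subdivision methods (this is a toy sketch, not any of the actual certified algorithms, which rely on interval arithmetic and exclusion tests): one adaptively bisects a region, discarding the parts where one can certify the absence of zeros and refining the rest.

```python
# Toy adaptive subdivision: recursively bisect an interval, keeping only the
# sub-intervals where a sign change certifies a zero of a continuous function.
# Certified methods replace the endpoint-sign test with interval arithmetic.

def isolate(f, a, b, tol=1e-6):
    """Return intervals of width < tol containing sign-change zeros of f."""
    if f(a) * f(b) > 0:      # no certified sign change: discard this interval
        return []            # (may miss even-multiplicity zeros -- known caveat)
    if b - a < tol:
        return [(a, b)]
    m = (a + b) / 2          # adaptive step: refine only where needed
    return isolate(f, a, m, tol) + isolate(f, m, b, tol)

# Example: p(x) = x^3 - 3x + 1 has three real roots.
p = lambda x: x**3 - 3*x + 1
boxes = [box for k in (-2, -1, 0, 1) for box in isolate(p, k, k + 1)]
roots = sorted((a + b) / 2 for a, b in boxes)
```

The adaptivity is what makes the condition-based analysis of such methods interesting: easy regions are discarded after little work, while the work concentrates near ill-conditioned parts of the input.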

Algebraic geometry deals broadly with objects that can be defined using polynomials and with these objects' properties. With the advent of computers, the possibility of turning certain constructive, but humanly impracticable, methods into working algorithms became real. Since polynomials appear in many practical applications (robotic arms, chemical networks, dynamical systems, etc.), this possibility also became a necessity. This algorithmic possibility is the focus of computational algebraic geometry, whose goal is to develop and analyze algorithms that solve problems in algebraic geometry.

Among the possible computational challenges, my focus lies on the search for zeros of polynomial equations and the computation of topological invariants of semialgebraic sets, which are sets described by combinations of real polynomial equations and inequalities. In the first problem, I am interested in gaining a better theoretical understanding, from a complexity viewpoint, of practitioners' tricks in the complex case and of subdivision and grid methods in the real case. In the second, I aim to develop an algorithm that computes the homology of semialgebraic sets in average singly exponential time, in contrast to state-of-the-art algorithms, which are either doubly exponential in the worst case or singly exponential with high probability. My long-term project is to use computational algebraic geometry to effectively attack the first part of Hilbert's 16th problem in high degrees.
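In the complex case, the basic local building block of many zero-finding methods is Newton's iteration, whose behavior is precisely what condition numbers and separation bounds control. A minimal sketch (the polynomial and the starting point are illustrative):

```python
# Minimal Newton iteration for a univariate complex polynomial; homotopy
# continuation methods chain such iterations along a path of polynomials.

def newton(p, dp, z0, steps=50):
    """Iterate z <- z - p(z)/dp(z) starting from z0."""
    z = z0
    for _ in range(steps):
        z = z - p(z) / dp(z)
    return z

p  = lambda z: z**3 - 1        # zeros: the three cube roots of unity
dp = lambda z: 3 * z**2
z = newton(p, dp, 1.0 + 0.1j)  # start close enough to the zero z = 1
```

Started far from every zero, the iteration's behavior is famously chaotic (the basins of attraction are fractal), which is one reason average-case, condition-based analyses are needed to explain why such methods nevertheless work so well in practice.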

Given a random geometric structure, what can we say about the distributions of its properties? One might wonder about the topological properties of the object. For example, given a random polynomial system, what is the distribution of its number of zeros? Or, given a random underdetermined polynomial system, what is the distribution of the Betti numbers of its zero set? One might also consider the geometric properties of the object. For example, given a random polynomial system, what is the distribution of the minimum separation between two of its zeros? Or, given a random underdetermined polynomial system, what is the distribution of the reach of its zero set?
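These distributions can also be explored experimentally. For univariate polynomials with i.i.d. Gaussian coefficients, a classical result of Kac says that the expected number of real zeros grows only logarithmically with the degree. A small Monte Carlo sketch (the degree and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
deg, trials = 50, 300

def count_real_zeros(coeffs, tol=1e-8):
    """Count the (numerically) real zeros of the polynomial with the
    given coefficient vector, highest degree first."""
    return int(np.sum(np.abs(np.roots(coeffs).imag) < tol))

# Kac polynomials: all coefficients i.i.d. standard Gaussian.
counts = [count_real_zeros(rng.standard_normal(deg + 1))
          for _ in range(trials)]
mean_real = np.mean(counts)   # Kac: on average ~ (2/pi) * log(deg) real zeros
```

For degree 50 this average is close to 3, far below the worst case of 50 real zeros, illustrating how different the typical and the extreme behavior of random real algebraic objects can be.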

The centrality of this question can be appreciated in several places. First, in applications, it may help to comprehend why certain phenomena are more common than others. Second, in constructive algorithms, it may help to explain why a certain method fails to produce examples of a certain sort or why it will successfully produce examples of some desired sort. Third, in real algebraic geometry, random geometry is the only way to understand what happens “in general” since, in contrast to the complex case, there is no generic behavior in the real world.

My interests in random geometry focus on the distribution of topological invariants and, of course, condition numbers of random polynomial systems. A particular future goal is to generalize the current methods in real random algebraic geometry from Gaussian distributions to more general distributions, making current probabilistic estimates more robust.

The following shows my papers and preprints from the last few years. For a complete list, see my CV or my list of arXiv preprints.

(Together with Peter Bürgisser and Felipe Cucker)

(Together with Felipe Cucker and Alperen A. Ergür)

Accepted at ISSAC 2019

(Together with Peter Bürgisser and Alperen A. Ergür)

P. Bürgisser, F. Cucker, J. Tonelli-Cueto; Computing the Homology of Semialgebraic Sets. I: Lax Formulas, *Foundations of Computational Mathematics*, May 2019. DOI 10.1007/s10208-019-09418-y