

, Wednesday

Topological Quantum Field Theory


, University of California, Santa Barbara.

Abstract

Skein modules were introduced by Józef H. Przytycki as generalisations of the Jones and HOMFLYPT polynomial link invariants in the 3-sphere to arbitrary 3-manifolds. The Kauffman bracket skein module (KBSM) is the most extensively studied of them all. However, computing the KBSM of a 3-manifold is notoriously hard, especially over the ring of Laurent polynomials. With the goal of pinning down the structure of the KBSM over this ring, several conjectures and theorems about KBSMs have been stated over the years. We show that some of these conjectures, and even theorems, are not true. In this talk I will briefly discuss a counterexample to Marché’s generalisation of Witten’s conjecture. I will show that a theorem stated by Przytycki in 1999 about the KBSM of the connected sum of two handlebodies does not hold. I will also give the exact structure of the KBSM of the connected sum of two solid tori and show that it is isomorphic to the KBSM of a genus two handlebody modulo some specific handle sliding relations. Moreover, these handle sliding relations can be written in terms of Chebyshev polynomials.
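
For orientation (a standard fact, not a claim about the talk’s normalisation): the Chebyshev-type polynomials that appear in Kauffman bracket skein theory satisfy the three-term recursion

$$T_0(x) = 2, \qquad T_1(x) = x, \qquad T_n(x) = x\,T_{n-1}(x) - T_{n-2}(x) \quad (n \ge 2);$$

the talk makes precise which normalisation enters the handle sliding relations.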


, Wednesday

Topological Quantum Field Theory


, University of Alberta.

Abstract

Given a commutative algebra A in a braided monoidal category C, the category of local A-modules, C_A^loc, is defined as a subcategory of the category of right A-modules in C. Pareigis showed that C_A^loc, which is important for studying vertex operator algebra extensions, is a braided monoidal category under very general conditions. In this setting, I will present a criterion for C_A^loc to be a rigid monoidal category. When C is pivotal/ribbon, I will also discuss when the category C_A^loc is pivotal and when it is ribbon.

As an application, I will show that when C is a modular tensor category and A is a commutative simple symmetric Frobenius algebra in C, then C_A^loc is a modular tensor category. Furthermore, I will discuss methods to construct such commutative algebras using simple currents and the Witt group of non-degenerate braided finite tensor categories. This presentation is based on joint work with Kenichi Shimizu.
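
For orientation, the standard definition going back to Pareigis (my paraphrase, not necessarily the talk’s conventions): a right A-module (M, ρ) in C, with braiding c, is local (also called dyslectic) if its action is invariant under the double braiding,

$$\rho \circ c_{A,M} \circ c_{M,A} = \rho \colon M \otimes A \to M.$$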


, Wednesday

Topological Quantum Field Theory


, University of Southern California.

Abstract

Since the foundational work of Freedman, Kitaev, Larsen, and Wang, it has been understood that 3-dimensional topological quantum field theories (TQFTs), described via modular tensor categories, provide a universal model for fault-tolerant topological quantum computation. These TQFTs, derived from quantum groups at roots of unity, achieve modularity by semisimplifying their representation categories—discarding objects with quantum trace zero. The resulting semisimple categories describe anyons whose braiding enables robust quantum computation.
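
To make “quantum trace zero” concrete, a standard example (mine, not taken from the talk): for U_q(sl_2) at a root of unity q = e^{iπ/ℓ}, the n-dimensional Weyl module V_n has quantum dimension

$$\dim_q V_n = [n]_q = \frac{q^n - q^{-n}}{q - q^{-1}},$$

which vanishes exactly when ℓ divides n; semisimplification discards precisely such objects.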

This talk explores recent advances in low-dimensional topology, focusing on the use of nonsemisimple categories that retain quantum trace zero objects to construct new TQFTs. These nonsemisimple TQFTs surpass their semisimple counterparts, distinguishing topological features inaccessible to the latter. For physical applications, unitarity is essential, ensuring Hom spaces form Hilbert spaces. We present joint work with Nathan Geer, Bertrand Patureau-Mirand, and Joshua Sussan, where nonsemisimple TQFTs are equipped with a Hermitian structure. This framework introduces Hilbert spaces with possibly indefinite metrics, presenting new challenges.

We further discuss collaborative work with Sung Kim, Filippo Iulianelli, and Sussan, demonstrating that nonsemisimple TQFTs enable universal quantum computation at roots of unity where semisimple theories fail. Specifically, we show how Ising anyons within this framework achieve universality through braiding alone. The resulting braiding operations are deeply connected to the Lawrence–Krammer–Bigelow representations, with the Hermitian structure providing a nondegenerate inner product grounded in quantum algebra.





, Friday

Mathematics for Artificial Intelligence


Francisco Vasconcelos, ISR & Instituto Superior Técnico.

Abstract

Traditional neural networks prioritize generalization, but this flexibility often leads to geometrically inconsistent transformations of input data. To account for variations in object pose — such as rotations or translations — models are typically trained on large, augmented datasets. This increases computational cost and complicates learning.

We propose an alternative: neural networks that are inherently invariant or equivariant to geometric transformations by design. Such models would produce consistent outputs regardless of an object’s pose, eliminating the need for data augmentation. This approach can potentially extend to a broad range of transformations beyond just rotation and translation.

To realize this, we use geometric algebra, where operations like the geometric product are naturally equivariant under pseudo-orthogonal transformations, represented by the group SO(4,1). By building neural networks on top of this algebra, we can ensure transformation-aware computation.
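
The equivariance claim can be checked numerically in a small case. The sketch below is my illustration, not code from the talk: it realizes Cl(3,0) concretely via Pauli matrices (a low-dimensional stand-in for the Cl(4,1) algebra behind SO(4,1)) and verifies that a rotor acting by conjugation commutes with the geometric product.

```python
import numpy as np

# Pauli matrices give a faithful matrix model of Cl(3,0):
# vectors embed as v -> v1*s1 + v2*s2 + v3*s3, and the geometric
# product becomes ordinary matrix multiplication.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(v):
    return v[0] * s1 + v[1] * s2 + v[2] * s3

rng = np.random.default_rng(0)
a, b = vec(rng.normal(size=3)), vec(rng.normal(size=3))

# A rotor exp(-theta/2 * B) for the unit bivector B = s1 s2 (B^2 = -I),
# representing a rotation by theta in the s1-s2 plane.
theta = 0.7
B = s1 @ s2
R = np.cos(theta / 2) * np.eye(2) - np.sin(theta / 2) * B
Rinv = np.linalg.inv(R)

lhs = R @ (a @ b) @ Rinv                  # rotate the geometric product
rhs = (R @ a @ Rinv) @ (R @ b @ Rinv)     # multiply the rotated factors
print(np.allclose(lhs, rhs))              # True: the product is equivariant
```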

Additionally, we address permutation invariance in point clouds. Instead of treating them as unordered sets of vectors, we represent them functionally — as sums of Dirac delta functions — analogous to sampled signals. This avoids point ordering issues entirely and offers a more structured geometric representation.
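
A minimal sketch of this representation (my illustration; the talk’s operators may differ): smoothing the delta-sum f(x) = Σ_i δ(x − p_i) with a Gaussian kernel g gives (f * g)(x) = Σ_i g(x − p_i), a function of space that is manifestly independent of how the points are ordered.

```python
import numpy as np

def smoothed_field(points, query, sigma=0.5):
    """Evaluate (f * g)(query), where f is the sum of Dirac deltas at
    `points` and g is an isotropic Gaussian of width sigma."""
    d2 = np.sum((query[None, :] - points) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2 * sigma ** 2)))

rng = np.random.default_rng(1)
cloud = rng.normal(size=(10, 3))        # a point cloud of 10 points in R^3
perm = rng.permutation(10)              # an arbitrary reordering
x = np.array([0.1, -0.3, 0.2])

# Same value for any ordering of the points: permutation invariance.
print(np.isclose(smoothed_field(cloud, x), smoothed_field(cloud[perm], x)))
```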

This leads us to functional neural networks, where the input is a function rather than a vector list, and layers are continuous operators rather than discrete ones like ReLU or linear layers. Constructed within geometric algebra, these networks naturally maintain the desired invariant and equivariant properties.



, Thursday

Probability in Mathematical Physics

Room P3.31, Mathematics Building, Instituto Superior Técnico


, Grupo de Física Matemática.

Abstract

We first give a brief review of the history of Brownian motion, up to the modern experiments in which isolated Brownian particles are observed. We then introduce a one-space-dimensional wavefunction model of a heavy particle and a collection of light particles that might generate “Brownian-motion-like” trajectories as well as diffusive motion (displacement proportional to the square root of time). This model satisfies two conditions that guarantee, for the temporal motion of the heavy particle:

  1. An oscillating series with properties similar to those of the Ornstein-Uhlenbeck process;
  2. A best quadratic fit with an “average” non-positive curvature in a proper time interval.

We note that Planck’s constant and the molecular mass enter the diffusion coefficient, as they also do in recent experimental estimates; to our knowledge, this is the first microscopic derivation in which they contribute directly to the diffusion coefficient. Finally, we discuss whether cat states are present in the thermodynamic ensembles.
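
As a purely classical illustration of this crossover (not the talk’s quantum model): a particle whose velocity follows an Ornstein-Uhlenbeck process is ballistic at short times and diffusive at long times, with RMS displacement growing like the square root of time.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, D, dt, steps, walkers = 1.0, 1.0, 0.01, 20000, 500

v = np.zeros(walkers)
x = np.zeros(walkers)
rms = []
for _ in range(steps):
    # Euler-Maruyama step for the OU velocity: dv = -gamma*v dt + sqrt(2D) dW
    v += -gamma * v * dt + np.sqrt(2 * D * dt) * rng.normal(size=walkers)
    x += v * dt
    rms.append(np.sqrt(np.mean(x ** 2)))

t = dt * np.arange(1, steps + 1)
late = slice(steps // 2, None)          # look at the long-time regime only
slope = np.polyfit(np.log(t[late]), np.log(np.array(rms)[late]), 1)[0]
print(f"log-log slope of RMS displacement at long times: {slope:.2f} (~0.5)")
```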

(Joint work with W.D. Wick)

File available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5287796


, Friday

Mathematics for Artificial Intelligence


António Leitão, Scuola Normale Superiore di Pisa.

Abstract

How many different problems can a neural network solve? What makes two machine learning problems different? In this talk, we'll show how Topological Data Analysis (TDA) can be used to partition classification problems into equivalence classes, and how the complexity of decision boundaries can be quantified using persistent homology. Then we will look at a network's learning process from a manifold disentanglement perspective. We'll demonstrate why analyzing decision boundaries from a topological standpoint provides clearer insights than previous approaches. We use the topology of the decision boundaries realized by a neural network as a measure of a neural network's expressive power. We show how such a measure of expressive power depends on the properties of the neural networks' architectures, like depth, width and other related quantities.
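
A toy version of measuring boundary complexity with persistence (my illustration; assumes the gudhi package is installed): points sampled near a circular decision boundary yield one long-lived degree-one class, the loop.

```python
import numpy as np
import gudhi  # assumed available: pip install gudhi

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, size=200)
# Noisy samples near the unit circle, standing in for boundary points.
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

st = gudhi.RipsComplex(points=pts, max_edge_length=2.0).create_simplex_tree(
    max_dimension=2)
diag = st.persistence()  # list of (dimension, (birth, death)) pairs

# Long-lived classes in degree 1 are the essential loops of the boundary.
h1 = [(b, d) for dim, (b, d) in diag if dim == 1 and d - b > 0.5]
print(f"robust H1 features: {len(h1)}")  # expect 1: the circle
```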

References:
Ballester et al., Topological Data Analysis for Neural Network Analysis: A Comprehensive Survey
Papamarkou et al., Position: Topological Deep Learning is the New Frontier for Relational Learning
Petri and Leitão, On the Topological Expressive Power of Neural Networks


, Friday

Mathematics for Artificial Intelligence


, Sapienza University of Rome.

Abstract

Ever since its introduction by John Hopfield in 1982, the Hopfield Neural Network has played a fundamental role in the interdisciplinary study of the storage and retrieval capabilities of neural networks, a role further highlighted by the 2024 Physics Nobel Prize.

From its strong link with biological pattern retrieval mechanisms to its high-capacity Dense Associative Memory variants and connections to generative models, the Hopfield Neural Network has found relevance both in neuroscience and in the most modern AI systems.
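
As a reminder of the classical construction underlying all of this (textbook Hopfield model, not code from the talk): Hebbian storage plus iterated sign updates retrieves a stored pattern from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 10                            # neurons, patterns (P << N)
xi = rng.choice([-1, 1], size=(P, N))     # random binary patterns xi^mu

J = (xi.T @ xi) / N                       # Hebbian couplings J_ij
np.fill_diagonal(J, 0.0)                  # no self-interaction

s = xi[0].copy()                          # start from the first pattern...
flip = rng.choice(N, size=N // 5, replace=False)
s[flip] *= -1                             # ...with 20% of the spins flipped

for _ in range(20):                       # synchronous updates s <- sign(J s)
    s = np.where(J @ s >= 0, 1, -1)

print(f"overlap with stored pattern: {(s @ xi[0]) / N:.2f}")  # 1.00 = retrieved
```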

Much of our theoretical knowledge of these systems, however, comes from a surprising and powerful link with statistical mechanics, first established and explored in the seminal works of Amit, Gutfreund and Sompolinsky in the second half of the 1980s: the interpretation of associative memories as spin-glass systems.
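
Concretely, the dictionary reads the network as a spin system (standard notation for the classical model; my summary, not a formula quoted from the talk): neuron states s_i = ±1 interact through Hebbian couplings built from the stored patterns ξ^μ, and the retrieval states are local minima of the energy

$$H(s) = -\frac{1}{2}\sum_{i \neq j} J_{ij}\, s_i s_j, \qquad J_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} \xi_i^{\mu} \xi_j^{\mu}.$$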

In this talk, we will present this duality, as well as the mathematical techniques from spin-glass theory that allow us to accurately and rigorously predict the behavior of different types of associative memories capable of undertaking various tasks.
