Complex contour deformations of the path integral have previously been used to mitigate sign problems associated with non-zero chemical potential and real-time evolution in lattice field theories. This talk details their application to lattice calculations where the vacuum path integral is instead real and positive -- allowing Monte Carlo sampling -- but observables are afflicted with a sign...
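As a schematic illustration of the mechanism (a generic construction, not necessarily the specific deformations of the talk): for a real action $S$ and a complex-valued observable $\mathcal{O}$, Cauchy's theorem allows the integration manifold to be shifted, $\tilde\phi(\phi) = \phi + i f(\phi)$, without changing the expectation value of a holomorphic integrand:

$$
\langle \mathcal{O} \rangle
= \frac{1}{Z}\int \mathcal{D}\phi\; \mathcal{O}(\phi)\, e^{-S(\phi)}
= \left\langle \mathcal{O}\big(\tilde\phi(\phi)\big)\, e^{-S(\tilde\phi(\phi)) + S(\phi)}\, \det\frac{\partial \tilde\phi}{\partial \phi} \right\rangle_{e^{-S}/Z}.
$$

A well-chosen $f$ leaves the mean intact while drastically reducing the variance coming from the phase fluctuations of the observable.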
I will review a method for taming sign problems in lattice field theory called "path integral contour deformations", or "thimbology". I will describe how thimbology can be used to understand qubit systems, and argue that machine-learned contour deformations may offer a competitive route to simulating qubits in real time.
The Hubbard model is a foundational model of condensed matter physics. Formulated on a honeycomb lattice it provides a crude model for graphene; on a square lattice it may model high-$T_c$ superconductors. I will present first-principles numerical results characterizing the quantum phase transition of the Hubbard model on a honeycomb lattice from a Dirac semimetal to an antiferromagnetic...
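For reference, the Hubbard Hamiltonian reads

$$
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^\dagger_{i\sigma} c_{j\sigma} + \text{h.c.} \right) + U \sum_i n_{i\uparrow} n_{i\downarrow},
$$

with nearest-neighbour hopping $t$ and on-site repulsion $U$; on the honeycomb lattice the transition occurs at a critical ratio $U_c/t$.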
The direct simulation of the real-time dynamics of strongly correlated quantum fields remains an open challenge in both nuclear and condensed matter physics due to the notorious sign problem. Here we present a novel machine-learning-inspired strategy [1] that significantly improves complex Langevin simulations of quantum real-time dynamics.
Our approach combines two central ingredients: 1)...
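For orientation, a generic discretized complex Langevin update (not the specific scheme of Ref. [1]) evolves the field in Langevin time as

$$
\phi_{n+1} = \phi_n - \epsilon\, \frac{\partial S[\phi_n]}{\partial \phi} + \sqrt{2\epsilon}\; \eta_n, \qquad \eta_n \sim \mathcal{N}(0,1),
$$

where a complex action $S$, as on a real-time Schwinger-Keldysh contour, drives $\phi$ into the complex plane; controlling the resulting runaway trajectories and wrong convergence is the difficulty such improvement strategies target.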
Effective String Theory (EST) is a non-perturbative framework used to describe confinement in Yang-Mills theory by modeling the interquark potential in terms of vibrating strings. An efficient numerical method for simulating such theories in regimes where analytical studies are not possible is still lacking. However, in recent years a new class of deep generative models called Normalizing Flows...
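At leading order, for example, EST predicts the Lüscher correction to the confining potential,

$$
V(R) = \sigma R + \mu - \frac{\pi (d-2)}{24\, R} + \dots,
$$

with string tension $\sigma$ in $d$ spacetime dimensions; flow-based samplers aim at the regime where such predictions must be checked beyond analytic expansions.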
In this talk we describe how deep learning helps to solve inverse problems arising in the study of extreme QCD matter. QCD matter under extreme conditions presents numerous challenging inverse problems, where the forward problem is straightforward but the inversion is not, such as in-medium interaction retrieval, spectral function reconstruction, nuclear matter equation of state...
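A prototypical case is spectral function reconstruction: the forward map from the spectral function $\rho(\omega)$ to the Euclidean correlator,

$$
G(\tau) = \int_0^\infty d\omega\; \frac{\cosh\!\big[\omega \left(\tau - \beta/2\right)\big]}{\sinh\!\big(\omega \beta / 2\big)}\; \rho(\omega),
$$

is a smooth integral, but its inversion is exponentially ill-posed, so small statistical noise in $G(\tau)$ translates into large ambiguities in $\rho(\omega)$.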
Sign problems in lattice QCD prevent us from non-perturbatively calculating many important properties of dense nuclear matter both in and out of equilibrium. In this talk, I will discuss recent developments in numerical methods for alleviating sign problems in lattice field theories. In these methods, the distribution function in the path integral is modified via machine learning such that the...
Lattice gauge equivariant convolutional neural networks (L-CNNs) are neural networks consisting of layers that respect gauge symmetry. They can be used to predict physical observables [1], but also to modify gauge field configurations. The approach proposed here is to treat a gradient flow equation as a neural ordinary differential equation parametrized by L-CNNs. Training these types of...
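As a minimal sketch of this idea, under simplifying assumptions (a scalar field on a 2D lattice and a plain CNN standing in for an L-CNN; all names and hyperparameters here are illustrative), a flow equation can be treated as a neural ODE and integrated explicitly:

```python
import torch
import torch.nn as nn

class FlowVectorField(nn.Module):
    """Right-hand side f_theta of the flow equation d(phi)/dt = f_theta(phi)."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),  # translation-equivariant
            nn.Tanh(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, phi: torch.Tensor) -> torch.Tensor:
        return self.net(phi)

def integrate(field: nn.Module, phi0: torch.Tensor,
              t_final: float = 1.0, n_steps: int = 20) -> torch.Tensor:
    """Explicit Euler integration of the neural ODE; a simple stand-in for
    an adaptive solver. Gradients flow through the unrolled steps."""
    dt = t_final / n_steps
    phi = phi0
    for _ in range(n_steps):
        phi = phi + dt * field(phi)
    return phi

phi0 = torch.randn(16, 1, 8, 8)            # batch of 8x8 scalar configurations
flowed = integrate(FlowVectorField(), phi0)
```

For gauge theories, the convolutions would be replaced by gauge-equivariant L-CNN layers acting on link variables, so that the integrated flow remains gauge covariant.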
When training normalizing flows to approximate a Boltzmann probability distribution, the usual approach to calculating gradients, based on the "reparametrization trick", requires backpropagation through the action. In the case of more complicated actions, like the fermionic action in QCD, this raises performance issues as well as problems with numerical stability. We present an estimator based on the...
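Schematically (and not the talk's specific construction): with the reparametrization $\phi = f_\theta(z)$, the gradient of the reverse Kullback-Leibler objective,

$$
\nabla_\theta\, \mathbb{E}_{z}\Big[\, S\big(f_\theta(z)\big) + \log q_\theta\big(f_\theta(z)\big) \,\Big],
$$

involves $\partial S/\partial \phi$ propagated back through the flow, i.e. the action itself must be differentiated. Score-function (REINFORCE-type) estimators instead require only values of $S$ together with $\nabla_\theta \log q_\theta$, at the price of managing the estimator's variance.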
We explore continuous flows as generative models, focusing on their architectural flexibility in implementing equivariance, and test them on the $φ^4$ theory. Using this setup, we show how a machine-learning approach enables transfer between lattice sizes and allows us to learn for a continuous range of theory parameters at once. Investigating the sample efficiency of training, we find that...
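Continuous flows transport samples along an ODE in fictitious time, with the density evolving according to the instantaneous change-of-variables formula,

$$
\frac{d\phi}{dt} = f_\theta(\phi, t), \qquad \frac{d \log q(\phi)}{dt} = -\,\operatorname{Tr} \frac{\partial f_\theta}{\partial \phi},
$$

so symmetry constraints need to be imposed only on the vector field $f_\theta$; this is what makes transfer across lattice sizes and theory parameters comparatively natural.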
To deal with topological freezing in gauge systems, we develop a variant of the trivializing map proposed by Lüscher (2009). In particular, we consider the 2D U(1) pure gauge model, the simplest gauge system with nontrivial topology. The trivialization is divided into several stages, each of which corresponds to integrating out local degrees of freedom, and can thus be seen as a...
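In this model the relevant topology is carried by the geometric charge

$$
Q = \frac{1}{2\pi} \sum_x \arg P(x) \in \mathbb{Z}, \qquad \arg P(x) \in (-\pi, \pi],
$$

where $P(x)$ is the plaquette at site $x$; near the continuum limit local updates change $Q$ only rarely, which is the freezing the staged trivializing map is meant to cure.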
Lattice gauge-equivariant convolutional neural networks (LGE-CNNs) can be used to form arbitrarily shaped Wilson loops and can approximate any gauge-covariant or gauge-invariant function on the lattice. Here we use LGE-CNNs to describe fixed point (FP) actions which are based on inverse renormalization group transformations. FP actions are classically perfect, i.e., they have no lattice...
Reconstructing, or generating, Hamiltonian associated with high dimensional probability distributions starting from data is a central problem in machine learning and data sciences. We will present a method —The Wavelet Conditional Renormalization Group —that combines ideas from physics (renormalization group theory) and computer science (wavelets, Monte-Carlo sampling, etc.). The Wavelet...
In physically interesting limits, the numerical solution of the Dirac equation in an SU(3) gauge field suffers from critical slowing down, which can be overcome by state-of-the-art multigrid methods. We introduce gauge-equivariant neural networks that can learn the general paradigms of multigrid. These networks can perform as well as standard multigrid but are more general and therefore have...
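Schematically, the coarse-grid correction at the heart of any two-grid cycle for $D x = b$ is

$$
x \;\leftarrow\; x + P \left( P^\dagger D P \right)^{-1} P^\dagger \left( b - D x \right),
$$

where the prolongation $P$ is built from near-null vectors of the Dirac operator $D$; the gauge-equivariant networks are trained to realize such components while preserving gauge symmetry.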
The Restricted Boltzmann Machine (RBM) was introduced many years ago as an extension of the Boltzmann Machine (BM), or equivalently the inverse Ising problem. In the BM, one aims to infer the couplings of an Ising model such that it reproduces the statistics of a given dataset. Within such an approach, it is necessary to specify the structure of the interacting variables in order to correctly reproduce the...
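In the inverse Ising setting, maximizing the log-likelihood leads to the classic moment-matching update

$$
\Delta J_{ij} \;\propto\; \langle s_i s_j \rangle_{\text{data}} - \langle s_i s_j \rangle_{\text{model}},
$$

which presupposes a choice of which pairs $(i,j)$ interact; the RBM sidesteps this by mediating all interactions through hidden units.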
Restricted Boltzmann Machines (RBMs) are stochastic neural networks, known for learning a latent representation of the data and generating statistically similar new data. From the statistical physicist's point of view, an RBM is a highly familiar object: a disordered Ising spin Hamiltonian, in which the spins are distributed on a bipartite lattice. Such an energy function can be expanded as an...
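Explicitly, the RBM energy on visible units $v$ and hidden units $h$ takes the bipartite form

$$
E(v, h) = -\sum_i a_i v_i - \sum_\mu b_\mu h_\mu - \sum_{i,\mu} v_i W_{i\mu} h_\mu,
$$

and marginalizing over the hidden layer induces effective multi-spin couplings among the visible units, which is the expansion referred to above.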
In recent years, there has been a growing interest in the application of normalizing flows for sampling in lattice field theory. Successful achievements have been made in various domains, including scalar field theories, U(1) and SU(N) pure gauge theories, as well as fermionic gauge theories. Furthermore, recent developments have shown promising results for full Lattice QCD. Although these...
Normalizing Flows are a class of deep generative models recently proposed as a promising alternative to conventional Markov Chain Monte Carlo in lattice field theory simulations. Such architectures provide a new way to avoid the large autocorrelations that characterize Monte Carlo simulations close to the continuum limit. In this talk we explore the novel concept of Stochastic Normalizing...
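A minimal sketch of the flow-based sampling step shared by these approaches, under toy assumptions (a single-site quartic action, and a fixed Gaussian standing in for a trained flow; stochastic normalizing flows interleave such maps with stochastic updates):

```python
import math, random

def action(phi: float) -> float:
    """Toy quartic action S(phi) = phi^2/2 + phi^4/4."""
    return 0.5 * phi**2 + 0.25 * phi**4

def propose() -> tuple[float, float]:
    """Sample from the model q; a unit Gaussian stands in for a trained
    normalizing flow. Returns (phi, log q(phi))."""
    phi = random.gauss(0.0, 1.0)
    logq = -0.5 * phi**2 - 0.5 * math.log(2.0 * math.pi)
    return phi, logq

def metropolis_step(phi: float, logw: float) -> tuple[float, float]:
    """Independence Metropolis using importance weights log w = -S - log q,
    which corrects exactly for the mismatch between q and exp(-S)/Z."""
    phi_new, logq_new = propose()
    logw_new = -action(phi_new) - logq_new
    if math.log(random.random()) < logw_new - logw:
        return phi_new, logw_new
    return phi, logw

phi, logq = propose()
logw = -action(phi) - logq
for _ in range(1000):
    phi, logw = metropolis_step(phi, logw)
```

Because proposals are drawn independently of the current state, autocorrelations arise only through rejections, which is the property that makes flows attractive near the continuum limit.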
Finding interpretable order parameters for the detection of critical phenomena and self-similar behavior in and out of equilibrium is a challenging endeavour in non-Abelian gauge theories. Tailored to detect and quantify topological structures in noisy data, persistent homology allows for the construction of sensitive observables. Based on hybrid Monte Carlo simulations of SU(2) lattice gauge...
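As an illustration of the computational pipeline, assuming one uses the publicly available ripser package (the observables in the talk are of course built from gauge-field data rather than a random point cloud):

```python
import numpy as np
from ripser import ripser  # pip install ripser

# Stand-in data: in practice a point cloud or filtration is constructed
# from features of SU(2) lattice gauge configurations.
points = np.random.rand(200, 3)

# Persistence diagrams in homological degrees 0 and 1.
diagrams = ripser(points, maxdim=1)['dgms']

# Lifetimes (death - birth) of degree-1 features, i.e. loops; long-lived
# features indicate topological structure that is robust against noise.
h1 = diagrams[1]
lifetimes = h1[:, 1] - h1[:, 0]
print(lifetimes.max() if len(lifetimes) else 0.0)
```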
Recent advancements in large-scale computing and quantum simulation have revolutionized the study of strongly correlated many-body systems. These developments have granted us access to extensive data, including spatially resolved snapshots that contain comprehensive information about the entire many-body state. However, interpreting such data generally poses significant challenges, often...
A goal of unsupervised machine learning is to build representations of complex high-dimensional data, with simple relations to their properties. Such disentangled representations make it easier to interpret the significant latent factors of variation in the data, as well as to generate new data with desirable features. The methods for disentangling representations often rely on an adversarial...
An exploratory study of training a Gomoku agent (a generalization of tic-tac-toe) using pure Deep Reinforcement Learning. Different training approaches and neural network architectures are studied. The performance of the resulting agents is compared to tree-search-based competitors from the Gomocup.
Restricted Boltzmann Machines (RBMs) are well-known tools used in Machine Learning to learn probability distribution functions from data. We analyse RBMs with scalar fields on the nodes from the perspective of lattice field theory. Starting with the simplest case of Gaussian fields, we show that the RBM acts as an ultraviolet regulator, with the cutoff determined by either the number of hidden...
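The Gaussian case can be made fully explicit: with quadratic energy $E(v,h) = \tfrac{1}{2} v^{\mathsf T} v + \tfrac{1}{2} h^{\mathsf T} h - v^{\mathsf T} W h$ (a minimal convention; the talk's normalizations may differ), integrating out the hidden fields gives

$$
p(v) \;\propto\; \exp\!\Big( -\tfrac{1}{2}\, v^{\mathsf T} \big( \mathbb{1} - W W^{\mathsf T} \big)\, v \Big),
$$

so the visible propagator is $(\mathbb{1} - W W^{\mathsf T})^{-1}$, and the finite number of hidden nodes, i.e. the rank of $W$, bounds which modes can be dressed, acting as the ultraviolet regulator described above.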
Neural Network (NN) architectures at initialization define field theories. Certain large-width limits of architectures result in free field theories due to the Central Limit Theorem (CLT); deviations from the CLT via finite width and correlated, dissimilar NN parameters turn on field interactions. The Edgeworth method provides a way to construct NN field theory actions using connected Feynman diagrams,...
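Schematically, the infinite-width output is a Gaussian process with kernel $K$, i.e. a free theory, while finite-width cumulants generate interactions:

$$
S[\phi] \;=\; \frac{1}{2} \int dx\, dy\; \phi(x)\, K^{-1}(x, y)\, \phi(y) \;+\; \sum_{n \ge 3} \frac{1}{n!} \int g_n\, \phi^n,
$$

with couplings $g_n$ fixed by the connected $n$-point functions of the network output and vanishing as the width grows.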
Decades-long literature testifies to the success of statistical mechanics at clarifying fundamental aspects of deep learning. Yet the ultimate goal remains elusive: we lack a complete theoretical framework to predict practically relevant scores, such as the train and test accuracy, from knowledge of the training data. Huge simplifications arise in the infinite-width limit, where the number of...
In this talk, we present new neural network architectures inspired by effective field theories, designed to improve the scaling of the training cost for the generation of lattice field theory configurations using normalizing flows. Initially, we deal with poor acceptance rates in simulations of large lattices for scalar field theory in two dimensions and then discuss possible extensions to...
While approximations of trivializing field transformations for lattice path integrals were considered already by early practitioners, more recent efforts aimed at ergodicity restoration and thermodynamic integration formulate trivialization as a variational generative modeling problem. This enables the application of modern machine learning algorithms for optimization over expressive...
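In this formulation the map is trained by minimizing the reverse Kullback-Leibler divergence between the model density $q_\theta$ and the Boltzmann weight,

$$
\mathrm{KL}\big( q_\theta \,\big\|\, e^{-S}/Z \big) \;=\; \mathbb{E}_{\phi \sim q_\theta} \big[\, S(\phi) + \log q_\theta(\phi) \,\big] + \log Z,
$$

an objective that can be estimated from model samples alone, without configurations from the target theory.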