
FERMIONIC SUPERFLUIDS: THE STRUCTURE OF POLARIZED VORTICES

By

CHUNDE HUANG

A dissertation submitted in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

WASHINGTON STATE UNIVERSITY
Department of Physics and Astronomy

MAY 2020

© Copyright by CHUNDE HUANG, 2020
All Rights Reserved


To the Faculty of Washington State University:

The members of the Committee appointed to examine the dissertation of

CHUNDE HUANG find it satisfactory and recommend that it be accepted.

Michael M. Forbes, Ph.D., Chair

Mark G. Kuzyk, Ph.D.

Peter Engels, Ph.D.


ACKNOWLEDGMENTS

First of all, I would like to thank my supervisor, Michael Forbes, for all his encouragement, support, and kindness throughout my Ph.D., which was the most important chapter of my life in pursuing my dream, even though it was full of challenges, loneliness, and struggle. He, more than anyone, has made my graduate study in the U.S. a positive one. Professor Forbes's willingness to help students and enthusiasm for science made my experience in Pullman unforgettable. I have learned a tremendous amount about theoretical physics from him and benefited greatly from being his student. He spent many hours every week during my first half-year in his group teaching me how to do theoretical research, answered countless questions in person and online via Skype chat, and is always ready to help his students out of any difficulty at any time.

Thanks are also due to Professor Peter Engels for his support when I was working on my master's degree projects; he was very supportive and taught me many experimental skills. I would also like to thank Professor Mark Kuzyk for all his help and for sharing his insights on life. I want to extend my appreciation to Vandna Gokhroo and Qingze Guang for all their generous help when I was struggling with many fundamental theories.

I want to thank all my current collaborators in the Forbes group, specifically Khalid Hossain, Ryan Corbin, Ted Delikatny, Spatarshi Sarkar, Kyle Elsasser, and Praveer Tiwari; we have all helped and learned from one another. Ryan spent many days correcting the entire thesis; he is always kind-hearted and thoughtful. I would also like to thank my collaborators in Engels' group, specifically Amin Khamehchi, Chris Hamner, Maren Mossman, Thomas Bersano, and Shen Wei, who were so helpful during my time in the group. My thanks also go to the current members of Kuzyk's group, specifically Bojun Zhou, Ankita Bhuyan, Becka Oehler, and Zoya Ghorbani.

I would like to thank the former and current administrative and technical staff in the Physics department at WSU, including Sabreen Dodson, Kris Boreen, Laura Krueger, Tom Johnson, Robin Stratton, Thomas Busch, and Steve Langford. Their hard work has made my life and study here much easier and more productive.

I owe thanks to my family for their love and support. My parents gave me all their best and are always proud of me simply because I am their child. My younger brother has been enduring all the pressure and taking on the family responsibilities that were supposed to be on my shoulders.

I would like to thank some of my friends for their help and friendship during all the hardships: specifically Xiaoshan Huang (黄晓山), Qianheng Yang (杨谦恒), Shaodong Hou (侯绍东), Tiecheng Zhou (周铁成), and Lu Liu (刘露).

Finally, I would like to thank Professor Douglas Osheroff, who encouraged me to pursue my interest in physics many years ago. Without his encouragement and help, my dream would have withered away; I can never thank him enough. I am always very grateful for all the good people I have met in this beautiful country.

This research was supported in part by the Natural Science Foundation.


FERMIONIC SUPERFLUIDS: THE STRUCTURE OF POLARIZED VORTICES

Abstract

by Chunde Huang, Ph.D.
Washington State University

May 2020

Chair: Michael M. Forbes

In this dissertation, applications of mean-field theories to fermionic systems are introduced. The discussion starts with standard BCS theory and proceeds to the state-of-the-art density functional theory for fermionic superfluidity in the unitary regime. A connection is made between a polarized fermionic quantum vortex and Fulde-Ferrell (FF) states by assuming the local density approximation along the radial direction. It will be shown that vortices are, to a fairly significant degree, a realization of FF states. After that, I present a theory called local quantum friction that can be used to continuously remove energy from a fermionic system. The cooling methods are unitary, so they maintain the orthogonality of single-particle states that are distributed over many compute nodes of a supercomputer. This significantly reduces the time needed to prepare a quantum many-body simulation by reducing the communication among compute nodes. At the end of this thesis (appendix H.), the application of a digital micromirror device (DMD) as a light modulator for ultracold-atom experiments is introduced, including new methods used to generate arbitrary dipole potentials and holograms.


TABLE OF CONTENTS

Page

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

CHAPTERS

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Dissertation Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2. SUPERFLUIDITY THEORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.1 Electron-Phonon Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.2 Meissner Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.3 Perfect Conductor versus Superconductor . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.4 London Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.5 BCS Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.6 Theoretical Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.7 Off-Diagonal Long-Range Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

3. ASYMMETRIC SUPERFLUID LOCAL DENSITY APPROXIMATION 40

3.1 The Unitary Fermi Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.2 Thomas-Fermi Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.3 Superfluid Local Density Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.4 Regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3.5 Asymmetric Superfluid Local Density Approximation . . . . . . . . . . . . . . . 61


4. POLARIZED VORTICES, FULDE FERRELL STATES . . . . . . . . . . . . . . . . 65

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.2 Experimental Evidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.3 Structure of Polarized Vortices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.4 BCS Vortices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

4.5 2D Phase Diagram of FF States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

5. QUANTUM FRICTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

5.1 Fermionic DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

5.2 Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5.3 Procedure and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

5.4 BCS Cooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

5.5 Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

A. Rotating Frame Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

B. Matrix Representation of Kinetic Operator . . . . . . . . . . . . . . . . . . . . . . . . . 126

C. 2D Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

D. DVR Basis Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

E. Mean Field Decoupling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

F. UV, IR Errors and Bloch Twisting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

G. Vortices in Cylindrical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

H. Digital Mirror Device Based Optical and Spatial Laser Modulator . . . 160

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222


LIST OF TABLES

Table Page

1.1 Quark Mass and Charge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

4.1 Algorithm: Balanced Vortex Simulation Using BCS Theory . . . . . . . . . 75

5.1 Algorithm: Normalizing the Weight Factors . . . . . . . . . . . . . . . . . . . . . . . . 105

5.2 Initial States and Ground State Overlap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

0.1 Algorithm: Intensity Modulation Pattern Generation . . . . . . . . . . . . . . . 169

0.2 Algorithm: Intensity Modulation for Direct Imaging . . . . . . . . . . . . . . . . 173

0.3 Algorithm: DMD Patch Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

0.4 Algorithm: Phase Map Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

0.5 Algorithm: Gerchberg-Saxton Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

0.6 Algorithm: Binarized Gerchberg-Saxton Algorithm . . . . . . . . . . . . . . . . . 187


LIST OF FIGURES

Figure Page

1.1 Schematic Structure of a Neutron Star . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1 Electron Phonon Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.2 BCS Dispersion Relation with Positive µ . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.3 BCS Dispersion Relation with Negative µ . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.1 Coupled Pairs in BEC and BCS limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.2 Scattering Length a vs External Magnetic Field . . . . . . . . . . . . . . . . . . . . 45

4.1 Fulde Ferrell Fermi Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

4.2 Analogy of a Circular Slice of a Vortex to a Fulde Ferrell State. . . . . . 71

4.3 Effective Interaction as a Function of µ, δµ, ∆, and kc . . . . . . . . . . . . 73

4.4 Weakly Coupled Symmetric Vortex. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

4.5 Symmetric Vortex and Homogeneous Results in Radial Direction. . . . 78

4.6 Symmetric Vortex and Homogeneous Results in Radial Direction with Strong Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.7 Weakly Polarized Vortex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.8 Weakly Polarized Vortex and Homogeneous Results in Radial Direction 81

4.9 Strongly Polarized Vortex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

4.10 Strongly Polarized Vortex and Homogeneous Results in Radial Direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

4.11 Vortices in Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

4.12 2D Phase Diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.13 FF states on 2D Phase Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

5.1 Single-Particle Orbits on Compute Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.2 Cooling Potential in UV and IR limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103


5.3 Imaginary Cooling Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5.4 Unitary Cooling Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

5.5 Ground States for Different g . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

5.6 Initial States for Cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

5.7 Cooling efficiency comparison for Ec = 1.2E0 . . . . . . . . . . . . . . . . . . . . . . . 112

5.8 Cooling efficiency comparison for Ec = 1.1E0 . . . . . . . . . . . . . . . . . . . . . . . 113

5.9 Cooling efficiency comparison for Ec = 1.01E0 . . . . . . . . . . . . . . . . . . . . . . 114

5.10 An Initial State Has no Overlap With the Ground State . . . . . . . . . . . . 117

5.11 Two particles System with Initial States (|φ0〉, |φ1〉) . . . . . . . . . . . . . . . . 118

5.12 Two particles System with Initial States (|φ2〉, |φ4〉) . . . . . . . . . . . . . . . . 118

5.13 Ten particles System with Initial States (|φ0〉-|φ10〉) . . . . . . . . . . . . . . . . . 119

5.14 Exchange Cooling Potential Fragments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

0.1 Basis functions for Sinc DVR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

0.2 UV, IR, and Twisting Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

0.3 Different States of Two Micromirrors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

0.4 Pattern on a DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

0.5 Physical Geometry of a DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

0.6 Double Moving Potential Barriers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

0.7 Optical Setup for Direct Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

0.8 DMD Patch Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

0.9 Optical Setup for Fourier Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

0.10 Actual Optical Setup for Fourier Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . 176

0.11 Distorted Wave Front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

0.12 DMD Patches and Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177


0.13 Phase Map Unwrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

0.14 First Diffraction Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

0.15 Phase Correction On a Complex Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

0.16 Gerchberg Saxton Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

0.17 Binarized Gerchberg Saxton Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

0.18 Comparison of Gerchberg-Saxton (GS) algorithms . . . . . . . . . . . . . . . . . . 189

0.19 Ideal Gaussian Beam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

0.20 Ideal Gaussian Beam Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193


Dedication

To my mother, father, brother and niece


CHAPTER 1. INTRODUCTION

Superfluidity appears in a range of systems, from ultracold atomic gases [1] to neutron stars, and from electrons in metals to quark-gluon plasmas [2] and color superconductors [3]. A striking property of a superfluid is its lack of viscosity. In 1938, Kapitza published a one-page letter to Nature [4] on his study of the viscosity of liquid 4He, which undergoes a phase transition from the normal fluid state (helium I) to the superfluid state (helium II) when the temperature drops below Tλ = 2.17 K (the λ-point). He found that the upper limit of the viscosity of helium II is abnormally low.1 He then called helium II a 'superfluid' by analogy with superconductors [5]. In the same year, Allen and Misener reported a similar result from observing the flow of helium II through a long, thin tube [6].

Fermionic superfluidity requires the pairing of fermions, such as electrons in superconductors and atoms in superfluid He-3. In the mid-1950s, with the rapidly advancing understanding of superconductivity, John Bardeen, Leon Cooper, and John Robert Schrieffer constructed a theory describing the microscopic mechanism of superfluidity. Their theory was called Bardeen-Cooper-Schrieffer (BCS) theory [7]. In the BCS model, fermionic superfluidity is the effect of fermionic pairing through attractive interactions between

1 He found that helium II has a viscosity more than 10^4 times smaller than that of hydrogen gas, which had the lowest known viscosity at that time.


pairs of particles; such a pair is called a Cooper pair [8]. In standard BCS theory, the pairing takes place between two particles with opposite spins, which is called s-wave pairing. If the paired particles have the same spin, they cannot form s-wave pairs because of the Pauli exclusion principle, but they may form p-wave superfluidity [9]. In ultracold atomic gases [10], the pairing can happen between two hyperfine states (also called pseudo-spins) with proper couplings. In an s-wave superfluid, when the populations of the two species are equal, the entire system can condense into a superfluid no matter how small the attractive interaction is. However, if the populations differ, excess particles that cannot form pairs may lead to unusual phase transitions and states.

In condensed matter physics, the Fermi energy is the energy of the highest occupied quantum state (for a free Fermi gas, the Fermi energy equals the chemical potential, which is also defined as the energy required to add one more particle to the system). In reciprocal space (momentum space or k-space), the Fermi surface is the surface that separates occupied and unoccupied electron states [11]. A polarized system may exhibit interesting properties, as the Fermi surfaces deform due to varying chemical potentials and pairing strengths. For instance, when a giant star collapses, it may form a neutron star with a diameter on the order of just ten kilometers and a mass of no more than about three solar masses [12, 13]. The structure of a neutron star may be reminiscent of the structure of Earth, which contains various layers [14]; a schematic structure may look like fig. 1.1. The massive gravitational force induces such an enormous pressure in the core of a neutron star that electrons are squeezed into the protons to form neutrons. Motivated by the observation of glitches [15, 16] (sudden increases of the rotational frequency of a neutron star) and the properties of superfluidity, it is suspected that a neutron star may have a crust of superfluid neutrons and protons (because protons are charged particles, superfluid protons form a superconductor) [17]. If the neutrons are further crushed, they may form a quark superfluid in the core, where the pairing is between up, down, and strange quarks [18]. Because their masses are different (see table 1.1: the up quark weighs about 2.16 MeV, the down quark about 4.67 MeV, and the strange quark about 93 MeV [19]), the effective chemical potential for the strange quarks differs from that of the up and down quarks (strange quarks are much heavier). The asymmetric chemical potentials may lead to polarization in the superfluid, and the phases of polarized superfluids may also occur inside a neutron star, such as the exotic new phases in ultracold Fermi gases [20].
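For orientation, the free-Fermi-gas relations mentioned above can be made explicit (standard textbook results, quoted here rather than derived):

```latex
k_F = (3\pi^2 n)^{1/3}, \qquad
E_F = \frac{\hbar^2 k_F^2}{2m} = \frac{\hbar^2}{2m}\,(3\pi^2 n)^{2/3},
```

so an imbalance in the densities of the two species translates directly into mismatched Fermi surfaces, which is the starting point for the polarized phases discussed below.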


Quark     Mass                        Charge
up        2.16 (+0.49/−0.26) MeV      +2/3 e
down      4.67 (+0.48/−0.17) MeV      −1/3 e
strange   93 (+11/−5) MeV             −1/3 e

Table 1.1: Quark Mass and Charge

In 1962, Clogston published a very readable two-page paper [21] on the upper limiting magnetic field in hard superconductors, relating the maximum value of the field H0 to the critical temperature Tc, i.e., H0 ∝ Tc. When the magnetic field exceeds H0, balanced superconductivity cannot survive, so H0 is a critical field, also called the Clogston limit in some sources. At the Clogston limit, the Zeeman energy (energy splitting) due to H0 from Pauli paramagnetism (a type of magnetism in which only the electrons near the Fermi level contribute to the paramagnetic susceptibility) [22] exceeds the binding energy of a Cooper pair (∝ ∆). In more general cases, the Clogston limit can be understood as the value of the chemical potential difference at which the energy of the superfluid state equals that of the normal state, beyond which the normal state becomes energetically favorable. A salient question is whether any unusual states other than the conventional BCS state can accommodate superfluidity under the imbalanced densities caused by the strong field.
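The quantitative statement behind this (the standard Chandrasekhar-Clogston estimate, sketched here for orientation rather than derived) balances the superconducting condensation energy against the normal-state Pauli paramagnetic energy:

```latex
\tfrac{1}{2} N(0)\,\Delta^2 = \tfrac{1}{2}\,\chi_n H_0^2,
\qquad \chi_n = 2 N(0)\,\mu_B^2
\quad\Longrightarrow\quad
\mu_B H_0 = \frac{\Delta}{\sqrt{2}},
```

where N(0) is the density of states at the Fermi level. In the chemical-potential language used below, this corresponds to a critical difference δµ ≈ ∆/√2.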


[Figure 1.1 shows a cross-section of a neutron star with labeled layers: envelope, crust, outer core, and an inner core of neutrons and confined quarks.]

Figure 1.1: The structure of a neutron star [23]: The density increases as the radius goes deeper toward the center (the core). The envelope (outer crust) is a solid layer that contains ions, electrons, etc. The crust (inner crust) may be made of an ion lattice soaked in superfluid neutrons. The outer core may be made of proton and neutron superfluids and a dilute electron gas. The inner core is still unknown; it may be made of superfluids with pairing among up, down, and strange quarks.


In 1964, two groups independently predicted a superconducting state that can persist above the upper critical field. Fulde and Ferrell [24], and Larkin and Ovchinnikov [25], proposed a superconducting state in which the pairing field oscillates (changes sign) in space, the so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state, which will be discussed in chapter 4.

1.1 Dissertation Organization

This dissertation is organized as follows. Chapter 2 reviews the history of superconductivity. The standard BCS theory is introduced based on the mean-field approximation, in reasonable detail that is accessible to students. In Chapter 3, I present a density functional theory for superfluid states, the state-of-the-art theory for describing superfluidity in the unitary regime, and discuss the details of its formalism. In Chapter 4, the theories described in Chapter 2 and Chapter 3 are applied to study polarized vortices and exotic FFLO states, which is one of the main results of this dissertation. In Chapter 5, I present a unitary cooling theory that can be used to prepare a fermionic system in its ground state in theoretical simulations. The method is called local quantum friction, in contrast to imaginary time cooling, which is nonlocal because it requires all wavefunctions to reside on all other compute nodes in order to perform reorthogonalization. Imaginary time cooling can thus be very inefficient when thousands of wavefunctions are distributed over hundreds of nodes, due to the communication among these nodes. The new method can be useful in the study of fermionic superfluids and vortices.

Several appendices are included at the end of this dissertation. Some serve as introductions that discuss theoretical notation with detailed derivations. Others describe numerical details for representing quantum operators, such as kinetic operators, in different bases. Appendix D. introduces DVR bases that can be used to simulate 2D and 3D spherical systems, such as those in nuclear physics and quantum vortices. In appendix F., the notions of UV and IR errors and the Bloch twisting method are briefly discussed. In appendix H., I present several experimental techniques for generating arbitrary optical dipole potentials using a DMD. In addition, two new algorithms used to modulate laser intensity and phase profiles are introduced. A modified Gerchberg-Saxton algorithm is presented, which takes the binary nature of a DMD into account and converges to the desired image pattern much faster. These techniques can be used in ultracold-atom experiments to generate perfect Gaussian beams, optical lattices, atom circuits, and vortices, and can be useful for broader research. The work described in this appendix was finished for my master's degree (non-thesis), supervised by Professor Peter Engels, and has not been published anywhere.


CHAPTER 2. SUPERFLUIDITY THEORY

In 1911, Kamerlingh Onnes [5] discovered superconductivity when he was studying the transport properties of mercury (Hg) at very low temperatures. He found that when the temperature was lowered to the vicinity of 4.2 kelvin (K), the resistivity of Hg became too low to detect, which was quite surprising. From the classical point of view, the resistance is expected to go to zero only at zero temperature, where all microscopic particles should be at rest if no external force is applied. The vanishing of resistance at finite temperatures is unexpected, as one might think it should go to zero smoothly as T → 0. A theoretical model called BCS theory was constructed to describe superconductivity; it requires an attractive interaction between particles and will be introduced pedagogically in this chapter. At first sight the theory may seem counterintuitive, since electrons repel each other through the Coulomb potential. However, in a conductor, the effect can be mediated by positive charges. The periodic arrangement of electrons and positively charged nucleons renders the overall lattice nearly neutral. Thus, on the microscopic level, the interaction and dynamics among electrons in the presence of the positive ion lattice become more subtle.


2.1 Electron-Phonon Interaction

Inside a metal, the protons form a 3D lattice while the valence electrons, which can move freely, form an electron gas. Any electron travelling through the metal experiences an attractive interaction with the protons and a repulsive interaction with the other electrons. Due to the presence of the positive lattice, which is relatively fixed in position, the overall interaction between two electrons near the Fermi surface can be weakly attractive. One classical picture [26] to explain this phenomenon is the following: when an electron travels through the lattice, the attractive force between the protons and the electron contracts the local lattice. After the electron is gone, the squeezed lattice creates a higher local potential due to the density change, which generates an overall attractive force on the nearby electrons; this can be equivalently interpreted as an attractive interaction among those electrons. The effective attractive interaction binds two electrons together to form a so-called Cooper pair. This process may happen at all temperatures; however, it is negligible when the temperature is higher than a critical value, above which thermal fluctuations render the Cooper pairs unstable. The lattice bounces back from the contraction once the electron is gone, while the next electron starts the cycle again, which makes the lattice points oscillate about their equilibrium positions. The oscillation of the lattice creates a time-dependent density modulation, much like the sound created by the vibration of macroscopic objects, so the excited vibrations are called phonons, and the attractive interaction between a pair of electrons mediated by phonons is called the electron-phonon interaction [27].

[Figure 2.1 shows an electron with momentum k moving through a positive ion lattice and contracting it locally, attracting a second electron with momentum −k.]

Figure 2.1: An electron (with momentum k) travelling through the positive lattice will contract its local lattice even after it is gone (region inside the red-dashed circle). The increased local positive charge density will attract other electrons nearby (the one with momentum −k).

2.2 Meissner Effect

From the viewpoint of classical electromagnetism, materials can be attracted or repelled by an external magnetic field. Diamagnetic materials (like water) are repelled by a magnetic field, while ferromagnetic materials are attracted by it. In most materials, diamagnetism is a weak effect that can only be detected by sensitive laboratory instruments, but superconductors exhibit strong diamagnetism below Tc. In 1933, Meissner discovered that the magnetic flux density B is expelled when the temperature falls below the critical transition temperature Tc. Below that temperature, the material is a superconductor and also acts as a strong diamagnet, because it expels a magnetic field entirely from its interior. The property that a superconductor becomes a perfect diamagnet is called the Meissner effect. In a bulk superconductor, it turns out that the magnetic flux through a closed loop or hole is quantized. The smallest discrete "quantum" is called the magnetic flux quantum [28, 29]; it is denoted Φ0 and has a value [30] that is a combination of fundamental physical constants:

\Phi_0 = \frac{h}{2|e|} \approx 2.067833848\ldots \times 10^{-15}\ \mathrm{Wb} \qquad (2.1)

where h is the Planck constant, and e is the electron charge.
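As a quick numerical sanity check of eq. (2.1) (an illustration, not part of the original derivation), Φ0 can be computed directly from the values of h and e, which are exact in the 2019 SI redefinition:

```python
# Magnetic flux quantum: Phi_0 = h / (2|e|), cf. eq. (2.1).
h = 6.62607015e-34   # Planck constant [J s] (exact in SI)
e = 1.602176634e-19  # elementary charge [C] (exact in SI)

phi_0 = h / (2 * e)  # magnetic flux quantum [Wb]
print(f"Phi_0 = {phi_0:.9e} Wb")  # -> Phi_0 = 2.067833848e-15 Wb
```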

There are two types of superconductors that expel magnetic flux. In type-I superconductors, no intermediate state appears when the material transitions from the superconducting state to the normal state as the magnetic field increases. In type-II superconductors, intermediate states called mixed states appear before the material fully transitions to the normal state. In such a mixed state, the magnetic field partially penetrates the body of the material, which leads to the formation of an array of tubes, each of which carries at least one magnetic flux quantum (an integer times Φ0).


2.3 Perfect Conductor versus Superconductor

In terms of conductivity, a perfect conductor is a material that can transport currents without electrical resistance. If this is the only requirement for a perfect conductor, we can derive some properties of such a material (some may argue that a perfect conductor should preserve zero resistance under arbitrarily strong magnetic fields and arbitrarily high operating temperatures, but such an absolutely perfect conductor probably does not exist). Let us study a conductor using Maxwell's equations [31], recalling the relationship between B and H in electromagnetism:

~B = ~H + 4π ~M. (2.2)

Here, ~M is the magnetization or magnetic polarization defined as the density of

magnetic dipole moments in a magnetic material:

~M = d~m/dV (2.3)

where ~m is the magnetic moment. For a superconductor (Type-I), ~B = 0 inside the

material, and the magnetic susceptibility can be computed as:

χ = ∂ ~M/∂ ~H = −1/(4π) (2.4)

For a material to be a perfect conductor, its electric charges should accelerate freely when an electric field is applied. Invoking Newton's second law:

m d²~r/dt² = −e~E (2.5)

The current can be derived as (where n is the charged particle density):

~J = −en d~r/dt (2.6)

Combining with the previous equation yields:

∂~J/∂t = (ne²/m) ~E (2.7)

Now let us invoke the Maxwell equations to study the case of perfect conductivity.

Faraday’s law can be written as:

∇× ~E = −(1/c) ∂~B/∂t (2.8)

Substituting ~E with the previous relation gives:

∇× ∂~J/∂t = −(ne²/(cm)) ∂~B/∂t (2.9)

Together with Ampère's law, which reads:

∇× ~B = (4π/c) ~J (2.10)

and we can get:

∇×∇× ∂~B/∂t = −(4πne²/(mc²)) ∂~B/∂t

∇(∇ · ∂~B/∂t) − ∇² ∂~B/∂t = −(4πne²/(mc²)) ∂~B/∂t

∇² ∂~B/∂t = (4πne²/(mc²)) ∂~B/∂t

∇² ∂~B/∂t = λ⁻² ∂~B/∂t (2.11)


where λ = √(mc²/(4πne²)) is the penetration depth.

In the above derivation, we used the relation ∇ · ~B = 0 and the identity:

∇×∇× ~c = ∇(∇ · ~c)−∇2~c (2.12)

The last line of the previous derivation can be solved easily to get:

∂~B/∂t = (∂~B/∂t)|_{r=0} e^{−r/λ} (2.13)

where r is measured from the surface down into the bulk. This simply means that the rate of change of the magnetic field, ∂~B/∂t, decays exponentially with r and goes to zero inside the material, i.e.:

∂~B/∂t = 0, (2.14)

which implies the magnetic field should be constant, but not necessarily zero, inside a perfect conductor. So from classical theory we cannot get the Meissner effect (B = 0) in a superconductor. What we can conclude is that a superconductor is a perfect conductor, but a perfect conductor is not necessarily a superconductor. Since the Meissner effect is a property of a superconductor, a superconductor is a flux-expelling medium, whereas the last equation implies that a perfect conductor is merely a flux-conserving material.
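The penetration depth λ sets a concrete length scale for how far field changes reach into the material. A minimal numerical sketch (the Gaussian-units expression λ = √(mc²/(4πne²)) is equivalent to λ = √(m/(µ0 n e²)) in SI; the electron density below is an illustrative free-electron value roughly appropriate for copper, not a number from the text):

```python
import math

# London penetration depth in SI units: lambda = sqrt(m / (mu_0 n e^2)),
# equivalent to the Gaussian-units form sqrt(m c^2 / (4 pi n e^2)).
m = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19    # elementary charge, C
mu_0 = 4e-7 * math.pi  # vacuum permeability, N/A^2
n = 8.5e28             # illustrative conduction-electron density, m^-3

lam = math.sqrt(m / (mu_0 * n * e**2))
print(lam)  # a few tens of nanometres
```

The exponential decay e^{−r/λ} therefore confines changes of the field to a surface layer only tens of nanometres thick.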


2.4 London Equation

A superconductor has zero flux inside. To obtain the condition B = 0, we can look back at the previous derivation: if we replace ∂~B/∂t in the last line of eq. (2.11) with ~B, we get the desired result, i.e.:

∇2 ~B = λ−2 ~B (2.15)

This is the phenomenological model proposed by the London brothers, from which the magnetic field can be solved:

~B = ~B|_{r=0} e^{−r/λ} (2.16)

which means the internal flux of the conductor decays to zero, correctly capturing the Meissner effect. Combined with Ampère's law:

∇× ~B = (4π/c) ~J (2.17)

we can get the relation between ~J and ~B:

∇× ~J = −(ne²/(mc)) ~B (2.18)

Recall that ~B = ∇× ~A, where ~A is the vector potential, and n is the charge density.

The above equation can be rearranged to:

~J = −(ne²/(mc)) ~A (2.19)


This is the London equation, which cannot be formally derived from classical theories [32], but can be recovered from quantum theory. To gain some insight into the equation, take the time derivative of the current:

∂~J/∂t = −(ne²/(mc)) ∂~A/∂t = (ne²/(mc)) (~E + ∇φ)  (Lorenz gauge) (2.20)

The result can be interpreted as follows: in a perfect conductor there is no resistance, so electrons experience a uniform force and thus a uniform acceleration, which leads to a constantly increasing current, because the current is proportional to the electron velocity. Once a current is established, it persists even if the external field is turned off, since the speed of a charged particle remains unchanged.

2.5 BCS Theory

In 1957, almost half a century after Kamerlingh Onnes's original discovery, Bardeen, Cooper, and Schrieffer came up with a theory [7] that answered the mystery of superconductivity from the microscopic view. It became known as BCS theory. Several key experimental discoveries contributed to our understanding of the properties of a superconductor, such as the isotope effect, in which the transition temperature Tc scales with the inverse square root of the isotope mass. As the mass largely comes from the lattice ions, this implies that the lattice plays an essential role in the formation of the superconducting state. Another observation is that the specific heat at low temperature decays exponentially, which suggests that the energy spectrum of a superconductor must be gapped while that of a regular metal is not.

2.6 Theoretical Formalism

2.6.1 Hartree-Fock-Bogoliubov Approximation

We start with standard BCS theory, which is the Hartree-Fock-Bogoliubov (HFB)

approximation [33] to the following family of Hamiltonians:

H = Σ_{σ=↑,↓} ∫ ψ†σ(x) (−(ℏ²/2m)∇² + Vσ(x)) ψσ(x) d³x + (1/2) ∫∫ ψ†↑(x) ψ†↓(x′) V(x, x′) ψ↓(x′) ψ↑(x) d³x′ d³x (2.21)

where σ = {↑, ↓} is the spin, and ψ†σ(x) and ψσ(x) are fermionic field operators that create or destroy a fermion with spin σ at position x. V(x, x′) is the interaction potential, and V↑(x) and V↓(x) are the effective single-particle potentials that include the chemical potentials µ = {µ↑, µ↓} for the different spin species. It is convenient to convert from the spatial representation to the momentum representation using the Fourier transform


of operators and potentials:

ψ†σ(x) = Σ_k ⟨k|x⟩ c†kσ = Σ_k e^{−ikx} c†kσ

ψσ(x) = Σ_k ⟨x|k⟩ ckσ = Σ_k e^{ikx} ckσ

Vkk′ = ∫∫ V(x, x′) e^{ikx + ik′x′} d³x d³x′ (2.22)

Substituting into eq. (2.21) and with some rearrangement yields:

H = Σ_{kσ} ξk c†kσ ckσ + (1/N) Σ_{kk′} Vkk′ c†k↑ c†−k↓ c−k′↓ ck′↑ (2.23)

where N is the number of electrons and k is the momentum (wavenumber); the operator c†kσ creates a particle with momentum k and spin σ, while ckσ destroys one. The chemical potential is already included in ξk, defined as:

ξk = εk − µ,  εk = ℏ²k²/(2m) (2.24)

where εk is the kinetic energy. The second term of eq. (2.23) describes the interaction between two electrons with opposite momenta and spins. In second-quantization language, it destroys a Cooper pair with opposite momenta and spins and subsequently creates another pair.

2.6.2 Theoretical Challenge

In the previous discussion, eq. (2.21) and eq. (2.23) are exact if we only take the two-body interaction into account (in a real system there are also three-body and four-body interactions², which are much weaker than the two-body interaction).

interaction). For a many-body Fermi system, the overall wavefunction has to satisfy

the anti-symmetry property, and it can be constructed from single particle wave-

functions in a determinant form [34]. Let all the determinants form a complete set

(|Ψ1〉 , |Ψ2〉 , . . . |Ψn〉). Then the overall wave function with interactions can be ex-

pressed as the superposition of these complete states:

|Ψ⟩ = Σ_{i=1}^{n} ci |Ψi⟩ (2.25)

Let us consider a small system with twenty particles in a 3D box. Each single-particle wavefunction can be represented as a 32 × 32 × 32 array³; a many-body wavefunction is then represented by a (32 × 32 × 32)²⁰ array of complex numbers. If every number takes 16 bytes of memory, the total memory needed to store a many-body wavefunction is:

M = (32 × 32 × 32)²⁰ × 16 / (1024 × 1024 × 1024) ≈ 3 × 10⁸² GB (2.26)
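The estimate in eq. (2.26) is easy to reproduce (Python integers do not overflow, so the huge power is exact):

```python
# Memory estimate of eq. (2.26): 20 particles on a 32^3 grid,
# 16 bytes per complex number, converted to gigabytes.
grid = 32 * 32 * 32
particles = 20
total_bytes = grid ** particles * 16
total_gb = total_bytes / (1024 * 1024 * 1024)
print(f"{total_gb:.2e} GB")  # ~3e82 GB
```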

Such an enormous number of bytes is larger than the number of atoms in the visible universe and is simply inaccessible on classical computing architectures. A quantum computer may have the capacity to address such a challenge. However, classical data may not be equivalent to quantum information, and we still do not have

²Also N-body interactions.
³The choice of 32 is arbitrary here. In a real and meaningful simulation, the number of grid points should be properly checked against the UV and IR errors; see appendix F.


a practical quantum computer that works for this purpose yet. Nevertheless, experimentally, ultracold atoms may serve as such a “quantum computer” because they can be set up to simulate a quantum system. Due to these restrictions, numerically solving the full many-body wavefunction directly is essentially prohibitive.

2.6.3 DFT

As discussed above, the problem with the HFB method is that it requires huge computational resources, which makes it impossible to apply the theory exactly in a practical simulation. Density functional theory (DFT) provides an appealing alternative. The idea of DFT originated with Hohenberg and Kohn [35] and Kohn and Sham [36]. In Hohenberg and Kohn's 1964 paper [35], they

proved that: In an interacting system, there is a universal functional of the density

F [n(r)], which is independent of the external potential V (r), from which the ground

state energy can be computed by minimizing its value as a functional of the density:

E_{ground state} = ∫ V(r) n(r) d³r + F[n(r)] (2.27)

It says that the ground state energy of a many-body system is uniquely determined by a density functional that depends only on the spatial density. However, it is hard to obtain the functional form even for free fermionic systems. In 1965, Kohn and Sham devised a simple method [36] to treat an inhomogeneous interacting system in a self-consistent manner through an equivalent formulation using single-particle orbitals, which retains the exact nature of DFT. They converted an interacting system with a real potential into a non-interacting system with an effective single-particle potential and constructed a determinant as the ground state wavefunction, which takes the Pauli exclusion principle into account. It is in this publication that the local density approximation (LDA) was introduced: a set of approximations to the exchange-correlation energy functional that depend only on the local density and include no derivative terms, which is why it is called 'local'. Bogoliubov-de Gennes (BdG) (or HFB) is an example.

In Kohn and Sham's method, eq. (2.27) can be expressed more explicitly. Replacing F[n(r)] with its expansion gives:

E_{ground state} = ∫ V(r) n(r) d³r + T[n(r)] + U[n(r)] (2.28)

where T[n(r)] is the kinetic term and U[n(r)] contains all the interaction terms. The two-body interaction term is called the Hartree term, while all higher-order many-body interaction terms are called the exchange-correlation. The many-body interaction term U[n(r)] can be written as:

U[n(r)] = ∫∫ V(r, r′) n(r) n(r′) d³r d³r′ (Hartree term) + Eexc[n(r)] (exchange-correlation) (2.29)

The problem is that the exchange-correlation part remains unknown and has to be

approximated.


2.6.4 Mean Field Approach

If the interaction term is zero, we have a free Fermi gas, which is exactly solvable within the Kohn-Sham model. Perturbation methods may be useful if the interaction is small. Another observation is that if we can reduce the interaction from a term with four operators to some combination of quadratic terms, we can transform the Hamiltonian into a picture of non-interacting quasiparticles. One way to solve eq. (2.23) is to use the variational mean-field method [33, 37]. Alternatively, we can perform the mean-field decoupling (see appendix E):

c†k,↑ c†−k,↓ c−k′,↓ ck′,↑ ≈ ⟨c†k,↑ c†−k,↓⟩ c−k′,↓ ck′,↑ + c†k,↑ c†−k,↓ ⟨c−k′,↓ ck′,↑⟩ − ⟨c†k,↑ c†−k,↓⟩ ⟨c−k′,↓ ck′,↑⟩ (2.30)

Plug the result into eq. (2.23) and define:

∆k = −(1/N) Σ_{k′} Vkk′ ⟨c−k′,↓ ck′,↑⟩ (2.31)

which is called the gap equation. With a little more rearrangement, we obtain the mean-field effective Hamiltonian in momentum space:

H = Σ_{kσ} ξkσ c†kσ ckσ − Σ_k (∆k c†k,↑ c†−k,↓ + ∆*k c−k,↓ ck,↑) + Σ_k ∆k ⟨c†k,↑ c†−k,↓⟩ (2.32)


This Hamiltonian can be expressed as a matrix:

H = Σ_k (c†k,↑, c−k,↓) ( ξk↑  ∆k ; ∆*k  −ξ−k↓ ) (ck,↑, c†−k,↓)ᵀ + Σ_k ∆k ⟨c†k,↑ c†−k,↓⟩ + Σ_k ξ−k↓ (2.33)

where the last term is a constant energy shift coming from applying the anticommutation relation to the spin-down particles:

Σ_k ξ−k↓ c†−k,↓ c−k,↓ = Σ_k ξ−k↓ − Σ_k ξ−k↓ c−k,↓ c†−k,↓ (2.34)

The constant term will be ignored in subsequent calculations, but it must be taken into account if the total energy of the BCS state is compared to that of other competing states.

2.6.5 Bogoliubov transformation

The matrix above is not diagonal, which means we cannot read the energy spectrum directly from the diagonal terms. However, we can diagonalize it to get the eigenvalues and eigenvectors, and the eigenvectors can be used to construct the transformation matrix. Here I will perform the calculation explicitly to find the transformation matrix elements. The 2 × 2 matrices for different k are independent of each other and thus can be diagonalized in a new basis. Denoting the new basis operators as γk,↑ and γ†−k,↓, we have the transformation relationship:

( γk,↑, γ†−k,↓ )ᵀ = Bk ( ck,↑, c†−k,↓ )ᵀ = ( B11 ck,↑ + B12 c†−k,↓, B21 ck,↑ + B22 c†−k,↓ )ᵀ (2.35)

where Bk is the transformation matrix for wavevector k. We impose the requirements that in the new basis the matrix representation is diagonal and that B is canonical, i.e., it must preserve the fermionic anticommutation relations:

{γσµ, γσν} = 0,  {γ†σµ, γ†σν} = 0,  {γ†σµ, γσν} = δµν (2.36)

Invoking the anticommutation relations yields:

1 = {γk,↑, γ†k,↑} = {B11 ck,↑ + B12 c†−k,↓, B11 c†k,↑ + B12 c−k,↓} = B11² + B12²

1 = {γ−k,↓, γ†−k,↓} = {B21 c†k,↑ + B22 c−k,↓, B21 ck,↑ + B22 c†−k,↓} = B21² + B22²

0 = {γk,↑, γ−k,↓} = {B11 ck,↑ + B12 c†−k,↓, B21 c†k,↑ + B22 c−k,↓} = B11 B21 + B12 B22 (2.37)


With some algebra, the transformation matrix Bk can be chosen as:

Bk = ( uk  −vk ; vk  uk ) (2.38)

where uk, vk ∈ C satisfy |uk|² + |vk|² = 1. Expanding the matrix yields:

γk,↑ = uk ck,↑ − vk c†−k,↓
γ†−k,↓ = u*k c†−k,↓ + v*k ck,↑ (2.39)

By taking the inverse transform, we get:

ck,↑ = u*k γk,↑ + vk γ†−k,↓
c†−k,↓ = uk γ†−k,↓ − v*k γk,↑ (2.40)

The transformation to the new basis is called a Bogoliubov transformation [38, 39]. The new operator γ†k,↑ creates a Bogoliubov quasiparticle that is a superposition of a particle and a hole: |uk|² is the probability that the quasiparticle with momentum k and spin ↑ is in the particle state, while |vk|² is the probability of the hole state.
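The canonical nature of this transformation can be verified explicitly by representing the two modes (k,↑) and (−k,↓) as 4×4 Jordan-Wigner matrices. This is an illustrative numerical check with real u, v (the matrix construction is an assumption of this sketch, not used elsewhere in the text):

```python
import numpy as np

# Two fermionic modes as 4x4 matrices (Jordan-Wigner construction).
c = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
I2, Z = np.eye(2), np.diag([1.0, -1.0])  # Z carries the fermionic sign
c_up = np.kron(c, I2)                    # c_{k,up}
c_dn = np.kron(Z, c)                     # c_{-k,down}

u, v = 0.6, 0.8                          # real, with u^2 + v^2 = 1
g_up = u * c_up - v * c_dn.T             # gamma_{k,up},   eq. (2.39)
g_dn = u * c_dn + v * c_up.T             # gamma_{-k,down}

# Canonical anticommutation relations, eq. (2.36):
assert np.allclose(g_up @ g_up.T + g_up.T @ g_up, np.eye(4))
assert np.allclose(g_dn @ g_dn.T + g_dn.T @ g_dn, np.eye(4))
assert np.allclose(g_up @ g_dn + g_dn @ g_up, np.zeros((4, 4)))
print("Bogoliubov transformation is canonical")
```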

Substituting ck,↑ and c−k,↓ with γk,↑ and γ†−k,↓, and requiring that the Hamiltonian be diagonal in the new basis, gives the ratio:

vk/uk = (√(ξk² + |∆k|²) − ξk) / ∆*k (2.41)


where ξk = (ξk↑ + ξ−k↓)/2. Together with the condition |uk|² + |vk|² = 1, one finds:

|uk|² = (1/2)(1 + ξk/Ek)
|vk|² = (1/2)(1 − ξk/Ek)
uk vk = ∆k/(2Ek)
|uk|² − |vk|² = ξk/Ek (2.42)

where

Ek = √(ξk² + |∆k|²) (2.43)
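The relations in eqs. (2.42)-(2.43) are straightforward to check numerically for a real, positive gap (the values of ξ and ∆ below are arbitrary illustrative choices):

```python
import math

def coherence_factors(xi, delta):
    """Return |u|^2, |v|^2 and E for eqs. (2.42)-(2.43)."""
    E = math.sqrt(xi**2 + delta**2)   # quasiparticle energy, eq. (2.43)
    u2 = 0.5 * (1 + xi / E)
    v2 = 0.5 * (1 - xi / E)
    return u2, v2, E

xi, delta = 0.3, 0.5                  # illustrative values
u2, v2, E = coherence_factors(xi, delta)
assert abs(u2 + v2 - 1) < 1e-12                          # normalization
assert abs(u2 - v2 - xi / E) < 1e-12                     # |u|^2 - |v|^2
assert abs(math.sqrt(u2 * v2) - delta / (2 * E)) < 1e-12 # u v = delta/(2E)
```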

Since both u and v are complex numbers, they can carry an overall phase difference. Without loss of generality, we may let u be real while v remains complex. It is then sometimes handy to write uk and vk as trigonometric functions:

uk = sin βk,  vk = e^{iθ} cos βk (2.44)

where βk can be determined from eq. (2.41) or eq. (2.42). The effective Hamiltonian

can be simplified to:

H = Σ_{kσ} Ek γ†kσ γkσ + E0 (2.45)

where

E0 = Σ_k (ξk − Ek + ∆k ⟨c†k,↑ c†−k,↓⟩) (2.46)

is the ground state energy. In the γ basis, particles originally coupled by the pairing interaction can now be described as free quasiparticles.


2.6.6 BCS Ground State

The BCS ground state is the vacuum of quasiparticle operators, i.e.,

γkσ |ΨBCS〉 = 0 for all k and σ (2.47)

We want to relate the BCS ground state to the bare vacuum of particles |0〉. It is

found that the BCS ground state can be written as [33, 34]:

|ΨBCS⟩ = Π_k (uk + vk c†k,↑ c†−k,↓) |0⟩ (2.48)

which may be interpreted as follows: for each k, we either have a Cooper pair, with amplitude vk, or no Cooper pair, with amplitude uk.
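For a single k mode, eq. (2.47) can be checked explicitly: the two-mode state of eq. (2.48) is annihilated by both quasiparticle operators. A small self-contained sketch, with the two fermionic modes represented as 4×4 Jordan-Wigner matrices and illustrative real u, v (a construction assumed for this check, not part of the text):

```python
import numpy as np

# Two fermionic modes (k,up) and (-k,down) as 4x4 matrices.
c = np.array([[0.0, 1.0], [0.0, 0.0]])
I2, Z = np.eye(2), np.diag([1.0, -1.0])
c_up, c_dn = np.kron(c, I2), np.kron(Z, c)

u, v = 0.6, 0.8                          # u^2 + v^2 = 1
g_up = u * c_up - v * c_dn.T             # gamma_{k,up}
g_dn = u * c_dn + v * c_up.T             # gamma_{-k,down}

vac = np.array([1.0, 0.0, 0.0, 0.0])     # |0>: both modes empty
psi = u * vac + v * (c_up.T @ (c_dn.T @ vac))  # eq. (2.48), one k mode

assert np.allclose(g_up @ psi, 0)        # eq. (2.47)
assert np.allclose(g_dn @ psi, 0)
print("the BCS state is the quasiparticle vacuum")
```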

2.6.7 Gapped and Gapless Materials

At the Fermi energy, where ξk = εk − µ = 0, we have Ek = ∆k (see eq. (2.43)); then (see eq. (2.46)):

E0 = Σ_k ∆k ⟨c†k,↑ c†−k,↓⟩ = constant (2.49)

Then from eq. (2.45), we may see that the minimum energy to create a pair of quasi-

particles with momentum k is:

Ek = Etotal − E0 = (Σ_δ Ek + E0) − E0 = Σ_δ Ek = 2∆k (2.50)

where the sum Σ_δ runs over the two created quasiparticles.


That is why the expression for ∆k is also called the gap equation. A system with a nonzero gap in its energy spectrum is a gapped system; otherwise, it is called gapless. For example, a conventional conductor is a gapless material because exciting a particle-hole pair near the Fermi surface requires arbitrarily little energy.

Finally, we can compute 〈ck,↑c−k,↓〉 by applying eq. (2.40):

⟨ck,↑ c−k,↓⟩ = −u*k vk (⟨γ−k,↓ γ†−k,↓⟩ − ⟨γ†k,↑ γk,↑⟩)
= −u*k vk (1 − ⟨γ†−k,↓ γ−k,↓⟩ − ⟨γ†k,↑ γk,↑⟩)
= −u*k vk (1 − n−k,↓ − nk,↑) (2.51)

where n−k,↓ = 〈γ†−k,↓γ−k,↓〉 and nk,↑ = 〈γ†k,↑γk,↑〉 are the quasiparticle densities.

2.6.8 Quasiparticle Energy Spectra

Diagonalization of the Hamiltonian

From the previous subsection, we know that the transformation matrix Bk is unitary, which is enough to ensure the canonical structure of the anticommutation relations. Thus it is safe to solve the eigenvalue problem of the Hamiltonian in eq. (2.33). It is not hard to find that the column vectors of Bk are eigenvectors of the matrix in eq. (2.33). The eigenvalues are:

E±,k = (ξk,↑ − ξ−k,↓)/2 ± √( ((ξk,↑ + ξ−k,↓)/2)² + ∆² ) (2.52)


Then the Hamiltonian can be written in the new basis in the form:

H = Σ_k (γ†k,↑, γ−k,↓) ( E+,k  0 ; 0  E−,−k ) ( γk,↑, γ†−k,↓ )ᵀ + Σ_k ∆k ⟨c†k,↑ c†−k,↓⟩ (2.53)

In a homogeneous system⁴, ∆k is the same for all momenta k. Then the last term in the above equation can be written as:

Σ_k ∆k ⟨c†k,↑ c†−k,↓⟩ = ∆ Σ_k ⟨c†k,↑ c†−k,↓⟩ = ∆ν (2.54)

where ν = Σ_k ⟨c†k,↑ c†−k,↓⟩ is called the anomalous density. In some situations, such as ultracold-atom systems where s-wave scattering dominates, ∆ and ν are related by ∆ = gν, where g characterizes the interaction strength; the Hamiltonian can then be put as:

H = Σ_k (γ†k,↑, γ−k,↓) ( E+,k  0 ; 0  E−,k ) ( γk,↑, γ†−k,↓ )ᵀ + ∆²/g (2.55)

In some derivations, g carries an explicit minus sign. Here, a negative g means the interaction is attractive, and a positive g means it is repulsive.

Particle Number

The number of particles in different spin states in the original basis ck,σ can be

computed. However, in a numerical calculation, we may prefer to compute the result

⁴In a homogeneous system, the momentum k is a good quantum number.


in the γ basis. Here we derive how to calculate the densities in the c basis given the results from the γ basis.

To compute the spin-up population, we need the expectation value of c†k,↑ck,↑ summed over all k in momentum space, i.e., we need to calculate:

N↑ = Σ_k ⟨c†k,↑ ck,↑⟩
= Σ_k ⟨(uk γ†k,↑ + vk γ−k,↓)(uk γk,↑ + vk γ†−k,↓)⟩
= Σ_k ⟨u²k γ†k,↑ γk,↑ + v²k γ−k,↓ γ†−k,↓ + uk vk γ−k,↓ γk,↑ + uk vk γ†k,↑ γ†−k,↓⟩
= Σ_k ⟨u²k γ†k,↑ γk,↑ + v²k γ−k,↓ γ†−k,↓⟩
= Σ_k u²k ⟨γ†k,↑ γk,↑⟩ + v²k ⟨γ−k,↓ γ†−k,↓⟩
= Σ_k u²k ⟨γ†k,↑ γk,↑⟩ + v²k (1 − ⟨γ†−k,↓ γ−k,↓⟩) (2.56)

In the last line of the equation, the anti-commutation relation is applied. At finite

temperature, the expectation value of any state occupation is determined by the Fermi

statistics. The Fermi distribution function f(E) can be written as follows, where E

is the energy of that state:

f(E) ≡ fβ(E) = 1/(1 + e^{βE}) = (1 − tanh(βE/2))/2, (2.57)

β = 1/(kB T),  f(E) + f(−E) = 1, (2.58)


Then the two expectation values in eq. (2.56) can be written explicitly as:

⟨γ†k,↑ γk,↑⟩ = f(E+,k)
⟨γ−k,↓ γ†−k,↓⟩ = 1 − ⟨γ†−k,↓ γ−k,↓⟩ = 1 − f(E−,k) = f(−E−,k) (2.59)

The spin-up particle number is:

N↑ = Σ_k u²k f(E+,k) + v²k f(−E−,k) (2.60)

Similarly, the spin-down particle number is:

N↓ = Σ_k u²k f(E−,k) + v²k f(−E+,k) (2.61)

Anomalous Density

To compute ν, as defined through ⟨c†k,↑ c†−k,↓⟩, the same approach can be adopted to express ν in terms of uk, vk, and the quasiparticle energies (E+,k, E−,k). It is worth pointing out that ν is called the anomalous density because a normal particle density is defined as ⟨c†c⟩.

ν = Σ_k ⟨ck,↑ c−k,↓⟩
= Σ_k ⟨(u*k γk,↑ + vk γ†−k,↓)(uk γ−k,↓ − v*k γ†k,↑)⟩
= Σ_k ⟨|uk|² γk,↑ γ−k,↓ − |vk|² γ†−k,↓ γ†k,↑ + uk vk γ†−k,↓ γ−k,↓ − u*k v*k γk,↑ γ†k,↑⟩
= Σ_k ⟨uk vk γ†−k,↓ γ−k,↓ − u*k v*k γk,↑ γ†k,↑⟩
= Σ_k uk vk ⟨γ†−k,↓ γ−k,↓⟩ − u*k v*k ⟨γk,↑ γ†k,↑⟩
= Σ_k uk vk f(E−,k) − u*k v*k (1 − f(E+,k))
= Σ_k uk vk f(E−,k) − u*k v*k f(−E+,k) (2.62)

In a numerical simulation, ν may be used to update ∆ in an iteration scheme.
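That iteration can be sketched in a toy 1D model at T = 0, where the anomalous density per unit length reduces to ν = −Σ_k ∆/(2Ek)/L (see eq. (2.51)) and the update is ∆ ← gν with attractive g < 0. All numbers below are illustrative (in higher dimensions the sum requires UV regularization, see the appendices; the 1D sum here converges as written):

```python
import numpy as np

L, N = 50.0, 1024                         # box length and momentum modes
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
mu, g = 1.0, -2.0                         # chemical potential, coupling

delta = 1.0                               # initial guess for the gap
for _ in range(300):
    E = np.sqrt((k**2 / 2 - mu)**2 + delta**2)
    nu = -np.sum(delta / (2 * E)) / L     # anomalous density at T = 0
    delta = 0.5 * delta + 0.5 * (g * nu)  # damped update delta <- g*nu

# Self-consistency check: the converged gap satisfies delta = g*nu.
E = np.sqrt((k**2 / 2 - mu)**2 + delta**2)
nu = -np.sum(delta / (2 * E)) / L
assert abs(delta - g * nu) < 1e-6 * max(abs(delta), 1e-12)
print(delta)
```

The damping factor of 0.5 is a common stabilizing choice; an undamped update can overshoot when the gap equation is stiff.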

2.6.9 Visualization of Dispersion

We can visualize the quasiparticle energy-momentum dispersions defined by eq. (2.52) to better understand the physical meaning of variables such as the gap ∆ and the chemical potential difference δµ. The relevant relations are summarized here:

ξ↑,↓ = ℏ²k²/(2m) − µ↑,↓ (2.63)
ε± = (ξ↑ ± ξ↓)/2 (2.64)
E = √(ε+² + |∆|²) (2.65)
E± = ε− ± E (2.66)
µ = (µ↑ + µ↓)/2 (2.67)
δµ = (µ↑ − µ↓)/2 (2.68)
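These relations are easy to evaluate directly; the sketch below reproduces the qualitative behaviour of the figures that follow (units ℏ = m = 1 with µ = 1, so kF = √2; parameter values mirror the panels of fig. 2.2):

```python
import numpy as np

def dispersions(k, mu=1.0, dmu=0.0, delta=0.0):
    """Quasiparticle branches E+ and E- from eqs. (2.63)-(2.66)."""
    xi_up = k**2 / 2 - (mu + dmu)
    xi_dn = k**2 / 2 - (mu - dmu)
    e_plus = (xi_up + xi_dn) / 2
    e_minus = (xi_up - xi_dn) / 2
    E = np.sqrt(e_plus**2 + delta**2)
    return e_minus + E, e_minus - E

k = np.linspace(-2.5, 2.5, 2001) * np.sqrt(2.0)  # in units of k_F

# delta mu < delta: the upper branch stays above zero (gapped).
E_up, _ = dispersions(k, dmu=0.25, delta=0.5)
print(E_up.min() > 0)   # True

# delta mu > delta: the upper branch dips below zero near k_F
# (the gapless region discussed in section 2.6.10).
E_up, _ = dispersions(k, dmu=0.75, delta=0.5)
print(E_up.min() < 0)   # True
```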

The dispersion relations E± can be plotted as functions of δµ and the gap ∆; they also depend on the momentum k (1D case). For δµ = (0, 0.5µ, 0.75µ) and ∆ = (0, 0.5µ), with k in units of the Fermi momentum kF, the results are shown in fig. 2.2. The chemical potential difference δµ can be understood as a knob that changes the relative occupancy of the two species. Changing δµ is equivalent to shifting the dispersions up or down. At temperature T = 0, all states with negative energy are occupied, so shifting the energy spectra may also change the particle densities na and nb. If δµ < ∆, the densities will not change, because no branch crosses the zero-energy line as long as δµ does not exceed the gap energy. When δµ > ∆, one of the branches (the upper one) crosses the zero line, and previously unoccupied states on the upper branch become partially filled (see the bottom-right panel of fig. 2.2). When ∆ is nonzero, the two branches are separated, with the minimum energy gaps at ±kF equal to 2∆. This gap is the energy required to break a Cooper pair, while it requires only ∆ to create an excited state by moving a quasiparticle out of the system.

The result of letting µ be slightly negative (µ = −0.1) is shown in fig. 2.3. It is interesting that the two branches always have a gap between them, even when ∆ = 0. This is because if µ < 0:

ε+ = (ξ↑ + ξ↓)/2 = ℏ²k²/(2m) − µ ≥ |µ| (2.69)

which leads to E ≥ |µ| since E = √(ε+² + |∆|²), and the minimum gap between E+ and E− is 2|µ|. Physically, a negative chemical potential means it costs energy to take particles away from the system. This can happen in the strongly attractive regime, where the interaction is strong enough to create dimers: to take particles out of such a system, we need to overcome the binding energy of the bound states.


[Figure 2.2 shows the quasiparticle dispersions E±/µ versus k/kF for positive µ, for the parameter combinations (δµ, ∆) = (0, 0), (0.5, 0), (0, 0.5), and (0.75, 0.5).]

Figure 2.2: Given a positive µ = 1, for different combinations of δµ and the gap ∆, the two energy spectra change. The energy difference at k = ±kF defines the value of 2∆, which is the minimum energy required to break one Cooper pair⁶.

⁶∆ is the energy required to create an excited state without breaking a Cooper pair.


[Figure 2.3 shows the quasiparticle dispersions E± versus k/kF for slightly negative µ, for the parameter combinations (δµ, ∆) = (0, 0), (−5, 0), (0, 5), and (−7.5, 5).]

Figure 2.3: Given a slightly negative µ = −0.1, for different combinations of δµ and the gap ∆, the two energy spectra change. The difference from the positive-µ cases is that even when ∆ = 0, the two spectra are separated.


2.6.10 Breached-Pair States

As the chemical potential difference increases beyond the gap, the upper branch of the quasiparticle dispersion (solid blue line in the bottom-right panel of fig. 2.2) crosses the zero-energy line (the blue dashed line). In momentum space, there exist four such nodes on the zero line, which are gapless modes with a nonzero condensate. These states are called gapless superfluid states. Since these modes are unpaired, they are sometimes also called breached-pair (BP) [40, 41] or Sarma states⁷ [42]. One feature of these states that can be seen from the dispersion is that even though the species are separated in momentum space, in real space they form a polarized homogeneous superfluid [43]. BP states are in general not stable compared to the fully gapped (symmetric) BCS superfluid states, but some works suggest they may be realized in QCD [44] through interactions among quarks with different masses. Forbes et al. [41] studied stability criteria for the BP state, and a stable breached-pair state in a 2D system with p-wave coupling is discussed in [45].

⁷In the 1960s, Sarma studied the effect of a uniform exchange field on electron spins using BCS theory. After analyzing the self-consistent solutions, he found a transition state where the gap vanishes, but pointed out that it is unstable.


2.6.11 Determine the Gap

Because of the nonzero gap energy, a superfluid state can survive thermal fluctuations at temperatures below the critical value Tc. Experimentally, the gap ∆ is most directly observed in tunneling experiments [46] and in the reflection of microwaves from superconductors [47]. The pairing gap of a trapped ultracold Fermi gas at unitarity⁸ was determined using a tomographic microwave spectroscopy method [48], and the maximum gap found was ∆/EF ≈ 0.48. Later, these experimental data were used to precisely determine the gap in studies based on quantum Monte Carlo methods [49], which found ∆/EF > 0.4.

2.7 Off-Diagonal Long-Range Order

Off-Diagonal Long-Range Order (ODLRO) [50] is a measure of macroscopic quantum coherence, which can be defined as:

ODLRO (bosons): lim_{r→∞} ⟨ψ(x) ψ*(x + r)⟩ = Φ(x) Φ*(x + r) ≠ 0

ODLRO (fermions): lim_{r→∞} ⟨ψ↑(x1) ψ↓(x2) ψ*↑(x1 + r) ψ*↓(x2 + r)⟩ = Φ(x1, x2) Φ*(x1 + r, x2 + r) ≠ 0
(2.70)

For bosons, ODLRO appears in the one-body density matrix; for paired fermions, it appears in the two-body density matrix.

⁸When s-wave scattering dominates and the scattering length is infinite; see also chapter 3.


where ψ(x) is a field operator, and Φ(x) and Φ(x1, x2) are macroscopic wavefunctions. For a BEC, Φ(x) = √n(x) e^{iφ(x)}; for a BCS system, it can be associated with the pairing field of the form ∆(x) = |∆(x)| e^{iφ(x)}. ODLRO represents the phase correlation of the wavefunctions, as a random phase would render these expectation values zero. It was found that superfluidity only takes place in systems with ODLRO [51].


CHAPTER 3. ASYMMETRIC SUPERFLUID LOCAL

DENSITY APPROXIMATION

In this chapter, we will revisit some properties of two-component Fermi gases in

the unitary regime where the s-wave scattering length is infinite using DFT in three-

dimensional space. The Thomas-Fermi model will be reviewed: its formalism is simple, but it acts as the root of DFT and supplies some essential ingredients, like the n^{5/3} density dependence of the energy term.

3.1 The Unitary Fermi Gas

3.1.1 Ultracold Atom Physics

Ultracold atoms are an exceptionally versatile and highly flexible platform to

test novel physical theories. As dilute atoms are maintained at temperatures close to

absolute zero, the thermal fluctuations are significantly suppressed, and the quantum

behavior dominates. At such low temperatures (typically on the nanokelvin scale), atoms form quantum degenerate atomic gases, which come in two varieties: BECs

formed by bosons and degenerate Fermi gases (DFG) formed by fermions. Because of the low-energy nature of such systems, ultracold atoms serve as an excellent testbed for superfluid dynamics. One of the most critical advantages that make ultracold atoms ideal playgrounds for quantum experiments is their flexibility. In such a system, physicists can engineer the many-body Hamiltonian or dispersion relation in

many aspects: The kinetic terms can be dressed by spin-orbit coupling [52–55]. The

two-body particle scattering length a can be tuned continuously from −∞ to ∞ via

a method called Feshbach resonance (also called Fano-Feshbach resonance) [56, 57].

The external potentials can also be engineered on demand, such as optical lattices created using counter-propagating laser beams, or superlattices formed by mixing lasers of different frequencies [58, 59]. Recent developments of experimental

techniques using close-to-resonance laser beams allow researchers to implement spin-

dependent potentials [60], which may be used to implement spin-polarized droplets

in the unitary Fermi gas (UFG) [61]. Using a digital micromirror device (DMD), one can implement superlattices with site spacing much larger than the coherence length, as well as other dipole potentials with arbitrary profiles and phase maps (see appendix H). For example, in the mean-field limit, a BEC is described by the Gross-Pitaevskii equation (GPE) [1] as shown in eq. (3.1), and each of its components can be experimentally tuned.

( −(ℏ²/2m) ∂²/∂r²  [Raman dressing]  +  V(r)  [digital micromirror device]  +  (4πℏ²as/m)|ψ(r)|²  [Feshbach resonance] ) ψ(r) = µψ(r) (3.1)

This is the GPE that governs the dynamics of a BEC in the mean-field limit. Different parts of the equation can be modified with considerable freedom: the Raman dressing [62] can be used to change the kinetic terms, and a

DMD can be used to generate arbitrary external dipole potentials. The two-body

interaction can be tuned using the Feshbach resonance technique.

3.1.2 BCS to BEC crossover

The reason we discuss ultracold atomic gases briefly above is that UFGs are

routinely realized in such systems. Before the advent of ultracold atoms, superconductors and superfluids were well described either by the BCS theory of weakly attractive Fermi systems with pairing, or by the BEC theory of weakly repulsive interactions⁹ that arise as a residual effect of the Pauli exclusion among their Fermi constituents [1, 9]. On the BCS side, the two-body coupling is weak, and the size of a Cooper pair is

much larger than the inter-particle spacing 1/kF (kF is the Fermi momentum). Thus

Cooper pairs are strongly overlapping, as shown in fig. 3.1. In the BEC limit, the

two-body coupling is strong, and fermions form tightly bound diatomic molecules

called bosonic dimers. They are also strongly overlapping if the system temperature

T is lower than the critical temperature Tc. When T ≫ Tc, these dimers are not overlapping but remain tightly coupled (bound states).

⁹On the BEC side, the interaction between dimers is repulsive, but the interaction between fermions is still attractive.


[Figure 3.1 is a schematic of the two regimes: strong coupling (BEC) and weak coupling (BCS), at T ≫ Tc and T < Tc.]

Figure 3.1: Left: In the BEC limit, where the two-body coupling is strong, Fermi constituents are tightly bound; if T ≫ Tc, these bound dimers are not overlapping. Right: In the BCS limit, where Cooper pairs are loosely bound, they are strongly overlapping when T < Tc (in superfluid states).

The idea that there may exist a smooth crossover between the BCS limit and

the BEC limit was proposed by Keldysh [63]. Eagles [64] and Leggett [65] indepen-

dently noted that the BCS ground state wavefunction is also capable of describing

the continuous evolution from the BCS limit to the formation of BECs as the two-

body attractive interaction between fermions is increased (see g in eq. (2.55)). In

dilute Fermi gases, the effective potential range is much smaller than the interparticle distance, and the interaction can be characterized by a scattering length a, which is tunable by the Feshbach resonance [66–69] (for a good review, see [70]), as mentioned

at the beginning of the section. With this important experimental method readily

available, physicists can turn the knob to change the two-body scattering length in

their experiments. In late January 2004, Deborah Jin's group reported the first experimental realization [71] of fermionic atom pairs in the BCS-BEC crossover regime using fermionic potassium-40 atoms. The fermionic condensates seen in this work

occur in the BCS-BEC crossover regime, where 1/(kFa) → 0 as a → ∞. About one and a half months later, Wolfgang Ketterle's group at MIT independently reported the

observation of pairs of fermionic atoms in an ultracold lithium-6 gas [72]. In these

pioneering works, by tuning an external magnetic field, one can sweep the particle

interaction from the weakly attractive side in the BCS limit to the strongly attrac-

tive interaction in the BEC limit smoothly passing through the BCS-BEC crossover

regime. The relation between the external magnetic field B and the scattering length

a is given by [73]:

\[
a(B) = a_{\text{bg}}\left(1 - \frac{\Delta B}{B - B_0}\right) \tag{3.2}
\]

where abg < 0 is the off-resonant background scattering length, ∆B and B0 are the

width and position of the resonance respectively [1]. For 6Li [72], the relation is

plotted in fig. 3.2, see [74] for more details.
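As a numerical illustration, eq. (3.2) is straightforward to evaluate. The default 6Li numbers below (abg ≈ −1405 a0, B0 ≈ 832 G, ∆B ≈ −300 G in this sign convention) are rough literature values assumed only for the sketch; see [74] for precise numbers:

```python
# Illustrative evaluation of eq. (3.2). The default 6Li parameters are
# assumptions (rough literature values), not taken from this thesis.
def scattering_length(B, a_bg=-1405.0, B0=832.2, dB=-300.0):
    """a(B) in units of the Bohr radius a0, for magnetic field B in gauss."""
    return a_bg * (1.0 - dB / (B - B0))
```

With these values, a(B) is positive below the resonance (BEC side), negative above it (BCS side), and approaches abg far from resonance, matching fig. 3.2.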


Figure 3.2: Feshbach resonance for lithium-6; plot of eq. (3.2), with a in units of 10³a0 and B in G. B0 is the position of the resonance. Left of the resonance is the BEC regime; to its right is the BCS regime. The blue area is the crossover regime. a0 is the Bohr radius.

It can be seen that a → +∞ as B approaches B0 from the left (B → B0 − 0+), and a → −∞ as B approaches from the right (B → B0 + 0+). For weakly attractive Fermi gases with a < 0, the system is in the BCS limit when 1/(kFa) → −∞, and in the BEC limit when 1/(kFa) → +∞. The regime we are most interested in is where 1/(kFa) → 0 inside the crossover region, called the unitary regime.


3.1.3 Unitary Regime

One of the features of Fermi gases in the unitary regime is that the pairing gap is large compared with the Fermi energy [10, 75]. This also means the unitary Fermi gas has

a high critical temperature Tc. Many works suggest that unitary gases have the largest

value of Tc/TF among all known fermionic superfluids [76–79]. In the unitary regime, the scattering length is much larger than the interparticle spacing (kF|a| ≫ 1). When the interparticle spacing is in turn much larger than the effective interaction range, the details of the short-range interaction potential become irrelevant. The interaction between atoms can then be described by an effective zero-range potential and the s-wave scattering length a, and the resulting properties are universal.

3.2 Thomas-Fermi Theory

For a free Fermi gas and a given external potential V (r), the Thomas-Fermi

model [80, 81] provides a functional form for both the kinetic energy and poten-

tial energy, which are functionals of particle densities. It is the root of the density

functional theory we will discuss in the next section.


3.2.1 Formalism

In three-dimensional momentum space, when the temperature T = 0, a uniform

system composed of free fermions with spin up (↑) and down (↓) has well-defined

Fermi surfaces for both species. Let the spin-up and spin-down species have the same population, N↑ = N↓, so that the total particle number is N = N↑ + N↓. Let all the

particles be confined in a box with side length L and volume V so that the minimum

wavevector that can be accommodated in each direction is k_min = π/L, and the minimum unit volume in momentum space is:

\[
V_k = \frac{\pi^3}{V} \tag{3.3}
\]

Then the number of particles inside a Fermi surface with maximum momentum kF can be computed as:

\[
N = 2\times\frac{1}{8}\times\frac{\tfrac{4}{3}\pi k_F^3}{V_k} = \frac{V k_F^3}{3\pi^2} \tag{3.4}
\]

In this formula, the factor of 2 accounts for the two different spins, and the factor of 1/8 restricts the sphere to the positive octant (only positive wavevector components are counted with k_min = π/L). A little rearrangement of the formula yields:

\[
3\pi^2 n = k_F^3 \quad\text{or}\quad n = \frac{k_F^3}{3\pi^2} \tag{3.5}
\]

So the particle density has been related to the Fermi momentum kF.
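The relations in eq. (3.5) are simple to use in code; a minimal sketch (illustrative only, with ℏ = m = 1):

```python
import math

# Minimal sketch of eq. (3.5): converting between the total density n of a
# two-component free Fermi gas and the Fermi momentum k_F.
def density_from_kF(kF):
    return kF**3 / (3 * math.pi**2)

def kF_from_density(n):
    return (3 * math.pi**2 * n) ** (1.0 / 3.0)
```

The two functions are exact inverses of each other.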


3.2.2 Kinetic Energy

Knowing the momentum for each particle, the total kinetic energy can be com-

puted:

\[
T = \sum_{\sigma}\sum_{k\le k_F}\frac{\hbar^2 k^2}{2m} \tag{3.6}
\]

We can convert this summation to an integral as:

\[
T = 2\times\left(\frac{1}{2}\right)^{3}\frac{1}{V_k}\int_0^{k_F}\frac{\hbar^2 k^2}{2m}\,4\pi k^2\,dk = \frac{V}{10\pi^2}\frac{\hbar^2}{m}k_F^5 \tag{3.7}
\]

where the factor of 2 is the same as that in eq. (3.4). So the kinetic energy per unit volume is:

\[
\mathcal{T} = \frac{T}{V} = \frac{1}{10\pi^2}\frac{\hbar^2}{m}k_F^5 = \frac{3\hbar^2 k_F^2}{10m}\,n = \frac{\hbar^2}{10m}\,\pi^{4/3}(3n)^{5/3} \tag{3.8}
\]
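As a quick consistency check (illustrative only, with ℏ = m = 1), the two closed forms in eq. (3.8) can be compared numerically:

```python
import math

# Illustrative cross-check of eq. (3.8): the k_F form and the density form
# of the free-gas kinetic energy density agree for any n (hbar = m = 1).
def energy_density_from_kF(kF):
    return kF**5 / (10 * math.pi**2)

def energy_density_from_n(n):
    return math.pi ** (4.0 / 3.0) * (3.0 * n) ** (5.0 / 3.0) / 10.0
```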

This result is for a uniform system. In a non-uniform system (inhomogeneous), if the

external potential is changing slowly and smoothly, locally everything looks homoge-

neous, and we can patch all the homogeneous solutions to yield the overall result for

the inhomogeneous system. The density then becomes a function of position, n = n(r), and the Fermi momentum likewise becomes kF(r). We can assume the same local relation between kF and n as in the uniform case, eq. (3.5):

\[
n(r) = \frac{k_F^3(r)}{3\pi^2} \tag{3.9}
\]


Note that the energy density scales with the density to the power 5/3; this is different from the bosonic case [10].

3.2.3 Total Energy Functional

Similarly, the kinetic energy can also be expressed in a functional form:

\[
T[n] = \int d^3r\,\frac{3\hbar^2 k_F^2(r)}{10m}\,n(r) = \int d^3r\,\frac{\hbar^2\pi^{4/3}3^{5/3}}{10m}\,n(r)^{5/3} \tag{3.10}
\]

where T [. . . ] represents a functional. Then the total energy of the system with fixed

particle number as a functional of particle density can be written as:

\[
E[n] = T[n] + \int d^3r\,V(r)\,n(r), \qquad N = \int d^3r\,n(r) \tag{3.11}
\]

Minimizing E[n] with a Lagrange multiplier µ is equivalent to setting the first-order functional derivative of the total energy to zero (for an excellent and concise introduction to functional analysis, see appendix A in [82]):

\[
\frac{\delta}{\delta n}\left(T[n] + \int d^3r\,V(r)\,n(r) - \mu\int d^3r\,n(r)\right) = 0 \tag{3.12}
\]

Solving the above variation equation yields:

\[
\mu = \frac{\hbar^2}{2m}(3\pi^2 n)^{2/3} + V(r) = \frac{\hbar^2 k_F^2(r)}{2m} + V(r) \tag{3.13}
\]


The multiplier µ may be identified as the local chemical potential at position r. If the external potential changes slowly on the scale of the Fermi wavelength (∆V(r)/(V(r)∆r) ≪ kF), the local density approximation above is valid. Combining the above equation with eq. (3.9) to eliminate kF gives:

\[
n(r) = \frac{1}{3\pi^2\hbar^3}\{2m[\mu - V(r)]\}^{3/2} \tag{3.14}
\]
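As an aside (an illustrative sketch, assuming an isotropic harmonic trap and units ℏ = m = 1), eq. (3.14) gives the familiar Thomas-Fermi density profile:

```python
import math

# Sketch of eq. (3.14) for an assumed isotropic harmonic trap
# V(r) = 0.5 * m * w**2 * r**2, with hbar = m = 1 by default.
def tf_density(r, mu, w=1.0, m=1.0, hbar=1.0):
    """n(r) = (2m[mu - V(r)])^{3/2} / (3 pi^2 hbar^3), zero outside the cloud."""
    V = 0.5 * m * w**2 * r**2
    arg = max(2.0 * m * (mu - V), 0.0)  # density vanishes where V(r) > mu
    return arg**1.5 / (3.0 * math.pi**2 * hbar**3)
```

The density peaks at the trap center and vanishes at the Thomas-Fermi radius where V(r) = µ.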

Interacting System

The discussion above is focused on non-interacting systems. In an interacting

system, since each particle is affected by the others, the effective potential Veff differs from the external potential. Some correction terms should be taken

into consideration, such as the Coulomb interaction and the exchange-correlation

potential:

\[
V_{\text{eff}} = V(r) + g\int d^3r'\,\frac{n(r')}{|r - r'|} + V_{xc}[n] \tag{3.15}
\]

The second term on the RHS is the Hartree potential (Coulomb interaction), and the last term is the exchange-correlation potential.


3.2.4 Finite Temperature

When T ≠ 0, the occupancy of each state should be determined using the

Fermi-Dirac distribution function:

\[
f_{\sigma,k}(r) = \frac{1}{1 + e^{\beta(\varepsilon_{\sigma,k} + V_\sigma(r) - \mu_\sigma)}} \tag{3.16}
\]

where the effective potentials and chemical potentials may depend on the spin. The results of the previous subsection can then be generalized straightforwardly with very little modification.
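A direct transcription of eq. (3.16) (illustrative; kB = 1 and scalar arguments assumed):

```python
import math

# Sketch of eq. (3.16): Fermi-Dirac occupancy with k_B = 1.
def fermi_dirac(eps, V, mu, T):
    """f = 1 / (1 + exp[(eps + V - mu)/T])."""
    x = (eps + V - mu) / T
    if x > 700:          # guard against floating-point overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(x))
```

At low T the occupancy approaches a step function at eps + V = µ, recovering the T = 0 Fermi surface of the previous subsection.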

3.2.5 Nonlocal Effects

So far, the discussion is limited to uniform systems and based on the assumption

of the local density approximation. This type of scheme is simple, but in general it is not accurate. Some researchers [83, 84] have considered nonlocal effects by adding

the first-order derivative terms of the density, or even higher orders. These gradient

terms are necessary when corrections due to surface effects are needed [85]. Such methods, with density functionals containing higher-order derivative correction terms, are known as extended Thomas-Fermi functionals [33, 86].


3.3 Superfluid Local Density Approximation

3.3.1 Introduction to SLDA

DFT was first applied to investigate the electronic structure or nuclear structure

of many-body systems. Bulgac and Yu [87] introduced a superfluid DFT based on the

BdG approach to superfluid fermions. Before that, there were some extensions of the

DFT to study superfluidity in terms of nonlocal pairing interactions [88–90], which

are less intuitive and hard to use in practical calculations. Bulgac [91] extended density functional theory to study the UFG in a practical way using local calculations, and the resulting theory is called the superfluid local density approximation (SLDA).

In DFT, E[n(r)] only depends on the interaction of the system. The idea [92] is to

deduce E[n(r)] for the UFG, and determine some important parameters that fix the

functional. As the functional is independent of the external potential, the result can

be applied to systems with arbitrary external potential, such as optical lattices. How-

ever, as mentioned before, the exact form of the functional is unknown even though

the Hohenberg-Kohn theorem shows that the total energy exists as a unique functional of the particle density.


3.3.2 Strategy

In order to tackle the challenge that the exact form of the energy functional is

unknown, a simple strategy should be set before any attempt to introduce a new

functional for many-body problems. To propose a DFT model, several factors should

be considered:

• Firstly, a good starting point where a new functional can be derived is preferred.

For the UFG, the BdG functional may play such a role.

• Secondly, any additional term added to the new functional should be constrained

by dimensional analysis and symmetries.

• Thirdly, set the expected accuracy and understand all possible contributions to

the error and the order of error, such as the Hartree-Fock energy and the shell

effects. Then test the error carefully.

• Fourthly, it should be clear what data is available that can be used to check

and tune all the parameters. Experimental results and ab initio results can be

used to fix these parameters.

• Finally, apply the new functional to a new system with different external potentials.


The SLDA is a functional of the density via single-particle orbitals, and it subsumes the BdG. The number of orbitals is much larger than the particle number because the pairing field breaks particle-number symmetry, so all the orbitals have fractional occupancy. The SLDA may be able to deal with systems of tens or hundreds of particles given the available computing power. In comparison, ab initio methods may be able to study a few hundred particles, but they can be used to fix the functional parameters.

The BdG functional only retains the pairing terms g 〈a†b†〉 〈ba〉 = gν†ν, while the

Hartree terms g 〈a†a〉 〈b†b〉 = gnanb vanish. To see this, first recall the relationship

between the scattering length a and the two-body interaction strength g (see the derivation of equation (305) in [93]):

\[
\frac{m}{4\pi\hbar^2 a} = \frac{1}{g} + \frac{1}{2}\int_{0\le k\le k_c}\frac{d^3k}{(2\pi)^3}\,\frac{1}{\frac{\hbar^2 k^2}{2m} + i0^+} = \frac{1}{g} + \frac{m}{2\pi^2\hbar^2}\,k_c \tag{3.17}
\]

The LHS is finite for all scattering lengths, while on the RHS kc is a momentum cutoff that must be large compared with the Fermi momentum in order to obtain good density accuracy, i.e., kc ≫ kF (in practical calculations, we have to consider all occupied orbitals). This requirement sends g → 0, which in turn makes the Hartree term g〈a†a〉〈b†b〉 = g n_a n_b → 0, since n_a and n_b are finite. For a weak interaction, where a is small and negative, the Hartree energy may be calculated perturbatively [94] to have a value ∝ (4πℏ²a/m) n_a n_b by summing additional diagrams; these are also divergent and cancel the divergent term on the RHS of the above equation to recover a finite result.


However, this does not make sense in the unitary regime, where a → ∞ and the Hartree energy would diverge. This drawback of the BdG is addressed in the SLDA functional, which includes both the pairing term and an n^{5/3} Hartree interaction term.
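For concreteness (an illustrative rearrangement, with ℏ = m = 1), eq. (3.17) can be solved for the bare coupling g at a given cutoff:

```python
import math

# Illustrative rearrangement of eq. (3.17) with hbar = m = 1: given the
# scattering length a and momentum cutoff kc, the bare coupling satisfies
#   1/g = 1/(4*pi*a) - kc/(2*pi^2).
def bare_coupling(a, kc):
    return 1.0 / (1.0 / (4 * math.pi * a) - kc / (2 * math.pi**2))
```

Note that g → 0⁻ as kc → ∞ at fixed a, which is exactly the behavior discussed above, and at unitarity (a → ∞) the coupling approaches −2π²/kc.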

3.3.3 SLDA Formalism

In terms of orbitals, the particle densities ~n = (na, nb), the anomalous density ν,

the kinetic energy density ~τ = (τa, τb) and currents ~j = (ja, jb) can be computed10:

\begin{align}
n_a(r) &= \sum_n |u_n(r)|^2 f_\beta(E_n), & n_b(r) &= \sum_n |v_n(r)|^2 f_\beta(-E_n) \nonumber\\
\tau_a(r) &= \sum_n |\nabla u_n(r)|^2 f_\beta(E_n), & \tau_b(r) &= \sum_n |\nabla v_n(r)|^2 f_\beta(-E_n) \nonumber\\
\nu(r) &= \frac{1}{2}\sum_n u_n(r)\,v_n^*(r)\,\big[f_\beta(-E_n) - f_\beta(E_n)\big] \nonumber\\
j_a(r) &= -\frac{i}{2}\sum_n \big[u_n^*(r)\nabla u_n(r) - u_n(r)\nabla u_n^*(r)\big]\,f_\beta(E_n) \nonumber\\
j_b(r) &= -\frac{i}{2}\sum_n \big[v_n(r)\nabla v_n^*(r) - v_n^*(r)\nabla v_n(r)\big]\,f_\beta(-E_n) \tag{3.18}
\end{align}

where un(r) and vn(r) are wavefunctions in the BdG formalism, En is the nth eigenen-

ergy. fβ (En) = 1/ (exp (βEn) + 1) is the Fermi distribution function11 and β = 1/T .

10Unlike the particle densities or the anomalous density, each current ja or jb has multiple components, because currents are vectors; in three-dimensional systems each has three components corresponding to the three (x, y, and z) directions.
11The Boltzmann constant kB is set to one.


Attention should be paid to the order of conjugation in the current terms; some literature gives incorrect formulas [92]. The calculation of these densities remains unchanged when a new functional is introduced.
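For concreteness (an illustrative sketch, not the production code of this thesis), the densities of eq. (3.18) can be accumulated from the quasiparticle amplitudes on a 1D grid as follows; the array names and shapes are assumptions:

```python
import numpy as np

# Sketch of eq. (3.18) in 1D: densities from BdG amplitudes (u_n, v_n) on a
# grid. `us`, `vs` have shape (n_states, n_grid), `Es` are eigenenergies,
# and dx is the grid spacing. All names are assumptions for illustration.
def fermi(E, beta):
    return 1.0 / (1.0 + np.exp(np.clip(beta * E, -700, 700)))

def densities(us, vs, Es, beta, dx):
    f_p, f_m = fermi(Es, beta), fermi(-Es, beta)
    n_a = np.einsum('n,nx->x', f_p, np.abs(us) ** 2)
    n_b = np.einsum('n,nx->x', f_m, np.abs(vs) ** 2)
    dus = np.gradient(us, dx, axis=1)
    dvs = np.gradient(vs, dx, axis=1)
    tau_a = np.einsum('n,nx->x', f_p, np.abs(dus) ** 2)
    tau_b = np.einsum('n,nx->x', f_m, np.abs(dvs) ** 2)
    nu = 0.5 * np.einsum('n,nx->x', f_m - f_p, us * vs.conj())
    return n_a, n_b, tau_a, tau_b, nu
```

At large β (low temperature) only the negative-energy branch contributes, as expected from the occupation factors.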

In the BdG, the functional form can be written as:

\[
E[n_a, n_b, \tau_a, \tau_b] = \frac{\hbar^2}{m}\left(\frac{\tau_a + \tau_b}{2}\right) - \Delta^\dagger\nu
= \frac{\hbar^2\tau_a}{2m_a} + \frac{\hbar^2\tau_b}{2m_b} + g\nu^\dagger\nu \tag{3.19}
\]

where ∆ = −gν. To address the Hartree energy issue in the BdG, the SLDA func-

tional introduces two new dimensionless parameters α, β:

\[
E_{\text{SLDA}}[n_a, n_b, \tau_a, \tau_b] = \frac{\hbar^2}{m}\left(\frac{1}{2}(\alpha_a\tau_a + \alpha_b\tau_b) + \beta\,\frac{3}{10}\,(3\pi^2)^{2/3}(n_a + n_b)^{5/3}\right) + g\nu^\dagger\nu \tag{3.20}
\]

where αa = αb = α and β do not depend on any type of density, as they are dimensionless. Such parameters are permitted by dimensional analysis. For future discussion, it is convenient to define:

\[
\tau_\pm = \tau_a \pm \tau_b, \qquad \alpha_\pm = \frac{\alpha_a \pm \alpha_b}{2} \tag{3.21}
\]

Note that τ_aα_a + τ_bα_b = α_+τ_+ + α_-τ_-.

3.4 Regularization

One issue with the functional is that in eq. (3.18), the ν term and kinetic terms

(τa, τb) are divergent. The problem originates from the fact that we do not use the


actual physical potentials in these calculations. Instead, effective interactions are

used.

For two-particle scattering in quantum mechanics [95], a partial wave analysis

can be used to decompose an incoming wave into its angular momentum components

such as s-waves (l = 0 is an isotropically scattered wave) and p-waves (l = 1). The

cross-section of each component or channel is associated with a phase shift. If the

energy of the incoming particle is sufficiently low, the contribution from the s-wave scattering channel will dominate. Thus, in ultracold atom physics it is appropriate to consider only s-wave scattering, which can be approximated by a zero-range effective potential [10, 93].

However, unlike the physical potential, which is in general not finite-ranged (for example, the Coulomb potential of a point charge is nonzero at any finite distance from the source; it decreases with increasing distance but is never zero, and it does not decrease fast enough, falling off only as 1/r), the effective potential approximation may cause unphysical results such as a divergent anomalous density. To deal with the problem, we need to regularize the theory. Regularization means some scheme of

the problem, we need to regularize the theory. Regularization means some schemes of

constraint or cutoff should be introduced to make those divergent terms finite. There

are many ways to regularize a functional: one may use dimensional [96] regulariza-

tion or simply set a maximum energy Ec cutoff for an inhomogeneous system, or a

momentum cutoff kc for homogeneous systems.


In effective field theory, one can choose an interaction that is easy to calculate (such as the g in eq. (3.17)) and a regularization scheme12 that is convenient to work with, fixing a physical quantity (like the two-body scattering length a). The effective interaction may then depend on the regularization scheme. Finally, we can check whether our results are independent of the regularization scheme13.

In the BdG, we may regularize eq. (3.19) by selecting a momentum cutoff kc and holding the scattering length a in eq. (3.17) fixed; g then becomes a function of kc, denoted gc(kc). In more general cases, a similar scheme can be constructed by holding a finite function C(na, nb) fixed at given particle densities or polarization, such that it recovers the two-body scattering length a in the zero-density limit. Then eq. (3.17) can be adjusted as follows:

\[
C(n_a, n_b) = -\frac{\alpha_+\nu}{\Delta} + \frac{1}{2}\int\frac{d^3k}{(2\pi)^3}\,\frac{1}{\alpha_+}\,\frac{1}{\frac{\hbar^2 k^2}{2m} - \frac{\mu_+}{\alpha_+} + i0^+} = \frac{\alpha_+}{g_c} + \Lambda_c \tag{3.22}
\]

where C(na, nb) can be tuned using experimental data and ab initio results for the

asymmetric case that will be discussed in the next section. The α term here can

12The most straightforward method is to use a momentum cutoff kc for a homogeneous system and an energy cutoff Ec for an inhomogeneous system; this method is used in the research for this thesis.
13For example, one can check that the particle densities and the pairing gap are unchanged when distinct large momentum cutoffs are used.


be interpreted as an effective mass, which can itself be a functional of the densities. One purpose of using α is to better fit experimental and Monte-Carlo data. In the continuum limit, the total kinetic energy density and the anomalous density ν can be written as integrals (in d-dimensional systems):

\[
\nu = -\int_0^\infty \frac{d^dk}{(2\pi)^d}\,\frac{\Delta}{2\sqrt{\varepsilon_+^2 + |\Delta|^2}} \tag{3.23}
\]
\[
\tau = \tau_a + \tau_b = \int_0^\infty \frac{d^dk}{(2\pi)^d}\,k^2\left[1 - \frac{\varepsilon_+}{\sqrt{\varepsilon_+^2 + |\Delta|^2}}\right] \tag{3.24}
\]

For a large cutoff kc compared with the Fermi momentum kF, i.e., kc ≫ kF and ℏ²kc²/2m ≫ ∆, the residual parts of ν and τ (the integrals from kc to ∞) can be estimated as:

\[
\nu_{\text{res}} \sim -\int_{k_c}^\infty \frac{d^dk}{(2\pi)^d}\,\frac{m^*\Delta}{\hbar^2 k^2}, \qquad
\tau_{\text{res}} \sim \int_{k_c}^\infty \frac{d^dk}{(2\pi)^d}\,\frac{2m^{*2}|\Delta|^2}{\hbar^4 k^2} \tag{3.25}
\]

where m∗ = m/α+ is the effective mass. In the BdG α+ = 1, while in the SLDA

α+ = 1.094 (the value may change slightly in other works, for example, it is 1.14

in [91]). In a one-dimensional system, these integrals are convergent, but are divergent

in higher dimensions. If we only look at divergent terms of the energy functional:

\[
E_{\text{SLDA}} = \frac{\hbar^2}{m}\left(\alpha_+\frac{\tau_a}{2} + \alpha_+\frac{\tau_b}{2}\right) + g\nu^\dagger\nu + \ldots
= \frac{\hbar^2}{2m}\alpha_+\tau + g\nu^\dagger\nu + \ldots
= \frac{\hbar^2}{2m^*}\tau + g\nu^\dagger\nu + \ldots \tag{3.26}
\]

The regularization should be done so the total energy density is also finite, which

means all divergent terms should be cancelled. As the particle densities na and nb


are always finite, we can consider only the ν and τ terms, and the residual energy for

the previous equation can be written:

\begin{align}
E_{\text{res}} &\approx \frac{\hbar^2\alpha_+}{2m}\tau_{\text{res}} - [g\nu^\dagger\nu]_{\text{res}} = \frac{\hbar^2\alpha_+}{2m}\tau_{\text{res}} + \Delta^\dagger\nu_{\text{res}} \nonumber\\
&= \int_{k_c}^\infty \frac{d^dk}{(2\pi)^d}\left[\frac{m^*|\Delta|^2}{\hbar^2 k^2} - \frac{m^*|\Delta|^2}{\hbar^2 k^2}\right] \to 0 \tag{3.27}
\end{align}

The additional term µ+/α+ in the denominator of eq. (3.22) does not change the integral in the limit of infinite cutoff. However, the shift of the pole significantly improves the convergence when regularizing with a cutoff [97].

The integral Λ and its limit as kc → ∞ in various dimensions are summarized as follows:

\begin{align}
\Lambda_c^{1D} &= \frac{m}{\hbar^2}\,\frac{1}{2\pi k_0\alpha_+}\ln\frac{k_c - k_0}{k_c + k_0} \;\to\; -\frac{m}{\hbar^2}\,\frac{1}{\pi\alpha_+ k_c} \to 0, \nonumber\\
\Lambda_c^{2D} &= \frac{m}{\hbar^2}\,\frac{1}{4\pi\alpha_+}\ln\left(\frac{k_c^2}{k_0^2} - 1\right) \;\to\; \frac{m}{\hbar^2}\,\frac{1}{2\pi\alpha_+}\ln\frac{k_c}{k_0}, \nonumber\\
\Lambda_c^{3D} &= \frac{m}{\hbar^2}\,\frac{k_c}{2\pi^2\alpha_+}\left(1 - \frac{k_0}{2k_c}\ln\frac{k_c + k_0}{k_c - k_0}\right) \;\to\; \frac{m}{\hbar^2}\,\frac{k_c}{2\pi^2\alpha_+}, \tag{3.28}
\end{align}

The subscript c means a cutoff is applied, and

\[
\frac{\alpha_+\hbar^2 k_0^2}{2m} - \mu = 0, \qquad \frac{\alpha_+\hbar^2 k_c^2}{2m} - \mu = E_c \tag{3.29}
\]

where µ is the average chemical potential. In homogeneous systems Ec = α+ℏ²kc²/(2m), while in an inhomogeneous system the momentum cutoff kc is replaced with an energy cutoff Ec, since the momentum k is not a good quantum number. Note that the cutoff depends on position, as the effective chemical potentials include the external potential, and the effective mass involves the α+ term, which is a function of the densities in the ASLDA.
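As a small sanity check (illustrative, with ℏ = m = 1), the 3D entry of eq. (3.28) can be evaluated numerically and compared with its large-cutoff limit:

```python
import math

# Illustrative check of the 3D entry in eq. (3.28) with hbar = m = 1: for
# kc >> k0 the bracket tends to 1, so Lambda_c^3D -> kc / (2 pi^2 alpha_+).
# The default alpha_+ = 1.094 is the SLDA value quoted in the text.
def lambda_3d(kc, k0, alpha_plus=1.094):
    bracket = 1.0 - k0 / (2 * kc) * math.log((kc + k0) / (kc - k0))
    return kc / (2 * math.pi**2 * alpha_plus) * bracket
```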

3.5 Asymmetric Superfluid Local Density Approximation

To extend the previous functional to the case where na ≠ nb (let na > nb), the SLDA functional needs to be modified to have more freedom to fit new data. The idea is to introduce an effective mass as a function of the polarization p = (na − nb)/(na + nb), or equivalently of the particle densities (na, nb). The α and β terms of the SLDA now become functions of p, i.e., α = α(na, nb) and β = β(na, nb). The new functional is called the asymmetric SLDA, or ASLDA, with the energy density functional:

\[
E_{\text{ASLDA}} = \frac{\hbar^2}{m}\left(\alpha_a(n_a, n_b)\,\frac{\tau_a}{2} + \alpha_b(n_a, n_b)\,\frac{\tau_b}{2} + D(n_a, n_b)\right) + g\nu^\dagger\nu \tag{3.30}
\]

where the function D = D(na, nb) is defined to be consistent with the Hartree term

in the SLDA when na = nb.

\[
D(n_a, n_b) = \frac{\big(6\pi^2(n_a + n_b)\big)^{5/3}}{20\pi^2}\left[G(p) - \alpha(p)\left(\frac{1 + p}{2}\right)^{5/3} - \alpha(-p)\left(\frac{1 - p}{2}\right)^{5/3}\right] \tag{3.31}
\]

The terms inside the square bracket are dimensionless functions of the polarization. Other forms could be used to replace these terms, but the idea is to propose a form (the simpler, the better) that produces good agreement with experimental and ab initio results. Using the Monte-Carlo data [98] with some improvements [99, 100], G(p) turns out to be very simple:

\[
G(p) = 0.357 + 0.642\,p^2 \tag{3.32}
\]

The function α is then tuned as a polynomial in p (other functional forms could be proposed, but polynomials are the simplest); it depends on p only up to sixth order, because such a polynomial already fits experimental and ab initio results nicely:

\[
\alpha(p) = 1.094 + 0.156\,p\left(1 - \frac{2p^2}{3} + \frac{p^4}{5}\right) - 0.532\,p^2\left(1 - p^2 + \frac{p^4}{3}\right) \tag{3.33}
\]

C(na, nb) in the ASLDA has the simple form:

\[
C(n_a, n_b) = \frac{\alpha_+(p)\,(n_a + n_b)^{1/3}}{\gamma}, \qquad \gamma = -11.11(94) \tag{3.34}
\]
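The fitting functions above are straightforward to transcribe; a minimal sketch of eqs. (3.32)–(3.33) together with the symmetrized α+ of eq. (3.21):

```python
# Sketch of the ASLDA fitting functions, eqs. (3.32)-(3.33), and the
# symmetrized combination alpha_+ from eq. (3.21). Pure Python, no data.
def G(p):
    return 0.357 + 0.642 * p**2

def alpha(p):
    return (1.094
            + 0.156 * p * (1 - 2 * p**2 / 3 + p**4 / 5)
            - 0.532 * p**2 * (1 - p**2 + p**4 / 3))

def alpha_plus(p):
    # alpha_+ = (alpha_a + alpha_b)/2 = (alpha(p) + alpha(-p))/2
    return 0.5 * (alpha(p) + alpha(-p))
```

At zero polarization this reproduces the SLDA values α(0) = 1.094 and G(0) = 0.357, and α+ is even in p since the odd term cancels.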

3.5.1 Matrix Representation

In practical simulations, we will formulate the calculation in matrix form. Before

we proceed, let us rearrange the energy density functional. Recalling the definitions in eq. (3.21), eq. (3.30) can be rewritten as:

\[
E_{\text{ASLDA}} = \frac{\hbar^2}{2m}\Big(\alpha_+\tau_+ + \alpha_-\tau_- + D(n_a, n_b)\Big) + g\nu^\dagger\nu \tag{3.35}
\]

Taking the derivative of eq. (3.22):

\[
dC = d\left(\frac{\alpha_+}{g}\right) + d\Lambda \tag{3.36}
\]


dΛ may be neglected if Ec →∞ [91].

\[
dC = \frac{d\alpha_+}{g} - \frac{\alpha_+}{g^2}\,dg \tag{3.37}
\]

Solving the above equation for dg yields:

\[
dg = -\frac{g^2}{\alpha_+}\,dC + \frac{g}{\alpha_+}\,d\alpha_+ \tag{3.38}
\]

To get a matrix representation, we need to vary the energy density with respect to

un and vn. First, vary EASLDA with respect to na, nb and ν:

\begin{align}
\frac{\delta E_{\text{ASLDA}}}{\delta n} &= \frac{\hbar^2}{2m}\left(\frac{\partial\alpha_+}{\partial n}\tau_+ + \frac{\partial\alpha_-}{\partial n}\tau_- + \frac{\partial D}{\partial n}\right) - \frac{g^2\nu^\dagger\nu}{\alpha_+}\frac{\partial C}{\partial n} + \frac{g\nu^\dagger\nu}{\alpha_+}\frac{\partial\alpha_+}{\partial n} \nonumber\\
&= \frac{\hbar^2}{2m}\left(\frac{\partial\alpha_+}{\partial n}\tau_+ + \frac{\partial\alpha_-}{\partial n}\tau_- + \frac{\partial D}{\partial n}\right) - \frac{\Delta^\dagger\Delta}{\alpha_+}\frac{\partial C}{\partial n} - \frac{\Delta^\dagger\nu}{\alpha_+}\frac{\partial\alpha_+}{\partial n} \nonumber\\
&= \frac{\hbar^2}{2m}\frac{\partial\alpha_-}{\partial n}\tau_- + \frac{\partial\alpha_+}{\partial n}\left(\frac{\hbar^2\tau_+}{2m} - \frac{\Delta^\dagger\nu}{\alpha_+}\right) - \frac{\partial C}{\partial n}\frac{\Delta^\dagger\Delta}{\alpha_+} + \frac{\hbar^2}{2m}\frac{\partial D}{\partial n} = V \nonumber\\
\frac{\delta E_{\text{ASLDA}}}{\delta\nu} &= g\nu^\dagger = -\Delta^\dagger \tag{3.39}
\end{align}

where n stands for na or nb, the V = Va/b are two effective potential corrections (not the bare external potentials), and the relation ∆ = −gν is used for simplification14. Invoking the definitions of na, nb, and ν:

\[
\frac{\partial n_a}{\partial u_n} = u_n^*, \quad \frac{\partial n_a}{\partial v_n} = 0, \qquad
\frac{\partial n_b}{\partial v_n} = v_n^*, \quad \frac{\partial n_b}{\partial u_n} = 0, \qquad
\frac{\partial \nu}{\partial u_n} = v_n, \quad \frac{\partial \nu}{\partial v_n} = u_n \tag{3.40}
\]

14Attention should be paid to the sign (−) here; it must be treated consistently in numerical calculations.


Other terms due to the external potentials (Ua, Ub) and the bare chemical potentials (µa, µb) are simple, since they are merely coefficients of the particle densities. All the results can be pieced together with the chain rule. Before doing that, the contribution from the kinetic densities should be considered, which enters into δE_ASLDA/δu*:

\begin{align}
\frac{\delta}{\delta u^*}\int \mathcal{E}_{\text{ASLDA}}\,d^3r
&= \frac{\delta}{\delta u^*}\int d^3r\,\frac{\hbar^2}{2m}\big[\nabla u^*\,\alpha_+\nabla u\big] \nonumber\\
&= -\frac{\delta}{\delta u^*}\int d^3r\,\frac{\hbar^2}{2m}\big[u^*\,\nabla\cdot(\alpha_+\nabla u)\big] \nonumber\\
&= -\frac{\hbar^2}{2m}\nabla\cdot(\alpha_+\nabla u) \tag{3.41}
\end{align}

where u stands for (un, vn), and the kinetic operator is defined by Ku = −(ℏ²/2m)∇·(α+∇u). In the second step of the above equation, integration by parts is invoked together with the boundary condition u(r) → 0 as |r| → ∞ (if the boundary conditions are not satisfied, the numerical simulation may lose accuracy). Finally, the constraint that all (un, vn) be orthonormal introduces Lagrange multipliers, which serve as the energies En, just like the chemical potential µ in GP theory for a BEC. After some algebra and rearrangement,

a matrix representation of the ASLDA functional can be found:

\[
\begin{pmatrix} K_a - \mu_a + V_a & \Delta^\dagger \\ \Delta & -K_b + \mu_b - V_b \end{pmatrix}
\begin{pmatrix} u_n \\ v_n \end{pmatrix} = E_n \begin{pmatrix} u_n \\ v_n \end{pmatrix} \tag{3.42}
\]

The matrix form can be used in practical simulations. The gap equation can be

solved using simple iteration or more effective methods such as Broyden’s method [101].
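As a minimal illustration of eq. (3.42) (not the solver used in this thesis), one can build and diagonalize the matrix on a small periodic 1D grid; here ∆ is taken uniform, α+ = 1, ℏ = m = 1, and no self-consistency iteration is performed:

```python
import numpy as np

# Minimal 1D sketch of the eigenproblem in eq. (3.42): build the BdG-like
# matrix on a periodic grid and diagonalize it. Uniform Delta and a trivial
# effective mass (alpha_+ = 1) are simplifying assumptions.
def bdg_matrix(n, L, mu_a, mu_b, delta):
    dx = L / n
    # second-derivative operator with periodic boundaries (hbar = m = 1)
    lap = (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0)
           - 2 * np.eye(n)) / dx**2
    K = -0.5 * lap
    D = delta * np.eye(n)
    top = np.hstack([K - mu_a * np.eye(n), D])
    bot = np.hstack([D.conj().T, -(K - mu_b * np.eye(n))])
    return np.vstack([top, bot])

H = bdg_matrix(n=32, L=10.0, mu_a=1.0, mu_b=1.0, delta=0.5)
Es, psis = np.linalg.eigh(H)
```

A self-consistent solver would then recompute ∆ = −gν from the resulting (un, vn) via eq. (3.18) and iterate (e.g., with Broyden's method) until convergence.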


CHAPTER 4. POLARIZED VORTICES, FULDE

FERRELL STATES

In this chapter, we study the connection between a polarized vortex and an exotic

FFLO state.

4.1 Introduction

For a conventional two species (↑,↓) superfluid state, if the pairing takes place in

the s-wave channel15 where the pair of particles have equal but opposite momentum,

the Cooper pair will have zero center-of-mass momentum. If the chemical potential

difference δµ = (µ↑ − µ↓)/2 is tuned to a value above the Clogston limit, the paired

particles can be broken, and the superfluidity will be destroyed. For a superconductor,

the magnetic field can alter the superconductivity because it will change the chemical

potentials of different spin states (the magnetic field can either couple to the orbital

motion or spin degree of electrons [102, 103]), which will cause phase transitions or

entirely destroy it at critical magnetic fields [21, 104]. However, Fulde, Ferrell, Larkin,

and Ovchinnikov (FFLO) proposed that spatially varying the pairing field (or order

15In cold atom physics, the s-wave channel typically dominates. There are other pairings, such

as p-wave pairing for a system with a single species of particles.


parameter) may keep the superfluid state stable [24, 25] even at a value of chemical

potential difference where the superfluidity was supposed to disappear. Fulde and Ferrell proposed a spatially modulated order parameter of the form ∆(x) = ∆0e^{−2iqx}. A

superfluid state with this type of pairing field may survive above the Clogston limit

and is called a Fulde-Ferrell (FF) state. Such a state carries finite superfluid current in

its ground state, which seems to violate the Bloch theorem [105], which points out that

there can not be a net current in any fermion system when at its ground state within

the nonrelativistic regime16. In fact, the superfluid current is canceled by a normal current flowing backward in such a state. Larkin and Ovchinnikov suggested another form, ∆(x) = ∆0 cos(qx); a superfluid state with this type of pairing field is called a Larkin-Ovchinnikov (LO) state. For both FF and LO states, q is a finite

momentum vector that will bridge the gap between the different Fermi surfaces due

to polarization (fig. 4.1). The difference between FF states and LO states is that the

FF states break the time-reversal symmetry while LO states break the translational

symmetry [103]. In general, the order parameter ansatz can be more complicated,

which may be associated with multiple different momenta, q1, q2, . . ., i.e., the pairing

16There is no publication from Bloch himself on this theorem, but D. Bohm introduced the idea in [105].


field can be expanded as the summation of different plane waves:

\[
\Delta(x) = \sum_{q\in\{q_1, q_2, \ldots\}} c_q\,e^{iqx} \tag{4.1}
\]

where the cq ≠ 0 are expansion coefficients. Such a field enables pairing between particles whose momenta differ by q1, q2, . . .

In many works, it is found that LO states give lower energy than FF states [106–

109]. However, it turns out that both the FF pairing-field ansatz and the LO pairing-field ansatz predict the same parameter regime in a phase diagram for

many different systems (see the review paper [103]). So it may be enough to use

the FF state order parameter ansatz to study a system of interest. Nevertheless,

one should keep in mind that subtle details, such as the phase transition conditions, may not be identical [110, 111]. Some other studies found that the FF state

can be more energetically favorable than the LO state [112], such as in a spin-orbit

coupling (SOC) Fermi gas [113, 114].


Figure 4.1: Pairing between two Fermi surfaces. (a): The two species have equal populations (n↑ = n↓), so their Fermi surfaces match; pairing takes place in the vicinity of the full circular Fermi surface (blue and red circles). (b): The populations of the spin-up and spin-down species differ (n↑ ≠ n↓), so their Fermi surfaces are mismatched; let KF↑ − KF↓ = q. (c): Due to the FF pairing field, particles with momenta k + q and −k + q are coupled, which is equivalent to shifting one of the Fermi surfaces by 2q so that it touches the other at one edge. Pairing then takes place in that contact region (pink).

4.2 Experimental Evidence

LOFF states are polarized Fermi superfluids because they sit in the regime where the chemical potential difference is larger than the gap. The question

of whether such states exist has been a subject of intense study. However, despite

all the extensive efforts on studies of various materials and systems (such as ultra-

cold atom platforms), experimental evidence remains inconclusive. The main reason

may be because conditions to support stable LOFF states are very stringent, and the

valid parameter regime can be very narrow. For instance, for a material to support

LOFF states, it may need to be very clean, since impurities can readily destroy LOFF states [115–117]; this holds across different systems, such as certain heavy-fermion [118] and organic superconductors [119, 120]. In the area of layered superconductors, which are predicted to be among the best candidates for LOFF phases,

scientists have searched extensively for FFLO phases [121–126], and recently some positive experimental results have been reported [120]. Beyond superconductivity in condensed matter physics, FFLO states are also predicted in spin (pseudo-spin) Fermi systems. In ultracold Fermi gases (UFG) in particular [103, 127], experiments at MIT and Rice explored the possibility of such states [20, 128] but did not find them. Some more recent developments on layered organic superconductors provide an important step toward the study of FFLO states [129].

4.3 Structure of Polarized Vortices

In normal fluids, a vortex will not survive without pumping in more energy to compensate for the dissipation due to viscosity. However, there is no viscosity in superfluids. A vortex is a topological feature with a 2π (or integer multiple of 2π) phase winding around a closed circular path and quantized superfluid flow. Vortices can persist in both

Bose and Fermi superfluid systems, and this is one of the hallmark effects associated

with superfluidity. In a BEC, vortices can be induced by mechanically rotating the container through the phase transition from the normal to the superfluid state, as in helium. Alternatively, vortices can be created by stirring a BEC cloud with a tightly focused laser beam, by phase imprinting using a laser with an angular phase winding generated by a DMD (see appendix H), or by other variations of cooling methods [130–132]. In 1999, following the first creation of a BEC [133–135], the first vortex was observed in a magnetically trapped rubidium-87 condensate [136].

Fermionic superfluids do not generally support polarization, and the nature of the

ground state of a slightly polarized unitary Fermi gas remains an open question [137].

However, a superfluid vortex naturally supports polarization, since its pairing gap must vanish at the center (by continuity of the wavefunction) [87, 99]. The structure of a polarized vortex is not well understood and may have some interesting properties. We realized that the core of a polarized vortex might be connected to superfluidity in Fulde–Ferrell (FF) states [61]. By applying the Thomas-Fermi (TF) approximation (see chapter 3) in the radial direction of the vortex, it is possible to check this conjecture.

The basic idea is to treat each narrow circular slice of a vortex as a homogeneous

subsystem as shown in fig. 4.2, then to combine results (including gaps, densities,

and currents) for all slices, and compare to the inhomogeneous results of the vortex.
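As a small illustration of this bookkeeping (not the thesis solver itself), the mapping from a slice radius r to the Fulde–Ferrell pairing momentum q = 1/2r can be tabulated over a radial grid; the grid values below are purely illustrative:

```python
import numpy as np

# Illustrative radial grid (in units of the healing length); not the thesis grid.
r = np.linspace(0.25, 5.0, 20)

# Each circular slice at radius r is unwrapped into a homogeneous FF state
# whose two spin components differ in momentum by delta_k = 2q = 1/r,
# i.e. q = 1/(2r) as in fig. 4.2.
q = 1.0 / (2.0 * r)
delta_k = 2.0 * q

# The FF momentum diverges toward the core, which is why a homogeneous
# treatment becomes questionable as r -> 0.
assert np.all(np.diff(q) < 0)  # q decreases monotonically with radius
```

Comparing the FF results slice-by-slice against the full inhomogeneous vortex solution is then a loop over these (r, q) pairs.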

Figure 4.2: Unwrapping a circular slice of a vortex into a Fulde–Ferrell state. A circular slice of radius r is treated as a homogeneous subsystem with pairing between two spins that differ in momentum by q = 1/2r.

4.4 BCS Vortices

For a 2D superfluid vortex, we can solve the following inhomogeneous equation to get the pairing field:

$$
\begin{pmatrix}
-\frac{\hbar^2}{2m}\nabla^2 - \mu_a & \Delta(r)e^{iw\theta} \\
\Delta(r)e^{-iw\theta} & \frac{\hbar^2}{2m}\nabla^2 + \mu_b
\end{pmatrix}
\psi_n = E_n \psi_n \qquad (4.2)
$$

where r is the radius, θ is the azimuthal angle, and ψ(r, θ) is the wavefunction defined by ψ_n^T(r, θ) = [u_n(r, θ), v_n(r, θ)]; u_n(r, θ) and v_n(r, θ) can be interpreted as the nth orbital wavefunctions for a pair of fermions (spin ↑ and spin ↓) [33]. µa and µb are effective chemical potentials, which include the bare chemical potentials and the external potentials. The integer w defines the winding number around a loop enclosing the core. The inhomogeneous pairing field ∆(r) (including the phase e^{iwθ}) varies over the radial direction and should be solved self-consistently in an iteration scheme. Once the gap


equation is satisfied, the particle densities, the anomalous density, and the currents can be computed as shown in eq. (3.18). Throughout the iteration, the two-body interaction strength g = ∆/ν is held fixed for given µ = (µa, µb), ∆, and momentum cutoff kc, using the homogeneous result, i.e., g = g(µa, µb, ∆, kc).

4.4.1 Regularization

As discussed in the previous chapter, there are several ways to regularize a functional. In the above calculation, the kinetic terms and the ν term diverge as kc → ∞, which is problematic. We regularize the theory by holding the two-body scattering length a fixed and introducing a momentum cutoff kc. The effective interaction strength g is then fixed as well, and depends on kc. To calculate g, we use the homogeneous method to compute g as a function of the average chemical potential µ = (µa + µb)/2, an initial gap ∆0, and the cutoff kc. One reason to use the average chemical potential instead of µa and µb separately is that we can then check the effect of the chemical potential difference in a consistent way with a common µ. Another reason is that g can be proven to be a monotonic function of ∆ when δµ = 0, as shown in fig. 4.3, but not otherwise. Thus, to uniquely determine an effective g, it suffices to use the average chemical potential µ and ∆. A large ∆ means strong interaction: the value of |g| grows as ∆ increases.
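A sketch of how such a g(µ, ∆, kc) curve can be tabulated, using the homogeneous relation g = ∆/ν with ν computed from a 2D BCS quasiparticle spectrum (ħ = m = 1, δµ = 0); the dispersion and sign conventions here are assumptions for illustration, not necessarily those of the thesis code:

```python
import numpy as np

def coupling_g(mu, delta, k_c, n_k=20000):
    """Sketch of g(mu, delta, k_c) = delta / nu for a homogeneous 2D system
    at delta_mu = 0 with hbar = m = 1 (illustrative conventions)."""
    k = np.linspace(1e-6, k_c, n_k)
    dk = k[1] - k[0]
    eps = k**2 / 2 - mu                      # single-particle dispersion
    E = np.sqrt(eps**2 + delta**2)           # quasiparticle energy
    # Anomalous density: nu = -int d^2k/(2 pi)^2 * delta / (2 E)
    nu = np.sum(-k * delta / (2 * E) / (2 * np.pi)) * dk
    return delta / nu

mu, k_c = 1.0, 10.0
deltas = np.linspace(0.5, 10.0, 8)
gs = np.array([coupling_g(mu, d, k_c) for d in deltas])

# For delta_mu = 0, g is monotonic in delta: |g| grows with delta (cf. fig. 4.3).
assert np.all(np.diff(np.abs(gs)) > 0)
```

In this convention the attractive coupling comes out negative; only the monotonicity at δµ = 0 is the point being illustrated.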

Figure 4.3: Coupling strength g(µ, ∆, δµ) as a function of ∆/eF for δµ/eF = 0.0, 2.5, 5.0, 7.5, 10.0. When δµ = 0, g is a monotonic function of ∆. If δµ is non-zero, for some range of g there can be two values of ∆ that yield the same g. Here eF is the Fermi energy, and ∆ and δµ are in units of eF.

4.4.2 Symmetric Vortices

To test the idea, it is good practice to first check the balanced case where na = nb. For a balanced vortex, the density profiles should be the same for the two species (↑, ↓). First, we simulate a 2D balanced vortex using BCS theory in a box with a cylindrical external potential of radius R and an N × N grid. Recall that the grid in the box confining the vortex imposes a natural energy cutoff on the system.17 Then, for µa = µb and a given ∆0, we can fix the effective interaction strength g = g0. The simulation procedure is summarized in table 4.1. The result is shown in fig. 4.4, where we can see that na = nb and the pairing field vanishes at the center of the vortex. This is required by continuity of the wavefunctions (in the effective field theory [138], the pairing field acts like a wavefunction).

17The momentum cutoff has to be much larger than the Fermi momentum because, with pairing, orbitals with energy much higher than the Fermi level are also partially occupied and must be taken into account.

Algorithm: Balanced Vortex Simulation Using BCS Theory

1. Pick a box with size L× L (L ≥ 2R) and grid points N ×N .

2. Fix the interaction strength g0 = g((µa + µb)/2,∆0, kc).

(a) Use the homogeneous method to compute the anomalous density ν.

(b) Compute g0 using the relation g0 = ∆0/ν.

3. Compute the anomalous density ν(x, y) using the inhomogeneous BCS

method eq. (3.18).

4. Update the pairing field ∆(x, y) = g0ν(x, y).

5. Check if ∆(x, y) is converged. If not, go to Step 3.

6. Compute densities: na, nb, ν, τa, τb, ja, jb.

Table 4.1: Algorithm: Balanced Vortex Simulation Using BCS Theory
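The fixed-point structure of steps 3–5 can be illustrated with the homogeneous gap equation itself: after fixing g0 = ∆0/ν(∆0) (step 2), iterating ∆ ← g0 ν(∆) with linear mixing converges back to ∆0. The toy below is a 2D homogeneous stand-in (ħ = m = 1, δµ = 0, illustrative conventions) for the full inhomogeneous update ∆(x, y) = g0 ν(x, y):

```python
import numpy as np

def nu_of_delta(mu, delta, k_c, n_k=20000):
    """Toy homogeneous anomalous density in 2D (hbar = m = 1, delta_mu = 0)."""
    k = np.linspace(1e-6, k_c, n_k)
    dk = k[1] - k[0]
    E = np.sqrt((k**2 / 2 - mu)**2 + delta**2)
    return np.sum(-k * delta / (2 * E) / (2 * np.pi)) * dk

mu, k_c, delta0 = 1.0, 10.0, 0.75
g0 = delta0 / nu_of_delta(mu, delta0, k_c)     # step 2: fix the coupling

delta = 2.0                                    # deliberately wrong initial gap
for _ in range(500):                           # steps 3-5: iterate to convergence
    delta_new = g0 * nu_of_delta(mu, delta, k_c)
    if abs(delta_new - delta) < 1e-12:
        break
    delta = 0.5 * delta + 0.5 * delta_new      # linear mixing for stability

assert abs(delta - delta0) < 1e-6              # the iteration recovers delta0
```

In the real simulation, ν(x, y) comes from the inhomogeneous BCS solution of eq. (3.18) rather than from this homogeneous integral, but the convergence loop has the same shape.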

Since the vortex solution is symmetric in the angular direction, these quantities depend only on the radius, so we can plot them in 1D as shown in fig. 4.5, where ∆0 = 0.75µ is in the weak-coupling regime and the x-axis is the radius in units of the healing length $\xi_h = \hbar/\sqrt{2m\Delta}$. The total density at the center of the vortex is non-zero. This is different from bosonic vortices in a BEC

where the particle density is fully depleted at the core [1]. We overlay the result from the homogeneous calculation (orange dotted lines). The overall agreement is good outside the vortex. In the vicinity of the core, however, the homogeneous calculation is unsatisfactory: it gives zero density at the core, and the pairing field does not agree well with the BCS results. These discrepancies can be understood as the homogeneous method failing to satisfy the gap equation, so that the trivial solution ∆ = 0 is used instead, suggesting a normal state.18 In the strongly coupled balanced case, the agreement is better, as can be seen in fig. 4.6. Another contribution to these discrepancies is that the inhomogeneous calculation automatically takes radial gradient terms of the wavefunctions into account, while the homogeneous FF-state simulation does not.

18This is mainly because we assume the momentum difference δk = 2q = 1/r, which diverges as r → 0. When it becomes comparable to the momentum cutoff, the gap equation has no solution. How to address this issue is left for future work.

Figure 4.4: Weakly coupled symmetric vortex (∆0 = 0.75µ, δµ = 0): the converged result of a balanced vortex simulation using BCS theory. Top panels: the total particle density n+ = na + nb (left); the density difference n− = na − nb, with na ≥ nb (middle); |∆(x, y)| (right). Bottom panels: current ja (left); current jb (middle); total current j+ = ja + jb (right). In the current panels, the direction of the current is also plotted as arrows with length proportional to the amplitude. Note: we do not see counter-currents in the balanced case.

Figure 4.5: Weakly coupled symmetric vortex (∆0 = 0.75µ, δµ = 0): the converged result of the balanced BCS vortex simulation plotted in the radial direction (∆/µ, n+, n−, ja, jb vs r/ξh). Solid lines are from homogeneous calculations.

Figure 4.6: Strongly coupled symmetric vortex (∆0 = 5µ, δµ = 0): radial profiles of ∆/µ, n+, n−, ja, and jb. At the boundary the BCS gap decreases; this is because the healing length is small in the strong-coupling case, so the boundary effect is more pronounced.

4.4.3 Weakly Polarized Vortices

To study polarized vortices where na ≠ nb, the chemical potentials must differ, i.e., δµ = µa − µb ≠ 0. For δµ < ∆0, we may see a density imbalance around the core of the vortex. In fig. 4.7, with δµ = 0.25µ, we indeed see that n− ≠ 0 around the center of the vortex core, which may be regarded as a Fulde–Ferrell state. The radial density profiles and the homogeneous results are shown in fig. 4.8. The agreement of the gaps is good outside the core and qualitatively reasonable near it. Inside the core, the homogeneous results fail to reproduce the structure of the vortex (see footnote 18). In this slightly polarized system, points 4 and 5 of the homogeneous pairing gap, indicated by the orange dots (counted from left to right), represent two FF states: they have non-zero gaps, and their spin-up and spin-down densities differ (see also the middle column of fig. 4.11).

Figure 4.7: Weakly coupled asymmetric vortex (∆0 = 0.75µ, δµ = 0.25µ): the converged result of the BCS simulation. Top panels: the total particle density n+ = na + nb (left); the density difference n− = na − nb, with na ≥ nb (middle); |∆(x, y)| (right). Bottom panels: current ja (left); current jb (middle); total current j+ = ja + jb (right). Note the small counter-currents in this weakly imbalanced case.

Figure 4.8: Weakly coupled asymmetric vortex (∆0 = 0.75µ, δµ = 0.25µ): the converged radial profiles from the BCS simulation. Solid lines are from homogeneous calculations.

4.4.4 Increasing the Polarization

If we increase the polarization by setting δµ = 0.45µ, a more interesting vortex emerges, as shown in fig. 4.9. The vortex core is larger than in the δµ = 0.25µ case, and we also see counterflow in both ja and jb. As the polarization increases, more FF states can be found in the homogeneous calculation. From the gap panel, the 4th through 11th points are all FF states (see also the right column of fig. 4.11).

Figure 4.9: Weakly coupled asymmetric vortex (∆0 = 0.75µ, δµ = 0.45µ): the converged result of the BCS simulation. Top panels: the total particle density n+ = na + nb (left); the density difference n− = na − nb, with na ≥ nb (middle); |∆(x, y)| (right). Bottom panels: current ja (left); current jb (middle); total current j+ = ja + jb (right). Note the counter-currents in this imbalanced case.

Figure 4.10: Weakly coupled asymmetric vortex (∆0 = 0.75µ, δµ = 0.45µ): the converged radial profiles from the BCS simulation. Solid lines are from homogeneous calculations.

To compare the results of these different cases, the panels for each case are rearranged into a single column in fig. 4.11. The first column is the symmetric case, the second the weakly polarized asymmetric case, and the third the asymmetric case with larger polarization.

Figure 4.11: Comparison of three weakly coupled vortices with ∆/µ = 0.75. Left: symmetric case, δµ = 0. Middle: weakly polarized case, δµ/µ = 0.25. Right: strongly polarized case, δµ/µ = 0.45.

4.5 2D Phase Diagram of FF States

The phase diagram for homogeneous bulk systems using the BdG equations can be found in [139]; in a 3D system, the FF state exists only in a very small and narrow parameter region, sandwiched between the standard superfluid state and the normal state. Here the 2D homogeneous phase diagram is constructed in the same way as in fig. 3 of [139] (see the left panel of fig. 4.12). The calculation is done in the grand-canonical ensemble: the chemical potentials are fixed, and the pressure is maximized over the center-of-mass momentum q and the gap ∆ for a given value of the effective interaction strength g (fixed by choosing ∆0 and µ, as in fig. 4.3). Then, by comparing the pressures of the competing states (the normal state parameterized by µ with ∆ = 0, and the superfluid state parameterized by µ with ∆ = ∆0), we can find the most stable state [41]. In the right panel of fig. 4.12, the region above the FF states is normal, while the region below consists of standard superfluid states. Comparing the left panel to the 3D result [139], we find that the FF state occupies a larger parameter region in the 2D case. If we allow q ≠ 0, the ground-state region under the condition of a finite current opens up an additional region.
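Generically, this construction reduces to maximizing a pressure surface over (∆, q) and classifying the maximizer. The quadratic "pressure" below is a deliberately artificial stand-in (not the BdG pressure), used only to illustrate the classification logic; all parameter values are invented:

```python
import numpy as np

def toy_pressure(delta, q, dmu):
    """Artificial pressure surface with an FF-like maximum at delta = 1 and
    q proportional to dmu.  NOT the BdG pressure of the thesis."""
    return -(delta**2 - 1.0)**2 - delta**2 * (q - 0.5 * dmu)**2

def most_stable_state(dmu, pressure=toy_pressure):
    """Grid-maximize the pressure over (delta, q) and classify the winner."""
    deltas = np.linspace(0.0, 2.0, 201)
    qs = np.linspace(0.0, 1.0, 201)
    D, Q = np.meshgrid(deltas, qs, indexing="ij")
    P = pressure(D, Q, dmu)
    i, j = np.unravel_index(np.argmax(P), P.shape)
    delta_opt, q_opt = deltas[i], qs[j]
    if delta_opt == 0:
        return "normal", delta_opt, q_opt
    return ("FF" if q_opt > 0 else "superfluid"), delta_opt, q_opt

state, d, q = most_stable_state(dmu=0.4)
assert state == "FF"
```

With the real BdG pressure substituted for the toy, the same grid search yields the boundaries shown in fig. 4.12.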

Figure 4.12: Fulde–Ferrell state phase diagram in 2D. Here δµ is the chemical potential difference, n is the total density, δn = na − nb is the density difference, a is the 2D scattering length, kF is the Fermi momentum, and µ is the average chemical potential. Left: the x-axis is −1/akF and the y-axis is δn/n, which represents the polarization. Right: the x-axis is the gap ∆/µ and the y-axis is δµ/µ, both in units of the average chemical potential.

In the previous section, several FF states were found in the homogeneous results; they are plotted on the phase diagram in fig. 4.13 (triangles in the left panel). We can see that they do not lie in the FF region and thus are not ground states. The numbers in the legends correspond to the points in fig. 4.11.

Figure 4.13: The nine FF states indicated in fig. 4.11 plotted on the phase diagram (left panel). They are not ground states, as they lie outside the FF region. The two diamonds are from the weakly polarized vortex, while the triangles are from the strongly polarized vortex.

CHAPTER 5. QUANTUM FRICTION

A common problem in both simulations and experiments is preparing a quantum system in its ground state. Experimentally, one can usually obtain a good approximation to the ground state by allowing high-energy particles to evaporate, at the cost of losing particles.19 Here we discuss a technique for finding ground states in quantum simulations of large systems, which may only be simulated on high-performance computing (HPC) clusters.

5.1 Fermionic DFT

In numerical simulations, one has a variety of techniques with distinct computational costs. The motivation for the method described in this chapter is to find the ground states of fermionic systems using DFT. The state of a fermionic DFT consists of a set of single-particle states ψ = {ψn}. Each of these single-particle states must be orthogonal to the others to ensure that the Pauli exclusion principle is satisfied.20 The challenge is that, for large systems, these states can comprise terabytes of data and must be distributed over an HPC cluster, as shown in fig. 5.1. Most minimization techniques destroy the orthogonality of the single-particle states and thus need continual reorthogonalization (e.g., via the Gram–Schmidt process). The reorthogonalization requires communication between all of the compute nodes. Communication is one of the slowest aspects of computing on a cluster, and this communication effectively prohibits the application of standard minimization techniques to large systems.

19The real challenge may be to prepare a single particle in its ground state.

20The physical state used in the DFT is a Slater determinant constructed from these single-particle orbitals.

Figure 5.1: Quantum simulation of fermionic models. Single-particle wavefunctions (ψ1 … ψ600) can be stored and evenly distributed over the compute nodes. However, conventional cooling methods require continual reorthogonalization, which requires exchanging all wavefunctions among nodes.

In contrast, real-time evolution by applying the Hamiltonian can be implemented efficiently. Such evolution requires communicating the same Hamiltonian H[{ψn}] to each of the nodes, but this typically only requires sending the effective potential (the equivalent of a single wavefunction's worth of information) to each node, which is feasible for large states.

The method of local quantum friction discussed here provides real-time evolution with a modified Hamiltonian Hc that removes energy from the system while maintaining the orthogonality of the single-particle states. We demonstrate the technique using bosons simulated with the GPE. Since the bosonic DFT depends on a single wavefunction (it is sometimes called an orbital-free DFT), other techniques are also available; however, we proceed with the fermionic example in mind.

5.2 Formulation

Consider the GPE: the ground state can be defined by a constrained variation of an energy functional E[ψ] with the particle number N[ψ] held fixed. The variational condition defines the time evolution of the system:

$$ i\hbar\frac{d\psi}{dt} = (H[\psi]-\mu)\psi = \frac{\delta(E[\psi]-\mu N[\psi])}{\delta\psi^\dagger} \qquad (5.1) $$

$$ E[\psi] = \int d^3\vec{x}\left(\frac{\hbar^2|\vec\nabla\psi|^2}{2m} + \frac{g}{2}n^2(\vec{x}) + V(\vec{x})\,n(\vec{x})\right), \qquad N[\psi] = \int d^3x\; n(\vec{x}), \qquad n(\vec{x}) = |\psi(\vec{x})|^2 \qquad (5.2) $$


This gives rise to the usual GPE effective Hamiltonian:

$$ H[\psi] = \frac{-\hbar^2\nabla^2}{2m} + gn(\vec{x}) + V(\vec{x}) \qquad (5.3) $$

5.2.1 Imaginary Time Cooling

The most straightforward approach to the minimization problem is the method of steepest descent (going downhill):

$$ d\psi \propto -\frac{\delta E[\psi]}{\delta\psi^\dagger} \propto -H\psi \qquad (5.4) $$

Note: There is a slight subtlety here since we are minimizing a function of a complex

field ψ. A careful treatment breaking ψ into real and imaginary parts coupled with

the fact that the energy is a real symmetric function E[ψ†, ψ] = E[ψ, ψ†] shows that

dψ ∝ −∂E/∂ψ† indeed gives the correct descent direction.

Thus, we can implement a continuous gradient descent if we evolve

$$ \hbar\frac{\partial\psi}{\partial\tau} = -H\psi = -i\hbar\frac{\partial\psi}{\partial t} \qquad (5.5) $$

with τ = it, which is equivalent to our original evolution with respect to an "imaginary time" t = −iτ. Mathematically, this can be expressed by including a "cooling phase" in front of the evolution:

$$ e^{i\phi}\, i\hbar\frac{\partial\psi}{\partial t} = H\psi \qquad (5.6) $$


Real-time evolution corresponds to e^{iφ} = 1, while imaginary-time cooling corresponds to e^{iφ} = i. Complex-time evolution with e^{iφ} ∝ 1 + iε can be used to mimic superfluid dynamics with dissipation; this is implemented in the simulations through the cooling parameter ε. Imaginary-time cooling is recovered for large values of ε.
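A minimal sketch of eq. (5.6) for a single particle in a 1D harmonic trap (ħ = m = ω = 1), using split-step Fourier evolution with the cooling phase e^{iφ} = (1 + iε)/|1 + iε| and renormalization after each step; the grid and parameters are illustrative, not the thesis implementation:

```python
import numpy as np

# Grid and harmonic potential (hbar = m = omega = 1).
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2

def evolve(psi, eps, dt, steps):
    """Evolve e^{i phi} i dpsi/dt = H psi with e^{i phi} = (1 + i eps)/|1 + i eps|."""
    phase = (1 + 1j * eps) / abs(1 + 1j * eps)    # the cooling phase e^{i phi}
    c = 1.0 / phase                                # e^{-i phi}
    expK = np.exp(-1j * c * (k**2 / 2) * dt / 2)   # half kinetic step
    expV = np.exp(-1j * c * V * dt)                # full potential step
    for _ in range(steps):
        psi = np.fft.ifft(expK * np.fft.fft(psi))
        psi = expV * psi
        psi = np.fft.ifft(expK * np.fft.fft(psi))
        psi /= np.sqrt(np.sum(abs(psi)**2) * dx)   # restore the particle number
    return psi

def energy(psi):
    kin = np.sum(abs(np.fft.ifft(k * np.fft.fft(psi)))**2) / 2 * dx
    pot = np.sum(V * abs(psi)**2) * dx
    return kin + pot

psi0 = np.exp(-(x - 1.0)**2 / 2).astype(complex)   # displaced initial guess
psi0 /= np.sqrt(np.sum(abs(psi0)**2) * dx)

psi = evolve(psi0, eps=1e6, dt=0.01, steps=2000)   # large eps: imaginary time
assert abs(energy(psi) - 0.5) < 1e-3               # ground-state energy 1/2
```

Smaller ε gives mostly real-time dynamics with weak dissipation; ε → ∞ reduces to pure imaginary-time cooling.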

Directly implementing evolution with an imaginary component in the phase will reduce not only the energy but also the particle number. Generally, this is not desirable, so we must rescale the wavefunction to restore the particle number. Scaling the wavefunction ψ → s(t)ψ corresponds to a term in the evolution ∂ψ/∂t ∝ s′(t)ψ, which can be implemented by adding a constant to the Hamiltonian, i.e., a chemical potential:

$$ e^{i\phi}\, i\hbar\frac{\partial\psi}{\partial t} = (H-\mu)\psi \qquad (5.7) $$

A little investigation shows that one should take:

$$ \mu(t) = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} \qquad (5.8) $$

Expressed another way, we make the change in the state |dψ⟩ ∝ H|ψ⟩ orthogonal to |ψ⟩ so that the state "rotates" without changing length:

$$ |d\psi\rangle \propto (H-\mu)|\psi\rangle, \qquad \langle d\psi|\psi\rangle = 0 \qquad (5.9) $$

This immediately gives the condition µ = ⟨H⟩ above, which preserves the normalization of the state no matter what the cooling phase is. Incidentally, even for real-time evolution, using such a chemical potential can be numerically advantageous, as it minimizes the phase evolution of the state. This has no physical significance, but it reduces numerical errors and allows one to use larger time steps.
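The orthogonality condition ⟨dψ|ψ⟩ = 0 can be checked directly with a small random Hermitian matrix standing in for H (a toy check, not the GPE):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                 # toy Hermitian Hamiltonian

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)               # normalized state

mu = (psi.conj() @ H @ psi).real         # mu = <psi|H|psi>
dpsi = -(H @ psi - mu * psi)             # |dpsi> proportional to (H - mu)|psi>

# The change is orthogonal to the state, so the norm is preserved
# to first order regardless of the cooling phase.
assert abs(psi.conj() @ dpsi) < 1e-10
```

Subtracting any other constant would leave a component of |dψ⟩ along |ψ⟩ and the norm would drift.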

5.2.2 Quantum Friction

One can derive cooling from another perspective, which allows the desired generalization. Consider how the energy E[ψ] of the system changes when we evolve with a Hermitian "cooling" Hamiltonian Hc = Hc†:

$$ i\hbar|\dot\psi\rangle \equiv i\hbar\frac{\partial|\psi\rangle}{\partial t} = H_c|\psi\rangle \qquad (5.10) $$

The change in energy is:

$$ \dot{E} = \langle\dot\psi|\frac{\partial E}{\partial\langle\psi|} + \frac{\partial E}{\partial|\psi\rangle}|\dot\psi\rangle
= \langle\dot\psi|H|\psi\rangle + \langle\psi|H|\dot\psi\rangle
= \frac{-\langle\psi|H_cH|\psi\rangle + \langle\psi|HH_c|\psi\rangle}{i\hbar}
= \frac{\langle\psi|[H,H_c]|\psi\rangle}{i\hbar} \qquad (5.11) $$

If we can choose Hc to ensure that this last term is negative-definite, then we have a cooling procedure. The last term can be expressed more usefully in terms of the normalized density operator R = |ψ⟩⟨ψ|/⟨ψ|ψ⟩, using the cyclic property of the trace:

$$ \frac{\langle\psi|[H,H_c]|\psi\rangle}{\langle\psi|\psi\rangle} = \mathrm{Tr}\big(R[H,H_c]\big) = \mathrm{Tr}\big(H_c[R,H]\big), \qquad \hbar\dot{E} = -\langle\psi|\psi\rangle\,\mathrm{Tr}\big(i[R,H]H_c\big) \qquad (5.12) $$

This gives the optimal choice:

$$ H_c = \big(i[R,H]\big)^\dagger = i[R,H], \qquad \hbar\dot{E} = -\langle\psi|\psi\rangle\,\mathrm{Tr}(H_c^\dagger H_c) \qquad (5.13) $$

ensuring a continuous steepest descent. It turns out that this choice is equivalent to imaginary-time cooling with rescaling:

$$ H_c = \frac{-i}{\langle\psi|\psi\rangle}\big(H|\psi\rangle\langle\psi| - |\psi\rangle\langle\psi|H\big), \qquad
i\hbar|\dot\psi\rangle = H_c|\psi\rangle = -i\left(H - \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}\right)|\psi\rangle = -i(H-\mu)|\psi\rangle \qquad (5.14) $$

One can include an arbitrary real constant in Hc, but this amounts to rescaling t. It might be tempting to include a large constant so that the energy decreases rapidly, but then one needs correspondingly smaller time steps, exactly negating the effect.

From now on, we will assume our states are appropriately normalized, dropping

the factors of 〈ψ|ψ〉 = 1.
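The choice Hc = i[R, H] and its equivalence to imaginary-time cooling with rescaling, eq. (5.14), are easy to verify numerically for a toy Hermitian H and a normalized state (ħ = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                 # toy Hermitian Hamiltonian

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

R = np.outer(psi, psi.conj())            # normalized density operator R
Hc = 1j * (R @ H - H @ R)                # Hc = i[R, H]
mu = (psi.conj() @ H @ psi).real

# Hc is Hermitian, and Hc|psi> = -i (H - mu)|psi>  (eq. 5.14).
assert np.allclose(Hc, Hc.conj().T)
assert np.allclose(Hc @ psi, -1j * (H @ psi - mu * psi))

# The energy change is non-positive: hbar Edot = -Tr(Hc^dagger Hc) <= 0.
Edot = -np.trace(Hc.conj().T @ Hc).real
assert Edot < 0
```

The same check generalizes directly to the multi-orbital R of the next section.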


5.2.3 Fermions

Now consider the same approach for cooling an orbital-based DFT (Hartree–Fock) whose states are formed as a Slater determinant of N orthonormal single-particle states |ψn⟩ with ⟨ψm|ψn⟩ = δmn. In this case, the density matrix has the form:

$$ R = \sum_n |\psi_n\rangle\langle\psi_n| \qquad (5.15) $$

The same formulation applies, with maximal cooling realized for Hc = i[R, H], so that

$$ H_c|\psi_i\rangle = -i\sum_n\big(H|\psi_n\rangle\langle\psi_n|\psi_i\rangle - |\psi_n\rangle\langle\psi_n|H|\psi_i\rangle\big) = -i\Big(H|\psi_i\rangle - \sum_n|\psi_n\rangle\langle\psi_n|H|\psi_i\rangle\Big) \qquad (5.16) $$

This again amounts to imaginary time evolution with H plus additional corrections

to ensure that the evolution maintains the orthogonality of the single-particle states

⟨ψm|ψn⟩ = δmn (effecting a continuous Gram–Schmidt process). It is clear that overlaps between all states Hni = ⟨ψn|H|ψi⟩ must be computed, making this approach

expensive in terms of communication.
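Because Hc = i[R, H] is Hermitian, the induced evolution is unitary, so it preserves the orthonormality of the orbitals exactly while lowering the total energy. A toy matrix demonstration (exact matrix exponential via eigendecomposition, ħ = 1, small time step):

```python
import numpy as np

def expm_hermitian(M, t):
    """exp(-i t M) for Hermitian M via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(-1j * t * w)) @ v.conj().T

rng = np.random.default_rng(2)
dim, nstates = 10, 3
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Random orthonormal single-particle states (columns), via QR.
psis, _ = np.linalg.qr(rng.normal(size=(dim, nstates))
                       + 1j * rng.normal(size=(dim, nstates)))

def energy(P):
    return np.trace(P.conj().T @ H @ P).real

R = psis @ psis.conj().T                 # R = sum_n |psi_n><psi_n|
Hc = 1j * (R @ H - H @ R)                # cooling Hamiltonian
U = expm_hermitian(Hc, 0.01)             # one small unitary cooling step
psis_new = U @ psis

# Orthonormality is preserved exactly (U is unitary) ...
overlaps = psis_new.conj().T @ psis_new
assert np.allclose(overlaps, np.eye(nstates))
# ... and the total energy decreases.
assert energy(psis_new) < energy(psis)
```

In the distributed setting the expense is not the step itself but assembling Hc, which couples every pair of orbitals.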

As before, we will now assume that all single-particle states are orthonormal, i.e.:

〈ψm|ψn〉 = δmn (5.17)


5.2.4 Local Formulation

Instead of using the full Hc constructed from all single-particle wavefunctions, one can use the same formalism but consider alternative forms of Hc that are easier to compute. In particular, one can consider local operators in position or momentum space:

$$ H_c = \beta_K K_c + \beta_V V_c \qquad (5.18) $$

where Vc is a potential diagonal in position space, ⟨x|Vc|x⟩ = Vc(x), and Kc is a potential diagonal in momentum space, ⟨k|Kc|k⟩ = Kc(k):

$$ K_c = \int\frac{dk}{2\pi}\,|k\rangle K_c(k)\langle k| \qquad (5.19) $$

$$ V_c = \int dx\,|x\rangle V_c(x)\langle x| \qquad (5.20) $$

Proceeding as before, we can ensure that the energy decreases if

$$ \hbar\dot{E} = -i\,\mathrm{Tr}(H_c[R,H]) \le 0 \qquad (5.21) $$

$$ V_c(x) = i\langle x|[R,H]|x\rangle = \hbar\dot{n}(x), \qquad K_c(k) = i\langle k|[R,H]|k\rangle = \hbar\dot{n}(k) \qquad (5.22) $$

where the time derivatives are taken with respect to evolution with the original Hamiltonian:

$$ \hbar\dot{n}(x) = \hbar\langle x|\dot{R}|x\rangle = -i\langle x|[H,R]|x\rangle \qquad (5.23) $$

$$ \hbar\dot{n}(k) = \hbar\langle k|\dot{R}|k\rangle = -i\langle k|[H,R]|k\rangle \qquad (5.24) $$


If the Hamiltonian separates as H = K + V into pieces that are local in momentum and position space respectively, then we can simplify these:

$$ \langle x|[R,H]|x\rangle = \langle x|[R,K]|x\rangle, \qquad \langle k|[R,H]|k\rangle = \langle k|[R,V]|k\rangle \qquad (5.25) $$

The meaning of these "cooling" potentials is elucidated by considering the continuity equation:

$$ \dot{n}(x) = -\vec\nabla\cdot\vec{j}(x) \propto V_c(x) \qquad (5.26) $$

If the density is increasing at x due to a converging current, the cooling potential increases at that point to slow the converging flow, thereby removing kinetic energy from the system. The interpretation of Kc(k) is similar in the dual space, though less intuitive.

Here are some explicit formulae:

$$ V_c(x) = 2\,\Im\big(\psi^*(x)\langle x|K|\psi\rangle\big) = 2\,\Im\big(\psi^*(x)\langle x|H|\psi\rangle\big), \qquad
K_c(k) = 2\,\Im\big(\psi^*(k)\langle k|V|\psi\rangle\big) = 2\,\Im\big(\psi^*(k)\langle k|H|\psi\rangle\big) \qquad (5.27) $$

Thus, we can apply the original Hamiltonian H to |ψ⟩ and consider the diagonal pieces in position and momentum space. In the fermionic case, these must be summed over all single-particle wavefunctions:

$$ V_c(x) = \sum_i 2\,\Im\big(\psi_i^*(x)\langle x|K|\psi_i\rangle\big) \qquad (5.28) $$

$$ K_c(k) = \sum_i 2\,\Im\big(\psi_i^*(k)\langle k|V|\psi_i\rangle\big) \qquad (5.29) $$


The Vc cooling potential was implemented in [140], and we found that Kc also works well. In our simulations, we found that without Kc, Vc alone cannot cool some systems to their ground states; in most situations, its efficiency is not as good as the combination of Kc and Vc.
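The relation Vc(x) = ħṅ(x) = −ħ∇·j(x) from eqs. (5.22) and (5.26) can be checked spectrally in 1D for H = K + V with a real local V (only K contributes to Vc). Here ħ = m = 1 and the grid is illustrative:

```python
import numpy as np

N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Gaussian carrying a current (plane-wave phase e^{i x}).
psi = np.exp(-x**2 / 2 + 1j * x)
psi /= np.sqrt(np.sum(abs(psi)**2) * dx)

fft, ifft = np.fft.fft, np.fft.ifft
Kpsi = ifft(k**2 / 2 * fft(psi))         # <x|K|psi> with K = -d^2/dx^2 / 2
Vc = 2 * np.imag(psi.conj() * Kpsi)      # cooling potential, eq. (5.27)

dpsi = ifft(1j * k * fft(psi))           # spectral derivative psi'
j = np.imag(psi.conj() * dpsi)           # probability current (hbar = m = 1)
div_j = np.real(ifft(1j * k * fft(j)))   # d j / d x

# Vc(x) = hbar ndot(x) = -hbar div j(x)
assert np.allclose(Vc, -div_j, atol=1e-10)
```

Where the current converges, Vc is positive and pushes the flow apart, which is the intuition behind eq. (5.26).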

5.2.5 Departure from Locality

We can generalize these operators slightly to cool in a non-local fashion. For example, consider the following cooling Hamiltonian. (The motivation here is that the operators D are derivatives, so this Hamiltonian is quasi-local.)

$$ H_c = \int dx\, D_a^\dagger|x\rangle V_{ab}(x)\langle x|D_b + \text{h.c.}, \qquad
\hbar\dot{E} = -i\left(\int dx\, V_{ab}(x)\langle x|D_b[R,H]D_a^\dagger|x\rangle + \text{h.c.}\right) \qquad (5.30) $$

We can ensure cooling if we take:

$$ \hbar V_{ab}(x) = \big(i\hbar\langle x|D_b[R,H]D_a^\dagger|x\rangle\big)^* = i\hbar\langle x|D_a[R,H]D_b^\dagger|x\rangle = \langle x|D_a|\dot\psi\rangle\langle x|D_b|\psi\rangle^* + \langle x|D_a|\psi\rangle\langle x|D_b|\dot\psi\rangle^* \qquad (5.31) $$

If Da,b are simply derivative operators, ⟨x|Da|ψ⟩ = ψ^{(a)}(x), then we have

$$ \hbar V_{ab}(x) = \dot\psi^{(a)}(x)\,\psi^{(b)*}(x) + \psi^{(a)}(x)\,\dot\psi^{(b)*}(x) \qquad (5.32) $$

where ψ̇(x) = ⟨x|H|ψ⟩/(iħ) is the time derivative with respect to the original Hamiltonian. Note that these potentials are no longer diagonal in either momentum or position space, so they should be implemented in the usual fashion with an integrator such as ABM (Adams–Bashforth–Moulton).

5.2.6 Dyadic Cooling

We also consider approximating Hc by a set of dyads in which real-space and momentum-space states are mixed to form the cooling potential:

$$ H_c = \sum_n |a_n\rangle f_n\langle b_n| + \text{h.c.} \qquad (5.33) $$

where |an⟩ is a position state, |bn⟩ is a momentum state, and fn is a factor chosen to make the time derivative of the energy negative (the downhill direction):

$$ i\hbar\dot{E} = \sum_i\left(f_i\sum_n\big(\langle\psi_n|H|a_i\rangle\langle b_i|\psi_n\rangle - \langle\psi_n|a_i\rangle\langle b_i|H|\psi_n\rangle\big) + \text{h.c.}\right) \qquad (5.34) $$

We can ensure that Ė is non-positive by taking:

$$ f_i = \frac{i}{\hbar}\sum_n\big(\langle a_i|H|\psi_n\rangle\langle\psi_n|b_i\rangle - \langle a_i|\psi_n\rangle\langle\psi_n|H|b_i\rangle\big) \qquad (5.35) $$

A simplification occurs if |an⟩ = |bn⟩:

$$ H_c = \sum_n |a_n\rangle f_n\langle a_n| + \text{h.c.} \qquad (5.36) $$

where

$$ f_i = \frac{i}{\hbar}\sum_n\big(\langle a_i|H|\psi_n\rangle\langle\psi_n|a_i\rangle - \langle a_i|\psi_n\rangle\langle\psi_n|H|a_i\rangle\big) \qquad (5.37) $$


Choosing |an〉 = |x〉 leads to our local cooling potential Vc while choosing |an〉 = |k〉

leads to Kc.

5.3 Procedure and Discussion

To apply these cooling potentials in a simulation, we can combine them with the original Hamiltonian H0 to form a full cooling Hamiltonian Hc:

$$ H_c = \alpha\big(\beta_0 H_0 + \beta_v V_c + \beta_k K_c + \beta_d V_d + \beta_y V_y\big) \qquad (5.38) $$

)(5.38)

where Vc and Kc are defined in eq. (5.28), Vd is the derivative cooling potential defined

in eq. (5.31), and Vy is the dyadic cooling potential define in eq. (5.36). The factors

(β0, βv, βk, βd, βy) are some real numbers (also called weights) except β0 can be

complex if one wants to use the imaginary cooling method. The overall factor α can

affect the time step we take to evolve wavefunctions. In general, a larger factor means

a smaller time step. In an adaptive solver, such as an initial value problem (IVP)

solver, this value of α does not change the overall wall time 21, because adaptive

methods pick smaller time steps if α is larger, and larger time steps if α is smaller.

In either case, the overall time will remain fairly unchanged. Thus, in our discussion,

21The time computers take to finish the calculation.

101

we will ignore the α factor, and we can choose value of β0 to be unity22, then all other

β terms can be interpreted as wight ratios in terms of β0:

Hc = H0 + βvVc + βkKc + βdVd + βyVy (5.39)

Our goal is to find the optimal β values for cooling a system efficiently. However, since these cooling potentials all depend on the wavefunctions, in general there is no single combination of β values that works best in all situations. An ideal solution is to adapt their values based on the real-time wavefunctions.

5.3.1 Procedure

Make Cooling Potentials “Physical”

Since we do not have a good theory to make predictions with, we perform numerical tests to find the "best" factors. Before that, we need to make these cooling potentials "physical" so that they have units of energy and do not depend on the UV and IR scales. Note that eq. (5.28) has units of energy density; to obtain units of energy, we simply rescale it by the minimum lattice volume dx (dS = dx × dy for two-dimensional systems, dV = dx × dy × dz for three-dimensional systems). We can then pick different box sizes L and lattice spacings dx = L/N (N is the number of discrete points across L), as shown in fig. 5.2. In the simulation, the wavefunction is defined as:

$$ \psi(x) = e^{-x^2/2 + ix} \qquad (5.40) $$

22β0 cannot be zero, because the original Hamiltonian must create currents; without currents, Vc and Kc are unable to remove energy.

In the top panels of fig. 5.2, we plot the particle density n(x) = |ψ(x)|² for the UV (left) and IR (right) cases. In the bottom-left panel, we plot the cooling potentials (Vc terms) with the same UV scale (the same point spacing dx). Potentials with the same IR scale (the same box size L) are plotted in the bottom-right panel.

103

Cooling Potential in UV and IR limits

10 5 0 5 10x

0.6

0.8

1.0

1.2

1.4

1.6| |2(UV)

dx=0.2,N=128dx=0.1,N=256dx=0.05,N=512

4 2 0 2 4x

0.0

0.2

0.4

0.6

0.8

1.0

Vc(UV)

dx=0.2, N=128dx=0.1, N=256dx=0.05, N=512

10 5 0 5 10x

0.6

0.8

1.0

1.2

1.4

1.6| |2(IR)

dx=0.1,N=128dx=0.1,N=256dx=0.1,N=512

4 2 0 2 4x

0.0

0.2

0.4

0.6

0.8

1.0

Vc(IR)

dx=0.1, N=128dx=0.1, N=256dx=0.1, N=512

Figure 5.2: Using different dx and box sizes L = dx×N to represent the same wavefunction, we shift each line by a small value in vertical direction to make all of themvisible. Top-Left: Particle densities |ψ|2 for different lattice spacings with the samebox size (UR).Top-Right: Particle densities |ψ|2 for different box sizes with the samelattice spacing (IR). Bottom-Left: Cooling potentials for different point spacing (UV).Bottom-Right: Cooling potentials for different box sizes (IR).


Normalize Cooling Potentials

In the previous part, we required all the cooling potentials to have physical properties: they all have units of energy and are independent of the box configuration. However, their weight factors βv, βk, βd, and βy can still be arbitrary. We therefore normalize these cooling potentials separately, such that, for a given initial 1D state and β = 1 (β ∈ {βv, βk, βd, βy}), each cooling Hamiltonian Hc = H0 + βV (V ∈ {Vc, Kc, Vd, Vy}) cools the initial state to the final state ψf(x) (which may not be the ground state) of a 1D harmonic system with minimum wall time using an adaptive solver. The method is summarized in table 5.1. It is worth pointing out that, since the choice of initial wavefunction is arbitrary, our simulation uses the first excited state in a box, i.e.

⟨x|ψi⟩ = ψ(x) = √(2/L) sin(πx/L)    (5.41)

Such a state has some overlap with the actual ground state of a harmonic oscillator. This is important: if the initial state had no overlap with the ground state, the two states would not lie in the same Hilbert subspace. In that case no local unitary operator could “rotate” the initial state into the ground state, and any unitary cooling potential would fail.
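The overlap of eq. (5.41) with the oscillator ground state can be checked numerically. A small sketch, assuming the box spans (0, L) with the oscillator centered at L/2 and L = 10 (both illustrative choices; the thesis does not fix these conventions here):

```python
import numpy as np

L = 10.0
x = np.linspace(0, L, 4001)
dx = x[1] - x[0]

psi_i = np.sqrt(2 / L) * np.sin(np.pi * x / L)        # initial state, eq. (5.41)
psi_0 = np.pi**-0.25 * np.exp(-(x - L / 2)**2 / 2)    # oscillator ground state

norm_i = np.sum(psi_i**2) * dx                        # check normalization
overlap = np.sum(psi_i * psi_0) * dx                  # <psi_0|psi_i>, nonzero
```

A nonzero `overlap` confirms that a unitary cooling potential can, in principle, rotate this state toward the ground state.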


Algorithm: Normalize Weight Factors

1. Pick an unnormalized cooling potential Vu ∈ {Vc, Kc, Vd, Vy}.

2. Let the ground state be |ψ0⟩, and select an initial state |ψi⟩ (make sure ⟨ψ0|ψi⟩ ≠ 0).

3. Let the original Hamiltonian be H0 = p²/2m + mω²x²/2 (ω = 1).

4. Construct the cooling Hamiltonian Hc = H0 + βVu, so that i dψ(x)/dt = Hcψ(x).

5. Find the optimal β (denoted βo) such that an adaptive solver takes the minimum time to cool the system to a state with minimum energy.

6. Define the normalized cooling potential Vn = βoVu.

Table 5.1: Algorithm: Normalizing the Weight Factors

5.3.2 Preliminary Results

Once all the cooling potentials are normalized, we can compare their efficiency.

A simple test starts with a wave function in a box; then, we use different cooling

potentials to cool down the initial wavefunction to a state with energy just 1% above


that of the ground state of a harmonic oscillator. We first test the imaginary cooling method by setting β0 = −i and all other coefficients to zero in eq. (5.38). The result is shown in fig. 5.3. The imaginary cooling method is so efficient that it takes only 0.34 seconds²³ to cool down to the target energy. The left panel shows the particle densities for the initial state (solid line), the ground state (dashed line), and the final state (crossed line); the final state lies essentially on top of the ground-state line. In the middle panel, we plot the energy difference ratio (E − E0)/E0 vs. physical time, where E is the intermediate energy at any given physical time²⁴ and E0 is the ground-state energy. In the right panel, we plot −dE/dt vs. physical time; it should be non-negative, as the energy gradient must remain positive-definite in order to remove energy from the system, and we do indeed see a curve above the zero line. Similarly, we stack the test results for Vc, Kc, and Vd in the same plot, as shown in fig. 5.4. From the results, we find that the unitary cooling methods are slower than the imaginary method. However, the merit of unitary cooling is that it reduces communication traffic inside a supercomputer, which could otherwise cost even more time, since all wavefunctions would have to be sent among the compute nodes. Among the

²³This 0.34 s is considered fast in the context of wall time using the code we have; a more consistent way to compare efficiency may be to count the number of function calls, but the wall time also gives a quantitative measurement.
²⁴Physical time is the time it takes a physical system to reach a state in the physical world, not the computing time (wall time).


unitary potentials, Vc and Kc have comparable efficiency, while Vd and Vy are much slower (see the wall times in the middle panels) and are also unable to reach the target of 1% above the ground energy (the middle panel of the last row does not reach a relative error of 10⁻² above the ground-state energy). Further simulations have shown that Vd and Vy are not as efficient as Vc and Kc. They might work well if we could find a good method to adjust βd and βy adaptively; this is left for future work. In the rest of the discussion, we focus on Vc and Kc.
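The imaginary-cooling behavior (β0 = −i, all other coefficients zero) can be reproduced in a few lines. A minimal sketch for H = p²/2 + x²/2 (m = ħ = ω = 1), using a fixed-step split-step integrator in place of the adaptive solver of the text; grid and step sizes are illustrative assumptions:

```python
import numpy as np

N, L = 256, 20.0
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2

def energy(psi):
    # E = <psi|H|psi> with the kinetic term evaluated spectrally.
    kinetic = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
    return np.real(np.sum(np.conj(psi) * (kinetic + V * psi)) * dx)

psi = np.exp(-(x - 1.0)**2 / 2).astype(complex)   # a displaced-Gaussian start
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
E_init = energy(psi)

# Imaginary-time evolution dpsi/dt = -H psi, with renormalization at each step.
dt = 0.01
expV = np.exp(-0.5 * dt * V)
expK = np.exp(-dt * 0.5 * k**2)
for _ in range(2000):
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
E_final = energy(psi)
```

The energy decreases monotonically toward the ground-state value E0 = 1/2, mirroring the behavior in fig. 5.3.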

Imaginary Cooling

Figure 5.3: Left: the solid line is the initial-state density, the dashed line is the ground-state density, and the crossed line is the final-state density with energy equal to 1.01 times the ground-state energy. Middle: the energy difference ratio with respect to the ground-state energy as a function of physical time; it takes only about 0.34 s to cool to the level with energy 1% above the ground energy. Right: the energy gradient at different physical times.


Comparison of Three Distinct Cooling Potentials

Figure 5.4: From top to bottom are results for Vc, Kc, and Vd. Left: the solid line is the initial-state density, the dashed line is the ground-state density, and the crossed line is the final-state density with energy equal to 1.01 times the ground-state energy. Middle: relative error in energy vs. physical time. Right: the energy gradient at different physical times.


5.3.3 Simulation and Discussion

The idea that combinations of different cooling potentials may be more efficient is worth studying. As mentioned before, these cooling potentials are sensitive to the states from which they are constructed, but so far we do not have a good method to adjust their weight factors in real time. This discussion therefore focuses on how combinations of the Vc and Kc weight factors change the cooling outputs. The Hamiltonian used for the discussion is:

H = −∇²/2 + ω²x²/2 + gn/2    (5.42)

where m = ħ = ω = 1, g ∈ {−1, 0, 1} with proper units, and n is the total particle density.

Ground State Densities for Different Values of g

Figure 5.5: The ground-state densities for different values of g.


Simulations are tested with four different initial wavefunctions [141] (see fig. 5.6):

ψUN(x) = 1/√L    (a uniform wavefunction)    (5.43)

ψBS(x) = π^{−1/4} e^{−x²/2 + ix}    (a bright-soliton wavefunction)    (5.44)

ψGM(x) = C Σᵢ₌₁¹⁰ cᵢ e^{−x²/(2nᵢ²)}    (a random mixture of Gaussian functions)    (5.45)

ψST(x) = √(2/L) sin(πx/L)    (a standing wavefunction in a box)    (5.46)

where cᵢ and nᵢ are positive random numbers in the range (0, 1), L is the box size, and C is a normalization constant ensuring ∫dx ψ†GM(x)ψGM(x) = 1.
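The four initial states above can be set up directly on a lattice. A minimal sketch (grid parameters and the random seed are illustrative; the standing wave is shifted so that it vanishes at the grid edges, an assumption about the box placement):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 512, 12.0
dx = L / N
x = np.arange(N) * dx - L / 2

def normalize(psi):
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi_UN = np.full(N, 1 / np.sqrt(L), dtype=complex)                 # eq. (5.43)
psi_BS = np.pi**-0.25 * np.exp(-x**2 / 2 + 1j * x)                 # eq. (5.44)
ci = rng.uniform(0, 1, 10)                                         # random c_i
ni = rng.uniform(0, 1, 10)                                         # random n_i
psi_GM = normalize(np.sum(ci[:, None] * np.exp(-x[None, :]**2 / (2 * ni[:, None]**2)),
                          axis=0).astype(complex))                 # eq. (5.45)
psi_ST = np.sqrt(2 / L) * np.sin(np.pi * (x + L / 2) / L).astype(complex)  # eq. (5.46)

norms = [np.sum(np.abs(p)**2) * dx for p in (psi_UN, psi_BS, psi_GM, psi_ST)]
```

All four states come out normalized on the lattice, so they can be fed directly into the cooling runs.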

Densities of the Four Initial Wavefunctions

Figure 5.6: The four initial-state density profiles. They have different overlaps with the ground state of the Hamiltonian, so the cooling speeds may vary from case to case.

For each initial state, we test all the different values of g. In each simulation, a target energy above the ground-state energy is set; once the cooling procedure reaches that energy level, the simulation stops and the wall time is recorded. For each combination of initial state and interaction g, we run test cases that use only Vc (βk = 0) as well as cases with both Vc and Kc, to verify the idea that the combination of Vc and Kc may be more efficient; the best results from the former and the latter are then compared. For each target energy (20%, 10%, and 1% above the ground energy), the best results are plotted, with cases sharing the same initial state grouped into a panel. For instance, in fig. 5.7 the energy level is set to 20% above the ground energy, and in the top-left panel we plot the best results (in terms of wall time) for the cases with uniform initial states that can reach the target energy (or lower). For g = 0, the best case with Vc only (solid blue line) reaches the target in about 4.6 seconds, while the case with both Vc and Kc (dashed blue line) reaches the same energy in about 3 seconds; moreover, the latter reaches a level about 10⁻⁵ above the ground energy in 4.6 seconds. That means the combination of Vc and Kc can cool a uniform initial state to the target state faster than Vc alone. For g = −1, we only see the result from the combination of Vc and Kc: with Vc only, the cooling potential cannot cool the system to the target energy. This means that, in some situations, we may not be able to reach the desired target without the help of Kc. Similar arguments apply to the other three panels, and can be extended to different energy cutoffs, as shown in fig. 5.8 (10% above the ground level) and fig. 5.9 (1% above the ground level). In the left column of fig. 5.9 we only see dashed lines, which suggests that, to achieve lower energies, the role of Kc may be indispensable in some situations. From a careful examination of all these “best” results, we may conclude that the contribution from the Kc potential clearly accelerates the cooling procedure and helps a system reach a lower energy level.

Cooling to States with Energy 20% above the Ground Energy

Figure 5.7: The parameters βV and βK are those of the tests with the smallest wall time to reach an energy within 20% above the ground-state energy. Solid lines are cases with only Vc; dashed lines are cases with both Vc and Kc.


Cooling to States with Energy 10% above the Ground Energy

Figure 5.8: The parameters βV and βK are those of the tests with the smallest wall time to reach an energy within 10% above the ground-state energy. Solid lines are cases with only Vc; dashed lines are cases with both Vc and Kc.


Cooling to States with Energy 1% above the Ground Energy

Figure 5.9: The parameters βV and βK are those of the tests with the smallest wall time to reach an energy within 1% above the ground-state energy. Solid lines are cases with only Vc; dashed lines are cases with both Vc and Kc.

5.4 BCS Cooling

So far, all the previous simulations have used a single state. A Fermi system may have multiple particles, which occupy different states. In this section, cooling with multiple states is presented using the 1D harmonic-oscillator Hamiltonian for the sake of simplicity. The initial states come from a system of fermions confined in a 1D box. Simulations are done with both Vc and Kc; the speed of cooling is not the focus of this section. Instead, attention is paid to the properties of unitary cooling in simple many-body systems. Let Φ = {|φ0⟩, |φ1⟩, ...} be the set of all states of free Fermi particles in a 1D box, and let Ψ = {|ψ0⟩, |ψ1⟩, ...} be the state set for a 1D harmonic oscillator. Here |φ0⟩ is the ground state and |φn⟩ the nth excited state in the box, while |ψ0⟩ is the ground state and |ψn⟩ the nth excited state of the harmonic system.

Let the box sit in the range (−L/2, L/2); then:

φn(x) = ⟨x|φn⟩    (5.47)
      = √(2/L) sin(knx)  for n even;  √(2/L) cos(knx)  for n odd    (5.48)

where kn = nπ/L and n = 1, 2, 3, .... If we set the harmonic-oscillator angular frequency to 1, then:

ψn(x) = ⟨x|n⟩ = (1/√(2ⁿ n!)) π^{−1/4} e^{−x²/2} Hn(x)    (5.49)

where Hn(x) is the Hermite polynomial.

The overlaps |⟨ψm|φn⟩|² between states in Ψ and Φ are computed for the first five states (0 ≤ n ≤ 4 and 0 ≤ m ≤ 4), as shown in table 5.2. We see that states in Φ only have finite overlap with states in Ψ that share the same parity. Since unitary operators preserve the inner product between two states, a unitary cooling potential cannot “rotate” a state φ into another state ψ if ⟨φ|ψ⟩ = 0.

Initial States and Ground State Overlap

|⟨φ|ψ⟩|²   φ0(x)    φ1(x)    φ2(x)    φ3(x)      φ4(x)
ψ0(x)      0.321    0.0      0.146    0.0        0.0301
ψ1(x)      0.0      0.189    0.0      0.231      0.0
ψ2(x)      0.103    0.0      0.044    0.0        0.233
ψ3(x)      0.0      0.154    0.0      0.000964   0.0
ψ4(x)      0.046    0.0      0.123    0.0        0.00637

Table 5.2: Initial States and Ground State Overlap
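The parity pattern of zeros in table 5.2 can be reproduced numerically from eqs. (5.48) and (5.49). A sketch (the box size is an illustrative choice, so the nonzero entries differ from the table, but the checkerboard of zeros is generic; the table's φ0...φ4 correspond to quantum numbers n = 1...5 in eq. (5.48)):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

Lbox = 10.0                        # box size: an assumption for illustration
x = np.linspace(-Lbox / 2, Lbox / 2, 4001)
dx = x[1] - x[0]

def box_state(n):
    # eq. (5.48): cos for odd n (even parity), sin for even n (odd parity)
    kn = n * np.pi / Lbox
    f = np.cos(kn * x) if n % 2 == 1 else np.sin(kn * x)
    return np.sqrt(2 / Lbox) * f

def ho_state(m):
    # eq. (5.49): harmonic-oscillator eigenstate with omega = 1
    c = np.zeros(m + 1)
    c[m] = 1.0                     # select the physicists' Hermite polynomial H_m
    norm = np.pi**-0.25 / np.sqrt(2.0**m * factorial(m))
    return norm * np.exp(-x**2 / 2) * hermval(x, c)

table = np.array([[np.abs(np.sum(ho_state(m) * box_state(n)) * dx)**2
                   for n in range(1, 6)] for m in range(5)])
```

Entries pairing opposite-parity states vanish identically, which is exactly why unitary cooling cannot populate them.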

5.4.1 Single-State System Revisited

To verify the unitary properties, let us first check how a cooling potential cools an initial state |φi⟩ that has no overlap with the ground state |ψ0⟩, i.e., ⟨φi|ψ0⟩ = 0. Choose |φi⟩ = |φ1⟩ as the only initial state (a single-particle system), and monitor the occupancies of the four lowest states. The result is shown in fig. 5.10. It is clear that the ground state has zero occupancy from the very beginning; only the first and third excited states are populated, and in the end the system is cooled down to the first excited state. We cannot reach the ground state because the unitary cooling potential maintains orthogonality.


Cooling a One-Particle System to a Non-Ground State

Figure 5.10: Cooling an initial state |φi⟩ = |φ1⟩ that has no overlap with the ground state to the lowest possible state permitted by unitary potentials. Left: the occupancies of the four lowest states. Middle: occupancy of momentum states; dx is the lattice spacing that defines the momentum cutoff kc = π/dx in the simulation. Right: relative error as a function of time.

5.4.2 Two-State System

The simplest many-body case is a two-particle system with the initial states (|φ0⟩, |φ1⟩). The cooling procedure is summarized in fig. 5.11. In the left panel, the occupancies of the four lowest states are plotted as functions of time. At the beginning of the cooling, the second and third excited states are also partially populated; as the cooling proceeds, their occupancies decrease, while the ground state and the first excited state are fully occupied at the end of the cooling. The middle panel shows the occupancy in momentum space. The right panel shows the energy as a function of time, which decreases monotonically.


Cooling a Two-Particle System to the Ground States

Figure 5.11: A two-particle system with initial states (|φ0⟩, |φ1⟩).

Can we always reach the ground states? Consider the initial states (|φ2⟩, |φ4⟩): from table 5.2, these two states have no overlap with |ψ1⟩ and |ψ3⟩, so we would expect no occupancy of the first and third excited states. Our simulation confirms this, as shown in fig. 5.12.

Cooling a Two-Particle System to Non-Ground States

Figure 5.12: A two-particle system with initial states (|φ2⟩, |φ4⟩).


So we may conclude that, to reach the ground states, the initial states must have non-zero overlaps with those ground states; otherwise the final states will not be the ground states.

5.4.3 Multiple-State System and Fermi Surface

Since the wavefunctions of a harmonic oscillator are all Gaussians (weighted by polynomials), the momentum occupancy at the Fermi surface should also look Gaussian if we have many particles in the system. We simulate a ten-particle system, as shown in fig. 5.13; the middle panel does indeed look like a Gaussian.

Cooling a Ten-Particle System to the Ground States

Figure 5.13: A ten-particle system with initial states (|φ0⟩–|φ10⟩). The momentum occupancy profile is fairly close to a Gaussian. The x-axes of the left and right panels are physical time.

If a system has hundreds of particles with pairing, we will need to consider many thousands of states and use a supercomputer. Each compute node may host tens or hundreds of states; each piece of the overall cooling potential Hc can be computed locally and then sent to the other nodes, so that all compute nodes share the same unitary operator. This ensures orthonormality among all states without requiring continual reorthogonalization, which is computationally expensive.
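The exchange scheme can be mimicked in a few lines of NumPy. The node layout and the Hermitian fragment below are illustrative stand-ins (plain arrays in place of MPI ranks, a generic Hermitian construction in place of the actual Vc/Kc pieces), not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_states = 16, 6

# Six orthonormal states, split across three hypothetical "compute nodes".
states = np.linalg.qr(rng.normal(size=(dim, n_states)))[0]
nodes = [states[:, 0:2], states[:, 2:4], states[:, 4:6]]

def fragment(local):
    # Each node builds a Hermitian fragment of H_c from its own local states.
    m = local @ local.conj().T
    return (m + m.conj().T) / 2

# "All-reduce": sum the fragments so every node holds the same H_c.
H_c = sum(fragment(local) for local in nodes)

# Every node applies the same unitary U = exp(-i dt H_c) to its local states only.
w, v = np.linalg.eigh(H_c)
U = (v * np.exp(-1j * 0.05 * w)) @ v.conj().T

evolved = np.hstack([U @ local for local in nodes])
gram = evolved.conj().T @ evolved          # overlaps among *all* evolved states
err = np.max(np.abs(gram - np.eye(n_states)))
```

Because every node applies the identical unitary, the Gram matrix of all evolved states stays the identity, with no cross-node reorthogonalization step.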

Quantum Simulation of Fermionic Models Using Unitary Cooling

Figure 5.14: In the ith compute node, a fragment H_c^i of the cooling potential is computed locally using the states on that node. By exchanging all the fragments of Hc with the other nodes, all nodes obtain the same cooling potential. Applying the cooling potential to the local states then automatically maintains orthogonality among all states and conserves particle number.


5.5 Conclusion

We have demonstrated that the unitary cooling operator can remove energy from a many-body quantum system while automatically maintaining the orthonormality of the states. The combination of Vc and Kc is, in general, more efficient than Vc alone. These results can be extended straightforwardly to multi-dimensional systems and to systems with pairing fields.


APPENDIX

A. Rotating Frame Transform

A..1 Introduction

In theoretical calculations, some quantities carry spatial phases that make the calculation harder and more computationally expensive. To get rid of spatial phases, one may transform the Hamiltonian to a rotating frame. A typical application of the rotating frame is in GPE simulations, where the time-dependent driving potentials generated by laser beams to couple different pseudo-spin states can be absorbed into the kinetic terms by transforming the Hamiltonian from the lab frame to the rotating frame. Similar applications in BCS theories can be found in the literature.

A..2 Preliminary Mathematics

Before proceeding to the actual derivation, we first deduce a simple mathematical identity:

\[
\nabla^2\left[U(x)e^{iqx}\right]
= \nabla\left[\nabla U(x)\,e^{iqx} + iq\,U(x)\,e^{iqx}\right]
= \nabla^2 U(x)\,e^{iqx} + 2iq\,\nabla U(x)\,e^{iqx} - q^2 U(x)\,e^{iqx}
= (\nabla + iq)^2 U(x)\, e^{iqx}
\tag{0.1}
\]
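Identity (0.1) is easy to verify numerically with spectral (FFT) derivatives on a periodic grid; a small sketch, with q chosen commensurate with the box so that e^{iqx} is periodic (all parameter values are illustrative):

```python
import numpy as np

N, L = 256, 20.0
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
q = 4 * (2 * np.pi / L)                        # a lattice-commensurate wavenumber

def D(f):
    # spectral first derivative via FFT
    return np.fft.ifft(1j * k * np.fft.fft(f))

U = np.exp(-x**2 / 2) * (1 + 0.3 * x)          # a smooth, localized test function

# d^2/dx^2 [U e^{iqx}]  vs.  e^{iqx} (d/dx + iq)^2 U, expanded as in eq. (0.1)
lhs = D(D(U * np.exp(1j * q * x)))
rhs = np.exp(1j * q * x) * (D(D(U)) + 2j * q * D(U) - q**2 * U)
err = np.max(np.abs(lhs - rhs))
```

The two sides agree to machine precision, confirming that the phase can be absorbed into a shifted derivative.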


A..3 Derivation

Start with a BdG Hamiltonian whose pairing field is modulated periodically, i.e., ∆ → ∆e^{iθ}, where we let θ = 2qx throughout the calculation. Let 2q = qa + qb:

\[
\begin{aligned}
H\psi &= \begin{pmatrix} -\nabla^2 - \mu_a & \Delta e^{2iqx} \\ \Delta^* e^{-2iqx} & \nabla^2 + \mu_b \end{pmatrix}
\begin{pmatrix} U(x)e^{iq_ax} \\ V^*(x)e^{-iq_bx} \end{pmatrix} \\
&= \begin{pmatrix} (-\nabla^2 - \mu_a)U(x)e^{iq_ax} + \Delta e^{2iqx}V^*(x)e^{-iq_bx} \\ \Delta^* e^{-2iqx}U(x)e^{iq_ax} + (\nabla^2 + \mu_b)V^*(x)e^{-iq_bx} \end{pmatrix} \\
&= \begin{pmatrix} [-(\nabla + iq_a)^2 - \mu_a]U(x)e^{iq_ax} + \Delta V^*(x)e^{iq_ax} \\ \Delta^* U(x)e^{-iq_bx} + [(\nabla - iq_b)^2 + \mu_b]V^*(x)e^{-iq_bx} \end{pmatrix}
= E\begin{pmatrix} U(x)e^{iq_ax} \\ V^*(x)e^{-iq_bx} \end{pmatrix}
\end{aligned}
\tag{0.2}
\]

By canceling out the phase terms:

\[
\begin{pmatrix} (i\nabla - q_a)^2 - \mu_a & \Delta \\ \Delta^* & -\left[(i\nabla + q_b)^2 - \mu_b\right] \end{pmatrix}
\begin{pmatrix} U(x) \\ V^*(x) \end{pmatrix}
= E \begin{pmatrix} U(x) \\ V^*(x) \end{pmatrix}
\tag{0.3}
\]

Let 2δq = qa − qb; then:

qa = q + δq,   qb = q − δq    (0.4)


Substituting these relations into the previous equation yields:

\[
\begin{pmatrix} (i\nabla - q - \delta q)^2 - \mu_a & \Delta \\ \Delta^* & -\left[(i\nabla + q - \delta q)^2 - \mu_b\right] \end{pmatrix}
\begin{pmatrix} U(x) \\ V^*(x) \end{pmatrix}
= E \begin{pmatrix} U(x) \\ V^*(x) \end{pmatrix}
\tag{0.5}
\]

This relation can be used to simplify the calculation of the Fulde–Ferrell–Larkin–Ovchinnikov phase in a homogeneous system.

A..4 Polar coordinates

In some applications, such as vortices on a cylindrical DVR basis, a similar approach can be adopted to get rid of the angle-dependent phase. In polar coordinates, the Laplacian ∇² is:

\[
\nabla^2 f = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial\theta^2}
= \frac{\partial^2 f}{\partial r^2} + \frac{1}{r}\frac{\partial f}{\partial r} + \frac{1}{r^2}\frac{\partial^2 f}{\partial\theta^2}
\tag{0.6}
\]

To be general, assume f = f(r, θ), then:

\[
\begin{aligned}
\nabla^2\left[f e^{in\theta}\right]
&= \frac{\partial^2}{\partial r^2}\left[f(r,\theta)e^{in\theta}\right] + \frac{1}{r}\frac{\partial}{\partial r}\left[f(r,\theta)e^{in\theta}\right] + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}\left[f(r,\theta)e^{in\theta}\right] \\
&= \left\{\frac{\partial^2 f}{\partial r^2} + \frac{1}{r}\frac{\partial f}{\partial r} + \frac{1}{r^2}\left[\left(\frac{\partial^2}{\partial\theta^2} + 2in\frac{\partial}{\partial\theta} - n^2\right)f\right]\right\}e^{in\theta} \\
&= \left[\left(\nabla^2 + \frac{2in}{r^2}\frac{\partial}{\partial\theta} - \frac{n^2}{r^2}\right)f(r,\theta)\right]e^{in\theta}
\end{aligned}
\tag{0.7}
\]

If f(r, θ) = f(r), i.e., f depends only on r, the above result simplifies to:

\[
\nabla^2\left[f e^{in\theta}\right] = \left[\left(\nabla^2 - \frac{n^2}{r^2}\right)f(r)\right]e^{in\theta}
\tag{0.8}
\]


To compute the pairing field of a vortex in the BdG formalism, let the pairing field take the form:

∆ = ∆₀ g(r) e^{i2nθ}    (0.9)

The detailed calculation is as follows:

\[
\begin{aligned}
&\begin{pmatrix} -\tfrac{\nabla^2}{2} - \mu_a & \Delta_0 g(r)e^{i2n\theta} \\ \Delta_0^* g^*(r)e^{-i2n\theta} & \tfrac{\nabla^2}{2} + \mu_b \end{pmatrix}
\begin{pmatrix} U(r)e^{in\theta} \\ V^*(r)e^{-in\theta} \end{pmatrix} \\
&\quad= \begin{pmatrix} \left[-\tfrac{\nabla^2}{2} - \mu_a + \tfrac{n^2}{2r^2}\right]U(r)e^{in\theta} + \Delta_0 g(r)V^*(r)e^{in\theta} \\ \Delta_0^* g^*(r)U(r)e^{-in\theta} + \left[\tfrac{\nabla^2}{2} + \mu_b - \tfrac{n^2}{2r^2}\right]V^*(r)e^{-in\theta} \end{pmatrix}
= E \begin{pmatrix} U(r)e^{in\theta} \\ V^*(r)e^{-in\theta} \end{pmatrix}
\end{aligned}
\tag{0.10}
\]

By canceling out the phase terms:

\[
\begin{pmatrix} -\tfrac{\nabla^2}{2} - \mu_a + \tfrac{n^2}{2r^2} & \Delta_0 g(r) \\ \Delta_0^* g^*(r) & \tfrac{\nabla^2}{2} + \mu_b - \tfrac{n^2}{2r^2} \end{pmatrix}
\begin{pmatrix} U(r) \\ V^*(r) \end{pmatrix}
= E \begin{pmatrix} U(r) \\ V^*(r) \end{pmatrix}
\tag{0.11}
\]

So, to introduce the vortex pairing field, additional centrifugal terms (±n²/2r²) can be added to the diagonal of the BdG matrix. In the DVR basis, we assume the system is fully spherically symmetric (in 2D, 3D, or even more general cases); as a result, the function f(r, θ) in general reduces to f(r).

B. Matrix Representation of Kinetic Operator

In order to simulate an inhomogeneous system using BCS theory, we take an n-dimensional box with side lengths R × R × ... × R (n factors); each side is discretized into N points to form an N × N × ... × N grid. Whatever the dimensionality, the kinetic operator must be expressed in this grid configuration. Since the kinetic operator T computes the second-order derivative of the wavefunction represented on the grid, an accurate matrix representation of the kinetic operator must be obtained. The derivation here is based on a two-dimensional system, but it generalizes straightforwardly to arbitrary dimension.

B..1 Preliminary Theory

Start with the Schrödinger equation for a two-dimensional system:

H(x, y)ψ(x, y) = Eψ(x, y)    (0.12)


The wavefunction on a two-dimensional grid can be represented as a 2D matrix:

\[
\psi = \begin{pmatrix}
\psi_{1,1} & \psi_{1,2} & \dots & \psi_{1,N} \\
\psi_{2,1} & \psi_{2,2} & \dots & \psi_{2,N} \\
\vdots \\
\psi_{N,1} & \psi_{N,2} & \dots & \psi_{N,N}
\end{pmatrix}
\tag{0.13}
\]

Then, for any operator O, its matrix elements are defined as:

\[
O_{m,n} = \langle\psi_m|O|\psi_n\rangle = \int dx\,dy\,\psi_m^*(x,y)\,O\,\psi_n(x,y)
\tag{0.14}
\]

Assuming the potential operator O depends only on the spatial coordinates x, y, its matrix form on the grid can be obtained:

\[
O_{m,n} = \int dx\,dy\,\psi_m^*(x,y)\,O(x,y)\,\psi_n(x,y)
= \Delta x\,\Delta y \sum_{x,y} \psi_m^*(x,y)\,O(x,y)\,\psi_n(x,y)
\tag{0.15}
\]

i.e., such a diagonal (local) operator acts on the grid simply as pointwise multiplication by O(x, y). Here ∆x, ∆y are the lattice spacings.


Fourier integral to Fourier series

Similar to the one-dimensional DFT, a two-dimensional Fourier integral can be discretized into its Fourier series for a 2D lattice simulation.

\[
\mathrm{DFT}[\psi(x)] = \int dx\, e^{-ikx}\psi(x) = \frac{L}{N}\sum_n e^{-ikx_n}\psi(x_n),
\qquad
\mathrm{DFT}^{-1}(\psi_k) = \int \frac{dk}{2\pi}\, e^{ikx}\psi_k = \frac{1}{L}\sum_m e^{ik_mx}\psi_{k_m}.
\tag{0.16}
\]
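The discretization in eq. (0.16) is easy to verify against an analytically known transform. A sketch using a Gaussian, for which ∫ e^{−ikx} e^{−x²/2} dx = √(2π) e^{−k²/2} (grid sizes are illustrative):

```python
import numpy as np

N, L = 256, 40.0
dx = L / N
x = np.arange(N) * dx - L / 2
psi = np.exp(-x**2 / 2)

k = 1.5                                          # an arbitrary test wavenumber
# Fourier series approximation (L/N) * sum_n e^{-i k x_n} psi(x_n), as in eq. (0.16)
series = (L / N) * np.sum(np.exp(-1j * k * x) * psi)
exact = np.sqrt(2 * np.pi) * np.exp(-k**2 / 2)   # analytic Fourier integral
err = abs(series - exact)
```

For a smooth, well-localized function the sum reproduces the integral to near machine precision.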

We can extend these relations to the 2D case without difficulty:

\[
\mathrm{DFT}[\psi(x,y)] = \int dx\,dy\, e^{-i(k_xx+k_yy)}\psi(x,y)
= \frac{L_xL_y}{N_xN_y}\sum_{m,n} e^{-i(k_xx_m+k_yy_n)}\psi(x_m,y_n)
\tag{0.17}
\]

\[
\mathrm{DFT}^{-1}[\psi(k_x,k_y)] = \int \frac{dk_x\,dk_y}{4\pi^2}\, e^{i(k_xx+k_yy)}\psi(k_x,k_y)
= \frac{1}{L_xL_y}\sum_{m,n} e^{i(k_mx+k_ny)}\psi(k_m,k_n)
\tag{0.18}
\]

Kinetic Operator

For the kinetic operator, we need the Fourier transform to derive its matrix representation. The kinetic term for a 2D system is:

\[
T = -\frac{\hbar^2}{2m}\left[\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right]
\tag{0.19}
\]

Applying it to a wavefunction ψ(x, y) yields another function φ(x, y):

\[
\phi(x,y) = T\psi(x,y) = -\frac{\hbar^2}{2m}\left[\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right]\psi(x,y)
\tag{0.20}
\]


To be explicit, invoke the Fourier transform:

\[
\psi(x,y) = \frac{1}{4\pi^2}\int dk_x\,dk_y\,\psi(k_x,k_y)\,e^{-i(k_xx+k_yy)}
\tag{0.21}
\]

where ψ(kx, ky) is the Fourier transform of ψ in momentum space:

\[
\psi(k_x,k_y) = \int dx\,dy\,\psi(x,y)\,e^{i(k_xx+k_yy)}
\tag{0.22}
\]

If ψ(x, y) is represented as in eq. (0.21), we can eliminate the second-order partial derivatives:

\[
\begin{aligned}
\phi(x,y) &= T\psi(x,y) = -\frac{\hbar^2}{2m}\left[\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right]\psi(x,y) \\
&= \frac{\hbar^2}{2m}\frac{1}{4\pi^2}\int dk_x\,dk_y\,\left[k_x^2+k_y^2\right]\psi(k_x,k_y)\,e^{-i(k_xx+k_yy)} \\
&= \frac{\hbar^2}{2m}\frac{1}{4\pi^2}\int dk_x\,dk_y\,dx'\,dy'\,\left[k_x^2+k_y^2\right]\psi(x',y')\,e^{-i(k_xx+k_yy-k_xx'-k_yy')}
\end{aligned}
\tag{0.23}
\]

Transforming the result of the last step to summation form (∫dkx dky → (4π²/LxLy)Σ and ∫dx′dy′ → (LxLy/NxNy)Σ, so the 4π² cancels) yields:

\[
\phi(x,y) = \frac{\hbar^2}{2m}\frac{1}{N_xN_y}\sum_{k,l,m,n}\left[k_{x_k}^2+k_{y_l}^2\right]\psi(x'_m,y'_n)\,e^{-i\left[k_{x_k}(x-x'_m)+k_{y_l}(y-y'_n)\right]}
= \sum_{k_x,k_y,m,n} f(k_x,k_y)\,\psi(x'_m,y'_n)\,e^{-i\left[k_x(x-x'_m)+k_y(y-y'_n)\right]}
\]

where

\[
f(k_x,k_y) = \frac{\hbar^2}{2m}\frac{1}{N_xN_y}\left(k_x^2+k_y^2\right)
\tag{0.24}
\]

Now, to present this in matrix or tensor form, we need to represent the LHS as a matrix:

\[
\phi(x,y) = \sum_{k_x,k_y}\left(\Phi(k_x,k_y,x'_1) + \Phi(k_x,k_y,x'_2) + \dots + \Phi(k_x,k_y,x'_{N_x})\right)
\tag{0.25}
\]


where

\[
\Phi(k_x,k_y,x'_n) = f(k_x,k_y)\,e^{-i(k_xx+k_yy)}\left[\psi(x'_n,y'_1)e^{i(k_xx'_n+k_yy'_1)} + \dots + \psi(x'_n,y'_{N_y})e^{i(k_xx'_n+k_yy'_{N_y})}\right]
\tag{0.26}
\]

The terms in the brackets can be rearranged as:

\[
\begin{aligned}
&e^{ik_xx'_1}\left[\psi(x'_1,y'_1)e^{ik_yy'_1} + \psi(x'_1,y'_2)e^{ik_yy'_2} + \dots + \psi(x'_1,y'_{N_y})e^{ik_yy'_{N_y}}\right] \\
+\;&e^{ik_xx'_2}\left[\psi(x'_2,y'_1)e^{ik_yy'_1} + \psi(x'_2,y'_2)e^{ik_yy'_2} + \dots + \psi(x'_2,y'_{N_y})e^{ik_yy'_{N_y}}\right] \\
&\;\vdots \\
+\;&e^{ik_xx'_{N_x}}\left[\psi(x'_{N_x},y'_1)e^{ik_yy'_1} + \psi(x'_{N_x},y'_2)e^{ik_yy'_2} + \dots + \psi(x'_{N_x},y'_{N_y})e^{ik_yy'_{N_y}}\right]
\end{aligned}
\]

The above results can be put into matrix representation; define:

\[
M(k_x,k_y) = \begin{pmatrix} e^{ik_xx'_1} & e^{ik_xx'_2} & \dots & e^{ik_xx'_{N_x}} \end{pmatrix}
\begin{pmatrix}
\psi(x'_1,y'_1) & \psi(x'_1,y'_2) & \dots & \psi(x'_1,y'_{N_y}) \\
\psi(x'_2,y'_1) & \psi(x'_2,y'_2) & \dots & \psi(x'_2,y'_{N_y}) \\
\vdots \\
\psi(x'_{N_x},y'_1) & \psi(x'_{N_x},y'_2) & \dots & \psi(x'_{N_x},y'_{N_y})
\end{pmatrix}
\begin{pmatrix} e^{ik_yy'_1} \\ e^{ik_yy'_2} \\ \vdots \\ e^{ik_yy'_{N_y}} \end{pmatrix}
\tag{0.27}
\]


Then φ(x, y) can be written as:

\[
\begin{aligned}
\phi(x,y) &= \sum_{k_x,k_y} f(k_x,k_y)\,M(k_x,k_y)\,e^{-i(k_xx+k_yy)} \\
&= \begin{pmatrix} e^{-ik_{x_1}x} & e^{-ik_{x_2}x} & \dots & e^{-ik_{x_{N_x}}x} \end{pmatrix}
\begin{pmatrix}
M_f(k_{x_1},k_{y_1}) & \dots & M_f(k_{x_1},k_{y_{N_y}}) \\
M_f(k_{x_2},k_{y_1}) & \dots & M_f(k_{x_2},k_{y_{N_y}}) \\
\vdots \\
M_f(k_{x_{N_x}},k_{y_1}) & \dots & M_f(k_{x_{N_x}},k_{y_{N_y}})
\end{pmatrix}
\begin{pmatrix} e^{-ik_{y_1}y} \\ e^{-ik_{y_2}y} \\ \vdots \\ e^{-ik_{y_{N_y}}y} \end{pmatrix}
\end{aligned}
\tag{0.28}
\]

where M_f is the element-wise product of two matrices of the same dimensionality N_x × N_y.

If the indices x, y run over all values, we get φ(x, y) on the grid:

[φ] = U_x^T M_f U_y    (0.29)

where

\[
U_x = \begin{pmatrix}
e^{-ik_{x_1}x_1} & e^{-ik_{x_1}x_2} & \dots & e^{-ik_{x_1}x_{N_x}} \\
e^{-ik_{x_2}x_1} & e^{-ik_{x_2}x_2} & \dots & e^{-ik_{x_2}x_{N_x}} \\
\vdots \\
e^{-ik_{x_{N_x}}x_1} & e^{-ik_{x_{N_x}}x_2} & \dots & e^{-ik_{x_{N_x}}x_{N_x}}
\end{pmatrix}
\tag{0.30}
\]

\[
U_y = \begin{pmatrix}
e^{-ik_{y_1}y_1} & e^{-ik_{y_1}y_2} & \dots & e^{-ik_{y_1}y_{N_y}} \\
e^{-ik_{y_2}y_1} & e^{-ik_{y_2}y_2} & \dots & e^{-ik_{y_2}y_{N_y}} \\
\vdots \\
e^{-ik_{y_{N_y}}y_1} & e^{-ik_{y_{N_y}}y_2} & \dots & e^{-ik_{y_{N_y}}y_{N_y}}
\end{pmatrix}
\tag{0.31}
\]

and

\[
[f] = \frac{\hbar^2}{2m}\frac{1}{N_xN_y}
\begin{pmatrix}
k_{x_1}^2+k_{y_1}^2 & k_{x_1}^2+k_{y_2}^2 & \dots & k_{x_1}^2+k_{y_{N_y}}^2 \\
k_{x_2}^2+k_{y_1}^2 & k_{x_2}^2+k_{y_2}^2 & \dots & k_{x_2}^2+k_{y_{N_y}}^2 \\
\vdots \\
k_{x_{N_x}}^2+k_{y_1}^2 & k_{x_{N_x}}^2+k_{y_2}^2 & \dots & k_{x_{N_x}}^2+k_{y_{N_y}}^2
\end{pmatrix}
\tag{0.32}
\]

where [...] denotes a matrix representation; then:

[M] = U_x^* [ψ] U_y^†    (0.33)

Since M_f is the element-wise product of two matrices, we can factor out [ψ]:

M_f = [f] ∘ U_x^* [ψ] U_y^† = U_x^* [f] U_y^† [ψ]    (0.34)

Then [φ] can be written as:

[φ] = U_x^T U_x^* [f] U_y^† U_y [ψ]    (0.35)

Or the kinetic matrix can be written as:

[T] = U_x^T U_x^* [f] U_y^† U_y    (0.36)


Let us examine eq. (0.26):

\[
\begin{aligned}
&e^{ik_xx'_1}\left[\psi(x'_1,y'_1)e^{ik_yy'_1} + \psi(x'_1,y'_2)e^{ik_yy'_2} + \dots + \psi(x'_1,y'_{N_y})e^{ik_yy'_{N_y}}\right] \\
+\;&e^{ik_xx'_2}\left[\psi(x'_2,y'_1)e^{ik_yy'_1} + \psi(x'_2,y'_2)e^{ik_yy'_2} + \dots + \psi(x'_2,y'_{N_y})e^{ik_yy'_{N_y}}\right] \\
&\;\vdots \\
+\;&e^{ik_xx'_{N_x}}\left[\psi(x'_{N_x},y'_1)e^{ik_yy'_1} + \psi(x'_{N_x},y'_2)e^{ik_yy'_2} + \dots + \psi(x'_{N_x},y'_{N_y})e^{ik_yy'_{N_y}}\right]
\end{aligned}
\tag{0.37}
\]

Treating e^{i(k_xx'_m + k_yy'_n)} and ψ as one-dimensional vectors of size N_x × N_y, eq. (0.26) can be rewritten as:

\[
M(k_x,k_y)\psi = \begin{pmatrix} e^{i(k_xx_1+k_yy_1)} & e^{i(k_xx_1+k_yy_2)} & \dots & e^{i(k_xx_{N_x}+k_yy_{N_y})} \end{pmatrix}
\begin{pmatrix} \psi_{1,1} \\ \psi_{1,2} \\ \vdots \\ \psi_{1,N_y} \\ \psi_{2,1} \\ \vdots \\ \psi_{2,N_y} \\ \vdots \\ \psi_{N_x,N_y} \end{pmatrix}
= \sum_n^{N} U(k_x,k_y)_n\,\psi_n
\tag{0.38}
\]

Define

\[
M'(k_x,k_y) = f(k_x,k_y)\sum_n^{N} U(k_x,k_y)_n\,\psi_n
\tag{0.39}
\]


where N = N_x × N_y; then eq. (0.25) can be rewritten as:

\[
\phi(x,y) = \sum_{k_x,k_y} M'(k_x,k_y)\,e^{-i(k_xx+k_yy)}
= \begin{pmatrix} e^{-i(k_{x_1}x+k_{y_1}y)} & e^{-i(k_{x_1}x+k_{y_2}y)} & \dots & e^{-i(k_{x_{N_x}}x+k_{y_{N_y}}y)} \end{pmatrix}
\begin{pmatrix} M'_{1,1} \\ \vdots \\ M'_{1,N_y} \\ M'_{2,1} \\ \vdots \\ M'_{2,N_y} \\ \vdots \\ M'_{N_x,N_y} \end{pmatrix}
\tag{0.40}
\]

Now it is much clearer that, if we run x, y over all pairs of values, we get:

[φ] = U M′    (0.41)

where

\[
U = \begin{pmatrix}
e^{-i(k_{x_1}x_1+k_{y_1}y_1)} & e^{-i(k_{x_1}x_1+k_{y_2}y_1)} & \dots & e^{-i(k_{x_{N_x}}x_1+k_{y_{N_y}}y_1)} \\
e^{-i(k_{x_1}x_1+k_{y_1}y_2)} & e^{-i(k_{x_1}x_1+k_{y_2}y_2)} & \dots & e^{-i(k_{x_{N_x}}x_1+k_{y_{N_y}}y_2)} \\
\vdots \\
e^{-i(k_{x_1}x_{N_x}+k_{y_1}y_{N_y})} & e^{-i(k_{x_1}x_{N_x}+k_{y_2}y_{N_y})} & \dots & e^{-i(k_{x_{N_x}}x_{N_x}+k_{y_{N_y}}y_{N_y})}
\end{pmatrix}
\tag{0.42}
\]


With careful examination, it is not hard to find:

[φ] = U M′ = U [f] U† [ψ]    (0.43)

which implies:

[T] = U [f] U†    (0.44)

In a numerical simulation, the kinetic matrix in a plane-wave basis can be computed using this method. One merit of using the Fourier transform as shown above is that it produces an accurate representation, since it takes all grid points into account when computing the derivative terms.
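As a sanity check of the structure [T] = U [f] U†, the 1D analogue can be built as a dense matrix from DFT phases and tested on a plane wave, which it must map to (ħ²k²/2m) times itself. A sketch with ħ = m = 1 (grid sizes are illustrative):

```python
import numpy as np

N, L = 64, 16.0
dx = L / N
x = np.arange(N) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

U = np.exp(-1j * np.outer(k, x)) / np.sqrt(N)    # unitary DFT matrix
f = np.diag(0.5 * k**2)                          # kinetic symbol hbar^2 k^2 / 2m
T = U.conj().T @ f @ U                           # dense kinetic matrix [T] = U^dag f U

k0 = k[3]                                        # a lattice-commensurate wavenumber
plane = np.exp(1j * k0 * x)
err = np.max(np.abs(T @ plane - 0.5 * k0**2 * plane))
```

Because all grid points enter through the DFT phases, the matrix differentiates lattice-commensurate plane waves to machine precision, unlike low-order finite differences.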

C. 2D Harmonic Oscillator

C..1 Introduction

The two-dimensional harmonic oscillator is a solvable and instructive system. First, it is simple enough for a graduate student to play with to gain knowledge of the essential properties of multi-dimensional systems, such as the angular momentum and degeneracy of a quantum system. Second, since this system can be solved analytically in both Cartesian and cylindrical coordinates, it serves as a good example connecting quantum properties in these two coordinate systems, such as how the angular momentum in the cylindrical case is related to the mathematical form of the wavefunction in Cartesian coordinates. Third, as in cylindrical coordinates, a DVR basis can be used to expand the phase space; numerical results in Cartesian and cylindrical coordinates can then serve as benchmarks for the case where a Bessel DVR basis is used.

Cartesian Coordinates

Let the angular frequencies of the harmonic oscillator in two dimensions be ωx and ωy; then the Schrödinger equation can be written as (ħ is set to 1):

−(∂²ψ/∂x² + ∂²ψ/∂y²) + (ωx²x² + ωy²y²)ψ = 2Eψ    (0.45)

Since there is no coupling between the x and y components of the system, we can safely assume the wavefunction ψ is separable, i.e., ψ(x, y) = X(x)Y(y); substituting into the Schrödinger equation yields:

(−(1/X) d²X/dx² + ωx²x²) + (−(1/Y) d²Y/dy² + ωy²y²) = 2E  (0.46)

The total energy E = Ex+Ey, so we can split the above equation into two equations:

−d²X/dx² + ωx²x²X = 2ExX
−d²Y/dy² + ωy²y²Y = 2EyY  (0.47)

It is not hard to see that the above two equations can be regarded as separate 1D harmonic-oscillator equations in the x and y directions; both have the same energy spectra25:

Ex = 1/2, 3/2, …, (2n+1)/2,  n = 0, 1, 2, …
Ey = 1/2, 3/2, …, (2n+1)/2,  n = 0, 1, 2, …  (0.48)

The total energy of the 2D system is the sum of these two independent 1D systems. If ωx = ωy = 1, the eigenvalue E (in units of ℏω) equals its own degeneracy; i.e., if E = N, then N different wavefunctions of the 2D system yield the same energy E = N. To be more specific, the spectrum of the system is:

E = 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, …  (0.49)
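This degeneracy pattern is easy to verify by direct enumeration of E = nx + ny + 1; a quick sketch:

```python
from collections import Counter

# 2D isotropic oscillator: E = nx + ny + 1 (in units of hbar*omega);
# the level E = N should appear with degeneracy N.
levels = sorted(nx + ny + 1 for nx in range(10) for ny in range(10)
                if nx + ny < 10)
print(levels[:10])                  # [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
degeneracy = Counter(levels)
assert all(degeneracy[E] == E for E in range(1, 11))
```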

Angular momentum

The three lowest-energy wavefunctions for a 1D harmonic oscillator are (the Cs are normalization constants):

u0(x) = (mω/πℏ)^{1/4} e^{−mωx²/2ℏ}    (ground state)
u1(x) = (mω/πℏ)^{1/4} √(2mω/ℏ) x e^{−mωx²/2ℏ}    (first excited state)
u2(x) = C (1 − 2mωx²/ℏ) e^{−mωx²/2ℏ}    (second excited state)    (0.50)

For n ≥ 3:

un(x) = Σ_{k=0}^{n} ak y^k e^{−y²/2}  (0.51)

25With a factor of ℏωx and ℏωy, respectively.


where

a_{k+2} = 2(k − n) / ((k + 1)(k + 2)) · ak,    y = √(mω/ℏ) x  (0.52)

We can now discuss the angular momentum from the perspective of the spectrum. For example, when E = 1, the only possible combination is Ex = Ey = 1/2; since the two components have the same energy and are both in the ground state, there should be no "oscillation" in either the x or y direction (as E = (1/2 + n)ℏω with n = 0 in this case, the ground state is a Gaussian without a spatial prefactor), so the overall motion of the system has no angular momentum. However, for E = 2 we need the first excited state in 1D: either nx = 1, ny = 0, or nx = 0, ny = 1. The corresponding wavefunctions are:

ψ1(x, y) = C0 x e^{−mω(x²+y²)/2ℏ}
ψ2(x, y) = C0 y e^{−mω(x²+y²)/2ℏ}  (0.53)

Any linear combination of these two wavefunctions is also an eigenfunction of the harmonic-oscillator Hamiltonian. We can construct two wavefunctions as follows:

φ1(x, y) = C1 (x + iy) e^{−mω(x²+y²)/2ℏ}
φ2(x, y) = C1 (x − iy) e^{−mω(x²+y²)/2ℏ}  (0.54)

These two wavefunctions can be expressed in polar coordinates as:

φ1(r, θ) = C1 r e^{iθ} e^{−mωr²/2ℏ}
φ2(r, θ) = C1 r e^{−iθ} e^{−mωr²/2ℏ}  (0.55)


From these two wavefunctions it is clear that there are two modes with angular momentum quantum number equal to one.

For the E = 3 excited states, we will need the second excited state of the 1D wavefunction, since the possible combinations of the x, y components are (nx = 0, ny = 2), (nx = 1, ny = 1), and (nx = 2, ny = 0).

The overall wavefunctions for these cases are:

ψ1(x, y) = C1 (1 − 2mωx²/ℏ) e^{−mω(x²+y²)/2ℏ}
ψ2(x, y) = C2 xy e^{−mω(x²+y²)/2ℏ}
ψ3(x, y) = C1 (1 − 2mωy²/ℏ) e^{−mω(x²+y²)/2ℏ}  (0.56)

Similar to the E = 2 case, we can construct three orthogonal wavefunctions (with different angular momentum modes) by linear combination of these degenerate states.

For l = 0 we can find:

φ0(x, y) = [C1 + C2 (x² + y²)] e^{−mω(x²+y²)/2ℏ}  (0.57)

or, in polar coordinates:

φ0(r, θ) = [C1 + C2 r²] e^{−mωr²/2ℏ}  (0.58)

For l = 1, there is no way to construct a wavefunction as in the E = 2 case that yields the desired angular momentum, since all the polynomial prefactors are quadratic.


For l = 2:

φ±(x, y) = C3 (x ± iy)² e^{−mω(x²+y²)/2ℏ}  (0.59)

Rewriting in polar coordinates:

φ±(r, θ) = C3 r² e^{±i2θ} e^{−mωr²/2ℏ}  (0.60)

It can be seen that φ0(r, θ) has no angular momentum, while φ±(r, θ) have angular momentum quantum number |l| = 2.

Based on the above discussion, a pattern emerges: for E = N, if N is odd, l = 0 is non-degenerate, while all other allowed values of l are even integers (sharing the same parity) and are doubly degenerate. If N is even, all allowed values of l are odd integers and are doubly degenerate.

Cylindrical Coordinates

To understand angular momentum better, it is convenient to present the Schrödinger equation in 2D polar coordinates (cylindrical coordinates), using the following relations:

r² = x² + y²  (0.61)
tan(θ) = y/x  (0.62)


Let us set ωx = ωy = 1 for the rest of the discussion. Then the Schrödinger equation in polar coordinates can be written as:

−∂²ψ/∂r² − (1/r) ∂ψ/∂r − (1/r²) ∂²ψ/∂θ² + r²ψ = 2Eψ  (0.63)

The energy spectrum should be the same as that in Cartesian coordinates, i.e., all

values of E are integers with degeneracy equal to E in 2D cases.

The radial wavefunctions of a harmonic oscillator in polar coordinates are of the form:

φ(r) = C R(r) = C P(r) e^{−r²/2}  (0.64)

where P(r) is a polynomial in r, and C is a normalization factor.

Taking the angular component into consideration, the full wavefunction in 2D polar coordinates can be written as:

⟨r, θ|ψ⟩ = ψ(r, θ) = φ(r) e^{ilθ}  (0.65)

where l is the angular momentum quantum number. The physical way to normalize

a single particle wavefunction is:

⟨ψ|ψ⟩ = 1 = ∫ dr dθ ⟨ψ|r, θ⟩⟨r, θ|ψ⟩
= ∫_{r=0}^{∞} ∫_{θ=0}^{2π} φ*(r) φ(r) r dr dθ
= 2π ∫_{r=0}^{∞} φ*(r) φ(r) r dr  (0.66)

Here the radial wavefunctions for the first four values of E are given as follows:


• For the ground state, E = 1 and P(r) = 1; then:

2π ∫_0^∞ R(r)² r dr = 2π ∫_0^∞ r e^{−r²} dr = π,  C = √π  (0.67)

• For E = 2, there are two degenerate states, with l = 1 and l = −1, and P(r) = r:

2π ∫_0^∞ r R(r)² dr = 2π ∫_0^∞ r³ e^{−r²} dr = π,  C = √π  (0.68)

• For E = 3, there are two different P(r), corresponding to l = ±2 and l = 0:

2π ∫_0^∞ r R(r)² dr = 2π,  for P(r) = r²,  C = √(2π)
2π ∫_0^∞ r R(r)² dr = π,  for P(r) = r² − 1,  C = √π  (0.69)

• For E = 4, there are also two different P(r) (r³ for l = ±3, and r³ − r/2 for l = ±1):

2π ∫_0^∞ r R(r)² dr = 6π,  for P(r) = r³,  C = √(6π)
2π ∫_0^∞ r R(r)² dr = 17π/4,  for P(r) = r³ − r/2,  C = √(17π/4)  (0.70)
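These normalization integrals are easy to check numerically. A minimal sketch using a simple trapezoidal quadrature (the grid range and resolution are illustrative choices):

```python
import numpy as np

# Check 2*pi * Int_0^inf r P(r)^2 exp(-r^2) dr for the radial polynomials above.
r = np.linspace(0.0, 12.0, 200001)

def norm_sq(P):
    y = 2 * np.pi * r * P(r)**2 * np.exp(-r**2)
    return ((y[:-1] + y[1:]) / 2 * np.diff(r)).sum()   # trapezoidal rule

assert np.isclose(norm_sq(lambda r: 1.0 + 0 * r), np.pi)   # E = 1
assert np.isclose(norm_sq(lambda r: r), np.pi)             # E = 2, l = +-1
assert np.isclose(norm_sq(lambda r: r**2), 2 * np.pi)      # E = 3, l = +-2
assert np.isclose(norm_sq(lambda r: r**2 - 1), np.pi)      # E = 3, l = 0
assert np.isclose(norm_sq(lambda r: r**3), 6 * np.pi)      # E = 4, l = +-3
```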

D. DVR Basis Tutorial

Introduction

There are many ways to represent a function f . For example, we can expand

the function in terms of sine and cosine functions, which is just the Fourier series


representation of the function, i.e.:

f(x) = Σ_{n=0}^{∞} [ an sin(2πnx/L) + bn cos(2πnx/L) ]  (0.71)

where L is the range where the function f(x) is defined. The continuous version is

the Fourier transform of the function:

f(x) = (1/√(2π)) ∫_{−∞}^{∞} f̃(k) e^{ikx} dk  (0.72)

In the Fourier series representation, the functions sin(2πnx/L) and cos(2πnx/L) are the basis functions used to expand the function f(x), and together all the sine and cosine functions form a basis set. These functions are orthogonal and complete in the space S where the function f(x) lives. In principle we can pick any basis set, as long as f(x) can be expressed accurately in that basis. In general, the number of basis functions can be infinite. However, under some conditions, a finite number of basis functions can express a function in that space to within a desired accuracy. The method of expressing a function using a finite basis set is called the finite basis representation (FBR).

Spectrum Representation

Given a set of mutually orthonormal basis functions {φi(x)}, with ⟨φi|φj⟩ = δij, spanning a Hilbert space S, any function in that space can be expressed exactly in terms of this basis set, while functions with negligible value outside that space can be approximated.


In quantum physics, we may express any operator O as:

O = Σ_{i,j} |φi⟩ Oij ⟨φj|  (0.73)

where the matrix elements of the operator O are represented in Dirac notation as:

Oij = ⟨φi|O|φj⟩  (0.74)

If |φi⟩ and |φj⟩ are the ith and jth basis states, and ⟨x|φi⟩ = ψi(x) is the ith basis function, then each matrix element can be computed:

Oij = ∫∫ ⟨φi|x⟩ ⟨x|O|y⟩ ⟨y|φj⟩ dx dy = ∫∫ ψi*(x) ⟨x|O|y⟩ ψj(y) dx dy  (0.75)

where ⟨x|O|y⟩ is the matrix element of the operator in real space. For example, if O is the external potential operator, in real space it is just V(x); then:

Vij = ⟨φi|V|φj⟩ = ∫ ψi*(x) V(x) ψj(x) dx  (0.76)

Each matrix element thus requires a multi-dimensional integration. If the matrix representation of the potential operator has size n × n, then n(n + 1)/2 such multi-dimensional integrals are required. This is computationally expensive and is one of the drawbacks of spectral methods.

Grid Representation

In practical numerical calculations, the function f(x) is often represented as a vector [f(x0), f(x1), …, f(xn)], where x0, x1, … are equally spaced grid points in the range where the function is defined, with grid spacing ∆x. In a grid representation, unlike a spectral representation, the operation of a potential acting on a wavefunction can be evaluated straightforwardly:

V f → [V(x0)f(x0), V(x1)f(x1), …, V(xn)f(xn)]  (0.77)

The kinetic-operator matrix can be computed efficiently in several ways. One way is to use the Fourier-transform (FT) method of appendix B.; its merit is that it is highly accurate. A more straightforward way to evaluate the kinetic operator in a grid representation is to use the finite-difference relation:

∂²f(x)/∂x² |_{x=xm} ≈ (f(x_{m+1}) + f(x_{m−1}) − 2f(xm)) / ∆x²  (0.78)

Then the kinetic-operator matrix is tri-diagonal:

T =
( −2   1   0   0  …   0
   1  −2   1   0  …   0
   0   1  −2   1  …   0
   ⋮    ⋮    ⋮    ⋮       ⋮
   0   0   0  …   1  −2 )  (0.79)

This matrix is much simpler than the one computed from the FT method, but its accuracy may be far from ideal, since it uses only the two neighboring points to compute the second derivative, while the FT method takes all points into account.
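The accuracy difference is easy to see numerically. The following sketch compares the second derivative of a smooth periodic test function computed with the tridiagonal stencil (with periodic wrap-around) against the FFT-based spectral derivative (grid parameters are illustrative choices):

```python
import numpy as np

N, L = 64, 2 * np.pi
dx = L / N
x = np.arange(N) * dx
f = np.exp(np.sin(x))                           # smooth periodic test function
exact = (np.cos(x)**2 - np.sin(x)) * f          # analytic second derivative

# 3-point finite-difference Laplacian with periodic wrap-around
T = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
     + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / dx**2
err_fd = np.max(np.abs(T @ f - exact))

# Spectral second derivative: multiply by (ik)^2 in momentum space
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
err_sp = np.max(np.abs(np.fft.ifft(-k**2 * np.fft.fft(f)).real - exact))

print(err_fd, err_sp)    # the spectral error is smaller by many orders of magnitude
```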


Discrete Variable Representation

The discrete variable representation (DVR) combines aspects of the spectral and grid representations. The basic ideas of the DVR were introduced back in the 1960s [142–144]. It became well known during the 1980s after a series of publications by Light and others on the application of the DVR in chemical physics [145–148]. More recent leading work on developing general DVR methods has come from Littlejohn, Cargo, and collaborators [149–151].

To define a DVR basis, let H be the Hilbert space, and let P be the projection operator that sends H to a subspace S, i.e.,

S = PH  (0.80)

The subspace is the practical space we are going to study; to put it another way, it is the space in which we approximate H. For a more formal discussion, see [149].

Let M be the configuration space of H; for example, for a system without spin, the configuration space in 3D is just R³. In the configuration space, define a set of grid points (or abscissae) {xi}, i = 1, …, N. This is similar to the grid representation, where {xi} may be the collection of all grid points representing a function g(x) over the space M. In the DVR case, however, these grid points do not have to be equally spaced. The grid-point set and the projector P define a DVR set if they satisfy the following properties:

|∆α⟩ = P|xα⟩,  α = 1, 2, …, N  (0.81)

where ⟨∆α|∆β⟩ = Wα δαβ, with Wα > 0. This is the orthogonality property of the vector set {|∆α⟩}. Another property requires {|∆α⟩} to be complete in the subspace S, meaning that any vector in S can be represented exactly using {|∆α⟩}.

Since the weight factor Wα may not necessarily equal unity, we can define Nα = Wα^{−1} and:

|Fα⟩ = √Nα |∆α⟩,  ⟨Fα|Fβ⟩ = δαβ  (0.82)

The basis-function set for the DVR on M can then be defined:

Fα(x) = ⟨x|Fα⟩,  α = 1, 2, …, N  (0.83)

Any function that lives inside the space S can be exactly expressed as:

g(x) = Σ_{i=1}^{N} ci Fi(x)  (0.84)

where the ci are the expansion coefficients. So far there is nothing special about the DVR method, and we have not explained the purpose of the grid-point set.

Recall the way we defined the basis functions. Explicitly, a basis function is:

Fi(x) = ⟨x|Fi⟩ = √Ni ⟨x|∆i⟩ = √Ni ⟨x|P|xi⟩  (0.85)


The projector P, by its definition, satisfies P† = P = P². The last line of the equation above can therefore be written as:

Fi(x) = √Ni ⟨x|P|xi⟩ = √Ni ⟨x|P²|xi⟩  (0.86)

Evaluating the basis function at all grid points:

Fi(xj) = √Ni ⟨xj|P²|xi⟩ = √Ni ⟨∆j|∆i⟩ = (1/√Ni) δij  (0.87)

This is a very interesting observation: each basis function is non-zero only at its own grid point26 (the point at which it is defined; see eq. (0.85)) and is zero at all other grid points. At points other than the grid points it is non-zero in general. This important property of the DVR method, called interpolation, is the feature that makes the calculation of the expansion coefficients much easier than in other methods. For example, one of the differences between the spectral method and the DVR method is the way the expansion coefficients are determined. Let a function f(x) be expanded in a basis (either a spectral basis or a DVR basis), with spectral basis set {φi(x)}, i = 1, …, N, and DVR basis set {Fj(x)}, j = 1, …, M. Then f(x) can be expressed as:

f(x) = Σ_{i=1}^{N} ci φi(x) = Σ_{j=1}^{M} Cj Fj(x)  (0.88)

26Note: the number of basis functions is equal to the number of grid points.


In the spectral basis, the coefficients ci are computed as:

ci = ∫_{−∞}^{∞} f(x) φi*(x) dx  (0.89)

In the DVR basis, to get the coefficients Cj we simply evaluate the function at the grid point xj:

f(xj) = Σ_{k=1}^{M} Ck Fk(xj) = Cj Fj(xj)  (0.90)

In the last equation we used the interpolation property of the DVR basis: a basis function has a non-zero value only at its own grid point. The expansion coefficients can then be computed in a very simple and straightforward way:

Cj = f(xj)/Fj(xj)  (0.91)

Compared with eq. (0.89), the determination of the expansion coefficients in a DVR is much simpler and faster, since each coefficient in the regular spectral method requires evaluating the (possibly multi-dimensional) integral in eq. (0.89).

Sinc-Function Basis

To illustrate the properties of a DVR basis, we demonstrate them here with the sinc-function basis on equally spaced abscissae {xi}, i = 1, …, n. As mentioned in the discussion above, to define a DVR basis we also need a projector P. For the sinc-function basis, we define the projector as:

P = (1/2π) ∫_{|k|<kc} |k⟩⟨k| dk  (0.92)

where k is the momentum (or wave number) and kc is a cutoff. Then:

∆n(x) = ⟨x|∆n⟩ = ∫_{−kc}^{kc} (dk/2π) e^{ik(x−xn)} = (kc/π) sinc(kc(x − xn))  (0.93)

xn = x0 + a n,  zn = kc xn = kc x0 + πn,  a = π/kc  (0.94)

Computing the weight factor for each basis function:

Nn = ⟨∆n|∆n⟩^{−1} = 1/∆n(xn) = 1/Fn²(xn) = a  (0.95)

Substituting into eq. (0.85) yields:

Fn(x) = √Nn ∆n(x) = sinc(kc(x − xn)) / √a  (0.96)

[Figure: five sinc-DVR basis functions F(x), centered at x = −2, −1, 0, 1, 2.]

Figure 0.1: Grid-point set = {−2, −1, 0, 1, 2}, kc = π. It can be seen that each basis function is localized around its own grid point.


In fig. 0.1, five basis functions are plotted in the same figure, with kc = π and a = 1. It can be seen that each F(x) is localized around its own grid point and rapidly vanishes away from it. One can also easily check that its values at the other grid points are exactly zero, while its values at points between grid points are non-zero but much smaller than the value at its own grid point.
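The interpolation and coefficient-extraction properties are easy to verify numerically; a minimal sketch (grid extent and the test function are illustrative choices):

```python
import numpy as np

kc = np.pi
a = np.pi / kc                          # grid spacing (here a = 1)
xn = np.arange(-20, 21) * a             # equally spaced abscissae

def F(m, x):
    # F_n(x) = sinc(kc (x - x_n)) / sqrt(a); note np.sinc(t) = sin(pi t)/(pi t)
    return np.sinc(kc * (x - xn[m]) / np.pi) / np.sqrt(a)

# Kronecker-delta property at the grid points: F_m(x_j) = delta_{mj} / sqrt(a)
vals = np.array([[F(m, xj) for xj in xn] for m in range(xn.size)])
assert np.allclose(vals, np.eye(xn.size) / np.sqrt(a))

# Coefficients C_n = f(x_n)/F_n(x_n) = sqrt(a) f(x_n): no integrals needed
f = lambda x: np.exp(-(x / 3)**2)       # effectively band-limited below kc
C = np.sqrt(a) * f(xn)
x = 0.37                                # an arbitrary off-grid point
approx = sum(C[m] * F(m, x) for m in range(xn.size))
assert abs(approx - f(x)) < 1e-6
```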

E. Mean Field Decoupling

In the many-body Hamiltonian, the two-body interaction term contains four operators, so it is obviously not quadratic. One method to convert it into quadratic terms is mean-field decoupling.

An arbitrary operator A can be written in the form:

A = 〈A〉+ δA (0.97)

where δA is the fluctuation around the expectation of A or 〈A〉. Then the product

of two operators can be expressed as follows:

AB = (〈A〉+ δA)(〈B〉+ δB) = 〈A〉 〈B〉+ 〈A〉 δB + δA 〈B〉+ δAδB (0.98)

In mean-field theory, we assume the fluctuations are small, so the second-order fluctuation term can be discarded. The exclusion of δAδB is the reason why this approximation is called the mean-field approach. The derivation proceeds by writing the fluctuation term as δA = A − ⟨A⟩. Applying the mean-field approximation to eq. (0.98) and substituting for the δ terms yields:

⟨A⟩⟨B⟩ + ⟨A⟩δB + δA⟨B⟩ = ⟨A⟩⟨B⟩ + ⟨A⟩(B − ⟨B⟩) + (A − ⟨A⟩)⟨B⟩ = ⟨A⟩B + A⟨B⟩ − ⟨A⟩⟨B⟩  (0.99)
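Before the δAδB term is dropped, the decoupling is an exact operator identity, which is easy to confirm numerically with random Hermitian operators and a random state (a small illustrative check):

```python
import numpy as np

# Verify AB = <A>B + A<B> - <A><B> + dA dB with dA = A - <A>, dB = B - <B>,
# where <.> is an expectation value in a random normalized state.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)); A = A + A.T        # random Hermitian operators
B = rng.normal(size=(n, n)); B = B + B.T
psi = rng.normal(size=n); psi /= np.linalg.norm(psi)

expA = psi @ A @ psi                            # <A>
expB = psi @ B @ psi                            # <B>
dA = A - expA * np.eye(n)
dB = B - expB * np.eye(n)

lhs = A @ B
rhs = expA * B + expB * A - expA * expB * np.eye(n) + dA @ dB
assert np.allclose(lhs, rhs)                    # exact before dropping dA dB
```

The mean-field approximation consists of dropping the last term, dA @ dB, which is small when the fluctuations around the expectation values are small.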

F. UV, IR Errors and Bloch Twisting

In a numerical simulation with a box and a set of grid points, the choice of box size and number of grid points is critical. The box size defines the minimum frequency, or lower bound on the momentum, in the simulation, which is called the IR limit. The grid-point spacing defines the maximum frequency, or upper bound on the momentum, and is called the UV limit. Errors due to the IR and UV limits are often called IR errors and UV errors.

F.1 UV and IR Errors

We start with a one-dimensional BCS model, which solves the problem in a 1D periodic universe. As we shall see, there are two forms of errors: UV errors resulting from a limited kmax = πN/L, and IR errors from the discrete dk = π/L. To estimate the UV errors, we consider the asymptotic forms of the pairing-field and total particle-density integrals:

δUV ∆ = (v0/2) ∫_{kmax}^{∞} (dk/π) ∆/√(ε₊² + ∆²) ≈ (v0/2) ∫_{kmax}^{∞} (dk/π) (2m∆)/(ℏ²k²) = v0m∆/(πℏ²kmax) + 2v0m²µeff∆/(3πℏ⁴kmax³),

δUV n₊ = 2 ∫_{kmax}^{∞} (dk/2π) [1 − ε₊/√(ε₊² + |∆|²)] ≈ ∫_{kmax}^{∞} (dk/π) (2m²|∆|²)/(ℏ⁴k⁴) = 2m²|∆|²/(3πℏ⁴kmax³) + 8m³µeff|∆|²/(5πℏ⁶kmax⁵)  (0.100)

The error in ∆ is larger, so we can set the lattice spacing to achieve the desired accuracy:

L/N ≲ (π²ℏ²/(v0m)) (δUV ∆)/∆  (0.101)

Estimating the IR errors is more difficult: they arise from the variation of the integrand over the range dk:

(1/dk) ∫_{−dk/2}^{dk/2} dkb { dk Σn f(kn + kb) } ≈ (1/dk) ∫_{−dk/2}^{dk/2} dkb { dk Σn [ f(kn) + kb f′(kn) + (kb²/2) f″(kn) ] } = dk Σn { f(kn) + (dk²/24) f″(kn) }  (0.102)

We thus expect the error to scale like:

δIR ∼ dk²/24 = π²/(3L²)  (0.103)


But the coefficient is difficult to calculate.

Twist-Averaged Boundary Conditions

For a many-body wavefunction with periodic boundary conditions, it is often assumed that the phase of the many-body wavefunction returns to the same value once any particle travels through the periodic boundary and back to its original position. Lin et al. [152] pointed out that such an assumption may lead to a slow-down of convergence for delocalized fermionic systems, due to shell effects in the filling of single-particle states. To alleviate the shell effect, we allow the overall many-body wavefunction to pick up a phase when a particle in the system wraps around the boundary:

Ψ(r1 + L x̂, r2, …) = e^{iθx} Ψ(r1, r2, …)  (0.104)

Generally, θx is restricted to the range:

−π < θx ≤ π  (0.105)

Then the twist average of any observable is defined as:

⟨A⟩ = (2π)^{−d} ∫_{−π}^{π} dθ ⟨ψ(R, θ)|A|ψ(R, θ)⟩  (0.106)

Numerically, we only sample some values of θ and average over the results; such a method may be good enough. One can also randomly shift the origin of the grid several times during the computation and take the average of those results.
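As a minimal illustration of twist averaging, consider free fermions in a 1D periodic box: averaging the ground-state energy over twists removes most of the shell effect and closely reproduces the continuum value (the particle number, box size, and twist grid below are illustrative choices):

```python
import numpy as np

def ground_energy(L, n_particles, theta, n_max=50):
    # Twisted momenta k = (2*pi*n + theta)/L; fill the lowest levels (hbar = m = 1)
    n = np.arange(-n_max, n_max + 1)
    eps = np.sort(0.5 * ((2 * np.pi * n + theta) / L)**2)
    return eps[:n_particles].sum()

L, n_particles = 10.0, 7
thetas = np.linspace(-np.pi, np.pi, 32, endpoint=False)
E_avg = np.mean([ground_energy(L, n_particles, t) for t in thetas])
E_periodic = ground_energy(L, n_particles, 0.0)
E_continuum = np.pi**2 * n_particles**3 / (6 * L**2)   # thermodynamic-limit value
print(E_periodic, E_avg, E_continuum)   # the twist average is much closer
```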


Example

To see how the UV errors, IR errors, and twisting affect the total error, we

perform several BCS calculations with different lattice sizes and grid points as well

as three distinct twists. The results are plotted in the following figure:

[Figure: four log-log panels of error versus N for N_twist = 1, 2, 3, and 4, each showing box sizes L = 1.0, 10.0, and 30.0 together with the theoretical expectations (dotted lines).]

Figure 0.2: The dotted lines are the theoretical expectations, while the other lines are actual numerical errors in different configurations. The number inside the parentheses is the box size (1, 10, 30). The top-left panel is the case with a single twist, the top-right panel has N_twist = 2, the bottom-left N_twist = 3, and the bottom-right N_twist = 4.

The plots in fig. 0.2 show that our estimates of the UV errors are accurate, that the UV errors in ∆ dominate, and that L ≈ 25 is required for reasonable IR convergence. The plots also show that the IR errors have a quite complicated structure (shell effects). Fortunately, we can reduce these errors by explicitly performing the Bloch (twist) averaging (see examples in [152, 153]).

Suppose we want a tolerance of δ ln ∆ < 10⁻⁴; then we must have L/N < 4.5 × 10⁻⁴. Computationally, we can conveniently work with N = 2¹⁰ = 1024, so L < 0.46.

G. Vortices in Cylindrical Coordinates

G.1 Introduction

To model a vortex in a DVR basis, we need to represent the Hamiltonian properly in a cylindrical system. As introduced in appendix A., this procedure may require performing a rotating transform. In this appendix, more specific details are presented, in a form that can be put into a numerical implementation. Let us start with the single-particle Hamiltonian:

H ψ_{n,lz}(r, θ) = E ψ_{n,lz}(r, θ)  (0.107)

where H = −ℏ²∇²/(2m) − µ.

Let the full wavefunction be ψ_{n,lz}(r, θ) = R_{n,lz}(r) e^{ilzθ}; plugging this back into the above equation yields:

E ψ_{n,lz}(r, θ) = (−(ℏ²/2m)∇² − µ) R_{n,lz}(r) e^{ilzθ}  (0.108)


In polar coordinates, the Laplacian ∇² is:

∇² = (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ²  (0.109)

The Schrödinger equation can then be rewritten as:

E ψ_{n,lz}(r, θ) = (−(ℏ²/2m)[(1/r) ∂/∂r (r ∂R_{n,lz}(r)/∂r) − (lz²/r²) R_{n,lz}(r)] − µ R_{n,lz}(r)) e^{ilzθ}  (0.110)

Let R(r) = r^{−1/2} f(r); the kinetic part can then be expressed as:

[∂²/∂r² + (1/r) ∂/∂r][r^{−1/2} f(r)]
= ∂/∂r[−(1/2) r^{−3/2} f(r) + r^{−1/2} ∂f/∂r] + (1/r)[−(1/2) r^{−3/2} f + r^{−1/2} ∂f/∂r]
= [(3/4) r^{−5/2} f(r) − (1/2) r^{−3/2} ∂f/∂r − (1/2) r^{−3/2} ∂f/∂r + r^{−1/2} ∂²f/∂r²] + [−(1/2) r^{−5/2} f(r) + r^{−3/2} ∂f/∂r]
= (1/4) r^{−5/2} f(r) + r^{−1/2} ∂²f/∂r²
= r^{−1/2} [∂²/∂r² + 1/(4r²)] f(r)  (0.111)


Combining with the angular part yields:

(−(ℏ²/2m)∇² − µ) R_{n,lz}(r) e^{ilzθ}
= (−(ℏ²/2m)[(1/r) ∂/∂r (r ∂R_{n,lz}(r)/∂r) − (lz²/r²) R_{n,lz}(r)] − µ R_{n,lz}(r)) e^{ilzθ}
= r^{−1/2} [−(ℏ²/2m)(∂²/∂r² + 1/(4r²) − lz²/r²) − µ] f(r) e^{ilzθ}
= r^{−1/2} [−(ℏ²/2m)(∂²/∂r² − (lz² − 1/4)/r²) − µ] f(r) e^{ilzθ}  (0.112)

Substituting back into the Hamiltonian gives:

[−(ℏ²/2m)(∂²/∂r² − (lz² − 1/4)/r²) − µ] f(r) = E f(r)  (0.113)

Define an effective kinetic operator K(lz) as:

K(lz) = −(ℏ²/2m)(∂²/∂r² − (lz² − 1/4)/r²)  (0.114)

Invoking the matrix representation of the BCS Hamiltonian (see also appendix A.):

( −ℏ²∇²/(2m) − µa      ∆(r) e^{iwθ}       ) ( √r u_{n,lz}(r) e^{iwθ} )
( ∆(r) e^{−iwθ}         ℏ²∇²/(2m) + µb   ) ( √r v*_{n,lz}(r)        ) e^{ilzθ}  (0.115)

where w is the winding number of the vortex. Some more algebra yields:

( K(lz + w) − µa     ∆(r)            ) Ψ_{n,lz}(r) = E_{n,lz} Ψ_{n,lz}(r)  (0.116)
( ∆(r)               −K(lz) + µb    )

where

Ψ_{n,lz}(r) = ( u_{n,lz}(r), v*_{n,lz}(r) )ᵀ  (0.117)


It can be seen that the two components now have different kinetic terms:

K(lz + w) = −(ℏ²/2m)(d²/dr² − ((lz + w)² − 1/4)/r²)  (0.118)

In a DVR representation, this will cause issues if the winding number w is odd, as it is impossible to precisely represent kinetic operators that have different angular factors27 in a single basis. However, if w is even, both kinetic operators can be precisely represented in the same basis.
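A quick numerical sanity check of the effective kinetic operator eq. (0.114) can be made with a simple finite-difference discretization on a uniform radial grid (a sketch only, not the Bessel-DVR basis used in the text; ℏ = m = 1 and grid parameters are assumptions). Adding the harmonic potential V = r²/2, the exact reduced radial function f = r^{lz+1/2} e^{−r²/2} should satisfy the eigenvalue equation with E = lz + 1:

```python
import numpy as np

def K_matrix(lz, r, dr):
    # K(lz) = -(1/2)(d^2/dr^2 - (lz^2 - 1/4)/r^2) via a 3-point stencil
    n = r.size
    lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / dr**2
    return -0.5 * (lap - np.diag((lz**2 - 0.25) / r**2))

N, R = 600, 12.0
dr = R / N
r = (np.arange(N) + 0.5) * dr            # half-offset grid avoids r = 0
lz = 2
H = K_matrix(lz, r, dr) + np.diag(0.5 * r**2)
f = r**(lz + 0.5) * np.exp(-r**2 / 2)    # exact reduced radial function
resid = H @ f - (lz + 1) * f
inner = slice(50, 300)                   # compare away from the grid endpoints
print(np.max(np.abs(resid[inner])) / np.max(np.abs(f)))   # small residual
```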

H. Digital Mirror Device Based Optical and Spatial Laser Modulator

In this appendix, a technique using a digital micromirror device as a spatial light

modulator will be presented.

H.1 Background

A laser can introduce an atom-light interaction via the dipole interaction. In cold-atom physics, lasers are widely used to manipulate atoms, e.g., for Rabi flopping, dipole

27Mathematically, we have to use a single basis to represent all operators in the calculation. If the kinetic terms have different angular momentum factors, we can only precisely describe one or the other, not both, in the same basis. For example, if lz is odd, it can be precisely represented in an odd DVR basis, but an odd DVR basis cannot represent lz + 1.


trapping, optical lattices, potential barriers, and phase masks. It is also of great interest to construct a quantum gas microscope [154] for single-site addressability, which provides a very efficient method for manipulating the quantum state of an individual atom. Many such sites can form a lattice to trap multiple atoms, and these trapped atoms may be used for quantum computing. Applications like this require very high precision in the lattice generation, which is hard to achieve if there are aberrations in the optical setup. Aberrations cause an overall distortion of the wavefront of the laser beam. One method to undo the wavefront distortion is to use a binary hologram to compensate for these imperfections [155, 156].

In cold-atom experiments, researchers may want to trap a BEC in a dipole potential to study its quantum dynamics, so the ability to modify the trap in real time and observe how the BEC evolves is desirable. However, in practice it may be very challenging to generate arbitrary dipole-potential profiles in real time with the desired spatial phase using traditional methods, such as transparencies. The advent of programmable DMD technology offers scientists a new tool for creating arbitrary dipole potentials [157–160].


H.2 Digital Micromirror Device

A DMD contains a two-dimensional array of tiny mirrors (each on the scale of several micrometers). Each mirror has a control memory bit associated with it, which enables the mirror to be turned on or off independently. When a mirror is on, it reflects light in the desired direction; when it is off, the light is reflected in another direction. A diagram of a DMD is shown in fig. 0.3. Such a device is connected to a computer that controls the state of each micromirror. A DMD contains a large number of such tiny mirrors, and users can control the pattern displayed on the DMD. An example of a pattern generated on a DMD is shown in fig. 0.4, where each diamond is a micromirror.

[Figure: the two states (0 and 1) of two micromirrors.]

Figure 0.3: Digital micromirror device: diagram of two micromirrors in different states.


[Figure: example pattern displayed on a DMD.]

Figure 0.4: One example of a pattern displayed on a DMD. The filled diamonds represent mirrors in the on state, while the empty diamonds represent mirrors in the off state.

The physical geometry of a real DMD (model DLP3000 from Texas Instruments) is shown in fig. 0.5. This is the first type of DMD we used for experimental tests. Each micromirror is about 7 µm in size and can be turned on and off at a maximum frequency of 4000 Hz. Higher-end models can provide much higher resolution and refresh rates. One application is to use a DMD to shape and steer two tight laser beams through a BEC trapped inside a harmonic trap (fig. 0.6). This can be done by updating the pattern on the DMD quickly, playing a sequence of patterns stored in the DMD controller. Since a laser beam can induce a dipole potential, the resulting effect is two dipole potentials sweeping through the BEC, and the consequent quantum dynamics can be observed.

In the above example, the direct imaging method is used to generate the dipole potential. This method is convenient for applications where the phase information is not essential. Another method is the Fourier imaging method, where the pattern on the DMD is the Fourier transform of the desired dipole-potential pattern. This method is convenient and flexible when phase information is required, such as in experiments where one needs to imprint a circular phase profile on a BEC to generate vortices or solitons [132]. In the following sections, both methods will be discussed in detail.

[Figure: DMD geometry; the micromirror array is 608 × 684 pixels.]

Figure 0.5: A toy model of DMD used for testing the idea of a spatial light modulator. Its micromirror array has dimensions of 608 × 684 pixels; each pixel is a micromirror.

H.3 Direct Imaging

The direct imaging method puts the DMD in the object plane and the BEC (or other target) in the imaging plane. The optical setup is shown in fig. 0.7. The laser beam is first expanded using a telescope (L1 and L2), then is reflected from the


[Figure: two moving potential barriers generated by a DMD.]

Figure 0.6: A DMD is programmed to display two stripes moving from left to right, which steer the reflected laser beam through a BEC where the quantum dynamics take place.

DMD to the target. In such a setup, the only thing we can do is to modulate the intensity profile of the target pattern. The beam splitter (BS1) is used to reflect a small portion of the laser into the CCD, so that the control computer is able to update the DMD pattern taking the laser intensity profile into account in real time. In an experiment, one may want to update the DMD patterns at a rate comparable to the evolution rate of the BEC (on a time scale of microseconds or less), which is typically a very high refresh rate. One can store the pattern sequence in the DMD controller and use a TTL signal to trigger the device, or use a function generator to produce periodic triggers if necessary.


[Figure: optical setup for direct imaging. A 660 nm laser is fiber-coupled (FC1, FC2), expanded by a telescope (L1, L2), reflected from the DMD, and relayed by lenses (L3–L5) onto the BEC; a beam splitter (BS1) sends a small portion of the beam to a CCD read by the control computer, and a function generator supplies the TTL trigger signal. Legend: FC = fiber coupler, M = mirror, L = lens, BS = beam splitter, DMD = digital micromirror device.]

Figure 0.7: Optical system setup for direct imaging.


H.4 Intensity Modulation

The idea behind intensity modulation using a DMD is to take a patch (a group of micromirrors) of the DMD and focus the light reflected from that patch onto a target point. Then, by switching some pixels in that patch on or off, we can control the light intensity at the target point. In this discussion, an algorithm based on some heuristic observations is proposed and implemented for the application of intensity modulation.

H.5 Pattern Generation Algorithms

If the light reflected from an m × n-pixel patch is projected to a point in the image plane, the intensity (in arbitrary units) lies in the range R ≡ [0, m × n]. For each value k ∈ R, we can simply turn on k mirrors inside that patch. For a given k, the number of configurations with k pixels on is:

N = (m × n)! / (k! (m × n − k)!)  (0.119)

However, in practice it is desirable to define an optimal configuration with preferred symmetries, based on the profile of the incident laser beam and the optical geometry. Assume the local laser power is uniform inside each patch. Then we can require that all on-pixels within a patch be arranged symmetrically with respect to the patch's center point. To achieve this, each pixel is assigned a tag number Ln, where Ln ∈ R. When an intensity number k is assigned to a patch, all pixels with tag number Ln ≤ k are turned on.

To achieve the desired symmetry, an algorithm for sorting the tag numbers Ln inside a patch is proposed. It includes a global sorting phase for all Ln, and a local sorting phase for pixels that are at the same distance from the central point. Based on this method, the desired order of the Ln within a patch can be implemented. The algorithm is outlined in table 0.1:

169

Algorithm: Intensity Modulation Pattern Generation

1. Sort the pixels, labeled with coordinates (x, y) inside a patch, by their distance to the center of that patch.

2. Pixels at the same distance from the center form a group; a local sorting method is invoked to rearrange the pixels within each group:

(a) Select the first pixel in the group as a reference. Connect the centers of all other pixels in the group to the center of the first pixel by straight lines, and measure the normal distance from the patch center to each line, denoted by d.

(b) Sort all other pixels in the group by their corresponding value of d.

3. Assign a tag number from 0 to N−1 to each pixel after sorting.

4. For a patch that needs k pixels turned on:

(a) If k is an odd number, turn on the pixels with tag numbers smaller than k.

(b) Otherwise, turn on the pixels with tag numbers from 1 to k (inclusive).

Table 0.1: Algorithm: Intensity Modulation Pattern Generation


Global Sorting

An example is shown in fig. 0.8(a): a 5 × 5 patch of 25 pixels. Label these pixels with indices row by row, from left to right, so that the top-left pixel has index 1, the top-right has index 5, and the bottom-right has index 25. Note that these indices differ from the tag numbers shown in fig. 0.8(a).

In the global sorting phase, each pixel is sorted by its distance to the center of the patch (the center of the 13th pixel). Explicitly, after this step we may get the sorted indices:

[13], [12, 14, 8, 18], [7, 9, 17, 19], [11, 3, 15, 23], [16, 2, 4, 6, 10, 20, 22, 24], [5, 1, 21, 25]

In the above list, indices are sorted in ascending order based on their distance to the

center. Pixels with the same distance are grouped together inside square brackets.

Because the 13th pixel coincides with the patch center, it appears at the very beginning of the above list.

Local Sorting

In the local sorting phase, select a group from the above list. For example, choose the 3rd group: [7, 9, 17, 19]. Following the algorithm in table 0.1, we pick the 7th pixel as the local reference and connect the centers of the other three pixels to the center of the 7th pixel with straight lines. Sorting these three pixels by the perpendicular distance from the patch center to their corresponding lines, we obtain the


rearranged list: [7, 19, 9, 17]. The 19th pixel moves to second place because the line connecting it to the 7th pixel passes through the patch center (distance 0). Repeating this procedure for the other groups yields the final order:

[13], [12, 14, 8, 18], [7, 19, 9, 17], [11, 15, 3, 23], [16, 10, 4, 20, 2, 24, 6, 22], [5, 21, 1, 25]

The last step is to assign tag numbers from 0 to 24 to these pixels in the above order, which yields the result shown in fig. 0.8(a).
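The two sorting phases above can be sketched in Python. The reference-pixel choice and any tie-breaking inside a group beyond the perpendicular-distance rule are our assumptions, so the exact tag order may differ slightly from the worked example:

```python
import numpy as np

def tag_numbers(m=5, n=5):
    """Assign tag numbers 0..m*n-1 to the pixels of an m-by-n patch:
    global sort by distance to the patch center, then a local sort of each
    equal-distance group by the perpendicular distance from the patch
    center to the line joining each pixel to the group's reference pixel."""
    cy, cx = (m - 1) / 2.0, (n - 1) / 2.0              # patch center
    dist = lambda p: np.hypot(p[0] - cy, p[1] - cx)
    pixels = sorted(((r, c) for r in range(m) for c in range(n)),
                    key=lambda p: round(dist(p), 9))   # global sorting phase
    tags, i = {}, 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and np.isclose(dist(pixels[j]), dist(pixels[i])):
            j += 1                                     # pixels i..j-1 share a distance
        ref = pixels[i]                                # local reference pixel

        def perp_d(p):
            # Perpendicular distance from the patch center to the line
            # through the reference pixel and pixel p.
            vy, vx = p[0] - ref[0], p[1] - ref[1]
            wy, wx = cy - ref[0], cx - ref[1]
            return abs(vy * wx - vx * wy) / np.hypot(vy, vx)

        for p in [ref] + sorted(pixels[i + 1:j], key=perp_d):
            tags[p] = len(tags)                        # assign tags in order
        i = j
    return tags
```

For the 5 × 5 example, the center pixel receives tag 0 and the four nearest neighbors receive tags 1 to 4, as in fig. 0.8(a).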

DMD Patch Patterns for Intensity Modulation

23 17 11 15 21
19  5  3  7 14
 9  1  0  2 10
13  8  4  6 16
22 20 12 18 24

(a)-(f)

Figure 0.8: (a) shows the tag numbers for all pixels inside a patch; pixels with the same distance to the center are filled with the same color. (b)-(f) show how pixels are turned on and off for different k values: those in purple are on, while those in green are off. The patch patterns are symmetric and compact for a given k. Such an arrangement gives the corresponding target point a better shape and the least spread.


Number of Active Pixels

The size of a patch to be focused on one target point can be determined by the

method in [159]. In an actual experiment, the light source profile should be taken into account, since the light source may not be uniform over a large region. For

any given target intensity, the number of active pixels (pixels that are on) should be

computed locally for each patch. Then the light field intensity on each patch of the

DMD and the ratio of source patch intensity to its target point intensity should be

computed. The details of the calculation are listed in table 0.2.


Algorithm: Intensity Modulation for Direct Imaging

1. For a target pattern sliced into a grid of size m×n, find the point intensity It(x, y), where (x, y) are the row and column indices.

2. Split the light field intensity on the DMD into m×n patches, each of size w×h.

3. Compute the total intensity Is(x, y) for the patch on the DMD at (x, y).

4. Compute the intensity ratio R(x, y) = It(x, y)/Is(x, y).

5. For the patches with the maximum intensity ratio Rmax, turn on all pixels in those patches.

6. For every other patch at (x, y), the number of pixels to turn on is k = R(x, y) × w × h/Rmax.

7. Use the value k and the algorithm in table 0.1 to turn on pixels in the corresponding patch.

Table 0.2: Algorithm: Intensity Modulation for Direct Imaging
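This calculation fits in a few lines of NumPy. The patch with the maximum intensity ratio receives all w × h pixels and the others a proportional count; array and function names are ours:

```python
import numpy as np

def pixels_per_patch(I_target, I_source, w, h):
    """Number of active pixels k for each patch (a sketch of table 0.2).
    I_target, I_source: m-by-n arrays of target and source patch intensities."""
    R = I_target / I_source          # intensity ratio per patch
    # Patches attaining the maximum ratio get all w*h pixels on; the rest
    # are scaled proportionally and rounded to the nearest integer.
    return np.rint(R * w * h / R.max()).astype(int)

k = pixels_per_patch(np.array([[1.0, 2.0]]), np.array([[1.0, 1.0]]), 4, 4)
# k[0, 1] == 16 (maximum-ratio patch fully on), k[0, 0] == 8
```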


H..6 Fourier Imaging

Unlike the direct imaging method, we can use a thin lens as a Fourier engineer28

to transform a DMD pattern to the target plane. The efficiency of light focused to

the target plane is not necessarily lower than in the case of direct imaging. However, it offers the possibility of modulating the phase profile at the target. The optical system

setup is shown in fig. 0.9. The actual optical setup is presented in fig. 0.10, with a

detailed description in the caption. Due to the properties of the Fourier transform,

the phase map in the target can be modulated via the pattern on the DMD, which

means it is possible to fabricate any desired phase profile. The fact that each pixel on

a DMD is binary-valued and finite-sized imposes some subtle difficulties in practical

applications. The simplest way to generate a target pattern with the desired phase

map is to set the source pattern to the inverse Fourier transform of the target. In reality, each point in the source can take only a binary value of 0 or 1, not a complex value. Despite these issues, with some relaxation of accuracy and a sufficiently high DMD resolution, it is still possible to produce a pattern good enough for actual experiments. Methods to generate patterns on a DMD

for Fourier imaging will be discussed in the remaining sections.

28A device that performs a Fourier transform.
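As a sketch of the difficulty: the inverse-Fourier-transform source pattern is complex-valued, while a DMD pixel is 0 or 1. A naive workaround is to threshold the magnitude of the inverse transform; the mean-valued threshold here is an arbitrary assumption, and discarding the amplitude and phase this way is exactly the accuracy relaxation mentioned above:

```python
import numpy as np

def naive_binary_hologram(target):
    """Inverse-Fourier-transform the target and binarize the magnitude
    for a 0/1 DMD. Discarding amplitude and phase information this way
    degrades the reconstruction; it is only a rough starting point."""
    A = np.fft.ifft2(target)
    mag = np.abs(A)
    return (mag > mag.mean()).astype(np.uint8)   # arbitrary cut at the mean
```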


Fourier Imaging Setup Geometry


Figure 0.9: The DMD and the target plane are at the focal points of the lens L3. L1 and L2 act as a telescope to expand the laser beam.


Experimental Setup for Fourier Imaging


Figure 0.10: The CCD and the DMD are located at the focal points of the thin lens; a tube (blue) shields stray light to increase the contrast on the CCD. A plastic slide is used to introduce additional distortion to the wavefront.

H..7 Phase Map Retrieval

Due to imperfections of the laser source, lenses, mirrors, fibers, and other components in an optical setup, the wavefront of a laser beam can be distorted, as shown in fig. 0.11. A perfect laser beam with a flat phase front is gradually distorted as it passes through lenses and fibers, is reflected from mirrors or a DMD, etc. For some

high precision optical applications such as a quantum gas microscope [154], the phase


imperfection must be eliminated in order to get highly ordered optical lattices.

Wavefront Distortion

Optical Setup

Figure 0.11: The wavefront of a laser beam gets distorted when passing through an optical system. The left blue box represents a uniform, flat phase map, while the right one represents the distorted phase map.

DMD Patch and Regions Configuration

Figure 0.12: The DMD surface is divided into small patches (blue and gray squares); blue ones are active patches used in the experiment. The active patches are grouped into 5 × 3 regions; the central patch of each region is the parent for that region. The central region is the root region, and its central patch is the root patch, with zero phase. The arrows connect regions to their parent regions.


To undo the wavefront distortion, the phase map of the optical system must first be retrieved. The light modulation can then be done using specific algorithms. The underlying principle of spatial light modulation can be found in chapters 2 and 3 of [156]. In our experiment, a more flexible phase retrieval method derived from [161] is used, which can be applied to a broader range of DMD sizes. The method first divides the DMD surface into patches and groups the patches into rectangular regions. Inside each region, the center patch is marked as the parent of the other patches in the same region (see table 0.3). Each patch and its parent can be programmed to form a pair of gratings (also see fig. 0.14). Their relative phase29 can be found using the method described in table 0.4.

29The relative phase between a region and its parent region is measured by putting two grating patterns on the two central patches. Inside each region, phases are measured in the same way by placing gratings on each patch and the central patch. For every measurement, all pixels are turned off except those used to generate the two gratings.


Algorithm: DMD Patch Division and Structure

1. For a given DMD area of W × H pixels, divide it into patches of size Wp × Hp.

2. Group those patches into a grid of M × N regions; the ith region has size Wi × Hi, with Wi = ni × Wp and Hi = mi × Hp, where the integers ni ≥ 3, mi ≥ 3 are the number of patches in the x and y directions for that region.

3. For the ith region, pick one patch inside it as the reference patch and set it as the parent patch for all other patches in that region, denoted P^i_ref; for convenience, the patch closest to the center of the region is chosen.

4. Pick the central region as the root region; its reference patch is the root patch, P^root_ref, which has zero relative phase, i.e., φ(P^root_ref) = 0.

5. Starting from the root region, assign its reference patch as the parent of any reference patch whose center-to-center distance is no larger than Dmax and that has no parent assigned yet; repeat from the newly assigned patches.

6. After step 5, we obtain a tree structure of patches (see fig. 0.12); each patch has a parent patch except the root patch.

Table 0.3: Algorithm: DMD Patch Division
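The parent assignment of step 5 is a breadth-first growth from the root; a minimal sketch (the list-of-centers representation and the function name are ours):

```python
from collections import deque
import math

def build_patch_tree(centers, root, d_max):
    """Assign a parent to each region's reference patch by breadth-first
    growth from the root (table 0.3, step 5). `centers` is a list of
    (x, y) reference-patch centers; returns {child_index: parent_index},
    with the root mapped to None."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j, (x, y) in enumerate(centers):
            if j not in parent and math.hypot(x - centers[i][0],
                                              y - centers[i][1]) <= d_max:
                parent[j] = i        # first reachable patch to offer wins
                queue.append(j)
    return parent
```

For reference patches on a line, each patch becomes the child of its nearest already-assigned neighbor, giving the chain structure sketched by the arrows in fig. 0.12.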


Algorithm: Phase Map Retrieval

1. Let the origin be (x0, y0), the central point of the first-order diffraction.

2. Find the relative phase of each patch except the root patch:

• For each patch at location (x, y) and its parent at location (xp, yp), put two gratings on the patch and its parent.

• On the CCD, an interference pattern appears as described in [161] and [156]. Cut out the line profile perpendicular to the interference stripes.

• Fit the line profile to the formula f(x) = A sinc²(ks x + φs)[1 + cos(kc x + φc)], where ks is determined by the grating period and kc by the distance from the patch to its parent. φc is the phase of the patch relative to its parent.

3. Starting from the root patch, traverse the tree breadth-first and add the relative phase of each parent patch to the relative phases of its children (the root patch has zero phase).

Table 0.4: Algorithm: Phase Map Retrieval
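The fitting step can be sketched with scipy.optimize.curve_fit. Note that np.sinc(u) is sin(πu)/(πu), so the argument is divided by π to realize sinc(x) = sin(x)/x; the synthetic parameter values and starting guesses are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe_model(x, A, ks, phi_s, kc, phi_c):
    """Line profile of table 0.4: A sinc^2(ks*x + phi_s) [1 + cos(kc*x + phi_c)]."""
    return A * np.sinc((ks * x + phi_s) / np.pi) ** 2 * (1 + np.cos(kc * x + phi_c))

# Recover the relative phase phi_c from synthetic, noise-free data.
x = np.linspace(-20, 20, 400)
y = fringe_model(x, A=1.0, ks=0.2, phi_s=0.1, kc=2.0, phi_c=0.7)
popt, _ = curve_fit(fringe_model, x, y, p0=[0.9, 0.25, 0.0, 1.9, 0.5])
```

With a starting guess reasonably close to the truth, the fit recovers φc, which is the quantity the retrieval actually needs.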


Piece Up Phase Map

After the last step, the phase distortion information of each patch is put together to generate a phase map that looks like a mosaic pattern, as shown in fig. 0.13(a). We apply the unwrapping algorithm (see [156]) to get a new map, as shown in fig. 0.13(b), and bilinear interpolation is applied to produce a smooth phase map, fig. 0.13(c).
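The piece-up step can be sketched with NumPy/SciPy stand-ins: np.unwrap along both axes in place of the unwrapping algorithm of [156], and a first-order (bilinear) spline zoom for the interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

def smooth_phase_map(mosaic, upsample=4):
    """Unwrap a wrapped per-patch phase mosaic along both axes, then
    bilinearly interpolate it onto a finer grid (a sketch of fig. 0.13)."""
    unwrapped = np.unwrap(np.unwrap(mosaic, axis=0), axis=1)
    return zoom(unwrapped, upsample, order=1)   # order=1 means bilinear

# A phase ramp wrapped into [0, 2*pi) is restored to a monotone ramp.
ramp = np.array([[0.0, 3.0, 6.0, 9.0]]) % (2 * np.pi)
restored = smooth_phase_map(ramp, upsample=1)
# restored is approximately [[0, 3, 6, 9]]
```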

Phase Unwrapping

Wrapped Phase Map Unwrapped Map Interpolated Map


Figure 0.13: (a): Original phase map obtained by combining the phase distortion information from all patches. (b): Unwrapped phase map. (c): Bilinearly interpolated phase map.


Center of the First Diffraction Pattern


Figure 0.14: An ideal laser beam is reflected from two perfect gratings into a focusing lens31, and the two reflected beams interfere with each other on the imaging plane. Since there is no phase difference for the zero-order diffraction beam, we do not see any interference pattern. For the first and negative first-order diffraction patterns, we see interference stripes because the beams reflected from the two gratings have an overall π phase difference.

One challenging task of phase map retrieval is to find the center point (x0, y0) of the first-order diffraction on the imaging plane for the first step of table 0.4, as shown in fig. 0.14. The origin (x0, y0) is the position marked by the black cross in the first-order pattern. In practice, it is tough to pinpoint the origin precisely. To solve this issue, one can first scan a grating patch through the entire DMD area of interest, fit the first-order diffraction pattern with the theoretical formula, and then pick the center of the best fit as the origin. This is not fully reliable, however32. To

31Another solution is to use a single grating, where we do not see any interference in the first order; the center should then appear in the same position as marked by the cross.

32The result may not be precise in either the horizontal or the vertical direction if we fit the data in only a single dimension.


make the result more reliable, one can perform phase correction (see the next section) on some images and check how well the resulting phase map works, then fine-tune the point and check again. Typically, the calculated origin is very close to the actual one after several iterations.

H..8 Phase Correction

With the phase map at hand, the phase distortion can be undone for either the first-order image or the negative first-order image, as shown in fig. 0.15. The pattern on the DMD is a binary hologram, which can be Fourier transformed to get the desired image on the target (here, the CCD). It is called a binary hologram because each pixel (micromirror) on a DMD can have only two states.

Phase Corrections

No phase correction · Correction to the first order · Correction to the minus first order

Figure 0.15: (a): Original distorted image. (b): Improved image obtained by applying the phase map to the first-order diffracted beam. (c): Improved image obtained by applying the phase map to the negative first-order diffracted beam.


Gerchberg-Saxton Algorithm

The algorithm described in [156] gives the general idea of the transform and can yield excellent results. However, it has to apply a dithering method to improve the image quality, which may not give the best result. Here a modified version of the Gerchberg-Saxton (GS) algorithm is proposed that takes the binary nature of the DMD into account. The conventional GS algorithm can be carried out in two different ways; one of them is shown in table 0.5.


Algorithm: Gerchberg-Saxton Algorithm

1. A = IDFT(T), where T is the target image and IDFT(x) is the inverse Fourier transform.

2. B = Amp(S) e^{iφ(A)}, where S is the source image, Amp(x) is the function that returns the amplitude of x, and φ(x) returns the phase of x.

3. C = DFT(B), where DFT is the Fourier transform.

4. D = Amp(T) e^{iφ(C)}.

5. A = IDFT(D).

6. Check whether A has converged; if not, return to step 2.

Table 0.5: Algorithm: Gerchberg-Saxton Algorithm
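A minimal NumPy implementation of table 0.5; for simplicity we iterate a fixed number of times instead of testing convergence:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=20):
    """Conventional GS iteration: alternately impose the source amplitude
    in the hologram plane and the target amplitude in the image plane,
    keeping only the phases (steps 1-5 of table 0.5, repeated)."""
    A = np.fft.ifft2(target_amp)                    # step 1
    for _ in range(n_iter):
        B = source_amp * np.exp(1j * np.angle(A))   # step 2
        C = np.fft.fft2(B)                          # step 3
        D = target_amp * np.exp(1j * np.angle(C))   # step 4
        A = np.fft.ifft2(D)                         # step 5
    return np.angle(A)                              # hologram phase

# With a uniform source and a single-spot target, the recovered phase is a
# linear ramp that steers the light onto the spot.
source = np.ones((16, 16))
target = np.zeros((16, 16)); target[4, 5] = 16.0
phase = gerchberg_saxton(source, target)
spot = np.abs(np.fft.fft2(source * np.exp(1j * phase)))
# spot peaks at index (4, 5)
```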


Gerchberg Saxton Algorithm


Figure 0.16: Gerchberg-Saxton algorithm: The blue boxes represent either the Fourier transform or its inverse. The yellow pentagons combine an amplitude and a phase input into a complex map. The green boxes take complex numbers as input and output two real components (amplitude and phase). Step 5 outputs the hologram phase map. Step 3 takes the laser profile as input. Step 1 takes the desired target image as input.

Binarized Gerchberg-Saxton Algorithm

The conventional GS algorithm can also be presented as a flow chart, as shown in fig. 0.16. The direct application of this method to a DMD can be problematic due to the binary nature of the device: the final result has to be binarized using some cut-off value, which can be too coarse. If a simple binarization rule is used, for example setting all values with phase less than π to one and the rest to zero, the image quality may not be good, as can be seen in fig. 0.18.

To address this issue, an improved version of the GS algorithm is proposed here


by inserting one more step into the conventional GS algorithm, as shown in fig. 0.17. The modified algorithm is called the “Binarized GS Algorithm”, or BGS algorithm, and is described in table 0.6. This simple, single step makes a big difference: the resulting image is much better (see fig. 0.18).

Algorithm: Binarized Gerchberg-Saxton Algorithm.

1. A = IDFT(T), where T is the target image and IDFT(x) is the inverse Fourier transform.

2. B = Amp(S) × θ(φ(A)), where S is the source image, Amp(x) is the function that returns the amplitude of x, and φ(x) returns the phase of x. θ(x) is a binary function that returns 1 when x < π and 0 otherwise.

3. C = DFT(B), where DFT is the Fourier transform.

4. D = Amp(T) e^{iφ(C)}.

5. A = IDFT(D).

6. Check whether A has converged; if not, return to step 2.

Table 0.6: Algorithm: Binarized Gerchberg-Saxton Algorithm
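A sketch of table 0.6 in NumPy: the only change from the GS loop is step 2, where the source field is multiplied by the binary mask θ(φ(A)). Wrapping the phases into [0, 2π) before thresholding is our reading of θ:

```python
import numpy as np

def binarized_gs(source_amp, target_amp, n_iter=20):
    """Binarized GS iteration (table 0.6): the hologram is the binary mask
    theta(phi(A)), 1 where the wrapped phase of A is below pi, 0 otherwise."""
    A = np.fft.ifft2(target_amp)                    # step 1
    mask = np.ones_like(source_amp)
    for _ in range(n_iter):
        wrapped = np.angle(A) % (2 * np.pi)         # phase in [0, 2*pi)
        mask = (wrapped < np.pi).astype(float)      # theta(phi(A))
        B = source_amp * mask                       # step 2: binary hologram
        C = np.fft.fft2(B)                          # step 3
        D = target_amp * np.exp(1j * np.angle(C))   # step 4
        A = np.fft.ifft2(D)                         # step 5
    return mask.astype(np.uint8)                    # DMD on/off pattern
```

The returned mask is directly loadable onto the DMD, with no dithering or post-hoc thresholding needed.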


Binarized Gerchberg Saxton Algorithm


Figure 0.17: Binarized Gerchberg-Saxton algorithm: All the steps are the same as in the GS algorithm shown in fig. 0.16. The only difference is the diamond-shaped step, where the laser profile is masked based on the phase from step 2: if the phase of a pixel is larger than π, the intensity of that point input at step 3 is set to zero.

The improvement from the binarized GS algorithm is shown in fig. 0.18; the modified algorithm yields more accurate images with less surrounding noise.


Performance Comparison Between Two Gerchberg Saxton Algorithms

Figure 0.18: The images in the first column are the target images; those in the middle column are the results from the conventional GS algorithm, while the third column shows the results from the binarized GS algorithm.


Applications of Phase Map Correction

So far, the distorted phase map has not been taken into consideration. In actual experiments, the phase correction is done by adding the phase map to the output of step 5 of table 0.6. We then binarize the result using a simple rule: set the value of a pixel to one if the corresponding phase is smaller than π, and to zero if it is greater than π. The image quality of the BGS method is much better than that of the GS method, and it is also better than the method mentioned in [156].
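The correction rule just described fits in a few lines (a sketch; wrapping the total phase into [0, 2π) before comparing against π is our assumption):

```python
import numpy as np

def apply_phase_correction(hologram_phase, phase_map):
    """Add the measured distortion map to the hologram phase (step 5
    output of table 0.6), wrap, and binarize: a pixel is on (1) where the
    total phase is smaller than pi, off (0) where it is greater."""
    total = (hologram_phase + phase_map) % (2 * np.pi)
    return (total < np.pi).astype(np.uint8)

# Example: phases 0 and 3 rad shifted by 0 and 1 rad give on/off pixels.
bits = apply_phase_correction(np.array([0.0, 3.0]), np.array([0.0, 1.0]))
# bits == [1, 0]: total 0 < pi turns the pixel on, total 4 > pi turns it off
```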

Ideal Gaussian beam

One application of the phase correction is to generate an ideal Gaussian beam.

The results, from our first successful experimental test, are shown in fig. 0.19. From the marginal profiles in both the x and y directions, it can be seen that the resulting Gaussian profile is much better for the first order in fig. 0.19(c) and the negative first order in fig. 0.19(d), compared with fig. 0.19(b). The method used here is described in [156]. The correction is not perfect, primarily because amplitude modulation was not applied and we used a toy DMD whose mirrors are arranged in a diamond pattern, which makes the aspect ratio of the DMD not proportional to the pixel-number ratio, i.e.

(height of DMD)/(width of DMD) ≠ (number of pixels in the height direction)/(number of pixels in the width direction)


To achieve the ideal Gaussian beam, the DMD in the setup of fig. 0.9 is programmed with a grating pattern as shown in the first row of fig. 0.20. If the diffracted beams are then focused on the target plane, the first-order, zero-order, and negative first-order beams are distorted by aberrations, as in fig. 0.19(b). To correct the distortion, we can add the phase map to the grating, where black pixels have the value zero and white pixels the value one. Binarizing the sum distorts the grating as shown in the second row of fig. 0.20. Such a DMD pattern improves the first-order beam significantly, as can be seen from the result. If the phase map is instead subtracted from the grating, as shown in the third row of fig. 0.20, the negative first-order beam is corrected.
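The grating-plus-phase-map construction can be sketched as follows; representing the binary grating as phases 0 and π before adding the map, and the choice of grating period, are our assumptions:

```python
import numpy as np

def corrected_grating(shape, period, phase_map, sign=+1):
    """Superpose a binary grating (phase 0 or pi) with the system phase
    map and binarize (a sketch of fig. 0.20). sign=+1 corrects the
    first-order beam; sign=-1 corrects the negative first-order beam."""
    _, x = np.indices(shape)
    grating = np.pi * ((x // (period // 2)) % 2)    # square grating: 0 / pi
    total = (grating + sign * phase_map) % (2 * np.pi)
    return (total < np.pi).astype(np.uint8)

# With a flat (zero) phase map, the plain grating is recovered.
g = corrected_grating((4, 8), 4, np.zeros((4, 8)))
# g[0] == [1, 1, 0, 0, 1, 1, 0, 0]
```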


Ideal Gaussian Beams Generation


Figure 0.19: The first successful test of phase correction. (a): The phase distortion map for the optical setup. (b): The first, zeroth, and negative first orders of the uncorrected Gaussian beams. (c): Phase correction applied to the first-order beam. (d): Phase correction applied to the negative first-order beam.


Ideal Gaussian Beams Generation Algebra


Figure 0.20: (a): An ideal grating generated on the DMD without correction yields distorted diffracted beams. (b): The phase map of the optical system is added to the grating, which changes the landscape of the grating. The resulting first-order beam is corrected, as can be seen from its significantly improved marginal profiles. (c): Subtracting the phase map from the grating yields an improvement in the negative first-order beam.


Bibliography

[1] Christopher J Pethick and Henrik Smith.

Bose–Einstein condensation in dilute gases.

Cambridge university press, 2008.

[2] Tigran Kalaydzhyan. “Chiral superfluidity of the quark–gluon plasma”.

In: Nuclear Physics A 913 (2013), pp. 243–263. issn: 0375-9474.

[3] Mark G. Alford, Andreas Schmitt, Krishna Rajagopal, and Thomas Schäfer.

“Color superconductivity in dense quark matter”.

In: Rev. Mod. Phys. 80 (4 2008), pp. 1455–1515.

[4] P. Kapitza. “Viscosity of Liquid Helium below the λ-Point”.

In: Nature 141.3558 (1938), pp. 74–74. issn: 1476-4687.

[5] H. Kamerlingh Onnes. “Further experiments with liquid helium. C. On the

change of electric resistance of pure metals at very low temperatures etc. IV.

The resistance of pure mercury at helium temperatures”.

In: Through Measurement to Knowledge: The Selected Papers of Heike

Kamerlingh Onnes 1853–1926.

Ed. by Kostas Gavroglu and Yorgos Goudaroulis.

Dordrecht: Springer Netherlands, 1991, pp. 261–263.

isbn: 978-94-009-2079-8.


[6] J. F. Allen and A. D. Misener. “Flow of Liquid Helium II”.

In: Nature 141.3558 (1938), pp. 75–75. issn: 1476-4687.

[7] J. Bardeen, L. N. Cooper, and J. R. Schrieffer.

“Theory of Superconductivity”. In: Phys. Rev. 108 (5 1957), pp. 1175–1204.

[8] Leon N. Cooper. “Bound Electron Pairs in a Degenerate Fermi Gas”.

In: Phys. Rev. 104 (4 1956), pp. 1189–1190.

[9] Wilhelm Zwerger. The BCS-BEC crossover and the unitary Fermi gas.

Vol. 836. Springer Science & Business Media, 2011.

[10] Immanuel Bloch, Jean Dalibard, and Wilhelm Zwerger.

“Many-body physics with ultracold gases”.

In: Reviews of Modern Physics 80.3 (2008), pp. 885–964.

[11] Charles Kittel and Paul McEuen.

Introduction to Solid State Physics. Vol. 8. Wiley, New York, 1996.

[12] M. Prakash, T. L. Ainsworth, and J. M. Lattimer.

“Equation of State and the Maximum Mass of Neutron Stars”.

In: Phys. Rev. Lett. 61 (22 1988), pp. 2518–2521.

[13] Michael McNeil Forbes, Sukanta Bose, Sanjay Reddy, Dake Zhou,

Arunava Mukherjee, and Soumi De. “Constraining the neutron-matter


equation of state with gravitational waves”.

In: Phys. Rev. D 100 (8 2019), p. 083010.

[14] J. Sauls. “Superfluidity in the interiors of neutron stars”.

In: NATO Advanced Science Institutes (ASI) Series C.

Ed. by H. Ogelman and E. P. J. van den Heuvel. Vol. 262.

NATO Advanced Science Institutes (ASI) Series C. 1989, p. 457.

isbn: 978-94-009-2273-0.

[15] V. Radhakrishnan and R. N. Manchester.

“Detection of a Change of State in the Pulsar PSR 0833-45”.

In: Nature 222.5190 (1969), pp. 228–229. issn: 1476-4687.

[16] Bennett Link, Richard I. Epstein, and Kenneth A. Van Riper.

“Pulsar glitches as probes of neutron star interiors”.

In: Nature 359.6396 (1992), pp. 616–618. issn: 1476-4687.

[17] A.B. Migdal. “Superfluidity and the moments of inertia of nuclei”.

In: Nuclear Physics 13.5 (1959), pp. 655–674. issn: 0029-5582.

[18] Dany Page and Sanjay Reddy. “Dense Matter in Compact Stars: Theoretical

Developments and Observational Constraints”.

In: Annual Review of Nuclear and Particle Science 56.1 (2006), pp. 327–374.


[19] M. Tanabashi, K. Hagiwara, K. Hikasa, K. Nakamura, Y. Sumino,

F. Takahashi, J. Tanaka, K. Agashe, G. Aielli, C. Amsler, et al.

“Review of Particle Physics”. In: Phys. Rev. D 98 (3 2018), p. 030001.

[20] Guthrie B. Partridge, Wenhui Li, Ramsey I. Kamar, Yean-an Liao, and

Randall G. Hulet. “Pairing and Phase Separation in a Polarized Fermi Gas”.

In: Science 311.5760 (2006), pp. 503–505. issn: 0036-8075.

[21] A. M. Clogston.

“Upper Limit for the Critical Field in Hard Superconductors”.

In: Phys. Rev. Lett. 9 (6 1962), pp. 266–267.

[22] K. Maki and T. Tsuneto.

“Pauli Paramagnetism and Superconducting State”.

In: Progress of Theoretical Physics 31.6 (1964), pp. 945–956.

[23] Paweł Haensel, Aleksander Yu Potekhin, and Dmitry G Yakovlev.

Neutron stars 1: Equation of state and structure. Vol. 326.

Springer Science & Business Media, 2007.

[24] Peter Fulde and Richard A. Ferrell.

“Superconductivity in a Strong Spin-Exchange Field”.

In: Phys. Rev. 135 (3A 1964), A550–A563.


[25] AI Larkin and Yu N Ovchinnikov. “Nonuniform state of superconductors”.

In: Soviet Physics-JETP 20.3 (1965), pp. 762–762.

[26] Richard D Mattuck.

A guide to Feynman diagrams in the many-body problem.

Courier Corporation, 1992.

[27] John Bardeen and David Pines. “Electron-Phonon Interaction in Metals”.

In: Phys. Rev. 99 (4 1955), pp. 1140–1150.

[28] Bascom S. Deaver and William M. Fairbank.

“Experimental Evidence for Quantized Flux in Superconducting Cylinders”.

In: Phys. Rev. Lett. 7 (2 1961), pp. 43–46.

[29] R. Doll and M. Näbauer. “Experimental Proof of Magnetic Flux

Quantization in a Superconducting Ring”.

In: Phys. Rev. Lett. 7 (2 1961), pp. 51–52.

[30] Barry N Taylor, Peter J Mohr, and M Douma.

“The NIST Reference on constants, units, and uncertainty”.

Available online: physics.nist.gov/cuu/index (2007).

[31] John David Jackson. Classical electrodynamics. John Wiley & Sons, 2007.

[32] Michael Tinkham. Introduction to superconductivity.

Courier Corporation, 2004.


[33] Peter Ring and Peter Schuck. The nuclear many-body problem.

Springer Science & Business Media, 2004.

[34] Philippe Andre Martin and Francois Rothen.

Many-body problems and quantum field theory: an introduction.

Springer Science & Business Media, 2013.

[35] P. Hohenberg and W. Kohn. “Inhomogeneous Electron Gas”.

In: Phys. Rev. 136 (3B 1964), B864–B871.

[36] W. Kohn and L. J. Sham.

“Self-Consistent Equations Including Exchange and Correlation Effects”.

In: Phys. Rev. 140 (4A 1965), A1133–A1138.

[37] Richard Feynman.

Statistical Mechanics: A Set of Lectures. Advanced Book Classics, 1998.

[38] N. N. Bogoljubov, V. V. Tolmachov, and D. V. Sirkov.

“A New Method in the Theory of Superconductivity”.

In: Fortschritte der Physik 6.11-12 (1958), pp. 605–682.

[39] JG Valatin. “Comments on the theory of superconductivity”.

In: Il Nuovo Cimento (1955-1965) 7.6 (1958), pp. 843–857.

[40] W. Vincent Liu and Frank Wilczek. “Interior Gap Superfluidity”.

In: Phys. Rev. Lett. 90 (4 2003), p. 047002.


[41] Michael McNeil Forbes, Elena Gubankova, W. Vincent Liu, and

Frank Wilczek. “Stability Criteria for Breached-Pair Superfluidity”.

In: Phys. Rev. Lett. 94 (1 2005), p. 017001.

[42] G. Sarma. “On the influence of a uniform exchange field acting on the spins

of the conduction electrons in a superconductor”.

In: Journal of Physics and Chemistry of Solids 24.8 (1963), pp. 1029–1032.

issn: 0022-3697.

[43] W. Yi and L.-M. Duan.

“Detecting the Breached-Pair Phase in a Polarized Ultracold Fermi Gas”.

In: Phys. Rev. Lett. 97 (12 2006), p. 120401.

[44] Elena Gubankova, W. Vincent Liu, and Frank Wilczek.

“Breached Pairing Superfluidity: Possible Realization in QCD”.

In: Phys. Rev. Lett. 91 (3 2003), p. 032001.

[45] E. Gubankova, E. G. Mishchenko, and F. Wilczek.

“Breached Superfluidity via p-Wave Coupling”.

In: Phys. Rev. Lett. 94 (11 2005), p. 110402.

[46] Ivar Giaever. “Electron tunneling and superconductivity”.

In: Rev. Mod. Phys. 46 (2 1974), pp. 245–250.


[47] E. K. Moser, W. J. Tomasch, M. J. McClorey, J. K. Furdyna, M. W. Coffey,

C. L. Pettiette-Hall, and S. M. Schwarzbek.

“Microwave properties of YBa2Cu3O7−x films at 35 GHz from

magnetotransmission and magnetoreflection measurements”.

In: Phys. Rev. B 49 (6 1994), pp. 4199–4208.

[48] Y. Shin, C. H. Schunck, A. Schirotzek, and W. Ketterle.

“Tomographic rf Spectroscopy of a Trapped Fermi Gas at Unitarity”.

In: Phys. Rev. Lett. 99 (9 2007), p. 090403.

[49] J. Carlson and Sanjay Reddy. “Superfluid Pairing Gap in Strong Coupling”.

In: Phys. Rev. Lett. 100 (15 2008), p. 150403.

[50] C. N. Yang. “Concept of Off-Diagonal Long-Range Order and the Quantum

Phases of Liquid He and of Superconductors”.

In: Rev. Mod. Phys. 34 (4 1962), pp. 694–704.

[51] Yosuke Nagaoka. “DLRO, ODLRO and superfluidity”.

In: Physics of Highly Excited States in Solids.

Berlin, Heidelberg: Springer Berlin Heidelberg, 1976, pp. 137–143.

isbn: 978-3-540-37975-1.


[52] Chunji Wang, Chao Gao, Chao-Ming Jian, and Hui Zhai.

“Spin-Orbit Coupled Spinor Bose-Einstein Condensates”.

In: Phys. Rev. Lett. 105 (16 2010), p. 160403.

[53] Y.-J. Lin, K. Jiménez-García, and I. B. Spielman.

“Spin-orbit-coupled Bose-Einstein condensates”.

In: Nature 471.7336 (2011), pp. 83–86. issn: 1476-4687.

[54] Pengjun Wang, Zeng-Qiang Yu, Zhengkun Fu, Jiao Miao, Lianghui Huang,

Shijie Chai, Hui Zhai, and Jing Zhang.

“Spin-Orbit Coupled Degenerate Fermi Gases”.

In: Phys. Rev. Lett. 109 (9 2012), p. 095301.

[55] Lawrence W. Cheuk, Ariel T. Sommer, Zoran Hadzibabic, Tarik Yefsah,

Waseem S. Bakr, and Martin W. Zwierlein.

“Spin-Injection Spectroscopy of a Spin-Orbit Coupled Fermi Gas”.

In: Phys. Rev. Lett. 109 (9 2012), p. 095302.

[56] U. Fano.

“Effects of Configuration Interaction on Intensities and Phase Shifts”.

In: Phys. Rev. 124 (6 1961), pp. 1866–1878.

[57] Herman Feshbach. “A unified theory of nuclear reactions. II”.

In: Annals of Physics 19.2 (1962), pp. 287–313. issn: 0003-4916.


[58] S. Peil, J. V. Porto, B. Laburthe Tolra, J. M. Obrecht, B. E. King,

M. Subbotin, S. L. Rolston, and W. D. Phillips.

“Patterned loading of a Bose-Einstein condensate into an optical lattice”.

In: Phys. Rev. A 67 (5 2003), p. 051603.

[59] Ana Maria Rey, B. L. Hu, Esteban Calzetta, Albert Roura, and

Charles W. Clark.

“Nonequilibrium dynamics of optical-lattice-loaded Bose-Einstein-condensate

atoms: Beyond the Hartree-Fock-Bogoliubov approximation”.

In: Phys. Rev. A 69 (3 2004), p. 033610.

[60] Martin Lebrat, Pjotrs Grišins, Dominik Husmann, Samuel Häusler,

Laura Corman, Thierry Giamarchi, Jean-Philippe Brantut, and

Tilman Esslinger.

“Band and Correlated Insulators of Cold Fermions in a Mesoscopic Lattice”.

In: Phys. Rev. X 8 (1 2018), p. 011053.

[61] Piotr Magierski, Buğra Tüzemen, and Gabriel Wlazłowski.

“Spin-polarized droplets in the unitary Fermi gas”.

In: Phys. Rev. A 100 (3 2019), p. 033613.

[62] Y.-J. Lin, R. L. Compton, A. R. Perry, W. D. Phillips, J. V. Porto, and

I. B. Spielman.


“Bose-Einstein Condensate in a Uniform Light-Induced Vector Potential”.

In: Phys. Rev. Lett. 102 (13 2009), p. 130401.

[63] Allan Griffin, David W Snoke, and Sandro Stringari.

Bose-Einstein condensation. Cambridge University Press, 1996.

[64] D. M. Eagles. “Possible Pairing without Superconductivity at Low Carrier

Concentrations in Bulk and Thin-Film Superconducting Semiconductors”.

In: Phys. Rev. 186 (2 1969), pp. 456–463.

[65] Anthony James Leggett. “Diatomic molecules and Cooper pairs”.

In: Modern trends in the theory of condensed matter. Springer, 1980,

pp. 13–27.

[66] E. Tiesinga, B. J. Verhaar, and H. T. C. Stoof.

“Threshold and resonance phenomena in ultracold ground-state collisions”.

In: Phys. Rev. A 47 (5 1993), pp. 4114–4122.

[67] William C. Stwalley. “Stability of Spin-Aligned Hydrogen at Low

Temperatures and High Magnetic Fields: New Field-Dependent Scattering

Resonances and Predissociations”.

In: Phys. Rev. Lett. 37 (24 1976), pp. 1628–1631.

[68] S. Inouye, M. R. Andrews, J. Stenger, H.-J. Miesner, D. M. Stamper-Kurn,

and W. Ketterle.

“Observation of Feshbach resonances in a Bose–Einstein condensate”.

In: Nature 392.6672 (1998), pp. 151–154. issn: 1476-4687.

[69] Ph. Courteille, R. S. Freeland, D. J. Heinzen, F. A. van Abeelen, and

B. J. Verhaar.

“Observation of a Feshbach Resonance in Cold Atom Scattering”.

In: Phys. Rev. Lett. 81 (1 1998), pp. 69–72.

[70] Cheng Chin, Rudolf Grimm, Paul Julienne, and Eite Tiesinga.

“Feshbach resonances in ultracold gases”.

In: Rev. Mod. Phys. 82 (2 2010), pp. 1225–1286.

[71] C. A. Regal, M. Greiner, and D. S. Jin.

“Observation of Resonance Condensation of Fermionic Atom Pairs”.

In: Phys. Rev. Lett. 92 (4 2004), p. 040403.

[72] M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach,

A. J. Kerman, and W. Ketterle.

“Condensation of Pairs of Fermionic Atoms near a Feshbach Resonance”.

In: Phys. Rev. Lett. 92 (12 2004), p. 120403.

[73] Wolfgang Ketterle and Martin W Zwierlein.

“Making, probing and understanding ultracold Fermi gases”.

In: arXiv e-prints, arXiv:0801.2500 (2008). arXiv: 0801.2500 [cond-mat.other].

[74] M. Bartenstein, A. Altmeyer, S. Riedl, R. Geursen, S. Jochim, C. Chin,

J. Hecker Denschlag, R. Grimm, A. Simoni, E. Tiesinga, C. J. Williams, and

P. S. Julienne. “Precise Determination of 6Li Cold Collision Parameters by

Radio-Frequency Spectroscopy on Weakly Bound Molecules”.

In: Phys. Rev. Lett. 94 (10 2005), p. 103201.

[75] Michael Forbes. “The unitary Fermi gas: An overview”.

In: INT, University of Washington, unpublished (2012).

[76] P. Nozières and S. Schmitt-Rink. “Bose condensation in an attractive

fermion gas: From weak to strong coupling superconductivity”.

In: Journal of Low Temperature Physics 59.3 (1985), pp. 195–211.

issn: 1573-7357.

[77] C. A. R. Sá de Melo, Mohit Randeria, and Jan R. Engelbrecht.

“Crossover from BCS to Bose superconductivity: Transition temperature and

time-dependent Ginzburg-Landau theory”.

In: Phys. Rev. Lett. 71 (19 1993), pp. 3202–3205.

[78] R. Haussmann, W. Rantner, S. Cerrito, and W. Zwerger.

“Thermodynamics of the BCS-BEC crossover”.

In: Phys. Rev. A 75 (2 2007), p. 023610.

[79] Olga Goulko and Matthew Wingate. “Thermodynamics of balanced and

slightly spin-imbalanced Fermi gases at unitarity”.

In: Phys. Rev. A 82 (5 2010), p. 053621.

[80] L. H. Thomas. “The calculation of atomic fields”. In: Mathematical

Proceedings of the Cambridge Philosophical Society 23.5 (1927), pp. 542–548.

[81] Enrico Fermi.

“Un metodo statistico per la determinazione di alcune proprietà dell’atomo”.

In: Rend. Accad. Naz. Lincei 6 (1927), pp. 602–607.

[82] Eberhard Engel and Reiner M Dreizler. Density functional theory.

Springer, 2013.

[83] Richard J. Furnstahl, Gautam Rupak, and Thomas Schäfer.

“Effective Field Theory and Finite-Density Systems”.

In: Annual Review of Nuclear and Particle Science 58.1 (2008), pp. 1–25.

[84] L. Salasnich, N. Manini, and F. Toigo.

“Macroscopic periodic tunneling of Fermi atoms in the BCS-BEC crossover”.

In: Phys. Rev. A 77 (4 2008), p. 043609.

[85] C. F. von Weizsäcker. “Zur Theorie der Kernmassen”.

In: Z. Phys. 96 (1935), p. 431.

[86] Luca Salasnich and Flavio Toigo.

“Extended Thomas-Fermi density functional for the unitary Fermi gas”.

In: Phys. Rev. A 78 (5 2008), p. 053626.

[87] Aurel Bulgac and Yongle Yu.

“Vortex State in a Strongly Coupled Dilute Atomic Fermionic Superfluid”.

In: Phys. Rev. Lett. 91 (19 2003), p. 190404.

[88] L. N. Oliveira, E. K. U. Gross, and W. Kohn.

“Density-Functional Theory for Superconductors”.

In: Phys. Rev. Lett. 60 (23 1988), pp. 2430–2433.

[89] S. Kurth, M. Marques, M. Lüders, and E. K. U. Gross.

“Local Density Approximation for Superconductors”.

In: Phys. Rev. Lett. 83 (13 1999), pp. 2628–2631.

[90] E. K. U. Gross, M. Marques, M. Lüders, and Lars Fast. “Calculating the

critical temperature of superconductors from first principles”.

In: AIP Conference Proceedings 577.1 (2001), pp. 177–182.

[91] Aurel Bulgac. “Local-density-functional theory for superfluid fermionic

systems: The unitary gas”. In: Phys. Rev. A 76 (4 2007), p. 040502.

[92] Aurel Bulgac, Michael McNeil Forbes, and Piotr Magierski.

“The unitary Fermi gas: From Monte Carlo to density functionals”.

In: The BCS-BEC Crossover and the Unitary Fermi Gas. Springer, 2012,

pp. 305–373.

[93] Eric Braaten and H.-W. Hammer.

“Universality in few-body systems with large scattering length”.

In: Physics Reports 428.5 (2006), pp. 259–390. issn: 0370-1573.

[94] Alexander L Fetter and John Dirk Walecka.

Quantum theory of many-particle systems. Courier Corporation, 2012.

[95] Jun John Sakurai.

Modern Quantum Mechanics. Revised edition. Addison-Wesley, 1994.

[96] T. Papenbrock and G. F. Bertsch. “Pairing in low-density Fermi gases”.

In: Phys. Rev. C 59 (4 1999), pp. 2052–2055.

[97] Aurel Bulgac and Yongle Yu.

“Renormalization of the Hartree-Fock-Bogoliubov Equations in the Case of a

Zero Range Pairing Interaction”. In: Phys. Rev. Lett. 88 (4 2002), p. 042504.

[98] C. Lobo, A. Recati, S. Giorgini, and S. Stringari.

“Normal State of a Polarized Fermi Gas at Unitarity”.

In: Phys. Rev. Lett. 97 (20 2006), p. 200403.

[99] Aurel Bulgac and Michael McNeil Forbes.

“Unitary Fermi Supersolid: The Larkin-Ovchinnikov Phase”.

In: Phys. Rev. Lett. 101 (21 2008), p. 215301.

[100] Aurel Bulgac and Michael McNeil Forbes.

“The Asymmetric Superfluid Local Density Approximation (ASLDA)”.

In: arXiv e-prints, arXiv:0808.1436 (2008), arXiv:0808.1436.

arXiv: 0808.1436 [cond-mat.supr-con].

[101] Charles G Broyden.

“A class of methods for solving nonlinear simultaneous equations”.

In: Mathematics of computation 19.92 (1965), pp. 577–593.

[102] Anushree Datta, Kun Yang, and Amit Ghosal.

“Fate of a strongly correlated d-wave superconductor in a Zeeman field: The

Fulde-Ferrel-Larkin-Ovchinnikov perspective”.

In: Phys. Rev. B 100 (3 2019), p. 035114.

[103] Jami J Kinnunen, Jildou E Baarsma, Jani-Petri Martikainen, and

Päivi Törmä. “The Fulde–Ferrell–Larkin–Ovchinnikov state for ultracold

fermions in lattice and harmonic potentials: a review”.

In: Reports on Progress in Physics 81.4 (2018), p. 046401.

[104] B. S. Chandrasekhar. “A note on the maximum critical field

of high-field superconductors”.

In: Applied Physics Letters 1.1 (1962), pp. 7–8.

[105] David Bohm. “Note on a theorem of Bloch concerning possible causes of

superconductivity”. In: Physical Review 75.3 (1949), p. 502.

[106] C. Mora and R. Combescot.

“Transition to Fulde-Ferrell-Larkin-Ovchinnikov phases in three dimensions:

A quasiclassical investigation at low temperature with Fourier expansion”.

In: Phys. Rev. B 71 (21 2005), p. 214504.

[107] Nobukatsu Yoshida and S.-K. Yip.

“Larkin-Ovchinnikov state in resonant Fermi gas”.

In: Phys. Rev. A 75 (6 2007), p. 063601.

[108] G. G. Batrouni, M. H. Huntley, V. G. Rousseau, and R. T. Scalettar. “Exact

Numerical Study of Pair Formation with Imbalanced Fermion Populations”.

In: Phys. Rev. Lett. 100 (11 2008), p. 116405.

[109] J. E. Baarsma and H. T. C. Stoof.

“Inhomogeneous superfluid phases in 6Li-40K mixtures at unitarity”.

In: Phys. Rev. A 87 (6 2013), p. 063612.

[110] T. K. Koponen, T. Paananen, J.-P. Martikainen, and P. Törmä.

“Finite-Temperature Phase Diagram of a Polarized Fermi Gas in an Optical

Lattice”. In: Phys. Rev. Lett. 99 (12 2007), p. 120403.

[111] T. K. Koponen, T. Paananen, J.-P. Martikainen, M. R. Bakhtiari, and P. Törmä.

“FFLO state in 1-, 2- and 3-dimensional optical lattices combined with a

non-uniform background potential”.

In: New Journal of Physics 10.4 (2008), p. 045014.

[112] Hui Hu and Xia-Ji Liu. “Fulde–Ferrell superfluidity in ultracold Fermi gases

with Rashba spin–orbit coupling”.

In: New Journal of Physics 15.9 (2013), p. 093037.

[113] Yong Xu, Chunlei Qu, Ming Gong, and Chuanwei Zhang. “Competing

superfluid orders in spin-orbit-coupled fermionic cold-atom optical lattices”.

In: Phys. Rev. A 89 (1 2014), p. 013607.

[114] M. Iskin. “Spin-orbit-coupling-induced Fulde-Ferrell-Larkin-Ovchinnikov-like

Cooper pairing and skyrmion-like polarization textures in optical lattices”.

In: Phys. Rev. A 88 (1 2013), p. 013631.

[115] Leonard W. Gruenberg and Leon Gunther.

“Fulde-Ferrell Effect in Type-II Superconductors”.

In: Phys. Rev. Lett. 16 (22 1966), pp. 996–998.

[116] S. Takada. “Superconductivity in a Molecular Field. II —Stability of

Fulde-Ferrel Phase—”.

In: Progress of Theoretical Physics 43.1 (1970), pp. 27–38.

[117] L. G. Aslamazov. “Influence of Impurities on the Existence of an

Inhomogeneous State in a Ferromagnetic Superconductor”.

In: Soviet Journal of Experimental and Theoretical Physics 28 (1969), p. 773.

[118] Yuji Matsuda and Hiroshi Shimahara. “Fulde–Ferrell–Larkin–Ovchinnikov

State in Heavy Fermion Superconductors”.

In: Journal of the Physical Society of Japan 76.5 (2007), p. 051005.

[119] R. Beyer and J. Wosnitza. “Emerging evidence for FFLO states in layered

organic superconductors (Review Article)”.

In: Low Temperature Physics 39.3 (2013), pp. 225–231.

[120] Joachim Wosnitza. “FFLO States in Layered Organic Superconductors”.

In: Annalen der Physik 530.2 (2018), p. 1700282.

[121] S. Uji, T. Terashima, M. Nishimura, Y. Takahide, T. Konoike, K. Enomoto,

H. Cui, H. Kobayashi, A. Kobayashi, H. Tanaka, M. Tokumoto, E. S. Choi,

T. Tokumoto, D. Graf, and J. S. Brooks.

“Vortex Dynamics and the Fulde-Ferrell-Larkin-Ovchinnikov State in a

Magnetic-Field-Induced Organic Superconductor”.

In: Phys. Rev. Lett. 97 (15 2006), p. 157001.

[122] William A. Coniglio, Laurel E. Winter, Kyuil Cho, C. C. Agosta, B. Fravel,

and L. K. Montgomery. “Superconducting phase diagram and FFLO

signature in λ-(BETS)2GaCl4 from rf penetration depth measurements”.

In: Phys. Rev. B 83 (22 2011), p. 224507.

[123] H. Mayaffre, S. Krämer, M. Horvatić, C. Berthier, K. Miyagawa, K. Kanoda,

and V. F. Mitrović. “Evidence of Andreev bound states as a hallmark of the

FFLO phase in κ-(BEDT-TTF)2Cu(NCS)2”.

In: Nature Physics 10.12 (2014), pp. 928–932.

[124] Satoshi Tsuchiya, Jun-ichi Yamada, Kaori Sugii, David Graf,

James S. Brooks, Taichi Terashima, and Shinya Uji.

“Phase boundary in a superconducting state of κ-(BEDT-TTF)2Cu(NCS)2:

Evidence of the Fulde–Ferrell–Larkin–Ovchinnikov phase”.

In: Journal of the Physical Society of Japan 84.3 (2015), p. 034703.

[125] Shinya Uji, Kouta Kodama, Kaori Sugii, Taichi Terashima,

Takahide Yamaguchi, Nobuyuki Kurita, Satoshi Tsuchiya, Takako Konoike,

Motoi Kimata, Akiko Kobayashi, Biao Zhou, and Hayao Kobayashi.

“Vortex dynamics and diamagnetic torque signals in two dimensional organic

superconductor λ-(BETS)2GaCl4”.

In: Journal of the Physical Society of Japan 84.10 (2015), p. 104709.

[126] Charles C. Agosta, Nathanael A. Fortune, Scott T. Hannahs, Shuyao Gu,

Lucy Liang, Ju-Hyun Park, and John A. Schlueter.

“Calorimetric Measurements of Magnetic-Field-Induced Inhomogeneous

Superconductivity Above the Paramagnetic Limit”.

In: Phys. Rev. Lett. 118 (26 2017), p. 267001.

[127] Zhen Zheng and Z. D. Wang. “Cavity-induced

Fulde-Ferrell-Larkin-Ovchinnikov superfluids of ultracold Fermi gases”.

In: Phys. Rev. A 101 (2 2020), p. 023612.

[128] Martin W. Zwierlein, Andre Schirotzek, Christian H. Schunck, and

Wolfgang Ketterle.

“Fermionic Superfluidity with Imbalanced Spin Populations”.

In: Science 311.5760 (2006), pp. 492–496. issn: 0036-8075.

[129] Shiori Sugiura, Takayuki Isono, Taichi Terashima, Syuma Yasuzuka,

John A. Schlueter, and Shinya Uji. “Fulde–Ferrell–Larkin–Ovchinnikov and

vortex phases in a layered organic superconductor”.

In: npj Quantum Materials 4.1 (2019), p. 7. issn: 2397-4648.

[130] J. E. Williams and M. J. Holland.

“Preparing topological states of a Bose–Einstein condensate”.

In: Nature 401.6753 (1999), pp. 568–572. issn: 1476-4687.

[131] M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, M. J. Holland,

J. E. Williams, C. E. Wieman, and E. A. Cornell.

“Watching a Superfluid Untwist Itself: Recurrence of Rabi Oscillations in a

Bose-Einstein Condensate”. In: Phys. Rev. Lett. 83 (17 1999), pp. 3358–3361.

[132] J. Denschlag, J. E. Simsarian, D. L. Feder, Charles W. Clark, L. A. Collins,

J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P. Reinhardt,

S. L. Rolston, B. I. Schneider, and W. D. Phillips.

“Generating Solitons by Phase Engineering of a Bose-Einstein Condensate”.

In: Science 287.5450 (2000), pp. 97–101. issn: 0036-8075.

[133] M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and

E. A. Cornell.

“Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor”.

In: Science 269.5221 (1995), pp. 198–201. issn: 0036-8075.

[134] Eric A. Cornell, Jason R. Ensher, and Carl E. Wieman.

“Experiments in Dilute Atomic Bose-Einstein Condensation”.

In: arXiv e-prints, cond-mat/9903109 (1999), cond-mat/9903109.

arXiv: cond-mat/9903109 [cond-mat].

[135] W. Ketterle, D. S. Durfee, and D. M. Stamper-Kurn.

“Making, probing and understanding Bose-Einstein condensates”.

In: arXiv e-prints, cond-mat/9904034 (1999), cond-mat/9904034.

arXiv: cond-mat/9904034 [cond-mat].

[136] M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman,

and E. A. Cornell. “Vortices in a Bose-Einstein Condensate”.

In: Phys. Rev. Lett. 83 (13 1999), pp. 2498–2501.

[137] Aurel Bulgac, Michael McNeil Forbes, and Achim Schwenk.

“Induced P-Wave Superfluidity in Asymmetric Fermi Gases”.

In: Phys. Rev. Lett. 97 (2 2006), p. 020402.

[138] David B. Kaplan. “Five lectures on effective field theory”.

In: arXiv e-prints, nucl-th/0510023 (2005), nucl-th/0510023.

arXiv: nucl-th/0510023 [nucl-th].

[139] Daniel E. Sheehy and Leo Radzihovsky. “BEC–BCS crossover, phase

transitions and phase separation in polarized resonantly-paired superfluids”.

In: Annals of Physics 322.8 (2007), pp. 1790–1924. issn: 0003-4916.

[140] Aurel Bulgac, Michael McNeil Forbes, Kenneth J. Roche, and

Gabriel Wlazłowski. “Quantum Friction: Cooling Quantum Systems with

Unitary Time Evolution”.

In: arXiv e-prints, arXiv:1305.6891 (2013), arXiv:1305.6891.

arXiv: 1305.6891 [nucl-th].

[141] L. Salasnich. “Bright solitons in ultracold atoms”.

In: Optical and Quantum Electronics 49.12 (2017), p. 409. issn: 1572-817X.

[142] David O. Harris, Gail G. Engerholm, and William D. Gwinn.

“Calculation of Matrix Elements for One-Dimensional Quantum-Mechanical

Problems and the Application to Anharmonic Oscillators”.

In: The Journal of Chemical Physics 43.5 (1965), pp. 1515–1517.

[143] A. S. Dickinson and P. R. Certain. “Calculation of Matrix Elements for

One-Dimensional Quantum-Mechanical Problems”.

In: The Journal of Chemical Physics 49.9 (1968), pp. 4209–4211.

[144] D. Baye and P.-H. Heenen.

“Generalised meshes for quantum mechanical problems”. In: Journal of

Physics A: Mathematical and General 19.11 (1986), pp. 2041–2059.

[145] J.V. Lill, G.A. Parker, and J.C. Light. “Discrete variable representations and

sudden models in quantum scattering theory”.

In: Chemical Physics Letters 89.6 (1982), pp. 483–489. issn: 0009-2614.

[146] J. V. Lill, Gregory A. Parker, and John C. Light.

“The discrete variable–finite basis approach to quantum scattering”.

In: The Journal of Chemical Physics 85.2 (1986), pp. 900–910.

[147] J. C. Light and Z. Bačić. “Adiabatic approximation and nonadiabatic

corrections in the discrete variable representation: Highly excited vibrational

states of triatomic molecules”.

In: The Journal of Chemical Physics 87.7 (1987), pp. 4008–4019.

[148] Jonathan Tennyson, Steven Miller, James R. Henderson, and

Brian T. Sutcliffe. “Highly Excited Rovibrational States of Small Molecules”.

In: Philosophical Transactions: Physical Sciences and Engineering 332.1625

(1990), pp. 329–341. issn: 09628428.

[149] Robert G. Littlejohn, Matthew Cargo, Tucker Carrington, Kevin A. Mitchell,

and Bill Poirier.

“A general framework for discrete variable representation basis sets”.

In: The Journal of Chemical Physics 116.20 (2002), pp. 8691–8703.

[150] Robert G. Littlejohn and Matthew Cargo.

“An Airy discrete variable representation basis”.

In: The Journal of Chemical Physics 117.1 (2002), pp. 37–42.

[151] Robert G. Littlejohn and Matthew Cargo.

“Bessel discrete variable representation bases”.

In: The Journal of Chemical Physics 117.1 (2002), pp. 27–36.

[152] C. Lin, F. H. Zong, and D. M. Ceperley. “Twist-averaged boundary

conditions in continuum quantum Monte Carlo algorithms”.

In: Phys. Rev. E 64 (1 2001), p. 016702.

[153] Jindřich Kolorenč and Lubos Mitas.

“Applications of quantum Monte Carlo methods in condensed systems”.

In: Reports on Progress in Physics 74.2 (2011), p. 026502.

[154] Waseem S. Bakr, Jonathon I. Gillen, Amy Peng, Simon Fölling, and

Markus Greiner. “A quantum gas microscope for detecting single atoms in a

Hubbard-regime optical lattice”. In: Nature 462.7269 (2009), pp. 74–77.

issn: 1476-4687.

[155] Philip Zupancic, Philipp M. Preiss, Ruichao Ma, Alexander Lukin,

M. Eric Tai, Matthew Rispoli, Rajibul Islam, and Markus Greiner.

“Ultra-precise holographic beam shaping for microscopic quantum control”.

In: Opt. Express 24.13 (2016), pp. 13881–13893.

[156] Philip Zupancic, Philipp M. Preiss, Ruichao Ma, Alexander Lukin,

M. Eric Tai, Matthew Rispoli, Rajibul Islam, and Markus Greiner.

“Ultra-precise holographic beam shaping for microscopic quantum control”.

In: Opt. Express 24.13 (2016), pp. 13881–13893.

[157] Richard W. Bowman, Graham M. Gibson, Anna Linnenberger,

David B. Phillips, James A. Grieve, David M. Carberry, Steven Serati,

Mervyn J. Miles, and Miles J. Padgett. ““Red Tweezers”: Fast, customisable

hologram generation for optical tweezers”.

In: Computer Physics Communications 185.1 (2014), pp. 268–273.

issn: 0010-4655.

[158] Alexander L. Gaunt and Zoran Hadzibabic.

“Robust Digital Holography For Ultracold Atom Trapping”.

In: Scientific Reports 2.1 (2012), p. 721. issn: 2045-2322.

[159] G. Gauthier, I. Lenton, N. McKay Parry, M. Baker, M. J. Davis,

H. Rubinsztein-Dunlop, and T. W. Neely. “Direct imaging of a

digital-micromirror device for configurable microscopic optical potentials”.

In: Optica 3.10 (2016), pp. 1136–1143.

[160] Yu-Xuan Ren, Zhao-Xiang Fang, Lei Gong, Kun Huang, Yue Chen, and

Rong-De Lu. “Dynamic generation of Ince-Gaussian modes with a digital

micromirror device”. In: Journal of Applied Physics 117.13 (2015), p. 133106.

[161] Dustin Stuart, Oliver Barter, and Axel Kuhn.

“Fast algorithms for generating binary holograms”.

In: arXiv e-prints, arXiv:1409.1841 (2014), arXiv:1409.1841.

arXiv: 1409.1841 [physics.optics].