School of Physics Theses

Topological quantum computing with magnet-superconductor hybrid systems
Crawford, Daniel (2023-11)

Developing a practical general-purpose quantum computer is this era's moonshot project, enabling fundamental advances in simulating quantum many-body systems, as well as promising new classical-computer-beating algorithms with applications in cryptography, meteorology, economics, and logistics. Current quantum processors struggle with short coherence times, meaning that the fragile quantum bits (qubits) break down, resulting in high error rates. Complicated or long calculations are therefore prohibitive to run on current devices. Quantum error correction could be the solution; however, many physical qubits are required to encode a single logical qubit, so a massive scaling up of hardware is required to realise even a modest number of fault-tolerant logical qubits. Over the past twenty years the idea of engineering an inherently fault-tolerant, or topological, quantum computer has been developed. In principle, these fault-tolerant qubits do not decohere, owing to a topological protection: the information is distributed across a physical system such that local perturbations do not damage the whole information encoding. Majorana zero-modes, characteristic quasiparticles of topological superconductors, have emerged as a leading candidate for the building blocks of a fault-tolerant qubit. Many experimental platforms which might yield Majorana zero-modes have been proposed, but as of writing, unambiguous evidence for Majorana zero-modes and topological superconductivity has not been presented in any experiment. Here I study magnet-superconductor hybrid (MSH) systems, which involve networks of magnetic adatoms assembled on a superconducting surface via lateral atom manipulation using a scanning tunneling microscope tip. These systems are clean and crystalline, and thus are an ideal platform for experiments.
I present compelling theoretical and experimental evidence for topological superconductivity in Mn and Fe chains on Nb(110). However, the systems investigated experimentally so far have long localisation lengths, resulting in hybridised Majorana modes. Because these modes cannot be used to build a fault-tolerant qubit, I theoretically investigate several extensions to these experiments. I propose constructing quasi-one-dimensional chains consisting of several rows of magnetic adatoms, with ferromagnetic order in one crystalline direction and antiferromagnetic order in the other. I also suggest engineering the Nb(110) surface with an alloy to dramatically increase the Rashba splitting. Both of these proposals are readily accessible in experiment, and could yield non-hybridised Majorana zero-modes. Having established the viability of the platform, I introduce a numerical apparatus for studying many-body non-equilibrium superconducting physics. While this apparatus is generic and can be applied to any superconducting problem, here I use it to study topological quantum computing on an MSH platform. I first show that quantum gates can indeed be implemented via braiding Majorana zero-modes. I then show how single-molecule magnets can be used to initialise and read out MSH qubits. I build on this protocol and introduce a dressed Majorana qubit, which combines an MSH network with single-molecule magnets. These could be easier to initialise and read out than a conventional Majorana qubit.

Addressing domain shift in deeply-learned jet tagging at the LHC
Ore, Ayodele Oladimeji (2023-09)

Over the last fifteen years, deep learning has emerged as an extremely powerful tool for exploiting large datasets. At the Large Hadron Collider, which has been in operation over the same time span, an important use case is to identify the initiating particles of hadronic jets. Due to the complexity of the radiation patterns within jets, neural-network-based classifiers are able to outperform traditional techniques for jet tagging. While these approaches are powerful, neural networks must be applied carefully to avoid performance losses in the presence of domain shift, where the data on which a model is evaluated follow different statistics to the training dataset. This thesis presents studies of possible strategies to mitigate domain shift in the application of deep learning to jet tagging. First, we develop a deep generative model that can separately learn the distributions of quark and gluon jets from mixed samples. Building on the jet topics framework, this model provides the ability to sample quark and gluon jets in high dimension without taking input from Monte Carlo simulations. We demonstrate the advantage of the model over a conventional approach in terms of estimating the performance of a quark/gluon classifier on experimental data. One can also use likelihoods under the model to perform classification that is robust to outliers. We go on to evaluate fully and weakly-supervised classifiers using real datasets collected at the CMS experiment. Two measurements of the quark/gluon mixture proportions of the datasets are made under different assumptions. Compared to the predictions based on simulation, we either over- or underestimate the quark fractions of each sample depending on which assumption is made.
When estimating the discrimination power of the classifiers in real data, we find that while the absolute performance depends on the choice of fractions, the rankings among the models are stable. In particular, weakly-supervised models trained on real jets outperform both simulation-trained models. Our generative networks yield competitive classification and provide a better model for the quark and gluon jet topic distributions in data than the simulation. Finally, we investigate the performance of a number of methods for training mass-generalised jet taggers, with a focus on algorithms that leverage meta-learning. We study the discrimination of jets from boosted Z' bosons against a QCD background and evaluate the networks' performance at masses distant from those used in training. We find that a simple data augmentation strategy that standardises the angular scale of jets with different masses is sufficient to produce strong generalisation. The meta-learning algorithms provide only a small improvement in generalisation when combined with this augmentation.
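An augmentation that standardises the angular scale of jets can be sketched in a few lines. The scaling rule below (characteristic opening angle of a boosted decay roughly 2m/pT, mapped onto a fixed reference ratio), the function name, and the array layout are all illustrative assumptions, not the procedure actually used in the thesis:

```python
import numpy as np

def standardise_angular_scale(constituents, jet_mass, jet_pt, ref_ratio=0.5):
    """Rescale constituent angular offsets so jets of different masses
    share a common characteristic opening angle.

    constituents: array of shape (n, 3) with columns (pt, d_eta, d_phi),
    where d_eta/d_phi are angular offsets from the jet axis.
    """
    scale = 2.0 * jet_mass / jet_pt    # approximate characteristic opening angle
    factor = ref_ratio / scale         # map it onto a fixed reference scale
    out = constituents.copy()
    out[:, 1:] *= factor               # rescale d_eta and d_phi; leave pt alone
    return out
```

Training a tagger on jets preprocessed this way removes the trivial mass dependence of the angular spread, which is one plausible reading of why such an augmentation generalises across masses.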

Functional Renormalization Group Methods for Spin-Orbit Coupled Hubbard Systems
Beyer, Jacob (2023-08)

This thesis establishes the extension of the functional renormalization group to systems of arbitrary lattice complexity with additional spin or orbital degrees of freedom. Using these capabilities, we investigate the effects of spin-orbit coupling on square and triangular lattice structures, which describe, for example, cuprates, iron pnictides, strontium ruthenate, tin layers on silicon, and lead layers on silicon carbide. For the methodological advances, we build on previous studies of the truncated-unity functional renormalization group, but remedy existing symmetry-breaking issues. These were incurred when combining a sublattice degree of freedom with the expansion of non-transfer momentum dependencies in a plane-wave basis, and can be alleviated by careful selection of the bonds considered. We furthermore demonstrate a wide range of intricacies, paramount for correct functional renormalization calculations, all of which we resolved. We validate the resulting algorithms to a degree of certainty not hitherto achieved, heralding a novel approach to quantitative comparison. All of this is contained and published in a high-performance C++ implementation, already in use by junior researchers. Motivated by experimental results, we study the effect of Rashba spin-orbit coupling in the square-lattice Hubbard model. We find the superconducting instabilities to be robust under weak-to-moderate Rashba coupling strengths. When the coupling is increased further, the transition scale decreases significantly. We furthermore measure the contribution of triplet superconductivity to indicate regions of interest for topological effects. Taking advantage of the functional renormalization group's capability to produce phase diagrams, we also investigate particle-hole instabilities in the system.
Here we find a complex interplay of commensurate and incommensurate spin-density waves and unexpected regions of accidental nesting. The weak-to-intermediate coupling phase diagram in filling and spin-orbit coupling strength is presented. We lastly turn our attention to triangular lattice materials. Here, recent ab-initio calculations predict high Rashba spin-orbit coupling strengths in, for example, Pb on SiC. We introduce Rashba spin-orbit coupling to the Hubbard model, finding a wide range of spin-density waves with differing ordering vectors, some of which appear favorable for multi-q instabilities. We further find superconducting phases around half-filling and at low filling. The region around half-filling is singlet-dominated, gaining triplet weight with increased spin-orbit coupling. By contrast, the pure triplet region at low filling is an extended phase, persisting under spin-orbit coupling. We present a phase diagram for the triangular-lattice Rashba-Hubbard model in filling and spin-orbit coupling strength.

Massive Black Holes
Paynter, James Robert (2023-08)

Black holes are among the most fundamental astrophysical objects in our universe. In this thesis I look at massive black holes (MBH) with masses $10^{4}$-$10^{10}$ times that of our sun. In particular, I investigate how their gravitational influence distorts photon trajectories and describe how this can be used to study MBH. This phenomenon, known as gravitational lensing, results in changes in the shape and brightness of the images of the source as seen by a distant observer. The most striking manifestation of gravitational lensing is multiple images, known as \emph{strong} gravitational lensing. Strong gravitational lensing also results in the magnification of one or more of the images above that which would have been observed in the absence of deflecting matter. The number of cosmological black holes (MBH that do not belong to a galaxy core) is not well constrained. Gravitational lens statistics is one of the few ways to probe their number density. The fraction of sources experiencing strong gravitational lensing (multiple-image formation) is proportional to the number density of gravitational lenses which are able to form such images. Gamma-ray bursts (GRBs) are short bursts of $\gamma$-rays which signify the birth of a stellar-mass black hole. Gravitational lensing of time-series data (lightcurves) manifests as repetition of the primary signal as a lensed ``echo''. I describe the Bayesian parameter estimation and model selection software \pygrb{}, which I wrote for this thesis. I use \pygrb{} to analyse GRB lens candidates from the Burst And Transient Source Experiment (BATSE) GRB catalogue to determine how similar the putative GRB lensed echo images are. I find one convincing candidate, GRB~950830, which passes all our tests for statistical self-similarity. I conclude that GRB~950830 was gravitationally lensed by a $(1+z_l)M_l\approx\unit[5.5\times 10^4]{\msun}$ intermediate-mass black hole (IMBH).
Furthermore, based on the occurrence rate of this lensing event, I estimate that the density of IMBH in the universe is $n_\textsc{imbh}=\unit[6.7^{+14.0}_{-4.8}\times10^{3}]{Mpc^{-3}}$. I also study the merger of black holes, looking at the recoiling quasar E1821+643 (E1821 hereafter). E1821 has a mass of $\mbh \sim \unit[2.6\times10^9]{\msun}$ and is moving with a line-of-sight velocity $v_\text{los}\approx \unit[2,070\pm50]{\kms}$ relative to its host galaxy. I use Bayesian inference to infer that E1821+643 was likely formed from a binary black hole system with masses of $m_1\sim 1.9^{+0.5}_{-0.4}\times \unit[10^9]{M_\odot}$ and $m_2\sim 8.1^{+3.9}_{-3.2} \times \unit[10^8]{M_\odot}$ (90\% credible intervals). Given our model, the black holes in this binary were likely spinning rapidly, with dimensionless spin magnitudes of ${\chi}_1 = 0.87^{+0.11}_{-0.26}$ and ${\chi}_2 = 0.77^{+0.19}_{-0.37}$. I find that E1821+643 is likely to be rapidly rotating, with dimensionless spin ${\chi} = 0.92\pm0.04$. Recoiling black holes are one method to populate the universe with massive black holes; however, these are expected to be rare. Massive black holes carry with them a tight cluster of stars and stellar remnants. These stars will occasionally pass through the optical caustic(s) of the black hole, which may lead to observable brightening of the star. Magnifications of greater than one million can easily be achieved; I term these ``Gargantuan Magnification Events'' (GMEs). I estimate the rate at which this lensing occurs, including the distribution of magnifications and event durations. I consider GMEs of pulsars in orbit of MBH as a possible generating mechanism for Fast Radio Bursts (FRBs). I find that pulsar GMEs are able to account for $0.1$-$1\%$ of the total FRB rate as observed by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst (CHIME/FRB) radio observatory. These seemingly unrelated problems all tied together in the end.
This thesis is a study of black holes, their interaction with light and matter, and how they evolve through cosmic time. Many lifetimes of work have gone into generating the theory behind the sentence just prior. I hope that my contributions embellish these theories.

Developing and applying quantum sensors based on optically addressable spin defects
Healey, Alexander Joseph (2023-04)

Quantum sensing aims to further our understanding of the natural world and support an upcoming technological revolution by exploiting quantum properties or systems to exceed the performance of classical sensing. Owing to their convenient modes of operation and strong room-temperature quantum properties, optically active spin defects hosted within solid-state materials have come to prominence as one of the foremost tools of choice in this landscape. Many applications now aim to leverage dense ensembles of such defects to boost measurement sensitivity or scale up, which places greater emphasis on the quality of the host material and on sensor production methods, since cherry-picking individual defects is no longer an option. The prototypical example of such a defect is the nitrogen-vacancy (NV) centre in diamond, which exhibits remarkable room-temperature spin coherence, bestowed upon it by diamond's material properties. In this thesis, we first look at optimising the production of NV ensembles for quantum sensing, aiming to efficiently and cost-effectively produce sensors capable of performing high-sensitivity measurements in two key regimes that are central to the experimental applications explored later. The topics examined are hyperpolarisation of a nuclear spin ensemble on the diamond surface through coupling to an ultra-near-surface NV layer, and investigation of the properties of a van der Waals antiferromagnet through widefield NV microscopy. The demands these applications place on the NV layer differ markedly: charge stability and quantum coherence properties are vital for the former, while the ability to scalably and reproducibly create layers of known thickness is crucial to the latter.
In light of these studies, we finally consider whether a different spin system housed within an entirely separate materials system, the boron-vacancy defect in hexagonal boron nitride, may be a suitable alternative to the well-established NV-diamond system. We find that the distinct properties of the new host material provide both advantages and disadvantages compared to diamond, and that this system could allow quantum sensing to find even broader scope in the future. By investigating the link between host material properties and the suitability of a quantum sensor for given applications, this thesis provides a unique perspective on the future of the field, which will likely demand more highly specialised and varied sensors.

Deterministic implantation of donor ions in near-surface nanoarrays for silicon quantum computing
Robson, Simon Graeme (2023-08)

Remarkable theoretical and experimental progress has been achieved with donor-based silicon quantum computing architectures in the last decade, firmly cementing this implementation as one of the forerunners in the race to build the first large-scale quantum computer. By employing near-surface donor atoms (P, As, Sb, Bi) as the storage medium, both their nuclear and electronic spin states can be used to encode quantum information. Silicon is an excellent host material: donor atoms can easily be incorporated into its lattice, and it can be isotopically enriched into 28Si, giving donor spin coherence times in excess of 30 s. Despite a significant number of experimental challenges, the end goal of creating a near-surface entangled donor array to enable multi-qubit operations is in sight. The aim of this work is to address some significant recent advances towards this goal through the use of directed implantation of single donor ions. Ion implantation has previously been shown to be a valid method for introducing donor qubits into silicon, and has for decades been a well-established fabrication technique in the classical semiconductor industry. In this work, it is shown that by employing silicon-based active detection substrates connected to an ultra-low-noise charge-sensitive preamplifier, single donor ions can be deterministically implanted at depths between 10 and 20 nm with a detection confidence exceeding 99.8%. The recent acquisition of an in-situ stepped nanostencil extends this concept further to allow the controlled placement of single donors to a lateral precision of around 50 nm. Through the use of a step-and-repeat procedure, the ability to form two-dimensional qubit nanoarrays with this system is demonstrated.
With the technique readily capable of scaling up to hundreds of qubits or more, this represents a significant milestone towards the realisation of a top-down solid-state qubit architecture. A complementary method for single-donor placement in silicon is also given, again using ion implantation. It involves the use of a focused ion beam instrument that has been modified to include a keV electron-beam ion source, giving access to a large selection of ion species focused to a 180 nm spot size. By integrating the same high-confidence single-ion detection technology, it is shown that this technique is also capable of creating large-scale donor arrays in silicon, but without the need for a physical mask. Its use not just as a single-ion implanter, but also as a novel instrument for near-surface characterisation of semiconductors, is also presented. The system's functionality is demonstrated through the identification of fabrication faults in a silicon-based device that may otherwise have gone undetected through conventional characterisation methods. The adaptation of the focused ion beam technique into an efficient method for creating microvolumes of isotopically pure 28Si is also explored. This is an important area of focus, required to achieve ultra-long qubit coherence times, with the results of a preliminary characterisation confirming the technique's suitability. Finally, the adaptation of the single-ion detection technology to demonstrate a new approach for performing high-resolution Rutherford backscattering spectrometry is also presented. Major advantages include a small physical detector footprint and ease of integration into existing beamline structures. In keeping with the overall theme of this study, the system is used to analyse samples pertinent to silicon donor quantum computing, such as shallowly implanted donors and enriched 28Si wafers.
The series of experiments performed in this thesis thus represents some significant steps towards achieving the scalable fabrication of a donor-based silicon quantum computer.

Towards Automating the Design and Optimisation of Particle Accelerators
Zhang, Xuanhao (2023-06)

This thesis investigates the efficiency and optimality of accelerator lattice structures. Within the context of circular accelerators for hadron therapy, an analysis of the design methodology of existing compact circular accelerators was carried out. This analysis prompted the design of a novel lattice based on two double-bend achromat arcs as an alternative to conventional periodic cell structures. The feasibility of performing slow extraction for hadron therapy purposes was demonstrated using the proposed lattice, and the extraction efficiency was optimised by tuning the lattice optics. In the second half of the thesis, an automated design and optimisation algorithm was proposed, developed as a general-purpose lattice design tool. The development process examined three optimisation routines: the simulated annealing algorithm, a simple genetic algorithm, and the Non-dominated Sorting Genetic Algorithm (NSGA). Three encoding methods were developed to represent the accelerator lattice for use with the optimisation routines: the finite slicing encoder, the neural network encoder, and the matrix encoder. It was found that the combination of the NSGA-III algorithm and the matrix encoder was the most efficient method for exploring the feasible parameter space for a generalisable lattice design problem.
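Of the three routines named above, simulated annealing is the simplest to sketch. The following is a minimal, generic implementation; the toy quadratic objective stands in for a real beam-optics figure of merit, and all names and parameter values are illustrative assumptions rather than the thesis's actual tool:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.95,
                        n_iter=2000, seed=0):
    """Minimise `objective` over a real-valued parameter vector.

    Each move perturbs one randomly chosen parameter; worse moves are
    accepted with probability exp(-delta/T), and T decays geometrically,
    so the search anneals from exploration towards greedy refinement.
    """
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling
    return best_x, best_f

# Toy stand-in for a lattice figure of merit: distance from target tunes.
target = [1.7, 2.3]
obj = lambda k: sum((ki - ti) ** 2 for ki, ti in zip(k, target))
x, f = simulated_annealing(obj, [0.0, 0.0])
```

In a lattice-design setting the parameter vector would instead come from one of the encoders (e.g. a matrix encoding of element strengths), with the objective evaluated by an optics code.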

Surface acoustic wave neuromodulation
Peng, Danli (2023-03)

Neurological disorders such as Alzheimer's disease often involve impaired axonal function, underscoring the importance of modulating diffusion processes within axons for treatment. Surface acoustic waves (SAWs) offer a promising avenue for this, given their unique properties such as miniaturized dimensions, absence of shock waves, and reduced self-heating compared to traditional ultrasound methods. This thesis explores the utility of SAWs in enhancing axonal diffusion as a potential treatment for neurological disorders characterized by axonal dysfunction. The initial phase of the research employed retinal ganglion cells as a model system for studying diffusion. The axons of retinal ganglion cells are naturally radially aligned and serve as a well-established model, offering advantages in data analysis and reduced error. A mathematical model was established to measure dye diffusion in these cells, laying the groundwork for understanding diffusion mechanisms that are broadly applicable, including but not limited to Alzheimer's disease. Subsequently, I investigated SAW-driven diffusion enhancement in artificial axons, represented by microchannels. My findings indicate up to a 39% increase in diffusion rates within these microchannels when subjected to SAWs. Numerical simulations were conducted to understand the acoustic pressure fields and acoustic streaming fields, elucidating the mechanisms behind SAW-based diffusion enhancement. Lastly, I explored the biological implications of SAWs by studying their effects on astrocyte recovery, a key factor in brain injury treatment. My results demonstrate that SAWs can promote astrocyte coverage and extrusion growth without affecting width, primarily through enhanced cellular activity rather than increased membrane permeability.
Overall, this thesis contributes a new analytical approach to measuring diffusion, advances our understanding of SAWbased mechanisms, and offers a novel potential treatment avenue for neurological disorders involving axonal dysfunction.

Improved hidden Markov models for continuous gravitational wave searches
Clearwater, Patrick Winston (2022-11)

The direct detection of gravitational waves in 2015 ushered in a new way of making astronomical observations and provided a rich stream of data for making astrophysical inferences. The detections reported by the Advanced Laser Interferometer Gravitational-Wave Observatory (Advanced LIGO) and the Virgo detector during their first three observing runs have so far all been compact binary coalescences: short-duration signals from the late stages of compact object mergers. There is much left to be discovered, and this thesis advances the state of the art in searches for continuous wave signals: persistent, relatively weak signals from sources such as neutron stars. The thesis describes two significant improvements to the hidden Markov model (HMM) scheme often used for continuous wave searches, applies the HMM to a search of LIGO Observing Run 2 (O2) data, and describes two ancillary improvements (graphics processing unit optimisation and few-bit digitisation) that improve the performance and memory efficiency of the implementation. HMMs are used in continuous wave searches to account for spin wandering: small stochastic variations in signal frequency. They work by splitting detector data into short time segments, calculating a detection statistic as a function of frequency in each segment, and then tracking the most likely path for the signal frequency based on a user-specified transition model (an unbiased random walk in this thesis). We introduce a detection statistic called the J-statistic which is sensitive to sources that are part of a binary system. The J-statistic reliably detects signals weaker by a factor of four compared to the Bessel-weighted F-statistic, the previous detection statistic used in HMM searches for binary sources. This improved HMM scheme allows searches for binary sources to be as sensitive as searches for isolated sources.
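The frequency-tracking step described above, finding the most likely frequency path through a grid of per-segment detection statistics, is Viterbi decoding. A minimal sketch, assuming a precomputed grid of log detection statistics and a stay-or-move-one-bin transition model with equal log-weights (a common simplification, not necessarily the exact model used in the search):

```python
import numpy as np

def viterbi_track(log_stat):
    """Recover the most likely frequency-bin path.

    log_stat: array (n_segments, n_freq_bins) of log detection statistics.
    Between segments the frequency bin stays put or moves by at most one
    bin, all three options carrying equal transition weight.
    """
    n_t, n_f = log_stat.shape
    score = log_stat[0].copy()
    back = np.zeros((n_t, n_f), dtype=int)
    for t in range(1, n_t):
        # best predecessor among bins {j-1, j, j+1} for each bin j
        cand = np.full((3, n_f), -np.inf)
        cand[0, 1:] = score[:-1]      # came from the bin below
        cand[1] = score               # stayed in the same bin
        cand[2, :-1] = score[1:]      # came from the bin above
        choice = cand.argmax(axis=0)
        back[t] = np.arange(n_f) + choice - 1
        score = cand.max(axis=0) + log_stat[t]
    # backtrack from the best terminal bin
    path = [int(score.argmax())]
    for t in range(n_t - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Because the dynamic program touches each (segment, bin) cell a constant number of times, the cost is linear in the grid size, which is what keeps HMM searches computationally cheap compared with fully coherent methods.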
We use the J-statistic HMM pipeline, called "version 2", to search LIGO O2 data for gravitational radiation from the low-mass X-ray binary Scorpius X-1 over a 60-650 Hz frequency band. While no detection is claimed, three candidates survive our follow-up veto procedure. Assuming a non-detection, the search sets a 95 per cent confidence upper limit on strain h_0 of 3.47e-25 at 194.6 Hz when marginalising over the inclination angle of the source. One drawback of the HMM is that each time segment is combined incoherently: version 2 of the HMM does not enforce a consistent signal phase in the transition between blocks. We introduce version 3 of the HMM, which does track inter-block phase. The result is a detection pipeline, applicable to either isolated or binary sources, that is a factor of ~1.5 more sensitive than version 2, and closes much of the gap between the HMM and a fully coherent search while retaining the computational efficiency of earlier HMM versions. We describe an implementation of the J-statistic and HMM on graphics processing units (GPUs), which provides an order-of-magnitude improvement in processing speed and was essential for covering the wide parameter range used in the O2 Scorpius X-1 search. Running that search using the GPU implementation of the pipeline required fewer than approximately 3e5 GPU-hours. We further describe the first application of few-bit digitisation techniques to continuous gravitational wave search methods, finding a decrease in sensitivity of only 6 per cent (two-bit digitisation) or 25 per cent (one-bit) in return for a factor of 32 or 64, respectively, reduction in memory use.
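The memory factors quoted above follow directly from the word sizes: one-bit digitisation keeps only the sign of each 64-bit sample (64/1 = 64), two-bit keeps four levels (64/2 = 32). A minimal sketch of the idea; the one-standard-deviation threshold for the two-bit levels is an illustrative choice, not necessarily the scheme used in the thesis:

```python
import numpy as np

def one_bit_digitise(x):
    """One-bit digitisation: keep only the sign of each sample,
    packing eight signs per byte."""
    bits = (x >= 0).astype(np.uint8)
    return np.packbits(bits)

def two_bit_digitise(x):
    """Two-bit digitisation: four levels chosen by sign and magnitude,
    with the magnitude threshold set at one standard deviation."""
    thresh = x.std()
    levels = np.where(x >= 0,
                      np.where(np.abs(x) > thresh, 3, 2),
                      np.where(np.abs(x) > thresh, 0, 1))
    return levels.astype(np.uint8)

rng = np.random.default_rng(1)
data = rng.standard_normal(8000)       # stand-in for float64 detector samples
packed = one_bit_digitise(data)
factor = data.nbytes / packed.nbytes   # 64000 bytes down to 1000 bytes
```

The surprisingly small sensitivity penalty comes from the fact that the detection statistic averages over very many samples, so coarse per-sample quantisation noise largely washes out.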

Constraining Cosmology with Secondary Anisotropies and Cluster Lensing of the Cosmic Microwave Background with the South Pole Telescope
Chaubal, Prakrut (2023-06)

There is a wealth of information encoded in the higher angular multipoles of the Cosmic Microwave Background (CMB) waiting to be explored with high-resolution observations. In this thesis I discuss the work done during my PhD, where I used the latest data, observed with the South Pole Telescope, to measure the secondary anisotropies of the CMB. I also discuss the use of CMB-cluster lensing as a powerful tool to constrain cosmology. First, I present the first-ever measurement of the high-\el{} temperature anisotropies from the 2019-2020 winter observations of the 1500 \sqdeg{} SPT-3G survey, and discuss the method used to obtain an unbiased measurement of the bandpowers from the low-level data from the telescope. Second, I investigate the lensing of the CMB by galaxy clusters. I show the improvement to cosmological constraints from galaxy cluster surveys with the addition of CMB-cluster lensing data. I explore the cosmological implications of adding mass information from the 3.1$\sigma$ detection of gravitational lensing of the CMB by galaxy clusters to the Sunyaev-Zel'dovich (SZ) selected galaxy cluster sample from the 2500 \sqdeg{} SPT-SZ survey and targeted optical and X-ray follow-up data. In the \lcdm{} model, the combination of the cluster sample with the Planck power spectrum measurements prefers $\sig{}(\Omega_m/0.3)^{0.5}=0.831\pm0.020$. Adding the cluster data reduces the uncertainty on this quantity by a factor of 1.4, which is unchanged whether or not the 3.1$\sigma$ CMB-cluster lensing measurement is included. We then forecast the impact of CMB-cluster lensing measurements with future cluster catalogs.
Adding CMB-cluster lensing measurements to the SZ cluster catalog of the ongoing SPT-3G survey is expected to improve the constraint on the dark energy equation of state w by a factor of 1.3, to $\sigma(w)=0.19$. We find the largest improvements from CMB-cluster lensing measurements for \sig{}, where adding CMB-cluster lensing data to the cluster number counts reduces the expected uncertainty on \sig{} by factors of 2.4 and 3.6 for SPT-3G and CMB-S4 respectively.