School of Physics - Theses

Search Results

Now showing 1 - 10 of 13
  • Item
    Diagnostics and control of transverse coupled-bunch instabilities in third generation electron storage rings
    Peake, David (2011)
    The Australian Synchrotron is a newly commissioned third-generation light source situated in Melbourne, Australia. Synchrotron radiation is produced from the 216 metre circumference storage ring, where 3 GeV electrons are trapped within a lattice formed by dipole bending magnets and multipole focussing magnets. Coupled-bunch instabilities form the primary limitation of modern storage rings: they enforce an upper limit on stored current and can reduce the utility of radiation production by increasing the effective emittance of the ring. Stored current limitations due to beam instabilities were discovered early in the commissioning phase of the Australian Synchrotron storage ring and were initially controlled by substantially increasing the chromaticity of the lattice from (ξx, ξy) = (2, 2) to (ξx, ξy) = (3.5, 13). Subsequent additions to the ring have increased the strength of destructive instabilities to the point where detrimental side-effects of the chromatic corrections reduce the ability of the ring to damp instabilities. This increase in instability strength has led to a shift from purely passive methods of instability control to the design and construction of an active transverse feedback system. This thesis describes the commissioning of a bunch-by-bunch transverse feedback system designed to combat coupled-bunch instabilities, allowing the chromaticity of the storage ring lattice to be reduced to the initial design values (ξx, ξy) = (2, 2). Reducing the chromaticity also restores the dynamic aperture and increases the lifetime of the beam. Novel methods for tuning the system and maximising the damping rate of the beam are introduced. Using these methods, the feedback system was successfully commissioned and was shown to have the stability required for user-mode storage ring operations.
The bunch-by-bunch transverse feedback system can also be leveraged as a powerful diagnostic tool. New data acquisition techniques have been designed to allow for the study of different instability mechanisms as well as parameters present in the equations of motion for stored particles. These techniques and the suite of results achieved are presented.
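The damping principle described in this abstract can be caricatured with a toy turn-by-turn simulation. This is purely an illustration, not the thesis's feedback system: the tune, growth rate and feedback gain below are invented values, and the bunch is reduced to a single phase-space oscillator.

```python
import numpy as np

# Toy model: an unstable transverse oscillation grows by e**g per turn;
# a bunch-by-bunch feedback kick opposing the transverse velocity damps it.
turns, tune, g, gain = 2000, 0.29, 0.002, 0.01  # illustrative, not machine values
phi = 2 * np.pi * tune

y, yp = 1.0, 0.0  # initial offset and angle (arbitrary units)
amps = []
for _ in range(turns):
    # one-turn phase-space rotation with per-turn instability growth
    y, yp = (np.exp(g) * (y * np.cos(phi) + yp * np.sin(phi)),
             np.exp(g) * (-y * np.sin(phi) + yp * np.cos(phi)))
    yp -= gain * yp  # feedback kick proportional to the measured velocity
    amps.append(np.hypot(y, yp))

print(amps[-1] < amps[0])  # True when the feedback overcomes the growth
```

Because the kick only acts on the velocity coordinate, its average damping rate is roughly gain/2 per turn; damping wins here since gain/2 > g, mirroring the abstract's point that active feedback can replace chromatic damping.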
  • Item
    Atomic-resolution imaging using inelastically scattered electrons
    Lugg, Nathan R. (2011)
    Transmission electron microscopy (TEM) is a powerful technique for studying matter at the atomic scale. In this thesis we theoretically investigate how images in several imaging modes are formed using electrons that have scattered inelastically within a specimen. Understanding how both the channelling of the incident electron probe and the inelastic transition potentials within the specimen combine to generate the inelastically scattered wave is fundamental to understanding how an inelastic image is formed. We demonstrate that atomic-resolution chemical mapping of elements within a specimen can be achieved using energy-filtered transmission electron microscopy (EFTEM) based on inner-shell ionisation. We show how the approach based on calculating the elastic wavefunction and individual inelastic (ionisation) transition potentials can provide insight into when direct interpretation may and may not be possible. This is demonstrated by a comparison between experimental data and simulation for the EFTEM image of the La N4,5 edge in LaB6, in which direct interpretation of the location of the La columns is possible. Chemical mapping of the atoms in a specimen using scanning transmission electron microscopy (STEM) based on electron energy-loss spectroscopy (EELS) has recently been demonstrated using known test specimens. In this thesis we present the first applications of this novel technique to the compositional determination of a technologically important Ce/Zr mixed-oxide (Ce2Zr2O8) catalytic nanocrystal. Full quantum mechanical calculations are an essential part of the analysis and are used to identify perturbations in the chemical composition from that of the ideal Ce2Zr2O8 ordered nanocrystal structure. To date, standard TEM operating voltages (≥ 100 kV) have been used in STEM. These high accelerating voltages lead to radiation damage, especially in specimens containing light elements.
Recent technological advances in aberration correction have enabled high-resolution imaging at lower incident energies where knock-on damage is less problematic. With the amelioration of this problem in mind, we will discuss theoretically the advantages and disadvantages of moving to the low accelerating voltage regime with respect to electron channelling and the inelastic probe-specimen interactions that take place within the specimen. High angle annular dark field (HAADF) STEM, STEM EELS imaging and annular bright field (ABF) STEM imaging are all considered. We find that, in general, elastic channelling along columns is favoured by high accelerating voltages and that, in contrast, lower accelerating voltages provide more favourable inelastic interactions. The recently developed technique of ABF STEM, which is based on both elastic and inelastic (thermal) scattering, has shown much potential in directly imaging atoms as light as Li and H, which are notoriously difficult to detect in inelastic imaging modes such as STEM EELS and HAADF STEM. Given that there is strong interest in the imaging of Li, particularly because of its significance in battery materials, we focus our discussion on microscope optimisation to explore the conditions under which Li columns can be expected to be directly visible using ABF STEM. A detailed discussion is given of the controllable parameters and the conditions most favourable for Li imaging.
  • Item
    Numerical and analytical approaches to modelling 2D flocking phenomena
    Smith, Jason A. (2011)
    In the first section of this thesis, the motion of self-propelled particles (known as boids) in a 2D system with open boundaries is considered using a Lagrangian individual-based model. Two variations of this model are developed: one with a cohesion potential based on the soft-core Morse potential and the other based on the hard-core Lennard-Jones potential, both well understood in atomic and molecular physics. The results obtained from these two variations are then compared with one another, as well as with earlier work in the field, in order to determine the effectiveness and applicability of hard-core and soft-core cohesion potentials, the differences in flocking behaviour they produce, and the context in which each is applicable to real flocking systems. Some of the flocking phases and shapes obtained from these two models are then compared to a number of specific flocking situations observed in the real world. It is shown that a flocking model with a soft-core cohesion potential is particularly good at modelling the cluster-type flocks most commonly seen in bird flocks and fish schools, whilst a hard-core cohesion potential has a tendency to produce a distinctive wavefront-type flock which has been well documented in the context of large herds of mammals, particularly wildebeest. It is also found that stable vortex states, observed in systems of bacteria, are only seen with a soft-core cohesive potential. In the second section of this thesis, a novel approach is taken to modelling a flock as a gas. An equation of state is derived analytically for a flocking system in 2D using the virial expansion. The relationships obtained from this equation of state are compared to the results of a simple numerical simulation in order to establish the accuracy of this new approach to modelling flocking systems.
Finally, this new statistical mechanical approach to deriving a flocking model is applied to the Vicsek model, the best-understood flocking model derived from a physics perspective. The analytical relationships between key variables of the gas system, such as pressure, temperature, interaction length, entropy and heat capacity, derived from the virial equation of state are found to compare closely with the same variables derived numerically from the standard Vicsek model, demonstrating the efficacy of this new approach to modelling flocking phenomena.
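The Vicsek model referred to above can be sketched in a few lines. This is the standard textbook update rule (align each particle with the mean heading of its neighbours, add angular noise, step forward), not the thesis's own code; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: N particles in an L x L periodic box, speed v0,
# interaction radius r, noise amplitude eta (fraction of pi).
N, L, v0, r, eta, steps = 200, 10.0, 0.3, 1.0, 0.2, 200

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(steps):
    # pairwise separations with the minimum-image (periodic) convention
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) < r ** 2  # includes self
    # align with the mean heading of neighbours, then add angular noise
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# global polar order parameter: 1 = perfectly aligned flock, ~0 = disordered gas
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(round(float(phi), 3))
```

Sweeping the noise amplitude `eta` (or the density N/L²) traces out the order-disorder transition whose thermodynamic variables the virial-expansion approach of the thesis seeks to capture analytically.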
  • Item
    Topological quantum error correction and quantum algorithm simulations
    Wang, David (2011)
    Quantum computers are machines that manipulate quantum information stored in the form of qubits, the quantum analogue of the classical bit. Unlike the bit, quantum mechanics allows a qubit to be in a linear superposition of both its basis states. Given the same number of bits and qubits, the latter store exponentially more information. Quantum algorithms exploit these superposition states, allowing quantum computers to solve problems such as prime number factorisation and searching faster than classical computers. Realising a large-scale quantum computer is difficult because quantum information is highly susceptible to noise. Error correction may be employed to suppress the noise, so that the results of large quantum algorithms are valid. The overhead incurred from introducing error correction is neutralised if all elementary quantum operations are constructed with an error rate below some threshold. Below threshold, arbitrary-length quantum computation is possible. We investigate two topological quantum error correcting codes, the planar code and the 2D colour code. We find the threshold for the 2D colour code to be 0.1%, and improve the planar code threshold from 0.75% to 1.1%. Existing protocols for the transmission of quantum states are hindered by maximum communication distances and low communication rates. We adapt the planar code for use in quantum communication, and show that this allows the fault-tolerant transmission of quantum information over arbitrary distances at a rate limited only by local quantum gate speed. Error correction is an expensive investment, and thus one seeks to employ as little as possible without compromising the integrity of the results. It is therefore important to study the robustness of algorithms to noise. We show that using the matrix product state representation allows one to simulate far larger instances of the quantum factoring algorithm than under the traditional amplitude formalism representation.
We simulate systems with as many as 42 qubits on a single processor with 32 GB of RAM, comparable to amplitude-formalism simulations performed on far larger computers.
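The compression behind the matrix product state representation can be illustrated on a small example. The sketch below is not taken from the thesis: it writes a 10-qubit GHZ state, which has low entanglement, as an MPS of bond dimension 2, using O(n) parameters where the amplitude formalism needs 2^n.

```python
import numpy as np

n = 10  # qubits; the amplitude vector has 2**n entries, the MPS far fewer

# GHZ state (|0...0> + |1...1>)/sqrt(2) as an MPS of bond dimension 2.
# A_bulk[s] is the 2x2 site matrix for physical index s in {0, 1}.
A_bulk = np.zeros((2, 2, 2))
A_bulk[0] = [[1.0, 0.0], [0.0, 0.0]]
A_bulk[1] = [[0.0, 0.0], [0.0, 1.0]]
left = np.array([[1.0, 1.0]]) / np.sqrt(2)  # boundary row vector
right = np.array([[1.0], [1.0]])            # boundary column vector

def amplitude(bits):
    """Contract the MPS to recover the amplitude of one basis state."""
    m = left.copy()
    for s in bits:
        m = m @ A_bulk[s]
    return (m @ right).item()

full_amplitudes = 2 ** n        # storage for the amplitude formalism
mps_parameters = n * 2 * 2 * 2  # n sites, 2 physical x (2x2) bond indices
print(amplitude([0] * n), amplitude([1] * n), full_amplitudes, mps_parameters)
```

Only states with limited entanglement admit a small bond dimension; for the factoring-algorithm simulations described above, the usable bond dimension (and hence memory) tracks the entanglement generated by the circuit.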
  • Item
    A cold atom electron source for diffractive imaging
    Saliba, Sebastian Dylan (2011)
    Cold electron bunches can be created by near-threshold photoionisation of a cloud of cold atoms in a magneto-optical trap (MOT). The electrons can be extracted with temperatures less than 15 K and, due to this low momentum spread, an electron bunch created from cold atoms has high spatial coherence. Coupled with the potential for high brightness, a cold atom electron source (CAES) is a promising prospect for single-shot diffractive imaging of nanoscale objects. This thesis describes the construction and characterisation of a cold atom electron source. A CAES relies on laser cooling and trapping techniques which require stable, high-precision external cavity diode lasers (ECDLs) with narrow linewidth, minimal frequency noise and reproducible day-to-day operation. The research presented here begins by extending the current understanding of this crucial component of the apparatus, specifically detailing a new model of laser frequency mode selection that shows the critical role played by the external cavity in determining frequency stability. A geometric relationship is introduced, defining in a straightforward way how the mode-hop-free tuning range of an ECDL can be optimised. It is also shown that the frequency linewidth of an ECDL varies with the focus of the collimation lens used. This previously unidentified effect is described with a Gaussian optics model, and shown theoretically and experimentally to have a significant effect on the laser linewidth. A new monolithic-block ECDL design based on these results was constructed for the CAES apparatus and performed favourably in comparison to previous designs, proving to be robust and reliable. A CAES is a new type of electron source whose quantitatively different properties are under intense study. A model describing the spatial coherence properties of a CAES using a statistical optics framework was developed to better understand the capabilities of the source in diffractive imaging applications.
The extracted electron bunches are shown to have properties analogous to quasi-monochromatic paraxial optical wavefields. Based on the theoretical model, I have developed and implemented a novel technique to measure the transverse spatial coherence of the source using electron bunches arbitrarily shaped in two dimensions. The coherence length measurement indicates a lower limit of 10 nm for cold electron bunches, an order of magnitude larger than that of conventional photoemission electron sources. Such a large coherence length motivated the development of a diffractive imaging model for assessing the viability of current and future experiments. An analytic expression describing the electrostatic potential of a single biomolecule was derived for use in a diffraction simulation algorithm. Simulated diffraction patterns of a biomolecule were produced, indicating that single-shot imaging of nanoscale objects using a CAES will be achievable. Compared with conventional electron sources, the CAES thus has several advantages for diffractive imaging applications: the demonstrated substantial increase in source coherence; the novel ability to create low-emittance, arbitrarily shaped electron bunches; and the high-brightness potential desirable for single-shot imaging. The simulations and experiments characterising the CAES described here demonstrate the viability of the diffractive imaging experiments anticipated in the near future.
  • Item
    Surface engineering for quantum information processing in NV diamond
    Stacey, Alastair (2011)
    Due to an extensive list of extreme and often complementary materials properties, diamond has become a leading candidate in a range of advanced mechanical, thermal, optical and electronic applications. These include devices designed to operate under extraordinarily harsh conditions, such as the low-earth-orbit environment, and applications with highly demanding performance requirements, such as high-power electronics. Perhaps the most demanding of these areas in which diamond shows great promise is the pursuit of scalable quantum information processing (QIP) devices, with the ultimate goal being the production and integration of a large number of interacting and controllable quantum bits (qubits) within a single device. Although the production of this so-called quantum computer is theoretically compatible with diamond’s ideal intrinsic properties, experimental realization of this type of device will require absolute control over material characteristics, both in terms of the bulk material used and following the various processing steps required for device construction. Most currently successful processing techniques leave some form of crystal damage either near the surface or within the bulk of the material; due to the extreme sensitivity of any form of qubit, this residual damage can be very hard to measure, and even harder to prevent or subsequently remove. Specifically for optical QIP devices, the Nitrogen-Vacancy (NV) centre in diamond holds enormous potential in a variety of qubit architectures, most of which require coupling of this defect to a photonic cavity or waveguide system. The biggest roadblock to realization of this goal, however, is an apparent lack of NV centres close enough to any diamond sample surface without significant degradation in their spectral qualities.
In this thesis we have primarily addressed the issue of near-surface NV centres by conducting a set of experiments aimed at understanding whether NV centres can be feasibly created close to diamond surfaces (< 100 nm) whilst maintaining their desirable stable electronic/spectral properties. To this end we have completed a number of interrelated studies on single crystal and nanocrystalline diamond samples, focusing on clarifying the effect of near-surface diamond characteristics, both intrinsic and induced by fabrication procedures, on the quantum and electronic properties of near-surface diamond colour centres. We show that high crystalline quality diamond can be grown using chemical vapour deposition techniques, including the controllable growth of nanocrystalline diamond particles, and that this material is a compatible host for commensurately high spectral quality NV centres. We also show that novel colour centres can be deliberately grown using these techniques, with advantageous quantum optical properties, possibly rivaling those of the NV centre itself. Utilizing optical spectroscopy measurements, and following a range of thermal annealing steps, we show that near-surface NV centres produced by ion-implantation exhibit significant inhomogeneous and homogeneous spectral instability, making them unsuitable for optical QIP applications. Post-growth fabrication techniques, such as mechanical polishing, are also shown to produce significant near-surface crystal damage, which is likely to further degrade the performance of near-surface NV centres. We utilize advanced surface analysis techniques, such as synchrotron-based x-ray absorption, to show that nanodiamonds (< 30 nm) are subject to enhanced thermal and chemical reactivity, which can be significantly alleviated by the application of surface hydrogen termination schemes.
Conversely, the diffusion of hydrogen into CVD-grown diamond materials has been shown to passivate near-surface NV centres, and mitigation of this effect is likely to be the only significant remaining hurdle in the production of near-surface NV qubits for optical QIP devices. We have seen that hydrogen plasmas provide a useful surface processing technique, capable of removing crystal damage which is otherwise resistant to standard wet-chemical etching, and that nitrogen-dosed hydrogen plasmas form a new and advantageous plasma-based planarization technique for diamond surfaces, which may be capable of replacing the ubiquitous mechanical polishing methods. Further, we have seen that pure nitrogen plasmas have the potential to form a thermally stable surface termination barrier for diamond, offering a possible alternative to hydrogen surface passivation. Finally, we have directly measured the effect of surface termination on near-surface unoccupied electronic states, confirming that NV centres within 100 nm of the surface have a positive future in optical QIP devices.
  • Item
    Superfluid spin up and pulsar glitch recovery
    van Eysden, Cornelis Anthony (2011)
    When a massive star comes to the end of its life, it explodes in a supernova, leaving behind a compact remnant known as a neutron star. The core density of a neutron star exceeds that of terrestrial nuclei, making these cosmic objects the only known means to probe the properties of bulk nuclear matter at extreme densities. Radio waves are beamed along a neutron star's magnetic axis, which is misaligned with its rotation axis, creating a lighthouse effect which has earned these stars the name 'pulsars'. The pulse arrival times are measured with ultra-high precision, rivalling that of the best terrestrial clocks. Some pulsars undergo timing irregularities, known as glitches, during which the spin frequency of the star suddenly increases and then recovers quasi-exponentially over a period of days to weeks. In this thesis, glitch recovery is used to extract information about the pulsar interior in two ways: through high-resolution timing data, and through gravitational radiation. The interior of a pulsar has long been thought to consist of superfluid neutrons coexisting with a proton-electron plasma. In Chapter 2, a hydrodynamic model of the global flow induced during glitch recovery is constructed using the Hall-Vinen-Bekarevich-Khalatnikov (HVBK) two-component equations. The impulsive spin up of the two-component superfluid and its container is solved analytically in arbitrary geometry, generalising the extensively studied case of single-fluid spin up. The spin-up time depends on the geometry, the mutual friction coefficients $B$ and $B'$, the Ekman number $E$, and the superfluid density fraction $\rho_n$. For $B\sim O(1)$, the inviscid component undergoes Ekman pumping due to strong coupling to the viscous component, and the azimuthal velocities are “locked together” during the spin-up.
For $B\lesssim E^{1/2}$, there is no Ekman pumping in the inviscid component, and the inviscid azimuthal velocity spins up through mutual friction, on a combination of the mutual-friction and Ekman time-scales. The spin-up process is studied in spheres, cylinders (with co- and counter-rotating lids), and cones, and occurs faster in spheres and in cones which become shorter at larger radius. In Chapter 3, the coupled, dynamic response of a rigid container filled with a two-component superfluid undergoing Ekman pumping is calculated self-consistently. The container responds to the back-reaction torque exerted by the viscous component of the superfluid and an arbitrary external torque. The resulting motion is described by a pair of coupled integral equations for which solutions are easily obtained numerically. If the container is initially accelerated impulsively then set free, it relaxes quasi-exponentially to a steady state over multiple time-scales, which are a complex mix of $B$, $B'$, $E$, $\rho_n$ and the varying hydrodynamic torque at different latitudes. The spin down of light containers (compared with the contained fluid) depends weakly on $B$, $B'$, $E$, $\rho_n$ and occurs faster than the Ekman time. When the fluid components are initially differentially rotating, the container can “overshoot” its asymptotic value before increasing again. When a constant external torque is applied, the superfluid components rotate differentially and non-uniformly in the long term. For an oscillating external torque, the amplitude and phase of the container response are most pronounced when the container is light compared with the contained fluid. The coupled spin-up of a two-component superfluid and its container are fitted to radio pulsar timing data in Chapter 4.
All glitches recorded to date in the Crab and Vela pulsars are considered, with specific attention given to the 1985 and 1988 Vela glitches recorded at Mount Pleasant observatory in Australia and the 1975 Crab glitch recorded at Jodrell Bank in England. The model successfully accounts for the quasi-exponential recovery observed in pulsars like Vela and the “overshoot” observed in pulsars like the Crab. By fitting the model to high-resolution timing data, three constitutive coefficients in bulk nuclear matter can be extracted: the shear viscosity, the mutual friction parameters, and the charged fluid fraction. The fitted coefficients for the Crab and Vela are compared with theoretical predictions for several equations of state, including the colour-flavour locked and two-flavour colour superconductor phases of quark matter. Good agreement is found between the bulk-averaged, effective parameters extracted from observations and the theory of condensed protons and neutrons, giving support to the hydrodynamic model. The spin-down recovery of impulsively spun-up containers filled with superfluid helium has been studied in the laboratory by Tsakadze & Tsakadze (1980). In Chapter 5, theory derived in Chapters 2 and 3 is applied to interpret the Tsakadze data. The dependence of the spin-down time on temperature and the mass fraction of the viscous component are investigated. Excellent agreement at the 0.5% level is obtained for experiments at $1.4
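The essence of two-component spin up, a light crust exchanging angular momentum with a heavier fluid until they co-rotate, can be caricatured by a toy model in which a single relaxation time stands in for the mutual-friction and Ekman time-scales of the full HVBK problem. This is not the thesis's model; all numbers below are illustrative.

```python
import numpy as np

# Toy two-component spin up: a crust (viscous component plus container) and a
# superfluid coupled by a single relaxation time tau. Illustrative values only.
I_c, I_s = 1.0, 9.0          # moments of inertia: light crust, heavy fluid
tau = 5.0                    # coupling time-scale
omega_c, omega_s = 1.1, 1.0  # impulsively spun-up crust, lagging fluid

dt, T = 0.01, 100.0
L0 = I_c * omega_c + I_s * omega_s  # total angular momentum (conserved)
for _ in range(int(T / dt)):
    torque = (omega_c - omega_s) / tau  # friction proportional to the lag
    omega_s += torque * dt
    omega_c -= (I_s / I_c) * torque * dt  # equal and opposite back-reaction

omega_eq = L0 / (I_c + I_s)  # common rotation rate after relaxation
print(round(omega_c, 4), round(omega_s, 4), round(omega_eq, 4))
```

The lag decays quasi-exponentially at a rate set by tau and the moment-of-inertia ratio, which is the qualitative shape of the post-glitch recovery fitted to Crab and Vela timing data in Chapter 4; the real problem replaces the single tau with the full geometry-dependent mix of $B$, $B'$ and $E$.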
  • Item
    The Jaynes-Cummings-Hubbard model
    Makin, Melissa I. (2011)
    This thesis comprises an intensive investigation of the Jaynes-Cummings-Hubbard (JCH) system. This Hamiltonian describes a system of coupled photonic cavities, each cavity containing a single two-level system. This system is rich in the physics it contains. The presence of the two-level system provides a mechanism by which the photons may interact with each other, giving rise to an interesting array of non-linear phenomena. Using the mean-field approximation, the phase diagram of this system has been shown to display what are effectively two different phases: a superfluid phase and a Mott-insulator phase. In this thesis we show, using exact diagonalisation for a finite number of cavities, that there is a rich structure to the phases that goes beyond this dual division. We also generate phase diagrams that could be experimentally realised using an ion trap system. Investigating the time-dependent properties of the one-dimensional JCH system, we obtain both localised and delocalised behaviour, despite having only one excitation in the system. In certain limits the 1D JCH system approximates two Heisenberg spin chains. We find it is also possible in the one-excitation, one-dimensional time-dependent case to actively control the location of the excitation by means of a potential, thus creating all standard components of linear optics: we specifically investigate waveguides and beamsplitters. Finally, we investigate the use of matrix product states (MPS) to study the one-dimensional JCH system in the time domain. MPS is used to show how two colliding excitations can show the signature of photon blockade.
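Exact diagonalisation of the JCH Hamiltonian is tractable in the one-excitation sector, where the basis is simply "photon in cavity i" or "atom excited in cavity i". The sketch below (illustrative parameters, not the thesis's calculation) builds that 2N x 2N matrix and propagates a photon initially localised in one cavity.

```python
import numpy as np

# One-excitation sector of the JCH Hamiltonian on N cavities.
# Basis ordering: [photon in cavity 0..N-1, atom excited in cavity 0..N-1].
# omega: cavity frequency, eps: atomic transition energy, beta: atom-photon
# coupling, kappa: inter-cavity photon hopping. Illustrative values only.
N, omega, eps, beta, kappa = 8, 1.0, 1.0, 0.2, 0.05

H = np.zeros((2 * N, 2 * N))
for i in range(N):
    H[i, i] = omega                   # photonic energy
    H[N + i, N + i] = eps             # atomic energy
    H[i, N + i] = H[N + i, i] = beta  # Jaynes-Cummings coupling in cavity i
    j = (i + 1) % N                   # periodic photon hopping
    H[i, j] = H[j, i] = -kappa

vals, vecs = np.linalg.eigh(H)  # exact diagonalisation

# propagate a photon initially localised in cavity 0
psi0 = np.zeros(2 * N)
psi0[0] = 1.0
t = 40.0
psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))
prob = np.abs(psi_t) ** 2
print(round(float(prob.sum()), 6))  # unitarity: total probability stays 1
```

Tracking `prob` over time for different detunings (omega versus eps) distinguishes the localised and delocalised single-excitation behaviour described above; a site-dependent potential added to the diagonal would implement the waveguide and beamsplitter control.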
  • Item
    Collective superfluid vortex dynamics and pulsar glitches
    Warszawski, Lila (2011)
    Pulsar glitches offer a way of studying the dynamics of cold, ultradense matter in systems of stellar dimensions, under extremes of density, temperature and magnetisation unattainable on Earth. This thesis aims to build a robust model of pulsar glitches, based on the superfluid vortex unpinning paradigm, which relates the physical parameters of the pulsar interior to the observed distribution of glitch sizes and waiting times (power laws and exponentials respectively). Our modelling efforts draw together knowledge about superfluid vortex dynamics and pinning, garnered from condensed matter and nuclear physics, the observational facts gathered by pulsar astronomers, and the theoretical framework of non-equilibrium stochastic systems, such as those exhibiting self-organised criticality. In each case, we emphasise the necessity of collective mechanisms in triggering avalanche-like vortex unpinning events. We begin by studying the dynamics of superfluid vortices from first principles, using numerical solutions of the Gross-Pitaevskii equation (GPE). We solve the GPE in the presence of a lattice of pinning sites, in a container that is decelerated at a constant rate, mimicking the electromagnetic spin-down torque on a pulsar. The superfluid spins down spasmodically, as vortices unpin and hop between pinning sites when the Magnus force, due to the lag between the superfluid and vortex line velocities, exceeds a threshold. Torque feedback between the superfluid and its container regulates the lag between the superfluid and crust, resulting in abrupt increases in the container angular velocity. We study how the statistics of the sizes and waiting times between spin-up events change with the mean and dispersion of pinning strengths, the electromagnetic spin-down torque, the relative number of vortices compared to pinning sites, and the ratio of the crust and superfluid moments of inertia - all parameters of interest in neutron stars.
We find that mean glitch size increases with mean pinning strength and the ratio of the moments of inertia. It is independent of the relative number of pinning sites and vortices, suggesting that vortices move a characteristic distance before repinning, rather than repinning at the next available site. The mean waiting time decreases with the number of pinning sites and vortices, the ratio of the moments of inertia and the spin-down torque, and it increases with the width of the pinning strength distribution. In order to explain the broad range of observed glitch sizes using the vortex unpinning paradigm, a collective unpinning mechanism is required. Using numerical solutions of the GPE, we study how the unpinning of one vortex can cause other vortices to unpin. We identify two knock-on triggers: acoustic pulses emitted as a vortex repins, and the increased repulsive force between vortices locally, when an unpinned vortex approaches its nearest neighbours. In the second half of the thesis, we construct a suite of three large-scale stochastic models of glitches. We are inspired to pursue this program by similarities between the statistics of archetypal self-organised critical systems, such as earthquakes and sandpiles, and those of pulsar glitches. The essential features of the vortex dynamics observed in the GPE simulations are abstracted and condensed into a set of iterative rules that form the basis of automata and analytic glitch models. A cellular automaton model, in which vortices interact with nearest neighbours via the Magnus force, reveals that when all pinning sites are of the same strength, large-scale inhomogeneities in the pinned vortex distribution are necessary to produce a broad range of glitch sizes. In this case, glitch sizes and durations are power-law-distributed, and waiting times obey an exponential distribution. We find no evidence of history-dependent glitch sizes or aftershocks.
A coherent noise model, based on a similar model developed to study atom hopping in glasses, in which pinning strength varies from site to site, but the pinned vortex distribution is assumed to be spatially homogeneous, exhibits power-law-distributed glitch sizes. Exponential waiting times are put in by hand, by assuming that the stress released in a glitch accumulates over exponentially-distributed time intervals. A wide range of pinning strengths is needed to find agreement with radio timing data. Mean pinning strength is found to decrease with increasing characteristic pulsar age. Finally, we construct a statistical model that tracks the vortex unpinning rate as a function of the stochastically fluctuating global lag between the superfluid and container. Monte-Carlo simulations and a jump-diffusion master equation reveal that a knock-on mechanism that is finely tuned with respect to the pinning strength, is essential to producing a broad range of glitch sizes. Estimates of the power dissipated in acoustic waves during repinning, and of the strength of the proximity effect, do not meet the fine-tuning criteria. We propose to extend this promising model to include nearest-neighbour interactions in the future, in the hope that this may lessen the need for fine tuning. The non-axisymmetric rearrangement of the superfluid velocity field during a vortex-avalanche-driven glitch is a source of gravitational radiation. We calculate the gravitational wave strain using the characteristic vortex motion observed in the GPE simulations. We set an upper bound on the wave strain of h ~ 10^-23 for a glitch resulting from an unpinning avalanche of the maximum observed size. We also estimate the contribution to the stochastic gravitational wave background from the superposition of many glitches from a Galactic neutron star population.
We place an upper bound on the signal-to-noise ratio of the background of $\sim 10^{-5}$ for the Advanced LIGO (Laser Interferometer Gravitational-wave Observatory) detector. Detection of a gravitational wave signal from glitches can teach us about the physics of matter at nuclear densities, from the equation of state to transport coefficients like viscosity.
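For concreteness, the essential update rule of a coherent noise model of the kind described above (in the style of the Newman–Sneppen model) can be sketched as follows. The exponential threshold and stress distributions and the reset fraction `f` are illustrative choices, not the thesis parameters.

```python
import random

def coherent_noise(nsites=500, steps=3000, stress_scale=0.1, f=1e-3, seed=2):
    """Coherent-noise update rule: every site carries a pinning threshold;
    each step a single global stress level eta is drawn and applied
    coherently, all sites with thresholds below eta fail together (one
    'glitch'), and the failed sites, plus a small random fraction f, redraw
    their thresholds. There is no spatial interaction, yet the glitch sizes
    are broadly distributed because eta is shared across the system."""
    rng = random.Random(seed)
    thresholds = [rng.expovariate(1.0) for _ in range(nsites)]
    sizes = []
    for _ in range(steps):
        eta = rng.expovariate(1.0 / stress_scale)   # coherent global stress
        failed = [i for i, t in enumerate(thresholds) if t < eta]
        for i in failed:
            thresholds[i] = rng.expovariate(1.0)    # failed sites repin anew
        for i in range(nsites):                     # slow background rearrangement
            if rng.random() < f:
                thresholds[i] = rng.expovariate(1.0)
        if failed:
            sizes.append(len(failed))
    return sizes

sizes = coherent_noise()
```

Because weak sites are continually weeded out and redrawn, the surviving threshold distribution self-organises, and rare large stress fluctuations sweep up many sites at once, which is the mechanism behind the power-law glitch sizes quoted above.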
  • Item
    Dark matter indirect detection and bremsstrahlung processes
    Jacques, Thomas David ( 2011)
    It is now well established that some form of Dark Matter (DM) makes a sizeable contribution to the total matter-energy abundance of the Universe, yet DM still evades detection and its particle properties remain unknown. Indirect detection provides an important probe of some of these fundamental properties. DM self-annihilation throughout the Universe is expected to lead to an observable signal of Standard Model (SM) particles at Earth, and any observed flux of SM particles from a particular region acts as an upper limit on the annihilation signal from that region. In Chapter 1, we give an introduction to our current knowledge of DM. We begin with the historic and recent evidence for the existence of DM based on its gravitational effects, before describing our current knowledge of DM formation history and abundance. We then describe and compare a number of competing DM density profiles for our galaxy, highlighting the large uncertainties towards the Galactic center. There are currently a large number of DM candidates, sometimes called the 'Candidate Zoo'. We briefly introduce several of the most popular candidates, describing their history and motivation. We then move on to describe current searches for DM, focusing on indirect detection, which aims to detect DM via an observable flux of its SM annihilation products. We detail the major constraints on the DM self-annihilation cross section, and examine some potential signals from DM annihilation. We also describe constraints on DM from direct detection and collider searches. Finally, we introduce bremsstrahlung processes in the context of DM annihilation, where a particle such as a gamma ray is radiated from one of the DM annihilation products at the Feynman diagram level. 
In Chapter 2, we use gamma-ray data from observations of the Milky Way, Andromeda (M31), and the cosmic background to calculate robust upper limits on the dark matter self-annihilation cross section to monoenergetic gamma rays, $\langle v\sigma \rangle_{\gamma \gamma}$, over a wide range of dark matter masses. We do this in a model-independent and conservative way, such that our results are valid across a broad spectrum of DM models and astrophysical assumptions. In fact, over most of this range, our results are unchanged if one considers just the branching ratio to gamma rays with energies within a factor of a few of the endpoint at the dark matter mass. If the final-state branching ratio to gamma rays, $Br(\gamma \gamma)$, were known, then $\langle v\sigma \rangle_{\gamma \gamma} / Br(\gamma \gamma)$ would define an upper limit on the total cross section. In Chapter 3, we take advantage of the fact that annihilation to charged leptons will inevitably be accompanied by gamma rays due to radiative corrections to place similar limits on the annihilation cross section to an electron-positron pair, $\langle v\sigma \rangle_{e^+e^-}$. Photon bremsstrahlung from the final-state particles occurs at the Feynman diagram level, yet the gamma-ray spectrum per annihilation is approximately model independent, such that our analysis applies to a broad class of DM models. We compare the expected annihilation signal with the observed gamma-ray flux from the Galactic Center, and place conservative upper limits on the annihilation rate to an electron-positron pair. We also constrain annihilation to muon and tau lepton pairs. We again make conservative choices in the uncertain dark matter density profiles, and note that our constraints would only be strengthened if the density were more tightly constrained. 
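The limit-setting procedure described above rests on the standard relation between the annihilation cross section and the predicted gamma-ray flux from a region of sky; schematically, for self-conjugate dark matter of mass $m_\chi$,

```latex
\frac{d\Phi_\gamma}{dE} \;=\; \frac{\langle v\sigma \rangle}{8\pi m_\chi^2}\,
\frac{dN_\gamma}{dE}
\int_{\Delta\Omega} d\Omega \int_{\mathrm{l.o.s.}} ds\; \rho^2(s,\Omega),
```

where $dN_\gamma/dE$ is the photon spectrum per annihilation ($dN_\gamma/dE = 2\,\delta(E - m_\chi)$ for the monoenergetic case) and the double integral runs the squared density $\rho^2$ over the observed solid angle and along each line of sight. Requiring the predicted flux not to exceed the observed flux in any energy bin converts a measured $\Phi_\gamma$ directly into an upper limit on $\langle v\sigma \rangle$, which is why the limits are conservative: any astrophysical contribution to the observed flux only tightens them.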
The spectrum per annihilation produces hard gamma rays near the kinematic cutoff, and we find that the constraints on $\langle v\sigma \rangle_{e^+e^-}$ are only about two orders of magnitude weaker than those on $\langle v\sigma \rangle_{\gamma \gamma}$, as expected since the $2\rightarrow 3$ process is suppressed by a factor of $\sim 10^{-2}$ relative to the $2\rightarrow 2$ process. Annihilation to leptons will also be accompanied by massive gauge bosons due to electroweak radiative corrections. In Chapter 4 we examine a case where DM annihilates exclusively to neutrinos at the $2\rightarrow 2$ level, so that gamma rays, leptons and hadrons are inevitably produced via electroweak bremsstrahlung. We explicitly calculate the ratio of the rate for the three electroweak bremsstrahlung modes $\chi\chi\rightarrow \nu\bar\nu Z$, $e^+ \nu W^-$ and $e^- \bar\nu W^+$ to the rate for the $2\rightarrow 2$ process $\chi\chi\rightarrow \nu\bar\nu$. Electroweak bremsstrahlung plays a larger role in the special case where the annihilation rate to leptonic modes suffers helicity suppression. While it has long been known that photon bremsstrahlung can lift the helicity suppression, we show in Chapter 5 that electroweak bremsstrahlung is also capable of lifting this suppression, such that the branching ratio to the 3-body electroweak bremsstrahlung final states can greatly exceed the branching ratio to an electron-positron or neutrino pair. We explicitly calculate the electroweak bremsstrahlung cross section in a typical leptophilic model. In Chapter 6 we examine observational signatures of dark matter annihilation in the Milky Way arising from these electroweak bremsstrahlung contributions to the annihilation cross section. Here we calculate the spectra of stable annihilation products produced via $\gamma$/$W$/$Z$ bremsstrahlung. After modifying the fluxes to account for propagation through the Galaxy, we set upper bounds on the annihilation cross section via a comparison with observational data. 
We show that stringent cosmic ray antiproton limits preclude a sizable dark matter contribution to observed cosmic ray positron fluxes in the class of models for which the bremsstrahlung processes dominate.
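The helicity-suppression mechanism at the heart of Chapters 4 and 5 can be summarised schematically. For Majorana dark matter annihilating from an $s$-wave initial state to a light fermion pair, the standard scaling (up to logarithms and phase-space factors; this is the generic behaviour, not the model-specific result computed in the thesis) is

```latex
(\sigma v)_{\chi\chi \to f\bar{f}} \;\propto\; \frac{m_f^2}{m_\chi^2},
\qquad
\frac{(\sigma v)_{\chi\chi \to f\bar{f}V}}{(\sigma v)_{\chi\chi \to f\bar{f}}}
\;\sim\; \frac{\alpha}{\pi}\,\frac{m_\chi^2}{m_f^2},
```

so although radiating a gauge boson $V = \gamma, W, Z$ costs a coupling factor $\alpha/\pi$, it removes the $m_f^2/m_\chi^2$ suppression entirely. For $m_f \ll m_\chi$ the $2\rightarrow 3$ modes can therefore dominate the $2\rightarrow 2$ rate, which is why the electroweak bremsstrahlung final states, and the antiprotons they produce, control the phenomenology of the leptophilic models constrained above.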