School of Physics - Theses


Now showing 1 - 9 of 9
  • Item
    Soft supersymmetry breaking from stochastic superspace
    Pesor, Nadine Elsie (2013)
    A consequence of exact supersymmetry is the prediction of a mass degenerate superpartner for each standard model particle. The non-observation of such particles demands that supersymmetry manifests at low energies in softly broken form, such that new types of divergences (i.e. higher than logarithmic) are avoided. We propose a new realisation of softly broken supersymmetric theories as theories defined on stochastic superspace. With a suitably chosen probability distribution, the soft supersymmetry breaking parameters emerge upon averaging over the fluctuating superspace coordinates. At the classical level, the supersymmetry breaking is parametrised by a single mass parameter, $\xi$, which describes the stochasticity of the Grassmannian coordinates. The framework is therefore highly predictive and, by virtue of its characteristic structure of soft breaking terms, within reach of detection or falsification at the Large Hadron Collider. In the context of the standard model with stochastic supersymmetry, the $B_{\mu}$ parameter, the universal soft trilinear coupling $A_0$, the universal gaugino mass $m_{1/2}$ and the universal scalar mass $m_0$ are all given solely in terms of $\xi$. The relations are $B_{\mu} = \xi^*$, $A_0 = 2 \xi^*$, $m_{1/2} = |\xi|/2$ and $m_0 = 0$. At the quantum level, these relations hold at a certain scale, $\Lambda$, which is a second free parameter. The soft scalar masses, zero at tree-level, are induced radiatively through the renormalisation group equations at one-loop. Employing an analytical solution to an approximation of the one-loop renormalisation group equations, we find the parameter space of minimal stochastic supersymmetry to be highly constrained by the nature of its lightest supersymmetric particle (LSP). A viable neutralino LSP only emerges when the cutoff scale is taken to be greater than the grand unification scale, $M_{\mathrm{GUT}}$.
    In a more detailed analysis using sparticle spectrum calculator software to determine the mass and decay spectra, each point in parameter space is checked against known limits on relic density and rare decay processes. We then use a fast simulation of the ATLAS detector to determine which points in its parameter space are excluded by ATLAS zero lepton searches, which are amongst the most constraining limits on direct sparticle production. We find that the minimal model is definitively excluded by the recent discovery of a Higgs with mass approximately $10\ \mathrm{GeV}$ heavier than that predicted by stochastic superspace. To address the observation of nonzero neutrino masses, we separately consider R-parity violation and the type-I seesaw mechanism as extensions to the minimal model. In the former case, we are able to introduce neutrino masses and mixings consistent with experiment by including purely trilinear R-parity violating superpotential terms and assuming the less restrictive baryon triality symmetry. The latter case is found to be viable only when the neutrino Yukawa coupling is small relative to the top Yukawa, and the cutoff scale is large. However, as these models do not affect the Higgs mass prediction, they are excluded for the same reason as the minimal model. Finally, we consider the next-to-minimal supersymmetric standard model in stochastic superspace. The introduction of a gauge singlet superfield offers the possibility of increasing the mass prediction for the Higgs relative to the minimal model. Indeed, we observe a global increase such that $m_h = 116.6$–$121.0\ \mathrm{GeV}$. However, this is insufficient to achieve overlap with the allowed mass range from CMS and ATLAS searches.
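The tree-level boundary conditions quoted in the abstract are simple enough to state in code. A minimal sketch follows; the function name `soft_terms` and the sample value of xi are illustrative only, and the full spectrum would still require renormalisation-group running down from the scale Lambda, which this does not attempt.

```python
# Tree-level soft-breaking parameters of minimal stochastic SUSY at the
# scale Lambda, all fixed by the single complex parameter xi, using the
# relations quoted in the abstract:
#   B_mu = xi*,  A0 = 2 xi*,  m_1/2 = |xi|/2,  m_0 = 0.

def soft_terms(xi: complex):
    """Return (B_mu, A0, m_half, m0) for a given stochasticity parameter xi."""
    B_mu = xi.conjugate()
    A0 = 2 * xi.conjugate()
    m_half = abs(xi) / 2
    m0 = 0.0
    return B_mu, A0, m_half, m0

# Illustrative value only (GeV); the thesis scans over xi and Lambda.
B_mu, A0, m_half, m0 = soft_terms(-100 + 0j)
```

Because everything follows from one complex parameter, a scan of the model reduces to a one-dimensional sweep over |xi| (plus the choice of Lambda), which is what makes the framework so predictive.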
  • Item
    Principles and applications of quantum decoherence in biological, chemical, and condensed matter systems
    Hall, Liam Terres (2013)
    This thesis focuses on the use of the Nitrogen-Vacancy (NV) defect centre in diamond as a single spin sensor of nanoscale magnetic fields. The NV system has attracted considerable interest in recent years due to its unique combination of sensitivity, nanoscale resolution, room temperature operation, and stable fluorescence, together with the inherent biocompatibility of diamond; making it ideal for measuring coherent quantum processes in biological, chemical and condensed matter systems. Existing NV-based sensing techniques, however, are ultimately limited by sources of magnetic noise that act to destroy the very resource required for their operation: the quantum phase coherence between NV spin levels. We address this problem by showing that this noise is a rich source of information about the dynamics of the environment we wish to measure. We develop protocols by which to extract dynamical environmental parameters from decoherence measurements of the NV spin, and a detailed experimental verification is conducted using diamond nanocrystals immersed in a MnCl2 electrolyte. We then detail how sensitivities can be improved by employing sophisticated dynamic decoupling techniques to remove the decoherence effects of the intrinsic noise, whilst preserving that of the target sample. To characterise the effects of pulse errors, we describe the full coherent evolution of the NV spin under pulse-based microwave control, including microwave driven and free precession intervals. This analysis explains the origin of many experimental artifacts overlooked in the literature, and is applied to three experimentally relevant cases, demonstrating remarkable agreement between theoretical and experimental results. We then analyse and discuss two important future applications of decoherence sensing to biological imaging. The first involves using a single NV centre in close proximity to an ion channel in a cell membrane to monitor its switch-on/switch-off activity. 
    This technique is expected to have wide-ranging implications for nanoscale biology and drug discovery. The second involves using an array of NV centres to image neuronal network dynamics. This technique is expected to yield significant insight into the way information is processed in the brain. In both cases, we find the temporal resolution to be of millisecond timescales, effectively allowing for real-time imaging of these systems with micrometre spatial resolution. We analyse cases in which environmental frequencies are sufficiently high to result in a mutual exchange of energy with the NV spin, and discuss how this may be used to reconstruct the corresponding frequency spectrum. This analysis is then applied to two ground-breaking experiments, showing remarkable agreement. Protocols for in-situ monitoring of mobile nanodiamonds in biological systems are developed. In addition to obtaining information about the local magnetic environment, these protocols allow for the determination of both the position and orientation of the nanocrystal, yielding information about the mechanical forces to which it is subjected. These techniques are applied in analysing a set of experiments in which diamond nanocrystals are taken up endosomally by human cervical cancer cells. Finally, we focus our attention on understanding the microscopic dynamics of the spin bath and its effect on the NV spin. Many existing analytic approaches are based on simplified phenomenological models in which it is difficult to capture the complex physics associated with this system. Conversely, numerical approaches reproduce this complex behaviour, but are limited in the amount of theoretical insight they can provide.
Using a systematic approach based on the spatial statistics of the spin bath constituents, we develop a purely analytic theory for the NV central spin decoherence problem that reproduces the experimental and numerical results found in the literature, whilst correcting the limitations and inaccuracies associated with existing analytical approaches.
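The central idea here, that decoherence encodes the dynamics of the environment, can be illustrated with a toy dephasing model. The sketch below assumes Ornstein–Uhlenbeck magnetic noise and compares free-induction decay against a Hahn echo (the simplest dynamic decoupling sequence); all parameters are illustrative, not experimental values from the thesis.

```python
import numpy as np

# Toy dephasing model: a spin accumulates phase from a fluctuating field
# B(t), modelled as an Ornstein-Uhlenbeck process with correlation time
# tau_c. Coherence is the ensemble average |<exp(i*phi)>|; a Hahn echo
# inverts the phase accumulation at the midpoint, refocusing slow noise.
rng = np.random.default_rng(0)
n_traj, n_steps = 2000, 200
dt, tau_c, sigma = 0.01, 0.5, 1.0          # arbitrary units

decay = np.exp(-dt / tau_c)
kick = sigma * np.sqrt(1 - decay**2)
B = np.zeros((n_traj, n_steps))
for k in range(1, n_steps):
    B[:, k] = B[:, k - 1] * decay + kick * rng.standard_normal(n_traj)

# Free-induction decay: phi = integral of B dt over the whole interval.
phi_fid = B.sum(axis=1) * dt
# Hahn echo: the sign of the accumulation flips for the second half.
sign = np.where(np.arange(n_steps) < n_steps // 2, 1.0, -1.0)
phi_echo = (B * sign).sum(axis=1) * dt

C_fid = abs(np.exp(1j * phi_fid).mean())
C_echo = abs(np.exp(1j * phi_echo).mean())
```

The gap between `C_echo` and `C_fid` depends on the noise correlation time, which is the sense in which a decoherence measurement carries information about environmental dynamics.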
  • Item
    Lyα as a cosmological probe of dark energy: assessing the impact of systematics
    Greig, Bradley (2013)
    In the late 1990s supernovae observations confirmed the late-time accelerated expansion of the Universe. This accelerated expansion is thought to result from the repulsive gravitational force of a mysterious substance called 'dark energy', whose physical properties are completely unknown. One of the key science goals for the forthcoming decade is to obtain physical insight into the properties of dark energy through measurements of the large-scale clustering of matter in the Universe. Numerous dark energy experiments are planned, employing a wide variety of techniques to probe the large-scale structure. Crucial to realising the full potential of these experiments is assessing the impact of astrophysical systematics that can either diminish or destroy the cosmological signal. Within this thesis I focus specifically on assessing the impact of astrophysical systematics on large-scale structure measurements via the detection of Lyman-α (Lyα) radiation in both emission and absorption. Firstly, the ongoing Baryon Oscillation Spectroscopic Survey (BOSS) will measure the baryon acoustic oscillation (BAO) scale from the three-dimensional clustering of matter via the Lyα forest. The BAO scale is a characteristic length scale at which matter is observed to cluster in excess, imprinting a measurable signature on the large-scale structure in the Universe crucial to understanding dark energy. I develop calibrated semi-analytic Lyα forest simulations, equivalent in size and resolution to the largest N-body and hydrodynamical simulations, that can be performed on a single desktop computer in under a day. The synthetic Lyα forest spectra are shown to be in broad agreement with a range of observational measurements including the Lyα flux probability distribution and 1D line-of-sight flux power spectrum. I demonstrate that the BAO scale can be correctly recovered from the 3D Lyα flux power spectrum measured from the simulated data.
    I estimate that a BOSS-like 10^4 deg^2 survey with ~15 background quasars per square degree and a signal-to-noise ratio of ~5 per pixel should achieve a measurement of the BAO scale to within ~1.4 per cent. Recently, BOSS published the first measurement of the BAO scale from the Lyα forest (Busca et al. 2013; Slosar et al. 2013) with a recovered accuracy of ~2.5 per cent. Using these simulations for an equivalent Lyα forest data sample I recover a fractional error of ~2.3 per cent. The speed and efficiency of this simulation approach is well suited for exploring the impact of astrophysical systematics on the recovery of the BAO signature from large-scale spectroscopic surveys such as BOSS. Using these semi-analytic Lyα forest simulations, I assess the impact of HeII reionisation on the recovered accuracy of the BAO scale. Inhomogeneous HeII reionisation is driven by bright, rare quasars which can result in large-scale UV background and temperature fluctuations that could potentially affect the fractional precision to which the BAO scale can be recovered from the Lyα forest. I develop a semi-analytic model for HeII reionisation which agrees well both qualitatively and quantitatively with existing numerical cosmological HeII reionisation simulations. Investigating a variety of HeII reionisation models, I assess their impact on the statistical measurements of the Lyα forest. I observe that HeII reionisation can produce a fractional increase of ~50 per cent in power at large scales in the 3D Lyα flux power spectrum, reducing to ~5 per cent at k > 0.03 Mpc^-1. However, despite the fractional increase in the large-scale power, I do not observe any change in the predicted accuracy to which the BAO scale can be recovered. Secondly, the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) aims to measure the large-scale clustering of Lyα emitting (LAE) galaxies to explore the properties of dark energy.
    However, the observed clustering properties of LAE galaxies are sensitive to the radiative transfer of Lyα photons through the intergalactic medium (IGM), which can mimic gravitational effects, potentially reducing the precision of cosmological constraints. I assess the impact of these non-gravitational Lyα radiative transfer effects on the observed clustering of LAE galaxies, focusing in particular on the effects of IGM velocity gradients, the local density within the environment of an LAE galaxy, and ionising background fluctuations. For example, linear redshift-space distortions on the LAE galaxy power spectrum are potentially degenerate with the Lyα radiative transfer effect owing to the dependence of Lyα flux on IGM velocity gradients. Using Fisher matrices I assess the impact of these Lyα radiative transfer effects on recoverable cosmological constraints important for dark energy studies, such as the growth rate of structure, f, the Hubble rate, H(z), and the angular diameter distance, D_A(z). At the power spectrum level, I observe a complete degeneracy between f and the Lyα radiative transfer effect associated with IGM velocity gradients, while D_A(z) and H(z) are independent of these degeneracies. Deriving next-to-leading order corrections for the clustering of LAE galaxies within the Eulerian perturbation theory framework, I show that these degeneracies can be broken by considering higher order galaxy clustering statistics such as the three-point function (bispectrum). This makes it possible to recover cosmological parameters from LAE galaxy surveys. Finally, I observe that by combining the LAE galaxy power spectrum and bispectrum measurements, the constraints on D_A(z) and H(z) can be further improved.
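The Fisher-matrix machinery behind forecasts of this kind can be sketched in a few lines. The toy below is illustrative only (the Kaiser-like model, fiducial values, and error bars are stand-ins, not the thesis analysis): it shows how marginalised errors come from the inverse Fisher matrix, and why a correlated parameter pair inflates them relative to the fixed-parameter case.

```python
import numpy as np

# Toy Fisher forecast: data d(theta) with independent Gaussian errors.
# F_ij = sum_k (dd_k/dtheta_i)(dd_k/dtheta_j) / sigma_k^2, and the
# marginalised 1-sigma error on theta_i is sqrt(inv(F)[i, i]).
mu2 = np.linspace(0.05, 0.95, 10)        # mu^2, mu = cosine of LOS angle
sigma_d = 0.05 * np.ones_like(mu2)       # measurement errors (illustrative)

def model(b, f):
    # Kaiser-like redshift-space amplitude (b + f*mu^2)^2
    return (b + f * mu2**2) ** 2

b0, f0, eps = 2.0, 0.5, 1e-6
db = (model(b0 + eps, f0) - model(b0 - eps, f0)) / (2 * eps)
df = (model(b0, f0 + eps) - model(b0, f0 - eps)) / (2 * eps)

derivs = np.array([db, df])
F = derivs @ np.diag(1 / sigma_d**2) @ derivs.T
cov = np.linalg.inv(F)
sigma_b, sigma_f = np.sqrt(np.diag(cov))
# Error on f with b held fixed, for comparison with the marginalised one:
sigma_f_fixed = 1 / np.sqrt(F[1, 1])
```

A complete degeneracy, as found between f and the velocity-gradient radiative transfer effect, corresponds to a singular Fisher matrix at the power-spectrum level; adding independent information (here, the bispectrum) is what restores invertibility.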
  • Item
    Electronic damage to single biomolecules in femtosecond X-ray imaging
    Curwood, Evan K. (2013)
    Knowledge of the structure of large, complex molecules is of vital interest in understanding their function in biological systems. Standard X-ray crystallographic methods of structure determination are unsuitable for a large class of biomolecules for which it is difficult, or impossible, to form high-quality crystals. The technique of coherent diffractive imaging (CDI) provides a route toward the determination of large molecular structures without crystallisation. CDI uses a Fourier transform mapping between fields in the sample and detector planes; this implies that the attainable resolution is limited by the angle to which signal can be measured. Unfortunately, biological molecules scatter weakly; in order to obtain signal to the required angle an extremely bright new source of X-rays is required. These new sources, the X-ray free-electron lasers (XFELs), have brightnesses approaching the level sufficient to resolve biological molecules to atomic resolution. This increased brightness has an unfortunate side effect: the number of unwanted photoionisation events in the target molecule is vastly increased. This leads to an imbalance of charge that results in the eventual destruction of the molecule. In this thesis, I show that the intense illumination from an XFEL produces a time-dependent electron density in the target molecule. This effect targets the inner shell electrons in the molecule, and hence preferentially degrades the high-resolution information. I further show that the time-dependent electron density in the molecule can be treated as a partially coherent secondary source of X-rays, violating the coherence assumption inherent to CDI. This damage-induced degree of partial coherence is determined from simulated experimental conditions. It is demonstrated that this degree of partial coherence due to damage can be used to infer information about the physical processes underlying the interaction between the molecule and the X-ray field.
This information can be transferred between similar molecules in an XFEL experiment to compensate for damage processes. Assumptions made about the partial coherence of the scattered X-ray field are used to recover the structure of a biomolecule in simulation using an adjusted CDI iterative scheme. Structure refinement and electron density recovery schemes are also investigated.
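The coherent forward model underlying CDI, and the mode sum that describes a partially coherent field, can be sketched numerically. Everything below is illustrative: the "molecule" is a disc, and the second mode (a smaller disc standing in for the ionised object) is only a cartoon of the damage-induced decoherence discussed above.

```python
import numpy as np

# Coherent CDI forward model: far-field intensity is the squared modulus
# of the Fourier transform of the exit wave. A partially coherent field
# is instead an incoherent sum over modes with weights w_k, which alters
# the diffracted intensity relative to the fully coherent case.
n = 64
y, x = np.mgrid[:n, :n]
r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
psi_full = (r2 < 8 ** 2).astype(complex)   # intact object (toy)
psi_dam = (r2 < 6 ** 2).astype(complex)    # "damaged" object (toy)

I_coherent = np.abs(np.fft.fft2(psi_full)) ** 2

weights = [0.7, 0.3]                        # illustrative mode weights
I_partial = sum(w * np.abs(np.fft.fft2(m)) ** 2
                for w, m in zip(weights, [psi_full, psi_dam]))
```

Reconstruction algorithms that assume `I_coherent` when the data are actually `I_partial` inherit a systematic error, which is the coherence-assumption violation the thesis quantifies.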
  • Item
    A generalised holographic approach to coherent diffractive imaging
    Morgan, Andrew James (2013)
    This thesis examines methods for obtaining the full complex wave-field from measured probability intensities in microscopy experiments, otherwise known as the phase problem. We present a new theoretical framework for solving the phase problem. From this, previously intractable problems may be solved via direct methods. In addition, we present a method for greatly increasing the potential output of single particle diffraction measurements in free-electron laser science (FELS) experiments. This method extracts many single particle diffraction measurements from a single many particle diffraction measurement. We investigate the feasibility of performing coherent diffractive imaging (CDI) in the scanning transmission electron microscopy (STEM) geometry using standard methods. Despite some success in ideal cases, we find that the inversions (from the measured diffraction intensities to the complex wave-field) are very sensitive to imperfections in the data. Consequently we investigate the degree to which various sources of error, such as detector noise and spatial incoherence, affect the convergence properties of the inversions. Difficulties in the application of single-shot CDI in STEM may be overcome by measuring many diffraction patterns from the same sample and combining these data in a ptychographic method. We performed a ptychographic reconstruction of a complex sample transmission function. This was done by combining several diffraction measurements from overlapping regions of a boron-nitride cone. We present the subsequent atomic resolution retrieval and the algorithm used. We develop and implement a new holographic algorithm to obtain a high resolution reconstruction of the complex exit surface wave formed by a gnat’s wing. In doing so we derive a new error metric applicable in both holographic and non-linear CDI. It is also found to be a more reliable measure of the fidelity of the retrieval.
    By extending the previous holographic method to accommodate diffraction data taken in the near-field (or the Fresnel regime), we find that previous restrictions on the illumination and the position of the sample in the beam are greatly reduced. We demonstrate the experimental feasibility of this method by reconstructing the complex exit surface wave emanating from a microfibre. Here the specimen was illuminated by a plane wave while the diffraction data was taken in a plane centimetres from the object. We show how the requirement for full coherence of the imaging system can be removed by incorporating the partially coherent modes of the imaging probe into the formalism of the holographic method. A new iterative linear algorithm is developed, extending the applicability of the algorithm to higher resolutions and greatly increasing the computational speed of the retrieval. We present a direct single-shot sub-Ångström retrieval from the edge structure of a CeO2 nanoparticle, using electrons. This retrieval is improved by the inclusion of the entire autocorrelation function of the exit surface wave. Using the new iterative linear algorithm we are able to include a subsequent iteration, correcting for the non-linear contribution of the object to the autocorrelation function. Finally, we use the techniques developed in this thesis to extend the new iterative linear algorithm to include ptychographic data-sets.
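The "standard methods" of CDI inversion referred to above reduce, in their simplest form, to alternating projections between the measured Fourier modulus and a real-space support. The sketch below is the generic error-reduction scheme on a toy object; it is not the holographic or iterative linear algorithm developed in the thesis, only the baseline it improves on.

```python
import numpy as np

# Minimal error-reduction phase retrieval: alternate between
# (i) replacing the Fourier modulus with the "measured" one, and
# (ii) enforcing a known real-space support plus positivity.
rng = np.random.default_rng(1)
n = 32
y, x = np.mgrid[:n, :n]
support = ((x - n / 2) ** 2 + (y - n / 2) ** 2) < 6 ** 2
truth = support * rng.random((n, n))        # toy object

measured = np.abs(np.fft.fft2(truth))       # simulated "detector" data

rho = support * rng.random((n, n))          # random starting guess
for _ in range(300):
    G = np.fft.fft2(rho)
    G = measured * np.exp(1j * np.angle(G))  # Fourier modulus constraint
    rho = np.fft.ifft2(G).real
    rho[~support] = 0                        # support constraint
    rho[rho < 0] = 0                         # positivity constraint

err = (np.linalg.norm(np.abs(np.fft.fft2(rho)) - measured)
       / np.linalg.norm(measured))
```

The sensitivity to detector noise and partial coherence noted in the abstract enters here through `measured`: the modulus projection has no way to distinguish signal from corruption, which motivates both the new error metric and the modal treatment of partial coherence.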
  • Item
    Ion beam techniques for micromachining of single crystal diamond
    Fairchild, Barbara Anne (2013)
    Diamond is a material with extreme physical properties unlike any other naturally occurring substance. Its high optical transparency and numerous optically active centres make it a promising platform for quantum information processing (QIP) devices. The negatively charged nitrogen vacancy centre (NV-) in single crystal diamond (SCD) currently stands out as the prime candidate for QIP in diamond, since it is a stable single photon source with observed room temperature single spin readout, Rabi oscillations and qubit-qubit coupling. The zero phonon line (ZPL) for NV- is 637 nm; however, ~95% of the NV- centre emission occurs in phonon side bands. One approach to efficiently capture emission and increase the proportion of emission into the ZPL is to use cavities. Cavities coupled to waveguides can then route the single photon in the optical circuit for QIP protocols. However, if optics and QIP in diamond are to be realised, then we need to develop methods to fabricate devices and structures, from of order 200 nm to microns in scale, in nature's hardest material. We focus on monolithic devices that in the ideal case would be used to couple individual photons to individual NV- centres. Optimal operating conditions depend critically on the ability to demonstrate fabrication quality at least as good as that currently available in silicon and silica based devices. This thesis investigates ion implantation and focused ion beam (FIB) milling as processes for fabricating optical devices in SCD. This thesis asks the question: can ion implantation and ion milling be used to fabricate diamond optical components suitable for QIP applications? To answer this question, we developed a method to produce the first 165 nm layer in SCD from ion implantation. This is a significant enabling technology for applications in diamond QIP, diamond micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS), similar to Smart-Cut™ in silicon.
    Using our new technique, we fabricated test structures in SCD, developing new methods to solve the various fabrication challenges posed by working in diamond. We demonstrate the first micromanipulation of diamond membranes onto mirrors and show that light is transmitted in layers fabricated using the double energy implantation method. Scanning electron microscope (SEM) and Raman characterisation identified permanent alteration of the diamond material on surfaces/films fabricated using our ion beam technique. This observation led to the investigation of the changes in the diamond material as it undergoes both the ion implantation process and subsequent annealing steps. Modification of diamond (by ion implantation) to amorphous carbon, and then to nano-structured graphite after annealing, was observed. A detailed material study using transmission electron microscopy (TEM) and electron energy loss spectroscopy (EELS) resulted in a fundamental insight into the significance of strain in the amorphisation process in diamond. We determine a value for the critical threshold for amorphisation in diamond, Dc, of 2.95 ± 0.10 g/cm³, independent of SRIM (Stopping and Range of Ions in Matter) modelling, and identify a layer of distorted diamond that is formed during ion implantation and that is not removed even after annealing at 1400°C. In Chapter 1 we discuss diamond as a material and why it holds such promise as a QIP platform. Chapter 2 reviews the state of the art of diamond fabrication, including ion implantation, fabrication of the first SCD structures and the lift-off method. In Chapter 3 a new method for fabricating ultra-thin layers is described, capable of producing layers as thin as 165 nm. The fabrication of various structures and the lift-out of layers onto different substrates are covered in Chapter 4, where we highlight the new methods developed to produce them in SCD.
    We show that doubly implanted layers are able to transmit light; however, the optical performance of some structures raises questions about residual damage and changes to the physical characteristics of the diamond material due to ion implantation. The subsequent TEM study in Chapter 5 investigates the effects of ion implantation on diamond, identifying a new region of damage below the amorphisation threshold, which we identify as distorted diamond. Using EELS analysis we show (see Chapter 6) that strain is a significant mechanism in the amorphisation of diamond and determine a critical damage threshold value (Dc) independent of SRIM damage modelling. An isochronal annealing study is reported in Chapter 7, where we show that the distorted diamond layer identified in Chapter 5 is not removed by high temperature annealing, only reduced. This interface layer between the resulting nano-structured graphite and the diamond film(s) is located in the right place to be the source of the texturing observed on the surface of thin films produced using ion implantation. The accompanying studies using Rutherford backscattering spectroscopy (RBS) and near-edge X-ray absorption fine structure spectroscopy (NEXAFS), reported in Chapter 8, show that the surfaces of ion implanted films are damaged diamond and that they are of the appropriate thickness to be the residual distorted diamond layer reported in Chapters 5 and 6. The results are discussed and summarised in the concluding chapter, Chapter 9. We find that ion implantation and ion milling can be used to fabricate optical components in SCD. Our double energy implantation method has significant potential for microfluidic and MEMS/NEMS devices such as cantilevers. However, the ion implantation process (even after annealing at high temperatures) leaves a residual damage signature in the material that currently renders it unsuitable for the more stringent demands of optical QIP for NV- in diamond.
    Ion implantation and ion milling continue to be of interest to the optical community fabricating diamond structures; however, further study is required. Investigations into the nature of the residual damage, and whether it can be removed by high pressure annealing or passivation with hydrogen, are possible avenues. The fabrication of thin films using ion implantation, flip bonding and/or further ion thinning are also techniques being investigated by researchers in the field.
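The role of a critical-density criterion like Dc can be illustrated with a toy depth profile. In the sketch below the Gaussian damage dip is a stand-in for a SRIM-derived vacancy profile (not real data); only the threshold value Dc = 2.95 g/cm³ is taken from the abstract, and the pristine diamond density of 3.52 g/cm³ is the standard literature value.

```python
import numpy as np

# Toy critical-density criterion: take a depth-dependent mass density
# for implanted diamond (pristine 3.52 g/cm^3 reduced by a Gaussian
# damage dip -- illustrative, not SRIM output) and mark as amorphised
# the depths where the density falls below Dc = 2.95 g/cm^3.
z = np.linspace(0.0, 2.0, 400)                 # depth in micrometres
rho_diamond = 3.52                             # pristine density, g/cm^3
dip = 1.0 * np.exp(-((z - 1.0) / 0.15) ** 2)   # damage peak near 1 um
rho = rho_diamond - dip

Dc = 2.95                                      # threshold from the thesis
amorphous = rho < Dc
z_amorph = z[amorphous]
layer_top, layer_bottom = z_amorph.min(), z_amorph.max()
thickness = layer_bottom - layer_top
```

Sub-threshold damage on either side of the buried amorphous layer is precisely the "distorted diamond" regime discussed above: modified material that never crosses Dc and therefore survives annealing.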
  • Item
    The search for the Higgs boson in tauon pairs at the ATLAS experiment
    Shao, Qi Tao (2013)
    The Higgs boson is a particle that is predicted to exist by spontaneous electroweak symmetry breaking. Electroweak symmetry breaking is an essential part of the Standard Model of particle physics, as it generates masses for the electroweak gauge bosons. Finding the Higgs boson is integral to our understanding of the fundamental particles and their interactions. Searches for the Higgs boson are conducted by the ATLAS experiment using proton-proton collisions at the Large Hadron Collider. One of these searches is performed using the H→ττ decay channel, which has a clean detection signature and, with H→bb, is one of the only two viable fermionic search channels. Using the 4.7 fb⁻¹ of data collected at √s = 7 TeV, the H→ττ analysis excludes the Higgs boson at approximately 3 times the expected cross section for 100 < mH < 120 GeV and 5 to 12 times the expected cross section for 130 < mH < 150 GeV. The H→ττ search results are combined with those from the other channels to achieve better sensitivities. The combined results have excluded most Higgs masses between 110 and 500 GeV. The only region that is not excluded is at mH = 126 GeV, where an excess above the background expectations is observed in multiple bosonic channels. This excess has a combined local significance of 5.9 σ. ATLAS claims this observed excess as a discovery of a new bosonic particle, whose properties have thus far been measured to be consistent with those of the Higgs boson.
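The quoted 5.9 σ local significance translates into a one-sided Gaussian tail probability via the complementary error function; a minimal sketch (the conversion is the standard one used in particle physics, the helper name is mine):

```python
import math

# Convert a local significance Z (in Gaussian sigma) to a one-sided
# p-value: p = 0.5 * erfc(Z / sqrt(2)). For Z = 5.9 this is of order
# 1e-9, far beyond the conventional 5-sigma discovery threshold.
def p_value(z_sigma: float) -> float:
    return 0.5 * math.erfc(z_sigma / math.sqrt(2))

p = p_value(5.9)
```

For reference, `p_value(5.0)` reproduces the familiar 5 σ discovery threshold of roughly 3 × 10⁻⁷; a *local* significance still needs a look-elsewhere correction before being quoted globally, which is why the combined ATLAS result distinguishes the two.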
  • Item
    Interacting dark matter: decay and bremsstrahlung processes
    Galea, Ahmad Jacob (2013)
    Though there is substantial indirect astrophysical evidence for the existence of dark matter (DM), it has yet to be directly detected. Consequently, little is known about its internal structure. It is possible that there is a small but finite non-gravitational interaction between dark matter and the Standard Model (SM) which may have observable consequences. The purpose of this thesis is the exploration of some of these interactions and consequences. In particular we consider the possibility that dark matter is unstable on long timescales, as motivated by discrepancies between simulation and observation of structure on sub-galactic scales. We also consider the consequences of electroweak radiative corrections to annihilation processes involving dark matter, as such corrections are necessarily present in many well motivated models. We consider this possibility in the contexts of dark matter annihilation in galactic halos, and production in colliders. Chapter 1 provides an introduction to dark matter, including some of its astrophysical and particle aspects. As a motivation for the following sections, we begin by briefly outlining some of the observational evidence for dark matter. We go on to discuss structure formation, and the cold dark matter distribution on galactic scales. Next we discuss the possibility of non-gravitational interactions involving dark matter, including decay, annihilation, scattering off nuclei, and production. Finally we discuss the determination of the relic abundance in the early Universe, including a discussion of models involving coannihilation. Late decaying dark matter has been proposed as a solution to the small scale structure problems inherent to cold dark matter cosmology. In these models the parent dark matter particle is unstable, and decays into a daughter with near degenerate mass, plus a relativistic final state. 
    In Chapter 2 we review the observational constraints on decaying dark matter, and construct explicit particle physics models to realize this scenario. To achieve this, we introduce a pair of fermionic dark matter candidates and a new scalar field, which obey either a Z4, or a U(1) symmetry. Through the spontaneous breaking of these symmetries, and coupling of the new fields to standard model particles, we demonstrate that the desired decay process may be obtained. We also discuss the dark matter production processes in these models. In Chapter 3 we investigate electroweak radiative corrections to dark matter annihilation into leptons, in which a W or Z boson is also radiated. In many dark matter models the annihilation rate into fermions is helicity suppressed. We demonstrate that bremsstrahlung processes can remove this helicity suppression, causing the branching ratios Br($\ell\nu W$), Br($\ell^+\ell^-Z$), and Br($\bar\nu \nu Z$) to dominate over Br($\ell^+\ell^-$) and Br($\bar\nu \nu$). We find this effect to be most significant in the limit where the dark matter mass is nearly degenerate with the mass of the boson which mediates the annihilation process. Finally, in Chapter 4, we investigate a mono-Z process as a potential dark matter search strategy at the Large Hadron Collider (LHC). In this channel a single Z boson recoils against missing transverse momentum attributed to dark matter particles, $\chi$, which escape the detector. For illustrative purposes we consider the process $q\bar{q} \to \chi\chi Z$ in a toy dark matter model, where the Z boson is emitted from either the initial state quarks, or from the internal propagator. We look for muonic decays of the Z, showing the Standard Model backgrounds to this process to be easily removable with modest selection cuts. We compare signal with Standard Model backgrounds and demonstrate that there exist regions of parameter space where the signal may be clearly visible above background in future LHC data.
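The "modest selection cuts" for the mono-Z channel amount to a Z-mass window on the dimuon pair plus a missing-transverse-momentum requirement. The toy below draws signal- and background-like events from simple stand-in distributions (exponential MET tails, Gaussian mass peaks; none of the numbers come from the thesis) purely to show the shape of such a selection.

```python
import numpy as np

# Toy mono-Z selection: keep events whose dimuon invariant mass sits
# near m_Z and whose missing transverse momentum (MET) is large.
# Distributions are illustrative stand-ins, not a physics simulation.
rng = np.random.default_rng(2)
n_sig, n_bkg = 1000, 1000

# "Signal": on-shell Z (narrow peak) recoiling against DM (hard MET).
m_sig = rng.normal(91.2, 2.5, n_sig)
met_sig = rng.exponential(150.0, n_sig)
# "Background": broad dimuon mass, soft MET.
m_bkg = rng.normal(91.2, 20.0, n_bkg)
met_bkg = rng.exponential(30.0, n_bkg)

def select(m, met, window=10.0, met_cut=100.0):
    """Z-mass window plus MET cut (cut values illustrative)."""
    return (np.abs(m - 91.2) < window) & (met > met_cut)

eff_sig = select(m_sig, met_sig).mean()
eff_bkg = select(m_bkg, met_bkg).mean()
```

Even this cartoon shows the mechanism: because the dark matter carries away momentum, tightening the MET cut costs little signal efficiency while suppressing the soft-MET background by orders of magnitude.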
  • Item
    Generation of shaped cold electron bunches for ultrafast electron diffraction
    McCulloch, Andrew James (2013)
    This thesis presents the development of a new electron source with the goal of single-shot Ultrafast Electron Diffraction (UED) of biological samples. A source capable of UED should be both bright and coherent. These properties are both enhanced by reducing the electron temperature. A Cold Atom Electron Source (CAES) produces cold electron bunches by near-threshold ionisation of laser-cooled atoms. A Magneto-Optical Trap (MOT) is used to cool and trap rubidium atoms to a temperature of 70 μK, and electrons are liberated from the cold atoms using two-colour photoionisation. The temperature of the photoelectrons is as low as 10 K, limited by the extraction process. Upon ionisation, a charged particle cloud is created from the cold atoms. An investigation into the origin of the electron temperature is presented. The effect of ion position correlations within the charged particle cloud is shown to play a small role, but at low density the major contribution is from the scattering of electrons from their parent ions. A model for the extraction of an electron from an atom in a Stark potential is developed and used to explain the observed distributions of photoelectrons. The effects of finite electron temperature on beam parameters relevant for diffraction are presented. Electron beam quality can be degraded by Coulomb interactions within the bunch. Such effects can be ameliorated by controlling the initial electron density distribution to produce uniform ellipsoidal electron bunches. Ellipsoidal bunches have internal fields which are linear as a function of position, which upon evolution do not degrade the beam coherence, and the Coulomb expansion can be completely reversed using linear optics. The cold atom source has the unique capability to shape the initial electron density distribution in three dimensions. Control over the ionisation volume is achieved via spatially modulating the intensity of the light fields used for ionisation.
    A method for the production of arbitrarily shaped electron bunches was developed and implemented. Ellipsoidal electron bunches were produced and, in addition, were used to determine a source temperature of 15 K. The ability to shape the initial electron bunch allowed for a novel implementation of the “pepper-pot” high-precision emittance measurement technique. The brightness of a beam is fundamentally limited by the initial phase space density of the source. Careful characterisation and optimisation of the initial emittance is therefore vital, and the unique shaping abilities of a CAES allow these measurements and optimisation to be performed in real time. Cold Atom Electron Sources have previously been limited to the production of electron pulses with durations of order nanoseconds, too long for UED. A method is presented for reducing the pulse length to a few hundred picoseconds, short enough for Radio Frequency (RF) cavity compression to the sub-100 fs durations required for UED. The production of short electron pulses relies on the use of a femtosecond laser pulse and quasi-coherent two-colour photoionisation, which reduces the pulse length. Counterintuitively, the high bandwidth of the laser pulse does not adversely affect the transverse beam qualities, and the electron pulses remain highly coherent. The intrinsically high coherence of the electrons provided by a CAES, combined with the production of short, ideally distributed electron bunches, should allow for the realisation of a source capable of single-shot diffractive imaging of weakly scattering molecules.
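The beam-quality figure of merit behind these emittance measurements is the RMS emittance, computed from second moments of the transverse phase space. A minimal sketch, using a correlated Gaussian as a stand-in for a measured bunch (all numbers illustrative):

```python
import numpy as np

# RMS emittance of a transverse phase-space distribution (x, x'):
#   eps = sqrt(<x^2><x'^2> - <x x'>^2)
# A linear x-x' correlation (as produced by drift or linear optics)
# leaves eps invariant; only nonlinear forces grow it.
rng = np.random.default_rng(3)
n = 100_000
sigma_x, sigma_xp, corr = 1e-4, 1e-3, 0.5   # m, rad, correlation (toy)
cov = [[sigma_x**2, corr * sigma_x * sigma_xp],
       [corr * sigma_x * sigma_xp, sigma_xp**2]]
x, xp = rng.multivariate_normal([0, 0], cov, n).T

cross = ((x - x.mean()) * (xp - xp.mean())).mean()
eps = np.sqrt(x.var() * xp.var() - cross**2)
# Analytic value for a correlated Gaussian, for comparison:
eps_expected = sigma_x * sigma_xp * np.sqrt(1 - corr**2)
```

This invariance under linear transformations is why uniformly filled ellipsoidal bunches matter: their internal space-charge fields are linear in position, so the resulting correlated expansion can be undone by linear optics without any growth in eps.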