School of Physics - Theses
Semi-analytic galaxy formation during the epoch of reionisation
Semi-analytic models play an important role in modelling the epoch of reionisation. This thesis presents three studies related to this topic. First, we measure clustering segregation with both UV luminosity and stellar mass at z > 4, and compare the results with predictions from the Meraxes semi-analytic model. Our results suggest that the dependence of clustering strength on UV luminosity is stronger than its dependence on stellar mass, indicating that UV luminosity is more tightly correlated with halo mass than stellar mass is. Secondly, we investigate dust extinction in the early Universe. Our method utilises the Meraxes semi-analytic model to produce intrinsic galaxy luminosities and adopts parametric relations to estimate dust extinction. A novelty of our approach is that intrinsic luminosity and dust extinction are determined simultaneously by calibrating both the galaxy formation and dust models against UV observations alone. Our results suggest a factor-of-two systematic error in estimates of the cosmic star formation rate density based on the dust law of the local Universe. Finally, we present a method to augment N-body simulations using a Monte Carlo algorithm, which increases the mass resolution of the simulations. The results can be used by semi-analytic models of reionisation to overcome the challenge that convergent predictions of the reionisation history require both high mass resolution and a large simulation volume. The effectiveness of our method is tested against a high-resolution, small-volume N-body simulation.
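To make the augmentation idea concrete, below is a minimal sketch of the generic approach, assuming a simple power-law halo mass function dn/dM ~ M^alpha and inverse-transform Monte Carlo sampling; the slope, mass limits and sample size are illustrative placeholders, not the algorithm developed in the thesis.

    import numpy as np

    # Hedged sketch: populate a catalogue with Monte Carlo haloes below the
    # parent simulation's resolution limit, assuming dn/dM ~ M^alpha.
    rng = np.random.default_rng(42)

    alpha = -1.9       # assumed mass-function slope (illustrative)
    m_res = 1e9        # resolution limit of the parent simulation [Msun]
    m_min = 1e8        # target mass limit after augmentation [Msun]
    n_new = 100_000    # number of unresolved haloes to draw

    # Inverse-transform sampling of M in [m_min, m_res] for dn/dM ~ M^alpha
    a1 = alpha + 1.0
    u = rng.random(n_new)
    masses = (m_min**a1 + u * (m_res**a1 - m_min**a1)) ** (1.0 / a1)

    print(f"median augmented halo mass: {np.median(masses):.2e} Msun")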
Astigmatic phase retrieval of lightfields with helical wavefronts
The controlled use of coherent radiation has led to the development of a wide range of imaging methods in which aspects of the phase are enhanced through diffraction and propagation. A mathematical description of the propagation of light allows us to determine the properties of an optical wavefield in any plane. When a sample is illuminated with coherent planar illumination and its diffracted wavefield is recorded in the far field of propagation, a direct inverse calculation of the phase can be performed quickly through computational means – the fast Fourier transform. Algorithmic processing is required, however, because only the intensity of the diffracted wavefield can be recorded. To determine structural information about the sample, some other information must be known about the experimental system. What is known, and how it is processed computationally, has led to the development and successful application of a broad spectrum of iterative phase reconstruction algorithms.

Vortices in lightfields have a helical structure to their wavefront, at the core of which there necessarily exists a screw discontinuity in the phase. They have a characteristic intensity distribution comprising a radially symmetric bright ring around a dark core, which appears identical for either handedness of the vortex. Observation of a vortex's intensity alone therefore cannot determine its true direction of rotation. The ubiquitous presence of vortices in all lightfields hinders the success of phase reconstruction methods based on planar illumination and, where these succeed, renders any reconstruction of the phase non-unique, owing to the ambiguity associated with their helicity. A controlled spherical phase distortion can break the symmetry of the appearance of the vortices and, hence, remove the ambiguity from the system and drive algorithms to a solution. For the pathological case of an on-axis vortex, however, spherical distortion does not break the radial symmetry. The astigmatic phase retrieval method instead separates the spherical distortion into cylindrical distortions in two orthogonal directions. This form of phase distortion breaks the symmetry of a vortex, allowing a unique determination of the phase. The incorporation of this cylindrical distortion into an iterative phase reconstruction algorithm forms the basis of the astigmatic phase retrieval (APR) method.

Presented in this thesis is the creation and propagation of lightfields with helical wavefronts, produced through simulation and experiment. The effects of cylindrical distortion on vortices are explored in detail, particularly for split high-charge vortices, whose positions can inform the type and strength of the applied phase distortion. Experimentally, on-axis vortices are created and distorted for the purposes of astigmatic phase retrieval in both visible light and X-ray wavefields. This thesis presents the first experimental demonstration of the APR method, successfully applied optically with a simple test sample. The method is also applied to lightfields with helical wavefronts. Successful unambiguous reconstructions of on-axis charge-one and charge-two visible light vortices are presented, constituting the first experimental demonstration of the unique phase reconstruction of an on-axis vortex from intensity measurements alone. Experiments are then performed to apply the method to vortices created in X-ray wavefields.
The parameters of the experiment and the data have not, however, allowed for a successful reconstruction in this case. It is demonstrated through extensive simulation analysis that the APR method is a fast and robust imaging method. It is also shown that, through observation of the error metric, experimental parameters can be corrected or even determined, making the method successful even without a priori knowledge of the experimental system. The application of the APR method as a general imaging technique for use in high-resolution X-ray diffraction experiments is, therefore, a logical extension of the work of this thesis.
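The symmetry breaking that underpins APR is straightforward to reproduce numerically. The sketch below, with assumed beam parameters and not drawn from the thesis itself, builds a charge-one vortex, imprints a cylindrical lens phase in the x direction only, and propagates to the far field with a fast Fourier transform; flipping the sign of charge mirrors the tilted lobe pattern, which is the signature that removes the helicity ambiguity.

    import numpy as np

    # Hedged sketch: an optical vortex with an astigmatic (cylindrical) phase.
    N, width = 512, 4e-3                   # grid points, grid width [m]
    wl = 633e-9                            # wavelength [m] (illustrative)
    x = np.linspace(-width/2, width/2, N)
    X, Y = np.meshgrid(x, x)
    R, TH = np.hypot(X, Y), np.arctan2(Y, X)

    charge = 1                             # topological charge (+1 or -1)
    w0 = 0.5e-3                            # beam waist [m]
    f_cyl = 0.5                            # cylindrical lens focal length [m]

    field = (R/w0)**abs(charge) * np.exp(-(R/w0)**2) * np.exp(1j*charge*TH)
    field *= np.exp(-1j * np.pi / (wl * f_cyl) * X**2)  # cylindrical phase, x only

    far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    intensity = np.abs(far)**2             # tilted lobes flip with vortex handedness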
Practical Aspects of the Preparation of NV Centers in Diamond for Quantum Applications and Magnetometry
This thesis presents the results of four experimental projects that revolve around the practical aspects of using NV centers for quantum applications. The core of this work deals with the coherence time of NV centers and how it is affected by damage introduced into the diamond lattice by ion implantation; we discovered that, while the emission of the NV center is sensitive to this damage, the coherence time is not. The other topics of this work cover a novel method to deposit isolated nanodiamonds using aerosols, and a method to secure the nanodiamonds to silicon substrates using self-assembled monolayers. Finally, the work concludes with a proposal to use the magnetic field produced by spin vortices to increase the coherence time of NV centers, where some preliminary results of the spin vortex fabrication are presented.
The host galaxies of high-redshift quasars
In the early Universe, we observe supermassive black holes with masses of up to a billion times the mass of the Sun, accreting at or even above the Eddington limit. These high-redshift quasars are some of the most luminous objects in the Universe, and raise many questions about the formation and growth of the first black holes. Investigating their host galaxies provides a useful probe for understanding these high-redshift quasars. In the local Universe, there are clear correlations between the mass of a supermassive black hole and the properties of its host galaxy, indicating a black hole–galaxy co-evolution. Exploring how these black hole–host relations evolve with redshift can give valuable insights into why these relations exist. Studying the host galaxies of high-redshift quasars thus provides vital insights into the early growth of supermassive black holes and the black hole–galaxy connection. In this thesis I use three techniques to study the host galaxies of high-redshift quasars: the Meraxes semi-analytic model, the BlueTides hydrodynamical simulation, and observations with the Hubble Space Telescope. Meraxes is a semi-analytic model designed to study galaxy formation and evolution at high redshift. Using this model, I study the sizes, angular momenta and morphologies of high-redshift galaxies. I also use Meraxes to study the evolution of black holes and their host galaxies from high redshift to the present day. The model predicts no significant evolution in the black hole–host mass relations out to high redshift, with the growth of galaxies and black holes tightly related even in the early Universe. I also examine the growth mechanisms of black holes in Meraxes, finding that the majority of black hole growth is driven by internal disc instabilities, and not by galaxy mergers. I then use the BlueTides cosmological hydrodynamical simulation to investigate the detailed properties of quasar host galaxies at z=7. I find that the hosts of quasars are generally highly star-forming and bulge-dominated, and are significantly more compact than the typical high-redshift galaxy. Using BlueTides I make predictions for observations of quasars with the James Webb Space Telescope, finding that detecting quasar hosts at these redshifts may be possible, but will still be challenging with this groundbreaking instrument. Finally, I use observations from the Hubble Space Telescope to obtain deep upper limits on the rest-frame ultraviolet luminosities of six z~6 quasars. I also detect up to 9 potential companion galaxies surrounding these quasars, which may be interacting with their host galaxies. Observations with the upcoming James Webb Space Telescope are needed to detect quasar host galaxies in the rest-frame ultraviolet and optical for the first time.
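For scale, the Eddington limit mentioned above follows from the standard formula L_Edd = 4 pi G M m_p c / sigma_T; a quick back-of-envelope evaluation (standard constants only, not a result from the thesis) for a billion-solar-mass black hole:

    import numpy as np

    # Eddington luminosity of a 10^9 Msun black hole (SI units).
    G, c = 6.674e-11, 2.998e8
    m_p, sigma_T = 1.673e-27, 6.652e-29
    M_sun, L_sun = 1.989e30, 3.828e26

    M = 1e9 * M_sun
    L_edd = 4 * np.pi * G * M * m_p * c / sigma_T
    print(f"L_Edd ~ {L_edd:.2e} W ~ {L_edd/L_sun:.1e} L_sun")  # ~1.3e40 W, ~3e13 L_sun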
Raman spectroscopy of artists' materials: Advances in characterisation and analysis
Cultural heritage research and conservation practice seek to preserve our cultural heritage. Understanding the composition of the materials that comprise an artwork or cultural object is critical to inform collections management and preservation treatments; however, methods of analysis are constrained to those that are either non-destructive or can obtain the desired information from micro-samples, in order to retain the integrity of the object. Raman spectroscopy is an ideal technique for characterising cultural materials as it is non-destructive, requires relatively little sample preparation and utilises short measurement times. Micro-Raman spectroscopy is especially useful for examining micro-samples and painting cross-sections, as its spatial resolution is sufficiently high to target individual pigment grains. Despite these advantages, there are limitations to the use of Raman spectroscopy in the cultural heritage conservation context. Firstly, fluorescence is frequently observed in cultural materials. Secondly, given the large number of compositionally complex and heterogeneous materials encountered in conservation, there is a need for advanced methods that can deal with large sample datasets. These methods are needed to facilitate examination of both non-spatial data and spatial (imaging) data, to extract the maximum amount of information possible from the limited sample available. It is the aim of this thesis to demonstrate how Raman micro-spectroscopy can provide new and useful information about paintings using the following strategies. (1) Creation of a reliable pigment database supported by X-ray diffraction data to confirm the structural identity of each pigment. When this work was started, a spectral library for pigment identification was needed to provide a comprehensive set of pigment reference spectra against which to compare unknowns. A Raman spectral pigment reference library was developed comprising over 180 samples from the National Gallery of Victoria’s pigment collection. Samples were validated using X-ray powder diffraction (XRPD) prior to Raman analysis. This Raman spectral pigment library can also support future identification of materials and artworks, alongside other Raman spectral databases that are now available. (2) Utilisation of the database in conjunction with longer wavelengths to mitigate fluorescence effects. In the presence of fluorescent components, Raman analysis is hampered. To mitigate the impact of fluorescence observed at 514 nm, the efficiency of 830 nm incident irradiation was examined. It was found effective and was used to answer five research questions (case studies) regarding authenticity and art historical practice, and to inform attribution, provenance studies and conservation treatments: a) Mock-up paintings were prepared to trial the experimental methodology. The overpaint in a 16th century portrait miniature was identified as zinc white pigment, indicating the overpaint was applied after the mid-19th century. b) An Australian Impressionist artist’s catalogue from 1889 was examined and the inks found to contain Prussian blue and vermilion as the main pigments, with a minor addition of minium and perhaps a lake pigment, providing insight into the artist’s technical methods. c) Overpainted cracks in Tom Roberts’ iconic painting Shearing the Rams were suspected to have been caused by the type of pigment used.
The main pigment was found to be vermilion, which is not known to cause cracking, so the cracking is now believed to be due to the high ratio of binding medium in the paint. d) The Finding of Moses (1712) was reattributed to Giambattista Tiepolo (1696-1770) after Prussian blue was identified as a key component of the paint layer. e) The organic blue colour in an Indian palampore was dissimilar to indigo but matched a published spectrum of indigo on silk, highlighting the influence of local structure and bonding on subtle features in Raman spectra. (3) Identification of a practical surface-enhanced Raman spectroscopy (SERS) method for conservation, to increase the Raman signal and make it visible over the photoluminescent background. This work reviewed several SERS substrate configurations, then prepared and evaluated substrates made by a) colloidal Ag nanoparticles, b) Ag-coated nanospheres, c) Ag foil etching and d) electroless deposition of Ag on a Cu coupon. Ease of production and reproducibility were used to select the most practical substrate for SERS analysis in conservation, namely that prepared by the electroless deposition method. The selected substrate was used to identify dammar as the varnish used on an important Italian Renaissance painting by Tiepolo, with the outcomes published in 2008. (4) Development of new methods of data analysis for managing complexity in spectra and large datasets. Multivariate analysis techniques have been used to analyse spectral datasets in numerous fields and provide an excellent opportunity to enhance the analysis of large Raman spectral datasets in conservation. Principal components analysis (PCA) and hierarchical cluster analysis (HCA) were used to visualise the structural relationships amongst Raman spectra of natural and synthetic resins. It has been demonstrated that the two most utilised natural resins, dammar and mastic, can be distinguished from one another by PCA and HCA of their Raman spectra, irrespective of supplier and naturally occurring sample variance. This work also shows, using PCA and HCA, that the synthetic cyclohexanone resins Ketone N and Laropal K 80 are indistinguishable, whilst MS2A, another synthetic cyclohexanone painting varnish, is easily separable. The synthetic resins were found to be quite homogeneous in composition, with little variability in their Raman spectral response, in contrast to the much greater degree of variance observed within the natural resins amber, copal, colophony and sandarac. Finally, a multivariate image analysis method, assembling the data into a 3D data cube and using PCA and clustering techniques, was developed. The method for assembling and analysing the spectral 3D data cubes was validated using prepared samples of known pigments in binder. The technique was then used in the analysis of an Italian Renaissance painting. PCA and clustering methods were applied to SEM-EDS elemental maps of Ti, Sn, Si, Pb, Mn, Mg, K, Fe, Cu, Ca, Al, S (corrected for Pb) and O, to develop a compositional map of the materials used and indicate their sequence in the layered construction of the painting. Secondly, using Raman maps of spectral intensity collected at 830 nm to mitigate fluorescence, together with the spectral database, vermilion, lead-tin yellow type 1 and a blue-green pigment consistent with terre verte or another green silicate pigment were found in the paint layer. The ground layer was found to contain anhydrite with large gypsum inclusions.
The identification of these components led to the attribution of a previously anonymous painting to Dosso Dossi, with the outcomes published in 2008 and receiving 70 citations by 2019. The multivariate methods developed here have been further applied in published research in both conservation and non-conservation applications, as noted in this thesis.
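As a rough illustration of the resin-classification idea, the sketch below applies PCA followed by hierarchical clustering to synthetic toy spectra (invented band positions and noise levels, not the thesis data or pipeline); two families of spectra that differ in a single band separate cleanly into two clusters.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    wn = np.linspace(200, 1800, 400)              # wavenumber axis [cm^-1]

    def band(center, width):                      # Lorentzian-like Raman band
        return 1.0 / (1.0 + ((wn - center) / width)**2)

    resin_a = band(1450, 30) + 0.6*band(700, 25)  # toy "dammar-like" signature
    resin_b = band(1450, 30) + 0.6*band(1700, 25) # toy "mastic-like" signature
    spectra = np.vstack([s + 0.05*rng.standard_normal(wn.size)
                         for s in [resin_a]*10 + [resin_b]*10])

    scores = PCA(n_components=2).fit_transform(spectra)   # project onto 2 PCs
    labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
    print(labels)                                 # two clean clusters expected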
A measurement of the E-mode polarisation spectrum of the cosmic microwave background with the POLARBEAR experiment
The cosmic microwave background (CMB) stands as one of the most interesting subjects of astronomical surveys. The CMB light carries information about the history and origin of the universe. Polarisation of the CMB can be decomposed into E-mode and B-mode polarisation. My thesis focuses on the measurement of the E-mode polarisation power spectrum of the cosmic microwave background using 150 GHz data taken from July 2014 to December 2016 with the POLARBEAR experiment. POLARBEAR is an ongoing cosmic microwave background experiment that began taking data in 2012. The target area of observation for the data in this thesis is a large patch of sky of over 600 deg^2. A continuously rotating half-wave plate was installed in the POLARBEAR telescope and heavily influences the overall data analysis. This thesis is outlined as follows. Chapter 1 reviews the standard model of cosmology and explains the physics of the CMB and CMB polarisation. Chapter 2 describes the POLARBEAR experiment. Chapter 3 gives a brief review of the continuously rotating half-wave plate and its impact on the recorded data. Chapter 4 describes the calibration of the recorded data. Chapter 5 describes the pipeline used to process the recorded data. Chapter 6 details the validation of the established power spectrum pipeline, the data selected following Chapter 5, and the estimation of systematics in the POLARBEAR data. Chapter 7 details the E-mode power spectrum estimated from the selected data. I use the measured E-mode power spectrum in combination with other data sets – Planck 2018, ACTPol, SPT-SZ and BAO data – to constrain cosmological parameters. The combined data set is consistent with the standard Lambda Cold Dark Matter model of cosmology. I also consider extensions to the standard model, and present improved constraints on these extensions. Chapter 8 summarises the preceding chapters and gives an outlook on the Simons Array, the successor to the POLARBEAR experiment.
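For readers unfamiliar with the E/B decomposition, a minimal flat-sky sketch is given below; it uses the standard Fourier-space rotation E = Q~ cos(2phi) + U~ sin(2phi), B = -Q~ sin(2phi) + U~ cos(2phi), and is purely illustrative of the concept rather than a reflection of the POLARBEAR pipeline, which involves far more careful filtering and systematics control.

    import numpy as np

    # Hedged sketch: decompose Stokes Q/U maps into E and B modes (flat sky).
    def qu_to_eb(Q, U, pix_rad):
        n = Q.shape[0]
        freq = np.fft.fftfreq(n, pix_rad) * 2 * np.pi
        lx, ly = np.meshgrid(freq, freq)      # Fourier wavevector components
        phi = np.arctan2(ly, lx)
        Qf, Uf = np.fft.fft2(Q), np.fft.fft2(U)
        Ef = Qf * np.cos(2*phi) + Uf * np.sin(2*phi)
        Bf = -Qf * np.sin(2*phi) + Uf * np.cos(2*phi)
        return np.real(np.fft.ifft2(Ef)), np.real(np.fft.ifft2(Bf))

    # A pure-E input map should return B consistent with zero (up to numerics).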
Learning invariant representations with applications to high-energy physics
In searches for new physics in high-energy physics, experimental analyses are primarily concerned with physical processes which are rare or hitherto unobserved. To claim a statistically significant discovery or exclusion of new physics when studying such decays, it is necessary to maintain an appropriate signal-to-noise ratio. This makes systems capable of efficient discrimination of signal from datasets overwhelmingly dominated by background events an important component of modern experimental analyses. However, naïve application of these methods is liable to introduce poorly understood systematic effects which may ultimately degrade the significance of the final measurement. To understand the origin of these systematic effects, we note that there are certain protected variables in experimental analyses which should remain unbiased by the analysis procedure. Variables on which the input parameters of new-physics models strongly depend, and variables used to model background contributions to the total measured event yield, fall into this category. Systems responsible for separating signal from background events achieve this by sampling events with signal-like characteristics from all candidate events. If this procedure introduces sampling bias into the distribution of protected variables, it introduces systematic effects into the analysis which are difficult to characterize. Thus it is desirable for these systems to distinguish between signal and background events without using information about certain protected variables. Beyond high-energy physics, building systems that make decisions independent of certain protected or sensitive information is an important theme in the real-world application of machine learning and statistics. We address this task as an optimization problem: finding a representation of the observed data that is invariant to the given protected quantities. This representation should satisfy two competing criteria. Firstly, it should contain all relevant information about the data so that it may be used as a proxy for arbitrary downstream tasks, such as inference of unobserved quantities or prediction of target variables. Secondly, it should not be informative of the given protected quantities, so that downstream tasks are not influenced by these variables. If the protected quantities to be censored from the intermediate representation contain information that can improve the performance of the downstream task, removing this information will likely affect that task adversely. The challenge lies in balancing both objectives without significantly compromising either requirement. The contribution of this thesis is a new set of methods for addressing this problem. This thesis is divided into two parts, which are largely independent of one another. The first part constrains the optimization procedure by which the representation is learnt, reducing how informative the representation is of the given protected quantities, such that the representation is invariant to changes in these quantities. The second part approaches the problem from a latent variable model perspective, in which additional unobserved (latent) variables are introduced to explain the interactions between different attributes of the observed data. These latent variables can be interpreted as a more fundamental, compact, lower-dimensional representation of the original high-dimensional unstructured data.
By constraining the structure of this latent space, we demonstrate that we can isolate the influence of the protected variables into a latent subspace. This allows downstream tasks to access only the relevant subset of the learned representation, without being influenced by protected attributes of the original data. The feasibility of our proposed methods is demonstrated through application to a challenging experimental analysis in precision flavor physics at the Belle II experiment: the study of the b -> s gamma transition, a sensitive probe of potential new physics.
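One widely used instance of the first approach, penalising how informative the learned representation is of the protected variable, is adversarial training with a gradient-reversal layer. The sketch below is a schematic of that generic technique, not the thesis's exact method; the network sizes, losses and variable types are placeholders.

    import torch
    import torch.nn as nn

    # Gradient reversal: identity on the forward pass, flipped (scaled)
    # gradient on the backward pass.
    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 8))
    task_head = nn.Linear(8, 1)   # signal/background classifier
    adv_head = nn.Linear(8, 1)    # adversary predicting the protected variable

    def loss_fn(x, y_task, y_protected, lam=1.0):
        z = encoder(x)
        task_loss = nn.functional.binary_cross_entropy_with_logits(
            task_head(z).squeeze(-1), y_task)
        # The adversary sees z through gradient reversal, so minimising the
        # total loss drives z to carry less information about y_protected.
        adv_loss = nn.functional.mse_loss(
            adv_head(GradReverse.apply(z, lam)).squeeze(-1), y_protected)
        return task_loss + adv_loss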
Distributed Matrix Product State Simulations of Large-Scale Quantum Circuits
Before large-scale, robust quantum computers are developed, it is valuable to be able to classically simulate quantum algorithms to study their properties. To do so, we developed a numerical library for simulating quantum circuits via the matrix product state formalism on distributed memory architectures. By examining the multipartite entanglement present across Shor’s algorithm, we were able to effectively map a high-level circuit of Shor’s algorithm to the one-dimensional structure of a matrix product state, enabling us to perform a simulation of a specific 60-qubit instance in approximately 14 TB of memory: potentially the largest non-trivial quantum circuit simulation ever performed. We then applied matrix product state and matrix product density operator techniques to simulating one-dimensional circuits from Google’s quantum supremacy problem with errors, and found the problem mostly resistant to our methods.
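The core primitive of such a simulator is applying a two-qubit gate to adjacent MPS site tensors and truncating the bond dimension with a singular value decomposition. A minimal numpy sketch of that standard step (illustrative only; the thesis library is distributed and far more elaborate):

    import numpy as np

    # Apply a 4x4 gate to site tensors A1 (chiL,2,chiM) and A2 (chiM,2,chiR),
    # then restore MPS form with a truncated SVD.
    def apply_two_qubit_gate(A1, A2, gate, chi_max):
        chiL, chiR = A1.shape[0], A2.shape[2]
        theta = np.einsum('lam,mbr->labr', A1, A2)           # merge two sites
        theta = np.einsum('cdab,labr->lcdr',
                          gate.reshape(2, 2, 2, 2), theta)   # act with the gate
        u, s, vh = np.linalg.svd(theta.reshape(chiL*2, 2*chiR),
                                 full_matrices=False)
        chi = min(chi_max, int(np.sum(s > 1e-12)))           # truncate the bond
        u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
        return u.reshape(chiL, 2, chi), (np.diag(s) @ vh).reshape(chi, 2, chiR)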
Measurement of Direct CP Asymmetry and Branching Fraction in B0→D0π0 and B+→D0π+ at the Belle Experiment
This thesis describes the measurement of the direct CP asymmetry and branching fraction for the hadronic B decays B0 -> D0 pi0 and B+ -> D0 pi+. The study uses the full dataset of 711 fb^(-1) collected at the Y(4S) resonance by the Belle experiment at the KEKB accelerator in Tsukuba, Japan. Event reconstruction, background suppression and modelling are first studied using Monte Carlo simulations, before the yield and direct CP asymmetry are extracted in a three-dimensional unbinned extended maximum likelihood fit. B+ -> D0 pi+ is measured first, as the control mode, to validate the methodology, before the same techniques are applied to B0 -> D0 pi0. The measured branching fractions and direct CP asymmetries are: Br(B0 -> D0 pi0) = (2.69 +/- 0.06 +/- 0.09) x 10^(-4), A_CP(B0 -> D0 pi0) = (0.10 +/- 2.05 +/- 1.29) x 10^(-2), Br(B+ -> D0 pi+) = (4.53 +/- 0.02 +/- 0.14) x 10^(-3), A_CP(B+ -> D0 pi+) = (0.19 +/- 0.36 +/- 0.60) x 10^(-2), where the first uncertainty is statistical and the second is systematic. This represents the world’s first measurement of the direct CP asymmetry of B0 -> D0 pi0. The measurements of the branching fractions of B0 -> D0 pi0 and B+ -> D0 pi+, and of the direct CP asymmetry of B+ -> D0 pi+, are the most precise to date and are consistent with the current world-average values.
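As a schematic of the fitting technique, the toy below performs a one-dimensional extended unbinned maximum likelihood fit (the analysis itself is three-dimensional, and all shapes and yields here are invented): minimise NLL = (n_s + n_b) - sum_i log[n_s f_s(x_i) + n_b f_b(x_i)].

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(5.28, 0.05, 200),   # toy "signal" events
                        10 * rng.random(1000)])        # toy flat "background"

    def nll(params):
        n_s, n_b, mu, sigma = params
        dens = n_s * norm.pdf(x, mu, sigma) + n_b / 10.0   # flat bkg on [0, 10]
        return (n_s + n_b) - np.sum(np.log(dens))          # extended NLL

    res = minimize(nll, x0=[150.0, 900.0, 5.3, 0.1],
                   bounds=[(1, None), (1, None), (5.0, 5.6), (0.01, 0.5)])
    print(res.x)    # fitted yields (n_s, n_b) and signal shape (mu, sigma)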
Weighing the Giants: Measuring galaxy cluster masses with CMB lensing
Galaxy clusters are powerful probes of cosmology. Their abundance depends on the rate of structure growth and the expansion rate of the universe, making the density of clusters highly sensitive to dark energy. Galaxy clusters additionally provide powerful constraints on the matter density, the matter fluctuation amplitude, and the sum of neutrino masses. However, cluster cosmology is currently limited by systematic uncertainties in cluster mass estimation. Generally, cluster masses are estimated using observable-mass scaling relations, where the observable can be optical richness, X-ray temperature, etc. The observable-mass scaling relation depends on complex cluster baryonic physics, which is not well understood, and any deviation in the baryonic physics will lead to uncertainties in the mass estimation. Gravitational lensing, on the other hand, offers one of the most promising techniques to measure cluster mass, as it directly probes the total matter content of the cluster. Gravitational lensing can additionally be used to calibrate the observable-mass scaling relations. The gravitational lensing source can be either optical galaxies or the cosmic microwave background (CMB). My thesis focuses on developing statistical and mathematical tools to robustly extract the cluster lensing signal from CMB data. We develop a maximum likelihood estimator to optimally extract the cluster lensing signal from CMB data. We find that the Stokes QU maps and the traditional EB maps provide similar constraints on mass estimates. We quantify the effect of astrophysical foregrounds on CMB cluster lensing analysis: while the foregrounds set an effective noise floor for the temperature estimator, the polarisation estimator is largely unaffected. We use realistic simulations to forecast that CMB cluster lensing is expected to constrain cluster masses at the 3-6% (1%) level for upcoming (next-generation) CMB experiments. One of the standard ways to extract the CMB-cluster lensing signal is the quadratic estimator. The thermal Sunyaev-Zel'dovich (tSZ) effect acts as a major contaminant in the quadratic estimator and induces significant systematic and statistical uncertainty. We develop a modified quadratic estimator to eliminate the tSZ bias and to significantly reduce the tSZ statistical uncertainty. Using our modified quadratic estimator, we constrain the masses of clusters in the Dark Energy Survey year-3 cluster catalogue. We also place constraints on the normalisation parameter of the optical richness-mass scaling relation. In addition to removing the tSZ bias, the modified quadratic estimator reduces the tSZ-induced statistical uncertainty by 40% for future low-noise CMB surveys.
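The essence of a maximum likelihood amplitude fit can be stated in a few lines: for data d = A*s + n, with a known template s and noise covariance N, the ML estimate is A_hat = (s^T N^(-1) d) / (s^T N^(-1) s), with variance 1/(s^T N^(-1) s). The toy below (invented template and white noise, not the thesis pipeline) illustrates the idea:

    import numpy as np

    def ml_amplitude(d, s, Ninv):
        den = s @ Ninv @ s
        return (s @ Ninv @ d) / den, 1.0 / np.sqrt(den)  # estimate, 1-sigma error

    rng = np.random.default_rng(3)
    s = np.exp(-np.linspace(0, 5, 100))      # toy lensing-profile template
    Ninv = np.eye(100) / 0.5**2              # white noise with sigma = 0.5
    d = 2.0 * s + 0.5 * rng.standard_normal(100)
    print(ml_amplitude(d, s, Ninv))          # recovers A ~ 2.0 with its error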
Quantum hyperpolarisation of nuclear spins and multi-modal microscopic imaging with diamond defect spins
Quantum technologies promise to impact several aspects of society. Examples include quantum computing to perform certain calculations significantly faster than current classical computers, quantum cryptography for more secure communications, quantum sensing to make measurements with unprecedented sensitivity and resolution, and specialised quantum devices such as quantum hyperpolarisers for enhanced medical imaging. However, the field is still in its infancy: most quantum technologies have been realised only in delicate laboratory settings with little prospect for real-world applications (e.g. quantum sensors), or are many years away from being mature enough to make an impact (quantum computing). This thesis develops two applications of quantum technologies, in the direction of quantum hyperpolarisation on the one hand and quantum sensing on the other, which utilise a quantum system particularly suited to practical applications: the nitrogen-vacancy (NV) centre in diamond. This diamond spin defect can be operated in ambient conditions, and the resulting quantum devices can be easily miniaturised for large-scale deployment. Specifically, in the first part of this thesis (chapters 2 to 4), two new techniques to realise hyperpolarisation (HP) of nuclear spins are developed. Through effective HP, ensembles of nuclear spins can be polarised far beyond the normal Boltzmann level, which can be used to enhance the spin signal for nuclear magnetic resonance (NMR) and imaging (MRI). Chapters 2 and 3 focus on exploiting direct cross-relaxation (CR) between the NV spin and the nuclear spin. Chapter 2 investigates a CR-based protocol for sensing and determines, through a study of the NV physics, in which regimes this protocol can be applied to nuclear spin detection. This study constructs a framework under which HP via CR can be realised. Chapter 3 continues in this direction and demonstrates that CR can be used to hyperpolarise external nuclear spins. A detailed understanding of the spin bath mechanics is developed, and the impact of rogue, uncontrolled NV spins on this spin bath is determined. Additionally, this protocol is compared with other HP techniques and shows a remarkable improvement in polarisation rate; however, it is particularly sensitive to magnetic field detuning. To overcome this issue, chapter 4 develops a different technique that relies on a dynamical decoupling protocol purposefully modified to achieve HP. This new technique has a slower polarisation rate than CR-based HP but is robust to the experimental errors involved in scaling these hyperpolarisation techniques. The second part of this thesis (chapters 5 and 6) exploits the quantum sensing properties of ensembles of NV centres in diamond to develop multi-modal microscopic imaging, which is a promising tool for device diagnosis and the study of mesoscopic phenomena. Specifically, chapter 5 develops and implements a technique for imaging the electric field simultaneously with the magnetic field. The technique is applied to the study of electric fields that are intrinsic to interfaces and junctions. The functionality of electronic devices (such as transistors) is fundamentally dictated by these fields, which have traditionally been opaque to probing except at the very surface. While the surface potential is crucial, a wealth of information is contained in the bulk structure, which is the focus of this study.
In chapter 6 the same sensing protocol is extended to image stress embedded in the diamond rather than electric fields. A series of different deformation sources is used to test and verify that the technique can determine the entire stress tensor with high sensitivity and micrometre spatial resolution. With these new imaging capabilities, extending traditional magnetic field sensing to electric field and stress, multi-modal NV imaging is a promising example of a quantum technology that may have an immediate impact in other fields of science.
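To see why the hyperpolarisation work in the first part matters, compare with the thermal Boltzmann polarisation P = tanh(hbar*gamma*B / (2*k_B*T)) from which conventional NMR starts; a back-of-envelope evaluation for 13C spins (standard constants, with an illustrative field and temperature) gives only a few parts per million, the baseline that hyperpolarisation schemes aim to beat by orders of magnitude.

    import numpy as np

    # Thermal 13C polarisation at B = 7 T and room temperature (SI units).
    hbar, kB = 1.0546e-34, 1.3807e-23
    gamma_13C = 2 * np.pi * 10.705e6     # 13C gyromagnetic ratio [rad/s/T]
    B, T = 7.0, 300.0

    P = np.tanh(hbar * gamma_13C * B / (2 * kB * T))
    print(f"thermal 13C polarisation: {P:.1e}")   # ~6e-6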