Deep fakes are proliferating in our society, and they are becoming increasingly indistinguishable from genuine photographs, audio recordings and videos. In this paper, I explore the challenges that deep fakes present for individuals who may suffer harm when these engines of disinformation target their identities. At the same time, I point to some positive use cases for deep fakes. As with many new technologies, I argue that a rush to regulate deep fakes risks stifling innovation and competition in the still-fledgling market for synthetic media, because regulation would shift resources from research and development towards compliance. I therefore argue for a more carefully considered, targeted approach designed to minimise the harms associated with deep fakes while leaving space for their benefits.