Discovering syntactic phenomena with and within precision grammars
Letcher, Ned (2018)
Precision grammars are hand-crafted computational models of human languages that are capable of parsing text to yield syntactic and semantic analyses. They are valuable for applications requiring the accurate extraction of semantic relationships, and they also enable hypothesis testing of holistic grammatical theories over quantities of text impossible to analyse manually. Their capacity to generate linguistically accurate analyses over corpus data also supports another application: augmenting linguistic descriptions with query facilities for retrieving examples of syntactic phenomena. In order to construct such queries, it is first necessary to identify the signature of target syntactic phenomena within the analyses produced by the precision grammar in use. This is often a difficult process, however, as analyses within the descriptive grammar can diverge from those in the precision grammar due to differing theoretical assumptions made by the two resources, the use of different sets of data to inform their respective analyses, and the exigencies of implementing large-scale formalised analyses. In this thesis, I present my research into developing methods for improving the discoverability of syntactic phenomena within precision grammars. This includes the construction of a corpus annotated with syntactic phenomena, which supports the development of syntactic phenomenon discovery methodologies. Included within this context is an investigation of strategies for measuring inter-annotator agreement over textual annotations in which annotators both segment and label the text, a property that traditional kappa-like measures do not support.
The second facet of my research involves the development of an interactive methodology, and an accompanying implementation, for navigating the alignment between dynamic characterisations of syntactic phenomena and the internal components of HPSG precision grammars associated with those phenomena. In addition to supporting the enhancement of descriptive grammars with precision grammars, this methodology has the potential to improve the accessibility of precision grammars themselves: it enables people not involved in their development to explore their internals using familiar syntactic phenomena, and allows grammar engineers to navigate their grammars through the lens of analyses different from those found in the grammar.
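To make the kappa limitation mentioned above concrete, here is a minimal sketch of Cohen's kappa between two annotators. The function name and toy labels are my own, not drawn from the thesis; the point is that the measure presupposes both annotators label the same predetermined units, so it cannot be applied directly when annotators also choose their own segment boundaries.

```python
# Cohen's kappa: chance-corrected agreement between two annotators
# who label the SAME fixed sequence of units.

def cohens_kappa(ann_a, ann_b):
    """Kappa for two equal-length label sequences over shared units."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    # Observed agreement: fraction of units both annotators label identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected chance agreement from each annotator's label distribution.
    labels = set(ann_a) | set(ann_b)
    p_e = sum((ann_a.count(l) / n) * (ann_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# This only works because both annotators label the same four units; if
# each had segmented the text differently, the sequences would not align.
print(cohens_kappa(["A", "A", "B", "B"], ["A", "B", "B", "B"]))  # 0.5
```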
Improving the efficiency and capabilities of document structuring
Marshall, Robert (2007)
Natural language generation (NLG), the problem of creating human-readable documents by computer, is one of the major fields of research in computational linguistics. The task of creating a document is extremely common in many fields of activity. Accordingly, there are many potential applications for NLG: almost any document creation task could potentially be automated by an NLG system. Advanced forms of NLG could also be used to generate a document in multiple languages, or as an output interface for other programs, which might ordinarily produce a less manageable collection of data. They may also be able to create documents tailored to the needs of individual users. This thesis deals with document structure, a recent theory which describes those aspects of a document's layout which affect its meaning. As well as being of theoretical interest, it is a useful intermediate representation in the process of NLG. There is a well-defined process for generating a document structure using constraint programming. We show how this process can be made considerably more efficient. This in turn allows us to extend the document structuring task to allow for summarisation and finer control of document layout. This thesis is organised as follows. Firstly, we review the necessary background material in both natural language processing and constraint programming.
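The idea of document structuring as a constrained search can be sketched in miniature. The unit names and constraints below are invented for illustration, and the brute-force enumeration is deliberately naive; a real constraint-programming formulation of the kind the thesis builds on would use propagation and pruning rather than trying every ordering.

```python
# Naive document structuring: find an ordering of text units that
# satisfies a set of precedence constraints, by exhaustive search.
from itertools import permutations

def structure(units, must_precede):
    """Return the first ordering where every (a, b) constraint
    'a appears before b' holds, or None if none exists."""
    for order in permutations(units):
        pos = {u: i for i, u in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in must_precede):
            return list(order)
    return None

layout = structure(
    ["summary", "background", "method", "results"],
    [("summary", "background"), ("background", "method"), ("method", "results")],
)
print(layout)  # ['summary', 'background', 'method', 'results']
```

Enumeration costs n! orderings, which is exactly why making the constraint-based process more efficient matters for structuring documents of realistic size.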
The effects of sampling and semantic categories on large-scale supervised relation extraction
Willy (2012)
The purpose of relation extraction is to identify novel pairs of entities which are related by a pre-specified relation such as hypernymy or synonymy. The traditional approach to relation extraction is to build a dedicated system for a particular relation, meaning that significant effort is required to repurpose the approach for new relations. We propose a generic approach based on supervised learning, which provides a standardised process for performing relation extraction across different relations and domains. We explore the feasibility of the approach over a range of relations and corpora, focusing particularly on the development of a realistic evaluation methodology for relation extraction. In addition, we investigate the impact of semantic categories on extraction effectiveness.
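The generic supervised framing described above can be sketched as binary classification over entity pairs. The toy sentences, the between-entity bag-of-words features, and the perceptron learner are my own illustrative choices, not the thesis's actual system, but they show the standardised shape of the approach: the same pipeline retargets to a new relation simply by supplying different labelled pairs.

```python
# Relation extraction as supervised binary classification of entity pairs.
from collections import defaultdict

def pair_features(sentence, e1, e2):
    """Bag of the words appearing between the two entity mentions."""
    toks = sentence.split()
    i, j = sorted((toks.index(e1), toks.index(e2)))
    return {w: 1.0 for w in toks[i + 1:j]}

class Perceptron:
    """A tiny linear classifier over sparse feature dicts."""
    def __init__(self):
        self.w = defaultdict(float)
        self.b = 0.0

    def predict(self, feats):
        return sum(self.w[k] * v for k, v in feats.items()) + self.b > 0

    def train(self, data, epochs=5):
        for _ in range(epochs):
            for feats, label in data:
                if self.predict(feats) != label:
                    delta = 1.0 if label else -1.0
                    for k, v in feats.items():
                        self.w[k] += delta * v
                    self.b += delta

# Toy training pairs for a hypernym relation.
train = [
    (pair_features("cat is a kind of animal", "cat", "animal"), True),
    (pair_features("oak is a type of tree", "oak", "tree"), True),
    (pair_features("cat sat near the animal", "cat", "animal"), False),
    (pair_features("oak grows beside the tree", "oak", "tree"), False),
]
model = Perceptron()
model.train(train)
print(model.predict(pair_features("dog is a kind of mammal", "dog", "mammal")))  # True
```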
Fundamentals of agent computation theory: semantics
Kinny, David Nicholas (2001)
About 5 years ago, the idea of software agents escaped from an obscure existence within the arcane field of Artificial Intelligence, and it is now running rampant through computer science, the software industry and the media, mutating violently as it goes and infecting many who come into contact with it. Despite humble origins in the study of Philosophy of Mind, the term agent has come to be applied to a diverse and disparate range of software constructs, and threatens soon to displace object from its primal position. Every computer scientist knows what agents are, or should be, although scant agreement upon definitions has been achieved, as so many variously qualified uses of the label now flourish. In the Artificial Intelligence research community where it was nurtured, however, the term still has a reasonably specific meaning: an agent is a situated or embedded system which participates in an ongoing interaction with some environment which it can observe and act upon. By assumption, an agent's behaviour is purposeful or motivated: it is thought of as wanting to perform some set of activities or achieve some set of goals and trying to do so when suitable opportunities present; in general it may be viewed as monitoring and controlling itself and its environment so as to bring about or maintain internal or external situations that it in some sense prefers. A very concrete example would be a robot, situated in the physical world, tasked to achieve certain objectives, but required to make its own moment-to-moment decisions about how and when to do so. But more often than not an agent inhabits an entirely artificial environment, within a single computer or a distributed network such as the Internet. It is with agents in this sense that this thesis is concerned. (From introduction)