University Library
A gateway to Melbourne's research publications
Minerva Access is the University's Institutional Repository. It aims to collect, preserve, and showcase the intellectual output of staff and students of the University of Melbourne for a global audience.

    Scaling conditional random fields using error correcting codes

    Author
    Cohn, Trevor; Smith, Andrew; Osborne, Miles
    Date
    2005
    Source Title
    Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
    University of Melbourne Author/s
    Cohn, Trevor
    Affiliation
    Engineering: Department of Computer Science and Software Engineering
    Document Type
    Conference Paper
    Citations
    Cohn, T., Smith, A., & Osborne, M. (2005). Scaling conditional random fields using error correcting codes. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Michigan.
    Access Status
    Open Access
    URI
    http://hdl.handle.net/11343/33831
    Abstract
    Conditional Random Fields (CRFs) have been applied with considerable success to a number of natural language processing tasks. However, these tasks have mostly involved very small label sets. When deployed on tasks with larger label sets, the requirements for computational resources mean that training becomes intractable. This paper describes a method for training CRFs on such tasks, using error correcting output codes (ECOC). A number of CRFs are independently trained on the separate binary labelling tasks of distinguishing between a subset of the labels and its complement. During decoding, these models are combined to produce a predicted label sequence which is resilient to errors by individual models. Error-correcting CRF training is much less resource intensive and has a much faster training time than a standardly formulated CRF, while decoding performance remains quite comparable. This allows us to scale CRFs to previously impossible tasks, as demonstrated by our experiments with large label sets.
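    The error-correcting idea in the abstract can be sketched as follows. Each label is assigned a binary codeword; one binary model is trained per bit, and decoding picks the label whose codeword is nearest in Hamming distance to the predicted bits, so a minority of erroneous models can be outvoted. The code matrix, label names, and fixed bit predictions below are illustrative stand-ins only; in the paper the binary models are CRFs over whole sequences, not per-token bit classifiers.

    ```python
    # Toy error-correcting output code (ECOC): 4 labels, 5-bit codewords.
    # The codewords are an illustrative assumption, not taken from the paper.
    code_matrix = {
        "NOUN": (0, 0, 0, 1, 1),
        "VERB": (0, 1, 1, 0, 0),
        "ADJ":  (1, 0, 1, 0, 1),
        "ADV":  (1, 1, 0, 1, 0),
    }

    def hamming(a, b):
        """Number of bit positions in which a and b differ."""
        return sum(x != y for x, y in zip(a, b))

    def decode(bit_predictions):
        """Return the label whose codeword is closest to the predicted bits."""
        return min(code_matrix,
                   key=lambda label: hamming(code_matrix[label], bit_predictions))

    # Suppose the 5 binary models predict (0, 1, 1, 0, 1): the last model has
    # made an error, yet decoding still recovers VERB, whose codeword
    # (0, 1, 1, 0, 0) is the nearest at Hamming distance 1.
    print(decode((0, 1, 1, 0, 1)))  # → VERB
    ```

    The resilience to individual model errors depends on the minimum Hamming distance between codewords: a code with minimum distance d corrects up to ⌊(d−1)/2⌋ bit errors.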
    Keywords
    error-correcting codes; machine learning; named entity recognition; natural language processing; part of speech tagging; noun phrase chunking


