| Lou Burnard | |
|---|---|
| Louis Deryck Burnard | |
| Born | 9 December 1946, Birmingham, England |
| Nationality | British |
| Known for | Oxford Text Archive, Text Encoding Initiative |
| Partner | Lilette |
| Children | 3 |
| Academic background | |
| Alma mater | Balliol College, Oxford |
Lou Burnard (born 1946 in Birmingham, England) is an internationally recognised expert [1] in digital humanities, particularly in the areas of text encoding and digital libraries. He was assistant director of Oxford University Computing Services (OUCS) from 2001 until September 2010, when he officially retired from OUCS. Before that, he was manager of the Humanities Computing Unit at OUCS for five years. He has worked in ICT support for research in the humanities since the 1990s. He was one of the founding editors of the Text Encoding Initiative (TEI) and continues to play an active part in its maintenance and development, as a consultant to the TEI Technical Council and as an elected TEI board member. He has played a key role in the establishment of many other activities and initiatives in this area, such as the UK Arts and Humanities Data Service and the British National Corpus, and has published [2] and lectured widely. Since 2008 he has served as a member of the Conseil Scientifique of the CNRS-funded "Adonis" TGE. [3] [4]
He won a scholarship to Balliol College, Oxford, graduated with a first in English in 1968, and went on to gain an MPhil in 19th-century English Studies (1973) and an MA (1979). He taught English at the University of Malawi between 1972 and 1974. [5] [6]
His first job at the University Computing Services was as a data centre operator. He described it as sitting in a large room in the Department of Atmospheric Physics with a line printer, a card reader, a card punch and three teletype devices. The one he sat in front of told the time every five minutes and the date every half hour; if it stopped doing either, he had instructions to call an engineer. Aside from light duties tearing up output from the line printer, that was essentially all he had to do during his 8-hour shift. He learned to program in ALGOL 68, created a concordance to the songs of Bob Dylan, and finally got a job as a programmer in 1974.
He claimed that the first real program he wrote was 12 lines of assembler to link a PDP-8-driven graphics display to an ICL 1900 mainframe. He learned SNOBOL4 and worked with Susan Hockey on the design of the Oxford Concordance Program (OCP). He also worked on network database management systems, notably Cullinane's IDMS, and on ICL's CAFS text search engine. [7]
In 1976 he set up the Oxford Text Archive with Susan Hockey.
After flirting briefly with applications of computers in history under the tutelage of Manfred Thaller, he succumbed to the lure of SGML in 1988 following the Poughkeepsie Conference, which launched the Text Encoding Initiative (TEI) project, of which he has been European editor since February 1989.
The Oxford electronic Shakespeare (1989), published by Oxford University Press, was the first commercially published e-text encoded for analysis. William Montgomery, one of the associate editors, and Lou Burnard encoded each poem or play with COCOA tags so that it could be processed by the Micro-Oxford Concordance Program (Micro-OCP). [8]
Since October 1990 he has also been responsible for OUCS participation in the British National Corpus project, [9] a 100-million-word corpus of modern British English. [6]
He initiated the Xaira (XML Aware Indexing and Retrieval Architecture) project, an advanced text-searching software system for XML resources. Originally developed for searching the British National Corpus, it was funded by the Mellon Foundation in 2005–6. [10]
A markup language is a text-encoding system consisting of a set of symbols inserted in a text document to control its structure, formatting, or the relationship between its parts. Markup is often used to control the display of the document or to enrich its content to facilitate automated processing.
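As an illustration (not drawn from any particular project), the sketch below marks up a two-line fragment with simple XML-style tags and reads the structure back with Python's standard library; the element and attribute names are invented for the example.

```python
import xml.etree.ElementTree as ET

# A tiny illustrative document: the tags describe structure (poem, line)
# and enrich the content (the "n" attribute numbers each line) rather than
# prescribing how the text should be displayed.
ENCODED_TEXT = """
<poem>
  <title>Example</title>
  <line n="1">Shall I compare thee to a summer's day?</line>
  <line n="2">Thou art more lovely and more temperate.</line>
</poem>
"""

poem = ET.fromstring(ENCODED_TEXT)
print(poem.findtext("title"))        # the title, recovered from the markup
for line in poem.findall("line"):
    print(line.get("n"), line.text)  # line number and line text
```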
The Perseus Digital Library, formerly known as the Perseus Project, is a free-access digital library founded by Gregory Crane in 1987 and hosted by the Department of Classical Studies of Tufts University. One of the pioneers of digital libraries, its self-proclaimed mission is to make the full record of humanity available to everyone. While originally focused on the ancient Greco-Roman world, it has since diversified and offers materials in Arabic, Germanic, English Renaissance literature, 19th century American documents and Italian poetry in Latin, and has sprouted several child projects and international cooperation. The current version, Perseus 4.0, is also known as the Perseus Hopper, and is mirrored by the University of Chicago.
The Text Encoding Initiative (TEI) is a text-centric community of practice in the academic field of digital humanities, operating continuously since the 1980s. The community currently runs a mailing list, meetings and conference series, and maintains the TEI technical standard, a journal, a wiki, a GitHub repository and a toolchain.
Xaira (XML Aware Indexing and Retrieval Architecture) is a text-searching system developed at Oxford University; it was funded by the Mellon Foundation between 2005 and 2006. It is based on SARA, an SGML-aware text-searching system originally developed for searching the British National Corpus. Xaira has been redeveloped as a generic XML system for constructing query systems for any kind of XML data, in particular for use with the TEI. The current Windows implementation is intended for non-specialist users. A more sophisticated, open-source version is under development; it supports cross-platform working using standards such as XML-RPC and SOAP.
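As a sketch of what cross-platform access over such standards can look like (this is not Xaira's actual interface; the server address and method name below are hypothetical), a client using Python's standard xmlrpc.client module might be written as follows:

```python
import xmlrpc.client

# Hypothetical endpoint and method name, used purely for illustration;
# the real Xaira service interface is not documented here.
SERVER_URL = "http://localhost:8080/RPC2"

proxy = xmlrpc.client.ServerProxy(SERVER_URL)

# An XML-RPC call is marshalled as XML over HTTP, so any client platform
# that can speak HTTP and parse XML can issue it.
hits = proxy.search("some corpus query")   # hypothetical remote method
for hit in hits:
    print(hit)
```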
Apache AxKit was an XML publishing framework written in Perl and run by the Apache Software Foundation. It provided conversion from XML to any format, such as HTML, WAP or plain text, using either W3C standard techniques or flexible custom code.
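The W3C standard technique most associated with this kind of pipeline is XSLT. The sketch below, which uses the third-party lxml library rather than AxKit itself, shows the general idea of transforming a small XML document into HTML.

```python
from lxml import etree  # third-party library: pip install lxml

XML_SOURCE = "<article><title>Hello</title><body>Some text.</body></article>"

# A minimal XSLT 1.0 stylesheet that turns the article into an HTML page.
XSLT_SHEET = """\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/article">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body><p><xsl:value-of select="body"/></p></body>
    </html>
  </xsl:template>
</xsl:stylesheet>
"""

transform = etree.XSLT(etree.fromstring(XSLT_SHEET))
result = transform(etree.fromstring(XML_SOURCE))
print(str(result))  # the generated HTML document
```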
Oxford Text Archive (OTA) is an archive of electronic texts and other literary and language resources which have been created, collected and distributed for the purpose of research into literary and linguistic topics at the University of Oxford, England.
EpiDoc is an international community that produces guidelines and tools for encoding scholarly and educational editions of ancient documents, especially inscriptions and papyri, in TEI XML.
The British National Corpus (BNC) is a 100-million-word text corpus of samples of written and spoken English from a wide range of sources. The corpus covers British English of the late 20th century from a wide variety of genres, with the intention that it be a representative sample of spoken and written British English of that time. It is used in corpus linguistics for analysis of corpora.
C. Michael Sperberg-McQueen is an American markup language specialist. He was co-editor of the Extensible Markup Language (XML) 1.0 spec (1998), and chair of the XML Schema working group.
Steven J DeRose is a computer scientist noted for his contributions to Computational Linguistics and to key standards related to document processing, mostly around ISO's Standard Generalized Markup Language (SGML) and W3C's Extensible Markup Language (XML).
The Alliance of Digital Humanities Organizations (ADHO) is a digital humanities umbrella organization formed in 2005 to coordinate the activities of several regional DH organizations, referred to as constituent organizations.
Eighteenth Century Collections Online (ECCO) is a digital collection of books published in Great Britain during the 18th century.
The European Association for Digital Humanities (EADH), formerly known as the Association for Literary and Linguistic Computing (ALLC), is a digital humanities organisation founded in London in 1973. Its purpose is to promote the advancement of education in the digital humanities through the development and use of computational methods in research and teaching in the Humanities and related disciplines, especially literary and linguistic computing. In 2005, the Association joined the Alliance of Digital Humanities Organizations (ADHO).
The Association for Computers and the Humanities (ACH) is the primary international professional society for digital humanities. ACH was founded in 1978. According to the official website, the organization "support[s] and disseminate[s] research and cultivate[s] a vibrant professional community through conferences, publications, and outreach activities." ACH is based in the United States, and has an international membership. ACH is a founding member of the Alliance of Digital Humanities Organizations (ADHO), a co-originator of the Text Encoding Initiative, and a co-sponsor of an annual conference.
COCOA was an early text file utility and associated file format for digital humanities, then known as humanities computing. It consisted of approximately 4,000 punched cards of FORTRAN and was created in the late 1960s and early 1970s at University College London and the Atlas Computer Laboratory in Harwell, Oxfordshire. Its functionality included word counting and concordance building.
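The format itself is simple: a reference is a single-letter category code and a value enclosed in angle brackets, and it stays in force until the next reference of the same category. The sketch below reads such tags in Python; the sample text and category letters are invented for the example and are not taken from a real COCOA archive.

```python
import re

# Illustrative sample: here "A" stands for author and "T" for title, but a
# real COCOA file uses whatever category letters its project chose.
COCOA_TEXT = """<A SHAKESPEARE><T SONNET 18>
Shall I compare thee to a summer's day?
Thou art more lovely and more temperate:
"""

TAG = re.compile(r"<(\w)\s+([^>]*)>")

references = {}  # the currently active reference for each category
for line in COCOA_TEXT.splitlines():
    for category, value in TAG.findall(line):
        references[category] = value          # stays in force until replaced
    text = TAG.sub("", line).strip()
    if text:
        print(dict(references), "->", text)
```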
In markup languages and the digital humanities, overlap occurs when a document has two or more structures that interact in a non-hierarchical manner. A document with overlapping markup cannot be represented as a tree. This is also known as concurrent markup. Overlap happens, for instance, in poetry, where there may be a metrical structure of feet and lines; a linguistic structure of sentences and quotations; and a physical structure of volumes and pages and editorial annotations.
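A minimal sketch of the problem, assuming a made-up fragment in which a sentence runs across a verse-line break: the two hierarchies cannot both be expressed as properly nested elements, so one common workaround keeps a single hierarchy and marks the other with empty "milestone" elements (the element names below are invented for the example).

```python
import xml.etree.ElementTree as ET

# The sentence ends partway through the second verse line, so the <s> and
# <l> elements overlap: this fragment is not well-formed XML and cannot be
# represented as a tree.
OVERLAPPING = ("<poem><l><s>A sentence begins here</l>"
               "<l>and ends here.</s> More text.</l></poem>")

try:
    ET.fromstring(OVERLAPPING)
except ET.ParseError as error:
    print("Not a tree:", error)

# Workaround: keep the verse lines as the single hierarchy and record the
# sentence boundaries with empty milestone elements that carry no content.
MILESTONED = ('<poem><l><sStart/>A sentence begins here</l>'
              '<l>and ends here.<sEnd/> More text.</l></poem>')
print(ET.tostring(ET.fromstring(MILESTONED), encoding="unicode"))
```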
Susan Hockey is an Emeritus Professor of Library and Information Studies at University College London. She has written about the history of digital humanities, the development of text analysis applications, electronic textual mark-up, teaching computing in the humanities, and the role of libraries in managing digital resources. In 2014, University College London created a Digital Humanities lecture series in her honour.
Sebastian Patrick Quintus Rahtz (SPQR) was a British digital humanities information professional.
CLOC was a first-generation, general-purpose text analysis program. It was produced at the University of Birmingham and could produce concordances as well as word lists and collocational analyses of text. First-generation concordancers were typically held on a mainframe computer and used at a single site; individual research teams would build their own concordancer and use it on the data they had access to locally, and any further analysis was done by separate programs.
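As a rough illustration of what a concordance is, the sketch below builds a keyword-in-context (KWIC) listing for one word in a toy text; it is a generic example and does not reproduce CLOC's own algorithm or output format.

```python
def kwic(text, keyword, context=5, width=30):
    """Print a simple keyword-in-context concordance for one word."""
    words = text.split()
    for i, word in enumerate(words):
        if word.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(words[max(0, i - context):i])
            right = " ".join(words[i + 1:i + 1 + context])
            print(f"{left:>{width}} | {word} | {right}")

SAMPLE = ("The corpus covers British English of the late 20th century. "
          "A corpus is sampled from a wide variety of genres, and a balanced "
          "corpus aims to be representative of the language as a whole.")

kwic(SAMPLE, "corpus")
```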
Lorna M. Hughes has been Professor in Digital Humanities at the University of Glasgow since 2015. From 2016 to 2019, she oversaw the redevelopment of the Information Studies subject area; the re-launch was marked by an international symposium at the University of Glasgow in 2017.