<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://hdl.handle.net/20.500.12124/2">
<title>Eurac Research: CMC &amp; WaC</title>
<link>http://hdl.handle.net/20.500.12124/2</link>
<description>Submissions dealing with CMC data or data collected from the World Wide Web (from Eurac Research).</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://hdl.handle.net/20.500.12124/9"/>
<rdf:li rdf:resource="http://hdl.handle.net/20.500.12124/8"/>
<rdf:li rdf:resource="http://hdl.handle.net/20.500.12124/7"/>
<rdf:li rdf:resource="http://hdl.handle.net/20.500.12124/3"/>
</rdf:Seq>
</items>
<dc:date>2026-02-04T12:27:47Z</dc:date>
</channel>
<item rdf:about="http://hdl.handle.net/20.500.12124/9">
<title>KrdWrd CANOLA Corpus 1.1</title>
<link>http://hdl.handle.net/20.500.12124/9</link>
<description>KrdWrd CANOLA Corpus 1.1
Stemle, Egon W.; Steger, Johannes M.
The CANOLA Corpus is a visually annotated English web corpus for training classification engines to remove boilerplate from unseen web pages. It was harvested, annotated, and evaluated with the tools and infrastructure of the KrdWrd Project.
</description>
<dc:date>2010-11-25T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/20.500.12124/8">
<title>KrdWrd CANOLA Corpus 1.0</title>
<link>http://hdl.handle.net/20.500.12124/8</link>
<description>KrdWrd CANOLA Corpus 1.0
Stemle, Egon W.; Steger, Johannes M.
The CANOLA Corpus is a visually annotated English web corpus for training classification engines to remove boilerplate from unseen web pages. It was harvested, annotated, and evaluated with the tools and infrastructure of the KrdWrd Project.
</description>
<dc:date>2010-09-10T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/20.500.12124/7">
<title>DIDI - The DiDi Corpus of South Tyrolean CMC 1.0.0</title>
<link>http://hdl.handle.net/20.500.12124/7</link>
<description>DIDI - The DiDi Corpus of South Tyrolean CMC 1.0.0
Frey, Jennifer-Carmen; Glaznieks, Aivars; Stemle, Egon W.
The DiDi corpus has an overall size of around 600,000 tokens gathered from 136 South Tyrolean Facebook users who participated in the DiDi project. It consists of 11,102 Facebook wall posts, 6,507 wall comments, and 22,218 private messages, all written by the participants during the year 2013. Please read the full description of the corpus for further details, and also consider the description of the data collection method and the full description of the DiDi project and its research questions.&#13;
&#13;
As each participant could contribute his/her private messages, his/her texts on the wall, or both, the corpus comprises wall posts and wall comments from 130 profiles and private messages from 56 profiles; 50 participants granted access to both types of data. The wall posts and comments are freely accessible. For privacy reasons, access to the private messages is restricted: it can be granted for scientific research only, after signing a non-disclosure agreement. If you are interested in the data for research purposes, please contact the research team.&#13;
&#13;
All texts were anonymised to guarantee that the participants' identities cannot be inferred from the texts. The anonymisation covered person names, group names, geographical names and adjectival references, institution names, hyperlinks, e-mail addresses, phone numbers, bank account numbers, servers, postal codes, and other private information. Please read the anonymisation document for the anonymisation keys.&#13;
&#13;
The corpus offers a wide range of research opportunities for linguists interested in CMC in general and, more specifically, in multilingual language use, the use of regional varieties, and code-switching, code-shifting, and code-mixing phenomena.&#13;
&#13;
Access to the DiDi corpus: https://commul.eurac.edu/annis/didi
</description>
<dc:date>2019-03-07T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/20.500.12124/3">
<title>PAISÀ Corpus of Italian Web Text</title>
<link>http://hdl.handle.net/20.500.12124/3</link>
<description>PAISÀ Corpus of Italian Web Text
Lyding, Verena; Stemle, Egon; Borghetti, Claudia; Brunello, Marco; Castagnoli, Sara; Dell’Orletta, Felice; Dittmann, Henrik; Lenci, Alessandro; Pirrelli, Vito
The PAISÀ corpus is a large collection of Italian web texts, licensed under Creative Commons (Attribution-ShareAlike and Attribution-NonCommercial-ShareAlike). It was created in the context of the PAISÀ project.&#13;
&#13;
Documents were selected in two ways. Part of the corpus was constructed using a method inspired by the WaCky project: we created 50,000 word pairs by randomly combining terms from an Italian basic vocabulary list and used the pairs as queries to the Yahoo! search engine to retrieve candidate pages. Hits were limited to pages in Italian with a Creative Commons license of type CC-Attribution, CC-Attribution-ShareAlike, CC-Attribution-ShareAlike-NonCommercial, or CC-Attribution-NonCommercial. Pages wrongly tagged as CC-licensed were eliminated using a blacklist populated by manual inspection of earlier versions of the corpus. The retrieved pages were automatically cleaned with the KrdWrd system.&#13;
&#13;
The remaining pages in the PAISÀ corpus come from the Italian versions of various Wikimedia Foundation projects, namely Wikipedia, Wikinews, Wikisource, Wikibooks, Wikiversity, and Wikivoyage. The official Wikimedia Foundation dumps were used, and text was extracted with Wikipedia Extractor.&#13;
&#13;
Once all materials were downloaded, the collection was filtered by discarding empty documents and documents containing fewer than 150 words.&#13;
&#13;
The corpus contains approximately 380,000 documents from about 1,000 different websites, for a total of about 250 million words. Approximately 260,000 documents are from Wikipedia and approximately 5,600 are from other Wikimedia Foundation projects. About 9,300 documents come from Indymedia, and we estimate that about 65,000 documents come from blog services.
</description>
<dc:date>2013-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
