The Pleiades+ CSV file is ordered by Pleiades ID, converted from the initial Pleiades Excel file. Ordering by Pleiades ID makes collecting alternative names easier, because all alternative names of a place appear one after another.
The Arachne database is a MySQL database with a structure that has evolved over time. It uses the utf8_general_ci collation. The Arachne place names are more or less ordered.
Every place carries information about the larger entities it belongs to: for every place, the country in which it falls is recorded. Pleiades places are described by identifiers, so a country is described in the same way as a city or a village.
In Arachne, places are defined by a list of name attributes. These names comprise the specific place (Aufbewahrungsort), the ancient place name (Ort_antik), the city name (Stadt), the country name (Land), and their synonyms (Aufbewahrungsort_Synonym, Stadt_Synonym). The specific place describes the place in the most specific way. It is often a free-text description: an address, directions ("after 300 meters on road XY, on the left side"), a museum, or many other forms of place description. This field is unlikely to match one of the Pleiades labels. The annotations will therefore often describe the more general place of which an Arachne place forms a part.
For example, the place “Corso d'Italia 35 b, Rome, Italy, Europe” will refer to “Italy” and “Rome” in Pleiades; the address “Corso d'Italia 35 b” is much too specific for Pleiades.
The script favours precision over recall: it tries to produce exact results rather than many results, even if that means missing some genuine matches.
The five steps to annotation
1. Collecting synonyms of Pleiades places
Pleiades ID : 1094
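Because the Pleiades+ file is sorted by Pleiades ID, a single pass is enough to gather all synonyms of one place. The following is a minimal sketch of that grouping step; it assumes a simplified two-column layout (Pleiades ID, name), whereas the real Pleiades+ file has more columns.

```python
import csv
import io
from itertools import groupby

def collect_synonyms(csv_text):
    """Group consecutive rows of the ID-sorted Pleiades+ CSV by Pleiades ID.

    Assumes a simplified two-column layout (pleiades_id, name); the real
    Pleiades+ file has more columns, but the grouping idea is the same.
    """
    reader = csv.reader(io.StringIO(csv_text))
    synonyms = {}
    # Because the file is ordered by Pleiades ID, one pass with groupby
    # is enough: all names of one place are adjacent.
    for pid, rows in groupby(reader, key=lambda row: row[0]):
        synonyms[pid] = [row[1] for row in rows]
    return synonyms

sample = "1094,Lopadusa Ins.\n1094,lopadusa\n1094,lampidusa\n"
print(collect_synonyms(sample))
# {'1094': ['Lopadusa Ins.', 'lopadusa', 'lampidusa']}
```

If the file were not sorted, a dictionary of lists would be needed instead of `groupby`; the sort order is what keeps this step simple.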
2. Query the database for all synonyms
The collected synonyms of the Pleiades+ dataset have been rewritten as SQL-queries.
The collection of labels:
The resulting SQL-queries:
SELECT `PS_OrtID` FROM `ort` WHERE `Aufbewahrungsort` LIKE 'Lopadusa Ins.' OR `Aufbewahrungsort_Synonym` LIKE 'Lopadusa Ins.' OR `Ort_antik` LIKE 'Lopadusa Ins.' OR `Stadt` LIKE 'Lopadusa Ins.' OR `Stadt_Synonym` LIKE 'Lopadusa Ins.' OR `Land` LIKE 'Lopadusa Ins.';
SELECT `PS_OrtID` FROM `ort` WHERE `Aufbewahrungsort` LIKE 'lopadusa' OR `Aufbewahrungsort_Synonym` LIKE 'lopadusa' OR `Ort_antik` LIKE 'lopadusa' OR `Stadt` LIKE 'lopadusa' OR `Stadt_Synonym` LIKE 'lopadusa' OR `Land` LIKE 'lopadusa';
SELECT `PS_OrtID` FROM `ort` WHERE `Aufbewahrungsort` LIKE 'lampidusa' OR `Aufbewahrungsort_Synonym` LIKE 'lampidusa' OR `Ort_antik` LIKE 'lampidusa' OR `Stadt` LIKE 'lampidusa' OR `Stadt_Synonym` LIKE 'lampidusa' OR `Land` LIKE 'lampidusa';
The SQL statements check all the fields of the Arachne database where place names are mentioned for a specific place. If one of the names in Pleiades+ matches one of the name description fields in Arachne, the key of the dataset is returned and added to the list of matching IDs of all synonyms.
The synonymous names of a Pleiades dataset are matched with the SQL LIKE operator. The LIKE comparison is case insensitive, and special characters match their base characters; for example, 'a' matches 'ä'. For further information on this issue see:
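All the queries above follow one template, with only the synonym changing. A sketch of a query builder that reproduces this template (in a real script, a parameterized query with placeholders would be safer than string interpolation; the naive quote-doubling here is only for the sketch):

```python
# The six Arachne fields that carry place names, as used in the queries above.
PLACE_FIELDS = (
    "Aufbewahrungsort", "Aufbewahrungsort_Synonym", "Ort_antik",
    "Stadt", "Stadt_Synonym", "Land",
)

def build_query(name):
    """Build the SELECT shown above for one synonym.

    Real code should use parameterized queries (e.g. %s placeholders
    with a MySQL driver) instead of interpolating strings.
    """
    escaped = name.replace("'", "''")  # naive quoting, sketch only
    condition = " OR ".join(
        "`{}` LIKE '{}'".format(field, escaped) for field in PLACE_FIELDS
    )
    return "SELECT `PS_OrtID` FROM `ort` WHERE {};".format(condition)

print(build_query("lopadusa"))
```

One query per synonym keeps the script simple; the alternative of one large query with many OR groups would return the same IDs but make it harder to see which synonym produced a match.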
3. Collect all results and remove duplicates
Result examples: 1001,2001,1001,3001,1001,2001,3001
After removing the duplicates: 1001,2001,3001
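This merge step can be sketched in one line; `dict.fromkeys` (Python 3.7+) removes duplicates while keeping the first-seen order:

```python
def deduplicate(ids):
    """Remove duplicate IDs while keeping first-seen order.

    dict preserves insertion order in Python 3.7+, so this is a
    one-liner equivalent of the merge step described above.
    """
    return list(dict.fromkeys(ids))

print(deduplicate([1001, 2001, 1001, 3001, 1001, 2001, 3001]))
# [1001, 2001, 3001]
```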
4. Convert Internal IDs
The internal Arachne IDs are converted to general Arachne entity IDs.
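The conversion is a lookup against the entity mapping. The mapping values below are hypothetical placeholders, not real Arachne data; in the actual workflow the mapping comes from a database table:

```python
# Hypothetical mapping from internal ort IDs to Arachne entity IDs;
# in the real workflow this is resolved via a database table.
INTERNAL_TO_ENTITY = {
    1001: 8003997,
    2001: 8003998,
    3001: 8003999,
}

def to_entity_ids(internal_ids):
    """Translate internal IDs, skipping any without an entity ID yet."""
    return [INTERNAL_TO_ENTITY[i] for i in internal_ids if i in INTERNAL_TO_ENTITY]

print(to_entity_ids([1001, 2001, 3001]))
# [8003997, 8003998, 8003999]
```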
5. Convert results to OAC-RDF
The matching datasets in Pleiades and Arachne are then annotated in OAC.
The annotation URI:
A typical annotation URI produced in this workflow looks like this:
http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997
"1310384539" is the Unix timestamp of the date of creation.
"Pleiades991319toArachneEntity8003997" describes the annotation in a more or less human-readable form.
The timestamp and the combination of IDs together make the annotation URI unique. The fact that this URI scheme is human readable is NOT a requirement of a URI and shouldn't be expected in general!
The main goal of a URI is to be a unique reference to a piece of data. In this case, even if the same annotation is produced a second time, it must not have the same URI: the annotated things are the same, but the annotation itself is different, because an annotation represents knowledge at a specific time. The Arachne database is still growing and being corrected, so the results will change over time even if the script that produces them remains the same.
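The URI construction described above can be sketched as follows; defaulting the timestamp to the current time is what guarantees that a rerun yields a new, distinct URI:

```python
import time

def annotation_uri(pleiades_id, entity_id, created=None):
    """Build an annotation URI from a Unix timestamp and the two IDs.

    Follows the URI scheme described above; `created` defaults to the
    current time so that a rerun yields a new, distinct URI.
    """
    if created is None:
        created = int(time.time())
    return (
        "http://arachne.uni-koeln.de/oacAnnotaion/{}/"
        "Pleiades{}toArachneEntity{}".format(created, pleiades_id, entity_id)
    )

print(annotation_uri(991319, 8003997, created=1310384539))
# http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997
```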
The Annotation in RDF:
The date is also represented in the "dc:created" triple of the annotation.
The name of the script is represented in the "dc:creator" triple.
This is just a string, but when you encounter conflicting or wrong interpretations, you can trace them back to the bad version of the script that produced the results.
It also has the advantage of making old annotations identifiable.
A complete set of triples of an Arachne Pleiades annotation:
<http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997> rdf:type oac:Annotation .
<http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997> dc:created "2011-07-11 13:42:19" .
<http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997> dc:creator "ArachneEntityToPleiadesPlacesScriptv0.2" .
<http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997> oac:hasBody <http://pleiades.stoa.org/places/991319> .
<http://arachne.uni-koeln.de/oacAnnotaion/1310384539/Pleiades991319toArachneEntity8003997> oac:hasTarget <http://arachne.uni-koeln.de/entity/8003997> .
<http://pleiades.stoa.org/places/991319> rdf:type rdfs:Resource .
<http://pleiades.stoa.org/places/991319> rdf:type oac:Body .
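A minimal sketch of the serialization step, emitting the triples above as plain strings; a real script would better use an RDF library such as rdflib. The target host is written here with the full arachne.uni-koeln.de domain, assuming that is the intended entity URI:

```python
def annotation_triples(created_ts, created_str, pleiades_id, entity_id, creator):
    """Emit the annotation triples shown above as plain strings."""
    ann = ("<http://arachne.uni-koeln.de/oacAnnotaion/{}/"
           "Pleiades{}toArachneEntity{}>").format(created_ts, pleiades_id, entity_id)
    body = "<http://pleiades.stoa.org/places/{}>".format(pleiades_id)
    target = "<http://arachne.uni-koeln.de/entity/{}>".format(entity_id)
    return [
        "{} rdf:type oac:Annotation .".format(ann),
        '{} dc:created "{}" .'.format(ann, created_str),
        '{} dc:creator "{}" .'.format(ann, creator),
        "{} oac:hasBody {} .".format(ann, body),
        "{} oac:hasTarget {} .".format(ann, target),
        "{} rdf:type rdfs:Resource .".format(body),
        "{} rdf:type oac:Body .".format(body),
    ]

for line in annotation_triples(1310384539, "2011-07-11 13:42:19",
                               991319, 8003997,
                               "ArachneEntityToPleiadesPlacesScriptv0.2"):
    print(line)
```

Note that the prefixed names (rdf:, oac:, dc:) assume the usual namespace declarations elsewhere in the output; an RDF library would handle those declarations for you.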
The annotation of topographical units works similarly, but uses a different table and different fields.
The fact that a Pleiades place is usually more general than the Arachne place cannot be expressed in the annotation. This is an unsatisfying trade-off that comes with using a framework as general as OAC: the information expressed in it is very unspecific.
Maybe a future version will find a way to express this fact as an optional piece of information that can be used or ignored. If you just want a link, you simply ignore the other links. If you are very hungry, you can eat millions of good spaghetti as well as millions of very cheap spaghetti. But the good ones you can also enjoy, if you want to.
A web of linked data could not be typed in by hand: that would be extremely time-consuming and would produce different errors than letting the computer handle it. Everyone who tries to keep two long numbers in mind will mess something up every once in a while. What I am missing is a simple, controlled way to annotate machine authorship. This could help to cross-check results and to handle messy data. Who is to blame if the data is messy: the data source, or the automated process in the background?
Rasmus Krempel, Arachne Database, CoDArchLab University of Cologne