This deliverable describes the first version of the semantic search module of the CANDELA platform, which is the aim of task 2.3 of work package 2.
Semantic search covers a set of services to retrieve images through a semantic description of their content (e.g. places, type of vegetation, or the occurrence of a forest fire) and to search for data related to their content (e.g. cities and their population, weather measurements, fire evolution over time). It relies on a formal representation of data that can be "located on" (or, more generally, "linked to") images through their date and location. Hence, a preliminary step in the design of the semantic search facilities is to identify the various relevant data sources to be used when searching for images. The use cases defined in tasks 1.1 and 1.2 will provide semantic search scenarios and help identify relevant datasets to enrich the image descriptions. Once the data sources are identified, the next stage is to propose a homogeneous representation of this heterogeneous data, since each source has its own format and structure. This representation requires defining an appropriate data model, which may be a formal vocabulary or an ontology. Each piece of data is then associated with one or several semantic classes from this vocabulary and stored in a repository. The semantic search facility can then take advantage of this formal representation, and of a reasoning engine, to retrieve images according to the data that describes them and is linked to them.
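As a minimal illustration of the search facility described above, the following sketch models the homogeneous representation as a set of records that link an image to a semantic class, a date, and a place, and retrieves images by filtering on that linked data. All image identifiers, class names, and fields are hypothetical, and the in-memory list stands in for an actual semantic repository; no reasoning engine is involved here.

```python
from datetime import date

# Hypothetical in-memory repository: each record links an image to a semantic
# class from an assumed shared vocabulary, together with the date and place
# that allow external data to be "located on" the image.
REPOSITORY = [
    {"image": "S2_tile_031.tif", "class": "ForestFire",
     "date": date(2019, 7, 14), "place": "Occitanie"},
    {"image": "S2_tile_031.tif", "class": "ConiferousForest",
     "date": date(2019, 7, 14), "place": "Occitanie"},
    {"image": "S2_tile_118.tif", "class": "UrbanArea",
     "date": date(2019, 6, 2), "place": "Toulouse"},
]

def search_images(semantic_class=None, place=None):
    """Return the identifiers of images whose linked data matches the given
    semantic class and/or place (both filters are optional)."""
    return sorted({
        rec["image"]
        for rec in REPOSITORY
        if (semantic_class is None or rec["class"] == semantic_class)
        and (place is None or rec["place"] == place)
    })

print(search_images(semantic_class="ForestFire"))  # images showing a forest fire
print(search_images(place="Toulouse"))             # images linked to Toulouse
```

In the actual platform, the records would instead be stored as triples in a semantic repository and queried through a reasoning engine, so that, for example, a search for "Forest" could also return images annotated with the more specific class "ConiferousForest".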
This project has received funding from the European Union's Horizon 2020
research and innovation programme under grant agreement No 776193