Category Archives: team blog

KMi robots: making a map with a Roomba, an RPlidarA2 and gmapping

/* PREMISES */ Some time ago we decided to invest in a low-cost wheeled platform and take our first steps in the robotics world. We then started a mini project that we called “Dynamic Knowledge Acquisition with robots”, or DKA-robo. The idea of DKA-robo is the following: we set up a […]

On deriving data flows from scientific workflows – By Enrico Daga

Workflow models are designed with the purpose of supporting the reuse of processing components across multiple executions. This approach to encoding process knowledge has found particular success in the open science research community as one way of supporting the recording and reuse of scientific experiments. One example of this is the myexperiments.org portal. Workflows […]

Learning to assess Linked Data relationships using Genetic Programming – By Ilaria Tiddi

A very well-known issue for tasks such as text mining, named-entity disambiguation, ontology population or query expansion is how to identify and measure how strongly two entities are related. We tackled this problem in our work “Learning to assess Linked Data relationships using Genetic Programming”, to be presented at the 15th International Semantic Web […]

Update of time-invalid information in Knowledge Bases through Mobile Agents – By Ilaria Tiddi

Managing dynamic data in knowledge bases, i.e. statements that are only valid for a certain period of time, is a very well-known problem for knowledge acquisition and representation, because data need to be constantly re-evaluated to allow reasoning. Let us imagine a very basic example, such as a knowledge base representing the Knowledge Media Institute working environment. While information about locations is static (e.g. the coordinates of a room), temperature, humidity, wifi signal or the number of people in a room change often, and the statements about them in the knowledge base might no longer be valid after some time. Common solutions to this, e.g. providing time-stamped versions of the knowledge base or using sensors to constantly stream the information, might be suitable for a simplified scenario such as the KMi one, but are costly and less flexible in terms of data collection if applied at large scale (e.g. in a smart city environment). Much of the information provided is likely not to be required, and only queries at specific locations would […]
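The distinction between static and time-valid statements described above can be sketched as follows. This is a minimal illustration, not the actual DKA-robo implementation: the `Statement` class, its validity period, and the `query` helper are all hypothetical names chosen for this example.

```python
from datetime import datetime, timedelta

class Statement:
    """A time-stamped statement with an explicit validity period."""
    def __init__(self, subject, predicate, obj, recorded_at, valid_for):
        self.subject = subject
        self.predicate = predicate
        self.obj = obj
        self.recorded_at = recorded_at
        self.valid_for = valid_for  # a timedelta

    def is_valid(self, at):
        # A statement holds from its recording time until its validity expires.
        return self.recorded_at <= at <= self.recorded_at + self.valid_for

def query(kb, subject, predicate, at):
    """Return objects of statements matching subject/predicate that are still valid at time `at`."""
    return [s.obj for s in kb
            if s.subject == subject and s.predicate == predicate and s.is_valid(at)]

now = datetime(2016, 9, 1, 12, 0)
kb = [
    # Static fact (hypothetical coordinates): effectively never expires.
    Statement("Room_A", "hasCoordinates", (52.02, -0.71),
              datetime(2016, 1, 1), timedelta(days=36500)),
    # Dynamic fact: a temperature reading recorded 30 minutes ago,
    # but only considered valid for 10 minutes.
    Statement("Room_A", "hasTemperature", 21.5,
              now - timedelta(minutes=30), timedelta(minutes=10)),
]

print(query(kb, "Room_A", "hasCoordinates", now))  # → [(52.02, -0.71)]
print(query(kb, "Room_A", "hasTemperature", now))  # → [] (reading has expired)
```

The point the teaser makes is visible here: the coordinates survive any query, while the temperature reading silently becomes stale, which is why such statements need re-evaluation (e.g. by sending a mobile agent to re-measure) rather than blanket streaming of every sensor.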