Connected Data encompasses data acquisition and data management requirements from a range of areas including the Semantic Web, Linked Data, Knowledge Management, Knowledge Representation and many others. Yet realising the true value of many of these visions, both in the public domain and within organisations, requires the assembly of often huge datasets. Thus far this has proven problematic for humans to achieve within acceptable timeframes, budgets and quality levels.


This panel poses the question of whether the development of Artificial Intelligence techniques, particularly Explainable AI (XAI), will be the catalyst many of these areas have been looking for to prove their value propositions. The Semantic Web, for instance, was traditionally seen as an initiative for humans to make the web readable to machines; arguably, however, it is failing without the capacity and speed of AI techniques to acquire, structure and integrate disparate and messy datasets.

[Figure: network graph — how knowledge is structured as data is considered by many a prerequisite for true Artificial Intelligence applications]

Similarly, whilst AI has surpassed Big Data as the latest buzzword, the value (and, in some applications, the safety) of its outputs depends on practitioners' ability to explain how those outputs were arrived at. Simply put, without the encoding of knowledge it is often difficult to explain the how and why of AI.
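To make this concrete, here is a minimal sketch of the idea: when knowledge is encoded explicitly (here, as simple subject–predicate–object triples), an inference step can carry an explicit record of the facts that support each conclusion, giving the "how and why" that opaque models lack. All entity and relation names below are hypothetical examples, not from any particular system.

```python
# Toy knowledge base of triples (hypothetical example data).
facts = {
    ("Acme Corp", "subsidiary_of", "Globex"),
    ("Globex", "subsidiary_of", "Initech"),
}

def infer_with_explanation(facts):
    """Derive transitive 'subsidiary_of' facts, recording the supporting facts
    for each conclusion so the inference is explainable."""
    derived = {}
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == r2 == "subsidiary_of" and b == c:
                derived[(a, "subsidiary_of", d)] = [(a, r1, b), (c, r2, d)]
    return derived

for conclusion, support in infer_with_explanation(facts).items():
    print(conclusion, "because", support)
```

Running this derives ("Acme Corp", "subsidiary_of", "Initech") together with the two source facts that justify it — a trivial case, but it illustrates why explicit knowledge encoding and explainability go hand in hand.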

Likewise, within individual organisations, whilst few would argue against the value of integrating disparate data and adopting a common language, initiatives such as Knowledge Graph implementations are often inherently risky and expensive endeavours. Perhaps Explainable AI is what is needed to de-risk these areas and achieve buy-in from wider stakeholders?