From “Big” to “Extreme”, or how to increase the value of data analytics
In recent years, the characteristic features of Big Data (volume, velocity, variety) have grown considerably, involving huge volumes of highly diverse data coming from many different sources. However, this growth does not by itself guarantee a comparable increase in the “value” obtained from the traditional processing and analysis of this data. It requires the development of specific methods, products, and platforms designed to cope with this new Extreme Data paradigm. Over recent years, many initiatives have addressed this need for new tools, which also have to be aligned with existing and evolving infrastructures (cloud, HPC, etc.). This session brings together projects (EXAMODE, MARVEL, VesselAI, DAPHNE, SELMA) that have addressed this challenge from different perspectives, at different times, and in different domains and use cases.
The objective of the session is threefold:
- General: to share lessons learned, the challenges encountered, and how the projects have overcome them
- Technical: for each project to consider how to leverage the results of the others, reusing building blocks across projects
- Business: to define a path of collaboration that also supports the exploitation and sustainability of the results
Agenda of the session:
- Introduction and setting the scene
- Presentation from each project
- Panel discussion
- Wrap-up and conclusions
Daniel Alonso Roman presentation here
SELMA presentation here
ExaMode presentation here
DAPHNE presentation here
MARVEL presentation here
VesselAI presentation here



