School of Computing and Mathematical Sciences

Distributed and high-performance AI systems

The group conducts multidisciplinary research in machine learning and artificial intelligence (AI), and in high-performance solutions for their deployment, across the following themes:

Digital twins

  • Self-learning digital twins for cyber-physical systems
  • Self-adapting networks of digital twins
  • Diffusion and assimilation models for high-performance digital twins that combine physics-driven causality with simulations and observed data, enabling real-time predictions
  • Physics-informed neural networks that enable safe and robust digital twins satisfying mathematical constraints such as partial differential equations (a minimal sketch follows this list)
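
To illustrate how a physics-informed neural network embeds a partial differential equation as a constraint during training, the following is a minimal PyTorch sketch. The 1D heat equation, the network size, the collocation sampling and all hyperparameters are illustrative assumptions rather than the group's actual models, and data and boundary terms are omitted for brevity.

    import torch
    import torch.nn as nn

    alpha = 0.1  # assumed diffusivity for the illustrative 1D heat equation u_t = alpha * u_xx

    # Small fully connected network mapping (x, t) to u(x, t)
    net = nn.Sequential(
        nn.Linear(2, 32), nn.Tanh(),
        nn.Linear(32, 32), nn.Tanh(),
        nn.Linear(32, 1),
    )

    def pde_residual(x, t):
        # Residual u_t - alpha * u_xx, obtained by automatic differentiation;
        # driving it to zero makes the network respect the governing PDE.
        x.requires_grad_(True)
        t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        ones = torch.ones_like(u)
        u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
        u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
        return u_t - alpha * u_xx

    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(1000):
        x = torch.rand(256, 1)  # collocation points in space
        t = torch.rand(256, 1)  # collocation points in time
        loss = pde_residual(x, t).pow(2).mean()  # physics loss only; data and boundary losses omitted
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()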

Distributed AI

  • Distributed and high-performance machine learning architectures for scale, performance and quality of service
  • Embarrassingly parallel and spatially distributed AI models that exploit large-scale computation for real-time predictions (including GPUs, FPGAs, HPC and clouds); see the sketch after this list
  • Edge-enhanced, embedded and miniaturized systems to support locality-, resource- and performance-aware machine learning
  • Data-intensive distributed systems for high-performance inference
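
Embarrassingly parallel models can be pictured as independent workers scoring disjoint data shards. The sketch below uses only the Python standard library and NumPy for illustration; the placeholder linear model, shard count and worker count are assumptions, and a real deployment would dispatch the shards to GPUs, FPGAs, HPC nodes or cloud workers rather than local processes.

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def predict_shard(shard):
        # Placeholder model: each worker scores its shard independently,
        # so shards can run on separate nodes with no coordination.
        weights = np.array([0.5, -0.2, 0.1])  # assumed model parameters
        return shard @ weights

    if __name__ == "__main__":
        data = np.random.rand(100_000, 3)      # full dataset
        shards = np.array_split(data, 8)       # one shard per worker
        with ProcessPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(predict_shard, shards))
        predictions = np.concatenate(results)  # recombine in the original order
        print(predictions.shape)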

Knowledge and data engineering

  • Scalable and distributed data management architectures for robust and elastic data storage
  • Consistency, availability and partition tolerance (CAP) compliant models for data integration, modelling and data retrieval (NoSQL, graph and replicated databases); see the sketch after this list
  • High-performance indexing, data modelling and knowledge engineering (warehousing, graph databases, ontologies, semantics)
  • Federated and privacy-preserving data storage architectures (catalogs, blockchains, data lakes)
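
To make the CAP trade-off concrete, the sketch below shows quorum replication, where overlapping read and write quorums (R + W > N) favour consistency over availability under partition. The in-memory replicas, quorum sizes and timestamp-based conflict resolution are illustrative assumptions, not a specific system used by the group.

    import random
    import time

    N, W, R = 3, 2, 2                      # assumed replica count and quorum sizes (R + W > N)
    replicas = [dict() for _ in range(N)]  # in-memory stand-ins for replica nodes

    def put(key, value):
        # A write succeeds once W replicas store the timestamped version.
        stamped = (time.time(), value)
        for replica in random.sample(replicas, W):
            replica[key] = stamped

    def get(key):
        # A read consults R replicas; because R + W > N, at least one of
        # them holds the latest write, and the newest timestamp wins.
        versions = [r[key] for r in random.sample(replicas, R) if key in r]
        return max(versions, key=lambda v: v[0])[1] if versions else None

    put("sensor:42", {"temp": 21.5})
    print(get("sensor:42"))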

Group members
