School of Computing and Mathematical Sciences
Distributed and High-Performance AI Systems
The Distributed AI Systems group delivers theoretical and practical innovations in data-intensive distributed systems (including clouds, HPC and quantum data centres), distributed machine learning models, and self-adapting, physics-informed digital twins that emulate the real-time behaviour of (bio)mechanical and cyber-physical systems.
Another key focus of the group is to provide technical leadership in investigating state-of-the-art high-performance computational infrastructures for artificial intelligence models and their applications in the environment, health, engineering and beyond. We exploit emerging approaches in parallel and distributed systems, such as exascale computing, quantum computing and edge-enhanced system architectures, to deliver scalable, efficient and robust AI systems.
The group also investigates distributed models for data representation and integration, such as graph models, multi-agent systems and semantic models, to offer reliable, consistent and networked systems.
In particular, the Distributed AI Systems group conducts multidisciplinary research in the following areas:
Self-learning digital twins
- Self-learning digital twins for cyber-physical systems
- Self-adapting networks of digital twins
- Diffusion and assimilation models for high-performance digital twins that combine physics-driven causality with simulations and observed data, enabling real-time predictions
- Physics-informed neural networks to enable safe and robust digital twins that respect mathematical constraints such as partial differential equations (a minimal sketch follows this list)
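As an illustration of the physics-informed approach above, the sketch below encodes a PDE residual as a training loss, so the network is penalised for violating the governing equation. It is a minimal example assuming a 1D heat equation u_t = alpha * u_xx; the network architecture, diffusivity value and collocation sampling are illustrative placeholders rather than the group's actual models.

```python
# Minimal physics-informed loss sketch (assumed toy PDE: u_t = alpha * u_xx).
import torch
import torch.nn as nn

alpha = 0.1  # assumed diffusivity for the toy problem

# Small fully connected network mapping (x, t) -> u(x, t)
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def pde_residual(x, t):
    """Residual of u_t - alpha * u_xx at collocation points (x, t)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

# Collocation points sampled inside the space-time domain; in a full training
# loop this physics loss would be combined with data and boundary-condition terms.
x_c = torch.rand(256, 1)
t_c = torch.rand(256, 1)
physics_loss = pde_residual(x_c, t_c).pow(2).mean()
print(float(physics_loss))
```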
Distributed AI
- Distributed and high-performance machine learning architectures for scale, performance and quality of service
- Embarrassingly parallel and spatially distributed AI models to exploit large-scale computation for real-time predictions (including GPUs, FPGAs, HPC and clouds); see the sketch after this list
- Edge-enhanced, embedded and miniaturised systems to support locality-, resource- and performance-aware machine learning
- Data-intensive distributed systems for high-performance inferencing
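To illustrate the embarrassingly parallel, spatially distributed pattern mentioned above, the sketch below splits samples into spatial tiles and runs per-tile inference in independent worker processes, so throughput scales with the number of workers. The tile layout, worker count and the toy predict() function are assumptions made purely for illustration; they do not describe the group's deployed systems.

```python
# Sketch of embarrassingly parallel inference over spatial partitions.
from concurrent.futures import ProcessPoolExecutor

def predict(tile):
    """Toy per-tile 'model': here just the mean of the tile's sample values."""
    values = [value for _, value in tile]
    return sum(values) / len(values)

def split_into_tiles(samples, n_tiles):
    """Partition (coordinate, value) samples into contiguous spatial tiles."""
    samples = sorted(samples)  # sort by coordinate so each tile is spatially local
    size = max(1, len(samples) // n_tiles)
    return [samples[i:i + size] for i in range(0, len(samples), size)]

if __name__ == "__main__":
    samples = [(x / 100.0, (x % 7) * 0.5) for x in range(1000)]
    tiles = split_into_tiles(samples, n_tiles=8)
    # Tiles are independent, so predictions run in parallel worker processes;
    # on a cluster the same pattern maps onto nodes, GPUs or serverless workers.
    with ProcessPoolExecutor(max_workers=4) as pool:
        predictions = list(pool.map(predict, tiles))
    print(predictions)
```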
Knowledge and data engineering
- Scalable and distributed data management architectures for robust and elastic data storage
- Consistency, availability and partition tolerance (CAP)-compliant models for data integration, modelling and retrieval (NoSQL, graph and replicated databases); see the sketch after this list
- High-performance indexing, data modelling and knowledge engineering (warehousing, graph databases, ontologies, semantics)
- Federated and privacy-preserving data storage architectures (catalogues, blockchains, data lakes)
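As a simple illustration of the replication trade-offs behind CAP-aware designs, the sketch below shows quorum-based reads and writes: with N replicas, a write quorum W and a read quorum R chosen so that R + W > N, every read overlaps at least one replica holding the latest acknowledged write. The Replica class, quorum sizes and key names are hypothetical and are not the API of any particular database used by the group.

```python
# Quorum replication sketch: R + W > N guarantees read/write quorum overlap.
N, W, R = 3, 2, 2  # assumed replica count, write quorum and read quorum

class Replica:
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def write(self, key, version, value):
        current_version, _ = self.store.get(key, (0, None))
        if version > current_version:  # keep only the newest version
            self.store[key] = (version, value)

    def read(self, key):
        return self.store.get(key, (0, None))

replicas = [Replica() for _ in range(N)]

def put(key, value, version):
    # A write is acknowledged once W replicas accept it; the rest may lag.
    for replica in replicas[:W]:
        replica.write(key, version, value)

def get(key):
    # Read R replicas and return the highest-versioned value; because
    # R + W > N, at least one of them has seen the latest acknowledged write.
    responses = [replica.read(key) for replica in replicas[-R:]]
    return max(responses, key=lambda r: r[0])[1]

put("sensor_42", "online", version=1)
print(get("sensor_42"))  # -> "online", despite one stale replica
```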
Group members
- Professor Ashiq Anjum (group lead)
- Dr John Drake
- Dr Rob Free
- Dr Xiao Chen
- Dr Wentao Li
- Dr Furqan Aziz
- Dr Genovefa Kefalidou
- Dr Craig Bower
- Dr Ali Khan
- Dr Corentin Houpert
- Dr Amjad Ali
- Dr Hassan Mansoor
- Dr Noel Clancy
- Dr Fang Chen
Projects
- SLAIDER
- Self-learning digital twins for sustainable land management
- BASAE II
- Meteor
- I-Reach project
- Verifiability Node
- OR-MASTER
News and updates
- Vacancy: Professor in distributed computing
- Vacancy: Lecturer in distributed machine learning and digital twins
- Professor Ashiq Anjum is the general chair for the 17th IEEE/ACM International Conference on Utility and Cloud Computing