Dr. Omer Rana

Professor of Performance Engineering and Dean of International for the College of Physical Sciences and Engineering at Cardiff University.

He has research interests in high performance distributed computing (particularly cloud and edge computing) and intelligent systems. He is a visiting professor at the Department of Computer Science and Engineering at Shanghai Jiao Tong University (China) and was previously a visiting professor at Princess Noura University in Riyadh (Saudi Arabia). He is a fellow of Cardiff University's multi-disciplinary "Data Innovation" Research Institute and was previously deputy director of the Welsh eScience Centre.

Rana has contributed to specification and standardisation activities through the Open Grid Forum. Prior to joining Cardiff University, he worked as a software developer with London-based Marshall Bio-Technology Limited, where he developed specialist software to support biotech instrumentation. He has also contributed to the public understanding of science through the Wellcome Trust funded "Science Line", in collaboration with the BBC and Channel 4. Rana holds a PhD in "Neural Computing and Parallel Architectures" from Imperial College London (UK), an MSc in Microelectronics from the University of Southampton (UK) and a BEng in Information Systems Engineering from Imperial College London (UK). He was born in Lahore (Pakistan) and maintains close links and collaborations with the computer science research community in Pakistan.

Omer Rana's Keynote

The Intelligent Edge: Integrating AI & Edge Computing

Abstract:

Internet of Things (IoT) applications today involve data capture from sensors and devices close to the phenomenon being measured, with the data subsequently transmitted to a cloud data centre for storage, analysis and visualisation. Currently, the devices used for data capture often differ from those used to subsequently analyse the data. The increasing availability of storage and processing devices closer to the point of data capture, perhaps over a one-hop network connection or even directly attached to the IoT device itself, requires a more efficient allocation of processing across such edge devices and data centres. Supporting machine learning directly on edge devices also enables distributed (federated) learning, allowing user devices to participate directly in the inference or learning process. Scalability in this context must consider cloud resources, data distribution and initial processing on edge resources closer to the user. This talk considers whether a data communications network can be enhanced using edge resources, and whether the combined use of edge, in-network (in-transit) and cloud data centre resources provides an efficient infrastructure for machine learning and AI. The following questions are addressed in this talk:

  • How do we partition machine learning algorithms across Edge-Network-Cloud resources -- based on constraints such as privacy, capacity and resilience?

  • Can machine learning algorithms be adapted based on the characteristics of the devices on which they are hosted? What does this mean for stability/convergence vs. performance?

  • Do we trade off accuracy for “explainability” of results? Given a complex parameter space, can “approximations” help explain the basis of results?
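The distributed (federated) learning setup the abstract refers to can be illustrated with a minimal sketch: each edge device trains on its own private data shard, and a server aggregates the resulting models weighted by local dataset size (federated averaging). The linear model, client counts, learning rate and round counts below are illustrative assumptions for the sketch, not details from the talk.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: a few gradient-descent epochs on a
    linear least-squares model, using only that client's data shard."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical scenario: three edge devices, each holding a private shard
# drawn from the same underlying linear relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    shards.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: broadcast, train locally, aggregate
    local_models = [local_update(global_w, X, y) for X, y in shards]
    global_w = federated_average(local_models, [len(y) for _, y in shards])
```

Raw data never leaves a device; only model parameters are exchanged, which is the privacy motivation behind placing learning on edge resources rather than shipping all sensor data to a cloud data centre.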