Osvaldo Simeone – Ensuring Reliability via Hyperparameter Selection: Theory and Applications

Abstract: Hyperparameter selection is a critical step in the deployment of artificial intelligence (AI) models, particularly in the current era of foundational, pre-trained models. By framing hyperparameter selection as a multiple hypothesis testing problem, recent research has shown that it is possible to provide statistical guarantees on population risk measures attained by the selected hyperparameter. This talk will review the Learn-Then-Test (LTT) framework, which formalizes this approach, and explore several extensions tailored to engineering-relevant scenarios. These extensions encompass different risk measures and statistical guarantees, multi-objective optimization, the incorporation of prior knowledge and dependency structures into the hyperparameter selection process, as well as adaptivity. The talk will also include illustrative applications.
Bio: Osvaldo Simeone is a Professor of Information Engineering at King’s College London and a visiting Professor at Aalborg University. From 2006 to 2017, he was at the New Jersey Institute of Technology (NJIT). Among other recognitions, Prof. Simeone is a co-recipient of the 2022 IEEE Communications Society Outstanding Paper Award, the 2021 IEEE Vehicular Technology Society Jack Neubauer Memorial Award, the 2019 IEEE Communication Society Best Tutorial Paper Award, the 2018 IEEE Signal Processing Best Paper Award, and the 2015 IEEE Communication Society Best Tutorial Paper Award. He was awarded an Advanced grant by the European Research Council (ERC) in 2025, an Open Fellowship by the EPSRC in 2022, and a Consolidator grant by the ERC in 2016. He was a Distinguished Lecturer of the IEEE Communications Society in 2021 and 2022, and he was a Distinguished Lecturer of the IEEE Information Theory Society in 2017 and 2018. Prof. Simeone is the author of the textbooks “Machine Learning for Engineers” and “Classical and Quantum Information Theory” published by Cambridge University Press, four monographs, two edited books, and more than 200 research journal and magazine papers. He is a Fellow of the IET and IEEE.
Rebecca Killick – The Art of the Pivot: Detecting Shifts in your Data Streams

Abstract: Our world is a continuous stream of data, but the most critical insights often lie not in the flow itself but in sudden, meaningful shifts. This talk is an accessible guide to changepoint detection, the powerful data science discipline of identifying abrupt, significant changes in the properties of a time series. We will explore how these “changepoints” are more than mere anomalies: they are signals that can indicate a market crash, a machine failure, a fundamental shift in customer behaviour, or a pivotal moment in a patient’s treatment.
Through a series of real-world examples, this keynote will demystify the core concepts behind changepoint detection. We’ll discuss why it’s a crucial tool in the data science toolkit, and touch on the key methodologies used to find these hidden signals. By the end of this talk, you will gain a new perspective on analyzing data, recognizing that knowing when a change occurred is often the first step to unlocking its true value.
Bio: Rebecca Killick received their PhD in Statistics from Lancaster University, where they now hold the positions of Professor and Director of Research. In 2019 they were the first UK recipient of the “Young Statistician of the Year” award from the European Network for Business and Industrial Statistics, which recognizes the work of young people in introducing innovative methods, promoting the use of statistics, and/or successfully using it in daily practice. Rebecca sees their research as a feedback loop: being inspired by problems in real-world applications, creating novel methodology to solve those problems, and then feeding the solutions back into the problem domain. Their primary research interests lie in the development of novel methodology for the analysis of univariate and multivariate nonstationary time series models. This covers many topics, including developing models, model selection, efficient estimation, diagnostics, clustering and prediction. Rebecca is highly motivated by real-world problems and has worked with data in a range of fields, including Bioinformatics, Energy, Engineering, Environment, Finance, Health, Linguistics and Official Statistics. Rebecca is passionate about ensuring the availability and accessibility of research in the form of open-source software. As part of this, they advocate to the statistical community the importance of recognizing research software as an academic output, serve as co-Editor-in-Chief of the Journal of Statistical Software, and are a member of the rOpenSci statistical software peer review board.
Patrick Rubin-Delanchy – The Manifold Hypothesis in Science & AI

Abstract: The manifold hypothesis is a widely accepted tenet of machine learning which asserts that nominally high-dimensional data are in fact concentrated around a low-dimensional manifold. In this talk, I will show some real examples of manifold structure occurring in science and in AI (internal representations of LLMs), and discuss associated questions, particularly around how observed topology and geometry might map to the real world (science) or a human-understandable concept (AI). I will present statistical models and theory which help explain the efficacy of popular combinations of tools, such as PCA followed by t-SNE. Finally, I will point to a vast array of unexplored possibilities in representation learning and potential implications for the future role of AI in science.
Bio: Professor Rubin-Delanchy obtained his PhD in Statistics at Imperial College London in 2008, supervised by Prof. Andrew Walden. He was awarded a Heilbronn research fellowship in Data Science in 2012, which he held first at the University of Bristol and then at the University of Oxford. He became an assistant professor of Statistics at the University of Bristol in July 2017 and was promoted to full professor in July 2022. As of January 2024, he is Chair of Statistical Learning at the University of Edinburgh. Professor Rubin-Delanchy’s lab conducts fundamental research, particularly around the manifold hypothesis and its implications for science and AI, as well as applied research, particularly building predictive models for security and healthcare. He is an associate editor for the Journal of the Royal Statistical Society, Series B (Statistical Methodology). He is part of a six-year EPSRC Programme Grant on Network Stochastic Processes and Time Series (NeST), a multi-million-pound award held together with Imperial, Oxford, Bath, LSE, York and Bristol.
Chenlei Leng – Statistical Modeling of Network Features: From Foundations to Data Science Applications

Abstract: In many applications, we observe a single network, which in the directed case is typically sparse (with far fewer links than the maximum possible), homophilous (where similar nodes are more likely to connect), and reciprocal (with directed ties tending to appear in mutual pairs). This talk presents a statistical framework that incorporates these features in a unified model, allowing covariates to explain homophily and reciprocity. The talk begins with a foundational Bernoulli model designed to capture sparsity, providing new insights into effective sample size and inference under sparse conditions. We then extend this framework to include homophily and reciprocity in a full generative model, introducing a novel technique to estimate reciprocity parameters with statistical guarantees and optimality. Through numerical examples, I will illustrate how these models advance our understanding of network features while enabling more reliable, interpretable, and scalable network analytics. The talk concludes with perspectives on applications across domains, from online platforms to international relations.
