ANZIAM 2026 / 8th - 12th February / Canberra


SigmaOpt Workshop 2026 

Friday, 13 February 2026 

Location: Australian National University, Canberra 

Time                  Program 

09:45–10:00     Welcome 
                         Matthew Tam 

10:00–10:30    SigmaOpt/MoCaO Student Best Paper Prize 

10:30–11:00    Philipp Braun (ANU) 
                        Properties of Fixed Points of Generalised Extra Gradient Methods Applied to Min–Max Problems 

11:00–11:40    Morning Tea 

11:40–12:10   Queenie Huang (UNSW Sydney) 
                       A Robust Machine Learning Model of Classification and Feature Selection 

12:10–12:40   James Nichols (Macquarie Bank / ANU) 
                       Optimisation Problems in Deep Graph Matching 

12:40–13:30   Lunch 

13:30–14:00   Mahdi Abolghasemi (QUT) 
                       Insights on Predicting and Optimising Decisions 

14:00–14:30   Hoa Bui (Curtin University) 
                       Decomposition Strategies for Large-Scale Maintenance Scheduling Problems 

14:30–14:45   Closing Remarks 
                       Neil Dizon 

For more information, visit the ANZIAM Conference webpage at austms.org.au/events/2026-sigmaopt-workshop 

Contact: Felipe Atenas (atenas.opt@gmail.com), Neil Dizon (n.dizon@unsw.edu.au)

Abstracts 

Philipp Braun, ANU 
Title: Properties of Fixed Points of Generalised Extra Gradient Methods Applied to Min-Max Problems 
Abstract: In this talk, we study properties of fixed points of generalised Extra-gradient (GEG) algorithms applied to min-max problems. We discuss connections between saddle points of the objective function of the min-max problem and GEG fixed points. We show that, under appropriate step-size selections, the set of local saddle points (local Nash equilibria) is a subset of the locally stable fixed points of GEG. Convergence properties of the GEG algorithm are obtained through a stability analysis of a discrete-time dynamical system. The results are illustrated and compared with existing methods through numerical examples. The talk is based on the paper https://ieeexplore.ieee.org/document/11008549. 
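
The classical extra-gradient step that GEG generalises can be sketched in a few lines: a look-ahead gradient evaluation, followed by an update from the original point using the look-ahead gradient. Below is a minimal NumPy sketch (illustrative only, not the GEG scheme of the talk) on the bilinear saddle problem f(x, y) = xy, whose unique saddle point is the origin.

```python
import numpy as np

def extragradient(F, z0, step=0.1, iters=2000):
    """Classical extra-gradient iteration for an operator F:
    a look-ahead step, then an update using the look-ahead gradient."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)       # extrapolation (look-ahead) step
        z = z - step * F(z_half)       # update from the original point
    return z

# Bilinear saddle problem f(x, y) = x * y: descend in x, ascend in y.
# The associated operator is F(x, y) = (df/dx, -df/dy) = (y, -x); the
# unique saddle point, and fixed point of the iteration, is the origin.
F = lambda z: np.array([z[1], -z[0]])
z_star = extragradient(F, z0=[1.0, 1.0])
```

On this problem plain gradient descent-ascent diverges, while the look-ahead step makes the iteration contractive, so the iterates approach the saddle point.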

Queenie Huang, UNSW Sydney 
Title: A Robust Machine Learning Model of Classification and Feature Selection 
Abstract: In this talk, we introduce an efficient machine learning method based on robust Support Vector Machines (SVMs) that simultaneously classifies data and selects relevant features whilst accounting for data uncertainty. Based on Wasserstein distributionally robust optimization, we develop computationally feasible robust SVM models along with efficient second-order cone programming methods using an integrated application of tools from convex non-smooth analysis and difference of convex optimization. Our computational results on benchmark datasets demonstrate that these robust SVMs identify relevant features whilst achieving higher classification accuracies than the conventional (non-robust) SVM models, especially for datasets with more features than instances. Applying our method to a novel dataset of handwriting samples from individuals with Alzheimer’s disease and a control group, the model was able to distinguish between both groups with greater than 80% accuracy and using only 37% (168/450) of all available features, outperforming previous SVM models and providing insights into the unique characteristics of the disease. 
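
As background to the talk, simultaneous classification and feature selection can be illustrated in its simplest (non-robust) form: a linear hinge loss with an L1 penalty, trained by subgradient descent on synthetic data in which only the first feature carries signal. This is only a baseline sketch; the Wasserstein distributionally robust models of the talk are substantially more involved.

```python
import numpy as np

def l1_svm(X, y, lam=0.05, lr=0.01, epochs=2000):
    """Hinge loss + L1 penalty via subgradient descent. The L1 term
    drives weights of uninformative features towards zero, giving
    feature selection alongside classification."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin violators
        grad_w = -(y[active, None] * X[active]).sum(axis=0) / n + lam * np.sign(w)
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: only the first of five features determines the label.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))

w, b = l1_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

The learned weight vector concentrates on the informative feature while the penalised, uninformative weights stay near zero.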

James Nichols, Macquarie Bank/ANU 
Title: Optimisation problems in deep graph matching 
Abstract: Graph matching is the process of finding a mapping from the nodes of one graph (the source graph) to the nodes of another (the target graph) that maps structurally similar nodes to each other. Finding a suitable matching is incredibly difficult. Early formulations of the task in the literature resulted in quadratic assignment problems or other NP-hard discrete optimisation problems that had to be solved. A further complication is that such early formulations only captured first-order adjacency data, rather than a more holistic structural view of the graph. The modern approach is instead to leverage deep learning: deep neural networks compute node embeddings, and the embeddings are used to compare each node in the source graph with each node in the target graph. The resulting similarity matrix can then be used to create a mapping of the nodes between the two graphs, so the task becomes finding a mapping consistent with the similarity matrix. This leads to interesting and simple optimisation problems, and the choices made have interesting implications for the overall graph matching objective. We present some simple theoretical results and some computational results. This work was a collaboration with Gathika Ratnayaka and Prof. Qing Wang, and this talk is dedicated to her memory. 
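
The embed-compare-assign pipeline described above can be sketched with cheap, hand-rolled structural embeddings standing in for a learned GNN: walk counts of increasing depth per node, a cosine similarity matrix between source and target embeddings, and a greedy one-to-one assignment (an illustrative sketch, not the talk's method).

```python
import numpy as np

def embed(A, depth=3):
    """Structural embeddings: counts of walks of length 1..depth from
    each node (a stand-in for learned GNN node embeddings)."""
    feats, M = [], A.astype(float)
    for _ in range(depth):
        feats.append(M.sum(axis=1))
        M = M @ A
    return np.stack(feats, axis=1)

def match(A_src, A_tgt):
    """Cosine similarity matrix between embeddings, then a greedy
    one-to-one assignment of source nodes to target nodes."""
    E_s, E_t = embed(A_src), embed(A_tgt)
    S = (E_s / np.linalg.norm(E_s, axis=1, keepdims=True)) @ \
        (E_t / np.linalg.norm(E_t, axis=1, keepdims=True)).T
    mapping, used = {}, set()
    for i in np.argsort(-S.max(axis=1)):       # most confident rows first
        j = max((j for j in range(S.shape[1]) if j not in used),
                key=lambda j: S[i, j])
        mapping[int(i)] = j
        used.add(j)
    return mapping

# Small asymmetric test graph: a path 0-1-2-3-4-5 with chord 1-3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

perm = np.array([2, 0, 5, 1, 4, 3])            # relabel the nodes
A_perm = A[np.ix_(perm, perm)]                 # permuted copy of A
mapping = match(A, A_perm)
```

Matching the graph against a relabelled copy of itself recovers the relabelling exactly, since each node's embedding is preserved under permutation.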

Mahdi Abolghasemi, QUT 
Title: Insights on Predicting and Optimising Decisions 
Abstract: Forecasting and decision optimisation are among the most powerful tools in data-driven decision-making under uncertainty. From retail demand planning to energy load scheduling, the ability to forecast an uncertain future as accurately as possible and optimise actions accordingly is critical. However, the literature suggests that greater forecast accuracy does not always guarantee better decisions. In this talk, I will highlight why integrating forecasting and optimisation models is crucial for better decision-making. I will then present methods that account for both forecast accuracy and downstream decision quality. 
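
A classic illustration of the gap between forecast accuracy and decision quality is the newsvendor problem: under asymmetric under- and over-stocking costs the optimal order is a quantile of demand, so the most accurate point forecast under squared error (the mean) can yield worse decisions than a decision-aware forecast. A NumPy sketch (illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.exponential(scale=100.0, size=100_000)   # right-skewed demand

# Newsvendor economics: underage cost (lost sales) vs overage cost (waste).
cu, co = 9.0, 1.0

def avg_cost(q, d):
    """Average cost of ordering quantity q against demand samples d."""
    return (cu * np.maximum(d - q, 0) + co * np.maximum(q - d, 0)).mean()

# "Most accurate" point forecast under squared error: the mean.
q_mean = demand.mean()
# Decision-aware order: the critical fractile cu / (cu + co) of demand.
q_quantile = np.quantile(demand, cu / (cu + co))

cost_mean = avg_cost(q_mean, demand)
cost_quantile = avg_cost(q_quantile, demand)
```

Ordering the mean minimises forecast error yet incurs a clearly higher expected cost than ordering the critical quantile, which directly targets the downstream decision.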

Hoa Bui, Curtin University 
Title: Decomposition Strategies for Large-Scale Maintenance Scheduling Problems 
Abstract: In this talk, we explore several optimisation problems in scheduling maintenance, from integrated mining operations to chemical refinement processes in Western Australia. Optimisation models for these problems are discrete, sometimes nonlinear, and of large scale. We show how decomposition methods such as Benders decomposition and logic-based Benders decomposition are used to tackle the large-scale problems and resolve the nonlinearity. We also discuss the trade-off between formulation types: while linearisation is often the default approach for such problems, we show that preserving nonlinear structures within a decomposition framework can lead to enhanced algorithmic efficiency.
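
The Benders mechanics (a master problem over the discrete decisions, a subproblem supplying dual-based cuts) can be illustrated on a toy open-capacity-or-pay-shortage model. Here the tiny master is solved by enumeration in place of a MILP solver, and the subproblem dual is available in closed form; this is an illustrative sketch only, not the talk's models.

```python
import itertools
import numpy as np

f = np.array([3.0, 2.0])     # fixed cost of opening each facility
M = np.array([6.0, 5.0])     # capacity contributed by each facility
d, p = 8.0, 10.0             # demand and per-unit shortage penalty

def subproblem(y):
    """Second stage: pay p per unit of unmet demand. Returns its value
    and an optimal dual multiplier for the demand constraint
    (p when there is a shortage, else 0)."""
    short = max(0.0, d - M @ y)
    lam = p if short > 0 else 0.0
    return p * short, lam

cuts = []                    # each cut reads: theta >= lam * (d - M @ y)
best = None
for _ in range(10):
    # Master problem: tiny, so enumerate the binary decisions directly.
    candidates = []
    for y in itertools.product([0, 1], repeat=2):
        y = np.array(y, dtype=float)
        theta = max([0.0] + [lam * (d - M @ y) for lam in cuts])
        candidates.append((f @ y + theta, y, theta))
    lower, y, theta = min(candidates, key=lambda t: t[0])

    value, lam = subproblem(y)       # evaluate the true second stage
    upper = f @ y + value
    if upper <= lower + 1e-9:        # bounds meet: y is optimal
        best = (upper, y)
        break
    cuts.append(lam)                 # add the Benders optimality cut
```

Each iteration either certifies optimality or adds a cut that corrects the master's underestimate of the second-stage cost; on this instance the loop converges to opening both facilities.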

