The main goal of the conference is to foster discussion around the latest advances in Artificial Intelligence.

Note - the Programme is subject to change

Posters

A Practical Guide to Selecting Context-Appropriate Fairness Metrics in Machine Learning
Caleb Barr, University of Canterbury

Recent regulatory proposals for artificial intelligence emphasise fairness requirements for machine learning models. However, precisely defining an appropriate measure of fairness is challenging due to philosophical, cultural and political contexts. Biases can infiltrate machine learning models in complex ways depending on the model’s context, rendering a single common metric of fairness insufficient. This ambiguity highlights the need for criteria to guide the selection of context-aware fairness measures—an issue of increasing importance given the proliferation of ever tighter regulatory requirements of fairness. To address this, we developed a flowchart to guide the selection of contextually appropriate fairness measures. Twelve criteria were used to formulate the flowchart, including the consideration of model assessment criteria, model selection criteria, and data biases. Formulation of this flowchart has significant implications for predictive machine learning models, providing a guide to appropriately quantify fairness in these models.
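The context-dependence the abstract describes shows up even in toy examples: two standard group-fairness criteria can disagree on the same predictions. The sketch below (illustrative helper functions, not the authors' flowchart) computes demographic parity difference and equal opportunity difference for two groups.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy outcomes for two groups of four individuals each.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# The two criteria disagree on the same predictions: one flags a
# disparity, the other does not, so context must guide the choice.
print(demographic_parity_diff(y_pred, group))         # 0.25
print(equal_opportunity_diff(y_true, y_pred, group))  # 0.0
```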

Pretrained Workflow Transformer: Generating Functional Workflows from Technical Descriptions
Ardi Oeij, Lincoln University

A software functional specification typically includes a technical description of a function without a workflow visualization. Manually creating a workflow for a function from technical descriptions expressed in natural language is challenging and time-consuming. One potential solution is to automate workflow generation. This automation can help system analysts and technical writers speed up delivery time when creating software specifications and technical documentation.

Earlier studies that explored automation using conventional approaches struggled with long sequential data and ambiguity. The introduction of the Transformer model brought a breakthrough in processing long sequential data and resolving ambiguity. The research gap lies in the absence of automatic workflow generation for a function from a given technical description using a Transformer model trained from scratch. Therefore, this thesis intends to develop a small pretrained Transformer model to generate a workflow for a software function.

TAIAO Platform: Flood Prediction and Critter Monitoring
Nick Lim, AI Institute, University of Waikato

The TAIAO Platform applies artificial intelligence to diverse environmental challenges in New Zealand. This poster highlights two projects: (1) Flood Prediction, which integrates river sensor data, rainfall observations, and machine learning to deliver high-resolution, real-time forecasts that support community resilience against extreme weather; and (2) Maungatautari Species Monitoring, which uses trail cameras and automated image recognition to detect and classify native and invasive species, enhancing biodiversity conservation. Together, these projects illustrate TAIAO’s flexible, data-driven approach and the cross-cutting lessons learned in scaling AI systems, managing heterogeneous datasets, and delivering actionable insights for climate adaptation and ecological stewardship.

Standardizing Cluster Evolution Analysis in Data Streams: CapyMOA as a Platform for Scalable Evaluation
Guilherme Weigert Cassales, AI Institute, University of Waikato

This research focuses on unsupervised clustering in data streams, with an emphasis on understanding and tracking cluster evolution under non-stationary conditions. In streaming environments, clusters may emerge, fade, or shift over time, requiring specialized tools to evaluate and compare algorithmic behavior effectively. To address this, CapyMOA is being developed as a modular and extensible platform that standardizes key aspects of clustering evaluation, including procedures for detecting transitions, benchmarking performance, and ensuring replicability across experiments. By providing a unified API and evaluation framework, CapyMOA aims to facilitate the systematic study of clustering dynamics in high-velocity data, supporting applications in domains such as environmental monitoring, industrial telemetry, and smart infrastructure. This work aims to promote reproducible research and foster broader adoption of stream-based clustering methodologies.
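As a minimal illustration of the transition-detection problem described above (a generic sketch, not CapyMOA's actual API), cluster centroids from two consecutive windows can be matched by distance to label clusters as emerged, faded, or shifted:

```python
import numpy as np

def cluster_transitions(prev, curr, match_tol=1.0, shift_tol=0.25):
    """Classify transitions between two sets of cluster centroids.

    prev, curr: (k, d) arrays of centroids from consecutive windows.
    A previous cluster with no nearby current centroid has faded; an
    unmatched current centroid has emerged; a matched pair whose
    centroids moved more than shift_tol has shifted. Thresholds are
    illustrative; a streaming framework would track this online.
    """
    matched_prev, matched_curr, shifted = set(), set(), []
    for i, p in enumerate(prev):
        dists = np.linalg.norm(curr - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= match_tol and j not in matched_curr:
            matched_prev.add(i)
            matched_curr.add(j)
            if dists[j] > shift_tol:
                shifted.append((i, j))
    faded = [i for i in range(len(prev)) if i not in matched_prev]
    emerged = [j for j in range(len(curr)) if j not in matched_curr]
    return {"emerged": emerged, "faded": faded, "shifted": shifted}

prev = np.array([[0.0, 0.0], [5.0, 5.0]])
curr = np.array([[0.5, 0.0], [9.0, 9.0]])
print(cluster_transitions(prev, curr))
# cluster 0 shifted slightly, cluster 1 faded, and a new cluster emerged
```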

Multi-Kernel CNN Ensemble for Predicting Wildfire Spread
Nikeeta Kumari, Unitec Institute of Technology

Wildfires are a serious and growing problem: they spread quickly and cause extensive damage. They harm the environment, destroy property, put communities at risk, and create serious public health hazards by increasing air pollution. Being able to predict how a wildfire will spread one day in advance is very important for emergency planning and disaster response. However, this is a difficult task because fire behaviour depends on many different factors, such as weather, vegetation, terrain, and past fire activity.

This project focuses on using a Multi-Kernel Convolutional Neural Network (MKCNN) to predict the next-day wildfire spread. The model will use satellite-based environmental image data such as temperature, humidity, wind direction, vegetation, and elevation. The goal is to predict the fire extent for the following day.

The dataset “Next Day Wildfire Spread”, which I will use, includes daily fire masks along with weather and environmental features. The data is in a structured grid, which makes it suitable for deep learning models like CNNs. Since this dataset is standardised and widely used, I can compare results with other studies. At the same time, there are challenges. Most pixels show no fire, so the data is imbalanced and harder to train on. In addition, since this dataset is still relatively new and not much research has been done with it, there may be unexpected challenges during the project, which I am prepared to address.

Rather than training a single model with all features at once, the proposed approach trains several MKCNN models on pairs of inputs: one environmental feature combined with the previous day’s fire mask. An extension of this approach could be training models with a fire mask and two of the top-ranked features. The final prediction will be produced by ensembling these trained models to improve accuracy and stability.
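The ensembling step can be sketched as follows, with stand-in callables in place of trained MKCNNs (the stand-in "model" dynamics and the averaging scheme are illustrative assumptions, not the project's actual design):

```python
import numpy as np

def ensemble_fire_mask(models, fire_mask, features, threshold=0.5):
    """Average per-pair model outputs into a next-day fire mask.

    models[i] is a callable taking (previous fire mask, feature map i)
    and returning a per-pixel fire probability map; trained MKCNNs
    would take their place.
    """
    probs = np.mean([m(fire_mask, f) for m, f in zip(models, features)],
                    axis=0)
    return (probs >= threshold).astype(np.uint8)

# Dummy stand-in "model": fire probability rises with yesterday's fire
# and with the feature value (purely illustrative dynamics).
model = lambda mask, feat: np.clip(0.6 * mask + 0.5 * feat, 0.0, 1.0)

prev_mask = np.array([[1.0, 0.0], [0.0, 0.0]])
features = [np.full((2, 2), 0.8), np.full((2, 2), 0.1)]
print(ensemble_fire_mask([model, model], prev_mask, features))
# [[1 0]
#  [0 0]]
```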

The performance of this method will be compared with a baseline eleven-feature MKCNN model trained on all features, as well as with other models such as Convolutional Autoencoders and U-Nets, which have already shown strong performance in wildfire prediction. The project will also identify which environmental factors most strongly influence the prediction of wildfire spread, providing information that could support better decision-making in wildfire management.

On the Shoulders of the Internet: Unveiling the Mystique of AI and Power
Lisen He, Victoria University of Wellington

AI is never neutral; it is inherently a power phenomenon. Understanding AI and its societal impacts therefore requires a comprehensive analysis of its relationship with power. Despite diverse investigations into the nexus of AI and power, this emerging field still lacks a systematic and unified description of their relationship. Without such conceptual clarity, it is difficult to effectively confront the rise of AI hegemony in the digital era. To address this gap, this literature-based thesis aims to develop a unifying conceptual framework that captures the overall relationship between AI and power. The framework serves as an integrative synthesis, encompassing relevant studies across various domains and offering a coherent structure for systematically mapping this diverse body of work; it allows scholars to situate their individual research within the larger picture. On this basis, the thesis further aims to redefine the concept of power through a critical review. The thesis consists of two major components: (1) the development of the conceptual framework, and (2) a novel approach to classifying the concept of power in the context of AI. Unlike traditional approaches that rely on aggregating prior studies, this thesis draws inspiration from the evolution of Internet Studies to inform AI Studies. It adapts and updates theoretical insights from Internet research to the AI context, while also proposing new theoretical implications specific to AI. Together, these constitute the eventual finding: a conceptual framework of the relationship between AI and power. The thesis argues that a true understanding of the relationship between AI and power must be grounded in this framework, recognizing both the formation and the practice of AI at the same time.

Theoretically, this thesis aims to provide a comprehensive understanding of the relationship between AI and power, and to offer a broad conceptual framework through which scholars can resituate their work. While the thesis offers limited empirical insights, its primary contribution lies in advancing conceptual and theoretical understandings within the field of Science, Technology, and Society Studies.

Cross-Cultural Comparison on the Ethical Adoption of Artificial Intelligence between Malaysia and New Zealand
Asma Mat Aripin, Massey University

The rapid adoption of Artificial Intelligence (AI) presents complex ethical challenges that vary across cultural and regulatory contexts. This research explores how Malaysia and New Zealand, two nations with distinct cultural values and governance frameworks, approach the ethical adoption of AI in healthcare, business, and government sectors. Drawing on 42 semi-structured interviews (20 from Malaysia, 22 from New Zealand) and thematic analysis using NVivo, the study identifies key similarities and divergences in themes such as transparency, privacy, accountability, and trust. Findings highlight that while both countries recognize the need for robust governance and ethical safeguards, New Zealand emphasizes rights-based frameworks and regulatory compliance, whereas Malaysia prioritizes collective values and socio-cultural acceptance in AI deployment. This comparative perspective offers critical insights for policymakers, practitioners, and academics by demonstrating the importance of contextualized AI governance. The research contributes to cross-cultural scholarship on digital ethics and provides actionable recommendations to support equitable and responsible AI adoption in diverse societies.

Stability-Driven Reinforcement Learning for Robotic Construction
Marcel Garrobe, AI Institute, University of Waikato

This project explores the use of reinforcement learning for autonomous robotic construction of stable structures without scaffolding. The focus lies on designing reward functions that effectively guide a robotic agent to connect two fixed points by placing blocks. Two metrics were developed to evaluate structural stability: one based on the maximum vertical load each block can bear, and another based on the critical tilting angle before collapse. These metrics were used to shape more informative reward signals compared to traditional binary schemes. Results show that this approach improves learning efficiency and leads to the construction of more robust and reliable designs.
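The contrast with binary reward schemes can be sketched as below. The weights, normalisation, and function name are assumptions for illustration, not the project's actual shaping:

```python
def stability_reward(connected, load_margin, tilt_margin,
                     w_load=0.5, w_tilt=0.5):
    """Dense stability-shaped reward for a block-placement step.

    connected:   True once the two fixed points are bridged.
    load_margin: normalised headroom on the weakest block's maximum
                 vertical load, in [0, 1].
    tilt_margin: normalised critical tilting angle before collapse,
                 in [0, 1].
    A purely binary scheme would return only the first term, giving
    the agent no signal until the structure is complete.
    """
    bonus = 1.0 if connected else 0.0
    return bonus + w_load * load_margin + w_tilt * tilt_margin

print(stability_reward(False, 0.4, 0.6))  # 0.5: informative before the goal
print(stability_reward(True, 0.4, 0.6))   # 1.5
```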

Quantum Re-Uploading for Streaming Time-Series: Fourier Analysis and Climate-Risk Use-Cases
Léa Cassé, AI Institute, University of Waikato

Quantum Re-Uploading Units (QRUs) are shallow, hardware-efficient quantum models that repeatedly encode inputs through a single qubit, trading circuit depth and entanglement for richer frequency representations. This work examines their expressivity and trainability through Fourier spectral analysis and an absorption-witness metric, showing how re-uploading depth L shapes accessible frequencies while mitigating gradient instabilities typical of deeper ansätze. Performance is compared with parameter-matched classical baselines (LSTMs and MLPs) on the Mackey-Glass benchmark and TAIAO river-level datasets, highlighting stable training and enhanced spectral diversity. An applied prototype coupling a QRU forecaster with a QAOA-CVaR allocator demonstrates a practical use case for parametric micro-insurance, using TAIAO environmental and MetService data. Presented within the Global Industry Challenge (World Bank track) and Quantum World Congress 2025, this study provides a reproducible framework connecting spectral diagnostics with hybrid quantum-classical models such as QLSTM or quantum-enhanced reinforcement learners.
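The depth-frequency relationship described above can be checked numerically with a plain state-vector simulation of a single-qubit re-uploading circuit (a generic sketch of the technique, not the paper's exact ansatz): with L re-uploads of a Pauli-rotation encoding, the output ⟨Z⟩ is a Fourier series in x with integer frequencies at most L.

```python
import numpy as np

def ry(t):  # single-qubit Y rotation
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(t):  # single-qubit Z rotation
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def qru_output(x, thetas):
    """<Z> of a single qubit after len(thetas) re-uploading layers.

    Each layer applies a trainable RZ(a) RY(b) RZ(c) followed by the
    data encoding RY(x); the depth bounds the accessible frequencies.
    """
    state = np.array([1, 0], dtype=complex)
    for a, b, c in thetas:
        state = ry(x) @ rz(c) @ ry(b) @ rz(a) @ state
    z = np.array([[1, 0], [0, -1]], dtype=complex)
    return float(np.real(state.conj() @ z @ state))

# Sample the model over one period and inspect its Fourier spectrum:
# all coefficients above frequency L vanish (up to floating error).
L = 3
rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, size=(L, 3))
xs = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ys = np.array([qru_output(x, thetas) for x in xs])
spec = np.abs(np.fft.rfft(ys)) / len(xs)
print(np.where(spec > 1e-8)[0])  # only frequencies <= L appear
```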
