The main goal of the conference was to foster discussion around the latest advances in Artificial Intelligence.

Neural Radiance Field Based Computerised Tomography Reconstruction
Aaron Smith, University of Canterbury

Computerised tomography (CT) reconstruction is the process of taking a series of 2D projections acquired under x-ray and reconstructing the 3D volume observed in the projections. Current reconstruction methodologies include analytical methods such as filtered back projection and the broad category of iterative, model-based methods. We propose the adaptation of a neural radiance field network to learn to reconstruct the volume implicitly in the neural network’s parameters. We demonstrate this method by reconstructing volumes with real-world CT configurations, using both simulated and real-world examples.

TAIAO – Green AI in Green Aotearoa
Albert Bifet, University of Waikato

In this talk, we will discuss Green AI, focusing on its two main aspects: using AI to tackle environmental issues, and making AI systems more environmentally friendly using incremental approaches. As AI becomes increasingly important for problem-solving and research, it is essential to integrate it into sustainability efforts. We will examine how AI is not only giving researchers a competitive advantage but also playing a key role in creating a more sustainable future.

Enhancing Aerial Imagery Analysis: Leveraging Explainability and Segmentation
Anany Dwivedi, University of Waikato

In the emerging field of aerial and satellite remote sensing, the widespread adoption of deep learning brings new possibilities. Current approaches, however, often overlook the unique characteristics of aerial data. This study introduces a methodology that capitalizes on distinctive features, leveraging additional annotations for enhanced neural network training. Despite modest gains in classification accuracy, the synergy of enhanced explainability, automated segmentation, and targeted classification demonstrates nuanced improvements. Preliminary results showcase potential applications in land cover mapping and highlight a path toward reducing dependency on labor-intensive human annotations through an iterative annotation and training loop.

Exploiting image classification explanations for object detection, segmentation and (improved) classification
Anany Dwivedi and Nick Lim Jin Sean, University of Waikato

Annotating segments for image segmentation tasks is a time-consuming and labour-intensive task. In this poster, we will demonstrate how we can utilize GradCAM explanations from neural networks in conjunction with SegmentAnything to achieve image segmentation without needing to train with ground truth segmentations. We will also demonstrate improvements to the GradCAM explanations when ground truth segmentation masks are available during training. This approach offers interpretable results while improving trust in the neural network.
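
To make the pipeline concrete, here is a minimal illustrative sketch (not the authors' code) of how a GradCAM peak can seed SegmentAnything with a point prompt. It assumes the pytorch-grad-cam and segment-anything packages, a trained classifier, and that `image` is the same RGB array the classifier input was built from.

```python
# Illustrative sketch only: GradCAM peak as a point prompt for SegmentAnything.
# Assumes a trained classifier `model`, one of its conv layers `target_layer`,
# an RGB uint8 array `image`, and the matching normalised `input_tensor`.
import numpy as np
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from segment_anything import sam_model_registry, SamPredictor

def explain_then_segment(model, target_layer, image, input_tensor,
                         class_idx, sam_checkpoint):
    # 1. GradCAM heatmap for the class of interest (H x W, values in [0, 1]).
    cam = GradCAM(model=model, target_layers=[target_layer])
    heatmap = cam(input_tensor=input_tensor,
                  targets=[ClassifierOutputTarget(class_idx)])[0]

    # 2. The most activated pixel becomes a foreground point prompt.
    y, x = np.unravel_index(heatmap.argmax(), heatmap.shape)

    # 3. SegmentAnything turns the prompt into a mask; no ground-truth
    #    segmentations are needed at any point.
    sam = sam_model_registry["vit_b"](checkpoint=sam_checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(point_coords=np.array([[x, y]]),
                                         point_labels=np.array([1]),
                                         multimask_output=True)
    return masks[scores.argmax()]   # best-scoring candidate mask
```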

Active Few-shot Learning for Rare Bioacoustic Feature Detection
Ben McEwen, University of Canterbury

The detection of rare acoustic features presents a number of challenges to researchers. Bioacoustic monitoring is commonly applied to highly vocal species, but increasingly there is a need for monitoring tools suitable for more challenging targets such as at-risk species at low population densities, cryptic (less-vocal) species, and invasive species. In New Zealand, possums, mustelids (ferrets, stoats, weasels), and rats pose a significant threat to our native biodiversity, yet little bioacoustic data related to these species currently exists. The detection of invasive species incursions has significant ecological and economic implications. We present the development of the Listening Lab annotation tool. This methodology uses a few-shot, active learning pipeline to aid users in the analysis of rare acoustic features within long-term or landscape-scale bioacoustic field recordings. The method applies wavelet packet decomposition (WPD) segmentation for efficient data reduction and prototypical learning using embeddings of a pre-trained transformer-based model to recommend high-priority features to users, increasing the efficiency of data analysis and model development. We evaluate this methodology on an invasive species dataset containing Common Brushtail Possum (Trichosurus vulpecula) vocalisations. For 2-shot, 2-way learning, this methodology achieves 81.3% validation accuracy before fine-tuning of the feature extraction model and 98.4% accuracy after fine-tuning, demonstrating high performance across a range of low- and high-data contexts. We implement this methodology in a publicly available web application.
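
As a rough illustration of the prototypical-learning step (a sketch under assumed interfaces, not the Listening Lab code): class prototypes are the mean embeddings of the few labelled shots, and new segments are scored by their distance to them.

```python
# Minimal prototypical few-shot classification sketch; `embed` stands in for
# the pre-trained transformer-based feature extractor.
import torch
import torch.nn.functional as F

def build_prototypes(support_x, support_y, embed, n_way):
    z = embed(support_x)                          # (n_shots, dim) embeddings
    return torch.stack([z[support_y == c].mean(0) for c in range(n_way)])

def score_queries(query_x, prototypes, embed):
    z = embed(query_x)                            # (n_query, dim)
    dists = torch.cdist(z, prototypes)            # distance to each prototype
    return F.softmax(-dists, dim=1)               # closer prototype -> higher p
```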

Learning from Data Streams versus Continual Learning
Bernhard Pfahringer, University of Waikato

Data stream learning algorithms need to learn incrementally while also coping with potential concept drifts, i.e. changes in the distribution of the data. Continual Learning's main concern is to avoid catastrophic forgetting, i.e. preserving knowledge from previous concepts while learning and adapting to a current concept. Online Continual Learning is a fully incremental version of it. We will compare and contrast these two approaches to identify potential synergies and opportunities for transfer of ideas from one research area to the other.

Evolutionary Machine Learning Approaches and Applications
Bing Xue, Victoria University of Wellington

Evolutionary Computation (EC) comprises a group of nature-inspired algorithms. EC approaches have several advantages: they do not make any assumption about the data; they do not require domain knowledge, but can easily incorporate, or make use of, domain-specific knowledge; and they maintain a population of solutions, which makes them robust for problems with many local optima and particularly suitable for multi-objective problems. As a result, EC approaches have been widely applied to address challenging optimisation and learning tasks in a wide range of real-world applications. The main tasks include classification, regression, clustering, image analysis, transfer learning, multi-objective machine learning, feature selection, automated design of deep neural networks, and natural language processing, with applications in biology, aquaculture, cyber-security, chemistry, agriculture, planning, and others. In addition to the research and applications, this talk will also very briefly introduce the teaching programmes developed at Victoria University of Wellington, particularly research-led teaching.

Predicting Wildfires in Canadian Forests from Satellite Images using Deep Learning
Blesson Mammen, Unitec Institute of Technology

According to data from Global Forest Watch, more tree cover is being destroyed in forest fires today than 20 years ago, indicating that forest fires are spreading more widely. In 2021, a frightening 9.3 million hectares of tree cover vanished globally. In 2023, there has already been an increase in fire activity around the world, with unprecedented burning in Canada and disastrous flames in Hawaii. By contributing to greenhouse gas emissions, forest fires negatively impact various spheres including public health, economic activity, and the ecosystem itself. Wildfires are extremely difficult to monitor. Their causality and behaviour involve complex climatic circumstances, complicated geography, and complex fuel structures, which makes their behaviour ambiguous and difficult to anticipate, especially for large, intense wildfires. According to a UN Department of Economic and Social Affairs policy brief, wildfires are a growing concern for sustainable development. Sustainable development necessitates the safeguarding of critical infrastructure. By assisting agencies in protecting vital infrastructure such as transportation networks and power lines, wildfire prediction can help lessen the economic and social effects of wildfires. Early detection facilitates the creation and implementation of evacuation strategies and resource allocation decisions in a systematic and efficient manner. It also enables the application of techniques such as controlled burns or firebreaks to reduce the damage to ecosystems and encourage their recovery. Predicting wildfires currently requires wide-ranging data incorporating climatic circumstances such as wind speed, ecological information including vegetation type and density, and soil temperature, alongside other data. The need to gather such detailed heterogeneous data presents a challenge for predicting wildfires. Compared to classical machine learning (ML) techniques, deep learning (DL)-based approaches have demonstrated promising results in detecting wildfires: identifying their presence, mapping their extent and location, and forecasting their behaviour and potential impacts. This research examines current deep learning-based applications for predicting wildfires. We propose using MobileNet V3, a deep learning model based on transfer learning, to predict the occurrence of wildfire from satellite images. Training the model on a dataset containing images with wildfire spots greater than 0.01 acres will help the model predict wildfires even where the burning region is small. This can support an adequate and effective response and help reduce large-scale damage caused by wildfires.
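
A hedged sketch of the transfer-learning setup described above, using torchvision's pretrained MobileNetV3 with a new two-class head (wildfire / no wildfire); details such as which layers stay frozen are assumptions, not the study's exact configuration.

```python
# Transfer-learning sketch: pretrained MobileNetV3 backbone, new 2-class head.
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v3_large(
    weights=models.MobileNet_V3_Large_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                 # keep the pretrained features
# Replace the final classifier layer with a wildfire / no-wildfire head.
model.classifier[3] = nn.Linear(model.classifier[3].in_features, 2)
# Only the new head is then trained on the satellite imagery of small
# (>0.01 acre) wildfire spots described above.
```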

Can deep learning improve harvest decisions in aquaculture?
Cris Lovell-Smith, Nelson AI Institute

The New Zealand Aquaculture Strategy aims to create a $3 billion industry by 2035, and increasing product value and stock performance will be important in reaching this goal sustainably, without greatly increasing farm space. With respect to mussel farming in particular, current industry practices lead to sub-optimal harvest decisions due to subjective ‘by eye’ assessments of mussel condition. These subjective scores have a direct impact on raw meat yield and consequently revenue. We ask the question: can state-of-the-art deep learning models be used to objectively assess mussel condition and in turn optimize harvest decisions? We train a CNN with a state-of-the-art backbone network to predict cooked meat weight (yield) from raw half-shell images of mussel meat. Initial results on our development dataset are promising and we plan further assessments on a larger dataset, currently in development.

Collecting an aquaculture dataset using an accessible multiplatform phone app
Dana Lambert, Harvest Hub

Easy collection of good training data is an important step towards developing AI models for aquaculture. In our case, we are developing models to objectively assess mussel conditions. We propose a data collection approach that uses a smartphone application and volunteers across multiple institutions to quickly build a large dataset. We have built a cross-platform application for iPhone and Android. This allows users to capture videos and images of half-shell mussels and record details such as weight, dimensions and other characteristics to build a dataset. In future, we plan to add functionality that allows AI models to be easily trained on the data collected and deployed into the app. This will allow real-time evaluation of model output in the field.

Text-Guided Animal Re-Identification
Di Zhao, University of Auckland

Reliable re-identification of individuals within large wildlife populations is essential for wildlife conservation. Traditional methods such as tagging, scarring, branding, and DNA analysis face challenges like sensor failures and scalability issues in large populations. In contrast, computer vision techniques offer a promising alternative for animal re-identification through unmanned aerial vehicles or camera traps. Despite the proven efficacy of vision-language models like CLIP in re-identifying humans and vehicles, their application to animals remains unexplored. To fill this gap, our study introduces the Clip-based Animal REidentification (CARE) framework, specifically designed for wildlife. CARE leverages CLIP's cross-modal capabilities, employing a text token generator to produce conditional text tokens for each individual. Evaluation against state-of-the-art methods across eight popular animal re-identification benchmarks and a real-world stoat dataset demonstrates CARE's effectiveness.

Forecasting Sea Surface Temperatures and Anomalies Using Graph Neural Networks
Ding Ning, University of Canterbury

Sea Surface Temperature (SST) anomalies (SSTAs) play a pivotal role in climate oscillations and extreme events, with profound implications for marine ecosystems and human activities. Traditional methods and recent deep learning advances have shown potential in forecasting SSTs and SSTAs. This study delved deeper into the capabilities of graph neural networks (GNNs) for this task, aiming to harness the structure of climatological data at a global scale. Building upon previous research, we introduced a refined graph construction method, which allows for better representation of SST teleconnections. Our investigation highlighted the GraphSAGE model's capability for one-month-ahead global mean SST and SSTA forecasting. Using a recursive model, SST predictions were achieved up to two years in advance. For SSTA forecasting, our model surpassed both the persistence model and traditional conversion methods from SST predictions. While our results underscored the potential of GNNs in forecasting SSTs and SSTAs, further avenues include refining graph construction, optimizing imbalanced regression techniques for extreme SSTAs, and integrating GNNs with other temporal pattern learning methods for enhanced long-term predictions. This presentation will demonstrate findings from our latest paper, part of a TAIAO Ph.D. project, and provide insights into ongoing research endeavors.
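
The recursive forecasting idea can be sketched as follows (an illustration assuming torch_geometric and a lag-window node feature matrix, not the project's actual code): each one-month prediction is appended to the input window and fed back in to reach longer horizons.

```python
# Sketch of recursive multi-step SST forecasting with GraphSAGE; nodes are
# ocean grid cells, node features are a sliding window of past months.
import torch
from torch_geometric.nn import SAGEConv

class SSTGraphSAGE(torch.nn.Module):
    def __init__(self, window, hidden=64):
        super().__init__()
        self.conv1 = SAGEConv(window, hidden)
        self.conv2 = SAGEConv(hidden, 1)        # one-month-ahead SST per node

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

def recursive_forecast(model, x, edge_index, months_ahead):
    preds = []
    for _ in range(months_ahead):
        y = model(x, edge_index)                # (num_nodes, 1)
        preds.append(y)
        x = torch.cat([x[:, 1:], y], dim=1)     # slide the input window
    return torch.stack(preds)
```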

Comparative Analysis of Predictive Models for Daily PM10 Concentration Time Series in Auckland Urban Area: A Quasi-Experimental Exploration
Dr Sara Zandi, NZSEG

This study employs an exploratory and quasi-experimental methodology to investigate the performance of two optimized models, a Multi-Layer Perceptron (MLP) and a Long Short-Term Memory network (LSTM), on the prediction of daily PM10 concentration time series in the Auckland urban area. Daily data was retrieved from Penrose Station, spanning the period between January 1st, 2020, and January 1st, 2023. The dataset encompasses diverse measurements of atmospheric pollutants and meteorological parameters, with several missing values. The assessment involved analysing the distributional characteristics of the original dataset compared to the imputed datasets using density plots and the Kolmogorov-Smirnov test (KS test). The results highlight the nuanced impact of neuron allocation across hidden layers on model efficacy, emphasizing the inherent trade-off between capturing rudimentary and intricate features. The final optimized model was fitted to the training set and its performance was then evaluated on the test data set.
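
The KS-based check of the imputed series can be illustrated in a few lines (variable names are assumptions, not the study's code):

```python
# Sketch: compare the observed PM10 distribution with an imputed version.
import numpy as np
from scipy.stats import ks_2samp

observed = pm10_series[~np.isnan(pm10_series)]   # original values, gaps dropped
stat, p = ks_2samp(observed, imputed_series)     # two-sample KS test
if p > 0.05:
    print(f"KS={stat:.3f}, p={p:.3f}: no evidence the imputation "
          "distorted the distribution")
```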

AI for anomaly detection: from biosecurity to climate
Dr. Varvara Vetrova, University of Canterbury

Anomaly detection presents a challenge in many fields. In the most general terms, an anomaly can be thought of as an unusual observation that falls outside the data distribution. In this talk, we will present two case studies for anomaly detection: biosecurity and climate research. In the first case study, we will present a step towards the detection of anomalous tree foliage in urban forests. Such change could represent a potential infestation with invasive species and therefore a biosecurity risk. In the second case study, we will discuss the application of unsupervised learning to the detection of unusually warm winds in Antarctica.

Genetic Programming and Machine Learning for Job Shop Scheduling
Fangfang Zhang, Victoria University of Wellington

Job shop scheduling is the process of optimising the use of limited resources to improve production efficiency. It has a wide range of applications, such as order picking in warehouses and vaccine delivery scheduling under a pandemic. In real-world applications, the production environment is often complex due to dynamic events such as jobs arriving over time and machine breakdowns. Scheduling heuristics, e.g., dispatching rules, have been widely used to prioritise candidates such as machines in manufacturing to make good schedules efficiently. Learning scheduling heuristics with genetic programming has attracted the attention of researchers over the years due to its flexible representation. Other machine learning approaches such as reinforcement learning have also been widely used for scheduling problems. A number of machine learning techniques such as surrogate models, feature selection, and multitask learning can be used to improve the quality of learned solutions/heuristics for scheduling. With the growth of new technologies, researchers in this field continuously face new challenges, which require innovative approaches to scheduling.

Introducing Monica, an AI Co-Pilot for Central Monitoring Stations, Security Operations Centres and You
Felix Marattukalam, University of Auckland

Computer vision-based analytics for video surveillance has long been a topic of interest for researchers. It is widely used by the security camera industry, more so with the introduction of edge analytics. In this talk, I will introduce an AI co-pilot called Watchful’s ‘Monica’. Monica can ingest edge-analytic triggers and help surveillance stations or users make better decisions by classifying each event into classes such as false positive, intruder, staff-on-site, etc. It can also factor in custom site instructions and tailor its responses accordingly. Only the classification response is reviewed by human staff, who action it further by calling emergency services, seeking a guard response, and so on. This solution addresses a major research gap in the video surveillance industry by directly addressing the fatigue factor in human monitors. The talk will touch on the technology briefly, show how it is currently used in the New Zealand, Australia and US markets, and introduce the company Watchful to NZ Artificial Intelligence Research Association members.

Legal Regulation of AI - Missing the Mark?
Gay Morgan, University of Waikato

This paper explores the ongoing development of legal regulation of AI and its practical sufficiency to address the fundamental concerns which motivate that regulation. The paper presents a comparative analysis of the approaches of the US (both via the legislative route and via the important wide-ranging Executive Order just issued by President Biden), the EU (via regulation nearly ready to be issued) and China (through a form of rules). It explores the similarities and differences of the underlying concerns animating that regulation, which appears to be aimed at governing the boundaries of AI’s development and establishing conditions for its use/functioning. The paper queries whether regulatory approaches aimed solely at the developers/programmers/users of AI, while perhaps needed, are omitting a necessary component: regulation that assigns duties and responsibilities directly to AI entities, qua independent actors with some level of autonomy and choice. The paper then considers why and how a legal system might do that.

Deep Learning in Global Weather Forecasting
Gemma Mason, NIWA

We are in the midst of a complete paradigm shift in weather forecasting. In June of this year, one of the top weather modelling agencies, the European Centre for Medium-Range Weather Forecasts (ECMWF), announced that data-driven weather forecasts using deep learning now equal, and in some cases outperform, world-leading numerical weather prediction (NWP) forecasts. This milestone is all the more striking given that these top-of-the-line NWP forecasts take about an hour to produce a 10-day forecast on a supercomputing cluster with over 10,000 cores. A deep learning weather model, or neural weather model (NWM), can make a similar prediction in about a minute on a single GPU.

This talk will give an overview of machine learning for NWMs. We will discuss the types of architectures used to create these remarkably accurate predictions, the datasets used to train them, and their likely role in weather forecasting in the future. We will conclude with some analysis of their performance in predicting Cyclone Gabrielle, and general discussion on how NWMs will change the landscape of weather forecasting.

Forecasting the longitudinal tree radial growth data with Deep Learning methods: A case study
Guilherme Weigert Cassales, University of Waikato

Plantation forests worldwide serve a critical role in addressing both bio-based demands and climate change mitigation. However, the strategic selection of tree species tailored to specific locations and purposes is essential to maximize their benefits. This decision-making process is typically intricate and time-consuming, which might not suffice in the current era of rapid global change. Moreover, advancements in technology offer access to high-resolution, real-time spatio-temporal data, which can provide valuable insights and potential solutions within a significantly shorter timeframe if properly leveraged through appropriate analysis. Unlike many other research domains, the application of suitable machine learning (ML) algorithms offers a promising avenue for efficiently screening and utilizing these datasets.

Speech Reconstruction for Glottis Impairment: A Non-Invasive Machine Learning Approach
Hamid Sharifzadeh, Unitec Institute of Technology

Surgical interventions of the face or neck involving partial resections of muscles, cartilage, or bone can significantly impact speech production. Laryngectomy, the complete or partial surgical removal of the larynx for conditions like throat cancer, commonly results in voice loss. Current rehabilitation options for laryngectomised individuals include oesophageal speech, tracheoesophageal puncture (TEP), and electrolarynx devices, which facilitate communication but suffer from various challenges. These challenges include maintenance and infection risks, difficulty of use, and most importantly, the inability to generate natural-sounding speech, often resulting in monotonous or robotic output.

While speech impairment does not typically pose a life-threatening condition, it can profoundly impact daily life and well-being in affected individuals. Consequently, there has been extensive research dedicated to reconstructing natural-sounding speech using computational methods. These methods can be broadly classified into two categories: non-training and training-based approaches. Within training-based computational methods, several state-of-the-art deep learning algorithms have been recently developed to generate natural-sounding speech. However, their focus lies on reconstructing whispered speech (essentially speech without vocal fold vibration in healthy people), not on laryngectomised or distorted speech.

This presentation explores a non-surgical, non-invasive method employing a machine learning framework that combines whisper signal analysis with pitch generation and formant enhancement to reconstruct missing speech. Aiming to improve the quality of life for these patients, this research analysed and utilised data collected from 22 New Zealanders exhibiting varying degrees of glottis-related impairment, encompassing non-surgical larynx treatment, partial laryngectomy or related surgery (e.g., thoracotomy), and total laryngectomy.

Vehicle real time image collection for pavement defects identification
Heyang (Thomas) Li, University of Canterbury

This study introduces a highly adaptable pipeline designed for the automated on-vehicle acquisition, filtration, and classification of road surface defects. The framework exhibits robust versatility by seamlessly integrating diverse systems, encompassing an array of sensors including cameras, 3D cameras, and GPS receivers. It accommodates varying computational resources, along with flexible choices for data transfer.

Experimental validation was conducted on a road sweeping vehicle and a mobility buggy, capturing images at intervals of 0.5 to 10 seconds. Notably, the pipeline demonstrated autonomous functionality, requiring minimal driver input beyond initiation. Data transfer, which can be executed via WiFi or manually at the conclusion of the operational shift, showcased the efficiency of the proposed methodology in real-world scenarios. This adaptive pipeline stands poised as a compelling innovation in the realm of on-vehicle data processing for road infrastructure assessment, and our team has shown promising early trial results for pavement defect identification.

Multi-Modal CNN-Transformer for Embryo Morphokinetic State Classification on Single Images and Time Inputs
Hooman Misaghi, University of Auckland

The emergence of time-lapse incubators has significantly expanded the scope and depth of information available to embryologists, enabling more informed decisions regarding embryo viability. An increasingly promising predictor in this context is the accurate estimation of morphokinetic changes. Unfortunately, manually labeling these events is both susceptible to variations among embryologists and demands significant resources. In this study, we address these challenges by leveraging artificial intelligence (AI) to automate annotations, fostering transparency through the open-sourcing of our model. Our study utilized imaging data acquired from a Vitrolife Embryoscope, capturing images at 20-minute intervals from 413 patients across five fertility clinics in New Zealand. This comprehensive dataset encompassed 104,418 images. Our model demonstrated an impressive overall accuracy of 81.2%. In conclusion, our study presents a robust deep-learning model capable of automating human embryo morphokinetic annotation. This model achieves high accuracy and holds promising implications for seamless integration into clinical practices, potentially revolutionizing the field of embryology.

Anomaly Detection for Maritime Trajectories
Jack Julian, University of Auckland

Irregular movements of maritime vessels can indicate illegal fishing activity and potential biosecurity risks. With the help of continual learning techniques, these anomalies can be detected in real time to identify and prevent environmental hazards.

Towards Robust Strategies for Satellite Streak Identification in Wide-Field Astronomical Survey Images
Jack Patterson, University of Canterbury

The exponential growth of low-Earth orbit megaconstellations has triggered a surge in satellite streaks within astronomical survey images, presenting substantial challenges for observational astronomers. These streaks, appearing prominently in wide-field images, have the potential to complicate celestial object detection and tracking methodologies, compromise astrometric precision, skew photometric measurements, and lead to distorted population statistics. Accurately identifying streaks is therefore essential for gaining a comprehensive understanding of their impact. We evaluate the effects of streaks on the CFHT Large Program CLASSY: the Classical and Large-A Solar SYstem survey through observational data analysis, exploring classical and deep learning-based strategies. We assessed five methodologies, including a standard Hough Transform-based approach, ASTRiDE (Automated Streak Detection for Astronomical Images), LETR (Line Segment Detection Using Transformers without Edges), Deep Hough Transform Line Priors, and Deep Hough Transform for Semantic Line Detection. To ensure consistent evaluations of each approach, a standardized execution and testing framework utilizing a labelled dataset of streak-containing images was developed. Examination of the results uncovered challenges arising from significant variability in streak width, brightness levels, and time-varying morphology within individual satellite passes. These challenges resulted in detected streaks of incorrect widths, streak angle misalignments, and high false positive rates, highlighting the need for the development of more robust methodologies. Moving forward, we plan to explore additional approaches and refine the most promising ones using survey-specific training data to improve their effectiveness.
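
For reference, the classical Hough-based baseline amounts to roughly the following (an OpenCV sketch with illustrative thresholds, not the evaluated implementation):

```python
# Rough sketch of classical streak detection on an 8-bit image array derived
# from survey data; all threshold values are illustrative only.
import cv2
import numpy as np

def detect_streaks(image_8bit):
    edges = cv2.Canny(image_8bit, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=100,       # votes needed for a line
                            minLineLength=200,   # streaks are long...
                            maxLineGap=10)       # ...but may be broken up
    return [] if lines is None else lines.reshape(-1, 4)  # (x1, y1, x2, y2)
```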

Adaptive Isolation Forest
Jia (Justin) Liu, University of Waikato

The isolation forest is a widely used technique for anomaly detection within high-dimensional datasets. However, its efficacy is limited when dealing with intricate data distributions and dynamic data streams. To address these limitations, our paper introduces the adaptive isolation forest (AIF), a batch-incremental extension of the original method. AIF employs incremental and online learning strategies to enhance ensemble management and adapt to concept drift. Notably, AIF outperforms other streaming anomaly detection methods, demonstrating superior performance even in scenarios where concept drift is absent.
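
A conceptual sketch of the batch-incremental idea, using scikit-learn's IsolationForest as the per-batch building block (this illustrates the mechanism only; AIF's actual ensemble management is more involved):

```python
# Batch-incremental isolation forest sketch: per-batch forests enter a
# fixed-size deque so that old concepts eventually expire under drift.
from collections import deque
import numpy as np
from sklearn.ensemble import IsolationForest

class BatchIncrementalIForest:
    def __init__(self, max_batches=10, trees_per_batch=20):
        self.forests = deque(maxlen=max_batches)   # oldest batch drops out
        self.trees_per_batch = trees_per_batch

    def update(self, X_batch):
        f = IsolationForest(n_estimators=self.trees_per_batch)
        self.forests.append(f.fit(X_batch))        # adapt to the new batch

    def score(self, X):
        # Average anomaly score across the surviving per-batch forests.
        return np.mean([f.score_samples(X) for f in self.forests], axis=0)
```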

A time series transformer for forecasting Covid-19 hospitalisations
Jiawei Zhao, Institute of Environmental Science and Research (ESR)

The COVID-19 pandemic revealed serious challenges to public health systems and demonstrated the need to effectively monitor and predict the dynamics of infectious disease hospitalisations. Here we introduce a multivariate time series transformer model specifically tailored for forecasting COVID-19 related hospitalisations. Our model uses multiple sources, including wastewater samples, genome variants, vaccination rates and COVID case counts from public datasets. The model is able to predict COVID-19 hospitalisations up to 28 days in advance. We demonstrate improved accuracy, substantiated by decreases in Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), in our predictions of hospital admissions over a four-month period in 2023. In particular, we achieved an average reduction in prediction error of 3.24 for MAE and 3.6 for RMSE.

To address the limited interpretability of predictions from a transformer architecture, we implement a Deep Learning Important FeaTures (DeepLIFT) based framework aimed at explaining the relationships among features. Our findings revealed two key insights. First, the impact of vaccination on hospitalisation: the analysis indicated that vaccination played a significant role, with higher vaccination rates associated with a reduction in the risk of hospitalisation, highlighting the protective effect of vaccination in our model. Second, the presence of the SARS-CoV-2 virus in wastewater samples was found to be an informative indicator for predicting the trajectory of hospitalisations. This underscores the utility of wastewater viral signals in enhancing the accuracy and depth of our predictive model.
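
With Captum, a DeepLIFT analysis of this kind can be sketched as follows (illustrative only; `model` and `x` stand in for the trained transformer and a batch of multivariate inputs):

```python
# Minimal DeepLIFT attribution sketch, assuming a PyTorch `model` and an
# input tensor `x` of shape (batch, time, features).
import torch
from captum.attr import DeepLift

explainer = DeepLift(model)
baseline = torch.zeros_like(x)                    # reference (all-zero) input
attributions = explainer.attribute(x, baselines=baseline)
# Summed over time, each feature's attribution indicates how strongly e.g.
# the vaccination rate or wastewater signal pushed the forecast up or down.
feature_importance = attributions.sum(dim=1)      # (batch, features)
```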

A framework for the segmentation of the cerebral cortex laminar structure
Jiaxuan Wang, The University of Auckland

Characterising the anatomical structure and connectivity between cortical regions is a critical step towards understanding the information-processing properties of the brain and will help provide insight into the nature of neurological disorders. A key feature of the mammalian cerebral cortex is its laminar structure, with the neocortex being differentiated into six layers. Identifying these layers in neuroimaging data is important for providing a foundation for understanding the axonal projection patterns of neurons in the brain. These patterns can be seen in experiments using anterograde tracers, and are reflected in the brain activity seen in layer-fMRI, among other modalities. We analysed images of Nissl-stained histological slices of the brain of the common marmoset (Callithrix jacchus), a New World monkey that is gaining popularity in the neuroscience community as an object of study. We present a novel computational framework that builds upon prior work to segment the cortical ribbon and uses an artificial intelligence approach to determine cortical layers at high resolution. We trained a U-Net-based deep learning model (nnU-Net) to segment cerebral cortical layers, with ground truth provided by expert neuroanatomists. We further compared our model, on our dataset, with the deep-learning model developed by the BigBrain project.

AI and machine learning at VUW - Overview of CDSAI
Mengjie Zhang, Victoria University of Wellington

This talk will provide an overview of AI and machine learning at CDSAI VUW.

The Energy Bill of AI
Michael J. Watts, Media Design School

A common trope in science fiction is that an AI becomes self-aware, identifies humans as a threat, and proceeds to exterminate us. This trope persists in both the public consciousness, and in industry, where discussions about the p(Doom) abound. But what if AI threatens our existence in another way, by exacerbating a problem that is already negatively impacting our lives?

AI is contributing to climate change. With increasing use of AI such as Large Language Models hosted in power-hungry overseas data centres, demand for electricity to operate and cool the hardware also increases. Most of this electricity comes from fossil-fuel-burning power plants, increasing the carbon emissions that are warming our planet.

Unlike many countries in which data centres operate, the majority of New Zealand's electricity comes from renewable sources. This research examines the question: is New Zealand a better place for data centres, in light of the climate crisis?

AI Augmented Science
Michael Witbrock, University of Auckland

Science grapples with the challenge of managing and processing an ever-expanding corpus of knowledge and research papers. The sheer volume and complexity of scientific literature make it increasingly daunting for researchers to stay abreast of developments within their fields, let alone across interdisciplinary boundaries. As the body of scientific knowledge expands, so too does the potential for redundancy, inconsistency, and fragmentation within the literature. Moreover, the proliferation of low-quality or misleading research papers further complicates the process of discerning reliable information from noise. Addressing these challenges requires concerted efforts to develop innovative tools and methodologies for organising, accessing, and evaluating scientific information. As part of the effort to address these challenges and a small step towards automated science, NAOInstitute’s Strong AI Lab (SAIL) is developing a system of interconnected in-context assistance for the lab’s processes, powered by AI and integrated with existing commercial productivity platforms such as Slack and Jira. At present, the system keeps track of lab members’ projects and publications, and automatically recommends papers to members based on their current projects. We will continue to develop and expand the system with more advanced features as they emerge from our broader research into more capable AI systems. As part of the Natural, Artificial and Organisational Intelligence Institute’s (NAOInstitute’s) aim of understanding how to integrate AI effectively and beneficially into our civilisation, SAIL’s research into AI for Science aims to greatly increase Aotearoa/NZ’s research impact and effectiveness.

Optokinetic Response Detection Using Self-Supervised and Pre-training Model on Eye Tracking Video
Mohammad Norouzifard, University of Auckland

Eye-tracking can be employed as a method for monitoring and measuring eye health in young children. The technology has had a significant impact so far, and new application areas are emerging. One such application concerns eye health disorders in young children with amblyopia (lazy eye). Amblyopia is a neurodevelopmental disorder of the visual system caused by discordant visual experiences during infancy or early childhood. This study aims to detect the OKN (optokinetic nystagmus) pattern using a deep learning method (the video masked autoencoder, VideoMAE) in order to detect amblyopia in its early stages.

OKN is an involuntary beating of the eye that occurs in the presence of moving patterns. The OKN response consists of initial slow phases in the direction of the stimulus (smooth pursuits), followed by fast, corrective phases (return saccades). The declination can be observed clinically by eye specialists; however, eye-tracking technology has high potential here, and it remains to be investigated which methodologies and test protocols are relevant, and to what extent the technology can be used as a diagnostic tool.

We utilized a private dataset comprising ten videos (each five minutes long) for re-training and re-testing the proposed model. Although the model performed well on the re-test data available at the ABI at the University of Auckland, its accuracy fell below expectations when we tested it on an unseen re-test dataset. The detection rate was slightly above 65%, which is inadequate for real-world deployment.

Subsequently, we incorporated augmentation techniques into our pipeline. Our experiments revealed that including additional data could enhance the model's performance. After performing the augmentation, we observed a significant improvement in detecting OKN patterns in the data provided by Mohammad. Our model's mAP (mean Average Precision) exceeded 86%, a notable increase over our previous results.

Enhancing Fake News Classification in Urdu: A Multilingual Large Language Model Approach with Domain Adaptation for Low Resource Languages
Muhammad Zain Ali, University of Waikato

Misinformation on social media is a widely acknowledged issue, and researchers worldwide are actively engaged in its detection. However, low-resource languages such as Urdu have received limited attention in this domain. Our proposed method aims to address this gap by investigating the effectiveness of domain adaptation before fine-tuning a large language model for fake news classification in Urdu. Domain adaptation is a comprehensive process with the primary objective of enabling a general-purpose model to grasp domain-specific terms and their contextual nuances. Our approach involves utilizing a multilingual Large Language Model (LLM) as a base and fine-tuning its masked language model on a domain-specific dataset, with approximately 10-15% of the tokens masked. Following domain adaptation, the model undergoes further fine-tuning for a downstream text classification task, specifically fake news classification. The fine-tuning process includes several steps such as tokenization, generating embeddings, experimenting with different layers for the middle part of the neural network, and finally adding the last fully connected layer for classification based on the number of classes. This approach offers two main advantages: (i) allowing the model to update its weights for different terms and learn their contextual usage; (ii) focusing more on the Urdu language during fine-tuning for the sequence classification task. In our experiments, we utilized the xlm-RoBERTa-base model as our base LLM model. For domain adaptation, we employed the publicly available UrduNews1M dataset, while for fine-tuning the model for the fake news classification task, we used another publicly available Urdu Fake News dataset. As a baseline, we used a hyperparameter-tuned Support Vector Machine (SVM) with TFIDF. Our results demonstrate that our proposed model achieved an 11 percent improvement in accuracy compared to the baseline SVM, and a 1.2 percent improvement compared to our selected LLM fine-tuned without domain adaptation.
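
The two-stage recipe maps naturally onto the Hugging Face Transformers library; a hedged sketch follows (dataset preparation elided, dataset variable names assumed):

```python
# Sketch of (1) masked-LM domain adaptation, then (2) classification
# fine-tuning, mirroring the approach described above.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification,
                          DataCollatorForLanguageModeling, Trainer)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Stage 1: continue MLM training on unlabelled Urdu news text,
# masking roughly 15% of tokens.
mlm_model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
Trainer(model=mlm_model, data_collator=collator,
        train_dataset=urdu_news_tokenized).train()
mlm_model.save_pretrained("xlmr-urdu-adapted")

# Stage 2: fine-tune the domain-adapted encoder for fake-news classification.
clf = AutoModelForSequenceClassification.from_pretrained(
    "xlmr-urdu-adapted", num_labels=2)
Trainer(model=clf, train_dataset=fake_news_tokenized).train()
```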

Spatio-Temporal Learning and Spatio-Temporal Associative Memories in Bio/neuro systems, Mathematics and Brain-inspired Neurocomputation
Nikola Kasabov, AUT and University of Auckland

The majority of data dealt with across the information and data sciences is temporal or spatio/spectro-temporal, including biological and brain signals; audio-visual data; environmental data; financial and economic data; and communication data. In many cases this data is simplified as just temporal or spatial, due to a lack of computational models able to capture both the spatial and temporal components of the data in their dynamic interaction and integration.

The talk introduces first the concepts of spatio-temporal learning (STL) and spatio-temporal associative memories (STAM). These are evolvable and explainable learning systems that are first structured according to spatial, spectral, or other relevant information from temporal or spatio-temporal data, and then trained to further evolve their structure by learning spatio-temporal associations of the data, resulting in explainable models. If a STAM system is activated with a smaller proportion of input data/stimuli, the system recalls previously learned spatio-temporal patterns to classify the input data or make a prediction. The talk briefly illustrates the concepts of STL and STAM as inherent features in biology and the human brain. Mathematical foundations for STL and STAM include: spatio-temporal (ST) encoding of streaming data; ST clustering; ST searching algorithms; STL learning algorithms; ST quantum computation; algebraic ST transformations; chaotic ST systems.

The last part of the talk presents a brain-inspired neurocomputation framework NeuCube [1,2,3], its STL algorithm and its use for STAM. It demonstrates its applications for classification and prediction of biological and brain signals, audio-visual data, environmental data, financial and economic data. When compared to traditional machine learning techniques, including deep neural networks, these systems demonstrate significantly better accuracy and a clear interpretability and explainability of the dynamics of the ST data. These systems are more energy efficient, as during STL, the spatial structure of the model helps to learn data faster and to recall it associatively.

ASML: A Scalable and Efficient AutoML Solution for Data Streams
Nilesh Verma, University of Waikato

AutoStreamML, a cutting-edge approach to automated machine learning on streaming data, incorporates a Progressive Model Selector, an Accumulated Snapshot Ensemble Classifier, Adaptive Random Directed Nearby Search (ARDNS), and a Distributed Pipeline Generator. These components work in harmony to dynamically evaluate, ensemble, adapt, and suggest machine learning pipelines, optimizing their performance over time. Rigorously evaluated on real-world and synthetic datasets, AutoStreamML surpasses baseline models in terms of accuracy, runtime, memory efficiency, and statistical significance.

Gradient boosting for data stream regression
Nuwan Gunasekara, University of Waikato

Recent advancements in gradient boosting for data stream classification have demonstrated superiority over existing bagging and random forest-based methods, particularly on binary class problems and evolving data. Inspired by this, this work explores its application to data stream regression, which has proven challenging due to the high variance of the base learners in a boosted setup. To overcome this issue in a batch setting, BagBoosting has been proposed. This work explores the possibility of using the recently proposed streaming gradient boosting method SGBT with bagging and random forest-based base learners for data stream regression. The experimental results indicate that SGBT can enhance the performance of existing bagging and random forest-based regression methods.
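
The core boosting-on-streams idea can be sketched with river-style incremental base learners fitting running residuals (a bare-bones illustration of the principle, not SGBT itself):

```python
# Bare-bones streaming gradient boosting for regression under squared loss:
# each incremental base learner fits the residual of the ensemble so far.
from river import tree

class StreamingGBR:
    def __init__(self, n_learners=10, lr=0.1):
        self.lr = lr
        self.learners = [tree.HoeffdingTreeRegressor()
                         for _ in range(n_learners)]

    def predict_one(self, x):
        return sum(self.lr * h.predict_one(x) for h in self.learners)

    def learn_one(self, x, y):
        pred = 0.0
        for h in self.learners:
            residual = y - pred          # negative gradient of squared loss
            h.learn_one(x, residual)
            pred += self.lr * h.predict_one(x)
```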

Learning from Task Metadata in Multi-Task Learning
Olivier Graffeuille, The University of Auckland

Multi-task learning architectures model multiple related tasks simultaneously by sharing parameters across networks to exploit shared knowledge and improve performance. Knowing what knowledge to transfer between tasks can be difficult, particularly for small datasets. We aim to leverage task metadata - data about the tasks themselves - to inform task relationships. To achieve this, we propose Multi-Task Hypernetworks, a novel multi-task learning architecture which generates flexible task networks with a minimal number of parameters per task. Our approach uses a hypernetwork to generate different network weights for each task from small task-specific embeddings and enable abstract knowledge transfer between tasks. Unlike existing methods, our approach can additionally naturally leverage metadata to further improve performance.
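
A toy version of the idea (sizes and interfaces are assumptions): a shared hypernetwork turns a small task embedding, concatenated with task metadata, into the weights of a task-specific head.

```python
# Illustrative Multi-Task Hypernetwork sketch: a shared generator produces a
# per-task linear head from a small task embedding plus task metadata.
import torch
import torch.nn as nn

class MultiTaskHypernet(nn.Module):
    def __init__(self, n_tasks, emb_dim=8, meta_dim=4, in_dim=16, out_dim=1):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)   # few params per task
        self.hyper = nn.Sequential(                      # shared hypernetwork
            nn.Linear(emb_dim + meta_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim * out_dim + out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, task_id, metadata):
        # task_id: 0-dim LongTensor; metadata: (meta_dim,) float tensor.
        z = torch.cat([self.task_emb(task_id), metadata], dim=-1)
        w_flat = self.hyper(z)
        W = w_flat[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = w_flat[self.in_dim * self.out_dim:]
        return x @ W.t() + b        # task-specific head, generated on the fly
```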

Machine Learning for Cold-Formed Steel Design
Parsa Yazdi, University of Waikato

Big data and AI technologies have made significant breakthroughs in the construction sector. However, there has not been a comprehensive assessment of its importance in the field of structural engineering. This study aims to investigate the recent implementations of Machine Learning (ML) in the field of Cold-Formed Steel (CFS). Specifically, the focus is on predicting the axial capacity of CFS channel sections using different ML techniques. The study conducted experiments using a popular benchmark dataset and a new synthetic dataset of 1.5 million records. The results revealed a better regressor and setup at predicting axial capacity than existing reported techniques. These findings and the novel dataset open up future research opportunities for using AI in CFS.

Investigating Deep Hybrid Models for Out Of Distribution Detection
Paul Schlumbom, University of Waikato

Deep learning approaches have achieved impressive results in a wide variety of domains in recent years but are often plagued by overconfidence on inputs that are too different from the original training data. This limits the application of deep learning systems in safety-critical environments where abstention is preferred to confidently incorrect predictions. Out Of Distribution (OOD) detection aims to model the distribution of training data to identify when an input is beyond the model’s experience, and to this end normalising flows are of interest for their ability to model complex distributions. However, vanilla normalising flows fall short due to their tendency to model low-level feature distributions rather than semantic features, and to resolve this, Deep Hybrid Models (DHMs), which train normalising flows on the features of classifier models, have been proposed. While exceptional performance has been reported with this approach at CVPR 2022, the code has not been made available and there do not appear to have been any follow-ups. In this work, we investigate the DHM approach to OOD detection. While we are unable to attain the results reported in the original paper, we can demonstrate competitive performance with other methods and perform an in-depth analysis of how the DHM models high-dimensional data, which may inform the development of higher performance OOD detectors.
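
Schematically, a DHM scores OOD inputs by the log-likelihood a normalising flow assigns to the classifier's penultimate features; a single affine-coupling layer illustrates the computation (a real DHM stacks many such layers, and all sizes here are illustrative):

```python
# Schematic DHM-style OOD scorer: one affine-coupling flow over classifier
# features; low log-likelihood under the feature density suggests OOD.
import torch
import torch.nn as nn

class CouplingFlow(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * (dim - self.half)))

    def log_prob(self, h):
        h1, h2 = h[:, :self.half], h[:, self.half:]
        s, t = self.net(h1).chunk(2, dim=1)
        z2 = h2 * torch.exp(s) + t                # invertible transform
        z = torch.cat([h1, z2], dim=1)
        base = torch.distributions.Normal(0., 1.).log_prob(z).sum(1)
        return base + s.sum(1)                    # change-of-variables term

def ood_score(features, flow):
    return -flow.log_prob(features)               # higher score => more OOD
```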

Unveiling the Rational Foundations of Acupuncture Point Selection and Combinations for healing some diseases through Complex Network Analysis
Pranesh Shrestha, Unitec Institute of Technology

This research project aims to employ network science and complex network theory to investigate the underlying rationale and standards governing the combination of acupuncture points in Chinese medicine for the treatment of various diseases. Traditional Chinese medicine views the human body as a complex system, with acupuncture points acting as critical nodes that influence health and circulation along the body channels, such as the meridians (经络学说) and extraordinary vessels (奇经八脉). Despite the longstanding success of acupuncture in addressing health issues, the lack of standardized protocols and a comprehensive analysis of the reasons behind the efficacy of specific acupuncture points pose significant challenges.

The research will leverage network science to model the intricate connections and interactions among acupuncture points, considering factors such as proximity, channel relationships, and historical effectiveness. Complex network theory will be applied to discern patterns and features within the acupuncture point system, shedding light on why certain points hold more significance than others and how their combinations contribute to the targeted treatment of specific diseases.

This research endeavors to bridge the gap in understanding the selection and combination of acupuncture points in Chinese medicine by applying modern network science methodologies. The outcomes are anticipated to contribute to the establishment of evidence-based and trustable standards for acupuncture point selection, fostering a deeper comprehension of the intricate network dynamics underlying the effectiveness of acupuncture in treating various diseases.

The research team members are Dr William Liu of the Unitec Institute of Technology; Dr Thomas Lin, director of the TCM Chinese Medical Center and a TCM practitioner; Professor Jian Song, Tsinghua University; and Professor Jinsong Wu, University of Chile.

Symbolic Regression: Discovering Symbolic Models from Data
Qi Chen, Victoria University of Wellington

Symbolic regression, a captivating discipline in the world of machine learning, not only reveals the hidden patterns within datasets but does so with an exquisite dance of symbolic expressions. This enchanting field invites us to explore the profound connections between mathematics and the real world, orchestrating a harmony that unravels the mysteries of relationships and variables. In the realm of symbolic regression, data becomes a canvas, and algorithms, the artists that craft poetic equations to capture its essence. Researchers investigate a wide range of applications for symbolic regression, from physics and biology to finance and optimization problems. It has shown promise in fields where understanding the underlying structure of data is crucial. Join us in the enchanting world of symbolic regression, where equations tell the story of data in the most poetic and profound way.
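
For readers who want a taste of the craft, the gplearn package offers a compact entry point; here a small invented "law" is rediscovered from samples (all settings and the target function are illustrative only):

```python
# Tiny symbolic regression demo with gplearn: evolve an expression that
# fits samples of a hidden target function.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] ** 2 - 3 * X[:, 1] + 0.5          # hidden law to rediscover

est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=('add', 'sub', 'mul', 'div'),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)                            # the evolved symbolic model
```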

Optimising Returns on Earth Observation Missions Using Deep Learning-Based Architectures for Cloud Detection in Remote Sensing Images
Ronnie Paguia, Auckland University of Technology

The expanding space economy poses severe sustainability challenges, with AI playing a critical role in decongesting space traffic. The study explores deep learning for cloud detection to make intelligent decisions in orbit without human intervention. The experiment suggests that Convolutional Neural Networks (CNNs) trained on single-sourced, GAN-augmented satellite data from homogeneous regions can be 88.70% accurate in 2.6E+15 FLOPS. In comparison, training CNNs on single-sourced, GAN-augmented satellite data from heterogeneous regions can be 76.17% accurate in 2.6E+15 FLOPS. Onboard cloud detection algorithms can help manage space debris as a sustainable approach to mitigating the overcrowding of assets in space.

Enhancing self-driving: Speed bump and pothole detection and quantization
Ruigeng Wang, University of Auckland

Potholes and speed bumps are important factors which affect the self-driving experience and security if they cannot be identified and avoided in time. Current best self-driving assistance systems do not detect potholes or speed bumps. Unlike other object detection tasks handled in deployed self-driving solutions (e.g., detecting pedestrians, road work cones, and traffic signs using deep learning), detecting potholes and speed bumps on textureless surfaces using existing car cameras has not yet reached maturity.

In this article, we provide a hybrid state-of-the-art 2D/3D computer vision and deep learning solution to both detect and quantify potholes and speed bumps in real time (as permitted by car hardware) for self-driving systems. Here, we focus on self-driving cars (including but not limited to Tesla) using a vision-based camera system rather than lidar or similar active sensing approaches. We also test our solution on the current best Tesla FSD (full self-driving) Hardware 3.0 (running 21 TOPS) and on the older Tesla FSD 2.0 hardware (12 TOPS), which equips all Tesla cars sold prior to 2020. Our solution combines 2D deep-learning object detection with 3D mapping of the road to grade potholes and speed bumps, triggering a change-of-direction decision (or not). We compare our solution's performance and robustness on a dataset we collected (from on-board and off-the-shelf cameras) on New Zealand roads.

Large Population Model for Complex Health Behaviour Simulation
Sijin Zhang, Institute of Environmental Science and Research

Large population models, also known as agent-based models (ABMs), have emerged as powerful tools for investigating complex social interactions, particularly in the context of public health and infectious disease investigations. In this work, we present JUNE-NZ, an innovative ABM framework seamlessly integrated into the PyTorch machine learning ecosystem. By combining tensorized representations with differentiability, JUNE-NZ enables real-time, fully automatic parameter calibration, enhancing its applicability in public health, especially epidemiological research, and outbreak control.
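
The differentiability is what enables automatic calibration: because the simulator is expressed in PyTorch, gradients of a fit-to-data loss flow back to epidemiological parameters. A deliberately toy illustration of that idea follows (this is not JUNE-NZ itself; `observed_cases` is an assumed 60-day tensor of daily case counts):

```python
# Toy differentiable calibration: fit a transmission rate to observed daily
# cases by gradient descent through a simple SIR-like rollout.
import torch

beta = torch.tensor(0.3, requires_grad=True)      # transmission rate to fit
opt = torch.optim.Adam([beta], lr=0.01)
N = 5000.0                                         # toy population size

for step in range(300):
    opt.zero_grad()
    s, i = torch.tensor(N - 10.0), torch.tensor(10.0)
    daily = []
    for day in range(60):                          # differentiable rollout
        new_inf = beta * i * s / N
        s, i = s - new_inf, i + new_inf - 0.1 * i  # recovery rate 0.1
        daily.append(new_inf)
    loss = ((torch.stack(daily) - observed_cases) ** 2).mean()
    loss.backward()                                # gradients flow through
    opt.step()                                     # the entire simulation
```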

The model was employed to investigate the 2019 measles outbreak that occurred in New Zealand, demonstrating its skill in accurately simulating the outbreak’s peak. Furthermore, we extensively explored various policy interventions within the model and thoroughly examined their potential impacts.

This paper demonstrates that by leveraging the latest AI technology and the capabilities of traditional agent-based models, we gain deeper insights into the dynamics of infectious disease outbreaks. These insights, in turn, help us make more informed decisions when developing effective strategies that strike a balance between managing outbreaks and minimizing disruptions to everyday life.

Leveraging Machine Learning and Deep Learning for Climate Change Mitigation: A Multifaceted Approach
Simna Rassak, ESR

This work explores the multifaceted potential of machine learning (ML) and deep learning (DL) in mitigating climate change across diverse domains, showcasing three distinct use cases. (1) Climate-aware forecasting of daily avocado harvest volume: we propose a Temporal Fusion Transformer model utilizing weather data and vegetation indices to forecast daily avocado yields with 90% accuracy over a two-week horizon. This provides farmers and industry with information for informed resource management, enabling optimized irrigation and water resource usage in avocado orchards. This climate-aware system can contribute to reduced water waste and a smaller environmental footprint in agriculture. (2) Mānuka trees and lake water quality: riparian plantings of mānuka trees around lakes hold potential for improving water quality and ecosystem health. Our study employed an Extreme Gradient Boosting (XGBoost) model with 88% accuracy to address data gaps in soil moisture (SM) monitoring. By comparing plots with and without mānuka trees, we gained valuable insights into their impact on SM retention and soil loss dynamics. This can be used for optimized irrigation, informed drought analysis, and ultimately building more sustainable and resilient ecosystems through the strategic utilization of native plants. (3) Respiratory infection forecasting and early warning: climate change is linked to an increase in respiratory illnesses. We propose a transformer model utilizing weather data and daily hospital-reported cases to forecast respiratory infection outbreaks with an early detection window of two weeks. This supports proactive public health measures, minimizing disease spread and reducing strain on healthcare systems. These diverse applications demonstrate the potential of ML and DL in tackling climate challenges. Further research and development in this domain can help build a more sustainable and resilient future.
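
The XGBoost gap-filling in use case (2) follows a standard supervised pattern; a schematic sketch with assumed file and column names:

```python
# Schematic XGBoost gap-filling for soil moisture monitoring.
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("plot_sensors.csv")            # hypothetical sensor table
features = ["rainfall", "air_temp", "ndvi", "manuka_present"]
known = df.dropna(subset=["soil_moisture"])     # rows with a measured target

X_tr, X_te, y_tr, y_te = train_test_split(
    known[features], known["soil_moisture"], test_size=0.2)
model = XGBRegressor(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)
print("R^2 on held-out plots:", model.score(X_te, y_te))

# Fill the monitoring gaps with model predictions.
gaps = df[df["soil_moisture"].isna()]
df.loc[gaps.index, "soil_moisture"] = model.predict(gaps[features])
```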

Deep reinforcement learning based planning method in state space for lunar rovers
Siyao Lu, University of Auckland

The unmanned lunar rover is essential for lunar exploration and construction. The environment encountered during execution can differ from what human operators expect, since communication between Earth and the Moon takes time. Considering possible discrepancies between the environment assumed by the planner and the real environment during sampling tasks on the Moon, a planner that generates short plans quickly should be used. We therefore propose a planner for both standard and emergency planning based on deep reinforcement learning (DRL). For a specific Moon sampling scenario, we propose a tracking reward that guides the rover's search over states in the deep reinforcement learning architecture; the architecture is built on a matrix-based state-space representation, randomly sampled training state pairs, and plans generated by a custom breadth-first search (BFS) planner that supplies the tracking reward. Tests on training and planning are performed to validate the effectiveness, robustness, and customization of the proposed method in a planning domain with multiple rovers. Our model can handle three kinds of emergencies, even when they occur frequently. The planner can create a full-range plan 13.5 times faster than a traditional planner on complex problems, or 10.1 times faster while controlling the rover step-by-step in the state space. When facing emergencies, the average response time of our model is 324 times faster than the classical planner.

Trajectory Flow Map Enhanced Transformer for Next POI Recommendation
Song Yang, University of Auckland

Next POI recommendation aims to forecast users' immediate future movements given their current status and historical information, yielding great value for both users and service providers. The problem is perceptibly complex, however, because various data trends need to be considered together, including spatial locations, temporal contexts, users' preferences, and so on. Most existing studies view next POI recommendation as a sequence prediction problem while omitting the collaborative signals from other users. Instead, we propose a user-agnostic global trajectory flow map and a novel Graph Enhanced Transformer model (GETNext) to better exploit these extensive collaborative signals for more accurate next POI prediction, while alleviating the cold start problem. GETNext incorporates global transition patterns, users' general preferences, spatio-temporal context, and time-aware category embeddings into a transformer model to predict a user's future moves. With this design, our model outperforms the state-of-the-art methods by a large margin and also sheds light on the cold start challenge in spatio-temporal recommendation problems.
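
A user-agnostic trajectory flow map of the kind described can be sketched as a directed graph whose edges count POI-to-POI transitions pooled across all users; the data format and POI names here are assumptions for illustration.

    from collections import defaultdict

    def build_flow_map(trajectories):
        """trajectories: list of POI-id sequences, one per user session."""
        edges = defaultdict(int)
        for traj in trajectories:
            for src, dst in zip(traj, traj[1:]):
                edges[(src, dst)] += 1   # pooled over all users: user-agnostic
        return edges

    # Toy check-in sequences (hypothetical POI ids)
    flow = build_flow_map([["cafe", "park", "museum"], ["cafe", "museum"]])
    print(flow[("cafe", "museum")])      # 1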

Ensemble modelling provides robust time series forecasting for hospitalization rates related to severe respiratory diseases in Auckland
Steffen Albrecht, University of Auckland

Predicting trends in hospitalization rates is highly relevant, as it enables proactive hospital management instead of simply reacting to higher demand during pandemics or seasonal epidemics caused by influenza and other respiratory diseases. Moreover, forecasting models can foresee sharp rises in hospitalizations, allowing policy makers to make better decisions about intervention strategies aimed at reducing transmission rates before the local health system is pushed beyond its limits. Machine learning has been shown to be very accurate in forecasting and provides algorithms and concepts for multivariate data, allowing many different data sources to be integrated into comprehensive models. In this study we integrate hospitalization data from two large hospitals in Auckland with laboratory tests that report, at daily resolution, which viruses are circulating during the winter season. The laboratory data is valuable for forecasting respiratory disease cases, but it is also challenging because some viruses are by nature highly over- or underrepresented in some years. Due to these strong season-to-season variations, it can be difficult to train accurate forecasting models, and difficult to define a forecasting strategy that is robust across several years. We therefore propose a new method based on an ensemble of models trained on data from single seasons and on different combinations of the variables available in the dataset. Based on an evaluation spanning ten years, including pre- and post-COVID data, we show that this ensemble-based approach improves forecasting accuracy and, more importantly, achieves robust performance in our benchmarking. The resulting models are not only relevant to collaborative efforts to better understand the underlying data; they will also be implemented at Auckland City Hospital to support directors and management staff during future winter seasons.
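
A minimal sketch of the ensemble idea, assuming one model per (season, feature-subset) pair, pandas DataFrames as input, and simple prediction averaging; the abstract does not specify the base learner or the combination rule, so both are placeholders.

    import numpy as np
    from itertools import combinations
    from sklearn.ensemble import RandomForestRegressor

    def train_ensemble(season_frames, features, target, subset_size=3):
        """season_frames: dict season -> DataFrame of that winter's daily data."""
        models = []
        for season, df in season_frames.items():
            for subset in combinations(features, subset_size):
                m = RandomForestRegressor(n_estimators=100)
                m.fit(df[list(subset)], df[target])
                models.append((list(subset), m))
        return models

    def forecast(models, df_new):
        # Average the member forecasts for robustness across seasons.
        preds = [m.predict(df_new[cols]) for cols, m in models]
        return np.mean(preds, axis=0)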

The metaphysics of personal identity and the Te Ao Māori relational self and Māori data sovereignty
Tahua O’Leary, University of Auckland

The Māori notion of self is relational. This makes Māori data part of the self and thus inalienable, and consent models of data sovereignty consequently inadequate; capacity rollout, by contrast, is adequate.

NEXT: Operational Nowcasting System for Wind and Solar Power in New Zealand
Tristan Meyers, NIWA

New Zealand is moving towards carbon neutrality by 2050, and a large part of this effort is the rapid adoption of solar and wind power. However, to increase the uptake of these intermittent generation sources, the network requires confidence in their predictability. Current supplies of renewable energy forecasts are decentralised; this results in inaccurate forecasts and poor use of grid resources, costing hundreds of millions of dollars each year. Moreover, traditional numerical weather prediction systems deliver forecasts every 6 hours, which is not sufficient to meet the dynamically changing demands and supplies of the energy market, especially as the energy portfolio shifts towards renewables that are more susceptible to short-term weather variations.

In this paper, we develop NEXT, a centralised, high-resolution, AI-enhanced operational nowcasting system for wind and solar power in Aotearoa New Zealand. NEXT draws upon several different threads, including Numerical Weather Prediction (NWP) output, climatological prediction, and current observations, and combines them into a rapidly updating, seamless nowcast that refreshes every 30 minutes with a 12-hour lead time. We tested NEXT using New Zealand Reanalysis (NZRA) data and all available sites with wind speed and/or solar irradiation observations, and found that it significantly outperforms Model Output Statistics (MOS)-corrected NWP output on a 12-hourly cycle, with clear advantages in delivering accurate nowcasts at a 30-minute refresh compared with traditional weather forecast systems.

NEXT will help the integration of intermittent solar and wind generation into the existing power network by giving decision makers access to accurate nowcasts.
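
The abstract does not give NEXT's combination scheme, but one common way to merge current observations with NWP output in a nowcast is a lead-time-weighted blend, roughly as below; the weights, the exponential decay, and the persistence assumption are illustrative, not NEXT's actual method.

    import numpy as np

    def blend_nowcast(obs_persistence, nwp, lead_hours, decay=3.0):
        """Weight the latest observation heavily at short lead times,
        relaxing towards the NWP forecast as lead time grows."""
        nwp = np.asarray(nwp, dtype=float)
        w_obs = np.exp(-np.asarray(lead_hours) / decay)
        return w_obs * obs_persistence + (1.0 - w_obs) * nwp

    leads = [0.5, 3.0, 6.0, 12.0]                      # hours ahead
    print(blend_nowcast(8.2, [7.0, 7.5, 8.0, 9.0], leads))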

Federated Learning-Enabled AI-Generated Content in Vehicular Internet of Things (VIoT)
William Liu, Otago Polytechnic Auckland International Campus (OPAIC)

Artificial Intelligence Generated Content (AIGC) stands out as a promising technology, offering substantial advances in the efficiency, quality, diversity, and flexibility of content creation through the adoption of various generative AI models. The integration of AIGC services within vehicular Internet of Things (VIoT) ecosystems is poised to significantly enhance user experiences and driver safety. Nevertheless, existing AIGC service provisions struggle with certain limitations, notably centralized training during the pre-training, fine-tuning, and inference processes. This is particularly problematic in VIoT settings, which place a strong emphasis on time-sensitive and emergency communications, high mobility and dynamic environments, and privacy and security preservation.

Federated Learning (FL), as a collaborative learning framework distributing model training to cooperative data owners without requiring data sharing, emerges as a viable solution to simultaneously enhance learning efficiency and ensure privacy protection for AIGC within vehicular IoT contexts. In response, this study introduces FL-based techniques tailored to empower AIGC, with the overarching goal of enabling users to generate content that is not only diverse and personalized but also of high quality.

A primary study is conducted, focusing on FL-aided AIGC fine-tuning with a state-of-the-art AIGC model, namely the stable diffusion model. Preliminary numerical results have confirmed the promise of our approach in terms of computation efficiency and content accuracy, demonstrating effective reductions in computation and communication costs and training latency, together with enhanced privacy protection tailored to vehicular IoT use cases. Moreover, we outline several pivotal research directions and open issues, emphasizing the convergence of FL and AIGC under the unique challenges posed by vehicular IoT and, more generally, intelligent transport system (ITS) fields.
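
For concreteness, here is a minimal sketch of the federated-averaging step such a scheme rests on, shown generically; the abstract does not specify the aggregation rule, and in practice only the fine-tuned parameters of the diffusion model would be exchanged between vehicles and the aggregator.

    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Aggregate per-client parameter vectors, weighted by local data size."""
        sizes = np.asarray(client_sizes, dtype=float)
        coeffs = sizes / sizes.sum()
        return sum(c * w for c, w in zip(coeffs, map(np.asarray, client_weights)))

    # Three vehicles report locally fine-tuned parameter vectors (toy values).
    global_update = fedavg(
        client_weights=[[0.1, 0.2], [0.3, 0.1], [0.2, 0.2]],
        client_sizes=[100, 50, 50],
    )
    print(global_update)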

Machine learning for emergency medical dispatch
Yi Mei, Victoria University of Wellington

New Zealand emergency service providers struggle to meet Government-set response time KPIs due to increasing emergency volume, paramedic burnout, and suboptimal assignment of ambulances to emergencies. The problem faced by ambulance services changes daily: they do not know how many emergencies of which urgency will occur, when, or where. With the difficulty of improving infrastructure (the number of staff and vehicles), efficient resource utilisation is critical to emergency service performance.

In this talk, we will introduce a recent collaborative project with Wellington Free Ambulance that uses novel interpretable machine learning techniques, namely genetic programming, to automatically learn emergency medical dispatching policies from years of historical dispatch data through a simulation optimisation framework. Our preliminary results show that the policies learned by our genetic programming methods achieve much shorter response times to patients than the current manual dispatching methods used in industry (e.g., up to a 20% reduction). Furthermore, the policies show potential to transfer between scenarios, e.g., from Wellington to Christchurch and vice versa.
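
To give a flavour of what a learned, interpretable dispatching policy can look like: a genetic programming run ultimately produces a priority expression over situational features, and the highest-scoring ambulance is assigned. The features and the expression below are invented for illustration, not the project's actual evolved policies.

    def dispatch_score(travel_time_min, urgency, busy_prob):
        # A GP-evolved policy is, in the end, just an expression like this.
        return urgency / (travel_time_min + 1.0) - 0.5 * busy_prob

    # (travel time to scene in minutes, urgency level, prob. needed elsewhere)
    candidates = {"amb_1": (12.0, 3, 0.2), "amb_2": (5.0, 3, 0.7)}
    best = max(candidates, key=lambda a: dispatch_score(*candidates[a]))
    print(best)  # amb_2: closer, despite a higher chance of being needed elsewhere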

Adaptive Prediction Interval for Data Stream Regression
Yibin Sun, University of Waikato

Prediction intervals (PIs) are a powerful technique for quantifying the uncertainty of regression predictions. However, research on PIs for data streams has received little attention, and traditional PI-generating approaches are not directly applicable due to streaming data's dynamic and evolving nature. We present AdaPI (ADAptive Prediction Interval), a novel method that automatically adjusts the interval width by an appropriate amount according to historical information, so that the coverage converges to the expected value. AdaPI can be applied to any streaming PI technique as a post-processing step.

We extend the widely used Mean and Variance Estimation (MVE) approach to make it incremental and usable with our PI method. An empirical evaluation on a set of standard streaming regression tasks demonstrates that AdaPI produces compact prediction intervals while maintaining coverage close to the desired level, outperforming alternative methods.
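
A minimal sketch of the adaptive-width idea, assuming an incremental mean-and-variance interval whose scale is nudged towards a target coverage; the update rule and step size are illustrative assumptions, not AdaPI's exact formula.

    import math

    class AdaptiveInterval:
        """Incremental mean/variance interval on residuals, with a width
        scale adjusted online towards a target coverage (illustrative)."""

        def __init__(self, target=0.9, step=0.01):
            self.target, self.step = target, step
            self.scale = 1.64               # ~90% normal quantile as a start
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def interval(self, y_pred):
            std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 1.0
            return y_pred - self.scale * std, y_pred + self.scale * std

        def update(self, y_pred, y_true):
            lo, hi = self.interval(y_pred)
            covered = 1.0 if lo <= y_true <= hi else 0.0
            # Widen on a miss, narrow slightly on a hit; balances at target.
            self.scale += self.step * (self.target - covered)
            r = y_true - y_pred             # Welford update on residuals
            self.n += 1
            d = r - self.mean
            self.mean += d / self.n
            self.m2 += d * (r - self.mean)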

Dynamic Systems in Machine Learning: Navigating Continual Adaptation in Evolving Environments
Yun Sing Koh, University of Auckland

A significant portion of scientific investigation revolves around creating and verifying hypotheses to aid in constructing precise models of various systems. The capacity to consistently learn, and to enhance the learning process through prior knowledge, is a fundamental element of human intelligence. Challenges such as changes in system conditions, the integration of new data, or substantial shifts in system behaviour are common obstacles for real-world applications of machine learning. This talk will cover our ongoing research in the realm of continual adaptation.

Learning to Schedule Manufacturing Jobs via Reinforcement Learning
Yuqian Lu, University of Auckland

All factories need to schedule jobs optimally. However, this remains a challenge, as real-world manufacturing is a continuous dynamic system characterized by unpredictable job arrivals, a variety of products, and unexpected job delays. To address this, we are exploring Multi-Agent Reinforcement Learning technologies that enable individual machines to adaptively learn scheduling policies and process manufacturing tasks collaboratively in various contexts. In this talk, I will present the new problem of the Continuous Dynamic Flexible Job Shop Scheduling Problem (C-DFJSP) and share our technological exploration for discussion.

Zero-Knowledge Proof-based Verifiable Federated Learning on Blockchain
Zhibo Xing, University of Auckland

The growing concern over privacy leakage has led to reduced user participation in data sharing, prompting the exploration of novel techniques such as federated learning. Meanwhile, existing federated learning solutions often overlook the validation of the training process, leaving room for malicious trainers to introduce false or toxic local models, detrimental to the global model's utility. To address this challenge, we propose a Zero-Knowledge Proof-based verifiable Federated Learning (ZKP-FL) scheme on the blockchain. ZKP-FL leverages zero-knowledge proofs to validate the extensive training process by dividing it into smaller pieces and generating proofs for each segment. The Sigma-protocol ensures the consistency and reliability of these proofs. Moreover, we design a secure model aggregation protocol that matches the local proofs, safeguarding the data privacy of individual local models throughout the process. To establish the effectiveness and security of ZKP-FL, we conduct a formal security analysis in terms of completeness, soundness, and zero-knowledge properties. Experimental evaluations with different algorithms and models within the ZKP-FL framework demonstrate that with parallel execution the additional proof time per round is minimal.
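
For readers unfamiliar with Sigma-protocols, below is the classic Schnorr proof of knowledge of a discrete logarithm, the standard three-move (commit, challenge, respond) pattern on which per-segment proofs of this kind build; the tiny group parameters are purely illustrative, and this is not ZKP-FL's actual proof system.

    import secrets

    # Toy group parameters (real use needs large safe primes).
    p, q, g = 467, 233, 4        # g generates the order-q subgroup mod p

    x = secrets.randbelow(q)     # prover's secret
    y = pow(g, x, p)             # public key: y = g^x mod p

    # Commit: prover picks random r and sends t = g^r.
    r = secrets.randbelow(q)
    t = pow(g, r, p)

    # Challenge: verifier sends random c.
    c = secrets.randbelow(q)

    # Respond: prover sends s = r + c*x mod q.
    s = (r + c * x) % q

    # Verify: g^s == t * y^c (mod p), revealing nothing about x.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("proof verified")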

Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models
Zihao Luo, University of Auckland

Low-rank adaptation (LoRA) is an efficient strategy for adapting latent diffusion models (LDMs) to a training dataset to generate specific objects by minimizing the adaptation loss. However, LDMs adapted via LoRA are vulnerable to membership inference (MI) attacks, which can judge whether a particular data point belongs to the private training dataset, posing severe risks of privacy leakage. To defend against MI attacks, we make the first effort to propose a straightforward solution: privacy-preserving LoRA (PrivateLoRA). PrivateLoRA is formulated as a min-max optimization problem in which a proxy attack model is trained by maximizing its MI gain while the LDM is adapted by minimizing the sum of the adaptation loss and the proxy attack model's MI gain. However, we empirically find that PrivateLoRA suffers from unstable optimization due to large fluctuations in the gradient scale, which impede adaptation. To mitigate this issue, we propose Stable PrivateLoRA, which adapts the LDM by minimizing the ratio of the adaptation loss to the MI gain; this implicitly rescales the gradient and thus stabilizes the optimization. Our comprehensive empirical results corroborate that LDMs adapted via Stable PrivateLoRA can effectively defend against MI attacks while generating high-quality images.
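
The two objectives can be written schematically as follows, grounded only in the abstract's description; the loss terms are stand-in tensors, and the alternating step that trains the proxy attack model by maximizing its MI gain is not shown.

    import torch

    def private_lora_loss(adapt_loss, mi_gain, lam=1.0):
        # PrivateLoRA: minimize adaptation loss plus the proxy attacker's MI gain.
        return adapt_loss + lam * mi_gain

    def stable_private_lora_loss(adapt_loss, mi_gain, eps=1e-8):
        # Stable PrivateLoRA: minimize the ratio instead of the sum, which
        # implicitly rescales the gradient and stabilizes the optimization.
        return adapt_loss / (mi_gain + eps)

    # Toy tensors standing in for the two loss terms of one training step.
    adapt_loss = torch.tensor(2.0, requires_grad=True)
    mi_gain = torch.tensor(0.5)
    loss = stable_private_lora_loss(adapt_loss, mi_gain)
    loss.backward()
    print(adapt_loss.grad)   # gradient rescaled by 1 / (mi_gain + eps)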
