The main goal of this conference is to foster discussion around the latest advances in Artificial Intelligence.

Note: the Programme and Speaker line-up are subject to change.

Session Two

Deep Learning in Spectral Computed Tomography
James Steven Atlas (University of Canterbury)

Photon-counting spectral computed tomography (PC-SCT) has attracted much attention for its potential in radiation dose reduction, metal artifacts/x-ray beam hardening artifact removal, tissue quantification and material discrimination. Current leading edge engineering at MARS Bioimaging integrates Medipix3 detector chip technology with a system that is in clinical study to provide PC-SCT to medical imaging centres. This talk will overview deep learning approaches to improving image quality and diagnostic measures in this domain.

Deep Learning, Health Delivery, and Assistive Technologies
Reza Shahamiri (The University of Auckland)

At the forefront of modern Machine Learning technologies, Deep Learning algorithms have revolutionized how big unstructured data can be exploited to extract learnable knowledge and deliver novel intelligent products and services. They have made significant impacts in multiple fields beyond computer science and engineering, including health services and delivery, prevention, and diagnosis. In this presentation, Dr. Reza Shahamiri will discuss how deep learning technologies can help computers understand people with speech impairments, and how deep learning-inspired speech and language technologies can aid healthcare professionals in diagnosing dementia early. He will also explain how his Autism AI platform helps parents and caregivers identify autistic tamariki.

Evolutionary Feature Reduction
Bach Hoai Nguyen (Victoria University of Wellington)

We are now in the era of big data, where vast amounts of high-dimensional data have become ubiquitous in a variety of domains, such as social media, healthcare, and cybersecurity. When machine learning algorithms are applied to such high-dimensional data, they suffer from the curse of dimensionality: the data becomes very sparse. Furthermore, high-dimensional data may contain redundant and/or irrelevant features that obscure the useful information carried by relevant features. Feature reduction can address these issues by building a smaller but more informative feature set. Recently, evolutionary computation (EC) has been widely applied to feature reduction because of its potential for global search. Existing EC-based feature reduction approaches successfully reduce the data dimensionality while still improving the classification performance, as well as the interpretability of the built models. This presentation explains a general framework for evolutionary feature reduction, followed by applications of feature reduction to real-world scenarios.
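To make the idea concrete, here is a minimal, hypothetical sketch of EC-based feature selection (not the speaker's system): a small genetic algorithm evolves binary feature masks, scored by the hold-out accuracy of a 1-nearest-neighbour classifier restricted to the selected features. The synthetic data, fitness penalty, and GA settings are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy dataset: the label depends only on feature 0; features 1-4 are noise.
def make_data(n):
    X = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(n)]
    y = [1 if x[0] > 0 else 0 for x in X]
    return X, y

X_train, y_train = make_data(60)
X_test, y_test = make_data(40)

def accuracy(mask):
    """1-NN hold-out accuracy using only the features where mask[j] == 1."""
    if not any(mask):
        return 0.0
    idx = [j for j, m in enumerate(mask) if m]
    correct = 0
    for xt, yt in zip(X_test, y_test):
        nearest = min(range(len(X_train)),
                      key=lambda i: sum((X_train[i][j] - xt[j]) ** 2 for j in idx))
        correct += (y_train[nearest] == yt)
    return correct / len(X_test)

def fitness(mask):
    # Reward accuracy, slightly penalise larger feature sets.
    return accuracy(mask) - 0.01 * sum(mask)

def evolve(pop_size=20, gens=15):
    pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 5)       # one-point crossover
            child = a[:cut] + b[cut:]
            j = random.randrange(5)            # bit-flip mutation
            child[j] = 1 - child[j]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(accuracy(best), 2))
```

The evolved mask should retain the single informative feature while discarding most of the noise dimensions, illustrating how EC-based feature reduction can improve both accuracy and interpretability.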

Evolutionary Machine Learning Approaches and Applications
Bing XUE (Victoria University of Wellington)

Evolutionary Computation (EC) comprises a group of nature-inspired algorithms. EC approaches have several advantages: they do not make any assumption about the data; they do not require domain knowledge, but can easily incorporate, or make use of, domain-specific knowledge; and they maintain a population of solutions, which makes them robust for problems with many local optima and particularly suitable for multi-objective problems. EC approaches have therefore been widely applied to address challenging optimisation and learning tasks in a wide range of real-world applications. The main tasks include classification, regression, clustering, image analysis, transfer learning, multi-objective machine learning, feature selection, automated design of deep neural networks, and natural language processing, with applications in biology, aquaculture, cyber-security, chemistry, agriculture, planning, and others. In addition to the research and applications, this talk will also briefly introduce the teaching programmes developed at Victoria University of Wellington, particularly research-led teaching.

Generic approaches to reasoning about social coordination
Stephen Cranefield (University of Otago)

In human society, we commonly interact with others on a peer-to-peer basis without certain knowledge of their goals and intentions. As a long tradition of research on game theory has shown, the temptation to take a short-term selfish reward, the fear of being taken advantage of, and self-interested decision-making can lead to socially suboptimal outcomes for all parties. However, mutually optimal long-term cooperation can be fostered by mechanisms that align the interacting parties' knowledge and expectations, such as common background knowledge, shared group goals, and social norms. Game theory researchers have developed techniques to extend models of players' payoffs to incorporate such external mechanisms, but this requires human expertise. While that approach is valuable for social science research, it is not clear how it can be generalised to build socially intelligent software that can perform delegated interactions or provide advice to users across a diversity of unforeseen scenarios. Example applications include social robots or personal assistant apps that can maintain social awareness and perform delegated interactions on behalf of users. In my work I take the alternative approach of developing techniques for software agents to reason with symbolic representations of agent expectations, norms, and common knowledge. In this talk I will discuss my recent work in this area and illustrate some (simulated) applications. This work was supported by the Marsden Fund Council from New Zealand Government funding, managed by Royal Society Te Apārangi.

Machine Learning for Automation in Plant Tissue Culture 
Sam Davidson (Scion Research)

Machine learning and deep learning have a key role to play in modern automated systems for large-scale production of seedlings. These seedlings can be produced via plant tissue culture techniques such as somatic embryogenesis. A common challenge is that not all somatic embryos produced via this method will germinate successfully to produce healthy seedlings. We have used deep learning instance segmentation to automatically segment these somatic embryos in microscopy images, and machine learning to (1) predict germination success and (2) identify the most important morphological features in the images.
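As a hypothetical illustration of the second step (not Scion's actual pipeline), permutation importance is one simple way to rank morphological features: shuffle one feature column at a time and measure how much a trained model's accuracy drops. The synthetic features, labels, and stand-in model below are all assumptions for the sketch.

```python
import random

random.seed(5)

# Synthetic "morphological features": only feature 0 (think embryo length)
# drives the simulated germination label; features 1 and 2 are noise.
def make(n):
    X = [[random.uniform(0, 1) for _ in range(3)] for _ in range(n)]
    y = [1 if x[0] > 0.5 else 0 for x in X]
    return X, y

X, y = make(200)

def model(x):                      # stand-in for a trained classifier
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

base = accuracy(X, y)
importance = []
for j in range(3):                 # permutation importance: shuffle one column
    col = [x[j] for x in X]
    random.shuffle(col)
    Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
    importance.append(round(base - accuracy(Xp, y), 3))
print(importance)
```

Shuffling the informative feature destroys most of the accuracy, while shuffling the noise features changes nothing; the resulting accuracy drops give a direct importance ranking.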

God as the original Ai: Learning from the Sociology of Religion.
Mike Grimshaw (University of Canterbury)

To think forwards in Ai we also need to think backwards. We already have a model for Ai that makes clear its possibilities, limitations, issues and implementations, and this is 'God'. This presentation argues that God, as a human technology and creation, is in fact the original Ai. That is, 'God' is an artificial intelligence technology that is positioned 'as if' it were an independent, autonomous, intelligent entity capable of interaction with the world. Yet what is really interacting are networks of human intelligence and culture, and what has been, and is being, done in the name of such a 'God Ai' raises questions for how we should consider Ai today, including questions concerning Ai coders and programmers. I consider the ways in which Ai can learn from the sociology of religion and argue for the benefits of greater inter-disciplinarity in Ai thinking.

Instance-based Explanations for Gradient Boosting Machine Predictions with IBEX values
Paul Geertsema (The University of Auckland)

We show that Gradient Boosting Machine (GBM) predictions can be represented as linear combinations of training data target instances. The weights associated with such linear combinations are effectively measures of instance importance, thus complementing measures of feature importance such as SHAP and LIME. We refer to these weights as IBEX (Instance Based EXplanations) values. IBEX values are additive and can therefore offer both local and global explanations for GBM predictions. Our work contributes towards efforts to make machine learning approaches more explainable and interpretable.
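A toy sketch of the core observation (not the authors' implementation): with squared loss, every boosting stage adds a leaf value that is a mean of training residuals, so the prediction for any query can be unrolled into explicit per-instance weights on the training targets. The stump learner, learning rate, and tiny dataset below are illustrative assumptions.

```python
def fit_stump(X, r):
    """Single-feature threshold split minimising squared error on residuals r."""
    n, best = len(r), None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X})[:-1]:
            left = [i for i in range(n) if X[i][j] <= t]
            right = [i for i in range(n) if X[i][j] > t]
            ml = sum(r[i] for i in left) / len(left)
            mr = sum(r[i] for i in right) / len(right)
            err = (sum((r[i] - ml) ** 2 for i in left)
                   + sum((r[i] - mr) ** 2 for i in right))
            if best is None or err < best[0]:
                best = (err, j, t, left, right)
    return best[1:]

def boost_and_unroll(X, y, xq, rounds=3, lr=0.5):
    n = len(y)
    # W[i] expresses the current training prediction for instance i as a
    # linear combination of the targets y; wq does the same for query xq.
    W = [[1.0 / n] * n for _ in range(n)]
    wq = [1.0 / n] * n
    for _ in range(rounds):
        pred = [sum(W[i][k] * y[k] for k in range(n)) for i in range(n)]
        r = [y[i] - pred[i] for i in range(n)]
        j, t, left, right = fit_stump(X, r)
        rows = {}
        for key, leaf in (("L", left), ("R", right)):
            row = [0.0] * n            # mean over the leaf of (e_i - W[i]),
            for i in leaf:             # i.e. the leaf value as weights on y
                for k in range(n):
                    row[k] += ((k == i) - W[i][k]) / len(leaf)
            rows[key] = row
        for i in range(n):             # update every instance's weight row
            row = rows["L"] if X[i][j] <= t else rows["R"]
            for k in range(n):
                W[i][k] += lr * row[k]
        row = rows["L"] if xq[j] <= t else rows["R"]
        for k in range(n):             # and the query's weight row
            wq[k] += lr * row[k]
    return wq

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 0.0, 1.0, 1.0]
wq = boost_and_unroll(X, y, [2.5])
pred = sum(w * t for w, t in zip(wq, y))
print([round(w, 3) for w in wq], round(pred, 3))
```

The unrolled weights sum to one and reproduce the boosted prediction exactly; the heaviest weights fall on the training instances sharing leaves with the query, which is the instance-importance flavour the abstract describes.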

Learn More About Your Data - Symbolic Regression and Modelling
Qi Chen (Victoria University of Wellington)

Symbolic regression and modelling is the process of searching for symbolic descriptions that capture the structure of the data and make accurate predictions in the numerical data space. These descriptions are usually represented as mathematical models, composed of variables and operators, that express the underlying relationship between the independent/input variables and the dependent/target variable(s). The development of symbolic regression and modelling is motivated by the need to efficiently and effectively convert data into actionable knowledge. The key characteristic that distinguishes symbolic regression and modelling techniques from numerical modelling techniques is their capability of producing interpretable models. This interpretability promotes an insightful understanding of the data-generating system and the theory in the field of interest. Apart from interpretability, symbolic regression and modelling is also driven by a number of other metrics, such as accuracy, generalisation capacity, robustness to noisy data, the number and type of features/variables involved in the models, and the shape/structure of the models. Our research is driven by handling all these aspects. Statistical learning plays an important role in many fields of science and industry, and statistical learning theory provides the solid theoretical basis of many of today's machine learning techniques. When symbolic regression meets statistical learning, it can generate more insights and knowledge from the data.
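As a hypothetical toy example (using random search over expression trees rather than the genetic programming typically used for symbolic regression), the goal is to recover an interpretable expression such as x*x + x directly from data; the function set, depth limit, and sample budget are assumptions for the sketch.

```python
import random

random.seed(1)

# Target relationship hidden in the data: y = x*x + x
xs = [i / 2 for i in range(-6, 7)]
ys = [x * x + x for x in xs]

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(depth=3):
    """Random expression tree over {x, small constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else round(random.uniform(-2, 2), 1)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(e, x):
    if e == "x":
        return x
    if isinstance(e, (int, float)):
        return e
    op, a, b = e
    return OPS[op](evaluate(a, x), evaluate(b, x))

def mse(e):
    return sum((evaluate(e, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Random search stands in for the evolutionary search a GP system would run.
best = min((random_expr() for _ in range(20000)), key=mse)
print(best, round(mse(best), 4))
```

The recovered tree is itself the model: unlike a numerical black box, it can be read off as a formula, which is the interpretability advantage the abstract emphasises.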

Machine Learning for Better Responsive Emergency Medical Dispatch
Yi Mei (Victoria University of Wellington)

Due to limited resources of ambulances, paramedics, and specialists, New Zealand’s emergency medical service is facing pressure to handle increasingly high demand (e.g., more severe COVID cases on top of the standard patient workload). One critical but challenging problem in emergency medical services is responsive emergency medical dispatch. It comprises a variety of complex and interdependent offline decisions (e.g., managing staff work shifts) and online decisions (e.g., assigning staff to ambulances and assigning ambulances to emergency requests) under a highly dynamic and uncertain environment, such as unpredicted emergency requests arriving in real time with uncertain levels of urgency. Furthermore, there are conflicting goals to be balanced, such as reducing the response time to requests while avoiding long staff work hours. This talk will introduce a novel machine learning algorithm that we developed for automatic emergency medical dispatch. Specifically, we model the online ambulance dispatch as a unique Markov decision process with multiple cooperative agents, and develop a discrete event simulation platform for it. Then, we propose different representations for the policy, and use genetic programming, an evolutionary machine learning approach, to learn the policy through a simulation-optimisation paradigm. Results show that the learned policies can reduce by ~50% the response time of ambulances attending dynamically arriving emergency requests. This shows the promise of machine learning for assisting efficient emergency medical services.
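A hypothetical, highly simplified sketch of the simulation-optimisation idea (not the authors' platform): a dispatch policy is just a scoring function over request/ambulance features, so candidate rules, such as those genetic programming would evolve, can be compared on an identical simulated request stream. The single-dimensional map, arrival rates, and on-scene time are illustrative assumptions.

```python
import random

def simulate(policy, seed=2, n_requests=300):
    """Replay one stream of emergency requests under a given dispatch policy."""
    rng = random.Random(seed)
    free_at = [0.0, 0.0, 0.0]          # when each ambulance next becomes idle
    pos = [0.0, 5.0, 10.0]             # ambulance positions on a 0-10 line
    t, total_response = 0.0, 0.0
    for _ in range(n_requests):
        t += rng.expovariate(0.5)      # requests arrive at rate 0.5
        loc = rng.uniform(0, 10)
        urgency = rng.choice([1, 2, 3])
        busy = [max(f - t, 0.0) for f in free_at]
        # dispatch the ambulance the policy scores highest
        a = max(range(3), key=lambda i: policy(busy[i], abs(pos[i] - loc), urgency))
        response = busy[a] + abs(pos[a] - loc)
        total_response += response
        free_at[a] = t + response + 0.5    # 0.5 time units on scene
        pos[a] = loc
    return total_response / n_requests

# A naive policy (always the first ambulance) versus a simple priority rule,
# of the kind genetic programming would evolve, trading off remaining busy
# time against travel distance.
naive = lambda busy, dist, urgency: 0.0
rule = lambda busy, dist, urgency: -(busy + dist)

print(round(simulate(naive), 2), round(simulate(rule), 2))
```

Because both policies see the same seeded request stream, the comparison is fair, which is exactly what makes a simulation platform suitable as the fitness evaluator in an evolutionary search over policies.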

Multitask learning on graph convolutional residual neural networks for screening of multi-target anticancer compounds
Binh Nguyen (Victoria University of Wellington)

Recently, various modern experimental screening pipelines and assays have been developed to find promising anticancer drug candidates. However, it is time-consuming and almost infeasible to screen an immense number of compounds for anticancer activity via experimental approaches. Several computational advances have been proposed to partially address this issue. In this study, we present iACP-GCR, a model based on multitask learning on graph convolutional residual neural networks with two types of shortcut connections, to identify multi-target anticancer compounds. In our proposed architecture, the graph convolutional residual neural networks are shared by all the prediction tasks before being separately customized. The NCI-60 dataset, one of the most reliable and well-known sources of experimentally verified compounds, was used to develop our model. From that dataset, data on compounds screened across nine cancer types (panels), including breast, central nervous system, colon, leukemia, non-small cell lung, melanoma, ovarian, prostate, and renal, were collected and refined for model training and evaluation. The model performance evaluated on an independent test set shows that iACP-GCR surpasses two other advanced multitask learning methods. The prediction accuracy is also improved by adding the two types of shortcut connections to the shared networks.
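A schematic illustration of the shared-then-customized architecture described above, under loud assumptions: plain dense layers stand in for the paper's graph convolutions, the weights are random rather than trained, and the task names are just examples drawn from the panels listed. The point is only the wiring: one shared residual trunk feeds separate per-task heads.

```python
import random

random.seed(3)

def linear(dim_out, dim_in):
    return [[random.gauss(0, 0.5) for _ in range(dim_in)] for _ in range(dim_out)]

def apply(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

DIM = 4
shared = [linear(DIM, DIM) for _ in range(2)]   # trunk shared by all tasks
heads = {task: linear(1, DIM) for task in ("breast", "colon", "leukemia")}

def predict(features, task):
    h = features
    for W in shared:
        # residual shortcut: add the layer's output back onto its input
        h = [a + b for a, b in zip(h, relu(apply(W, h)))]
    return apply(heads[task], h)[0]             # task-specific head

x = [0.2, -0.1, 0.5, 0.3]
print({task: round(predict(x, task), 3) for task in heads})
```

Hard parameter sharing of this kind lets every task benefit from representations learned on the others, while the separate heads keep the per-panel predictions independent.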

One-shot learning without catastrophes
Marcus Frean (Victoria University of Wellington)

Standard neural nets need to go over the same data many times in order to learn, and suffer from catastrophic forgetting of older data. Is it possible to update the weights in a connectionist system online, just once for each input-output pair in a sequence, and never have to go back? We consider a slight tweak to a standard neural net that implements one-shot learning, and suffers no such forgetting (up to the system's capacity). As a bonus, one can derive a biologically plausible learning algorithm using only Hebbian learning and no back-propagation.
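A classical sketch of this behaviour, rather than the speaker's model, is Hopfield-style Hebbian storage: each pattern updates the weights exactly once (one shot, purely Hebbian, no back-propagation), and stored patterns become attractors, up to the network's capacity, beyond which forgetting sets in. Network size and noise level below are illustrative assumptions.

```python
import random

random.seed(4)

N = 40
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]

# One-shot Hebbian storage: each pattern touches the weights exactly once.
W = [[0.0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def recall(state, steps=5):
    """Synchronous threshold updates until (hopefully) settling on a pattern."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

# Corrupt 5 of the 40 bits and see how much of the stored pattern comes back.
noisy = patterns[0][:]
for k in random.sample(range(N), 5):
    noisy[k] = -noisy[k]
overlap = sum(a == b for a, b in zip(recall(noisy), patterns[0]))
print(overlap, "/", N)
```

Each stored pattern is a fixed point of the dynamics while the pattern count stays within capacity (roughly 0.14N for random patterns); overloading the network is where the "catastrophes" of the title would appear.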

Proving the Lottery Ticket Hypothesis
Brendan McCane (University of Otago)

The Lottery Ticket Hypothesis (LTH) hypothesises some interesting properties of deep neural networks for which there is ample empirical evidence. For example, it has been shown empirically that aggressively pruned deep networks often perform just as well as their non-pruned counterparts. In this talk I will introduce the main claims of the LTH and outline an informal "proof" of the hypothesis. The work is still in progress as there are some technical hurdles to a more formal proof, but the talk will be accessible to most attendees.

