The main goal of the conference was to foster discussion around the latest advances in Artificial Intelligence.
Note: the programme and speaker line-up are subject to change.
Aiding Forensic Investigations using Machine Learning
Hamid Sharifzadeh (Unitec Institute of Technology)
Advances in machine learning are finding rapid adoption in many fields, ranging from communications, signal processing, and the automotive industry to healthcare, law, and forensics. In this talk, I briefly focus on a couple of research projects revolving around machine learning and forensic investigations: a) a biometric tool based on vein pattern recognition for CSAI investigations, and b) prediction and reconstruction of wear patterns on footwear outsoles. These projects mainly rely on cutting-edge machine learning algorithms applied to forensic science to help law enforcement agencies identify criminals and victims.
A Sufficient Basis for the Moral Considerateness of AI Would Be?
Gay Morgan (University of Waikato)
This paper explores the possible bases which might support extending moral considerateness to AI. It argues that ‘moral considerateness’ is different from ‘legal personality’ and need not be based on having emotional and/or mental capabilities similar to, or beyond, those of a human. That is, it argues that current approaches are too narrow, perhaps misguided, and are certainly missing an essential element in approaching the moral and legal status of AI as AI capacities expand. It further clarifies that legal personality grants no intrinsic or legal recognition of any moral status to the entity to which such personality has been granted – and that it is the moral status of AI that is the animating concern driving such proposals. In other words, the ‘legal personality’ approach either misunderstands the problem it is trying to address or misunderstands the solution proposed. The paper proposes a number of possible bases on which moral considerateness might be extended to AI, and considers what consequences would or could flow from such an extension.
A suggested solution to the superintelligence control problem
Douglas Campbell (University Of Canterbury)
The human brain is a problem-solving system with no mechanical equal. Understanding how it works, and building machines that can outperform it, is now a national strategic priority of the world’s major superpowers. If the vast sums of money and intellectual resources being devoted to this task result in success, then human intelligence will likely be very swiftly dwarfed by that of our mechanical creations, raising the problem of how we can hope to retain control over intelligences much greater than our own. In this talk I review solutions to this problem proposed by Nick Bostrom, Eliezer Yudkowsky, and Stuart Russell, and endorse a refined version of Yudkowsky’s proposal.
Adaptive Machine Learning for Data Streams
Albert Bifet (Artificial Intelligence Institute - University of Waikato)
Big Data and the Internet of Things (IoT) have the potential to fundamentally shift the way we interact with our surroundings. The challenge of deriving insights from the IoT has been recognized as one of the most exciting and key opportunities for both academia and industry. Advanced analysis of big data streams from sensors and devices is bound to become a key area of data mining research as the number of applications requiring such processing increases. Dealing with the evolution of such data streams over time, i.e., with concepts that drift or change completely, is one of the core issues in stream mining. In this talk, I will present an overview of data stream mining, and I will introduce some popular open source tools for data stream mining.
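To make the core idea of drift handling concrete, here is a toy sketch (illustrative only: `WindowDriftDetector` is invented for this example, not one of the open source tools referred to above, which implement far more principled detectors):

```python
class WindowDriftDetector:
    """Flags concept drift when the mean of the most recent window of values
    diverges from the mean of an older reference window by a fixed threshold."""

    def __init__(self, window_size=100, threshold=0.25):
        self.window_size = window_size
        self.threshold = threshold
        self.reference = []   # older values, representing the current concept
        self.current = []     # most recent values

    def add(self, value):
        """Feed one observation; return True if drift is detected."""
        self.current.append(value)
        if len(self.current) > self.window_size:
            self.reference.append(self.current.pop(0))
            if len(self.reference) > self.window_size:
                self.reference.pop(0)
        if len(self.reference) < self.window_size:
            return False      # not enough history yet
        ref_mean = sum(self.reference) / self.window_size
        cur_mean = sum(self.current) / len(self.current)
        if abs(ref_mean - cur_mean) > self.threshold:
            self.reference, self.current = [], []   # restart on the new concept
            return True
        return False

# A deterministic stream with an abrupt concept change at step 300.
stream = [0.0] * 300 + [1.0] * 300
detector = WindowDriftDetector()
drift_points = [t for t, v in enumerate(stream) if detector.add(v)]
print(drift_points)
```

The detector fires once, shortly after the change at step 300, illustrating the detection delay inherent in window-based methods: the recent window must first accumulate enough post-change examples before the two means diverge.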
AI and Image Analysis in Computational Pathology
Ramakrishnan Mukundan (University of Canterbury)
The application of whole slide image analysis and machine learning algorithms in the field of computational pathology has the potential to transform care of breast cancer patients through improved pathology workflow, early and accurate disease diagnosis and enhanced disease management. One area where the emerging data driven technologies could be effectively utilized in pathological evaluations is accurate quantification of various tissue biomarkers and nuclear features. This presentation gives an overview of some of the projects undertaken by our research group (Computer Graphics and Medical Image Analysis group, Department of Computer Science and Software Engineering, University of Canterbury) and recent collaborations with the CDHB.
AI Social Impacts in New Zealand: some recent projects and some open questions
Alistair Knott (Victoria University of Wellington)
In this talk I will summarise some projects on AI social impacts in New Zealand that I have been involved in. One project focusses on uses of AI methods by New Zealand government departments. The report for this project was one of the foundational documents for New Zealand's innovative Algorithms Charter, which was released in 2020. Another project focusses on the impact of AI on jobs and work in New Zealand. This project fed into media and government discussions about prospects for shortening the working week. A third project considers the impact of social media recommender systems on platform users' attitudes towards terrorist and violent extremist content. This project extends beyond New Zealand: it is coordinated by the Global Partnership on AI (GPAI), and also involves discussions at the Global Internet Forum to Counter Terrorism (GIFCT). But New Zealand has a central role in these discussions, through its leadership (with France) of the Christchurch Call initiative. A final project is being conducted by a committee to offer the New Zealand government advice on its policy towards lethal autonomous weapons (LAWS). The government recently committed to supporting an international ban on LAWS. (Many New Zealand AI researchers signed an open letter calling for the New Zealand government to do this.) But the precise definition of LAWS is still the subject of much discussion, in AI and in HCI, both nationally and internationally.
Artificial Intelligence for Emergency Management
Phil Mourot (Artificial Intelligence Institute - University of Waikato)
When a disaster happens and the emergency management team responds, it all becomes a question of time. Time is the most valuable resource when you manage a crisis: every second counts. So, what you need first is information. You need to collect and analyse information to develop situational awareness. But you do not need all information; you need reliable data that can add value. Artificial intelligence (AI) can help control the flood of information in emergency operation centres and provide essential data from various sources in a new way. Most of all, using AI can significantly increase efficiency and save time. We are developing a specific tool to be used during a crisis. The tool aims to help allocate emergency response resources at the right time and place and provide the best solutions based on the available and verified information. With the help of AI, data can be processed and enriched with essential findings to support rapid and reliable decision-making. The tool's purpose is to be deployed in emergency centres and help the team better understand the situation and make sound decisions.
An Overview of Some AI Projects at the University of Canterbury
Kourosh Neshatian (University of Canterbury)
In this talk, I will give an overview of some of the projects in my research group, ranging from theoretical machine learning to creating game-playing agents.
Conceptual complexity of neural networks
Lech Szymanski (University of Otago)
We propose a complexity measure of a neural network mapping function based on the order and diversity of the set of tangent spaces from different inputs. Treating each tangent space as a linear PAC concept we use an entropy-based measure of the bundle of concepts to estimate the conceptual capacity of the network. Empirical evaluations show that this new measure is correlated with the generalisation capabilities of the corresponding network. It captures the effective, as opposed to the theoretical, complexity of the network function.
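As a loose one-dimensional illustration of the flavour of this idea (not the measure proposed in the talk, which treats tangent spaces as PAC concepts), one can bucket the tangent slopes of a tiny piecewise-linear network over many inputs and take the entropy of the resulting distribution:

```python
import math

def network(x, w1=1.5, w2=-2.0):
    """A tiny 1-input, 2-hidden-unit ReLU network with fixed toy weights."""
    return max(0.0, w1 * x) + 0.5 * max(0.0, w2 * x)

def tangent(f, x, eps=1e-5):
    """Finite-difference derivative of f at x: the local linear behaviour."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def conceptual_capacity(f, inputs, decimals=3):
    """Entropy (in bits) of the distribution of distinct tangent slopes over a
    sample of inputs -- a crude 1-D proxy for the diversity of local concepts."""
    counts = {}
    for x in inputs:
        key = round(tangent(f, x), decimals)  # bucket near-identical tangents
        counts[key] = counts.get(key, 0) + 1
    n = len(inputs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A piecewise-linear ReLU network realises only a few distinct tangent
# slopes, so its entropy over a dense input grid stays small.
inputs = [i / 10 - 5 for i in range(100)]  # grid over [-5.0, 4.9]
print(conceptual_capacity(network, inputs))
```

Here the toy network has only three distinct local slopes over the grid, so the entropy is low; a network whose tangents varied wildly from input to input would score higher, mirroring the intuition that more diverse local concepts mean higher conceptual capacity.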
Continual Learning for Adaptive Predictive Systems
Yun Sing Koh (University of Auckland)
Much of scientific research involves the generation and testing of hypotheses that can facilitate the development of accurate models for a system. The ability to continuously learn, and to improve the learning process using existing knowledge, is one of the core aspects of human intelligence. Despite this, until recent years the dominant focus of machine learning research has been to design models that learn a set of tasks all at once, without consideration of future learning. Such work typically assumes that the underlying systems are static and unchanging over time. In reality, many applications need to analyse data where the underlying system changes over time: changes in the conditions of the system, the introduction of new information, or a fundamental shift in how the system behaves are issues that confront applications of machine learning in real-world contexts. This talk will discuss some of the research in the area of data streams and continual adaptation.
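A standard protocol in this area is prequential (test-then-train) evaluation, where each arriving example is first used to test the model and only then used to update it. A minimal sketch, using an invented fading majority-class learner purely for illustration (not a method from the talk):

```python
class MajorityClassLearner:
    """Predicts the most frequent label, with exponentially faded counts so
    that old evidence is gradually forgotten (a deliberately simple learner)."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.counts = {}

    def predict(self):
        return max(self.counts, key=self.counts.get) if self.counts else None

    def learn(self, label):
        for k in self.counts:
            self.counts[k] *= self.decay   # fade old evidence
        self.counts[label] = self.counts.get(label, 0) + 1

# Prequential loop: every example is first used to test the current model,
# then used to update it.
stream = ["a"] * 200 + ["b"] * 200       # abrupt concept change at step 200
learner, correct = MajorityClassLearner(), 0
for label in stream:
    if learner.predict() == label:
        correct += 1
    learner.learn(label)
print(correct / len(stream))
```

Because old counts fade, the learner recovers after the change at step 200, but only after a lag; a learner that simply accumulated counts forever would keep predicting the stale concept, which is exactly the failure mode continual adaptation aims to avoid.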
Deep Learning and Reasoning in General Game Playing
Ji Ruan (Auckland University of Technology)
General Game Playing (GGP) is a platform for developing general Artificial Intelligence algorithms to play a large variety of games that are unknown to players in advance. The game rules are encoded in the Game Description Language (GDL), a logic programming language. A GGP player processes the game rules to obtain game states and expands the game tree search for an optimal move. The recent accomplishments of AlphaGo were achieved by combining deep reinforcement learning and Monte-Carlo Tree Search. (1) We first present a deep learning architecture for GGP, which extends AlphaGo with the ability to play many more games. We show the feasibility of our approach and analyse the impact of different parameters on the neural network training. (2) We then present a general graph neural network (GNN) based reasoner for approximating the logical reasoning in GDL. We show that our neural reasoner can learn and infer various game states with high accuracy and has some capability of transfer learning across games. We conclude with a discussion of further research directions, e.g., adding an explainable AI (XAI) component to our framework.
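For readers unfamiliar with Monte-Carlo Tree Search, here is a minimal UCT implementation for a toy game (one-heap Nim). This is a generic textbook sketch, not the AlphaGo-style system or the GGP player described in the talk:

```python
import math
import random

def moves(n):
    """One-heap Nim: a player removes 1-3 stones; taking the last stone wins."""
    return [m for m in (1, 2, 3) if m <= n]

def mcts_best_move(root, iters=4000, c=1.4):
    """Plain UCT Monte-Carlo Tree Search for a single decision.
    Returns the most-visited move at the root state."""
    N = {}      # state -> visit count
    Nsa = {}    # (state, move) -> visit count
    Qsa = {}    # (state, move) -> mean reward for the player to move

    def rollout(n):
        # Random playout; returns 1.0 if the player to move at n wins.
        turn = 0
        while True:
            n -= random.choice(moves(n))
            if n == 0:
                return 1.0 if turn == 0 else 0.0
            turn ^= 1

    def simulate(n):
        # Returns the reward for the player to move at state n.
        if n == 0:
            return 0.0              # opponent took the last stone: loss
        if n not in N:              # unexpanded leaf: expand, then play out
            N[n] = 0
            return rollout(n)
        # UCB1 selection over legal moves
        best_u, best_m = -1.0, None
        for m in moves(n):
            na = Nsa.get((n, m), 0)
            if na == 0:
                u = float("inf")    # try unvisited moves first
            else:
                u = Qsa[(n, m)] + c * math.sqrt(math.log(N[n] + 1) / na)
            if u > best_u:
                best_u, best_m = u, m
        reward = 1.0 - simulate(n - best_m)  # the opponent's reward is flipped
        N[n] += 1
        na = Nsa[(n, best_m)] = Nsa.get((n, best_m), 0) + 1
        q = Qsa.get((n, best_m), 0.0)
        Qsa[(n, best_m)] = q + (reward - q) / na
        return reward

    for _ in range(iters):
        simulate(root)
    return max(moves(root), key=lambda m: Nsa.get((root, m), 0))

random.seed(1)
best = mcts_best_move(5)
print(best)   # taking 1 stone leaves 4, a losing position for the opponent
```

From a heap of 5, the only winning move is to take 1 stone (leaving a multiple of 4), and the search identifies it by visit count; systems like AlphaGo replace the random rollout and UCB1 prior with learned value and policy networks.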