Cooperative Human+AI Learning Management System

AI-Controlled Machine Learning

Generally, in looking for a possible application of AI, enthusiasts like to find a task whose complexity makes it difficult for a human to do the task as well as it should be done. Ironically, there is such a task close at hand that is often overlooked: the control of the training of a machine learning system such as a neural network. Typically, the training of a neural network is controlled by a handful of hyperparameters that are tuned manually or set from prior experience. However, this small number of hyperparameters is not a limit inherent in the design of neural network systems. For example, instead of a single learning rate that decays on a fixed schedule, there could in principle be a separate learning rate for each node, optimized for each epoch depending on the current status of the training of that node. Controlling the learning rate for each node at each epoch, however, is far beyond the capacity of any human or team of humans. A similar situation occurs for other hyperparameters, such as the rate of weight decay, the amount of momentum, and the dropout rate. For node-to-node knowledge sharing, there may be a hyperparameter for every pair of nodes. A computer system is capable of handling this level of complexity, but it requires some degree of intelligence. That is, the training of a neural network or other machine learning system should be controlled by an AI learning supervisor.
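
To make the idea concrete, the following is a minimal sketch, written in PyTorch, of what per-node learning rates could look like. Everything in it is an illustrative assumption rather than a prescribed method: a single linear layer stands in for a network, and the simple rule that shrinks the learning rate of nodes with small gradients stands in for whatever policy an AI learning supervisor might actually use.

    import torch

    torch.manual_seed(0)
    layer = torch.nn.Linear(in_features=8, out_features=4)
    node_lr = torch.full((layer.out_features,), 0.01)   # one learning rate per node

    x = torch.randn(32, 8)
    target = torch.randn(32, 4)

    for epoch in range(20):
        loss = torch.nn.functional.mse_loss(layer(x), target)
        layer.zero_grad()
        loss.backward()
        with torch.no_grad():
            # Apply each node's own learning rate to that node's weights and bias.
            layer.weight -= node_lr.unsqueeze(1) * layer.weight.grad
            layer.bias -= node_lr * layer.bias.grad
            # Hypothetical supervisor policy: reduce the rate for nodes whose
            # gradients have become small, i.e. nodes treated as nearly converged.
            grad_size = layer.weight.grad.norm(dim=1)
            node_lr = torch.where(grad_size < 0.05, node_lr * 0.9, node_lr)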

One form of AI control of machine learning has not been overlooked: having an AI system try to determine the optimum architecture or design of another AI system. This task is even more challenging than the detailed control of millions of hyperparameters mentioned in the previous paragraph. Some progress has been made even on this very challenging task. However, additional progress may be enabled by using a cooperative human+AI learning management system.

Cooperative Intelligent Systems with Humans and AIs Working Together

A general principle on this website is that on any task requiring a high degree of intelligence, humans and machines working together should be able to do at least as well as either working alone. That principle is inherent in our definition of "cooperate". Therefore, having argued that the training of a neural network should be controlled by an AI system, the obvious next step is for the training to be controlled by a Learning Supervisor system comprising humans and AI systems working together. Such cooperative human+AI learning supervisor systems are the theme of this website.

Tasks Done Better with Human Assistance

Managing Node-Specific and Other Highly Specific Hyperparameters

As was pointed out above, managing millions of hyperparameters can be done better by an AI system than by humans. However, the proposed new criteria for machine intelligence are difficult to achieve without human assistance. For example, a classification error caused by an imperceptible change in a pattern generated by an adversarial attack is, by definition, easy for a human to notice as an error, even though it fools the AI system being attacked. In this case, and many others, it is easy for a human to judge sensibility.
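
As one illustration of that human role, here is a minimal sketch, again in PyTorch, of a review step a learning supervisor could include. The fgsm_perturb helper and the human_confirms callback are assumptions introduced for this example, not methods specified on this website: the code perturbs an input slightly, and when the tiny perturbation flips the model's prediction, a human decides whether the new prediction is sensible.

    import torch

    def fgsm_perturb(model, x, label, epsilon=0.01):
        # Add a small, nearly imperceptible perturbation in the direction that
        # increases the loss (a standard fast-gradient-sign-style attack).
        x = x.detach().clone().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    def review_example(model, x, label, human_confirms):
        # human_confirms(original, perturbed, new_prediction) stands in for the
        # human judgment step: the person says whether the changed prediction
        # is sensible.  The supervisor flags cases the human rejects.
        x_adv = fgsm_perturb(model, x, label)
        pred = model(x).argmax(dim=1)
        pred_adv = model(x_adv).argmax(dim=1)
        if not torch.equal(pred, pred_adv) and not human_confirms(x, x_adv, pred_adv):
            return "flagged"
        return "ok"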

Confirming Interpretability

In a large neural network trained by existing training methods, it is notoriously difficult for a human to interpret the inner-layer nodes. However, it is much easier for a human to confirm or reject a proposed interpretation. It is also possible for a human to conclude that the activity of a node will be difficult to interpret, for example by looking at a number of examples and observing that they violate any simple interpretation. On this website, the definition of something being "interpretable" is that a human can express it in words understandable to other humans. With this definition, it is clear that humans can help an AI system in training another AI system to learn concepts that are interpretable.
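
A sketch of the confirm-or-reject step follows. All of the names in it (proposals, top_examples, ask_human) are hypothetical placeholders: the AI proposes a short verbal interpretation for each node, and a human, shown the examples that most strongly activate that node, confirms it, rejects it, or marks the node as hard to interpret.

    def review_interpretations(proposals, top_examples, ask_human):
        # proposals: dict mapping node_id -> proposed interpretation (a short phrase)
        # top_examples: dict mapping node_id -> examples that most activate the node
        # ask_human: callback returning "confirm", "reject", or "uninterpretable"
        verdicts = {}
        for node_id, phrase in proposals.items():
            verdicts[node_id] = ask_human(phrase, top_examples[node_id])
        return verdicts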

Situations with Very Limited Data

Humans are also very capable of making judgments from a limited amount of data. Large neural networks, on the other hand, have met or exceeded human performance on tasks in which a large quantity of data is available. Although the human role is less well defined than in the previous examples, there is clearly an opportunity for a human+AI partnership in controlling the training of a system in a situation requiring lifelong learning or one-shot learning. More generally, humans and AI systems may work cooperatively to train an AI system to generalize better to new data that has not been seen during training.

Assisting in Exploring Alternate Network Architectures

Often during development of a neural network for a new task, the network architecture is chosen in advance. Also, if the network architecture is changed, the iterative gradient descent training is typically restarted from scratch. However, there are methods for incrementally modifying a neural network architecture during training. In particular, there are methods for adding new connections, new nodes, or new layers while maintaining the performance of the previous network. Continued gradient descent training may then improve the performance. These architectural changes may be targeted at fixing specific errors. They may take advantage of a particular opportunity, such as adding a connection for which the gradient of the network objective with respect to the connection weight has a large magnitude. In any of these cases, the new network can escape from a stationary point, accelerating the training and/or making an immediate reduction in the error rate. These techniques can even escape from a stationary point that is a global minimum for the previous network. Some of these techniques are discussed elsewhere on this website.
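
The following is a minimal sketch of one such function-preserving change, using PyTorch; it is an illustrative assumption, not the specific methods referenced above. A new hidden node is added between two linear layers with its outgoing weights set to zero, so the enlarged network initially computes exactly the same function as before, and continued gradient descent is then free to recruit the new node.

    import torch

    def add_hidden_node(layer_in, layer_out):
        # Grow layer_in by one output node and layer_out by one input,
        # without changing what the overall network computes.
        new_in = torch.nn.Linear(layer_in.in_features, layer_in.out_features + 1)
        new_out = torch.nn.Linear(layer_out.in_features + 1, layer_out.out_features)
        with torch.no_grad():
            new_in.weight[:-1] = layer_in.weight      # keep old incoming weights
            new_in.bias[:-1] = layer_in.bias          # new node keeps its random init
            new_out.weight[:, :-1] = layer_out.weight
            new_out.weight[:, -1] = 0.0               # zero outgoing weights: same output
            new_out.bias.copy_(layer_out.bias)
        return new_in, new_out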

Managing the process of making thousands of small incremental improvements would be difficult to do manually. This management task might be done by a fully automatic learning management system. If the network architecture is optimized by making a large number of small incremental changes, a process called "reinforcement learning" may be used. However, some of the decisions to be made in changing the network architecture may benefit from human judgment and intuition. Thus, the exploration of network architectures may be done best by a cooperative human+AI learning management system using human-assisted reinforcement learning, as sketched below. To carry the idea further, the AI in the learning management system may itself be trained by human-assisted reinforcement learning.
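
Below is a minimal sketch of what human-assisted reinforcement learning for this task might look like. Every name in it (policy, apply_change, evaluate, ask_human) is a hypothetical placeholder rather than a specified interface: the policy proposes a small architectural change, a human may approve, modify, or veto it, and the resulting change in validation loss is used as the reward for updating the policy.

    def human_assisted_architecture_search(policy, apply_change, evaluate, ask_human, steps=1000):
        # policy.propose() suggests a small change, e.g. "add a node to layer 3".
        # ask_human() lets a person approve, modify, or veto the proposal.
        # evaluate() returns a validation loss; its improvement is the reward.
        baseline = evaluate()
        for _ in range(steps):
            proposal = policy.propose()
            decision = ask_human(proposal)             # None means the human vetoed it
            if decision is None:
                policy.update(proposal, reward=-0.1)   # mild penalty for vetoed ideas
                continue
            apply_change(decision)
            new_loss = evaluate()
            policy.update(decision, reward=baseline - new_loss)
            baseline = new_loss
        return policy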



by James K Baker and Bradley J Baker

© D5AI LLC, 2020

The text in this work is licensed under a Creative Commons Attribution 4.0 International License.
Some of the ideas presented here are covered by issued or pending patents. No license to such patents is created or implied by publication or by any reference herein.