The history of machine learning has largely been a story of increasing
abstraction. In the early days of ML, researchers spent considerable effort
engineering features by hand. As deep learning gained popularity, researchers then
shifted towards tuning the update rules and learning rates for their
optimizers. Recent research in meta-learning has climbed one level of
abstraction higher: many researchers now spend their days manually constructing
task distributions, from which they can automatically learn good optimizers.
What might be the next rung on this ladder? In this post we introduce theory
and algorithms for unsupervised meta-learning, where machine learning
algorithms themselves propose their own task distributions. Unsupervised
meta-learning further reduces the amount of human supervision required to solve
tasks, potentially inserting a new rung on this ladder of abstraction.
Robots have been useful in environments that can be carefully controlled, such
as those commonly found in industrial settings (e.g. assembly lines). However,
in unstructured settings like the home, we need robotic systems that are
adaptive to the diversity of the real world.
The interpretability of neural networks is becoming increasingly necessary, as
deep learning is being adopted in settings where accurate and justifiable
predictions are required. These applications range from finance to medical
imaging. However, deep neural networks are notorious for offering little
justification for their predictions. Explainable AI (XAI) attempts to bridge
this divide between accuracy and interpretability, but as we explain below,
XAI justifies decisions without interpreting the model directly.
Quadruped robot learning locomotion skills by imitating a dog.
Whether it’s a dog chasing after a ball, or a monkey swinging through the
trees, animals can effortlessly perform an incredibly rich repertoire of agile
locomotion skills. But designing controllers that enable legged robots to
replicate these agile behaviors can be a very challenging task. The superior
agility seen in animals, as compared to robots, might lead one to wonder: can
we create more agile robotic controllers with less effort by directly imitating
animals?
In this work, we present a framework for learning robotic locomotion skills by
imitating animals. Given a reference motion clip recorded from an animal (e.g.
a dog), our framework uses reinforcement learning to train a control policy
that enables a robot to imitate the motion in the real world. Then, by simply
providing the system with different reference motions, we are able to train a
quadruped robot to perform a diverse set of agile behaviors, ranging from fast
walking gaits to dynamic hops and turns. The policies are trained primarily in
simulation, and then transferred to the real world using a latent space
adaptation technique, which is able to efficiently adapt a policy using only a
few minutes of data from the real robot.
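To make this concrete, here is a minimal sketch of the kind of per-timestep tracking reward a motion-imitation framework like this might use. The function below, along with its weights and scaling constants, is an illustrative assumption, not the exact reward in our system:

```python
import numpy as np

def imitation_reward(robot_pose, ref_pose, robot_vel, ref_vel):
    """Illustrative tracking reward: higher when the robot's pose and
    velocity closely match the reference motion at this timestep."""
    pose_err = np.sum((ref_pose - robot_pose) ** 2)
    vel_err = np.sum((ref_vel - robot_vel) ** 2)
    # Exponentiated errors keep each term bounded in (0, 1]: perfect
    # tracking yields 1, and large deviations decay smoothly toward 0.
    pose_reward = np.exp(-2.0 * pose_err)
    vel_reward = np.exp(-0.1 * vel_err)
    return 0.5 * pose_reward + 0.5 * vel_reward
```

Summing a reward of this form over an episode encourages the policy to track the reference trajectory closely, while still leaving it free to deviate when the robot's dynamics require it.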
It is critical that RL policies be robust, both to naturally occurring
distribution shift and to malicious attacks by adversaries. Unfortunately, we
find that RL policies that perform well in normal situations can harbor
serious vulnerabilities that an adversary can exploit.
Reinforcement learning has seen a great deal of success in solving complex decision-making problems, ranging from robotics to games to supply chain management to recommender systems. Despite these successes, deep reinforcement learning algorithms can be exceptionally difficult to use, due to unstable training, sensitivity to hyperparameters, and generally unpredictable and poorly understood convergence properties. Multiple explanations, and corresponding solutions, have been proposed for improving the stability of such methods, and we have seen good progress on these algorithms over the last few years. In this blog post, we will dive deep into a central and underexplored reason behind some of the problems with the class of deep RL algorithms based on dynamic programming, which encompasses the popular DQN and soft actor-critic (SAC) algorithms: the detrimental connection between data distributions and learned models.
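To see where the data distribution enters, consider a generic DQN-style Bellman backup. This is a simplified sketch of the idea, not the implementation of DQN or SAC:

```python
import torch
import torch.nn.functional as F

def bellman_backup_loss(q_net, target_net, batch, gamma=0.99):
    """Simplified DQN-style loss. The expectation is taken only over
    (s, a, r, s') tuples drawn from the replay buffer, so the learned
    Q-function is never corrected on states and actions the data misses."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target: maximizing over actions can query regions
        # of the Q-function that the buffer rarely or never covers.
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```

Because the loss is averaged only over transitions the agent happened to collect, errors in the Q-function on out-of-distribution actions can go uncorrected, and the bootstrapped targets can then propagate those errors further.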
Look at the images above. If I asked you to bring me a picnic blanket in the
grassy field, would you be able to? Of course. If I asked you to bring over a
cart full of food for a party, would you push the cart along the paved path or
on the grass? Obviously the paved path.
In deep learning, using more compute (e.g., increasing model size, dataset
size, or training steps) often leads to higher accuracy. This is especially
true given the recent success of unsupervised pretraining methods like
BERT, which scale to very large models and datasets.
Unfortunately, large-scale training is very computationally expensive,
especially without the hardware resources of large industry research labs.
Thus, the goal in practice is usually to get high accuracy without exceeding
one’s hardware budget and training time.
For most training budgets, very large models appear impractical. Instead, the
go-to strategy for maximizing training efficiency is to use models with small
hidden sizes or few layers because these models run faster and use less memory.
In this blog post, we share our experiences in developing two critical software
libraries that many BAIR researchers use to execute large-scale AI
experiments: Ray Tune and the Ray Cluster Launcher, both of which now
back many popular open-source AI libraries.
As AI research becomes more compute intensive, many AI researchers have become
squeezed for time and resources. Many researchers now rely on cloud providers
like Amazon Web Services or Google Cloud Platform to access the huge amounts
of computational resources necessary for training large models.
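As a concrete illustration, a small hyperparameter sweep with Ray Tune's classic tune.run API looks roughly like this (the training function is a stand-in; newer Ray releases also offer a Tuner interface):

```python
from ray import tune

def train_model(config):
    # Stand-in training loop; a real experiment would build and fit a model.
    accuracy = 0.0
    for step in range(100):
        accuracy += config["lr"]  # hypothetical metric for illustration
        tune.report(mean_accuracy=accuracy)

# Run one trial per grid point; Tune schedules them across available resources.
analysis = tune.run(
    train_model,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])},
)
print(analysis.get_best_config(metric="mean_accuracy", mode="max"))
```

The same script scales from a laptop to a cluster: the Ray Cluster Launcher provisions the cloud machines, and Tune distributes the trials across them without changes to the experiment code.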
All living organisms carve out environmental niches within which they can
maintain relative predictability amidst the ever-increasing entropy around
them (1), (2).
Humans, for example, go to great lengths to shield themselves from surprise —
we band together in millions to build cities with homes, supplying water, food,
gas, and electricity to control the deterioration of our bodies and living
spaces amidst heat and cold, wind and storm. The need to discover and maintain
such surprise-free equilibria has driven great resourcefulness and skill in
organisms across very diverse natural habitats. Motivated by this, we ask:
could the motive of preserving order amidst chaos guide the automatic
acquisition of useful behaviors in artificial agents?
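One way to make this question concrete is to reward an agent for visiting states that are likely under a density model fit to its own experience. The following is a hypothetical minimal sketch of that idea, using a running diagonal Gaussian as the state model; it illustrates the principle rather than any particular method:

```python
import numpy as np

class SurpriseMinimizingReward:
    """Hypothetical sketch: reward the agent for staying in states that
    are likely under a density model fit to its own past experience."""

    def __init__(self, state_dim):
        self.mean = np.zeros(state_dim)
        self.var = np.ones(state_dim)
        self.count = 0

    def update(self, state):
        # Welford-style running estimate of a diagonal Gaussian state model.
        self.count += 1
        delta = state - self.mean
        self.mean += delta / self.count
        self.var += (delta * (state - self.mean) - self.var) / self.count

    def reward(self, state):
        # Log-likelihood under the current model: familiar states score
        # high, surprising states score low.
        var = np.maximum(self.var, 1e-6)
        return -0.5 * np.sum(np.log(2 * np.pi * var)
                             + (state - self.mean) ** 2 / var)
```

An agent trained to maximize this reward is pushed to find and maintain stable, predictable regions of its environment, mirroring the niche-preserving behavior described above.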