Eric Jang | Robots Must Be Ephemeralized | 2021-09-20

There is a subfield of robotics research called “sim-to-real” (sim2real), in which one attempts to solve a robotic task in simulation and then get a real robot to do the same thing in the real world. My team at Google utilizes sim2real techniques extensively in pretty much every domain we study, including <a href="http://arxiv.org/pdf/1804.10332.pdf">locomotion</a>, <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9068484">navigation</a>, and <a href="https://arxiv.org/pdf/2011.03148.pdf">manipulation</a>. <span id="docs-internal-guid-56b536d6-7fff-317e-62c4-2f69c41a4d91"><p dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><br /></p>The arguments for doing robotics research in simulation are well known in the community: greater statistical reproducibility, fewer safety concerns during learning, and avoiding the operational complexity of maintaining thousands of robots that wear down at different rates. Sim2real is used heavily on quadruped and five-finger hand platforms because, at present, such hardware can only be operated for a few hundred trials before it starts to wear down or break. When the dynamics of the system vary from episode to episode, learning becomes even more difficult. <br /><br />In a previous <a href="https://blog.evjang.com/2021/03/learning-robots.html">blog post</a>, I also discussed how iterating in simulation solves some tricky problems around new code changes invalidating old data. Simulation makes this a non-issue because it is relatively cheap to re-generate your dataset every time you change the code. 
<br /><br />Despite significant sim2real advances in the last decade, I must confess that three years ago, I was still somewhat ideologically opposed to doing robotics research in simulation, on the grounds that we should revel in the richness and complexity of real data, as opposed to perpetually staying in the safe waters of simulation. <br /><br />Following those beliefs, I worked on a <a href="https://www.youtube.com/watch?v=DFT4DPMVg1w">three-year-long robotics project</a> where our team eschewed simulation and focused the majority of our time on iterating in the real world (mea culpa). That project was a success, and the <a href="https://openreview.net/forum?id=8kbp23tSGYv">paper will be presented at the 2021 Conference on Robot Learning</a>. However, in the process, I learned some hard lessons that completely reversed my stance on sim2real and offline policy evaluation. I now believe that offline evaluation technology is no longer optional if you are studying general-purpose robots, and I have pivoted my research workflows to rely much more heavily on these methods. In this blog post, I outline why it is tempting for roboticists to iterate directly in real life, and how the difficulty of evaluating general-purpose robots will eventually force us to increasingly rely on offline evaluation techniques such as simulation.</span><div><span><br /></span></div><div><br /><h3 style="text-align: left;"><span>Two Flavors of Sim2Real</span></h3><br />I’m going to assume the reader is familiar with basic sim2real techniques. If not, please check out this <a href="https://sim2real.github.io/">RSS 2020 workshop website</a> for tutorial videos. There are broadly two ways to formalize sim2real problems.<br /><br />One approach is to create an “adapter” that transforms simulated sensor readings to resemble real data as much as possible, so that a robot trained in simulation behaves indistinguishably in both simulation and reality. 
Progress on generative modeling techniques such as GANs has enabled this to work even for natural images.<br /><br />Another formulation of the sim2real problem is to train simulated robots under lots of randomized conditions. In becoming robust under varied conditions, the simulated policy can treat the real world as just another instance under the training distribution. OpenAI’s <a href="https://openai.com/blog/learning-dexterity/">Dactyl</a> took this “domain randomization” approach, and was able to get the robot to manipulate a Rubik’s cube without ever doing policy learning on real data. <br /><br />Both the domain adaptation and domain randomization approaches in practice yield similar results when transferred to real, so their technical differences are not super important. The takeaway is that the policy is learned and evaluated on simulated data, then deployed in real with fingers crossed.<br /><br /><h3 style="text-align: left;"><span>The Case For Iterating Directly In Real</span></h3><div><br /></div>Three years ago, my primary arguments against sim were related to the richness of data available to real vs. simulated robots:<br /><ol style="text-align: left;"><li>Reality is messy and complicated. It takes regular upkeep and effort to maintain neatness for a desk or bedroom or apartment. Meanwhile, robot simulations tend to be neat and sterile by default, with not a lot of “messiness” going on. In simulation, you must put in extra work to increase disorder, whereas in the real world, <a href="https://en.wikipedia.org/wiki/Second_law_of_thermodynamics">entropy increases for free</a>. This acts as a forcing function for roboticists to focus on the scalable methods that can handle the complexity of the real world.</li><li>Some things are inherently difficult to simulate - in the real world, you can have robots interact with all manner of squishy toys and articulated objects and tools. Bringing those objects into a simulation is incredibly difficult. 
Even if one uses photogrammetry technology to scan objects, one still needs to set-dress objects in the scene to make a virtual world resemble a real one. Meanwhile, in the real world one can collect rich and diverse data by simply grabbing the nearest household object - no coding required.</li><li>Bridging the “reality gap” is a hard research problem (often requiring training high-dimensional generative models), and it’s hard to know whether these models are helping until one is running actual robot policies in the real world anyway. It felt more pragmatic to focus on direct policy learning in the test setting, where one does not have to wonder whether their training distribution differs from their test distribution.</li></ol><br />To put those beliefs into context, at the time, I had just finished working on <a href="https://ai.googleblog.com/2018/12/grasp2vec-learning-object.html">Grasp2Vec</a> and <a href="https://sermanet.github.io/tcn/">Time-Contrastive-Networks</a>, both of which leveraged rich real-world data to learn interesting representations. The neat thing about these papers was that we could train these models on whatever object (Grasp2Vec) or video demonstration (TCN) the researcher felt like mixing into the training data, and scale up the system without writing a single line of code. For instance, if you want to gather a teleoperated demonstration of a robot playing with a Rubik’s cube, you simply need to buy a Rubik’s cube from a store and put it into the robot workspace. In simulation, you would have to model a simulated equivalent of a Rubik’s cube that twists and turns just like a real one - this can be a multi-week effort just to align the physical dynamics correctly. 
It didn’t hurt that the models “just worked”: there wasn’t much iteration needed on the modeling front for us to start seeing cool generalization.<br /><br />There were two more frivolous reasons I didn’t like sim2real:</div><div><b><br /></b></div><div><b>Aesthetics: </b>Methods that learn in simulation often rely on crutches that are only possible in simulation, not in the real world. For example, using millions of trials with an online policy-gradient method (PPO, TRPO) or the ability to reset the simulation over and over again. As someone who is inspired by the sample efficiency of humans and animals, and who believes in the <a href="https://www.youtube.com/watch?v=Ount2Y4qxQo">LeCake narrative</a> of using unsupervised learning algorithms on rich data, relying on a “simulation crutch” to learn feels too ham-handed. A human doesn’t need to suffer a fatal accident to learn how to drive a car.<br /><br /><b>A “no-true-Scotsman” bias: </b>I think there is a tendency for people who spend all their time iterating in simulation to forget the operational complexity of the real world. Truthfully, I may have just been envious of others who were publishing 3-4 papers a year on new ideas in simulated domains, while I was spending time answering questions like “why is the gripper closing so slowly?” <br /><br /><br /><h3 style="text-align: left;">Suffering From Success: Evaluating General Purpose Robots</h3><br />So how did I change my mind? Many researchers at the intersection of ML and Robotics are working towards the holy grail of “generalist robots that can do anything humans ask them”. Once you have the beginnings of such a system, you start to notice a host of new research problems you didn’t think of before, and this is how I came to realize that I was wrong about simulation. <br /><br />In particular, there is a “Problem of Success”: how do we go about improving such generalist robots? 
If the success rate is middling, say, 50%, how do we accurately evaluate a system that can generalize to thousands or millions of operating conditions? The feeling of elation that a real robot has learned to do hundreds of things -- perhaps even things that people didn’t train them for -- is quickly overshadowed by uncertainty and dread of what to try next. <br /><br />Let’s consider, for example, a generalist cooking robot - perhaps a <a href="https://techcrunch.com/2021/08/19/musk-the-tesla-bot-is-coming/">bipedal humanoid</a> that one might deploy in any home kitchen to cook any dish, including Wozniak’s <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence#Tests_for_confirming_human-level_AGI">Coffee Test</a> (A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons). <br /><br />In research, a common metric we’d like to know is the average success rate - what is the overall success rate of the robot at performing a number of different tasks around the kitchen?<br /><br />In order to estimate this quantity, we must average over the set of all things the robot is supposed to generalize to, by sampling different tasks, different starting configurations of objects, different environments, different lighting conditions, and so on.</div><div><span style="font-family: Monaco; font-size: 12px;"><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5nD3RilmKenhiCwGjXeYtl0nukBJT1_3QXbdjUfLfd98YfOGMX1mWSawd9BFmK9hdsCMe7U9i8lbLqXnT6Z7Bp5qB3_k2sCbXJf3ICBXmS3_xtqRehjWBVWlMplifW5EKv5_C5ROK0EQ/s3166/p%2528_text_success_.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="284" data-original-width="3166" height="58" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5nD3RilmKenhiCwGjXeYtl0nukBJT1_3QXbdjUfLfd98YfOGMX1mWSawd9BFmK9hdsCMe7U9i8lbLqXnT6Z7Bp5qB3_k2sCbXJf3ICBXmS3_xtqRehjWBVWlMplifW5EKv5_C5ROK0EQ/w640-h58/p%2528_text_success_.png" width="640" /></a></div><br /><span style="font-family: Monaco; font-size: 12px;"><br /></span></div><div><span style="font-family: Monaco; font-size: 12px;"><br /></span></div><div><span>For a single scenario, it takes a substantial number of trials to measure success rates with single-digit precision:<br /><ul style="text-align: left;"><li><span><a href="http://www.nowozin.net/sebastian/blog/how-to-report-uncertainty.html">http://www.nowozin.net/sebastian/blog/how-to-report-uncertainty.html</a></span></li><li><span><a href="https://towardsdatascience.com/digit-significance-in-machine-learning-dea05dd6b85b">https://towardsdatascience.com/digit-significance-in-machine-learning-dea05dd6b85b</a></span></li><li><span><a href="https://stats.stackexchange.com/questions/322953/number-of-significant-figures-to-report-for-a-confidence-interval">https://stats.stackexchange.com/questions/322953/number-of-significant-figures-to-report-for-a-confidence-interval</a></span></li></ul><br />The standard error of a binomial success-rate estimate is given by sqrt(P*(1-P)/N), where P is the sample mean and N is the sample size. If your empirical mean of the success rate is 50% under N=5000 samples, this equation tells you that the standard error is about 0.007. A more intuitive way to understand this is in terms of a confidence interval: there is a 95% epistemic probability that the true mean, which may not be exactly 50%, lies within the range [50 - 1.4, 50 + 1.4]. <br /><br />5000 trials is a lot of work! 
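These numbers are easy to reproduce in a few lines; this is a minimal sketch using only the normal approximation (the 1.96 z-value and the target precision are the usual conventions, not anything specific to robotics):

```python
import math

def binomial_ci_halfwidth(p, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence interval
    for a binomial success rate estimated from n trials."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with N = 5000 evaluation episodes:
hw = binomial_ci_halfwidth(0.5, 5000)
print(f"95% CI: 50 +/- {100 * hw:.1f} percentage points")  # ~ +/- 1.4

# Episodes needed to pin the success rate down to +/- 1 percentage point:
n = 1
while binomial_ci_halfwidth(0.5, n) > 0.01:
    n += 1
print(n)  # roughly 9600 episodes
```

Note how the required sample size grows quadratically as you tighten the interval, which is exactly why each extra significant digit is so expensive.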
Real-world robotics experiments rarely run anywhere near 300, let alone 3000, evaluations to measure task success.<br /><br />From <a href="https://towardsdatascience.com/digit-significance-in-machine-learning-dea05dd6b85b">Vincent Vanhoucke’s blog post</a>, here is a table drawing a connection from your sample size (under the worst case of p=50%, which maximizes standard error) to the number of significant digits you can report:<br /><br /><br /><img height="242" src="https://lh5.googleusercontent.com/S-q3jBkPv2f6zNkYqq8SlJC3W0VhxJ_x7qs5JnsDhvIYT2_I26XEp59qo0vHVilaf1AkHVCZ_bcuOxNdt2GVv5XR_W--EEfoN2-X_HTtn5dMw0O6--wUg-TmOT4Mowrs_wRtM0sB=w640-h242" width="640" /><br /><br /><br />Depending on the length of the task, it could take all day or all week or all month to run one experiment. Furthermore, until robots are sufficiently capable of resetting their own workspaces, a human supervisor needs to reset the workspace over and over again as one goes through the evaluation tasks. <br /><br />One consequence of these napkin calculations is that pushing the frontier of robotic capability requires either a series of incremental advances (e.g. 1% at a time) with extremely costly evaluation (5000 episodes per iteration), or a series of truly quantum advances that are so large in magnitude that it takes very few samples to know that the result is significant. Going from “not working at all” to “kind of working” is one example of a large statistical leap, but in general it is hard to pull these out of the hat over and over again.<br /><br />Techniques like A/B testing can help reduce the variance of estimating whether one model is better than another, but it still does not address the problem of the sample complexity of evaluation growing exponentially with the diversity of conditions the ML models are expected to generalize to.<br /><br />What about a high-variance, unbiased estimator? 
One approach would be to sample a location at random, then a task at random, and then an initial scene configuration at random, and then aggregate thousands of such trials into a single “overall success estimator”. This is tricky to work with because it does not help the researcher drill into problems where learning under one set of conditions causes catastrophic forgetting of another set of conditions. Furthermore, if the number of training tasks is many times larger than the number of evaluation samples and task successes are not independent, then there will be high variance in the overall success estimate.<br /><br />What about evaluating general robots with a biased, low-variance estimator of the overall task success? We could train a cooking robot to make millions of dishes, but only evaluate on a few specific conditions - for example, measuring the robot’s ability to make banana bread and using that as an estimator for its ability to do all the other tasks. Catastrophic forgetting is still a problem - if the success rate of making banana bread is inversely correlated with the success rate of making stir-fry, then you may be crippling the robot in ways that you are no longer measuring. Even if that isn’t a problem, having to collect 5000 trials limits the number of experiments one can evaluate on any given day. Also, you end up with a lot of surplus banana bread.<br /><br />The following is a piece of career advice, rather than a scientific claim: in general you should strive to be in a position where your productivity bottleneck is the number of ideas you can come up with in a single day, rather than some physical constraint that limits you to one experiment per day. 
This is true in any scientific field, whether it be in biology or robotics.<br /><br /><b>Lesson: Scaling up in reality is fast because it requires little to no additional coding, but once you have a partially working system, careful empirical evaluation in real life becomes increasingly difficult as you increase the generality of the system.</b><br /><br /><br /></span><h3 style="text-align: left;"><span>Ephemeralization</span></h3><span><div><span><br /></span></div>In his 2011 essay <a href="https://a16z.com/2011/08/20/why-software-is-eating-the-world/">Software is Eating The World</a>, venture capitalist Marc Andreessen pointed out that more and more of the value chain in every sector of the world was being captured by software companies. In the ensuing decade, Andreessen has refined his idea further to point out that “Software Eating The World” is a continuation of a technological trend, Ephemeralization, that precedes even the computer age. From Wikipedia:<br /><br /><i>Ephemeralization, a term coined by <a href="https://en.wikipedia.org/wiki/Buckminster_Fuller">R. Buckminster Fuller</a> in 1938, is the ability of technological advancement to do "more and more with less and less until eventually you can do everything with nothing," <br /></i><br />Consistent with this theme, I believe the solution to scaling up generalist robotics is to push as much of the iteration loop into software as possible, so that the researcher is freed from the sheer slowness of having to iterate in the real world. <br /><br />Andreessen has posed the question of how future markets and industries might change when everybody has access to such massive leverage via “infinite compute”. ML researchers know that “infinite” is a generous approximation - it still costs <a href="https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/">12M USD</a> to train a GPT-3 level language model. 
However, Andreessen is directionally correct - we should dare to imagine a near future where compute power is practically limitless to the average person, and let our careers ride this tailwind of massive compute expansion. Compute and informational leverage are probably still the fastest growing resources in the world. <br /><br />Software is also eating research. I used to work in a biology lab at UCSF, where only a fraction of postdoc time was spent thinking about the science and designing experiments. The majority of time was spent pipetting liquids into PCR plates, making gel media, inoculating petri dishes, and generally moving liquids around between test tubes. Today, it is possible to run a number of “standard biology protocols” in the cloud, and one could conceivably spend most of their time focusing on the high-brow experiment design and analysis rather than manual labor. <br /><br /><br /><img height="375" src="https://lh3.googleusercontent.com/TQ-gLJE_QKgO--Chij3YguWatkkBeXnyQIQfa5v8jGGeftIW0YGrMMl7Oh6yz9AjWope7VnrUeU8u3k6jzyjfSv8fzYCEewe4AFnIE4wR165U6COBw1XeulfUpbB6A7T7xso-yfX=w640-h375" width="640" /><br /><br /><br />Imagine a near future where instead of doing experiments on real mice, we instead simulate a highly <a href="https://deepmind.com/research/publications/2019/Deep-neuroethology-of-a-virtual-rodent">accurate mouse behavioral model</a>. If such models turn out to be accurate, then medical science will be revolutionized overnight by virtue of researchers being able to launch massive-scale studies with billions of simulated mouse models. A single lab might be able to replicate a hundred years of mouse behavioral studies practically overnight. A scientist working on a laptop from a coffee shop might be able to design a drug, run clinical trials on it using a variety of cloud services, and get it FDA approved all from her laptop. 
When this happens, Fuller’s prediction will come true and it really will seem as if we can do “everything with nothing”.<br /><br /><br /></span><h3 style="text-align: left;"><span>Ephemeralization for Robotics</span></h3><span><br /><br />The most obvious way to ephemeralize robot learning in software is to make simulations that resemble reality as closely as possible. Simulators are not perfect - they still suffer from the reality gap and data richness problems that originally made me skeptical of iterating in simulation. But, having worked on general purpose robots directly in the real world, I now believe that people who want high-growth careers should actively seek workflows with the highest leverage, even if it means putting in the legwork to make a simulation as close to reality as possible. <br /><br />There may be ways to ephemeralize robotic evaluation without having to painstakingly hand-design Rubik’s cubes and human behavior into your physics engine. One solution is to use machine learning to learn world models from data, and have the <a href="https://arxiv.org/pdf/1802.10592.pdf">policy interact with the world model</a> instead of the real world for evaluation. If learning high-dimensional generative models is too hard, there are <a href="https://arxiv.org/abs/1906.01624">off-policy evaluation methods</a> and <a href="https://arxiv.org/pdf/2007.09055.pdf">offline hyperparameter selection</a> methods that don’t necessarily require simulation infrastructure. The basic intuition is that if you have a value function for a good policy, you can use it to score other policies on your real world validation datasets. The downside to these methods is that they often require finding a good policy or value function to begin with, and are only accurate for ranking policies up to the level of the aforementioned policy itself. 
A Q(s,a) function for a policy with a 70% success rate can tell you if your new model is performing around 70% or 30%, but is not effective at telling you whether you will get 95% (since these models don’t know what they don’t know). Some <a href="https://arxiv.org/abs/1907.03976">preliminary research</a> suggests that extrapolation can be possible, but it has not yet been demonstrated at the scale of evaluating general-purpose robots on millions of different conditions. <br /><br />What are some alternatives to more realistic simulators? Much like the “lab in the cloud” business, there are some emerging cloud-hosted benchmarks such as <a href="https://ai2thor.allenai.org/">AI2Thor</a> and <a href="https://real-robot-challenge.com/">MPI’s Real Robot Challenge</a>, where researchers can simply upload their code and get back results. The robot cloud provider handles all of the operational aspects of physical robots, freeing the researcher to focus on software. <br /><br /><br /><img height="426" src="https://lh4.googleusercontent.com/x4qxYiRoW5jGgfhSrCK3LGy4T2i87kx1F9GYChsEP0qevizzm_s_b39nbmmTWwIEm5rXHB7jB14Pqe-yyqDnEdjhIDOxHWqJLYw89pLRsaiuaFZrA9Sh47-9rn5PnX7mEGC8Z8fJ=w640-h426" width="640" /><br /><br /><br />One drawback is that these hosted platforms are designed for repeatable, resettable experiments, and do not have the diversity that general purpose robots would be exposed to.<br /><br />Alternatively, one could follow the Tesla Autopilot approach and deploy their research code in “shadow mode” across a fleet of robots in the real world, where the model only makes predictions but does not make control decisions. This exposes evaluation to high-diversity data that cloud benchmarks don’t have, but suffers from the long-term credit assignment problem. How do we know whether a predicted action is good or not if the agent isn’t allowed to take those actions? 
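The value-function scoring idea mentioned earlier (use a critic trained for a good policy to rank candidate policies on logged real-world states) can be sketched in a few lines. Everything below is a toy stand-in: the critic, the logged states, and both policies are made up for illustration, whereas in practice the critic would be a trained Q-network and the states would come from real robot logs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in critic: rewards actions that cancel out the state.
# In practice this would be a Q-network fit on real robot experience.
def q_function(states, actions):
    return -np.sum((actions + states) ** 2, axis=-1)

# Stand-in for a validation set of states from real-world logs.
logged_states = rng.normal(size=(1000, 4))

def score_policy(policy, states):
    """Offline ranking score: average critic value of the policy's actions."""
    return float(np.mean(q_function(states, policy(states))))

good_policy = lambda s: -s             # acts to cancel the state
bad_policy = lambda s: np.zeros_like(s)  # does nothing

print(score_policy(good_policy, logged_states))  # higher is better
print(score_policy(bad_policy, logged_states))   # clearly lower
```

Consistent with the caveat above, this ranking is only trustworthy for policies no better than the one the critic was trained for; the critic has no way to recognize behavior outside its training distribution.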
<br /><br />For these reasons, I think data-driven realistic simulation gets the best of both worlds - you get the benefits of diverse real-world data and the ability to evaluate simulated long-term outcomes. Even if you are relying heavily on real-world evaluations via a hosted cloud robotics lab or a fleet running Shadow Mode, a complementary software-only evaluation provides additional signal that can only help save costs and time. <br /><br />I suspect that a practical middle ground is to combine multiple signals from offline metrics to predict success rate: leveraging simulation to measure success rates, training world models or value functions to help predict what will happen in “imagined rollouts”, adapting simulation images to real-like data with GANs, and using old-fashioned data science techniques (logistic regression) to study the correlations between these offline metrics and real evaluated success. As we build more general AI systems that interact with the real world, I predict that there will be cottage industries dedicated to building simulators for sim2real evaluation, and data scientists who build bespoke models for guessing the result of expensive real-world evaluations.<br /><br />Separately from how ephemeralization drives down the cost of evaluating robots in the real world, there is the effect of ephemeralization driving down the cost of robot hardware itself. It used to be that robotics labs could only afford a couple of expensive robot arms from Kuka and Franka. Each robot would cost hundreds of thousands of dollars, because they had precisely engineered encoders and motors that enabled millimeter-level precision. Nowadays, you can buy some cheap servos from AliExpress.com for a few hundred dollars, glue them to some metal plates, and control them in a closed-loop manner using a webcam and a neural network running on a laptop. 
<br /><br /><img height="432" src="https://lh5.googleusercontent.com/zUXFaTty3QupGJxj3CkeTn9J6_k5sEU2D6utkuIOtuTOIyZ4Q4YlDawao92ooRWUh5dxEkhsc53t9SGduFhsAa-GKzTSuZiHAnn4KFdEbzeAmE6e2yo9z8QG1N9H2BakWuE8ZsLE=w640-h432" width="640" /><br /><br /><br />Instead of relying on precise hardware position control, the arm moves based purely on vision and hand-eye coordination. All the complexity has been migrated from hardware to software (and machine learning). This technology is not mature enough yet for factories and automotive companies to replace their precision machines with cheap servos, but the writing is on the wall: software is coming for hardware, and this trend will only accelerate.<br /><br /><br /></span><h3 style="text-align: left;"><span>Acknowledgements</span></h3><span>Thanks to Karen Yang, Irhum Shafkat, Gary Lai, Jiaying Xu, Casey Chu, Vincent Vanhoucke, Kanishka Rao for reviewing earlier drafts of this essay.<div><span style="font-family: Arial; font-size: 11pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div></span></div>

Eric Jang | ML Mentorship: Some Q/A about RL | 2021-07-30

<p>One of my <a href="https://blog.evjang.com/2020/06/free-office-hours-for-non-traditional.html">ML research mentees</a> is following OpenAI's Spinning Up in RL tutorials (thanks to the nice folks who put that guide together!). She emailed me some good questions about the <a href="https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html#">basics of Reinforcement Learning</a>, and I wanted to share some of my replies on my blog in case they help further other students' understanding of RL. 
</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4BmH5QJny5_GgqqWKwjfYopNwIhsEmkwVqLmskhJoK8ybbg8QVftHJQSERJG9eP9HmIt8JvCDG1oVJOz_snFKcVL1rlmw3JBQcECZUkOf2oBmF4vYF8lUZ7KB6ageM6WeANuBg2r43Ac/s908/rl_basics.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="350" data-original-width="908" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4BmH5QJny5_GgqqWKwjfYopNwIhsEmkwVqLmskhJoK8ybbg8QVftHJQSERJG9eP9HmIt8JvCDG1oVJOz_snFKcVL1rlmw3JBQcECZUkOf2oBmF4vYF8lUZ7KB6ageM6WeANuBg2r43Ac/w400-h154/rl_basics.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The classic Sutton and Barto diagram of RL</td></tr></tbody></table><p><br /></p><p><br /></p><i><b>Your “<a href="https://blog.evjang.com/2021/01/understanding-ml.html">How to Understand ML Papers Quickly</a>” blog post recommended asking ourselves “what loss supervises the output predictions” when reading ML papers. However, in SpinningUp, it mentions that “<a href="https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html">minimizing the ‘loss’ function has no guarantee whatsoever of improving expected return</a>” and “loss function means nothing.” In this case, what should we look for instead when reading DRL papers if not the loss function? <br /></b></i><br /><br />Policy optimization algorithms like PPO train by minimizing some loss, which in the most naive implementation is the (negative) expected return at the current policy's parameters. So in reference to my blog post, this is the "policy gradient loss" that supervises the <i>current policy's </i>predictions. 
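To make that concrete, here is a toy numpy sketch of the naive policy-gradient ("REINFORCE") loss for a linear softmax policy over discrete actions. The states, actions, and returns below are random stand-ins for data collected by the current policy:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy setup: 3 discrete actions, linear policy over 5-dim states.
theta = rng.normal(size=(5, 3))
states = rng.normal(size=(64, 5))   # stand-in for states visited by the current policy
probs = softmax(states @ theta)
actions = np.array([rng.choice(3, p=p) for p in probs])
returns = rng.normal(size=64)       # stand-in for empirical returns R(tau)

# Policy-gradient loss: an estimate of the negative expected return via the
# log-derivative trick. Minimizing it pushes up the log-probabilities of
# actions that led to high returns.
log_probs = np.log(probs[np.arange(64), actions])
pg_loss = -np.mean(log_probs * returns)
print(pg_loss)
```

Note that the "labels" here (actions and returns) come from rolling out the current policy itself, which is what makes this loss different from a supervised one.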
<div><br /></div><div>It so happens that this loss function is defined with respect to data $\mathcal{D}(\pi^i)$ sampled by the *current* policy, rather than data sampled i.i.d from a fixed / offline dataset as commonly done in supervised learning. So if you change the policy from $\pi^i \to \pi^{i+1}$, then re-computing the policy gradient loss for $\pi^{i+1}$ requires collecting some new environment data $\mathcal{D}(\pi^{i+1})$ with $\pi^{i+1}$. Computing the loss function has special requirements (you have to annoyingly gather new data every time you update), but at the end of the day it is still a loss that supervises the training of a neural net, given parameters and data. </div><div><br /></div><div>On "loss function means nothing": the Spinning Up docs are correct in saying that the loss you minimize is not actually the evaluated performance of the policy, in the same way that minimizing cross entropy loss maximizes accuracy while not telling you what the accuracy is. In a similar vein, the loss value for $\pi^i, \mathcal{D}(\pi^i)$ is decreased after a policy gradient update. You can assume that if your new policy sampled the exact same trajectory as before, the resultant reward would be the same, but your loss would be lower. Vice versa, if your new policy samples a different trajectory, you can probably assume that there will be a monotonic increase in reward as a result of taking each policy gradient step (assuming step size is correct and that you could re-evaluate the loss under a sufficiently large distribution). </div><div><br /></div><div>However, you don't know how much decrease in loss translates to increase in reward, due to non-linear sensitivity between parameters and outputs, and further non-linear sensitivity between outputs and rewards returned by the environment. 
A simple illustrative example of this: a fine-grained manipulation task with sparse rewards, where the episode return is 1 if all actions are done within a 1e-3 tolerance, and 0 otherwise. A policy update might result in each of the actions improving the tolerance from 1e-2 to 5e-3, and this policy achieves a lower "loss" according to some Q function, but still has the same reward when re-evaluated in the environment.</div><div><br /></div><div>Thus, when training RL it is not uncommon to see the actor loss go down but the reward stay flat, or vice versa (the actor loss stays flat but the reward goes up). It's usually not a great sign to see the actor loss blow up though!</div><div><br /></div><div><br /></div><div><div><i><b>Why in DRL, people frequently set up algorithms to optimize the undiscounted return, but use discount factors in estimating value functions? <br /></b></i><br />See <a href="https://www.google.com/url?q=https://stats.stackexchange.com/questions/221402/understanding-the-role-of-the-discount-factor-in-reinforcement-learning&sa=D&source=editors&ust=1627670948029000&usg=AOvVaw1LGCxSYrZLmM1SDjrC8TxZ">https://stats.stackexchange.com/questions/221402/understanding-the-role-of-the-discount-factor-in-reinforcement-learning</a>. In addition to avoiding infinite sums from a mathematical perspective, the discount factor actually serves as an important hyperparameter when tuning RL agents. It biases the optimization landscape so that agents prefer the same reward sooner than later. Finishing an episode sooner also allows agents to see more episodes, which indirectly improves the amount of search and exploration a learning algorithm can do. Additionally, discounting produces a symmetry-breaking effect that further reduces the search space. In a sparse reward environment with a $\gamma=1$ (no discounting), an agent would be equally happy to do nothing on the first step, and then complete the task vs. do the task straight away. 
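That indifference under $\gamma=1$ is easy to check numerically with a toy sparse-reward episode (the reward sequences below are made up for illustration):

```python
def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Sparse reward of 1 on task completion. "Act now" finishes at step 2;
# idling for one step first finishes at step 3.
act_now = [0, 0, 1]
wait_then_act = [0, 0, 0, 1]

print(discounted_return(act_now, 1.0), discounted_return(wait_then_act, 1.0))
# 1.0 vs 1.0: the undiscounted agent is indifferent.

print(discounted_return(act_now, 0.99), discounted_return(wait_then_act, 0.99))
# ~0.980 vs ~0.970: with discounting, acting sooner is strictly preferred.
```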
Discounting makes the task easier to learn because the agent can learn that there is only one preferable action at the first step.</div><div><br /></div><div><i><b>In model-based RL, why does <a href="https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html">embedding planning loops into policies</a> make model bias less of a problem? <br /></b></i><br /><div>Here is an example that might illustrate how planning helps:</div><div><br />Given a good Q function $Q(s,a)$, you can recover a policy $\pi(a|s)$ by performing the search procedure $\arg\max_a Q(s,a)$ to find the action with the best expected (discounted) future returns. A search algorithm like grid search is computationally expensive, but guaranteed to work because it will cover all the possibilities.<br /><br />Imagine that instead of search, you use a neural network "actor" to amortize the "search" process into a single pass through a neural network. This is what Actor-Critic algorithms do: they learn a critic and use the critic to learn an actor, which performs an "amortized search" over $\arg\max_a Q(s,a)$.<br /><br />Whenever you can use brute force search on the critic instead of an actor, it is better to do so. This is because an actor network (amortized search) can make mistakes, while brute force search is slow but will not make a mistake.</div><div><br /></div><div>The above illustrates the simplest case of a 1-step planning algorithm, where "planning" is actually synonymous with "search". You can think of the act of searching for the best action with respect to $Q(s, a)$ as being equivalent to "planning for the best future outcome", where $Q(s,a)$ evaluates your plan. <br /><br />Now imagine you have a perfect model of dynamics, $p(s'|s,a)$, and an okay-ish Q function that has function approximation errors in some places. 
Instead of just selecting the best Q value and action at a given state, the agent can now roll the model forward and inspect the Q values it encounters at the next set of actions. By using a plan and an "imagined rollout" of the future, the agent can query $Q(s,a)$ along every state in the trajectory, and potentially notice inconsistencies in the Q function. For instance, Q might be high at the beginning of the episode but low at the end of the episode despite taking the greedy action at each state. This would immediately tell you that the Q function is unreliable for some states in the trajectory. </div><div><br /></div><div>A well-trained Q function should respect the Bellman equality, so if you have a Q function and a good dynamics model, then you can actually check your Q function for self-consistency at inference time to make sure it satisfies the Bellman equality, even before taking any actions. </div><div><br /></div><div>One way to think of a planning module is that it "wraps" a value function $Q_\pi(s,a)$ and gives you a slightly better version of the policy, since it uses search to consider more possibilities than the neural-net amortized policy $\pi(a|s)$. You can then take the trajectory data generated by the better policy and use that to further improve your search amortizer, which yields the "<a href="https://www.inference.vc/alphago-zero-policy-improvement-and-vector-fields/">minimal policy improvement technique</a>" perspective from Ferenc Huszár.</div><div><br /></div><div><br /></div><div><i><b>When talking about <a href="https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html">data augmentation for model-free methods</a>, what is the difference between “augment[ing] real experiences with fictitious ones in updating the agent” and “us[ing] only fictitious experience for updating the agent”? 
<br /></b></i><br />If you have a perfect world model, then all you need is to train an agent on "imaginary rollouts", and it will be exactly equivalent to training the agent on real experience. In robotics this is really nice because you can train purely in "mental simulation" without having to wear down your robots. <a href="https://arxiv.org/abs/1802.10592">Model-Ensemble TRPO</a> is a straightforward paper that tries these ideas.<br /><br />Of course, in practice no one ever learns a perfect world model, so it's common to use the fictitious (imagined) experience as a supplement to real interaction. The real interaction data provides some grounding in reality for both the imagination model and the policy training. <br /><br /><i><b>How to choose the <a href="https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html">baseline (function b) </a>in policy gradients? <br /></b></i><br />The baseline should be chosen to minimize the variance of the gradients while keeping the estimate of the learning signal unbiased. Here is a talk that covers this in more detail: <a href="https://www.youtube.com/watch?v=ItI_gMuT5hw">https://www.youtube.com/watch?v=ItI_gMuT5hw</a>. You can also google terms like "variance reduction policy gradient" and "control variates reinforcement learning". I have a blog post on variance reduction, which also discusses control variates: <a href="https://blog.evjang.com/2016/09/variance-reduction-part1.html">https://blog.evjang.com/2016/09/variance-reduction-part1.html</a><br /><br />Consider episode returns for 3 actions = [1, 10, 100]. Clearly the third action is by far the best, but if you take a naive policy gradient, you end up increasing the likelihood of the bad actions too! Typically $b=V(s)$ is sufficient, because it turns $Q(s,a)-V(s)$ into the advantage $A(s,a)$, which has the desired effect of increasing the likelihood of good actions, keeping the likelihood of neutral actions the same, and decreasing the likelihood of bad actions. 
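To make the [1, 10, 100] example concrete, here is a tiny numerical sketch (toy numbers, using numpy; the mean return stands in for $V(s)$) of how subtracting a baseline changes which actions get reinforced:

```python
import numpy as np

# Toy example: episode returns observed for 3 different actions.
returns = np.array([1.0, 10.0, 100.0])

# Naive REINFORCE weights each log-prob gradient by the raw return:
# all weights are positive, so every action's likelihood is pushed up.
naive_weights = returns

# Subtracting a baseline b = V(s) (here approximated by the mean return)
# yields advantages: negative for below-average actions, positive otherwise.
baseline = returns.mean()          # 37.0, a stand-in for V(s)
advantages = returns - baseline

print(naive_weights)   # [  1.  10. 100.] -> all actions reinforced
print(advantages)      # [-36. -27.  63.] -> only the best action reinforced
```

Both weightings give an unbiased gradient in expectation, but the advantage version no longer pushes up the likelihood of the two bad actions.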
<a href="https://www.blogger.com/#">Here is a paper</a> that applies an additional control variate on top of advantage estimation to further reduce variance. <br /><br /><br /><i><b>How to better understand <a href="https://spinningup.openai.com/en/latest/algorithms/td3.html">target policy smoothing </a>in TD3? </b></i><div><i><br /></i></div><div>In actor-critic methods, both the Q function and the actor are neural networks, so it can be very easy for gradient descent to find a region of high curvature in the Q function where the value is very high. You can think of the actor as a generator and the critic as a discriminator: the actor learns to "adversarially exploit" regions of curvature in the critic so as to maximize the Q value without actually emitting meaningful actions. </div><div><br /></div>All three of the tricks in TD3 are designed to mitigate the problem of the actor adversarially selecting an action with a pathologically high Q value. Adding noise to the input of the target Q network prevents the "search" from finding exact areas of high curvature. Like Trick 1, it helps make the Q function estimates more conservative, thus reducing the likelihood of choosing over-estimated Q values.<br /><div><br /></div><h4 style="text-align: left;">A Note on Categorizing RL Algorithms</h4><div><br /></div>RL is often taught in a taxonomic layout, as it helps to classify algorithms based on whether they are "model-based vs. model-free", "on-policy vs. off-policy", "supervised vs. unsupervised". But these categorizations are illusory, much like the Spoon in the Matrix. 
There are actually many different frameworks and schools of thought that allow one to independently derive the same RL algorithms, and they cannot always be neatly classified and separated from each other.<br /><br />For example, it is possible to derive actor-critic algorithms from both on-policy and off-policy perspectives.<br /><br />Starting from off-policy methods, you have DQN, which uses the inductive bias of the Bellman equality to learn optimal policies via dynamic programming. Then you can extend DQN to continuous actions via an actor network, arriving at DDPG. <br /><br />Starting from on-policy methods, you have REINFORCE, which is the vanilla policy gradient algorithm. You can add a value function as a control variate, and this requires learning a critic network. This again re-derives something like PPO or DDPG.<br /><br />So is DDPG an on-policy or off-policy algorithm? Depending on the frequency with which you update the critic vs. the actor, it starts to look more like an on-policy or an off-policy update. 
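As an illustration of how the two derivations meet, here is a deliberately tiny sketch (all details are toy assumptions: a 1-D action, linear function approximators, one-step episodes; not a faithful DDPG implementation, no target networks or replay buffer) where the critic is trained with off-policy Bellman-style regression while the actor follows the gradient of the critic:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)      # critic params: Q(a) = w[0] + w[1]*a + w[2]*a**2
theta = 0.0          # "actor": a single deterministic action parameter

def Q(a, w):
    return w[0] + w[1] * a + w[2] * a * a

def reward(a):
    return -(a - 1.0) ** 2   # one-step episodes; reward peaks at a = 1

for _ in range(5000):
    # Critic update (off-policy flavor): actions come from an exploratory
    # behavior policy, and Q regresses toward the Bellman target (just r
    # here, since episodes terminate after one step).
    a = rng.uniform(-2.0, 2.0)
    td_error = reward(a) - Q(a, w)
    w += 0.01 * td_error * np.array([1.0, a, a * a])

    # Actor update (policy gradient flavor): ascend dQ/da at the actor's
    # current action, i.e. amortize the argmax over Q.
    dQ_da = w[1] + 2.0 * w[2] * theta
    theta += 0.01 * dQ_da

print(theta)  # approaches 1.0, the reward-maximizing action
```

The same loop can be read either as "DQN extended to continuous actions" (the critic view) or as "policy gradient with a learned critic" (the actor view), which is exactly why the on-policy/off-policy label is blurry here.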
My colleague Shane has a good treatment of the subject in his <a href="https://arxiv.org/pdf/1706.00387.pdf">Interpolated Policy Gradients</a> paper.<div><div class="docos-replyview-body docos-anchoredreplyview-body" dir="ltr" style="background-color: white; color: #3c4043; font-family: Roboto, RobotoDraft, Helvetica, Arial, sans-serif; font-size: 14px; letter-spacing: 0.2px; line-height: 20px; overflow-wrap: break-word; padding: 0px;"><br /></div></div><div><i><br /></i></div></div></div></div><div><p><br /></p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-59709948171569473162021-06-19T16:59:00.011-07:002021-06-20T14:44:34.507-07:00Stonks are What You Can Get Away With: NFTs and Financial Nihilism<p> </p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjocri5AhqOSvuM_gmFus_6PItcVMUzpKAD76ohO6KvZack92Q8ffy96BvK8e8LzRUFF-L_xQMR7GbANh1aDA-7wxr-gbn-sQ3p7fYN1SV707CFEI-F7EBXkWknlS1ZMxNHo4wWjTof7ds/s1980/punk_tile%25403x.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1980" data-original-width="1980" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjocri5AhqOSvuM_gmFus_6PItcVMUzpKAD76ohO6KvZack92Q8ffy96BvK8e8LzRUFF-L_xQMR7GbANh1aDA-7wxr-gbn-sQ3p7fYN1SV707CFEI-F7EBXkWknlS1ZMxNHo4wWjTof7ds/w400-h400/punk_tile%25403x.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Eric Jang, "Ten Apes", Jun 19 2021. 
NFT "drop" coming soon.</td></tr></tbody></table><p></p><br />Andy Warhol once said, “Art is what you can get away with.” I interpret the quote as a nihilistic take on “beauty is in the eye of the beholder” — <a href="https://en.wikipedia.org/wiki/Fountain_(Duchamp)">a urinal you found in the junkyard</a> can be considered art, so long as you convince someone to buy it, or showcase it in a museum. All that matters is what other people see in it and what buyers are willing to pay.<br /><br />The 2020’s equivalent of Warhol paintings are <a href="https://ethereum.org/en/nft/">Non-Fungible-Tokens</a> (NFTs). In this essay I’ll explain what NFTs are by motivating them with some interesting real-world problems. Then I’ll discuss why the NFT craze for digital art generates so much ideologically contentious debate. Finally, I’ll discuss some parallels between artistic and financial nihilism, and how this might serve as a framework for thinking about wildly speculative markets.<br /><br /><br /><h3 style="text-align: left;">Explaining NFTs using Counterfeit Goods</h3><div><br /></div>Suppose you want to buy a Birkin bag or some other luxury brand item. An unauthorized seller — perhaps someone who needs some emergency cash — is willing to sell you a Birkin bag. They offer you a good discount, relative to the price the authorized retailer would charge you. But how can you be sure they aren’t selling you a fake? Counterfeits for these items are very high quality, and the average Birkin customer probably can’t tell the difference between a real and a fake. <br /><br />One way to avoid counterfeits is to only purchase items from an authorized retailer, e.g. a trusted Hermès store. But this is not practical because it prevents people from selling or giving away their bags. If you leave your bag to someone in your will, then its authenticity is no longer guaranteed.<br /><br />So we have the market need: how does a seller pass on or sell a luxury item? 
How does a buyer ensure that they are buying an authentic item?<br /><br />One possible answer is for Hermès to print out a list of secret serial numbers, perhaps sewn inside the bag, that declare whether a bag is legit or not. Owners receive a serial number when they buy the bag. But this is not a strong deterrent. A counterfeiter could just buy a real bag and then copy its serial number into many fake bags.<br /><br />What if Hermès maintains a public website of who owns which bag? Any time a bag changes ownership, this ledger needs to be updated. By recording a unique owner for each unique serial number, this solves the problem of counterfeiters simply duplicating serial numbers. The process shifts from verifying properties to verifying transactions and owners.<br /><br />These approaches would work, but also have a centralized point of failure: if the Hermès website goes down, nobody can trade bags anymore. Hermès is a big company and has the resources to protect its website against DDoS attacks and other cybersecurity threat vectors, but smaller luxury brands might not have a state-of-the-art security department. If they are not careful, their security could be breached by hackers or an unscrupulous sysadmin. Also, if Hermès stops operating as a company in 25 years, who will maintain the ledger of ownership? If it is a third-party company, can we trust them not to abuse that power? Even in the unlikely event that the central point of failure never makes a mistake, it’s still mildly annoying to require Hermès to get involved every time a bag changes hands. <br /><br />What if you could verify transactions and owners, without a centralized party? This is where Non-Fungible Tokens, or NFTs, come in. In 2008, someone <a href="https://bitcoin.org/bitcoin.pdf">published a landmark paper</a> on how to build a decentralized ledger of who owns what. This ledger is called a "blockchain". 
A blockchain is a record of the consensus state of the world, following some agreed-upon protocol that is known to everyone. The remarkable thing about blockchains is that they are decentralized (no central point of failure), and resilient to malicious actors in the network. Distributed consensus is reached by each individual contributing some resource like money, hash rate, or computer storage. So long as a large fraction of resources in the network are controlled by well-behaved actors, the integrity of the blockchain remains secure. The fraction required typically varies from one-third to just over a half. <br /><br />There are many blockchains out there. The details of how their consensus protocols are implemented are fascinating but beyond the scope of this essay. The important thing to know is that the base technology underlying NFTs and cryptocurrencies is a formal protocol that allows people to come to an agreement on who owns what without having to involve a trusted third party (e.g. Hermès, an escrow agent, your bank, or your government). Theoretically speaking, blockchains allow shared consensus in a trustless society.<br /><br />NFTs are like a paper deed of ownership, but instead of paper the certificate is digital. And unlike a paper deed, an NFT cannot be forged. NFTs contain a unique “serial number” that is publicly viewable, but only one person can be said to “possess” that serial number on the blockchain, much like how home addresses are public but registered to a single owner by the recording office. 
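To make the ownership-ledger idea concrete, here is a deliberately simplified sketch in Python (all names hypothetical; a real blockchain replaces this centralized dict with a decentralized, cryptographically secured consensus protocol, which is precisely the hard part this toy skips):

```python
# Toy stand-in for an NFT ledger. It captures the two invariants that matter:
# each serial number has exactly one owner, and the transaction history is
# public and append-only, so provenance can be audited back to the issuer.

class ToyLedger:
    def __init__(self):
        self.owner_of = {}   # serial number -> current owner
        self.history = []    # public, append-only transaction log

    def mint(self, issuer, serial):
        assert serial not in self.owner_of, "serial numbers are unique"
        self.owner_of[serial] = issuer
        self.history.append(("mint", issuer, serial))

    def transfer(self, seller, buyer, serial):
        assert self.owner_of[serial] == seller, "only the owner can transfer"
        self.owner_of[serial] = buyer
        self.history.append(("transfer", seller, buyer, serial))

ledger = ToyLedger()
ledger.mint("Hermes", "bag-001")
ledger.transfer("Hermes", "alice", "bag-001")
ledger.transfer("alice", "bob", "bag-001")

print(ledger.owner_of["bag-001"])   # bob
# Anyone can audit provenance back to the original issuer:
print(ledger.history[0])            # ('mint', 'Hermes', 'bag-001')
```

Duplicating a serial number fails at mint time, and transferring a token you don't own fails at transfer time; everything else about NFTs is about enforcing those two checks without a trusted central party.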
To see how NFTs solve the Birkin bag counterfeit problem, let’s suppose Hermès publicly declares the following for all to hear:<br /><br /><i>“Owners of True Birkin bags will be issued a digital certificate of authenticity represented by an NFT”<br /></i><br />As a buyer, you can be quite confident that the bag is authentic if the seller also owns the NFT, and you can verify that the NFT was indeed originally created by Hermès by looking up its public transaction history. During a transaction, the seller simply gives the buyer the bag and tells the blockchain to re-assign ownership of the NFT to the buyer’s digital identifier. If the payment is done in cryptocurrency, the escrow can even be performed <a href="https://medium.com/coinmonks/escrow-service-as-a-smart-contract-the-business-logic-5b678ebe1955">using a smart contract</a> without a centralized party (the seller publishes a contract: “If a specific buyer’s wallet address sends me X USDC within 24 hours, send the NFT to them and the cash to me.”)<br /><br />NFTs provide the means to implement digital scarcity, but there still needs to be a way to pair them with a real-world item in the “analog” world. A seller could still bypass the security of NFTs by selling you an NFT with a fake Birkin bag. However, for every fake bag you want to sell, you need to purchase a real NFT and the real bag that comes with it. After you sell the NFT with the fake bag, you are left with a real bag with no NFT! Subsequently, the market value of the real bag drops because buyers will be highly suspicious of a seller who says "this is a real bag, I don't have the NFT because I just sold it with a fake bag." While NFTs are not ironclad proof of a physical Birkin bag's authenticity, they all but ruin the economic incentives of counterfeiting. <br /><br />What about luxury consumable goods? 
You could buy NFT-certified Wagyu beef, sell the NFT with some cheaper steak, and then eat the real Wagyu beef - it doesn’t matter what other people think you're eating. However, NFT transactions are public, so a grocery shopper would be quite suspicious of a food NFT that has changed hands outside of the typical supply chain addresses. For NFTs paired with physical goods, each “unusual” transaction significantly adds to counterfeit risk, which diminishes the economic incentives for counterfeiters. This is especially true for consumable, perishable goods.<br /><br />Authenticity is useful, even outside of Veblen goods. You can imagine using NFTs to implement anonymous digital identity verification (<a href="https://www.marketsandmarkets.com/Market-Reports/digital-identity-solutions-market-247527694.html">a 30B market by 2024</a>), or shipping them with food products like meat, where the customer cares a lot about the provenance of the product. In Taiwan, there is an ongoing scandal where a bunch of US-imported pork has been passed off as “domestic pork” and nobody can trust their butchers anymore. <br /><br />In the most general case, NFTs can be used to implement provenance tracking of both physical and digital assets - an increasingly important need in our modern age of disinformation. Where did this photo of a politician come from? Who originally produced this audio clip? <br /><br /><h3 style="text-align: left;">The Riddle of Intangible Value</h3><div><br /></div>NFTs make a lot of sense for protecting the authenticity of luxury goods or implementing single sign-on or tracking the provenance of meat products, but that’s not what they’re primarily used for today. Rather, most people sell NFTs for digital art. Here are some early examples of art NFTs, called “Cryptopunks”. 
Each punk is a 24x24 RGB image.<div><br /></div><div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi7GiemYjwXNEE6fTGVwTRX7GXKD8mP2c1UpgVSyV-vXl7gQ9U9TLRhr3nrUsHrpOSy1kbNQUOxIf1YlPV-lxAl_arNBqshpDtDiowE60OqsVdPpa3T6wPEjM2AoNNNN96tTZb8lPlW44/s1940/punk-variety-2x.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="560" data-original-width="1940" height="184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi7GiemYjwXNEE6fTGVwTRX7GXKD8mP2c1UpgVSyV-vXl7gQ9U9TLRhr3nrUsHrpOSy1kbNQUOxIf1YlPV-lxAl_arNBqshpDtDiowE60OqsVdPpa3T6wPEjM2AoNNNN96tTZb8lPlW44/w640-h184/punk-variety-2x.png" width="640" /></a><br /><br />One of these recently sold for 17M USD in an auction. At first glance, this is perplexing. The underlying digital content - some pixels stored in a file - is freely accessible to anyone. Why would anyone pay so much for a certificate of authenticity on something that anyone can enjoy for free? Is the buyer the one that gets punked?</div><div><br />It’s easy to dismiss this behavior as poor taste colliding with the arbitrarily large disposable income of rich people, in particular crypto millionaires that swap crypto assets with other crypto millionaires. While this may be true, I think it’s far more interesting to ask “what worldview would cause a rational person to bet $17M on a certificate for a 24x24x3 set of pixel values”? <br /><br />Historically, the lion’s share of rewards for digital content has been captured by distribution technology like Spotify or content aggregators like Facebook, and then split with the management company. 
The creatives themselves are paid pittances, and do not share in the financialization of their labor. The optimist case for NFT art is as follows: NFTs are decentralized, which means any artist with an internet connection can draw up financial contracts for their art on their own terms. If NFTs revolutionize the business model of digital art, and if the future of art is mostly digital, then the first art NFTs to ever be issued might accrue significant cultural relevance, and that’s why they command such high speculative prices. <br /><br />Valuing art based on cultural relevance might be a bit absurd, but why is the Mona Lisa “The Mona Lisa”? da Vinci arguably made “better” paintings from a technical standpoint. It's because of intangible value. The Mona Lisa is valuable because of its cultural proximity to important events and people in history, and the mimetic desire of other humans. In fact, it was a relatively obscure painting until 1911, when it was stolen from the Louvre and became a source of national shame overnight. <br /><br />All art, from your child’s first finger painting, to an antique heirloom passed down through generations, to a “masterpiece” like the Mona Lisa, is valued this way. It is valuable simply because others deem it valuable. <br /><br />NFTs are the digital equivalent of buying a <a href="https://en.wikipedia.org/wiki/Comedian_(artwork)">banana duct-taped to a wall</a>; you are betting that in the future, that statement of ownership on some blockchain will be historically significant, which you can presumably trade in for cash or clout or both. But buyer beware: things get philosophically tricky when applying the theory of “intangible value” to digital information and artwork where the cost of replication goes to zero. <br /><br />I can think of two ways to look at how one values NFTs for digital art. 
One perspective is that in a world full of fake Birkin bags and products sourced from ethically dubious places, the only thing of value is the certificate of authenticity. The cultural and mimetic value of content has transferred entirely to the provenance certificate, not the pixels themselves (which can be copied for free). If art’s value is derived from the cultural relevance it represents and its proximity to important people, then the most sensible way to make high art would not be to improve one’s painting skills, but to schmooze with a lot of famous people, insert oneself into important events in history, and issue scarce status symbols for the bourgeoisie. Warhol did exactly that. <br /><br />The alternate view is that if a perfect copy can be made of some pixels, then it is not really a counterfeit at all, and therefore the NFT secures nothing of actual value. Is it meaningful to ascribe a certificate of authenticity to something that can be perfectly replicated? Is the “authenticity” of a stream of 0s and 1s even a meaningful concept? There is certainly utility in verifying the source of some information, but anyone can mint an NFT for the same information.<br /><br />In summary, the Pro-NFT crowd values the intangible “collector’s scarcity and cultural relevance”. The anti-NFT crowd focuses on tangible value - how much real value does this secure? Both are reasonable frameworks to value things, and you can end up with wildly different conclusions.<br /></div><div><br /></div><div><h3 style="text-align: left;">Artistic and Financial Nihilism: One and The Same?</h3><div><br /></div>Convince enough people that a urinal is valuable, and it becomes an investment-grade asset. This is no longer merely a matter of art philosophy - when you invest in an index fund, you are essentially reinforcing the market’s current beliefs about valuations. 
When people bid up the price of TSLA or GME to stratospheric valuations, the index fund must re-adjust their market-weighted holdings to reflect those prices, creating further money inflows to the asset and thus a <a href="https://www.ft.com/content/0ca06172-bfe9-11de-aed2-00144feab49a">self-fulfilling prophecy</a>. As it turns out, the art-of-investing is much like investing-in-art. As I have suggested in the title of this essay and borrowed from Warhol (who probably borrowed it from Marshall McLuhan), <a href="https://knowyourmeme.com/memes/stonks">stonks</a> are what you can get away with.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFpgUtOTfV5FIFt6uPUoZR8rw-eCpkZMoSZYVcsHSK11xEGvhoMy3s7zbjtUg_pWo7L7PVieaaIeIO9950QZMJJFfRJcoho7OZshkXkTu20NYacjhNSY-xnC8nZRiJG9EqDXGuJUhs4l0/s275/download.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="183" data-original-width="275" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFpgUtOTfV5FIFt6uPUoZR8rw-eCpkZMoSZYVcsHSK11xEGvhoMy3s7zbjtUg_pWo7L7PVieaaIeIO9950QZMJJFfRJcoho7OZshkXkTu20NYacjhNSY-xnC8nZRiJG9EqDXGuJUhs4l0/w400-h266/download.jpeg" width="400" /></a></div><br /><div><br /><br />We are starting to see this valuation framework being applied to the equities market today, where price movements are dominated by narratives about where the price is going and what other people are willing to pay for it, especially with meme stocks like GME and AMC. Many retail investors don’t really care about whether GME’s price is justified by their corporate earnings - they simply buy at any cost. This financial nihilism - where intrinsic value is unknowable and all that matters is what other people think - is a worldview often encountered in Gen Z retail traders and a surprising number of professional traders I know. 
Perhaps the <a href="https://knowyourmeme.com/memes/iq-bell-curve-midwit">midwit meme</a> is really true.<br /><br />This is definitely a cause for some concern, but at the same time, I think value investors should keep an open mind that what first seems like irrational behavior might have a method to the madness. If you have an irrational force acting in the markets, like shareholders who refuse to sell or lend their stock, a discounted cash flow model for AMC or GME stops being very predictive of the share price. By reflexivity, that will have impacts on future cash flows! In a similar fashion, present-day frameworks for thinking about business and value do not account for the disruptive force of technology. That’s why I find NFTs so fascinating - they are an intersection of finance, art, technology, and the nihilistic framework of valuation that is so prevalent in our society today. <br /><br />What is rational behavior for an investor, anyway? Is it “standard behavior” as measured against the population average? How do you tell apart standard behavior from a collective delusion? Perhaps the luxury bag makers, <a href="https://twitter.com/ryancohen?lang=en">Ryan Cohen</a>s, and Andy Warhols of the world understand it best: Convince the world to believe in your values, and you will be the sanest person on the planet. 
For fifteen minutes, at least.<br /><br /><h3 style="text-align: left;">Acknowledgements</h3>Thanks to <a href="https://twitter.com/catisgrasso?lang=en">Cati Grasso</a>, <a href="https://www.linkedin.com/in/samhoffman523/">Sam Hoffman</a>, <a href="https://twitter.com/lkhphuc">Phúc Lê</a>, Chung Kang Wang, Jerry Suh, and Ellen Jiang for comments and feedback on drafts of this post.</div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-842965756326639856.post-5153403399359378752021-05-26T17:02:00.007-07:002021-05-26T17:47:10.971-07:00Sovereign Arcade: Currency as High-Margin InfrastructureThis essay is about how the powerful want to become countries, and the implications of cryptocurrencies on the sovereignty of nations. I’m not an economics expert: please leave a comment if I have made any errors.<br /><br />Money allows goods, services, and everything else under the sun to be assigned a value using the same unit of measurement. Without money, society reverts to <a href="https://en.wikipedia.org/wiki/Barter">bartering</a>, which is highly inefficient. You may need plumbing services but have nothing that the plumber wants, so your toilet remains clogged. By acting as a measure of value everyone agrees on, money facilitates frictionless economic collaboration between people. <br /><br />Foreign monetary policy is surprisingly simple to understand when viewed through the lens of power and control. Nation states get nervous when other nation states get too powerful, and controlling the currency is a form of power.<br /><br />To see why this is the case, let’s consider a Gaming Arcade (yes, like Chuck E. Cheese) as a miniature model of a “Nation State”. 
To participate inside the “arcade economy”, you must swap your outside money (USD) for arcade tokens.<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3_pUtJ_PwVQ289KTAP17yxthQa9NudhOQ5dTKAaT8QFLY0nwJhQXKcPRfqO7HgZ1rDtnJUcMp6ttCabHCIUu9DUYP84Dfr95_gYk0zsxhQIwSlwNAbXD-LNVhKvcQWlRdkF6VwzT-hkY/s640/chuckecheesetokens0213.2e16d0ba.fill-661x496.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="482" data-original-width="640" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3_pUtJ_PwVQ289KTAP17yxthQa9NudhOQ5dTKAaT8QFLY0nwJhQXKcPRfqO7HgZ1rDtnJUcMp6ttCabHCIUu9DUYP84Dfr95_gYk0zsxhQIwSlwNAbXD-LNVhKvcQWlRdkF6VwzT-hkY/w400-h301/chuckecheesetokens0213.2e16d0ba.fill-661x496.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>Arcades are like mini nation-states: they issue their own currency, encourage spending with state-owned enterprises, and have a one-sided currency exchange to prevent money outflows.</i></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br />The coins are a store of value that facilitate a one-way transaction with the Nation-State: you get to play an arcade game, and in return you get some entertainment value and some tickets, which we call “wages”.<br /><br />The tickets are another store of value that can facilitate another one-way transaction: converting them into prizes. Prizes can be a stuffed animal or something else of value. Typically, the cost of winning a prize at an arcade is many multiples of what it would cost to just buy the prize at an outside store. The arcade captures that price difference as its profit.<br /><br />Money’s most important requirement is that it is a <i>stable</i> measure of value. Too much inflation, and people stop saving money. 
Too much deflation, and people and companies aren’t incentivized to spend money (for example, by employing people). Imagine if tomorrow, an arcade coin could let you play a game for two rounds instead of one, and the day after, you could play for four rounds! Well, no one would want to play arcade games today anymore. <br /><br />The arcade imposes many kinds of draconian capital controls, and in many ways resembles an extreme form of <a href="https://en.wikipedia.org/wiki/State_capitalism">State Capitalism</a>:<br /><ul style="text-align: left;"><li>All transactions are with state-owned enterprises (the arcade games) and must be conducted using state currencies (coins and tickets). You can’t start a business that takes people’s coins or tickets within the arcade.</li><li>The state can hand out valuable coins at virtually zero cost without worrying about inflation - every coin they issue is backed by a round of a coin-operated game, of which they have near-infinite supply. They can’t hand out infinite tickets though, because that would either require backing them up with more prizes, or devaluing each ticket so that more tickets are needed to buy the same prize.</li><li>You can bring outside money into the arcade, but you can’t convert coins, tickets, or prizes into money to take out.</li></ul><br />Controlling the currency supply is indeed a very powerful business to be in, which is why arcades would prefer to issue their own currency and keep money from leaving their borders. <br /><br />Governments are just like arcades. They prefer their citizens and trading partners to use a currency they control, because it gives them a lever with which they can influence spending behavior. If country A uses country B’s currency instead, then country B’s currency supply shenanigans can actually influence the saving and spending behavior of country A. This can pose a threat to the sovereignty of a nation (a fancy way to say “control over its people”). 
<br /><br />After World War II, the US Dollar <a href="https://en.wikipedia.org/wiki/Bretton_Woods_system">became the world’s reserve currency</a>, which means that it’s the currency used for the majority of international trade. The USA wants the world to buy oil with US dollars, and we go to great lengths to enforce it with various forms <a href="https://www.bloomberg.com/news/features/2016-05-30/the-untold-story-behind-saudi-arabia-s-41-year-u-s-debt-secret">of soft</a> and <a href="https://en.wikipedia.org/wiki/General_Atomics_MQ-1_Predator">hard power</a>. The US dollar is backed by oil (petrodollar theory), and this “dollars-are-oil rule” in turn <a href="https://en.wikipedia.org/wiki/Petrodollar_recycling#Petrodollar_warfare">is enforced by US military might</a>. <br /><br /><div>Governments print money all the time to pay for pressing short-term needs like building bridges and COVID relief. However, too much of this can be a dangerous thing. The government gets what it wants in the short term, but more money chasing the same amount of goods will cause businesses to raise prices, causing inflation. Countries like Venezuela and Turkey that print too much of their own currency experience a runaway feedback loop where money supply and prices skyrocket, and then no one trusts the government currency as a stable source of value anymore.<br /><br />The USA is not like other countries in this regard; controlling the world’s reserve currency gives the USA the ability to print money like no other country can. The US government owing 28 trillion USD of debt is like the Arcade owing you a trillion game coins. Yes, it is a lot of coins - maybe the arcade doesn’t even have a trillion coins to give you. But the arcade knows that you know that it’s in the best interest of everyone to not try and collect all those coins right away, because the arcade would go bankrupt, and then the coins you asked for would be worthless. <br /><br />Is this sketchy? Absolutely. 
Most other countries absolutely hate this power dynamic. Especially China. The USA <a href="https://home.treasury.gov/news/press-releases/sm751">calls China a currency manipulator</a> for devaluing the yuan, but will turn around and do the exact same thing by printing dollars. China does not want to be subject to the whims of US monetary policy, so they are working very hard to establish the yuan as the currency of exchange in <a href="https://www.nbr.org/publication/chinas-ten-year-struggle-against-u-s-financial-power/">international trade</a>. Everyone wants to be the arcade operator, not the arcade player.</div><div><br /><h3 style="text-align: left;">Large Companies as Nation-States</h3><br />Nation-states not only have to worry about the currencies of other nation-states, but increasingly, large global corporations as well. Any business that gets big enough starts to think about the currency game, since currency is a form of high-margin infrastructure. <br /><br />AliPay is a mobile wallet made by an affiliate company of Alibaba. It’s basically backed by an SQL table saying how much money each AliPay user has. It would be very easy for AliPay to print money - all they have to do is bump up some number in a row in the SQL table. As long as users are able to redeem their AliPay balance on something of equivalent value, Alibaba’s accounts remain solvent and they can get away with this. In fact, many of their users shop on Alibaba’s e-commerce properties anyway, so Alibaba doesn’t even need to have 100% cash reserves to back up all entries in their SQL table. Users can redeem their balances by paying for Alibaba goods, which Alibaba presumably can acquire for less than the price the user pays.<br /><br /></div><div>Of course, outright printing money incurs the wrath of the Sovereign Arcade. 
Alibaba was <a href="https://www.reuters.com/technology/exclusive-chinas-ant-explores-ways-jack-ma-exit-beijing-piles-pressure-sources-2021-04-17/">severely</a> <a href="https://www.npr.org/2021/04/10/986112628/china-fines-alibaba-2-8-billion-for-breaking-anti-monopoly-law">punished</a> for merely suggesting that they could do a better job than China’s banks. Facebook tried to challenge the dollar by <a href="https://techcrunch.com/2019/06/18/facebook-libra/">introducing a token</a> backed with other countries’ reserve currencies, and the idea was <a href="https://en.wikipedia.org/wiki/Diem_(digital_currency)#United_States_regulatory_response">slapped down so hard</a> that FB had to rename the project and start over. In contrast, the US government is happy to <a href="https://www.circle.com/en/usdc">approve crypto tokens backed using the US dollar</a>, because ultimately the US government controls the underlying resource.<br /><br /></div><div>There are clever ways to build high margin infrastructure without crossing the money-printing line. Any large institution with a monopoly over a high-margin resource can essentially mint debt for free, effectively printing currency like an arcade does with its coins. The resource can be a lot of things - coffee, cloud computing credits, energy, user data. In the case of a nation-state, the resource is simply violence and enforcement of the law.<br /><br />As of 2019, <a href="http://jpkoning.blogspot.com/2019/08/starbucks-monetary-superpower.html">Starbucks had 1.6B USD of gift cards in circulation</a>, which puts it above the national GDP of about 20 countries. Like the arcade coins, Starbucks gift cards are only redeemable for limited things: scones and coffee. Starbucks can essentially mint Starbucks gift cards for free, and this doesn’t suffer from inflation because each gift card is backed by future coffee which Starbucks can also make at a marginal cost. 
You can even use Starbucks cards internationally, which makes “Star-Bucks” more convenient than current foreign currency exchange protocols. <br /><br />As long as account balances are used to redeem a resource that the company can acquire cheaply (e.g. gift cards for coffee, gift cards for cloud computing, advertising credits), a large company could also practice “currency manipulation” by arbitrarily raising monetary balances in their SQL tables.<br /><br /><br /><h3 style="text-align: left;">The Network State</h3><div><br /></div>Yet another threat to the sovereign power is decentralized rogue nations, made possible by cryptocurrency. At the heart of cryptocurrency’s rise is a social problem in our modern, globalized society: how do we trust our sovereigns to actually be good stewards of our property? Banking executives who overleveraged risky investments got bailed out in 2008 by the US government. The USA printed a lot of money in 2020 to bail out those impacted by COVID-19 economic shutdowns. Every few weeks, we hear about data breaches in the news. A lot of Americans are losing trust in their institutions to protect their bank accounts, their privacy, and their economic interests. <br /><br />Even so, most Americans still take the power of the dollar for granted: 1) our spending power remains stable and 2) the number we see in our bank accounts is ours to spend. We have American soft and hard diplomacy to thank for that. But in less stable countries, capital controls can be rather extreme: a bank may simply decide one day that you can’t withdraw more than 1 USD per day. Or some government can decide that you’re a criminal and freeze your assets entirely. <br /><br />Cryptocurrency offers a simple answer: You can’t trust the sovereign, or the bank, or any central authority to maintain the SQL table of who owns what. Instead, everyone cooperatively maintains the record of ownership in a decentralized, trustless way. 
For those of you who aren’t familiar with how this works, I recommend this 26-minute <a href="https://www.youtube.com/watch?v=bBC-nXj3Ng4&t=3s">video</a> by 3Blue1Brown.<br /><br />To use the arcade analogy, cryptocurrency would be like a group of teenagers going to the arcade, and instead of converting their money into arcade coins, they pool it together to buy prizes from outside. They bring their own games (Nintendo Switches or whatever), and then swap prizes with each other based on who wins. They get the fun of hanging out with friends, playing games, and winning prizes, while cutting the arcade operator out.<br /><br />The decentralized finance (DeFi) ecosystem has grown a lot in the last few years. In the first few years of crypto, all you could do was send Bitcoin and other Altcoins to each other. Today, you can <a href="https://uniswap.org/">swap currencies</a> in decentralized exchanges, <a href="https://www.coindesk.com/what-is-a-flash-loan">take out flash loans</a>, buy <a href="https://medium.com/dragonfly-research/liquidators-the-secret-whales-helping-defi-function-acf132fbea5e">distressed debt at a discount</a>, <a href="https://docs.uniswap.org/concepts/introduction/liquidity-user-guide">provide liquidity as a market maker</a>, <a href="https://vitalik.ca/general/2021/02/18/election.html">perform no-limit betting on prediction markets</a>, pay a <a href="https://cryptolawinsider.com/us-government-uses-stablecoins-to-send-foreign-aid-to-venezuela/">foreigner with USD-backed stablecoins</a>, and <a href="https://www.glossy.co/fashion/beyond-the-hype-nfts-stand-to-benefit-fashion-brands-in-the-future/">cryptographically certify authenticity of luxury</a> goods.<br /><br /><a href="https://balajis.com/about/">Balaji Srinivasan</a> predicts that as decentralized finance projects continue to grow, a large group of individuals with a shared sense of values and territory will congregate on the internet and declare themselves citizens of a “<a 
href="https://www.youtube.com/watch?v=KMI_aGw2Cts">Network State</a>”. It sounds fantastical at first, but many of us already live in Proto-Network states. We do our work on computers, talk to people over the internet, shop for goods online, and spend leisure time in online communities like Runescape and such. It makes sense for a geographically distributed economy to adopt a digital-native currency that transcends borders. <br /><br />Network states will have the majority of their assets located on the internet, with a small amount of physical property distributed around the world for our worldly needs. The idea of a digital rogue nation is less far-fetched than you might think. If you walk into a Starbucks or McDonalds or a Google Office or an Apple Store anywhere in the world, there is a feeling of cultural consistency, a familiar ambience. In fact, Starbucks gets pretty close: you go there to eat and work and socialize and pay for things with Starbucks gift cards. </div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPqMBqlv7ZnKkGgfCJurL_XY99WkLeoJPtTGi-exacMUM7GIlpNrIGEVeIwG4j_BceBrZfzT2f9mE5u1qmiJSsjtyraoj0E6o7CNawUQj_FC7W18OOUtEYCds-rcw5xbc9o55ab1k695Y/s640/download.jpeg" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="399" data-original-width="640" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPqMBqlv7ZnKkGgfCJurL_XY99WkLeoJPtTGi-exacMUM7GIlpNrIGEVeIwG4j_BceBrZfzT2f9mE5u1qmiJSsjtyraoj0E6o7CNawUQj_FC7W18OOUtEYCds-rcw5xbc9o55ab1k695Y/w400-h251/download.jpeg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="text-align: start;"><i>A network state might have geographically distributed physical locations that have a consistent culture, with most of its assets and culture in the cloud. 
Pictured: Algebraist coffee, a new entrant into the luxury coffee brand space</i></span></div><div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYObgvH6EfQEZKI3xDDN_fKiNidgmvcZIZbPZPYlTurZDWyEjb7b91ZCCPyewp5JMxVXIh86KF3R5CdA2gjCliJgGj9UtI0_ja0d95g5ZUgVUFP8W5BcNIchvSShssaekOCoPlB3d6zV4/s550/campfire-cookout-at-texas.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="396" data-original-width="550" height="287" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYObgvH6EfQEZKI3xDDN_fKiNidgmvcZIZbPZPYlTurZDWyEjb7b91ZCCPyewp5JMxVXIh86KF3R5CdA2gjCliJgGj9UtI0_ja0d95g5ZUgVUFP8W5BcNIchvSShssaekOCoPlB3d6zV4/w400-h287/campfire-cookout-at-texas.jpeg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="text-align: start;"><i>A network state could have a national identity independent of physical location. I see no reason why a "Texan" couldn’t enjoy ranching and brisket and big cars and football anywhere in the world.</i></span></div><br /><br />Balaji is broadly optimistic that existing sovereigns will be tolerant or even facilitate network states, by offering them economic development zones and tax incentives to establish their physical embodiments within their borders, in exchange for the innovation and capital they attract. <br /><br />I am not quite so optimistic - the fact that US persons can now pseudonymously perform economic activities with anyone in the world (including sanctioned countries) without the US government knowing, using a currency that the US government cannot control - is a terrifying prospect to the sovereign. The world’s governments highly underestimate the degree to which future decentralized economies will upset the world order and power structures of the world. 
Any one government can make life difficult for cryptocurrency businesses to get big, but as long as some countries are permissive towards it, it’s hard to put that genie back into the bottle and prevent the emergence of a new digital economy.<br /><br /><h3 style="text-align: left;">Crypto Whales</h3><div><br /></div>I think the biggest threat to the emergence of a network state is not existing sovereigns, but rather the power imbalance of early stakeholders versus new adopters. <br /><br />At the time of writing, there are <a href="https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html">nearly 100 Bitcoin billionaires</a> and 7062 Bitcoin wallets that own more than $10M each. This isn’t even counting the other cryptocurrencies or DeFi wealth locked in Ethereum - the other day, someone <a href="https://bitinfocharts.com/dogecoin/address/DG6XWP6ruf4ZTsXAGzjDyQFaRimUn1mF2A">bought up nearly a billion dollars</a> of the meme currency DOGE. We mostly have no idea who these people are - they walk amongst us, and are referred to as “whales”.<br /><br />A billionaire’s taxes substantially alter state budget planning in smaller states, so politicians actually go out of their way to appease billionaires (e.g. <a href="https://www.forbes.com/sites/giacomotognini/2020/11/05/battle-of-the-billionaires-failed-illinois-income-tax-initiative-drew-more-than-110-million-from-governor-jb-pritzker-and-citadels-ken-griffin/?sh=39617bc2da4b">Illinois with Ken Griffin</a>). 
If crypto billionaires colluded, they could institute quite a lot of political change at local and maybe even national levels.<br /><br />China has absolutely <a href="https://www.nytimes.com/2021/05/25/world/asia/john-cena-taiwan-apology.html">zero chill</a> when it comes to any challenge to their sovereignty, so it was not surprising at all that they recently <a href="https://www.reuters.com/world/china/crypto-miners-halt-china-business-after-beijings-crackdown-bitcoin-dives-2021-05-24/">cracked down on domestic use of cryptocurrency</a>. However, by shutting their miners down, I believe China is losing a strategic advantage in their quest to unseat America as the world superpower. A lot of crypto billionaires reside in China, having operated large mining pools and developing the world’s mining hardware early on. I think the smart move for China would have been to allow their miners to operate, but force them to sell their crypto holdings for digital yuan. This would peg crypto to the yuan, and also allow China to stockpile crypto reserves in case the world starts to use it more as a reserve currency. <br /><br />There’s a chance that crypto might even overtake the Yuan as the challenger to reserve currency, because it’s easier to acquire in countries with strict capital controls (e.g. Venezuela, Argentina, Zimbabwe). If I were China, I’d hedge against both possibilities and try to control both.<br /><br />Controlling miners has power implications far beyond stockpiling of crypto wealth. Miners play an important role in the market microstructure of cryptocurrency - they have the ability to see all potential transactions before they get permanently appended to blockchain. The assets minted by miners are virtually untraceable. 
One way a Network State could be compromised is if China smuggled several crypto whales into these fledgling nations that are starting to adopt Bitcoin, and then used their influence over Bitcoin reserves, tax revenues, and market microstructure to punish those who spoke out against China. <br /><br />The more serious issue than China’s hypothetical influence over Bitcoin monetary policy is the staggering inequality of crypto wealth distribution. Presently, <a href="https://insights.glassnode.com/bitcoin-supply-distribution/#:~:text=A%20recent%20report%20by%20Bloomberg,wealth%20in%20the%20Bitcoin%20network.">2% of wallets control over 95% of Bitcoin</a>. Many people are already uncomfortable with the majority of Bitcoins being owned by a handful of mining operators and Silicon Valley bros and other agents of tech inequality. Institutions fail violently when inequality is high - people will drop the existing ledger of balances and install a new one (such as Bitcoin). If people decide to form a new network state, why should they adopt a currency that would make these tech bros the richest members of their society? Would you want your richest citizen to be someone who bet their life savings on DOGE? Would you trust this person’s judgement or capacity for risk management? <br /><br />Like any currency, Bitcoin and Ethereum face adoption risk if the majority of assets are held by people who lack the leadership to deploy capital effectively on behalf of society. Unless crypto billionaires vow to not spend the majority of their wealth (like Satoshi has seemingly done), or demonstrate a remarkable level of leadership and altruism towards growing the crypto economy (like Vitalik Buterin has done), the inequality aspect will remain a large barrier to the formation of stable network states.<br /><br /><h3 style="text-align: left;">Summary</h3><ol style="text-align: left;"><li>A gaming arcade is a miniature model of a nation-state. 
Controlling the supply and right to issue currency is lucrative. </li><li>Large businesses with high-margin infrastructure can essentially mint debt, much like printing money. </li><li>Cryptocurrencies will create “Network States” that challenge existing nation-states. But they will not prosper if they set up their richest citizens as ones who won the “early adopter” lottery.</li></ol><h3 style="text-align: left;">Further reading and Acknowledgements</h3><div><br /></div><div>I highly recommend <a href="https://www.lynalden.com/fraying-petrodollar-system/">Lyn Alden’s essay</a> on the history of the US dollar, the fraying petrodollar system, and the future of reserve currency.<br /><br />Thanks to Austin Chen and Melody Cao for providing feedback on earlier drafts.<br /><br /><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div></div>Unknownnoreply@blogger.com0Cañada Rd, Redwood City, CA 94062, USA37.505704 -122.34001635.753372430912535 -124.537281625 39.258035569087468 -120.142750375tag:blogger.com,1999:blog-842965756326639856.post-36003271396202055832021-03-14T14:30:00.008-07:002021-04-03T21:23:00.725-07:00Science and Engineering for Learning Robots<style>
.column-center-outer {width:125%}
.left {
width: 44%;
float: left;
}
.right {
width: 56%;
float: left;
margin-left: 3rem;
}
.row {
display: flex;
margin-bottom: 3rem;
}
/* remove image styling */
.post-body img {
border: none!important;
box-shadow: none!important;
/* Browser specific implementations */
-moz-box-shadow: none!important;
-webkit-box-shadow: none!important;
}
</style>
<p><i>This is the text version of a talk I gave on March 12, 2021, at the Brown University Robotics Symposium. As always, all views are my own, and do not represent those of my employer.</i></p>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggH6dyCdITuefz0S21cZP4rAx_F7g5ePqqxOOYV5uPFjRwp1HGAkbfXidqa-5gGmVMCCFBf1Ha4ZhK1zTbHkel9AdUOJBzXM1dOhWtSGmjaL9WCSofRFuxscTRcmpMP0sPXQQGb9fc8e0/w400-h225/Brown+Robotics+Seminar+Talk.png" width="100%" />
</div>
<div class="right">I'm going to talk about why I believe end-to-end Machine Learning is the right approach for solving robotics problems, and invite the audience to think about a couple interesting open problems that I don't know how to solve yet.</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhp3Z82AYDI6E60-RbM5gYK90Ib1u1jjA0lmX5zkWCYRiOzy5K-f4tSlCSvuTnjbzPwR5LyjAcjXxncH31gG1m75rJcO_Kr8fyisucRiYXEV0VVGMcW-nTyv0bUp5eXum34p_8GYRpwDYw/s400/Brown+Robotics+Seminar+Talk+%25281%2529.png" width="100%"/>
</div>
<div class="right"><p>I'm a research scientist at Robotics at Google. This is my first full-time job out of school, but I actually started my research career doing high school science fairs. I volunteered at UCSF doing wet lab experiments with telomeres, and it was a lot of pipetting and only a fraction of the time was spent thinking about hypotheses and analyzing results. I wanted to become a deep sea marine biologist when I was younger, but after pipetting several 96-well plates (and messing them up) I realized that software-defined research was faster to iterate on and freed me up to do more creative, scientific work.</p>
<p>I got interested in brain simulation and machine learning (thanks to Andrew Ng's Coursera Course) in 2012. I did volunteer research at a neuromorphic computing lab at Stanford and did some research at Brown on biological spiking neuron simulation in tadpoles. Neuromorphic hardware is the only plausible path to real-time, large-scale biophysical neuron simulation on a robot, but, much like wet-lab research, it is rather slow to iterate on. It was also a struggle to learn even simple tasks, which made me pivot to artificial neural networks, which were starting to work much better at a fraction of the computational cost. In 2015 I watched Sergey Levine's talk on <a href="https://www.youtube.com/watch?v=EtMyH_--vnU">Guided Policy Search</a> and remember thinking to myself, "oh my God, this is what I want to work on".</p></div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWPMe84u2W7-EmE0aYJBZiSyTR5V_hqrJgff1Ed3EfVa_bJz2eJjQTXX6cEi1CzQ-lGTdmp-vrnSDvkR16_354qy0Yj39MY1bZeifx8Bl78r3uQkdTxMS-vYHDNCRkmAuiiFYY0ySY3Wg/s400/Brown+Robotics+Seminar+Talk+%252812%2529.png" width="100%"/>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyxDRKgZU_5TNqdeV2Jy3RkHLbTHZhfzTz3lSNBn62bYbhHFYa8wGBpRXQF7EH1cYt6IpaqpS6GRrw_yRAyt_RW65iOjyHFixGa_hA4VC56iT0JHWcIcFkQviQ6aa9me1pJuHodCJv9QY/s400/Brown+Robotics+Seminar+Talk+%25282%2529.png" width="100%"/>
</div>
<div class="right">
<h2>The Deep Learning Revolution</h2>
<p>We've seen a lot of progress in Machine Learning in the last decade, especially in end-to-end machine learning, also known as deep learning. Consider a task like audio transcription: classically, we would chop up the audio clip into short segments, detect phonemes, aggregate phonemes into words, words into sentences, and so on. Each of these stages is a separate software module with distinct inputs and outputs, and these modules might involve some degree of machine learning. The idea of deep learning is to fuse all these stages together into a single learning problem, where there are no distinct stages, just the end-to-end prediction task from raw data. With a lot of data and compute, such end-to-end systems vastly outperform the classical pipelined approach. We've seen similar breakthroughs in vision and natural language processing, to the extent that all state-of-the-art systems for these domains are pretty much deep learning models. </p><p>Robotics has for many decades operated under a modularized software pipeline, where first you estimate state, then plan, then perform control to realize your plan. The question our team at Google is interested in studying is whether the end-to-end advances we've seen in other domains hold for robotics as well.</p></div>
</div>
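To make the contrast concrete, here is a toy sketch of the two architectures for transcription. Everything in it (the frame IDs, phoneme table, and lexicon) is invented for illustration; real systems replace these stand-ins with learned models.

```python
# Toy stand-ins: "audio" is a list of frame IDs. None of this is a real
# speech system; it only illustrates the two software architectures.

PHONEME_TABLE = {0: "k", 1: "ae", 2: "t"}

def detect_phonemes(frames):
    # Pipeline stage 1: per-frame classification into phoneme symbols.
    return [PHONEME_TABLE[f] for f in frames]

def phonemes_to_word(phonemes):
    # Pipeline stage 2: lexicon lookup from a phoneme sequence to a word.
    lexicon = {("k", "ae", "t"): "cat"}
    return lexicon[tuple(phonemes)]

def pipelined_transcribe(frames):
    # Classical approach: fixed stages with hand-designed intermediate
    # representations; an error in stage 1 cannot be corrected by stage 2.
    return phonemes_to_word(detect_phonemes(frames))

def end_to_end_transcribe(frames, params):
    # Deep learning approach: one learned function from raw frames to text.
    # `params` would be fit on (audio, transcript) pairs; here a lookup
    # table stands in for a trained network.
    return params[tuple(frames)]

params = {(0, 1, 2): "cat"}  # "trained" on a single example
print(pipelined_transcribe([0, 1, 2]), end_to_end_transcribe([0, 1, 2], params))
```

The point of the sketch is structural: the pipeline exposes hand-designed intermediate representations, while the end-to-end version has none.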
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPW1YbGqaMiAE6RaQ3auR6KUtclA_OTIDBjnO8RK8RO_O7npWY8uFrAfA86Cm8V1PzAmjxONZjyC7HB6aT3fSbruXEVMdDwwAZy5pczai5Q9C7ABzMahKDlXDv0JBPA-V78EGSv9T0iyk/s400/Brown+Robotics+Seminar+Talk+%25283%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Software 2.0</h2>
<p>When it comes to thinking about the tradeoff between hand-coded, pipelined approaches versus end-to-end learning, I like Andrej Karpathy's abstraction of <a href="https://karpathy.medium.com/software-2-0-a64152b37c35">Software 1.0 vs Software 2.0</a>: Software 1.0 is where a human explicitly writes down instructions for some information processing. Such instructions (e.g. in C++) are passed through a compiler that generates the low-level instructions that the computer actually executes. When building Software 2.0, you don't write the program - you give a set of inputs and outputs, and it's the ML system's job to find the best program that satisfies your input-output description. You can think of ML as a "higher order compiler that takes data and gives you programs".</p>
<p>The gradual or not-so-gradual subsumption of software 1.0 code into software 2.0 is inevitable - one might start by tuning some coefficients here and there, then you might optimize over one of several code branches to run, and before you know it, the system actually consists of an implicit search procedure over many possible sub-programs. The hypothesis is that as we increase availability of compute and data, we will be able to automatically do more and more search over programs to find the optimal routine. Of course, there is always a role for Software 1.0 - we need it for things like visualization and data management. All of these ideas are covered in Andrej's talks and blog posts, so I encourage you to check those out.
</p>
</div>
</div>
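Here is Software 2.0 in miniature, a toy I made up for illustration: instead of writing the program, we specify (input, output) pairs and search a small candidate space of programs (a few hand-written branches plus a swept coefficient) for the best fit.

```python
# Software 2.0 as search: "compile" a dataset into a program by picking
# the candidate with minimum error on the examples.

def make_scaler(c):
    return lambda x: c * x

def square(x):
    return x * x

def negate(x):
    return -x

# Candidate program space: a few code branches plus a tunable coefficient.
candidates = [square, negate] + [make_scaler(c) for c in range(-5, 6)]

def fit(examples):
    # The "higher order compiler": data in, program out.
    def loss(prog):
        return sum((prog(x) - y) ** 2 for x, y in examples)
    return min(candidates, key=loss)

examples = [(1, 3), (2, 6), (4, 12)]  # the hidden program is y = 3x
program = fit(examples)
print([program(x) for x in [1, 2, 4]])
```

Deep learning is the same idea with a much larger, differentiable candidate space searched by gradient descent instead of enumeration.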
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeDALSuSvKMYmAdPCwEzn2zs-taOfp3pfxvfOtlAJGQxqVq_aNia2xcvPGqTLaBSxRW2adxFVuCvB3nnfLK_4oPXjJY9TYGLO1t-YP4XS2nJfzPWg7s0ZDIvmaoirAX5U5OkmTPQkSSZU/s400/Brown+Robotics+Seminar+Talk+%25284%2529.png" width="100%"/>
</div>
<div class="right">
<h2>How Much Should We Learn in Robotics?</h2>
<p>End-to-end learning has yet to outperform the classical control-theory approaches in <a href="https://www.youtube.com/watch?v=fn3KWM1kuAw">some tasks</a>, so within the robotics community there is still an ideological divide on how much learning should actually be done. </p>
<p>On one hand, you have classical robotics approaches, which break the problem down into three stages: perception, planning, and control. Perception is about determining the state of the world, planning is about high-level decision making around those states, and control is about applying specific motor outputs so that you achieve what you want. Many of the ideas we explore in deep reinforcement learning today (meta-learning, imitation learning, etc.) have already been studied in classical robotics under different terminology (e.g. system identification). The key difference is that classical robotics deals with smaller state spaces, whereas end-to-end approaches fuse perception, planning, and control into a single function approximation problem. There's also a middle ground where one can attempt to use hand-coded constructs from classical robotics as a prior, and then use data to adapt the system to reality. According to Bayesian decision making theory, the stronger prior you have, the less data (evidence) you need to construct a strong posterior belief.
</p>
<p>I happen to fall squarely on the far side of the spectrum - the end-to-end approach. I'll discuss why I believe strongly in these approaches.</p>
</div>
</div>
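The prior-versus-data tradeoff can be made quantitative with a standard conjugate example (the numbers here are my own, not from the talk): for a Bernoulli parameter with a Beta(a, b) prior, observing h successes and t failures gives a Beta(a + h, b + t) posterior, so a stronger prior reaches the same posterior certainty with less data.

```python
# Beta-Bernoulli update: posterior certainty as a function of prior strength.

def beta_variance(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

def posterior_variance(prior_a, prior_b, successes, failures):
    # Conjugate update: Beta(a, b) prior + data -> Beta(a + h, b + t) posterior.
    return beta_variance(prior_a + successes, prior_b + failures)

weak_prior = (1, 1)      # uniform: "I assume nothing about the dynamics"
strong_prior = (50, 50)  # a hand-coded model asserting the parameter is near 0.5
data = (5, 5)            # the same ten trials for both

print(posterior_variance(*weak_prior, *data),
      posterior_variance(*strong_prior, *data))
```

With identical data, the strong-prior posterior is far more concentrated; the flip side, of course, is that a strong prior that is wrong takes much more data to overturn.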
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCW1pSd5VzTF034Vsajeb3VVazxyg4awHJ8ET60NzfsB0OSIJ61CtypIk40lz5uO6PJkMOYZNBSHKC_2YWYCMGj2hAubFcBNYOSpKKLVEdi2PuoGNkptuPMUAARS50vPIllagaYr0lmRA/s400/Brown+Robotics+Seminar+Talk+%25285%2529.png" width="100%"/>
<iframe class="BLOG_video_class" allowfullscreen="" width="400" height="322" youtube-src-id="z-2q1eMAwps" src="https://www.youtube.com/embed/z-2q1eMAwps"></iframe>
</div>
<div class="right">
<h2>Three reasons for end-to-end learning</h2>
<p>First, it's worked for other domains, so why shouldn't it work for robotics? If there is something about robotics that makes this decidedly not the case, it would be super interesting to understand what makes robotics unique. As an existence proof, our lab and other labs have already built a few real-world systems that are capable of doing manipulation and navigation end-to-end, from pixels to control. Shown on the left is our grasping system, QT-Opt, which essentially performs grasping using only monocular RGB, the current arm pose, and end-to-end function approximation. It can grasp objects it's never seen before. We've also had success on door opening and manipulation from imitation learning.</p>
</div>
</div>
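One notable design choice in QT-Opt is that it handles continuous actions by maximizing the learned Q-function with the cross-entropy method (CEM) rather than training a separate actor network. Below is a minimal CEM sketch; the quadratic `toy_q` is a stand-in for the learned Q-network, and the hyperparameters and "grasp pose" are arbitrary.

```python
import random
import statistics

random.seed(0)

def cem_argmax(q, dim, iters=10, pop=64, elite=6):
    # Cross-entropy method: sample actions from a Gaussian, keep the top
    # scorers under q, refit the Gaussian to them, and repeat.
    mean = [0.0] * dim
    std = [1.0] * dim
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=q, reverse=True)
        elites = samples[:elite]
        mean = [statistics.mean(col) for col in zip(*elites)]
        std = [statistics.stdev(col) + 1e-6 for col in zip(*elites)]
    return mean

def toy_q(action):
    # Stand-in for a learned Q(s, a), peaked at the "grasp pose" (0.3, -0.2).
    return -((action[0] - 0.3) ** 2 + (action[1] + 0.2) ** 2)

best = cem_argmax(toy_q, dim=2)
print(best)
```

In the real system the Q-function also conditions on the camera image and arm pose; the point here is only that a derivative-free sampler suffices to pick good continuous actions from a learned value function.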
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCur_bO9Sw_Loc6Oh5IApuXdqN-5MG1TLjzyHAj7bWiMY8PAPdTtKwNZ7_3mbvTuzQ96j6an_YPHdO_Qb5lr26KRiRaf-0ctB52Ecx0Wo-TVJ46g4ZFxKCV-QYov2gEcOdj3lPPHae-g0/s400/honeybee.PNG" width="100%"/>
</div>
<div class="right">
<h2>Fused Perception-to-Action in Nature</h2>
<p>Secondly, there are often many shortcuts one can take to solve specific tasks, without having to build a unified perception-planning-control stack that is general across all tasks. Researchers in <a href="https://www.youtube.com/watch?v=HJDFiuw9Djo&t=1133s">Mandyam Srinivasan's lab</a> have done cool experiments getting honeybees to fly and perch inside small holes, with a spiral pattern painted on the wall. They found that bees will decelerate as they approach the target by the simple heuristic of keeping the rate of image expansion (the spiral) constant. They also found that if you artificially increase or decrease the rate of expansion by spinning the spiral clockwise or counterclockwise, the honeybee will predictably speed up or slow down. This is Nature's elegant solution to a control problem: visually-guided odometry is computationally cheaper and less error-prone than having to detect where the target is in world frame, plan a trajectory, and so on. It may not be a general framework for planning and control, but it is sufficient for accomplishing what honeybees need to do.</p>
</div>
</div>
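A toy simulation of that control law (my own formalization, not the paper's): one way to hold the rate of image expansion constant is to keep the relative expansion rate v/d at a fixed k. Speed then becomes proportional to distance, so the bee decelerates exponentially and arrives at near-zero velocity without ever estimating a world-frame trajectory.

```python
# Landing by constant image expansion: speed v is slaved to distance d.

def land(d0, k=0.5, dt=0.01, steps=2000):
    d = d0
    trajectory = []
    for _ in range(steps):
        v = k * d       # control law: keep expansion rate v/d equal to k
        d -= v * dt     # approach the target
        trajectory.append((d, v))
    return trajectory

traj = land(d0=2.0)
final_d, final_v = traj[-1]
print(final_d, final_v)
```

Because v shrinks with d, the touchdown is automatically gentle; the controller needs only an optic-flow signal, not a distance sensor.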
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQCIAQL4xqud2fYUoiS5LBw_arFuXh55XxoSrJaHH1rgekINE0RLN0me8ee7zbYr5qANM9Nur7nGw_G1DvPLtYpCnSA_QQTVoA6Cmi6TUKb6ykbLXQh8Ac5309JvLieiNMqudcKSkve1o/s400/slide_4.jpg" width="100%"/>
</div>
<div class="right">
<p>Okay, maybe honeybees can use end-to-end approaches, but what about humans? Do we need a more general perception-planning-control framework for human problems? Maybe, but we also use many shortcuts for decision making. Take ball catching: we don't catch falling objects by solving ODEs or planning; instead, we employ a gaze heuristic: as long as an object stays at the same point in your field of view, you will eventually intersect the object's trajectory. Image taken from <a href="https://slideplayer.com/slide/4626208/">Henry Brighton's talk on Robust decision making in uncertain environments</a>.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhViC0bc3UjUOz3dhzgYupepV3Nmxccb6D8R0CNmk3plZ0ah4q3X2jD29adVNCm4PGWQsvBDj6n9kqDFL4oXksl3CsXjPGFOidbKvA9ea67-pa0CYZRWkhQlDxn6zxmjiJbImZ-yRwH7eQ/s400/Brown+Robotics+Seminar+Talk+%25286%2529.png" width="100%"/>
</div>
<div class="right">
<h2>The Trouble With Defining Anything</h2>
<p>Third, we tend to describe decision-making processes with words. Words are pretty much all we have to communicate with one another, but they are inconsistent with how we actually make decisions. I like to describe this as an intelligence "iceberg": the surface of the iceberg is how we <i>think</i> our brain ought to make decisions, but the vast majority of intelligent capability is submerged from view, inaccessible to our consciousness and incompressible into simple language like English. That is why we are capable of performing intelligent feats like perception and dexterous manipulation, but struggle to articulate in short sentences how we actually perform them. If it were easy to articulate in clear, unambiguous language, we could just type up those words into a computer program and not have to use machine learning at all. Words about intelligence are lossy compression, and a lossy representation of a program is not sufficient to implement the full thing.</p>
<p>Consider a simple task of identifying the object in the image on the left (a cow). A human might attempt to string some word-based reasoning together to justify why this is a cow: "you see the context (an open field), you see a nose, you see ears, and black-and-white spots, and maybe the most likely object that has all these parts is a cow". </p>
<p>This is a post-hoc justification, not a full description of how our perception system registers whether something is a cow. If you take an actual system capable of recognizing cows with great accuracy (e.g., a convnet) and inspect the salient neurons and channels that respond strongly to cows, you will find a strange-looking feature map that is hard to put into words. We can't define anything with human-readable words or code at the level of precision needed for interacting with reality, so we must use raw sensory data - grounded in reality - to figure out the decision-making capabilities we want.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEje8qWsFw0NLpf7UFHYmfN8NyZL7XnFbuCZmcSpVhruzgRHnweyW-qtp4Ry9WyGBYzx0ImsLax0QhDlIj3FN9maGSkw21jGsj5Aw2TwQz6iaXcbTfxUrFeEceD1gUg0ru5UnrnFBU02gNk/s400/Brown+Robotics+Seminar+Talk+%25287%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Cooking is Not Software 1.0</h2>
<p>Our obsession with focusing on the top half of the intelligence iceberg biases us towards the Software 1.0 way of programming, where we take a hard problem and attempt to describe it - using words - as the composition of smaller problems. There is also a tendency for programmers to think of general abstractions for their code, via ontologies that organize words with other words. Reality has many ways to defy your armchair view of what cows are and how robotic skills ought to be organized to accomplish tasks in an object-oriented manner.</p>
<p>Cooking is one of the holy grails of robotic tasks, because environments are open-ended and there is a lot of dexterous manipulation involved. Cooking analogies abound in programming tutorials - a common example is making breakfast with asynchronous programming. It's tempting to think that you can build a cooking robot by simply breaking down the multi-stage cooking task into sub-tasks and individual primitive skills.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkvRYCjo70PY4pw2uW36VIwEbLy7nYWGhZF1Qu0fF8vQJ-8lH0A3SJvgC2U7KXWSp80XgxZmGenW6VDiXWXLyWxuiXgToJJUDD3WtThIMPs-XnJXt3yzVpm0X30-zqvx-hibr3GTA616s/s400/Brown+Robotics+Seminar+Talk+%25288%2529.png" width="100%"/>
</div>
<div class="right">
<p>Sadly, even the most trivial of steps abounds with complexity. Consider the simple task of spreading jam on some toast.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaC-DUmv8u01tX9qQxptYEiF7_6-nJcyBPrELsiYYS9wSkRsEvfDs4W8SThNDb6O5-CfPWdLmdOLYBJBdb-QACwxsip8LQLfx2FTMsQH359c3Y7kz2k6YMqSG3-4kS1EGGWwvVbDtE0WE/s400/Brown+Robotics+Seminar+Talk+%25289%2529.png" width="100%"/>
</div>
<div class="right">
<p>The software 1.0 programmer approaches this problem by breaking down the task into smaller, reusable routines. Maybe you think to yourself, first I need a subroutine for holding the slice of toast in place with the robot fingers, then I need a subroutine to spread jam on the toast.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLEJFyBNQ5cBgrynOUPH1DsY272wyA5HjEraJoy3ES5WXyxBoNqWk_qPKRfhB_o_28Bf8ji09jT2MLBPp3z2nfegQ0LhdjxsdL338knYl8o_OTzSU0a8v1UO7t_lFlA6WbxLSBE-dvxVk/s400/Brown+Robotics+Seminar+Talk+%252810%2529.png" width="100%"/>
</div>
<div class="right">
<p>Spreading jam on toast entails three subroutines: a subroutine for scooping the jam with the knife, depositing the lump of jam on the toast, then spreading it evenly.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBREUTZp1K1MYFZPVxukrUAPGrPfXWhoG1tVoVf2gbla8KtCrzJsLtzvd7IeNGsz1GYyWn0PiZQFHI6rEnUEGWuetVDvZG6pDZlMB-A_opWHeGd6HzyiraR6PwWUXnng9msOlnVpYsvCM/s400/Brown+Robotics+Seminar+Talk+%252811%2529.png" width="100%"/>
</div>
<div class="right">
<p>Here is where the best laid plans go awry. A lot of things can happen in reality at any stage that would prevent you from moving onto the next stage. What if the toaster wasn't plugged in and you're starting with untoasted bread? What if you get the jam on the knife but in the process break something on the robot and you aren't checking to make sure everything is fine before proceeding to the next subroutine? What if there isn't enough jam in the jar? What if you're on the last slice of bread in the loaf and the crust side is facing up?</p>
<p>The prospect of writing custom code to handle the ends of the bread loaf (literal edge cases) ought to give one pause as to whether this approach is scalable to unstructured environments like kitchens - you end up with a million lines of code that essentially capture the state machine of reality. Reality is chaotic - even if you had a perfect perception system, simply managing reality at the planning level quickly becomes intractable. Learning-based approaches give us hope of managing this complexity by accumulating all these edge cases in data, and letting the end-to-end objective (getting some jam on the toast) and the Software 2.0 compiler figure out how to handle them. My belief in end-to-end learning is not because I think ML has unbounded capability, but because the alternative approach, where we capture all of reality in a giant hand-coded state machine, is utterly hopeless.</p>
</div>
</div>
<div class="row">
<div class="left">
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="_Hd7JkOo0B8" width="400" height="322" src="https://www.youtube.com/embed/_Hd7JkOo0B8"></iframe>
</div>
<div class="right">
<p>Here is a video where I am washing and cutting strawberries and putting them on some cheesecake. A roboticist who spends too much time in the lab and not the kitchen might prescribe a program that (1) "holds strawberry", (2) "cuts strawberry", (3) "picks-and-places on cheesecake", but if you watch the video frame by frame, there are a lot of other manipulation tasks happening in the meantime - opening and closing containers with one or two hands, pushing things out of the way, inspecting for quality. To use the Intelligence Iceberg analogy: the recipe and high-level steps are the surface ice, but the submerged bulk is all the <a href="https://www.youtube.com/watch?v=b1lysnGFpqI">little micro-skills</a> the hands need to perform to open containers and adapt to reality. I believe the most dangerous conceit in robotics is to design elegant programming ontologies on a whiteboard while ignoring the subtleties of reality and what its data tells you.</p>
<p>There are a few links I want to share highlighting the complexity of reality. I enjoyed this recent Quanta Magazine article <a href="https://www.quantamagazine.org/what-is-life-its-vast-diversity-defies-easy-definition-20210309/">about the trickiness of defining life</a>. This is not merely a philosophical question; people at NASA are planning a Mars expedition to collect soil samples and answer whether life ever existed on Mars. This mission requires clarity on the definition of life. Just as it is hard to define intelligent capabilities in precise language, so it is hard to define life. These two words may as well be one and the same.</p>
<p>Klaus Greff's talk on <a href="https://slideslive.com/38930701/what-are-objects">What Are Objects?</a> raises some interesting questions about the fuzziness of words. Obviously we want our perception systems to recognize objects so that we may manipulate and plan around them. But as the talk points out, defining what is and is not an object can be quite tricky (is a hole an object? Is the frog prince defined by what he once was, or what he looks like now?).</p>
<p>I've also written a <a href="https://blog.evjang.com/2018/02/teacup-story.html">short story</a> on the trickiness of defining even simple classes like "teacups".</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH5Cc66Vni4vpjTFoU3f-bQ96nuLQlGSkl5CxpeEBiZJ98tO_eYK9fBfAGy-g0plvLH0zg56e1I9QFmJBqdL6b8pCd9kf4GfAAKASdC34QCDpNDHqgD97ZPwGI2hVkLgZr0CgwNwr3_q0/s400/Brown+Robotics+Seminar+Talk+%252813%2529.png" width="100%"/>
<iframe class="BLOG_video_class" allowfullscreen="" youtube-src-id="QzlI_ny4l8s" width="400" height="322" src="https://www.youtube.com/embed/QzlI_ny4l8s"></iframe>
</div>
<div class="right">
<p>I worked on a project with <a href="https://people.eecs.berkeley.edu/~coline/">Coline Devin</a> where we used data and Software 2.0 to <i>learn</i> a definition of objects without any human labels. We use a grasping system to pick up stuff and define objects as "that which is graspable". Suppose you have a bin of objects and pick one of them up. The object is now removed from the bin, and maybe the other objects have shifted around the bin a little. You can also easily look at whatever is in your hand. We then design an embedding architecture and train it with the following assumption about reality: the embedding of the bin before the grasp, minus the embedding of the bin after the grasp, should equal the embedding of whatever was picked up. This allowed us to bootstrap a completely self-supervised instance grasping system from a grasping system, without ever relying on labels. This is by no means a comprehensive definition of "object" (see Klaus's talk), but I think it's a pretty good one.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHlY3ovW1g_MtkuWycgw67kfi1HOS5H3nAeJ6AfV5s4X04u8cJIMmJHaMGryT8wv6SYPUOeLus9UHzTKa5IDy4VgjswQQ-LH_hXMulYCT7NGTes2FRrRnBjgSV8g55vWxpPMJzWcnE9wg/s400/photo-1596741964346-791466b552b6.jpg" width="100%"/>
</div>
<div class="right">
<h2>Science and Engineering of End-to-End ML</h2>
<p>End-to-end learning is a wonderful principle for building robotic systems, but it is not without its practical challenges and execution risks. Deep neural nets are opaque black box function approximators, which makes debugging them at scale challenging. This requires discipline in both engineering and science, and often the roboticist needs to make a choice as to whether to solve an engineering problem or a scientific one.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDvStHtHpE9I_frRJBZGMnmAQxS3oTjFmoqD8514C1o0rkb0Js6RWhSuH2Iplv8BrU9Aj22hU9taBZuPtExYTI_HafEMgFycCUw8ieoiNzENkXcnGDt9rHVySosH_K1GyXljcyHG67UY0/s400/Brown+Robotics+Seminar+Talk+%252815%2529.png" width="100%"/>
</div>
<div class="right">
<p>This is what a standard workflow looks like for end-to-end robotics. You start by collecting some data, cleaning it, then designing the input and output specification. You fit a model to the data, validate it offline with some metrics like mean-squared error or accuracy, then deploy it in the real world and see if it continues to work as well on your validation sets. You might iterate on the model and validation via some kind of automated hyperparameter tuning.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZCWIUcmemobXdSsEBfr-15HBc0HKRiEoBPFz4eygFML_teVUVBN-fDzHjpem1KirnleCcLRUi3Rna7qDU9yQszGg_B1Hlfhg3dlSvbwIZO5L-OGrdUhrZbqoHzogelnuogklpJujffrE/s400/Brown+Robotics+Seminar+Talk+%252816%2529.png" width="100%"/>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiWXbSsu6bvZMZAoHRpkrz1dmo360dSAHNuveIyP8ZQLNUaVhoR6NVR9uGnp5Wo18N470ClvsA4wlOXxlgMSF-_D1mj9juzQLSIlaNroJv9QGhfC7XuVbPxP61rjzijiohRPzPkaonTno/s400/Brown+Robotics+Seminar+Talk+%252817%2529.png" width="100%"/>
</div>
<div class="right">
<p>Most ML PhDs spend all their time on the model training and validation stages of the pipeline. RL PhDs have a slightly different workflow, where they think a bit more about data collection via the exploration problem. But most RL research also happens in simulation, where there is no need to do data cleaning and the feature and label specification is provided to you via the benchmark's design. </p><p>While it's true that advancing learning methods is the primary point of ML, I think this behavior is the result of perverse academic incentives.</p>
<p>There is a vicious tendency for papers to put down old ideas and hype up new ones in the pursuit of "technical novelty". The absurdity of all this is that if we ever found that an existing algorithm works super well on harder and harder problems, it would have a hard time getting published in academic conferences. Reviewers operate under the assumption that our ML algorithms are never good enough.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge-b1F6ZsBHxB1z3jyWBqjT7w_yIX-d2APDyKDGi6YQcfl6wCcYVQPWenTc0sHFhxwMRsrpFdDU0SEAn3vihs7OhiE0TEn0ZTgRV9KOWTkINYrlmRHSxWmjw9OdwjtiDg-gpBXSxz17Ds/s400/Brown+Robotics+Seminar+Talk+%252818%2529.png" width="100%"/>
</div>
<div class="right">
<p>In contrast, production ML usually emphasizes everything else in the pipeline. Researchers on Tesla's Autopilot team have found that in general, 10x'ing your data on the same model architecture outperforms any incremental modeling improvement in the last few years. As Ilya Sutskever says, most incremental algorithm improvements are just <a href="https://twitter.com/ilyasut/status/1114658175272095744?s=20">data in disguise</a>. Researchers at quantitative trading funds do not change models drastically: they spend their time finding novel data sources that add additional predictive signal. By focusing on large-scale problems, you get a sense of where the real bottlenecks are. You should only work on innovating new learning algorithms if you have reason to believe that that is what is holding your system back.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhanSfOs2T9ZHxzKsWdQIYe2LoPT1v0QeydJKHwYhm4YR9oQHfi3Zd4uum5Y7W0qo2RSzpKDu5CvEpAwlGTFpEuMQ3z-LLwUApg4LEWGD_Qx6365P-Tyb_vgIWTM5S5uTbD5kHN2yKrBzs/s400/Brown+Robotics+Seminar+Talk+%252819%2529.png" width="100%"/>
</div>
<div class="right">
<p>Here are some examples of real problems I've run into in building end-to-end ML systems. When you collect data on a robot, certain aspects of the code get baked into the data. For instance, the tuning of the IK solver or the acceleration limits on the joints. A few months later, the code on the robot controllers might have changed in subtle ways, like maybe the IK solver was swapped with a different solver. This happens a lot in a place like Google where multiple people work on a single codebase. But because assumptions of the v0 solver were baked into the training data, you now have a train-test mismatch and the ML policy no longer works as well.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_P_QYK4s7tlVpJKWqHqDAtJlG9ckYABytErk42rKDrGDmUavirOjlDf2zDf6Qnnq5wSt3b9ej7okzETRE4jzwNgJEjCVcLBUMUJ6Hpo5Z6GqhFTh0SIXPJR4Rq90wybRKoasP0SPG8V0/s400/Brown+Robotics+Seminar+Talk+%252820%2529.png" width="100%"/>
</div>
<div class="right">
<p>Consider an imitation learning task where you collect some demonstrations, and then predict actions (labels) from states (features). An important unit test to perform before you even start training a model is to check whether a robot that replays the exact labels in order can actually solve the task (for an identical initialization as the training data). This check is important because the way you design your labels might make assumptions that don't necessarily hold at test-time.</p>
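<p>In code, the check is nothing more than replaying the recorded labels open-loop from the demo's initial state. A toy sketch, where the 1-D reaching environment and success criterion are hypothetical stand-ins for a real robot stack:</p>

```python
# Unit test for imitation data: replaying the recorded action labels
# open-loop, from the same initial state as the demonstration, should
# still solve the task. Environment and success check are toy stand-ins.
class ToyReachEnv:
    def reset(self, state=0.0):
        self.x = state
        return self.x

    def step(self, action):
        self.x += action
        return self.x

def replay_succeeds(env, initial_state, actions, success_fn):
    obs = env.reset(state=initial_state)
    for a in actions:          # execute the labels verbatim, no policy
        obs = env.step(a)
    return success_fn(obs)

# a demo that moved the arm from 0.0 to the goal at 2.0 in four steps
ok = replay_succeeds(ToyReachEnv(), 0.0, [0.5] * 4,
                     success_fn=lambda x: abs(x - 2.0) < 1e-6)
assert ok
```

If this replay fails, no amount of model training will help - the labels themselves encode assumptions that don't hold at execution time.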
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqTxgNk5ahuVSa_dJ6uQJlJnI-EaNCVikVxsSVeQriOvA_mdknIcJ07x5gjgaQAJ6rPwmfa7evH3sOKU6Ug6ow5q1rVU9P97U0orj2mBrcrP7DfhO4Z6WGg7tZkSq46c8FD2UTsAq422U/s400/Brown+Robotics+Seminar+Talk+%252821%2529.png" width="100%"/>
</div>
<div class="right">
<p>I've found data management to be one of the most crucial aspects of debugging real world robotic systems. Recently I found a "data bug" where there was a demonstration of the robot doing nothing for 5 minutes straight - the operator probably left the recording running without realizing it. Even though the learning code was fine, noisy data like this can be catastrophic for learning performance.</p>
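<p>Bugs like this are cheap to screen for before training. A hedged sketch of a sanity filter that flags near-constant demonstrations; the variance threshold is an arbitrary illustrative choice:</p>

```python
from statistics import pstdev

# Flag demonstrations whose action stream is essentially constant,
# e.g. a recording left running while the robot sat idle for minutes.
# The threshold is an illustrative choice, not a recommendation.
def is_idle(actions, threshold=1e-4):
    if len(actions) < 2:
        return True
    dims = zip(*actions)            # per-dimension action streams
    return max(pstdev(d) for d in dims) < threshold

assert is_idle([(0.0, 0.0)] * 300)                       # nothing happening
assert not is_idle([(0.1 * i, 0.0) for i in range(300)]) # a real motion
```

Running checks like this over every incoming episode turns "catastrophic silent data bug" into "noisy episode quarantined with a log message".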
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigHd1xZmrc8gU0qLp8YzrAJqeHNr-IQe4PjdWg8OBzlf3koi_I5p8AyakyKDZMyJOxUUs2F6JW_ioBnUTJrrojysYdk_3tynhDbRBQr_rakXEyzo7_-d0p7L9kWeFKjcXfcVKQ-823Rfs/s400/Brown+Robotics+Seminar+Talk+%252836%2529.png" width="100%"/>
</div>
<div class="right">
<p>As roboticists we all want to see in our lifetime robots doing holy grail tasks like tidying our homes and cooking in the kitchen. Our existing systems, whether you work on Software 1.0 or Software 2.0 approaches, are far away from that goal. Instead of spending our time researching how to re-solve a task a little bit better than an existing approach, we should be using our existing robotic capabilities to collect new data for tasks we can't solve yet.</p>
<p>There is a delicate balance in choosing between understanding ML algorithms better, versus pushing towards a longer term goal of qualitative leaps in robotic capability. I also acknowledge that the deep learning revolution for robotics needs to begin with solving the easier tasks and then eventually working its way up to the harder problems. One way to accomplish both good science and long term robotics is to understand how existing algorithms break down in the face of harder data and tougher generalization demands encountered in new tasks.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJhItN55AeNJquRmVh6D1-jRaRniovamt9094hEx76BTIOvt7XEL1ch3aL8zzdGesn2zbCtL98okFViYJL1yVnI1yt2keSoOg9AzEqrp4JRqHvrULCTWr6BT__lBO_0PANzQsIgAtKIFc/s400/photo-1533073526757-2c8ca1df9f1c.jpg" width="100%"/>
</div>
<div class="right">
<h2>Interesting Problems</h2>
<p>Hopefully I've convinced you that end-to-end learning is full of opportunities to really get robotics right, but also rife with practical challenges. I want to highlight two interesting problems that I think are deeply important to pushing this field forward, not just for robotics but for any large-scale ML system.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaV4_kllNs_g0Ts45_AOlaeuzEh81J6eMsp44sUI5B4qq3tg5ckSa0vHSLw3u-qZrFdUuSr7a4UPjpiYPxnAFTtJyidx0VhWFkA3NpVVVCdZnolhWtrdPl670gc5OMyuvKTWL9KGPEAbY/s400/Brown+Robotics+Seminar+Talk+%252823%2529.png" width="100%"/>
</div>
<div class="right">
<p>A typical ML research project starts from a fixed dataset. You code up and train a series of ML experiments, then you publish a paper once you're happy with one of the experiments. These codebases are not very large and don't get maintained beyond the duration of the project, so you can move quickly and scrappily with little to no version control or regression testing.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoNLLKsULSQRFjAcu11jBi69E2gdZGOcxAu5h4T7qbeK5DN-vBWZsL8lJZEZSOugv63-o5OqARF-Mupf4rWLPhFreBUp-l_5VnB02M9_t0CmswQjUQnTdGcvSRHa0TMaGqEWMy-eRRO7g/s400/Brown+Robotics+Seminar+Talk+%252824%2529.png" width="100%"/>
</div>
<div class="right">
<p>Consider how this would go for a "lifelong learning" system for robotics, where you are collecting data and never throwing it away. You start the project with some code that generates a dataset (Data v1). Then you train a model with some more code, which compiles a Software 2.0 program (ckpt.v1.a). Then you use that model to collect more data (Data v2), and concatenate your datasets together (Data v1 + Data v2) to then train another model, and use that to collect a third dataset (Data v3), and so on. All the while you might be publishing papers on the intermediate results.</p>
<p>The tricky thing here is that the behavior of Software 1.0 and Software 2.0 code is now baked into each round of data collection, and the Software 2.0 code has assumptions from all prior data and code baked into it. The dependency graph between past versions of code and your current system becomes quite complex to reason about.</p>
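<p>Making that dependency graph explicit is one way to keep it auditable. A sketch of a minimal lineage record - the structure is illustrative, not a real tool:</p>

```python
from dataclasses import dataclass, field
from typing import Optional

# Every dataset records the checkpoint that collected it; every
# checkpoint records the datasets it was trained on. Walking the graph
# recovers everything "baked into" the current model.
@dataclass
class Checkpoint:
    name: str
    trained_on: list = field(default_factory=list)

@dataclass
class Dataset:
    name: str
    collected_by: Optional[Checkpoint] = None

def baked_in(ckpt):
    """All datasets transitively baked into a checkpoint."""
    deps = []
    for ds in ckpt.trained_on:
        deps.append(ds.name)
        if ds.collected_by is not None:
            deps.extend(baked_in(ds.collected_by))
    return deps

v1 = Dataset("data_v1")                       # scripted collection
ckpt1 = Checkpoint("ckpt_v1a", [v1])
v2 = Dataset("data_v2", collected_by=ckpt1)   # collected by the model
ckpt2 = Checkpoint("ckpt_v2a", [v1, v2])
assert baked_in(ckpt2) == ["data_v1", "data_v2", "data_v1"]
```

Note how `data_v1` appears twice in the lineage of `ckpt_v2a`: once directly, and once through the checkpoint that collected `data_v2`.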
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeXB-teprsC4ZP_JAFzyn9tF5BcJgTcGnQfLKYYLutvPPtwSyd1HdEEQ2NagsEWHUWtFPvmYWkzoNhzRMzUjjjH2K-VkAAwOrVzQdEJ5oBxoZ1q2T8M-DptT8TZo_0k1DQE8vsNOaD754/s400/Brown+Robotics+Seminar+Talk+%252825%2529.png" width="100%"/>
</div>
<div class="right">
<p>This only gets trickier if you are running multiple experiments and generating multiple Software 2.0 binaries in parallel, and collecting with all of those.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR-tx4NV6cpvsJPi6AbAA7mEZ0j1JjXWL3v_qUd2TNBSM8aTBEhmOC0HqrGMkf4YvPD3rXAlq7N_sq2lluz7X2W42R7w-PD9f4t2SK0N00Dd5EEKVPfn0O8KczyvMFYprbSrJX4bIryKY/s400/Brown+Robotics+Seminar+Talk+%252826%2529.png" width="100%"/>
</div>
<div class="right">
<p>Let's examine what code gets baked into a collected dataset. It is a combination of Software 1.0 code (IK solver, logging schema) and Software 2.0 code (a model checkpoint). The model checkpoint itself is the distillation of a ML experiment, which consists of more Software 1.0 code (Featurization, Training code) and Data, which in turn depends on its own Software 1.0 and 2.0 code, and so on.</p>
<p>Here's the open problem I'd like to pose to the audience: <b>how can we verify the correctness of lifelong learning systems (accumulating data, changing code), while ensuring experiments are reproducible and bug-free? Version control software and continuous integration testing are indispensable for team collaboration on large codebases. </b> What would the <a href="https://git-scm.com/">Git</a> of Software 2.0 look like?</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8w_Q8I5fSWr78DbOtEFrxZAHi7w3M_ywHhPJ5D8ZtJPsuDJGeeVZk9Qg9nurWz74anND-mDb0XMd_eHL57XUXU20kAxmlc-DwQJinZelw7bF5wdFcbJJjIMCKOV3UHd_F_KR7esW6oCM/s400/Brown+Robotics+Seminar+Talk+%252827%2529.png" width="100%"/>
</div>
<div class="right">
<p>Here are a couple ideas on how to mitigate the difficulty of lifelong learning. The flywheel of an end-to-end learning system involves converting data to a model checkpoint, then a model checkpoint to predictions, and model predictions to a final real world evaluation number. That eval also gets converted into data. It's critical to test these four components separately to ensure there are no regressions - if one of these breaks, so does everything else.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkHMMyBAxzwDUHnV4lRYBQ0FgRBctddM3myyiTpE6CjH-lqZdz1qX3GlQ0Mk_BA0v1LAsnwowtu1Vzy4n2duuPpR_HBBtvadNOksIIA78RjYlt-Rhd6VH6H7vHZkwIOhuh9fgHLtIALoE/s400/Brown+Robotics+Seminar+Talk+%252828%2529.png" width="100%"/>
</div>
<div class="right">
<p>Another strategy is to use Sim2Real, where you train everything in simulation and develop a lightweight fine-tuning procedure for transferring the system to reality. We rely on this technique heavily at Google and I've heard this is OpenAI's strategy as well. In simulation, you can transmute compute into data, so data is relatively cheap and you don't have to worry about handling old data. Every time you change your Software 1.0 code, you can just re-simulate everything from scratch and you don't have to deal with ever-increasing data heterogeneity. You might still have to manage some data dependencies for real world data, because typically sim2real methods require training a CycleGAN.</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAiqhsbE5eVNj5dLna8GZDUmqPQt5Al5CaKOanyb3T9b0FIcLOyx3ZWCOKUw-D0yBDPN3isUYcMbxYQr1Ul-O6wsQYdNNKYKsKlI-T_TC4adziaCN1PU-50TMhyASPkzmE4VEXMz2ogjQ/s400/Brown+Robotics+Seminar+Talk+%252829%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Compiling Software 2.0 Capable of Lifelong Learning</h2>
<p>When people use the phrase "lifelong learning" there are really two definitions. One is about lifelong dataset accumulation: concatenating prior datasets to train systems with new capabilities. Here, we may re-compile the Software 2.0 over and over again.</p>
<p>A stronger version of "lifelong learning" is to attempt to train systems that learn on their own and never need to have their Software 2.0 re-compiled. You can think about this as a task that runs for a very long time.</p>
<p>Many of the robotic ML models we build in our lab have goldfish memories - they make all their decisions from a single instant in time. They are, by construction, incapable of remembering what the last action they took was or what happened 10 seconds ago. But there are plenty of tasks where it's useful to remember: </p>
<ul>
<li>An AI that can watch a movie (&gt;170k images) and give you a summary of the plot.</li>
<li>An AI that is conducting experimental research, and needs to remember hundreds of prior experiments to build up its hypotheses and determine what to try next.</li>
<li>An AI therapist that should remember the context of all your prior conversations (say, around 100k words).</li>
<li>A robot that is cooking and needs to leave something in the oven for several hours and then resume the recipe afterwards.</li>
</ul>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLyWkAVroC-xQxxJWtK-VcfjPkoIMmtU6naNk_XXluqDghaI2zlh2Zt7uu6kOhZ9gy74xWv_ewXJK5_o1CAEx4Y23VkaikK2KHUxVwLQ8ZWF3bGfQh4C7brMJp0ZtxzM1xeETc0wM-0m4/s400/Brown+Robotics+Seminar+Talk+%252830%2529.png" width="100%"/>
</div>
<div class="right">
<p>Memory and learning over long time periods requires some degree of selective memory and attention. We don't know how to select which moments in a sequence are important, so we must acquire that by compiling a Software 2.0 program. We can train a neural network to fit some task objective to the full "lifetime" of the model, and let the model figure out how it needs to selectively remember within that lifetime in order to solve the task.</p>
<p>However, this presents a big problem: in order to optimize this objective, you need to run forward predictions over every step in the lifetime. If you are using backpropagation to train your networks, then you also need to run a similar number of steps in reverse. If you have N data elements and the lifetime is T steps long, the computational cost of learning is between O(NT) and O(NT^2), depending on whether you use RNNs, Transformers, or something in between. Even though a selective attention mechanism might be an efficient way to perform long-term memory and learning, the act of finding that program via Software 2.0 compilation is very expensive because we have to consider full sequences.</p>
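<p>To make the scaling concrete, here is a back-of-envelope sketch; constants and architecture details are dropped, and the lifetime length is an illustrative number:</p>

```python
# Cost of one training pass over n lifetimes of t steps each
# (constants dropped): a recurrent model touches each step once, while
# full self-attention compares every pair of steps in the sequence.
def recurrent_cost(n, t):        # O(N * T)
    return n * t

def attention_cost(n, t):        # O(N * T^2)
    return n * t * t

# a single 2-hour "lifetime" observed at 10 Hz is already 72,000 steps,
# so the quadratic term dominates long before lifetimes get interesting
T = 2 * 60 * 60 * 10
assert attention_cost(1, T) // recurrent_cost(1, T) == T
```
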
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjVGV03GIYaQmsktWCSha8fSOHyoPlWrRdrJrMkzzq9mWIMTq3syS9kAERwOYWRMAX_Smq3eycMgfqkgwC8-E6mMQPecZM2yJucKUzBKnqnS1RGoObMj2PojFMENan1ZUkX8e2muSpISY/s400/Brown+Robotics+Seminar+Talk+%252832%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Train on Short Sequences and It Just Works</h2>
<p>The optimistic take is that we can just train on shorter sequences, and it will just generalize to longer sequences at test time. Maybe you can train selective attention on short sequences, and then couple that with a high capacity external memory. Ideas from <a href="https://arxiv.org/abs/2007.03629">Neural Program Induction</a> and <a href="https://arxiv.org/abs/1410.5401">Neural Turing Machines</a> seem relevant here. Alternatively, you can use ideas from Q-learning to essentially do dynamic programming across time and avoid having to ingest the full sequence into memory (<a href="https://openreview.net/pdf?id=r1lyTjAqYX">R2D2</a>)</p>
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieIDL9d6DYrEbN3V8MFsgziJHvdHU4IGkyg12WdnU033lQVUz93x5_HOxL6XJ5MRBJznPWrXIfvCQBUvZU4Lw3vUDlMZyHgYIFyu1kOX_rJGsxcrQAu7jNjoVyv2AtqfDQ5E0_ljyQeF0/s400/Brown+Robotics+Seminar+Talk+%252833%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Hierarchical Computation</h2>
<p>Another approach is to fuse multiple time steps into a single one, potentially repeating this trick over and over again until you have effectively O(log(T)) computation cost instead of O(T) cost. This can be done in both forward and backward passes - clockwork RNNs and Dilated Convolutions used in WaveNet are good examples of this. A variety of recent sub-quadratic attention improvements to Transformers (Block Sparse Transformers, Performers, Reformers, etc.) can be thought of as special cases of this as well.</p>
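<p>The exponential trick shows up directly in the receptive-field arithmetic of dilated causal convolutions. A sketch, using the standard WaveNet-style doubling dilation schedule for illustration:</p>

```python
# Receptive field of stacked dilated causal convolutions: a layer with
# kernel size k and dilation d adds (k - 1) * d steps of context, so a
# doubling dilation schedule covers T timesteps in O(log T) layers.
def receptive_field(kernel_size, dilations):
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# 10 layers with dilations 1, 2, 4, ..., 512 see 1024 timesteps
assert receptive_field(2, [2 ** i for i in range(10)]) == 1024
```

A non-dilated stack with the same kernel size would need roughly a thousand layers to see the same context.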
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMDSHYdkD3f1zTZF-YZ2pRrFus-T_Kh2RlQ0D75Pa6baBYAxfXKtAsPdPREHXyhvmVVOBNkyklUbmw51T9pp3BSDDK_8V7i_2bkEcXTBB-NwlQbhK6SCILYzI_Wc0S53jOmAv01AvfmaY/s400/Brown+Robotics+Seminar+Talk+%252834%2529.png" width="100%"/>
</div>
<div class="right">
<h2>Parallel Evolution</h2>
<p>Maybe we do need to just bite the bullet and optimize over the full sequences, but use embarrassingly parallel algorithms to amortize the time complexity (by distributing it across space). Rather than serially running forward-backward on the same model over and over again, you could imagine testing multiple lifelong learning agents simultaneously and choosing the best of K agents after T time has elapsed.</p>
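<p>The selection step itself is trivial; the expense lives in the K parallel lifetimes. A sketch where the agents and the <code>run_lifetime</code> scoring function are hypothetical toy stand-ins:</p>

```python
# Best-of-K selection over lifelong agents: evaluate each candidate over
# a full lifetime (embarrassingly parallel in practice, serial here for
# clarity) and keep the highest-scoring one.
def best_of_k(agents, run_lifetime):
    scores = [run_lifetime(a) for a in agents]
    return agents[scores.index(max(scores))]

# toy stand-ins: agents differ only in a hyperparameter, and we pretend
# a lifetime score peaks at lr == 1e-3
agents = [{"lr": 1e-2}, {"lr": 1e-3}, {"lr": 1e-4}]
score = lambda a: -abs(a["lr"] - 1e-3)
assert best_of_k(agents, score) == {"lr": 1e-3}
```

Wall-clock time stays at one lifetime T, while the serial cost is traded for K copies of the hardware.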
</div>
</div>
<div class="row">
<div class="left">
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBti3GnYG7yY6caXTOeiEtuLv9l3PNfgS-sfv4VpsAOS4jbWf5cTNw8dSEMG-yxxC0T757UQGP2UPpZLZFZ4vyyv4wp48M80jIn-f09eR8p2SJdUfqeO9Iu4MPEWbLGZbH61AjibqG_3U/s400/Brown+Robotics+Seminar+Talk+%252835%2529.png" width="100%"/>
</div>
<div class="right">
<p>If you're interested in these problems, here's some concrete advice for how to get started. Start by looking up the existing literature in the field, pick one of these papers, and see if you can re-implement it from scratch. This is a great way to learn and make sure you have the necessary coding chops to get ML systems working well. Then ask yourself, how well does the algorithm handle harder problems? At what point does it break down? Finally, rather than thinking about incremental improvements to existing algorithms and benchmarks, constantly be thinking of harder benchmarks and new capabilities.</p>
</div>
</div>
<div>
<h2>Summary</h2>
<ul>
<li>Three reasons why I believe in end-to-end ML for robotics: (1) it worked for other domains, (2) fusing perception and control is a nice way to simplify decision making for many tasks, and (3) we can't define anything precisely, so we need to rely on reality (via data) to tell us what to do.</li>
<li>When it comes to improving our learning systems, think about the broader pipeline, not just the algorithmic and mathy learning part.</li>
<li>Challenge: how do we do version control for Lifelong Learning systems?</li>
<li>Challenge: how do we compile Software 2.0 that does Lifelong Learning? How can we optimize for long-term memory and learning without having to optimize over full lifetimes?</li>
</ul>
</div>
<h2>Don't Mess with Backprop: Doubts about Biologically Plausible Deep Learning (2021-02-13)</h2><p><span style="background-color: white; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;"><a href="https://www.ibidemgroup.com/edu/backprop-bpdl-aprendizaje-profundo/">Spanish translation</a></span></p><p>Biologically Plausible Deep Learning (BPDL) is an active research field at the intersection of Neuroscience and Machine Learning, studying how we can train deep neural networks with a "learning rule" that could conceivably be implemented in the brain.</p><p>The line of reasoning that typically motivates BPDL is as follows:</p><p></p><ol style="text-align: left;"><li>A Deep Neural Network (DNN) can learn to perform perception tasks that biological brains are capable of (such as detecting and recognizing objects).</li><li>If activation units and their weights are to DNNs what neurons and synapses are to biological brains, then what is <a href="https://en.wikipedia.org/wiki/Backpropagation">backprop</a> (the primary method for training deep neural nets) analogous to?</li><li>If learning rules in brains are not implemented using backprop, then how are they implemented? How can we achieve similar performance to backprop-based update rules while still respecting biological constraints?</li></ol><p></p><p>A nice overview of the ways in which backprop is not biologically plausible can be found <a href="https://psychology.stackexchange.com/questions/16269/is-back-prop-biologically-plausible">here</a>, along with various algorithms that propose fixes.</p><p>My somewhat contrarian opinion is that designing biologically plausible alternatives to backprop is the wrong question to be asking. 
The motivating premises of BPDL make a faulty assumption: that <b>layer activations are neurons and weights are synapses, and therefore learning-via-backprop must have a counterpart or alternative in biological learning.</b></p><p>Despite the name and their impressive capabilities on various tasks, DNNs actually have very little to do with biological neural networks. One of the great errors in the field of Machine Learning is that we ascribe too much biological meaning to our statistical tools and optimal control algorithms. It leads to confusion from newcomers, who ascribe entirely different meanings to "learning", "evolutionary algorithms", and so on.</p><p>DNNs are a sequence of linear operations interspersed with nonlinear operations, applied sequentially to real-valued inputs - nothing more. They are optimized via gradient descent, and gradients are computed efficiently using a dynamic programming scheme known as backprop. Note that I didn't use the word "learning"!</p><p>Dynamic programming is the ninth wonder of the world<span style="font-size: xx-small;">1</span>, and in my opinion one of the top three achievements of Computer Science. Backprop has linear time-complexity in network depth, which makes it extraordinarily hard to beat from a computational cost perspective. Many BPDL algorithms don't do better than backprop, because they try to take an efficient optimization scheme and shoehorn in an update mechanism with additional constraints. </p><p>If the goal is to build a biologically plausible learning mechanism, there's no reason that units in Deep Neural Networks should be one-to-one with biological neurons. Trying to emulate a DNN with models of biological neurons feels backwards, like trying to emulate the Windows OS with a human brain. It's hard, and a human brain can't simulate Windows well.</p><p>Instead, let's do the emulation the other way around: optimize a function approximator to implement a biologically plausible learning rule. 
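To make the earlier dynamic-programming claim concrete, here is a minimal sketch (a chain of scalar tanh layers, chosen purely for illustration) of how one forward pass caches activations and one backward sweep reuses them, so the gradient at every depth costs O(depth) work in total rather than O(depth^2):

```python
import math

def forward_backward(x, depth):
    """Gradients of the depth-D chain y_{i+1} = tanh(y_i) w.r.t. every y_i,
    via one cached forward pass and one backward sweep: the dynamic
    program known as backprop."""
    ys = [x]
    for _ in range(depth):
        ys.append(math.tanh(ys[-1]))
    grad, grads = 1.0, [1.0]
    for y in reversed(ys[1:]):
        grad *= 1.0 - y * y        # d tanh(u)/du = 1 - tanh(u)^2
        grads.append(grad)
    return ys[-1], list(reversed(grads))  # output, [d out/d y_0, ...]

out, grads = forward_backward(0.5, depth=3)
```

Each extra layer adds only a constant amount of work to each sweep; schemes that give up this reuse are fighting that linear cost.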
The recipe is straightforward:</p><p></p><ol style="text-align: left;"><li>Build a biologically plausible model of a neural network with model neurons and synaptic connections. Neurons communicate with each other using spike trains, rate coding, or gradients, and respect whatever constraints you deem to be "sufficiently biologically plausible". It has parameters that need to be trained.</li><li>Use computer-aided search to design a biologically plausible learning rule for these model neurons. For instance, each neuron's feedforward behavior and local update rules can be modeled as a decision from an artificial neural network.</li><li>Update the function approximator so that the biological model produces the desired learning behavior. We could train the neural networks via backprop. </li></ol><p></p><p>The choice of function approximator we use to find our learning rule is irrelevant - what we care about at the end of the day is answering how a biological brain is able to learn hard tasks like perception, while respecting known constraints like the fact that biological neurons don't store all activations in memory and only employ local learning rules. We should leverage Deep Learning's ability to find good function approximators, and direct that towards finding good biological learning rules.</p><p>The insight that we should <i>(artificially) learn to (biologically) learn</i> is not a new idea, but it is one that I think is not yet obvious to the neuroscience + AI community. <a href="https://en.wikipedia.org/wiki/Meta_learning_(computer_science)">Meta-Learning</a>, or "Learning to Learn", is a field that has emerged in recent years, which studies how to acquire systems capable of performing learning behavior (potentially superior to gradient descent). 
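Here is a toy instance of this learning-to-learn pattern (every quantity below is invented for illustration): the inner "learning rule" is a single SGD step whose learning rate is a meta-parameter, and the outer loop improves the rule by gradient descent on the post-update loss, which is analytic for the quadratic loss 0.5*(w - t)^2:

```python
def inner_update(w, t, lr):
    """One step of the (meta-parameterized) learning rule on the
    quadratic loss 0.5 * (w - t)**2, whose gradient is (w - t)."""
    return w - lr * (w - t)

def meta_train(w0=0.0, t=1.0, lr=0.1, meta_lr=0.05, steps=200):
    for _ in range(steps):
        w1 = inner_update(w0, t, lr)
        # chain rule: d/d(lr) [0.5 * (w1 - t)**2], with dw1/d(lr) = -(w0 - t)
        meta_grad = (w1 - t) * (-(w0 - t))
        lr -= meta_lr * meta_grad      # meta-gradient descent on the rule
    return lr

print(meta_train())  # lr approaches 1.0: one step jumps straight to the target
```

Swap the scalar learning rate for a neural network that emits local weight updates and you have the recipe above; the outer loop is where backprop earns its keep.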
If meta-learning can find us more <a href="https://arxiv.org/pdf/1703.05175.pdf">sample-efficient</a> or <a href="https://arxiv.org/abs/1904.07392">superior</a> or <a href="https://arxiv.org/pdf/1906.03367.pdf">robust</a> learners, why can't it find us rules that respect biological learning constraints? Indeed, recent work [<a href="https://arxiv.org/pdf/2006.09549.pdf">1</a>, <a href="https://www.biorxiv.org/content/10.1101/2019.12.30.891184v1.full.pdf">2</a>, <a href="https://arxiv.org/pdf/2012.03837.pdf">3</a>, <a href="https://arxiv.org/abs/1608.05343">4</a>, <a href="http://proceedings.mlr.press/v119/real20a/real20a.pdf">5</a>] shows this to be the case. You can indeed use backprop to train a separate learning rule superior to naïve backprop.</p><p>I think the reason that many researchers have not really caught on to this idea (that we should emulate biologically plausible circuits with a meta-learning approach) is that until recently, compute power wasn't quite strong enough to train both a meta-learner and a learner. It still requires substantial computing power and research infrastructure to set up a meta-optimization scheme, but tools like <a href="https://blog.evjang.com/2019/02/maml-jax.html">JAX make it considerably easier now</a>.</p><p>A true biology purist might argue that finding a learning rule using gradient descent and backprop is not an "evolutionarily plausible learning rule", because evolution clearly lacks the ability to perform dynamic programming or even gradient computation. But this can be amended by making the meta-learner evolutionarily plausible. For instance, the mechanism with which we select good function approximators does not need to rely on backprop at all. 
Alternatively, we could formulate a meta-meta problem whereby the selection process itself obeys rules of evolutionary selection, but the selection process is found using, once again, backprop.</p><p>Don't mess with backprop!</p><p><br /></p><p><b>Footnotes</b></p><p>[1] The eighth wonder being, of course, <a href="https://www.listenmoneymatters.com/compound-interest/">compound interest</a>.</p><h2>How to Understand ML Papers Quickly (2021-01-25)</h2>My <a href="https://blog.evjang.com/2020/06/free-office-hours-for-non-traditional.html">ML mentees</a> often ask me some variant of the question “how do you choose which papers to read from the deluge of publications flooding Arxiv every day?” <div><br /></div><div>The nice thing about reading most ML papers is that you can cut through the jargon by asking just five simple questions. I try to answer these questions as quickly as I can when skimming papers.<br /><br /><b>1) What are the inputs to the function approximator? <br /></b><br />E.g. a 224x224x3 RGB image with a single object roughly centered in the view. <br /><br /><b>2) What are the outputs of the function approximator?</b><br /><br />E.g. a 1000-long vector corresponding to the class of the input image.<br /><br />Thinking about inputs and outputs to the system in a method-agnostic way lets you take a step back from the algorithmic jargon and consider whether other fields have developed methods that might work here using different terminology. I find this approach especially useful when reading <a href="https://arxiv.org/abs/2007.05549">Meta-Learning papers</a>. </div><div><br /></div><div>By thinking about an ML problem first as a set of inputs and desired outputs, you can reason about whether the input is even sufficient to predict the output. 
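Questions 1 and 2 amount to writing down a type signature. A sketch with the shapes from the example above (the "model" is a stand-in, a global average pool plus a random projection, since only the input/output contract matters here):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1000))                 # hypothetical projection weights

def predict(image):
    assert image.shape == (224, 224, 3)        # question 1: the input
    features = image.mean(axis=(0, 1))         # crude 3-number image summary
    logits = features @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # question 2: the output

probs = predict(rng.random((224, 224, 3)))     # 1000-long class distribution
```

Any method, deep or not, that satisfies this signature is a candidate answer, which is the point of asking the questions method-agnostically.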
Without this exercise you might accidentally set up an ML problem where the <a href="https://news.ycombinator.com/item?id=24173440">output can't possibly be determined by the inputs</a>. The result might be an ML system that <a href="https://arxiv.org/abs/2002.06673">makes predictions in ways that are problematic for society</a>. </div><div><br /></div><div><b>3) What loss supervises the output predictions? What assumptions about the world does this particular objective make?</b><br /><br />ML models are formed from combining <i>biases</i> and <i>data</i>. Sometimes <a href="https://en.wikipedia.org/wiki/Linear_regression">the biases are strong</a>, other times they <a href="https://lilianweng.github.io/lil-log/2020/08/06/neural-architecture-search.html">are weak</a>. To make a model generalize better, you need to add more biases or add more unbiased data. There is <a href="https://en.wikipedia.org/wiki/No_free_lunch_theorem">no free lunch</a>. <br /><br />An example: many optimal control algorithms assume a stationary episodic data-generation procedure, namely a Markov Decision Process (MDP). In an MDP, “state” and “action” map via the environment’s transition dynamics to “a next-state, reward, and whether the episode is over or not”. This structure, though very general, can be used to formulate a loss that trains Q values to satisfy the <a href="https://en.wikipedia.org/wiki/Bellman_equation">Bellman Equation</a>. <br /><br /><b>4) Once trained, what is the model able to generalize to, with regard to input/output pairs it hasn’t seen before? <br /></b><br />Due to the information captured in the data or the architecture of the model, the ML system may generalize fairly well to inputs it has never seen before. 
In recent years we are <a href="https://en.wikipedia.org/wiki/AlphaGo">seeing more</a> and <a href="https://en.wikipedia.org/wiki/GPT-3">more</a> <a href="https://openai.com/blog/dall-e/">ambitious levels of generalization</a>, so when reading papers I watch for any surprising generalization capabilities and where they come from (data, bias, or both). </div><div><br /></div><div><div>There is a lot of noise in the field about better inductive biases, like causal reasoning or symbolic methods or object-centric representations. These are important tools for building robust and reliable ML systems, and I get that the line separating structured data vs. model biases can be blurry. That being said, it baffles me how many researchers think that the way to move ML forward is to <i>reduce</i> the amount of learning and <i>increase</i> the amount of hard-coded behavior. </div><div><br /></div><div>We do ML precisely because there are things we don't know how to hard-code. As Machine <i>Learning</i> researchers, we should focus our work on <a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">making learning methods better</a>, and leave the hard-coding and symbolic methods to the Machine <i>Hard-Coding</i> Researchers. </div></div><div><br /></div><div><b>5) Are the claims in the paper falsifiable? </b></div><div><br /></div><div>Papers that make claims that cannot be <a href="https://en.wikipedia.org/wiki/Falsifiability">falsified</a> are not within the realm of science. </div><div><br /></div><div><br /></div><div>P.S. For additional hot takes and mentorship for aspiring ML researchers, sign up for <a href="https://blog.evjang.com/2020/06/free-office-hours-for-non-traditional.html">my free office hours</a>. I've been mentoring students over Google Video Chat most weekends for 7 months now and it's going great. 
</div><h2>Software and Hardware for General Robots (2020-11-28)</h2><p><i>Disclaimer, these are just my opinions and not necessarily those of my employer or robotics colleagues.</i></p><p><i>2021-04-23: If you liked this post, you may be interested in a <a href="https://blog.evjang.com/2021/03/learning-robots.html">more recent blog post</a> I wrote on why I believe in end-to-end learning for robots.</i></p><p><a href="https://news.ycombinator.com/item?id=25247499">Hacker News Discussion</a></p><p><a href="https://en.wikipedia.org/wiki/Moravec%27s_paradox">Moravec's Paradox</a> describes the observation that our AI systems can solve "adult-level cognitive" tasks like <a href="https://en.wikipedia.org/wiki/MuZero">chess-playing</a> or passing <a href="https://en.wikipedia.org/wiki/GPT-3">text-based intelligence tests</a> fairly easily, while basic sensorimotor skills like crawling around or grasping objects - things one-year-old children can do - remain very difficult.</p><p>Anyone who has tried to build a robot to do anything will realize that Moravec's Paradox is not a paradox at all, but rather a direct corollary of our physical reality being so <i>irredeemably complex</i> and <i>constantly demanding</i>. Modern humans traverse <i>millions</i> of square kilometers in their lifetime, a labyrinth full of dangers and opportunities. If we had to consciously process and deliberate over all the survival-critical sensory inputs and motor decisions the way we do moves in a game of chess, we would probably have been selected out of the gene pool by Darwinian evolution. Evolution has optimized our biology to perform sensorimotor skills in a split second and make it feel easy. 
</p><p>Another way to appreciate this complexity is to adjust your daily life to a major motor disability, like losing fingers or trying to get around San Francisco without legs.</p><p><b>Software for General Robots</b></p><p>The difficulty of sensorimotor problems is especially apparent to people who work in robotics and get their hands dirty with the messiness of "the real world". What are the consequences of an irredeemably complex reality on how we build software abstractions for controlling robots? </p><p>One of my pet peeves is when people who do not have sufficient respect for Moravec's Paradox propose a programming model where high-level robotic tasks ("make me dinner") can be divided into sequential or parallel computations with clearly defined logical boundaries: wash rice, defrost meat, get the plates, set the table, etc. These sub-tasks can in turn be broken down further. When a task cannot be decomposed further because there are too many edge cases for conventional software to handle ("does the image contain a cat?"), we can attempt to invoke a Machine Learning model as "magic software" for that capability.</p><p>This way of thinking - symbolic logic that calls upon ML code - arises from engineers accustomed to the tidiness of <a href="https://medium.com/@karpathy/software-2-0-a64152b37c35">Software 1.0 abstractions</a> and <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/">programming tutorials</a> that use cooking analogies. </p><p>Do you have any idea how much intelligence goes into a task like "fetching me a snack", at the very lowest levels of motor skill? Allow me to illustrate. 
I recorded a short video of me opening a package of dates and annotated it with all the motor sub-tasks I performed in the process.</p><p></p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="416" src="https://www.youtube.com/embed/b1lysnGFpqI" width="501" youtube-src-id="b1lysnGFpqI"></iframe></div><div class="separator" style="clear: both; text-align: center;"><a href="https://youtu.be/b1lysnGFpqI">https://youtu.be/b1lysnGFpqI</a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both;">In the span of 36 seconds, I counted about 14 motor and cognitive skills. They happened so quickly that I didn't consciously notice them until I went back and analyzed the video, frame by frame. </div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Here are some of the things I did:</div><div class="separator" style="clear: both;"><ul style="text-align: left;"><li>Leveraging past experience opening this sort of package to understand material properties and how much force to apply.</li><li>Constantly adapting my strategy in response to unforeseen circumstances (the Ziploc not giving).</li><li>Adjusting my grasp when slippage occurred.</li><li>Devising an ad-hoc Weitlaner Retractor with thumb knuckles to increase force on the Ziploc.</li></ul></div><div class="separator" style="clear: both;">As a roboticist, it's humbling to watch <a href="https://twitter.com/ericjang11/status/1315537835630358530?s=20">videos of animals</a> making decisions so quickly and then watch our own robots struggle to do the simplest things. We even have to <a href="https://www.youtube.com/watch?v=QzlI_ny4l8s">speed up the robot video 4x-8x </a>to prevent the human watcher from getting bored! 
</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;"><div class="separator" style="clear: both;"><div class="separator" style="clear: both;">With this video in mind, let's consider where we currently are in the state of robotic manipulation. In the last decade or so, multiple research labs have used deep learning to develop robotic systems that can perform any-object robotic grasping from vision. Grasping is an important problem because in order to manipulate objects, one must usually first grasp them. It took the Google Robotics and X teams 2-3 years to develop our own system, QT-Opt. This was a huge research achievement because it was a general method that worked on pretty much any object and, in principle, could be used to learn other tasks. </div><div class="separator" style="clear: both;"><br /></div></div><div class="separator" style="clear: both;">Some people think that this capability to pick up objects can be wrapped in a simple programmatic API and then used to bootstrap us to human-level manipulation. After all, hard problems are just composed of simpler problems, right? </div><div class="separator" style="clear: both;"><br /></div></div><div class="separator" style="clear: both;">I don't think it's quite so simple. The high-level API call "<span style="font-family: courier;">pick_up_object()</span>" implies a clear semantic boundary between when the robot's grasp begins and when it ends. If you re-watch the video above, how many times do I perform a grasp? It's not clear to me at all where you would slot those function calls. Here is <a href="https://forms.gle/1GLHrf4PcBdHKXNS6">a survey</a> if you are interested in participating in a poll of "how many grasps do you see in this video", whose results I will update in this blog post. 
</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">If we need to solve 13 additional manipulation skills just to open a package of dates, and each one of these capabilities takes 2-3 years to build, then we are a long, long way from making robots that match the capabilities of humans. Never mind that there isn't a clear strategy for how to integrate all these behaviors together into a single algorithmic routine. Believe me, I wish reality were simple enough that complex robotic manipulation could be done mostly in Software 1.0. However, as we move beyond pick-and-place towards dexterous and complex tasks, I think we will need to completely rethink how we integrate different capabilities in robotics.</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">As you might note from the video, the meaning of a "grasp" is somewhat blurry. Biological intelligence was not specifically evolved for grasping - rather, hands and their behaviors emerged from a few core drives: regulate internal and external conditions, find snacks, replicate.</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;"><div class="separator" style="clear: both;"><div class="separator" style="clear: both;">None of this is to say that our current robot platforms and the Software 1.0 programming models are useless for robotics research or applications. A general purpose function <span style="font-family: courier;">pick_up_object() </span>can still be combined with "Software 1.0 code" into a reliable system worth billions of dollars in value to Amazon warehouses and other logistics fulfillment centers. 
General pick-and-place for any object in any unstructured environment remains an unsolved, valuable, and <i>hard</i> research problem.</div><div class="separator" style="clear: both;"><br /></div></div><div class="separator" style="clear: both;"><b>Hardware for General Robots</b></div><div class="separator" style="clear: both;"><b><br /></b></div><div class="separator" style="clear: both;">What robotic hardware do we require in order to "open a package of dates"?</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">Willow Garage was one of the pioneers in home robots, showing that a teleoperated PR2 robot could be used to tidy up a room (note that two arms are needed here for more precise placement of pillows). These are made up of many pick-and-place operations.</div><div class="separator" style="clear: both;"> </div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="374" src="https://www.youtube.com/embed/o7JH3UWO6I0" width="450" youtube-src-id="o7JH3UWO6I0"></iframe></div><div class="separator" style="clear: both; text-align: center;"><a href="https://youtu.be/o7JH3UWO6I0">https://youtu.be/o7JH3UWO6I0</a></div><br /><div class="separator" style="clear: both;">This video was made in 2008. That was 12 years ago! It's sobering to think of how much time has passed and how little the needle has seemingly moved. Reality is hard. </div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;">The <a href="https://hello-robot.com/product">Stretch</a> is a simple telescoping arm attached to a vertical gantry. 
It can do things like pick up objects, wipe planar surfaces, and open drawers.</div><div class="separator" style="clear: both;"><br /></div><div class="separator" style="clear: both;"><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="373" src="https://www.youtube.com/embed/2msVU0ygrqM" width="449" youtube-src-id="2msVU0ygrqM"></iframe></div><div class="separator" style="clear: both; text-align: center;"><a href="https://youtu.be/2msVU0ygrqM">https://youtu.be/2msVU0ygrqM</a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div>However, futurist beware! A common source of hype for people who don't think enough about physical reality is to watch demos of robots doing useful things in one home, and then conclude that the same robots are ready to do those tasks in <i>any home</i>.</div><div><br /></div><div><div>The Stretch video shows the <a href="https://youtu.be/2msVU0ygrqM?t=41">robot pulling open a dryer door</a> (left-swinging) and retrieving clothes from it. The video is a bit deceptive - I think the camera physically cannot see the interior of the dryer, so even though a human can teleoperate the robot to do the task, it would run into serious difficulty when ensuring that the dryer has been completely emptied.</div><div><br /></div><div>Here is a picture of my own dryer, which features a dryer with a <i>right-swinging</i> door close to a wall. I'm not sure if the Stretch actually can fit in this tight space, but the PR2 definitely would not be able to open this door without the base getting in the way. 
</div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1Sq_8d_8pPxeYQOTeQf85YRTAAejOhPhr5LopUpP6op0U1NzhOIgZRn6xSrzrwf1uX9HE68stRmLT2erEiFBbhMUM5XjJw5E-ZWXhOuNfs0gWnRPpuxo9uAnYi1sPjpnVd216yDvuzfc/s2048/128546115_197081455371161_5776583945816761781_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="1536" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1Sq_8d_8pPxeYQOTeQf85YRTAAejOhPhr5LopUpP6op0U1NzhOIgZRn6xSrzrwf1uX9HE68stRmLT2erEiFBbhMUM5XjJw5E-ZWXhOuNfs0gWnRPpuxo9uAnYi1sPjpnVd216yDvuzfc/s320/128546115_197081455371161_5776583945816761781_n.jpg" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrb28BpYK5CRsnQDCR6ol19IgNr74wJL8nsSwgEygnqE0KJPok5wxLENJ-GznxYL79lLZYhwEOjgB_UPQSjXNEG9UE3gQHDXilXo_1tZpmAhrDL-qWU5QeIcTrDF_VB0SW-_LgXOkOz7g/s1920/128903470_992029601279712_5029308404263396832_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1920" data-original-width="1080" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrb28BpYK5CRsnQDCR6ol19IgNr74wJL8nsSwgEygnqE0KJPok5wxLENJ-GznxYL79lLZYhwEOjgB_UPQSjXNEG9UE3gQHDXilXo_1tZpmAhrDL-qWU5QeIcTrDF_VB0SW-_LgXOkOz7g/w225-h400/128903470_992029601279712_5029308404263396832_n.png" width="225" /></a><br /><br /><div class="separator" style="clear: both; text-align: center;"><br /></div></div><div><div>Reality's edge cases are often swept under the rug when making robot demo videos, which usually show the robot operating in an optimal environment that the robot is well-suited for. But the full range of tasks humans do in the home is <i>vast</i>. 
Neither the PR2 nor the Stretch can crouch under a table to pick up lint off the floor, change a lightbulb while standing on a chair, fix caulking in a bathroom, open mail with a letter opener, move dishes from the dishwasher to the high cabinets, break down cardboard boxes for the recycle bin, go outside and retrieve the mail. </div><div><br /></div><div><div>And of course, they can't even open a Ziploc package of dates. If you think <i>that </i>was complex, here is a first-person video of me chopping strawberries, washing utensils, and decorating a cheesecake. This was recorded with a GoPro strapped to my head. Watch each time my fingers twitch - each one is a separate manipulation task!</div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="400" src="https://www.youtube.com/embed/_Hd7JkOo0B8" width="482" youtube-src-id="_Hd7JkOo0B8"></iframe></div><div class="separator" style="clear: both; text-align: center;"><a href="https://youtu.be/_Hd7JkOo0B8">https://youtu.be/_Hd7JkOo0B8</a></div><br /><div><br /></div></div><div>We often talk about a future where robots do our cooking for us, but I don't think it's possible with any hardware on the market today. The only viable hardware for a robot meant to do <i>any task</i> in human spaces is an adult-sized humanoid, with two-arms, two-legs, and five fingers on each hand. </div><div><br /></div><div>Just like I discussed about Software 1.0 in robotics, there is still an enormous space of robot morphologies that can still provide value to research and commercial applications. That doesn't change the fact that any alternative hardware can't do all the things a humanoid can in a human-centric space. Agility Robotics is one of the companies <a href="https://www.youtube.com/watch?v=BKjRVlzKEMI">that gets it</a> on the hardware design front. 
People who build physical robots use their hands a lot - could you imagine the robot you are building assembling a copy of itself? </div><div><br /></div><div><b><br /></b></div><div><b>Why Don't We Just Design Environments to be More Robot-Friendly?</b></div><div><b><br /></b></div><div>A compromise is to co-design the environment with the robot to avoid infeasible tasks like the ones above. This can simplify both the hardware and software problems. Common examples I hear <i>incessantly </i>go like this:</div><div><ol style="text-align: left;"><li>Washing machines are better than a bimanual robot washing dishes in the sink, and a dryer is a more efficient machine than a human hanging out clothes to air-dry.</li><li>Airplanes are better at transporting humans than birds.</li><li>We built cars and roads, not faster horses.</li><li>Wheels can bear more weight and are more energetically efficient than legs.</li></ol></div><div>In the home robot setting, we could design special dryer machine doors that the robot can open easily, or have custom end-effectors (tools) for each task instead of a five-fingered hand. We could go as far as to have the doors be motorized and open themselves with a remote API call, so the robot doesn't even need to open the dryer on its own.</div><div><br /></div><div>At the far end of this axis, why even bother with building a robot? We could re-imagine the design of homes themselves to be a single <a href="https://en.wikipedia.org/wiki/Automated_storage_and_retrieval_system">ASRS system</a> that brings you whatever you need from any location in the house like a <a href="https://en.wikipedia.org/wiki/Dumbwaiter">Dumbwaiter</a> (except it would work horizontally and vertically). 
This would dispense with the need to have a robot walking around in your home.</div><div><br /></div><div>This pragmatic line of thinking is fine for commercial applications, but as a human being and a scientist, it feels a bit like a concession of defeat that we cannot make robots do tasks the way humans do. Let's not forget the Science Fiction dreams that inspired so many of us down this career path - it is not about doing the tasks better; it is about doing everything humans can. A human can wash dishes and dry clothes by hand, so a truly general-purpose robot should be able to as well. For many people, this endeavor is as close as we can get to Biblical Creation<i>: “Let Us make man in Our image, after Our likeness, to rule over the fish of the sea and the birds of the air, over the livestock, and over all the earth itself and every creature that crawls upon it.” </i></div><div><br /></div></div></div><div class="separator" style="clear: both;">Yes, we've built airplanes to fly people around. Airplanes are wonderful flying machines. But to build a bird, which can do a million things <i>and</i> fly? That, in my mind, is the true spirit of general purpose robotics.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-7041274013736007292020-09-27T11:27:00.013-07:002020-11-02T19:37:37.525-08:00My Criteria for Reviewing Papers<p><i style="background-color: white; color: #666666; font-family: "Trebuchet MS", Trebuchet, Verdana, sans-serif; font-size: 15.4px;">Xiaoyi Yin (尹肖贻) has kindly translated this post into Chinese (<a href="https://www.jianshu.com/p/72ee1f44aaf4" style="color: #007cbb; text-decoration-line: none;">中文</a>)</i></p><p><a href="https://nips.cc/Conferences/2020/Dates">Accept-or-reject decisions for the NeurIPS 2020</a> conference are out, with 9454 submissions and 1900 accepted papers (20% acceptance rate). 
Congratulations to everyone (regardless of acceptance decision) for their hard work in doing good research!</p><p>It's common knowledge among machine learning (ML) researchers that acceptance decisions at NeurIPS and other conferences are something of a weighted dice roll. In this silly theatre we call "Academic Publishing" -- a mostly disjoint concept from <i>research</i>, by the way -- reviews are all over the place because each reviewer favors different things in ML papers. Here are some criteria that a reviewer might care about: </p><p><b>Correctness: </b>This is the bare minimum for a scientific paper. Are the claims made in the paper scientifically correct? Did the authors take care not to train on the test set? If an algorithm was proposed, do the authors convincingly show that it works for the reasons they stated? </p><p><b>New Information:</b> Your paper has to contribute <i>new</i> knowledge to the field. This can take the form of a new algorithm, or new experimental data, or even just a different way of explaining an existing concept. Even survey papers should contain some nuggets of new information, such as a holistic view unifying several independent works.</p><p><b>Proper Citations: </b>A related work section that articulates connections to prior work and why your work is novel. Some reviewers will reject papers that don't tithe prior work adequately, or aren't sufficiently distinguished from it.</p><div><b>SOTA results:</b> It's common to see reviewers demand that papers (1) propose a new algorithm and (2) achieve state-of-the-art (SOTA) on a benchmark. </div><p><b>More than "Just SOTA": </b>No reviewer will penalize you for achieving SOTA, but some expect more than just beating the benchmark, such as one or more of the criteria in this list. Some reviewers go as far as to bash the "SOTA-chasing" culture of the field, which they deem to be "not very creative" and "incremental". 
</p><p><b>Simplicity: </b>Many researchers profess to favor "simple ideas". However, the difference between "<i>your </i>simple idea" and "<i>your </i>trivial extension to someone else's simple idea" is not always so obvious.</p><p><b>Complexity: </b>Some reviewers deem papers that don't present any new methods or fancy math proofs as "trivial" or "not rigorous".</p><p><b>Clarity & Understanding: </b>Some reviewers care about the mechanistic details of proposed algorithms and furthering <i>understanding </i>of ML<i>, </i>not just achieving better results. This is closely related to "Correctness".</p><p><b>Is it "Exciting"?: </b>Julian Togelius (AC for NeurIPS '20) <a href="https://twitter.com/togelius/status/1309711892798205957?s=20">mentions</a> that many papers he chaired were simply not very exciting. Only Julian can know what he deems "exciting", but I suppose he means having "good taste" in choosing research problems and solutions. </p><p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnclrPBK3yqq2qcrrkXBAhIEcVq8PG8mllEDhuueg0fhMB8jgUEWdT2bTWhrS0M-TCkFyFDXC9Y-O-2lLBz-_l62zucbnaTUsUFrEhJFNhhhlN57tpKi-NYgBkX4dlMakEid8bTUBZ3oE/s836/Capture2.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="836" data-original-width="579" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnclrPBK3yqq2qcrrkXBAhIEcVq8PG8mllEDhuueg0fhMB8jgUEWdT2bTWhrS0M-TCkFyFDXC9Y-O-2lLBz-_l62zucbnaTUsUFrEhJFNhhhlN57tpKi-NYgBkX4dlMakEid8bTUBZ3oE/w445-h640/Capture2.PNG" width="445" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p><b>Sufficiently Hard Problems</b>: Some reviewers reject papers for evaluating on datasets that are too simple, like MNIST. 
"Sufficiently hard" is a moving goal post, with the implicit expectation that as the field develops better methods the benchmarks have to get harder to push unsolved capabilities. Also, SOTA methods on simple benchmarks are not always SOTA on harder benchmarks that are closer to real world applications. Thankfully my <a href="https://arxiv.org/abs/1611.01144">most cited paper</a> was written at a time where it was still acceptable to publish on MNIST.</p><p><b>Is it Surprising? </b>Even if a paper demonstrates successful results, a reviewer might claim that they are unsurprising or "obvious". For example, papers applying standard object recognition techniques to a novel dataset might be argued to be "too easy and straightforward" given that the field expects supervised object recognition to be mostly solved (this is not really true, but the benchmarks don't reflect that). </p><p></p><p>I really enjoy papers that defy intuitions, and I personally strive to write surprising papers. </p><p>Some of my favorite papers in this category do not achieve SOTA or propose any new algorithms at all:</p><p></p><ol style="text-align: left;"><li><a href="https://openreview.net/forum?id=SkfMWhAqYQ">Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet</a></li><li><a href="https://arxiv.org/abs/1611.03530">Understanding Deep Learning Requires Rethinking Generalization</a>.</li><li><a href="https://arxiv.org/pdf/2003.08505.pdf">A Metric Learning Reality Check</a></li><li><a href="https://arxiv.org/abs/1811.12359">Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations</a> </li><li><a href="https://arxiv.org/abs/1801.02774">Adversarial Spheres</a></li></ol><div><p><b>Is it Real? </b>Closely related to "sufficiently hard problems". 
Some reviewers think that games are a good testbed to study RL, while others (typically from the classical robotics community) think that <a href="https://gym.openai.com/envs/Ant-v2/">Mujoco Ant</a> and a real robotic quadruped are entirely different problems; algorithmic comparisons on the former tell us nothing about the same set of experiments on the latter.</p></div><p><b>Does Your Work Align with Good AI Ethics? </b>Some view the development of ML technology as a means to build a better society, and discourage papers that don't align with their AI ethics. The required "Broader Impact" statements in NeurIPS submissions this year are an indication that the field is taking this much more seriously. For example, if you submit a paper that attempts to infer criminality from only facial features or perform autonomous weapon targeting, I think it's likely your paper will be rejected regardless of what methods you develop.</p><p>Different reviewers will prioritize different aspects of the above, and many of these criteria are highly subjective (e.g. problem taste, ethics, simplicity). For each of the criteria above, it's possible to come up with counterexamples of highly-cited or impactful ML papers that don't meet that criterion but possibly meet others.</p><p><br /></p><h2 style="text-align: left;">My Criteria</h2><p>I wanted to share my criteria for how I review papers. <b>When it comes to recommending accept/reject,</b> <b>I mostly care about Correctness and New Information.</b> Even if I think your paper is boring and unlikely to be an actively researched topic in 10 years, I will vote to accept it as long as your paper helped me learn something new that I didn't think was already stated elsewhere. 
</p><p>Some more specific examples:</p><p></p><ul style="text-align: left;"><li>If you make a claim about humanlike exploration capabilities in RL in your introduction and then propose an algorithm to do something like that, I'd like to see substantial empirical justification that the algorithm is indeed similar to what humans do.</li><li>If your algorithm doesn't achieve SOTA, that's fine with me. But I would like to see a careful analysis of why your algorithm doesn't achieve it.</li><li>When papers propose new algorithms, I prefer to see that the algorithm is <b>better</b> than prior work. However, I will still vote to accept if the paper presents a factually correct analysis <i>of why </i>it doesn't do better than prior work. </li><li>If you claim that your new algorithm works better because of reason X, I would like to see experiments that show that it isn't because of alternate hypotheses X1, X2. </li></ul><div>Correctness is difficult to verify. Many metric learning papers were proposed in the last 5 years and accepted at prestigious conferences, only for <a href="https://twitter.com/ericjang11/status/1259174316970618881?s=20">Musgrave et al. '20</a> to point out that the experimental methodology was not consistent across these papers.</div><div><br /></div><div>I should get off my high horse and say that I'm part of the circus too. I've reviewed papers for 10+ conferences and workshops and I can honestly say that I only understood 25% of papers from just reading them. An author puts tens or hundreds of hours into designing and crafting a research paper and the experimental methodology, and I only put in a few hours in deciding whether it is "correct science". 
Rarely am I able to approach a paper with the level of mastery needed to rigorously evaluate correctness.</div><div><br /></div><div>A good question to constantly ask yourself is: "what experiment would convince me that the author's explanations are correct and not due to some alternate hypothesis? Did the authors check that hypothesis?"</div><div><br /></div><div>I believe that we should accept all "adequate" papers, and more subjective things like "taste" and "simplicity" should be reserved for paper awards, spotlights, and oral presentations. I don't know if everyone should adopt these criteria, but I think it's helpful to at least be transparent as a reviewer about how I make accept/reject decisions. </div><div><br /></div><h2 style="text-align: left;">Opportunities for Non-Traditional Researchers</h2><p>If you're interested in getting mentorship for learning how to read, critique, and write papers better, I'd like to plug <a href="https://blog.evjang.com/2020/06/free-office-hours-for-non-traditional.html">my weekly office hours</a>, which I hold on Saturday mornings over <a href="https://meet.google.com/">Google Meet</a>. I've been mentoring about 6 people regularly over the last 3 months and it's working out pretty well. </p><p>Anyone who is not in a traditional research background (not currently in an ML PhD program) can reach out to me to book an appointment. You can think of this like visiting your TA's office hours for help with your research work. 
Here are some of the services I can offer, completely <i>pro bono</i>:</p><p></p><ul style="text-align: left;"><li>If you have trouble understanding a paper I can try to read it with you and offer my thoughts on it as if I were reviewing it.</li><li>If you're very very new to the field and don't even know where to begin I can offer some starting exercises like reading / summarizing papers, re-producing existing papers, and so on.</li><li>I can try to help you develop a good taste of what kinds of problems to work on, how to de-risk ambitious ideas, and so on.</li><li>Advice on software engineering aspects of research. I've been coding for over 10 years; I've picked up some opinions on how to get things done quickly.</li><li>Asking questions about your work as if I was a visitor at your poster session.</li><li>Helping you craft a compelling story for a paper you want to write.</li></ul><div>No experience is required, all that you need to bring to the table is a desire to become better at doing research. The acceptance rate for my office hours is literally 100% so don't be shy!</div><div><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-16174202465969954732020-09-13T11:17:00.006-07:002020-09-13T11:23:08.967-07:00Chaos and Randomness<p></p><blockquote><p><i>For want of a nail the shoe was lost.</i></p><p><i>For want of a shoe the horse was lost.</i></p><p><i>For want of a horse the rider was lost.</i></p><p><i>For want of a rider the message was lost.</i></p><p><i>For want of a message the battle was lost.</i></p><p><i>For want of a battle the kingdom was lost.</i></p><p><i>And all for the want of a horseshoe nail.</i></p></blockquote><p></p><p>- <a href="https://en.wikipedia.org/wiki/For_Want_of_a_Nail" target="_blank">For Want of a Nail</a></p><p><br /></p><p>Was the kingdom lost due to random chance? Or was it the inevitable outcome resulting from sensitive dependence on initial conditions? Does the difference even matter? 
Here is a blog post about Chaos and Randomness with <a href="https://github.com/ericjang/chaos/blob/master/chaos-randomness.ipynb" target="_blank">Julia code</a>.</p><p><br /></p><h2 style="text-align: left;">Preliminaries</h2><div><div><br /></div><div>Consider a real vector space $X$ and a function $f: X \to X$ on that space. If we repeatedly apply $f$ to a starting vector $x_1$, we get a sequence of vectors known as an <a href="https://en.wikipedia.org/wiki/Orbit_(dynamics)" target="_blank">orbit</a> $x_1, f(x_1), f^2(x_1), \ldots, f^n(x_1)$. </div><div><br /></div><div>For example, the logistic map is defined as </div><div><br /></div><div><div><span style="font-family: courier;">function logistic_map(r, x)</span></div><div><span style="font-family: courier;"> r*x*(1-x) </span></div><div><span style="font-family: courier;">end</span></div></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div>Here is a plot of successive applications of the logistic map for r=3.5. We can see that the system settles into a period-4 cycle, oscillating between four values of approximately 0.383, 0.501, 0.827, and 0.875. </div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_tzjmUZI1SYFGm83od7rIkhLKvvrdHdU8dpSKIQqjF5GL3wcUWzUK8FU7Ome3qNCt8M2yQqdXsUpqKljURxWKS38Yz5Z-r8BfI4MIhKGtwjroCsUstfXCs11eHFoSiVmG0Ra8z6zYnB4/s600/logistic-dynamics.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" height="333" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_tzjmUZI1SYFGm83od7rIkhLKvvrdHdU8dpSKIQqjF5GL3wcUWzUK8FU7Ome3qNCt8M2yQqdXsUpqKljURxWKS38Yz5Z-r8BfI4MIhKGtwjroCsUstfXCs11eHFoSiVmG0Ra8z6zYnB4/w500-h333/logistic-dynamics.png" width="500" /></a></div><div><br /></div><h2 style="text-align: left;">Definition of Chaos</h2><div><div>There is surprisingly no universally accepted mathematical definition of Chaos. 
For now we will present a commonly used characterization by Devaney: </div><div><br /></div></div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvovcW42DvJYXu80pkwP9L04PpC7iroBWvjaYgNPC8p7GpixKeSi9p-s7E_iLwvJslQwG9z9BBouHE-3wcTNJGp-yeW9vtyAtchr7de3WTy9sTAgPxlx2n2QQqiRQ3ehnkxYXghqYAgzM/s1280/14.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="281" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvovcW42DvJYXu80pkwP9L04PpC7iroBWvjaYgNPC8p7GpixKeSi9p-s7E_iLwvJslQwG9z9BBouHE-3wcTNJGp-yeW9vtyAtchr7de3WTy9sTAgPxlx2n2QQqiRQ3ehnkxYXghqYAgzM/w500-h281/14.png" width="500" /></a></div><br /><div><br /></div></div><div>We can describe an orbit $x_1, f(x_1), \ldots, f^n(x_1)$ as <i>chaotic</i> if:</div><div><br /></div><div><ol style="text-align: left;"><li>The orbit is not asymptotically periodic, meaning that it never starts repeating, nor does it approach an orbit that repeats (e.g. $a, b, c, a, b, c, a, b, c...$).</li><li>The maximum Lyapunov exponent $\lambda$ is greater than 0. This means that if you place another trajectory starting near this orbit, the separation between the two will grow as $e^{\lambda n}$ with the number of steps $n$. A positive $\lambda$ implies that two trajectories will diverge exponentially quickly away from each other. If $\lambda<0$, then the distance between trajectories would <i>shrink</i> exponentially quickly. 
This is the basic definition of "Sensitive Dependence on Initial Conditions (SDIC)", also colloquially understood as the "butterfly effect".</li></ol></div><div><br /></div><div>Note that (1) intuitively follows from (2), because the Lyapunov exponent of an orbit that approaches a periodic orbit would be $<0$, which contradicts the SDIC condition.</div><div><br /></div><div>We can also define the map $f$ itself to be chaotic if there exists an invariant (trajectories cannot leave) subset $\tilde{X} \subset X$, where the following three conditions hold:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb-QJkd2wC77_DnkKa7waLDPNejn7jVmYGEkkdwq8JcJiw2YAMj3RpJnz_twa4PuFh8BqiZasMhWqbp5KJcEQCCKColBRDDi9Av73wgMFa782SByZhp2Z_Tia_ObM7XWyCEpDH4iik9rk/s549/chaos-definition.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="381" data-original-width="549" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhb-QJkd2wC77_DnkKa7waLDPNejn7jVmYGEkkdwq8JcJiw2YAMj3RpJnz_twa4PuFh8BqiZasMhWqbp5KJcEQCCKColBRDDi9Av73wgMFa782SByZhp2Z_Tia_ObM7XWyCEpDH4iik9rk/w500-h348/chaos-definition.png" width="500" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdtorwgMSTCN81OadRhK7YnTqyP3D2MSS-TQ7AUJ0tasHR2exbgLf3EJevvLdy-CmaQsxCjg0EVhctV_qsnpZ_zFR40rPSLShGw92vK-iPZKQfHN3xi8_-KLktBr2g-KPmI9DqNa9EkQ0/s593/chaos+conditions.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="287" data-original-width="593" height="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdtorwgMSTCN81OadRhK7YnTqyP3D2MSS-TQ7AUJ0tasHR2exbgLf3EJevvLdy-CmaQsxCjg0EVhctV_qsnpZ_zFR40rPSLShGw92vK-iPZKQfHN3xi8_-KLktBr2g-KPmI9DqNa9EkQ0/w500-h243/chaos+conditions.png" width="500" /></a></div><div><br /></div><div><br
/></div><div><ol style="text-align: left;"><li>Sensitivity to Initial Conditions, as mentioned before.</li><li>Topological mixing (iterates of any region of $\tilde{X}$ eventually overlap any other region of $\tilde{X}$).</li><li>Dense periodic orbits (every point in $\tilde{X}$ is arbitrarily close to a periodic orbit). At first, this is a bit of a head-scratcher given that we previously defined an orbit to be chaotic if it <i>didn't</i> approach a periodic orbit. The way to reconcile this is to think about the subspace $\tilde{X}$ being densely covered by periodic orbits, but they are all unstable so the chaotic orbits get bounced around $\tilde{X}$ for all eternity, never settling into an attractor but also unable to escape $\tilde{X}$.</li></ol><div>Note that SDIC actually follows from the last two conditions. If these unstable periodic orbits cover the set $\tilde{X}$ densely and orbits also cover the set densely while not approaching the periodic ones, then intuitively the only way for this to happen is if all periodic orbits are unstable (SDIC).</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN1u6oHuDX1K7Us4i3QrJ2C-mp7bx1exRD9l1zKXj-2FPV8iTASlyisWGNYntXfxQXZH-eBPnUmSTBzbDoNDEmlkgUsopLAiySc-0IEf8oB1RW_-7lF4qHQ1ZCTUCNQptE5HoTWxIQVxs/s632/periodic+vs+asymptotic+periodic.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="518" data-original-width="632" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN1u6oHuDX1K7Us4i3QrJ2C-mp7bx1exRD9l1zKXj-2FPV8iTASlyisWGNYntXfxQXZH-eBPnUmSTBzbDoNDEmlkgUsopLAiySc-0IEf8oB1RW_-7lF4qHQ1ZCTUCNQptE5HoTWxIQVxs/w400-h328/periodic+vs+asymptotic+periodic.png" width="400" /></a></div></div></div><div><br /></div><div><br /></div><div><br /></div><div>These are by no means the only way to define chaos. 
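In practice, the maximal Lyapunov exponent can be estimated numerically by averaging $\log|f'(x)|$ along an orbit; for the logistic map, $f'(x) = r(1-2x)$. Here is a minimal sketch of that estimator (in Python for illustration; the post's own code is Julia, and the function names here are mine):

```python
import math

def logistic_map(r, x):
    return r * x * (1 - x)

def lyapunov_logistic(r, x0=0.2, n=10000, burn_in=1000):
    """Estimate the maximal Lyapunov exponent of the logistic map by
    averaging log|f'(x)| = log|r*(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = logistic_map(r, x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = logistic_map(r, x)
    return total / n
```

For r=4.0 the estimate approaches the known value $\ln 2 \approx 0.693 > 0$ (chaotic), while for r=2.5 the orbit settles onto a stable fixed point and the estimated exponent is negative.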
The DynamicalSystems.jl package has excellent documentation on several <a href="https://juliadynamics.github.io/DynamicalSystems.jl/latest/chaos/chaos_detection/" target="_blank">computationally tractable</a> definitions of chaos.</div><div><br /></div><h2 style="text-align: left;">Chaos in the Logistic Family</h2><div><br /></div><div>Incidentally, the logistic map exhibits chaos for most values of r between 3.56995 and 4.0. We can generate the bifurcation diagram quickly thanks to Julia's <a href="https://julialang.org/blog/2013/09/fast-numeric/" target="_blank">de-vectorized way of numeric</a> programming.</div><div><span face="sans-serif" style="background-color: white; color: #202122; font-size: 14px;"><br /></span></div><div><span style="background-color: white;"><span style="color: #202122;"><div style="font-family: courier; font-size: 14px;">rs = [2.8:0.01:3.3; 3.3:0.001:4.0]</div><div style="font-family: courier; font-size: 14px;">x0s = 0.1:0.1:0.6</div><div style="font-family: courier; font-size: 14px;">N = 2000 # orbit length</div><div style="font-family: courier; font-size: 14px;">x = zeros(length(rs), length(x0s), N)</div><div style="font-family: courier; font-size: 14px;"># for each value of r (across rows)</div><div style="font-family: courier; font-size: 14px;">for k = 1:length(rs)</div><div style="font-family: courier; font-size: 14px;"> # initialize starting conditions</div><div style="font-family: courier; font-size: 14px;"> x[k, :, 1] = x0s</div><div style="font-family: courier; font-size: 14px;"> for i = 1:length(x0s)</div><div style="font-family: courier; font-size: 14px;"> for j = 1:N-1</div><div style="font-family: courier; font-size: 14px;"> x[k, i, j+1] = logistic_map(rs[k], x[k, i, j])</div><div style="font-family: courier; font-size: 14px;"> end</div><div style="font-family: courier; font-size: 14px;"> end</div><div style="font-family: courier; font-size: 14px;">end</div><div style="font-family: 
courier;"><span style="font-size: 14px;">plot(rs, x[:, :, end], markersize=2, seriestype = :scatter, title = "Bifurcation Diagram (Logistic Map)")</span></div><div style="font-family: courier;"><span style="font-size: 14px;"><br /></span></div><div><span style="color: black;">We can see how starting values y1=0.1, y2=0.2, ...y6=0.6 all converge to the same value, oscillate between two values, then start to bifurcate repeatedly until chaos emerges as we increase r.</span></div><div><span style="color: black;"><br /></span></div><div style="font-family: courier;"><span style="font-size: 14px;"><br /></span></div><div style="font-family: courier;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTh11CwH2lxDmUGC8Lc0LjGf8ybr_mDc8J1-GUYc_sLM2talIC8CQsb9VcokE5Wsnbwe2exJ2yel67fXKW32YscT51FCxBEkD6f5iIy6khdYwmDCv4PE1j7W9OpzVq1VI_ATnB8ZDw5yE/s600/bifurcation-logistic.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" height="416" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTh11CwH2lxDmUGC8Lc0LjGf8ybr_mDc8J1-GUYc_sLM2talIC8CQsb9VcokE5Wsnbwe2exJ2yel67fXKW32YscT51FCxBEkD6f5iIy6khdYwmDCv4PE1j7W9OpzVq1VI_ATnB8ZDw5yE/w625-h416/bifurcation-logistic.png" width="625" /></a></div><br /><span style="font-size: 14px;"><br /></span></div></span></span></div><h2 style="text-align: left;">Spatial Precision Error + Chaos = Randomness</h2><div>What happens to our understanding of the dynamics of a chaotic system when we can only know the orbit values with some finite precision? 
For instance, the true state might be x=0.76399 or x=0.7641, but we only observe x=0.764 in either case.</div><div><br /></div><div>We can generate 1000 starting conditions that are identical up to our measurement precision, and observe the histogram of where the system ends up after n=1000 iterations of the logistic map.<br /><span face="sans-serif" style="background-color: white; color: #202122; font-size: 14px;"><br /></span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGJ1-PKPF2-4bk1ANKvyFJ_wdWeTTFQXF3hjWQwfbq_Uk_bSvkPTgWq4k6nuSDqfSI8r15hEL4G2iuPZHlr0hQHPvQ-gC-Ptj-5f3KvtoAtMnA4Pt9_RT5ta0Fq98qEWAKJpiQa3pWty0/s600/mygif.gif" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" height="416" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGJ1-PKPF2-4bk1ANKvyFJ_wdWeTTFQXF3hjWQwfbq_Uk_bSvkPTgWq4k6nuSDqfSI8r15hEL4G2iuPZHlr0hQHPvQ-gC-Ptj-5f3KvtoAtMnA4Pt9_RT5ta0Fq98qEWAKJpiQa3pWty0/w625-h416/mygif.gif" width="625" /></a></div><br />Let's pretend this is a probabilistic system and ask the question: what are the conditional distributions of $p(x_n|x_0)$, where $n=1000$, for different levels of measurement precision? <br /><br />At precision coarser than $O(10^{-8})$, we start to observe the entropy of the state evolution rapidly increasing. Even though we know that the underlying dynamics are deterministic, measurement uncertainty (a form of <a href="https://blog.evjang.com/2018/12/uncertainty.html" target="_blank">aleatoric uncertainty</a>) can expand exponentially quickly due to SDIC. 
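The experiment just described can be reproduced with a short sketch (Python for illustration; the post's own code is Julia, and the function names and parameter choices here are mine): perturb a reference state by uniform noise at the measurement precision, iterate, and inspect the spread of final states.

```python
import random

def logistic_map(r, x):
    return r * x * (1 - x)

def final_states(r, x0, eps, n_orbits=1000, n_steps=1000, seed=0):
    """Evolve n_orbits initial conditions that agree with x0 up to
    measurement precision eps; return the states after n_steps."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_orbits):
        x = x0 + rng.uniform(-eps, eps)  # identical up to our precision
        for _ in range(n_steps):
            x = logistic_map(r, x)
        finals.append(x)
    return finals
```

In the chaotic regime (e.g. r=3.9) the final states spread over most of the unit interval, whereas below the onset of chaos (e.g. r=2.5) they collapse onto the attractor regardless of the perturbation.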
This results in $p(x_n|x_0)$ appearing to be a complicated probability distribution, even generating "<a href="https://en.wikipedia.org/wiki/Long_tail" target="_blank">long tails</a>".</div><div><br />I find it interesting that the "multi-modal, probabilistic" nature of $p(x_n|x_0)$ vanishes to a simple uni-modal distribution when measurement precision is sufficiently high to mitigate chaotic effects for $n=1000$. In machine learning we concern ourselves with learning fairly rich probability distributions, even going as far as to <a href="https://blog.evjang.com/2018/01/nf1.html" target="_blank">learn transformations </a>of simple distributions into more complicated ones. </div><div><br /></div><div>But what if we are being over-zealous with using powerful function approximators to model $p(x_n|x_0)$? For cases like the above, we are discarding the inductive bias that $p(x_n|x_0)$ arises from a simple source of noise (uniform measurement error) coupled with a chaotic "noise amplifier". Classical chaos on top of measurement error will indeed produce <a href="https://en.wikipedia.org/wiki/Indeterminism" target="_blank">Indeterminism</a>, but does that mean we can get away with treating $p(x_n|x_0)$ as purely random?</div><div><br /></div><div>I suspect the apparent complexity of many "rich" probability distributions we encounter in the wild is more often than not just chaos+measurement error (e.g. weather). If so, how can we leverage that knowledge to build more useful statistical learning algorithms and draw inferences?</div><div><br /></div><div>We already know that chaos and randomness are nearly equivalent from the perspective of computational distinguishability. Did you know that you can <a href="https://aip.scitation.org/doi/10.1063/1.4917383" target="_blank">use chaos to send secret messages</a>? This is done by having Alice and Bob synchronize a chaotic system $x$ with the same initial state $x_0$, and then Alice sends a message $0.001*signal + x$. 
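As a toy sketch of this scheme (illustrative Python of my own; real chaotic-masking systems synchronize continuous-time circuits rather than literally sharing $x_0$, and the function names here are mine), using the logistic map as the shared chaotic system:

```python
def logistic_map(r, x):
    return r * x * (1 - x)

def chaotic_mask(x0, r, n):
    """Shared chaotic sequence; both parties regenerate it from x0."""
    xs, x = [], x0
    for _ in range(n):
        x = logistic_map(r, x)
        xs.append(x)
    return xs

def encode(signal, x0, r=3.99):
    """Alice: hide a small-amplitude signal inside the chaotic carrier."""
    mask = chaotic_mask(x0, r, len(signal))
    return [0.001 * s + m for s, m in zip(signal, mask)]

def decode(transmitted, x0, r=3.99):
    """Bob: regenerate the same chaotic orbit and subtract it out."""
    mask = chaotic_mask(x0, r, len(transmitted))
    return [(t - m) / 0.001 for t, m in zip(transmitted, mask)]
```

Without knowing $x_0$, an eavesdropper sees only what looks like a chaotic signal with a tiny perturbation riding on it.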
Bob merely evolves the chaotic system $x$ on his own and subtracts it to recover the signal. Chaos has also been used to design pseudo-random number generators. </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-837923281690020692020-06-20T12:35:00.011-07:002020-11-04T20:56:22.930-08:00Free Office Hours for Non-Traditional ML Researchers<div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><i style="color: #666666; font-family: "Trebuchet MS", Trebuchet, Verdana, sans-serif; font-size: 15.4px; white-space: normal;">Xiaoyi Yin (尹肖贻) has kindly translated this post into Chinese (<a href="https://www.jianshu.com/p/17611f223cb5" style="color: #007cbb; text-decoration-line: none;">中文</a>)</i></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">This post was prompted by a tweet I saw from my colleague, Colin:</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkyssEVvnTvUEniNxG6Y3jS4d_DR-p1bbbBBoyzhc7xCNnYtlgvvSaeLsOPgvhqGIAqPaamC7pE1w0cSqR0M9J_QgiJiUPL-T50lXCW0wWv6hNE_hQoAJYxH5S8oPAx716dZjxDa9qK-Q/s588/Capture.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="296" data-original-width="588" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkyssEVvnTvUEniNxG6Y3jS4d_DR-p1bbbBBoyzhc7xCNnYtlgvvSaeLsOPgvhqGIAqPaamC7pE1w0cSqR0M9J_QgiJiUPL-T50lXCW0wWv6hNE_hQoAJYxH5S8oPAx716dZjxDa9qK-Q/w500-h251/Capture.PNG" width="500" /></a></div><div><br /></div><div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I'm currently a researcher at Google with a "non-traditional background", 
where non-traditional background means "someone who doesn't have a PhD". People usually get PhDs so they can get hired for jobs that require that credential. In the case of AI/ML, this might be to become a professor at a university, or land a research scientist position at a place like Google, or sometimes even both.</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">At Google it's possible to become a researcher without having a PhD, although it's not very easy. T</span><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">here are two main paths [1]:</span></div></div></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">One path is to join an AI Residency Program: these are fixed-term jobs at non-university institutions (FAANG companies, AI2, etc.) that aim to jump-start a research career in ML/AI. However, </span><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">these residencies are usually just 1 year long and are not long enough to really "prove yourself" as a researcher.</span></div></div><div><br /></div><div>Another path is to start as a software engineer (SWE) in an ML-focused team and build your colleagues' trust in your research abilities. This was the route I took: <span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I joined Google in 2016 as a software engineer in the Google Brain Robotics team. 
Even though I was a SWE by title, it made sense to focus on the "</span><a href="http://www.paulgraham.com/hamming.html" style="font-size: 15px; white-space: pre-wrap;" target="_blank">most important problem</a><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">", which was to think really hard about why the robots weren't doing what we wanted and train deep neural nets in an attempt to fix those problems. One research project led to another, and now I just do research + publications all the time.</span></div><div><br /></div><div><div><div>As the ML/AI publishing field has grown exponentially in the last few years, it has gotten harder to break into research (see Colin's tweet). T<span style="background-color: white; font-size: 15px; white-space: pre-wrap;">op PhD programs like <a href="https://bair.berkeley.edu/">BAIR</a> usually require students to have a publication at a top conference like ICML, ICLR, NeurIPS </span><i style="font-size: 15px; white-space: pre-wrap;">before</i><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"> they even apply. </span><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I'm pretty sure I would not have been accepted to any PhD programs if I were graduating from college today, and would have probably ended up taking a job offer in quantitative finance instead.</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">The uphill climb gets even steeper for aspiring researchers with non-traditional backgrounds; they are competing with no shortage of qualified PhD students. 
As Colin alludes to, it is also getting harder for internationals to work at American technology companies and learn from American schools, thanks to our administration's moronic leadership.</span></div></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">The supply-demand curves for ML/AI labor are getting quite distorted. </span>On one hand, we have a tremendous global influx of people wanting to solve hard engineering problems and contribute to scientific knowledge and share it openly with the world. On the other hand, there seems to be a shortage of formal training:</div><div><ol style="text-align: left;"><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">A research mentor to learn the academic lingo and academic customs from, and more importantly, how to ask good questions and design experiments to answer them.</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">Company environments where software engineers are encouraged to take bold risks and lead their own research (and not just support researchers with infra). </span></li></ol></div><div><b><br /></b></div><div><b>Free Office Hours</b></div><div><b><br /></b></div><div>I can't do much for (2) at the moment, but I can definitely help with (1). To that end, I'm offering free ML research mentorship to aspiring researchers from non-traditional backgrounds via email and video conferencing.</div><div><br /></div><div>I'm most familiar with applied machine learning, robotics, and generative modeling, so I'm most qualified to offer technical advice in these areas. I have a bunch of tangential interests like quantitative finance, graphics, and neuroscience. Regardless of technical topic, I can help with academic writing and de-risking ambitious projects and choosing what problems to work on. 
I also want to broaden my horizons and learn more from you.</div><div><br /></div><div>If you're interested in using this resource, send me an email at <font face="courier"><myfirstname><mylastname><2004><at><g****.com>.</font><font face="inherit"> <b>In your email, include:</b></font></div><div><ol style="text-align: left;"><li><font face="inherit">Your resume</font></li><li><font face="inherit">What you want to get out of advising</font></li><li><font face="inherit">A cool research idea you have in a couple sentences</font></li></ol></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">Some more details on how these office hours will work:</span></div></div><div><ol style="text-align: left;"><li><span style="font-size: 15px; white-space: pre-wrap;">Book weekly or bi-weekly Google Meet [2] calls to check up on your work and ask questions, with 15 minute time slots scheduled via Google Calendar.</span></li><li><b style="font-size: 15px; white-space: pre-wrap;">The point of these office hours is not to answer "how do I get a job at Google Research", but to fulfill an advisor-like role in lieu of a PhD program</b><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">. If you are farther along your research career we can discuss career paths and opportunities a little bit, but mostly I just want to help people with (1).</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I'm probably not going to write code or run experiments for you.</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I don't want to be <i>that PI</i> that slaps their name on all of their student's work - most advice I give will be given freely with no strings attached. If I make a significant contribution to your work or spend > O(10) hours working with you towards a publishable result, I may request being a co-author on a publication. 
</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I reserve the right to decline meetings if I feel that it is not a productive use of my time or if other priorities take hold.</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I cannot tell you about unpublished work that I'm working on at Google or any Google-confidential information.</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">I'm not offering ML consultation for businesses, so your research work has to be unrelated to your job.</span></li><li><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">To re-iterate point number 2 once more, I'm less interested in giving career advice and more interested in teaching you how to design experiments, how to cite and write papers, and communicating research effectively.</span></li></ol></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">What do I get out of this? First, I get to expand my network. Second, I can only personally run so many experiments by myself so this would help me grow my own research career. 
</span><span style="background-color: white;">Third, I think the supply of mentorship opportunities offered by academia is currently not scalable, and this is a bit of an experiment on my part to see if we can do better.</span><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"> I'd like to give aspiring researchers similar opportunities that I had 4 years ago that allowed me to break into the field.</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><b>Footnotes</b></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><b>[1] </b></span><span style="background-color: white; font-size: 15px; white-space: pre-wrap;">Chris Olah has a </span><a href="https://colah.github.io/posts/2020-05-University/" style="font-size: 15px; white-space: pre-wrap;" target="_blank">great essay</a><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"> on some additional options and pros and cons of non-traditional education.</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><b>[2] </b><a href="http://archive.is/ocFS9" target="_blank">Zoom complies with Chinese censorship requests</a>, so as a statement of protest I avoid using Zoom when possible.</span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div><div><span style="background-color: white; font-size: 15px; white-space: pre-wrap;"><br /></span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-15388633356863151112020-04-01T19:03:00.004-07:002020-07-02T10:20:14.963-07:00Three Questions that Keep Me Up at NightA Google interview candidate recently asked me: "What are three big science questions that keep you up at night?" 
This was a great question because one's answer reveals so much about one's intellectual interests - here are mine:<br />
<br />
<b>Q1: Can we imitate "thinking" from only observing behavior? </b><br />
<br />
Suppose you have a large fleet of autonomous vehicles with human operators driving them around diverse road conditions. We can observe the decisions made by the human, and attempt to use imitation learning algorithms to map robot observations to the steering decisions that the human would take.<br />
<br />
However, we can't observe what the <i>homunculus</i> is <i>thinking</i> directly. Humans read road text and other signage to interpret what they should and should not do. Humans plan more carefully when doing tricky maneuvers (parallel parking). Humans <i>feel</i> rage and drowsiness and translate those feelings into behavior.<br />
<br />
Let's suppose we have a large car fleet and our dataset is so massive and perpetually growing that we cannot train on it faster than we are collecting new data. If we train a powerful black-box function approximator to learn the mapping from robot observation to human behavior [<a href="https://arxiv.org/pdf/1812.03079.pdf">1</a>], and we use active-learning techniques like <a href="https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats11-NoRegret.pdf">DAgger</a> to combat false negatives, will that be enough to acquire these latent information processing capabilities? Can the car learn to <i>think like a human</i>, and how much?<br />
<br />
Inferring low-dimensional unobserved states from behavior is a <a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation">well-studied</a> technique in statistical modeling. In recent years, meta-reinforcement learning algorithms have increased the capability of agents to change their behavior in the presence of new information. However, no one has applied this principle to the scale and complexity of "human-level thinking and reasoning variables". If we use basic black-box function approximators (ConvNets, ResNets, Transformers, etc.), will it be enough? Or will it still fail even with a million lifetimes worth of driving data?<br />
<br />
In other words, can simply predicting human behavior lead to a model that can learn to think like a human?<br />
<br />
<img alt="The Self Illusion and Psychotherapy | Psychology Today" src="https://cdn.psychologytoday.com/sites/default/files/field_blog_entry_images/MIB_homunculus3.jpg" /><br />
<br />
One cannot draw a hard line between "thinking" and "pattern matching", but loosely speaking I'd want to see such learned latent variables reflect basic deductive and inductive reasoning capabilities. For example, a <a href="https://en.wikipedia.org/wiki/Law_of_excluded_middle">logical proposition formulated</a> as a steering problem: "Turn left if it is raining; right otherwise".<br />
<br />
This could also be addressed via other high-data environments:<br />
<br />
<ul>
<li>Observing trader orders on markets and seeing if we can recover the trader's deductive reasoning and beliefs about the future. See if we can observe rational thought (if not rational behavior).</li>
<li>Recovering intent and emotions and desire from social network activity.</li>
</ul>
<div>
<br /></div>
<b>Q2: What is the computationally cheapest "organic building block" of an Artificial Life simulation that could lead to human-level AGI?</b><br />
<b><br /></b>
Many AI researchers, myself included, believe that competitive survival of "living organisms" is the only true way to implement general intelligence.<br />
<br />
If you lack some mental power like deductive reasoning, another agent might exploit reality to its advantage and out-compete you for resources.<br />
<br />
If you don't know how to grasp an object, you can't bring food to your mouth. Intelligence is not merely a byproduct of survival; I would even argue that it is Life and Death itself from which all semantic meaning we perceive in the world arises (the difference between a "stable grasp" and an "unstable grasp").<br />
<br />
How does one realize an A-Life research agenda? It would be prohibitively expensive to implement large-scale evolution with real robots, because we don't know how to get robots to self-replicate as living organisms do. We could use synthetic biology technology, but we don't know how to write complex software for cells yet and even if we could, it would probably take billions of years for cells to evolve into big brains. A less messy compromise is to implement A-Life <i>in silico</i> and evolve thinking critters in there.<br />
<br />
We'd want the simulation to be fast enough to simulate armies of critters. Warfare was a great driver of innovation. We also want the simulation to be rich and open-ended enough to allow for ecological niches and tradeoffs between mental and physical adaptations (a hand learning to grasp objects).<br />
<br />
Therein lies the big question: if the goal is to replicate the billions of years of evolutionary progress leading up to where we are today, what are the basic pieces of the environment that would be just good enough?<br />
<br />
<ul>
<li>Chemistry? Cells? Ribosomes? I certainly hope not.</li>
<li>How do nutrient cycles work? Resources need to be recycled from land to critters and back for there to be ecological change.</li>
<li>Is the discovery of fire important for evolutionary progression of intelligence? If so, do we need to simulate heat?</li>
<li>What about sound and acoustic waves?</li>
<li>Is a rigid-body simulation of MuJoCo humanoids enough? Probably not, if articulated hands end up being crucial.</li>
<li>Is Minecraft enough? </li>
<li>Does the mental substrate need to be embodied in the environment and subject to the physical laws of reality? Our brains certainly are, but it would be bad if we had to simulate neural networks in MuJoCo.</li>
<li>Is conservation of energy important? If we are not careful, it can be possible through evolution for agents to harvest free energy from their environment.</li>
</ul>
<br />
In the short story <i><a href="https://www.quora.com/What-are-the-strong-AI-ideas-in-Crystal-Nights-by-Greg-Egan">Crystal Nights</a></i> by Greg Egan, simulated "Crabs" are built up of organic blocks that they steal from other Crabs. Crabs "reproduce" by assembling a new crab out of parts, like LEGO. But the short story left me wanting for more implementation details...<br />
<br />
<img alt="Listen to a ghost crab frighten away enemies—with its stomach ..." height="360" src="https://www.sciencemag.org/sites/default/files/styles/article_main_large/public/091019-ghost-crab-thumbnail.png?itok=tFKACQ6W" width="640" /><br />
<br />
<br />
<b>Q3: Loschmidt's Paradox and What Gives Rise to Time?</b><br />
<b><br /></b>
I recently read <a href="https://www.goodreads.com/no/book/show/38714658-the-order-of-time">The Order of Time</a> by Carlo Rovelli and being a complete Physics newbie, finished the book feeling more confused and mystified than when I had started.<br />
<br />
The second law of thermodynamics, $\Delta{S} > 0$, states that entropy increases with time. That is the only physical law that requires time to "flow" forwards; all other physical laws have <a href="https://en.wikipedia.org/wiki/T-symmetry">Time-Symmetry</a>: they hold even if time were flowing backwards. In other words, T-Symmetry in a physical system implies conservation of entropy.<br />
<br />
Microscopic phenomena (laws of mechanics on position, acceleration, force, electric field, Maxwell's equations) exhibit T-Symmetry. Macroscopic phenomena (gases dispersing in a room, people going about their lives), on the other hand, are T-Asymmetric. It is perhaps an adaptation to macroscopic reality being T-Asymmetric that our conscious experience itself has evolved to become aware of time passing. Perhaps bacteria do not need to know about time...<br />
<br />
But if macroscopic phenomena are comprised of nothing more than countless microscopic phenomena, where the heck does entropy really come from?<br />
<br />
Upon further Googling, I learned that this question is known as <a href="https://en.wikipedia.org/wiki/Loschmidt%27s_paradox">Loschmidt's Paradox</a>. One <a href="https://physics.stackexchange.com/questions/19970/does-the-scientific-community-consider-the-loschmidt-paradox-resolved-if-so-wha">resolution that I'm partially satisfied with</a> is to consider that if we take all microscopic collisions to be driven by QM, then there really are no such things as "T-symmetric" interactions, and thus microscopic interactions are actually T-asymmetric. A lot of the math becomes simpler to analyze if we consider a single pair of particles obeying randomized dynamics (whereas in Statistical Mechanics we are only allowed to assume that about a population of particles).<br />
<br />
Even if we accept that macroscopic time originates from a microscopic equivalent of entropy, this still begs the question of what the origin of microscopic entropy (time) is.<br />
<br />
Unfortunately, many words in English do not help to divorce my subjective, casual understanding of time from a more precise, formal understanding. Whenever I think of microscopic phenomena somehow "causing" macroscopic phenomena or the cause of time (entropy) "<b>increasing</b>", my head gets thrown for a loop. So much T-asymmetry is baked into our language!<br />
<br />
I'd love to know of resources to gain a complete understanding of what we know and don't know, and perhaps a new <a href="https://en.wikipedia.org/wiki/Linguistic_relativity">language to think about Causality</a> from a physics perspective.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYOB9dL9z50gmzyZ64JNk1SluzOYs0zFwEhbdGUdlB_p4l_JatiiHjBK9_4aISbC1p6spoaIUtZyQqEmBg56YrEeBm9SV704XCvCHDDwCdJZ8DXwPNot9_pzkr893nDJL8YRXZH955BdI/s1600/1d-sitting.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="744" data-original-width="1024" height="464" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYOB9dL9z50gmzyZ64JNk1SluzOYs0zFwEhbdGUdlB_p4l_JatiiHjBK9_4aISbC1p6spoaIUtZyQqEmBg56YrEeBm9SV704XCvCHDDwCdJZ8DXwPNot9_pzkr893nDJL8YRXZH955BdI/s640/1d-sitting.jpg" width="640" /></a></div>
<br />
<br />
If you have thoughts on these questions, or want to share your own big science questions that keep you up at night, let me know in the comments or <a href="https://twitter.com/ericjang11">on Twitter</a>! #3sciencequestions<br />
<br />
<b><br /></b>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-87002534699729421442019-12-25T20:10:00.004-08:002021-07-29T18:24:28.876-07:00Selected Quotes from "The Dark Ages of AI Panel Discussion"In 1984, a panel at the AAAI conference discussed whether the field was approaching an <a href="https://en.wikipedia.org/wiki/AI_winter">"AI Winter"</a>. Mitch Waldrop wrote a <a href="https://www.researchgate.net/publication/220604602_The_dark_ages_of_AI_A_panel_discussion_at_AAAI-84">transcript of the discussion</a>, and much of it reads <i>exactly</i> like something written 35 years into the future.<br />
<br />
Below are some quotes from the transcript that I found impressive, as they describe the feelings of many an AI researcher today and how the public views AI, despite all the advances in computing and software since 1984. 👇<br />
<br />
<i>"People make essentially no distinction between computers, broadly defined, and Artificial Intelligence... as far as they're concerned, there is no difference; they're just worried about the impact of very capable, smart computers" </i>- Mitch Waldrop<br />
<br />
<i>"The computer is not only a mythic emblem for this bright, high-technology future, it's a mythic symbol for much of the anxiety that people have about their own society." </i>- Mitch Waldrop<br />
<br />
<i>"A second anxiety, what you might call the 'Frankenstein Anxiety', is the fear of being replaced, of becoming superfluous..."</i> - Mitch Waldrop<br />
<br />
<i>"Modern Times Anxiety: People becoming somehow, because of computers, just a cog in the vast, faceless machine; the strong sense of helplessness, that we really have no control over our lives"</i> - Mitch Waldrop<br />
<br />
<i>"The problem is not a matter of imminent deadlines or lack of space or lack of time... the real problem is that what reporters see as real issues in the world are very different from what the AI community sees as real issues."</i> - Mitch Waldrop<br />
<br />
<i>"If we expect physicists to be concerned about arms control and chemists to be concerned about toxic waste, it's probably reasonable to expect AI people to be concerned about the human impact of these technologies"</i> - Mitch Waldrop<br />
<br />
<i>"It [Doomsday] is already here. There is no content in this conference"</i> - Bob Wilensky<br />
<br />
<i>"What I heard was that only completed scientific work was going to be accepted. This is a horrible concept - no new unformed ideas, no incremental work building on previous work"</i> - Roger Schank<br />
<br />
<i>"When I first got into this field twenty years ago, I used to explain to people what I did, and they would already say, 'you mean computers can't do that already?' They'll always believe that." </i>- Roger Schank<br />
<br />
<i>"Big business has a very serious role in this country. Among other things, they get to determine what's 'in' and what's 'out' in the government."</i> - Roger Schank<br />
<br />
<i>"I got scared when big business started getting into this - Schlumberger, Xerox, HP, Texas Instruments, GTE, Amoco, Exxon, they were all making investments - they all have AI groups. And you find out that those people weren't trained in AI." </i>- Roger Schank<br />
<br />
<i>"It's easier to go into a startup... [or] a big company... than to go into a university and try to organize an AI lab, which is just as hard to do now as it ever was. But if we don't do that, we will find that we are in the 'Dark Ages' of AI"</i> - Roger Schank<br />
<br />
<i>"The first [message] is incumbent upon AI because we have promised so much, to produce. We must produce working systems. Some of you must devote yourselves to doing that. It is also the case that some of you had better commit to doing science."</i> - Roger Schank<br />
<br />
<i>"If it turns out that our AI conference isn't the place to discuss science, then we better start finding a place where we can discuss science, because this show for all the venture capitalists is very nice."</i> - Roger Schank<br />
<br />
<i>"the notion of cognition as computation is going to have extraordinary importance to the philosophy and psychology of the next generation. And for well or ill, this notion has affected some of the deepest aspects of our self-image."</i> - B. Chandrasekaran<br />
<i><br /></i>
<i>"symbol-level theories, which may even be right, are being mistaken for knowledge-level theories"</i> - B. Chandrasekaran<br />
<br />
<i>"My hope is that AI will evolve more like biotech in the sense that certain technologies will be spun off, and researchers will remain and extremely interesting progress will be made" </i> - B. Chandrasekaran<br />
<br />
<i>"I have encountered people who have a science fiction view of the world and think that computers now can do just about anything... these people have a feeling that computers can do wonderful things, but if you ask them how exactly could an AI program help in work, they don't have the sense that within a week or two they could be replaced or that computers can come in and do a much better job than they do in work."</i> - John McDermott<br />
<br />
<i>"There have been a number of technologies that have run into dead ends, like dirigibles and external combustion engines. And there have been other ones, like television, and in fact, the telephone system itself, which took between twenty and forty years to go from being laboratory possibilities to actual commercial successes. Do you really think that AI is going to become a commercial success in the next 10-15 years?" </i>- Audience member<br />
<br />
<i>"They [lay people] seem to have a vague idea that great things can happen, have sublime confidence... but when it gets down to the nitty-gritty, they tend to be pretty unimaginative and have pretty low expectations as to what can be done."</i> - Mitch Waldrop<br />
<br />
<i>"It seems that academic AI people tend to blame everyone but themselves when it comes to problems of AI in terms of relationship to the general society."</i> - Audience memberUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-55386186080888052132019-11-28T21:50:00.001-08:002019-11-30T10:51:03.575-08:00Differentiable Path Tracing on the GPU/TPU<div class="separator" style="clear: both; text-align: left;">
You can download a PDF (typset in LaTeX) of this blog post <a href="https://drive.google.com/open?id=1T-doCTjWGFPtucnVTFU69wDkxfkiXxGv">here</a>.</div>
<div class="separator" style="clear: both; text-align: left;">
Jupyter Notebook Code on GitHub: <a href="https://github.com/ericjang/pt-jax">https://github.com/ericjang/pt-jax</a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyLG3-sfSze-m9JelRtGwwzcKjdsWoQBhFI3zu4JCOcz62aQPxwnKAqkcphPOGRfOFCC11VhX9dBAJGoDSAbHjw56zlZidYeGQHNKkeNiI7yDLZqb5tXpqjLSmqAw0E01iWxvuVtqEIxw/s1600/cornell_box.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="225" data-original-width="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyLG3-sfSze-m9JelRtGwwzcKjdsWoQBhFI3zu4JCOcz62aQPxwnKAqkcphPOGRfOFCC11VhX9dBAJGoDSAbHjw56zlZidYeGQHNKkeNiI7yDLZqb5tXpqjLSmqAw0E01iWxvuVtqEIxw/s1600/cornell_box.png" /></a></div>
<br />
This blog post is a tutorial on implementing path tracing, a physically-based rendering algorithm, in <a href="https://github.com/google/jax">JAX</a>. This code runs on the CPU, GPU, and Google Cloud TPU, and is implemented in a way that also makes it end-to-end differentiable. You can compute gradients of the rendered pixels with respect to geometry, materials, whatever your heart desires.<br />
<br />
I love JAX because it is equally suited for pedagogy and high-performance computing. We will implement a path tracer for a single pixel in numpy-like syntax, slap a <span style="font-family: "courier new" , "courier" , monospace;">jax.vmap</span> operator on it, and JAX automatically converts our code to render multiple pixels with SIMD instructions! You can do the same thing for multiple devices using <span style="font-family: "courier new" , "courier" , monospace;">jax.pmap</span>. If that isn't magic, I don't know what is. At the end of the tutorial you will not only know how to render a Cornell Box, but also understand geometric optics and radiometry from first principles.<br />
<br />
The figure below, borrowed from a <a href="https://blog.evjang.com/2018/08/dijkstras.html">previous post from this blog</a>, explains at a high level the light simulator we're about to implement:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPMm4YSTVqyubMpAecfcts6G7z-9NMT9Og4Ypjbk1lOacKB37NjV2hfGx4JGblLk3KBlTgxMqlWlAPt7iJVzvRwQNl4u_pGv754ZDvfGTcrBLsS3qzauISVzMHYna-C47e6xVZfujr698/s1600/pt.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1070" data-original-width="1432" height="478" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPMm4YSTVqyubMpAecfcts6G7z-9NMT9Og4Ypjbk1lOacKB37NjV2hfGx4JGblLk3KBlTgxMqlWlAPt7iJVzvRwQNl4u_pGv754ZDvfGTcrBLsS3qzauISVzMHYna-C47e6xVZfujr698/s640/pt.png" width="640" /></a></div>
<br />
I divide this tutorial into two parts: 1) implementing geometry-related functions like ray-scene intersection and normal estimation, and 2) the "light transport" part where we discuss how to accumulate radiance arriving at an imaginary camera sensor.<br />
<br />
JAX and Matplotlib (and a bit of calculus and probability) are the only required dependencies for this tutorial:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">import jax.numpy as np</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">from jax import jit, grad, vmap, random, lax</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">import matplotlib.pyplot as plt</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">from mpl_toolkits.mplot3d import Axes3D</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
JAX is essentially a drop-in replacement for numpy, with the exception that operations are all functional (no indexing assignment) and the user must manually pass around an explicit <span style="font-family: "courier new" , "courier" , monospace;">rng_key</span> to generate random numbers. Here is a <a href="https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html">short list of JAX gotchas</a> if you are coming to JAX as a numpy user.<br />
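As a minimal sketch of the key-passing convention (the seed and draws here are arbitrary, not taken from the renderer):

```python
from jax import random

key = random.PRNGKey(0)            # explicit seed -- no hidden global RNG state
key, subkey = random.split(key)    # split before each draw; never reuse a key
u1 = random.uniform(subkey)        # scalar in [0, 1)
key, subkey = random.split(key)
u2 = random.uniform(subkey)        # drawn with a fresh subkey, independent of u1
```

Threading <span style="font-family: "courier new" , "courier" , monospace;">key</span> through your functions this way keeps sampling purely functional, which is what allows <span style="font-family: "courier new" , "courier" , monospace;">jit</span> and <span style="font-family: "courier new" , "courier" , monospace;">vmap</span> to transform stochastic code safely.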
<br />
<h3>
Part I: Geometry</h3>
<div>
The vast majority of rendering software represents scene geometry as a collection of surface primitives that form "meshes". 3D modeling software forms meshes using quadrilateral faces, and the rendering software then converts the quads to triangles under the hood. Collections of meshes are composed together to form entire objects and scenes. For this tutorial we're going to use an unorthodox geometry representation and we'll need to implement a few helper functions to manipulate them.</div>
<div>
<div>
<br /></div>
<h4>
Differentiable Scene Intersection with Distance Fields</h4>
<div>
<br /></div>
<div>
Rendering requires computing intersection points $y$ in the scene with ray $\omega_i$, and usually involves traversing a highly-optimized spatial data structure called a bounding volume hierarchy (BVH). $y$ can be expressed as a parametric equation of the origin point $x$ and raytracing direction $\omega_i$, and the goal is to find the distance $t$:</div>
<div>
<br /></div>
<div>
$\hat{y} = x + t \cdot \omega_i$</div>
<div>
<br /></div>
<div>
There is usually a lot of branching logic in BVH traversal algorithms, which makes it harder to implement efficiently on accelerator hardware like GPUs and TPUs. Instead, let's use <i>raymarching on signed distance fields</i> to find the intersection point $y$. I first learned of this geometry modeling technique when <a href="https://www.iquilezles.org/">Inigo "IQ" Quilez</a>, a veritable wizard of graphics programming, gave a live coding demo at Pixar about how he modeled vegetation in the "Brave" movie. Raymarching is the primary technique used by the ShaderToy.com community to implement cool 3D movies using only instructions available to WebGL fragment shaders.</div>
<div>
<br /></div>
<div>
A signed distance field over position $p$ specifies "the distance you can move in any direction without coming into contact with the object". For example, here is the signed distance field for a plane that passes through the origin and is perpendicular to the y-axis. </div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def sdFloor(p):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return p[1] # the y-component of p</span></div>
<div>
<br /></div>
<div>
To find the intersection distance $t$, the raymarching algorithm iteratively increments $t$ by step sizes equal to the signed distance field of the scene (so we never pass through an object). This iteration continues until $t$ "leaves the scene" or the distance field shrinks to zero (we have collided with an object). For the plane distance, the diagram below shows that stepping forward using the distance field lets us get arbitrarily close to the plane without ever passing through it.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKNNfVvoZwokyLnvVWYguAkx3nyEUi6I9g0EFLBP8Z3pluFhnRGe5d0nja9JbPAJTQGdKJmTUE6AWLteroLTe-3K_7AAYSVLoV8pWUEBwg9LR9z_Z9g-DwD7ZvQ0mV3jtR97st2mCFIYY/s1600/plane-raymarch-diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="192" data-original-width="575" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKNNfVvoZwokyLnvVWYguAkx3nyEUi6I9g0EFLBP8Z3pluFhnRGe5d0nja9JbPAJTQGdKJmTUE6AWLteroLTe-3K_7AAYSVLoV8pWUEBwg9LR9z_Z9g-DwD7ZvQ0mV3jtR97st2mCFIYY/s640/plane-raymarch-diagram.png" width="640" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def raymarch(ro, rd, sdf_fn, max_steps=10):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> t = 0.0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> for i in range(max_steps):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> p = ro + t*rd</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> t = t + sdf_fn(p)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return t</span></div>
<div>
<br /></div>
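<div>
As a quick sanity check (a standalone sketch, using plain numpy in place of jax.numpy), a ray starting 2 meters above the floor and pointing straight down should converge to $t = 2$:</div>
<div>
<br /></div>

```python
import numpy as np

def sdFloor(p):
    return p[1]  # distance to the y=0 plane

def raymarch(ro, rd, sdf_fn, max_steps=10):
    # ro = ray origin, rd = (unit) ray direction
    t = 0.0
    for _ in range(max_steps):
        p = ro + t * rd
        t = t + sdf_fn(p)  # step by the distance field, never overshooting
    return t

ro = np.array([0., 2., 0.])   # 2m above the floor
rd = np.array([0., -1., 0.])  # pointing straight down
t = raymarch(ro, rd, sdFloor)
```

<div>
<br /></div>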
<div>
Signed distance fields combined with raymarching have a number of nice mathematical properties. The most important one is that unlike analytical ray-shape intersection, raymarching does not require re-deriving an analytical solution for intersecting points for every primitive shape we wish to add to the scene. Triangles are also general, but they require a lot of memory to store expressive scenes. In my opinion, signed distance fields strike a good balance between memory budget and geometric expressiveness.</div>
<div>
<br /></div>
<div>
Similar to ResNet architectures in Deep Learning, the raymarching algorithm is a form of "unrolled iterative inference" of the same signed distance field. If we are trying to differentiate through the signed distance function (for instance, trying to approximate it with a neural network), this representation may be favorable to gradient descent algorithms.</div>
<div>
<br /></div>
<h4>
Building Up Our Scene</h4>
<div>
<br /></div>
<div>
The first step is to implement the signed distance field for the scene of interest. The naming and programming conventions in this tutorial are heavily inspired by stylistic conventions used by ShaderToy DemoScene community. One such convention is to define hard-coded enums for each object, so we can associate intersection points to their nearest object. The values are arbitrary; you can substitute them with your favorite numbers if you like.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_NONE=0.0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_FLOOR=0.1</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_CEIL=.2</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_WALL_RD=.3</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_WALL_WH=.4</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_WALL_GR=.5</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_SHORT_BLOCK=.6</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_TALL_BLOCK=.7</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_LIGHT=1.0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">OBJ_SPHERE=0.9</span></div>
<div>
<br /></div>
<div>
Computing a ray-scene intersection should therefore return an object id and an associated distance, for which we define a helper function to zip up those two numbers.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def df(obj_id, dist):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return np.array([obj_id, dist])</span></div>
<div>
<br /></div>
<div>
Next, we'll define the distance field for a box (source: <a href="https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm">https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm</a>).</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def udBox(p, b):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # b = half-widths</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return length(np.maximum(np.abs(p)-b,0.0))</span></div>
<div>
<br /></div>
<div>
Rotating, translating, and scaling an object implied by a signed distance field is done by applying the <i>inverse</i> transformation to the query point before passing it to the distance function. For example, if we want to rotate one of the boxes in the scene by an angle of $\theta$, we rotate its argument $p$ by $-\theta$ instead.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def rotateX(p,a):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # We won't be using rotateX for this tutorial.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> c = np.cos(a); s = np.sin(a);</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> px,py,pz=p[0],p[1],p[2]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return np.array([px,c*py-s*pz,s*py+c*pz])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def rotateY(p,a):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> c = np.cos(a); s = np.sin(a);</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> px,py,pz=p[0],p[1],p[2]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return np.array([c*px+s*pz,py,-s*px+c*pz])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def rotateZ(p,a):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> c = np.cos(a); s = np.sin(a);</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> px,py,pz=p[0],p[1],p[2]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return np.array([c*px-s*py,s*px+c*py,pz])</span></div>
<div>
<br /></div>
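<div>
To make the inverse-transformation trick concrete, here is a standalone sketch (plain numpy; udBox, rotateY, and a length helper are repeated from above, and sdMovedBox is a made-up example) that places a unit cube at an assumed center $(2, 0, 0)$ with a rotation of $\pi/4$ about the y-axis:</div>
<div>
<br /></div>

```python
import numpy as np

def length(v):
    return np.sqrt(np.sum(v * v))

def udBox(p, b):
    # b = half-widths
    return length(np.maximum(np.abs(p) - b, 0.0))

def rotateY(p, a):
    c, s = np.cos(a), np.sin(a)
    px, py, pz = p[0], p[1], p[2]
    return np.array([c*px + s*pz, py, -s*px + c*pz])

def sdMovedBox(p):
    # Undo the motion on the query point: translate by -center, then rotate,
    # and evaluate the untransformed box SDF at the transformed point.
    p2 = rotateY(p - np.array([2., 0., 0.]), np.pi / 4)
    return udBox(p2, np.array([.5, .5, .5]))
```

<div>
The box's center maps back to the origin of the base SDF (distance 0), and a point 5m above the center is 4.5m from the box's top face.</div>
<div>
<br /></div>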
<div>
Another cool property of signed distance fields is that you can compute the union of two solids with a simple <span style="font-family: "courier new" , "courier" , monospace;">np.minimum</span> operation. By the definition of a distance field, if you take a step size equal to the smaller of the two distances, you are still guaranteed not to intersect with anything. The following method, short for "Union Operation", joins two distance fields by comparing their distance components. </div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def opU(a,b):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if a[1] < b[1]:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return a</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> else:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return b</span></div>
<div>
<br /></div>
<div>
Unfortunately, JAX cannot trace Python-level branching like the one above when combining the <span style="font-family: "courier new" , "courier" , monospace;">grad</span> and <span style="font-family: "courier new" , "courier" , monospace;">jit</span> transformations. So we need to write things a little differently to preserve differentiability:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def opU(a,b):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> condition = np.tile(a[1,None]<b[1,None], [2])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return np.where(condition, a, b)</span></div>
<div>
<br /></div>
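<div>
Here is a quick standalone check (plain numpy) that the where-based union returns the (obj_id, dist) pair of the nearer object; the object ids and distances below are made up for illustration:</div>
<div>
<br /></div>

```python
import numpy as np

def df(obj_id, dist):
    return np.array([obj_id, dist])

def opU(a, b):
    # Broadcast the distance comparison over both (obj_id, dist) entries,
    # so the whole pair of the nearer object is selected.
    condition = np.tile(a[1, None] < b[1, None], [2])
    return np.where(condition, a, b)

floor = df(0.1, 3.0)  # floor is 3m away
wall  = df(0.4, 1.0)  # wall is 1m away
nearest = opU(floor, wall)  # should select the wall
```

<div>
<br /></div>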
<div>
Now we have all the requisite pieces to build the signed distance field for the Cornell Box, which we call <span style="font-family: "courier new" , "courier" , monospace;">sdScene</span>. Recall from the previous section that the distance field for an axis-aligned plane is just the height along that axis. We can use this principle to build infinite planes that comprise the walls, floor, and ceiling of the Cornell Box.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">def sdScene(p):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # p is [3,]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> px,py,pz=p[0],p[1],p[2]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # floor</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_floor = df(OBJ_FLOOR, py) # py = distance from y=0</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = obj_floor </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # ceiling</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_ceil = df(OBJ_CEIL, 4.-py)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_ceil)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # backwall</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_bwall = df(OBJ_WALL_WH, 4.-pz)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_bwall)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # leftwall</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_lwall = df(OBJ_WALL_RD, px-(-2))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_lwall)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # rightwall</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_rwall = df(OBJ_WALL_GR, 2-px)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_rwall)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # light</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_light = df(OBJ_LIGHT, udBox(p - np.array([0,3.9,2]), np.array([.5,.01,.5])))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_light)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # tall block</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> bh = 1.3</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> p2 = rotateY(p- np.array([-.64,bh,2.6]),.15*np.pi)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> d = udBox(p2, np.array([.6,bh,.6]))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_tall_block = df(OBJ_TALL_BLOCK, d)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_tall_block) </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> # short block</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> bw = .6</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> p2 = rotateY(p- np.array([.65,bw,1.7]),-.1*np.pi)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> d = udBox(p2, np.array([bw,bw,bw]))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> obj_short_block = df(OBJ_SHORT_BLOCK, d)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> res = opU(res,obj_short_block)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;"> return res</span></div>
<div>
<br /></div>
<div>
Notice that we model the light source on the ceiling as a thin rectangular prism with half-widths $(0.5, 0.01, 0.5)$. All numbers are expressed in SI units, so this implies a 1 meter x 1 meter light, and a big 4m x 4m Cornell box (this is a big scene!). The size of the light will become relevant later when we compute quantities like emitted radiance.</div>
<div>
<br /></div>
<h4>
Computing Surface Normals</h4>
<div>
In rendering we frequently need to compute the normals of geometric surfaces. In ShaderToy programs, the most common algorithm is a finite-difference approximation of the distance field gradient $\nabla_p d(p)$, which is then normalized to obtain an approximate surface normal.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def calcNormalFiniteDifference(p):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # gradient approximation via central differences</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> eps = 0.001</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> dx=np.array([eps,0,0])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> dy=np.array([0,eps,0])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> dz=np.array([0,0,eps])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # extract just the distance component</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> nor = np.array([</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> sdScene(p+dx)[1] - sdScene(p-dx)[1],</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> sdScene(p+dy)[1] - sdScene(p-dy)[1],</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> sdScene(p+dz)[1] - sdScene(p-dz)[1],</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return normalize(nor)</span></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Note that this requires <i>six</i> separate evaluations of the <span style="font-family: "courier new" , "courier" , monospace;">sdScene</span> function! As it turns out, JAX can give us analytical normals basically for free via its auto-differentiation capabilities. The backward pass has roughly the same computational cost as the forward pass, so the autodiff gradient costs about as much as two forward evaluations instead of six, roughly a 3x speedup over finite differencing. Neat!</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def dist(p):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # return the distance-component only</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return sdScene(p)[1]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def calcNormalWithAutograd(p):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return normalize(grad(dist)(p))</span></div>
<div>
<br /></div>
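<div>
As a standalone sanity check (plain numpy, using the six-evaluation finite-difference scheme on a known surface), the normal of the floor plane should come out as $(0, 1, 0)$:</div>
<div>
<br /></div>

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def dist(p):
    # distance field of the floor plane y = 0
    return p[1]

def calcNormalFiniteDifference(p, eps=0.001):
    # central differences along each axis: six evaluations total
    dx = np.array([eps, 0, 0])
    dy = np.array([0, eps, 0])
    dz = np.array([0, 0, eps])
    nor = np.array([
        dist(p + dx) - dist(p - dx),
        dist(p + dy) - dist(p - dy),
        dist(p + dz) - dist(p - dz),
    ])
    return normalize(nor)

n = calcNormalFiniteDifference(np.array([0.3, 0.0, 1.2]))
```

<div>
<br /></div>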
<h4>
Cosine-Weighted Sampling</h4>
<div>
<br /></div>
<div>
We also need the ability to sample scattering rays around a local surface normal, for when we choose recursive rays to scatter. All the objects in the scene are assigned Lambertian BRDFs, which means they are matte: their apparent brightness to an observer is the same regardless of viewing angle. For Lambertian materials, it is much more effective to sample from a cosine-weighted distribution, because it allows two cosine-related probability terms (one from the sampling distribution, one from the BRDF) to cancel out. The motivation for this will become apparent in Part II of the tutorial, but here is the code up front.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def sampleCosineWeightedHemisphere(rng_key, n):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rng_key, subkey = random.split(rng_key)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> u = random.uniform(subkey,shape=(2,),minval=0,maxval=1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> u1, u2 = u[0], u[1]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> uu = normalize(np.cross(n, np.array([0.,1.,1.])))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> vv = np.cross(uu,n)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ra = np.sqrt(u2)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rx = ra*np.cos(2*np.pi*u1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ry = ra*np.sin(2*np.pi*u1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rz = np.sqrt(1.-u2)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rr = rx*uu+ry*vv+rz*n</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return normalize(rr) </span></div>
<div>
<br /></div>
<div>
Here's a quick 3D visualization to see whether our implementation is doing something reasonable:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">from mpl_toolkits.mplot3d import Axes3D</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">nor = normalize(np.array([[1.,1.,0.]]))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">nor = np.tile(nor,[1000,1])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">rng_key = random.split(RNG_KEY, 1000)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">rd = vmap(sampleCosineWeightedHemisphere)(rng_key, nor)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">fig = plt.figure()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">ax = fig.add_subplot(121, projection='3d')</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">ax.scatter(rd[:,0],rd[:,2],rd[:,1])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">ax = fig.add_subplot(122)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">ax.scatter(rd[:,0],rd[:,1])</span></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOThl_2BdjiToXlIah8oW8Utg8XpANhF-nNKhXyWGqMbW66cYA46ykBlP6TuzcuyrYDJxaduv860dIyHkCCz2uOtgLkj28LqrxtEesIudbtnoHV8eoMkTJJFugVCLhFdSlbPGe-oOeYAA/s1600/cos_sampling.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="182" data-original-width="338" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOThl_2BdjiToXlIah8oW8Utg8XpANhF-nNKhXyWGqMbW66cYA46ykBlP6TuzcuyrYDJxaduv860dIyHkCCz2uOtgLkj28LqrxtEesIudbtnoHV8eoMkTJJFugVCLhFdSlbPGe-oOeYAA/s400/cos_sampling.png" width="400" /></a></div>
<div>
<br /></div>
<h4>
Camera Model</h4>
<div>
<br /></div>
<div>
For each pixel we want to render, we need to associate it with a ray direction <span style="font-family: "courier new" , "courier" , monospace;">rd</span> and a ray origin <span style="font-family: "courier new" , "courier" , monospace;">ro</span>. The most basic camera model for computer graphics is a pinhole camera, shown below:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuL-JZuoFKTDjBTfuhaX2MgKduancEPFMaiRbUtGqdTSv1qjv86vEaceUj2hx0miHrzDVT-FtW4EJrWCO-nIYGRNdkU8FG1rMvFSB3rmchjY1ZO-bNXze4hJwF-PZIStwZQdtI9SSnDfY/s1600/perspective_camera.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="740" data-original-width="805" height="294" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuL-JZuoFKTDjBTfuhaX2MgKduancEPFMaiRbUtGqdTSv1qjv86vEaceUj2hx0miHrzDVT-FtW4EJrWCO-nIYGRNdkU8FG1rMvFSB3rmchjY1ZO-bNXze4hJwF-PZIStwZQdtI9SSnDfY/s320/perspective_camera.png" width="320" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
The following code sets up a pinhole camera with focal distance of 2.2 meters:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">N=150 # width of image plane</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">xs=np.linspace(0,1,N) # N pixel coordinates in [0,1]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">us,vs = np.meshgrid(xs,xs) </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">uv = np.vstack([us.flatten(),vs.flatten()]).T</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"># normalize pixel locations to -1,1</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">p = np.concatenate([-1+2*uv, np.zeros((N*N,1))], axis=1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"># Render a pinhole camera.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">eye = np.tile(np.array([0,2.,-3.5]),[p.shape[0],1])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">look = np.array([[0,2.0,0]]) # look straight ahead</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">w = vmap(normalize)(look - eye)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">up = np.array([[0,1,0]]) # up axis of world</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">u = vmap(normalize)(np.cross(w,up))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">v = vmap(normalize)(np.cross(u,w))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">d=2.2 # focal distance</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">rd = vmap(normalize)(p[:,0,None]*u + p[:,1,None]*v + d*w)</span></div>
<div>
<br /></div>
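<div>
As a standalone sanity check (plain numpy, a single camera rather than the vmapped batch above), the camera basis should be orthonormal, and the ray through the center of the image plane should point straight at the look target:</div>
<div>
<br /></div>

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

eye  = np.array([0., 2., -3.5])
look = np.array([0., 2., 0.])   # look straight ahead
up   = np.array([0., 1., 0.])   # up axis of world

w = normalize(look - eye)       # forward axis
u = normalize(np.cross(w, up))  # camera horizontal axis
v = normalize(np.cross(u, w))   # camera vertical axis

d = 2.2                         # focal distance
px, py = 0.0, 0.0               # center of the image plane
rd = normalize(px * u + py * v + d * w)
```

<div>
<br /></div>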
<div>
If you want to render an orthographic projection instead, you can set every ray direction to point straight ahead along the z-axis, <span style="font-family: "courier new" , "courier" , monospace;">rd = np.array([0, 0, 1])</span>, and spread the ray origins across the image plane rather than having all rays originate from a single eye point:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">N=150 # width of image plane</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">xs=np.linspace(0,1,N) # N pixel coordinates in [0,1]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">us,vs = np.meshgrid(xs,xs) </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">us = (2*us-1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">vs *= 2</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">uv = np.vstack([us.flatten(),vs.flatten()]).T # NxN image grid</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">eye = np.concatenate([uv, np.zeros((N*N,1))], axis=1)*2</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">rd = np.zeros_like(eye) + np.array([[0, 0, 1]])</span></div>
<div>
<br /></div>
<div>
An orthographic camera is what happens when you stretch the focal distance to infinity. That will yield an image like this:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLLfqcxUKW2pgy_RRMI-Am-yf2zhj9dAqgK_dtW5kwOFdWyT0oc4MsKx7Z4RbbZ16ObOOvSlBUnWrN64LrWgezW97lvV07QYcidXh9ACpJIHIlZHLc-qaTFJKufX1t4ayqDyehnaadKQM/s1600/cornell_box_ortho.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="238" data-original-width="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLLfqcxUKW2pgy_RRMI-Am-yf2zhj9dAqgK_dtW5kwOFdWyT0oc4MsKx7Z4RbbZ16ObOOvSlBUnWrN64LrWgezW97lvV07QYcidXh9ACpJIHIlZHLc-qaTFJKufX1t4ayqDyehnaadKQM/s1600/cornell_box_ortho.png" /></a></div>
<div>
<br /></div>
</div>
<div>
<br /></div>
<h3>
Part II: Light Simulation</h3>
<div>
<br /></div>
<div>
<div>
With our scene defined and basic geometric functions set up, we can finally get to the fun part of implementing light transport. This part of the tutorial is agnostic to the geometry representation described in Part I, so you can actually follow along with whatever programming language and geometry representation you like (raymarching, triangles, etc).</div>
<div>
<br /></div>
<h4>
Radiometry From First Principles</h4>
<div>
<br /></div>
<div>
Before we learn the path tracing algorithm, it is <i>illuminating</i> to first understand the underlying physical phenomena being simulated. <b>Radiometry</b> is a mathematical framework for measuring electromagnetic radiation. Not only can it be used to render pretty pictures, but it can also be used to understand heat and energy propagated in straight lines within closed systems (e.g. blackbody radiation). What we are ultimately interested in are human perceptual color quantities, but to get them first we will simulate the physical quantities (Watts) and then convert them to lumens and RGB values.</div>
<div>
<br /></div>
<div>
This section borrows some figures from the <a href="http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Radiometry.html">PBRT webpage on Radiometry</a>. I highly recommend reading that page before proceeding, but I also summarize the main points you need to know here.</div>
<div>
<br /></div>
<div>
You can actually derive the laws of radiometry from first principles, using only the principle of conservation of energy: <b>within a closed system, the total amount of energy being emitted is equal to the total amount of energy being absorbed.</b> </div>
<div>
<br /></div>
<div>
Consider a small sphere of radius $r$ emitting 60 Watts of electromagnetic power into a larger enclosing sphere of radius $R$. We know that the bigger sphere must be absorbing 60 Watts of energy, but because it has a larger surface area ($4\pi R^2$), the incoming energy density per unit area is a factor of $\frac{R^2}{r^2}$ smaller.</div>
<div>
<br /></div>
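<div>
Plugging in some illustrative numbers (the 60W emitter from above, with assumed radii $r = 0.1$m and $R = 1$m), we can verify the $\frac{R^2}{r^2}$ falloff directly:</div>
<div>
<br /></div>

```python
import numpy as np

Phi = 60.0       # total emitted power, in Watts
r, R = 0.1, 1.0  # radii of the inner (emitting) and outer (absorbing) spheres

M = Phi / (4 * np.pi * r**2)  # flux density leaving the small sphere, W/m^2
E = Phi / (4 * np.pi * R**2)  # flux density arriving at the large sphere, W/m^2
# The same 60W spreads over an area (R/r)^2 = 100x larger,
# so the incoming density is 100x smaller.
```

<div>
<br /></div>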
<div>
We call this "area density of flux" <b>irradiance</b> (abbreviated $E$) if it is arriving at a surface, and <b>radiant exitance</b> (abbreviated $M$) if it is leaving a surface. The SI unit for both quantities is Watts per square meter.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz6IGH0ttjFOICaaDi-1SEufts3OOllIR_0fE-6jbE_kii64U4UgjJ2s4YIRc7Sq-btMEkegONTUtP60avMFwEFRgUqHACSfuvJ_HjHafuApBvK3MDe4Nt1n0Sv0V270MjansWVqGSVEs/s1600/flux_sphere.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="425" data-original-width="425" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz6IGH0ttjFOICaaDi-1SEufts3OOllIR_0fE-6jbE_kii64U4UgjJ2s4YIRc7Sq-btMEkegONTUtP60avMFwEFRgUqHACSfuvJ_HjHafuApBvK3MDe4Nt1n0Sv0V270MjansWVqGSVEs/s320/flux_sphere.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure Source: http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Radiometry.html</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
Now let's consider a slightly different scene in the figure below, where a small flat surface with area $A$ emits a straight beam of light onto the floor. On the left, the emitting and receiving surfaces have the same area, $A = A_1$, so the irradiance equals the radiant exitance, $E = M$. On the right, the beam of light strikes the floor at an angle $\theta$, which makes the projection $A_2$ larger. Calculus and trigonometry tell us that as we shrink the area $A \to 0$, the area of the projected light $A_2$ approaches $\frac{A}{\cos \theta}$. Because flux must be conserved, the irradiance on $A_2$ must be $E = M \cos \theta$, where $\theta$ is the angle between the surface normal and the light direction. This is known as "Lambert's Law".</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc8RBNWMO3pyFZlIgZZ5NeglnOqZvHucX56T_ggyCI9fw4jP9GrEkwqtXVmI_0aipzB_7suRM_V1m-Hc6scJmljWn8IFud2-cT5_FZhanE34wPazRC8qzQwV9ulQ-qjQF_NwkfV6hRHpA/s1600/lamberts_law.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="438" data-original-width="542" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc8RBNWMO3pyFZlIgZZ5NeglnOqZvHucX56T_ggyCI9fw4jP9GrEkwqtXVmI_0aipzB_7suRM_V1m-Hc6scJmljWn8IFud2-cT5_FZhanE34wPazRC8qzQwV9ulQ-qjQF_NwkfV6hRHpA/s320/lamberts_law.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure Source: http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Radiometry.html</td></tr>
</tbody></table>
<div>
<br /></div>
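As a quick numeric sanity check of Lambert's Law (the exitance value below is made up for illustration), tilting the beam by 60 degrees halves the irradiance, since $\cos 60^\circ = 0.5$:

```python
import numpy as np

M = 10.0  # radiant exitance of the emitter, W/m^2 (made-up value)

# Lambert's Law: irradiance on the receiving surface falls off with the
# cosine of the angle between the surface normal and the light direction.
def irradiance(M, theta):
    return M * np.cos(theta)

print(irradiance(M, 0.0))        # head-on: E = M = 10.0
print(irradiance(M, np.pi / 3))  # 60 degrees: E = M / 2 = 5.0
```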
<div>
In the above examples, the scenes were simple or symmetric enough that we did not have to think about what direction light is coming from when computing the irradiance of a surface. However, if we want to simulate light in a complex scene, we will need to compute irradiance by integrating light over many possible directions. For non-transparent surfaces, this set of directions forms a hemisphere surrounding the point of interest, and is perpendicular to the surface normal.</div>
<div>
<br /></div>
<div>
<b>Radiance</b> extends irradiance to also depend on the direction (solid angle) of incident light. Solid angles are just extensions of 2D angles to 3D spheres (and hemispheres). You can recover irradiance from radiance by integrating out the solid angle, and power from irradiance by integrating over area:</div>
<div>
<br /></div>
<div>
<ul>
<li>Radiance $L = \frac{\partial^2 \Phi}{\partial \Omega \, \partial A \cos \theta}$ measures flux per projected unit area $A \cos \theta$ per unit solid angle $\Omega$.</li>
<li>Irradiance $E = \frac{\partial \Phi}{\partial A}$ is the integral of radiance over solid angles $\Omega$, weighted by $\cos \theta$.</li>
<li>Power $\Phi$ is the integral of irradiance over area $A$.</li>
</ul>
</div>
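These relationships can be checked numerically. As a sketch (assuming a hypothetical constant radiance $L$ over the hemisphere), integrating $L \cos \theta$ over solid angle recovers the standard identity $E = \pi L$:

```python
import numpy as np

L = 2.0  # hypothetical constant incoming radiance

# Discretize the hemisphere: dω = sinθ dθ dφ, θ in [0, π/2], φ in [0, 2π).
theta = np.linspace(0, np.pi / 2, 1000)
phi = np.linspace(0, 2 * np.pi, 1000)
dtheta = theta[1] - theta[0]
dphi = phi[1] - phi[0]
t, p = np.meshgrid(theta, phi)

# E = ∫ L cosθ dω, which equals π * L for constant radiance.
E = np.sum(L * np.cos(t) * np.sin(t) * dtheta * dphi)
print(E)  # ≈ 6.28 (= π * L)
```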
<div>
<br /></div>
<div>
A nice property of radiance is that it is conserved along rays through empty space. We have the incoming radiance $L_i$ from direction $\omega_i$ to point $x$ equal to the outgoing radiance $L_o$ from some other point $y$, in the reverse direction $-\omega_i$. $y$ is the intersection of origin $x$ along ray $\omega_i$ with the scene geometry.</div>
<div>
<br /></div>
<div>
$ L_i(x, \omega_i) = L_o(y, -\omega_i) $</div>
<div>
<br /></div>
<div>
It's important to note that although radiance is conserved along rays through empty space, we still need to apply Lambert's Law when computing <i>irradiance at a surface</i>.</div>
<div>
<br /></div>
<h3>
Different Ways to Integrate Radiance</h3>
<div>
<br /></div>
<div>
You may remember from calculus class that it is sometimes easier to compute integrals by changing the integration variable. The same concept holds in rendering: we'll use three different integration methods in building a computationally efficient path tracer. In this section I will draw some material directly from the PBRTv3 online textbook, which you can find here: <a href="http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Working_with_Radiometric_Integrals.html">http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Working_with_Radiometric_Integrals.html</a></div>
<div>
<br /></div>
<div>
I was a teaching assistant for the graduate graphics course at Brown for two years, and by far the most common mistake in the path tracing assignments was an insufficient understanding of the calculus needed to correctly integrate radiometric quantities. </div>
<div>
<br /></div>
<h4>
Integrating Over Solid Angle</h4>
<div>
<br /></div>
<div>
As mentioned before, in order to compute <i>irradiance</i> $E(x, n)$ at a surface point $x$ with normal $n$, we need to take Lambert's Law into account, because flux density "spreads out" when light arrives at an angle.</div>
<div>
<br /></div>
<div>
$E(x, n) = \int_\Omega d\omega L_i(x, \omega) |\cos \theta| = \int_\Omega d\omega L_i(x, \omega) |\omega \cdot n| $</div>
<div>
<br /></div>
<div>
One way to estimate this integral is a single-sample Monte Carlo Estimator, where we sample a single ray direction $\omega_i$ uniformly from the hemisphere, and evaluate the radiance for that direction. In expectation over $\omega_i$, the estimator computes the correct integral.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
$\omega_i \sim \Omega $</div>
<div>
$\hat{E}(x, n) = L_i(x, \omega_i) |\omega_i \cdot n| \frac{1}{p(\omega_i)} $</div>
<div>
<br /></div>
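Here is a minimal sketch of this estimator in NumPy, assuming a hypothetical constant incoming radiance so we can compare against the analytic answer $\pi L$ (for a uniformly sampled hemisphere direction, $p(\omega) = \frac{1}{2\pi}$):

```python
import numpy as np

rng = np.random.default_rng(0)
L_const = 1.0  # hypothetical constant incoming radiance

def sample_uniform_hemisphere(rng):
    # Uniform solid-angle sampling around n = (0, 0, 1):
    # z = cos(theta) is uniform in [0, 1) for a uniform hemisphere.
    u1, u2 = rng.random(), rng.random()
    z = u1
    r = np.sqrt(max(0.0, 1.0 - z * z))
    phi = 2 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def estimate_E(rng):
    # Single-sample estimator: L_i * |ω·n| / p(ω), with p(ω) = 1/(2π).
    w = sample_uniform_hemisphere(rng)
    n = np.array([0.0, 0.0, 1.0])
    return L_const * abs(np.dot(w, n)) * 2 * np.pi

# Averaging many single-sample estimates converges to E = π * L.
E_hat = np.mean([estimate_E(rng) for _ in range(100000)])
print(E_hat)  # ≈ 3.14
```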
<h4>
Integrating Over Projected Solid Angle</h4>
<div>
<br /></div>
<div>
Due to Lambert's Law, sampling outgoing rays nearly perpendicular to the surface normal is wasteful: the projected area $\frac{A}{\cos \theta}$ approaches infinity, so such rays contribute almost nothing to the irradiance.</div>
<div>
<br /></div>
<div>
We can avoid sampling these "wasted" rays by weighting the probability of sampling a ray according to Lambert's Law, i.e. using a cosine-weighted distribution over the hemisphere $H^2$. This requires us to perform a change of variables and integrate with respect to the projected solid angle $d\omega^\perp = |\cos \theta| d\omega$. </div>
<div>
<br /></div>
<div>
This is where the cosine-weighted hemisphere sampling function we implemented earlier will come in handy. </div>
<div>
<br /></div>
<div>
$ E(x, n) = \int_{H^2} L_i(x, \omega) d\omega^\perp $</div>
<div>
<br /></div>
<div>
The cosine weighting means that light arriving from directions closer to the surface normal contributes more to the irradiance.</div>
<div>
<br /></div>
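As a sketch, one common construction for such a sampler is Malley's method: sample a unit disk uniformly and project the point up onto the hemisphere, which yields $p(\omega) = \frac{\cos \theta}{\pi}$ (this is an assumption about the construction, not necessarily the exact function implemented earlier):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cosine_weighted_hemisphere(rng):
    # Malley's method: uniformly sample a unit disk, then project onto the
    # hemisphere around n = (0, 0, 1). The resulting pdf is cos(theta) / pi.
    u1, u2 = rng.random(), rng.random()
    r = np.sqrt(u1)
    phi = 2 * np.pi * u2
    x, y = r * np.cos(phi), r * np.sin(phi)
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    return np.array([x, y, z])

# Sanity check: under p(ω) = cosθ/π, E[cosθ] = ∫ cosθ (cosθ/π) dω = 2/3.
zs = [sample_cosine_weighted_hemisphere(rng)[2] for _ in range(100000)]
print(np.mean(zs))  # ≈ 0.667
```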
<h4>
Integrating Over Light Area</h4>
<div>
<br /></div>
<div>
If the light source subtends a very small solid angle on the hemisphere, we will need to sample a lot of random outgoing rays before we find one that intersects the light source. For small or directional light sources, it is far more computationally efficient to integrate over the area of the light, rather than the hemisphere.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDDDPZ7deETQxKgNh8RPWz_D_zp2T2vKI_7Vq3BsIIX1cAZpuDN5HIkaC6ssX9HsI-_XIGMsdqq10Q2MdO3e7oc5DVQ20ObB6Ii6xx9dMJfzS7TVgFLyPbWpG3RMQKMXJ6lHXclhpwebk/s1600/area_integral.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="598" data-original-width="727" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDDDPZ7deETQxKgNh8RPWz_D_zp2T2vKI_7Vq3BsIIX1cAZpuDN5HIkaC6ssX9HsI-_XIGMsdqq10Q2MdO3e7oc5DVQ20ObB6Ii6xx9dMJfzS7TVgFLyPbWpG3RMQKMXJ6lHXclhpwebk/s320/area_integral.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Figure Source: http://www.pbr-book.org/3ed-2018/Color_and_Radiometry/Working_with_Radiometric_Integrals.html</span></td></tr>
</tbody></table>
<div>
<br /></div>
<div>
<br /></div>
<div>
If we perform a change in variables from differential solid angle $d\omega$ to differential area $dA$, we must compensate for the change in volume. </div>
<div>
<br /></div>
<div>
$ d\omega = \frac{dA \cos \theta_o}{r^2} $</div>
<div>
<br /></div>
<div>
<div>
I won't go through the derivation in this tutorial, but the interested reader can find it here: <a href="https://www.cs.princeton.edu/courses/archive/fall10/cos526/papers/zimmerman98.pdf">https://www.cs.princeton.edu/courses/archive/fall10/cos526/papers/zimmerman98.pdf</a>. Substituting the above equation into the irradiance integral, we have:</div>
</div>
<div>
<br /></div>
<div>
$ E(x, n) = \int_{A} L \cos \theta_i \frac{dA \cos \theta_o}{r^2} $</div>
<div>
<br /></div>
<div>
where $L$ is the emitted radiance of the light arriving from the implied direction $-\omega$, which makes an angle $\theta_o$ with the light's surface normal. The corresponding single-sample Monte Carlo estimator is given by sampling a point on the area light, rather than a direction on the hemisphere. The probability $p(p)$ of uniformly sampling a point $p$ on an area $A$ is just $\frac{1}{A}$.</div>
<div>
<br /></div>
<div>
$p \sim A $</div>
<div>
$\omega = \frac{p-x}{\left\lVert {p-x} \right\rVert} $</div>
<div>
$r^2 = \left\lVert {p-x} \right\rVert ^2 $</div>
<div>
$\hat{E}(x, n) = \frac{1}{p(p)}\frac{L}{r^2} |\omega \cdot n| \, |{-\omega} \cdot n_{light}| $</div>
<div>
<br /></div>
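Here is a minimal sketch of this estimator, with a hypothetical setup chosen so the answer can be checked by hand: a 1 m² light with radiance $L = 4$, two meters directly overhead and facing down, gives $\hat{E} = A L / r^2 = 1$:

```python
import numpy as np

def area_light_estimate(x, n, p_light, n_light, L, area):
    # Single-sample irradiance estimator for an area light:
    #   E_hat = (1/p(p)) * (L / r^2) * |w·n| * |-w·n_light|, with p(p) = 1/area
    w = p_light - x
    r2 = np.dot(w, w)
    w = w / np.sqrt(r2)
    return area * L / r2 * abs(np.dot(w, n)) * abs(np.dot(-w, n_light))

# Hypothetical setup: a 1 m^2 light with radiance L = 4, two meters
# directly overhead and facing down toward the receiving point at the origin.
x = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
p_light = np.array([0.0, 0.0, 2.0])
n_light = np.array([0.0, 0.0, -1.0])
print(area_light_estimate(x, n, p_light, n_light, L=4.0, area=1.0))  # 1.0
```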
<h3>
Making Rendering Computationally Tractable with Path Integrals</h3>
<div>
<br /></div>
<div>
The rendering equation describes the outgoing radiance $L_o(x, \omega_o)$ from point $x$ along ray $\omega_o$.</div>
<div>
<br /></div>
<div>
$ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) L_i(x, \omega_i) (-\omega_i \cdot n) d\omega_i $</div>
<div>
<br /></div>
<div>
where $L_e(x, \omega_o)$ is emitted radiance, $f_r(x, \omega_i, \omega_o)$ is the BRDF (material properties), $L_i(x, \omega_i)$ is incoming radiance, $(-\omega_i \cdot n)$ is the attenuation of light coming in at an incident angle with surface normal $n$. The integral is with respect to solid angle on a hemisphere.</div>
<div>
<br /></div>
<div>
How do we go about implementing this on a computer? Evaluating the incoming light to a point requires integrating over an infinite number of directions, and for each of these directions, we have to recursively evaluate the incoming light to those points. Our computers simply cannot do this.</div>
<div>
<br /></div>
<div>
Fortunately, path tracing provides a tractable way to approximate this scary integral. Instead of integrating over the hemisphere $\Omega$, we can sample a random direction $\omega_i \sim \Omega$, and the probability-weighted contribution from that single ray is an unbiased, single-sample Monte Carlo estimator of the rendering equation above.</div>
<div>
<br /></div>
<div>
$ \omega_i \sim \Omega $</div>
<div>
$\hat{L}_o(x, \omega_o) = L_e(x, \omega_o) + \frac{1}{p(\omega_i)} f_r(x, \omega_i, \omega_o) L_i(x, \omega_i) (-\omega_i \cdot n(x)) $</div>
<div>
<br /></div>
<div>
We still need to deal with infinite recursion. In most real-world scenarios, a photon only bounces around a few times before it is absorbed, so we can truncate the recursion at a fixed depth, or use an unbiased technique like <a href="https://smerity.com/montelight-cpp/">Russian Roulette sampling</a>. We recursively trace the $L_i(x, \omega_i)$ function until we hit the termination condition, which results in a computation cost linear in path depth.</div>
<div>
<br /></div>
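Russian Roulette termination can be sketched as follows: terminate a path with probability $1 - q$, and divide surviving contributions by $q$, which leaves the estimator's expected value unchanged (the continuation probability $q$ below is a made-up constant):

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette(contribution, q, rng):
    # Keep the path alive with probability q; reweight survivors by 1/q
    # so the expected value is unchanged (unbiased termination).
    if rng.random() < q:
        return contribution / q
    return 0.0

# E[roulette(c, q)] = q * (c/q) + (1-q) * 0 = c, for any 0 < q <= 1.
samples = [roulette(3.0, 0.5, rng) for _ in range(100000)]
print(np.mean(samples))  # ≈ 3.0
```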
<h3>
A Naive Path Tracer</h3>
<div>
<br /></div>
<div>
Below is the code for a naive path tracer, which is more or less a direct translation of the equation above.</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def trace(ro, rd, depth):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> p = intersect(ro, rd)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> n = calcNormal(p)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance = emittedRadiance(p, ro)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if depth < 3:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Uniform hemisphere sampling</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rd2 = sampleUniformHemisphere(n)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Li = trace(p, rd2, depth+1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance += brdf(p, rd, rd2)*Li*np.dot(rd2, n)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return radiance</span></div>
<div>
<br /></div>
<div>
We assume a 25-Watt square light fixture at the top of the Cornell Box that acts as a diffuse area light and only emits light from one side of the plane. Diffuse lights have a uniform spatial and directional radiance distribution; such a light is known as a "Lambertian emitter", and its emitted radiance has a closed-form expression that is the same in every direction:</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">LIGHT_POWER = np.array([25, 25, 25]) # Watts</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">LIGHT_AREA = 1.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def emittedRadiance(p, ro):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return LIGHT_POWER / (np.pi * LIGHT_AREA)</span></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
The $\pi$ term is a little surprising at first, but you can find the derivation here for where it comes from: <a href="https://computergraphics.stackexchange.com/questions/3621/total-emitted-power-of-diffuse-area-light">https://computergraphics.stackexchange.com/questions/3621/total-emitted-power-of-diffuse-area-light</a>.</div>
<div>
<br /></div>
<div>
Normally we'd have to track radiance for every visible wavelength, but we can obtain a good approximation of the entire spectral power distribution by tracking radiance at just a few wavelengths. According to tristimulus theory, all human-perceivable colors can be represented with just 3 numbers, such as coordinates in the XYZ or RGB color bases. For simplicity, we'll only compute radiance values at the R, G, B wavelengths in this tutorial. The brdf term encodes material properties. This is a simple scene in which all materials are Lambertian, meaning the BRDF does not depend on the incident and exitant directions, so reflection simply multiplies the incident radiance channel-wise by the surface's R, G, B reflectance. Here are the BRDFs we use for the objects in the scene, expressed in the RGB basis:</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">lightDiffuseColor = np.array([0.2,0.2,0.2])</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">leftWallColor = np.array([.611, .0555, .062]) * 1.5</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">rightWallColor = np.array([.117, .4125, .115]) * 1.5</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">whiteWallColor = np.array([255, 239, 196]) / 255</span></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
We can make our path tracer more efficient by switching the integration variable to the projected solid angle $d\omega_i |\cos \theta|$. As discussed in the last section, this has the benefit of importance-sampling the solid angles that are proportionally larger due to Lambert's law, and as an added bonus we can drop the evaluation of the cosine term. </div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def trace(ro, rd, depth):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> p = intersect(ro, rd)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> n = calcNormal(p)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance = emittedRadiance(p, ro)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if depth < 3:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Cosine-weighted hemisphere sampling</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rd2 = sampleCosineWeightedHemisphere(n)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Li = trace(p, rd2, depth+1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance += brdf(p, rd, rd2)*Li</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return radiance</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<h4>
Reducing Variance by Splitting Up Indirect Lighting</h4>
<div>
<br /></div>
<div>
The above estimator is correct and will get you the right result in expectation, but it has high variance, because a sample only contributes nonzero radiance when one of its path vertices happens to hit the emissive geometry. If you are trying to render a scene that is illuminated by a geometrically small light source -- a candle in a dark room, perhaps -- the vast majority of path samples will never intersect the candle, and those samples are wasted. The image will appear grainy and dark.</div>
<div>
<br /></div>
<div>
Luckily, the area integration trick we discussed a few sections back comes to our rescue. In graphics, we actually know where the light surfaces are ahead of time, so we can integrate over the emissive surface instead of integrating over the receiving surface's solid angles. We do this by performing a change of variables $d\omega = \frac{dA \cos \theta_o}{r^2}$. </div>
<div>
<br /></div>
<div>
To implement this trick, we can split the light reflected off a point $p$ into two separate calculations: (1) direct lighting from a light source bouncing off of $p$, and (2) indirect lighting from a non-light surface reflecting off of $p$. Notice that we have to modify the recursive <span style="font-family: "courier new" , "courier" , monospace;">trace</span> term to ignore <span style="font-family: "courier new" , "courier" , monospace;">emittedRadiance</span> from any lights it encounters, except for the case where light leaves the emitter and enters the eye directly (which is when <span style="font-family: "courier new" , "courier" , monospace;">depth=0</span>). This is because for each point $p$ in the path, we already account for an extra path that goes from an area light directly to $p$. We don't want to double-count such paths!</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def trace(ro, rd, depth):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> p = intersect(ro, rd)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> n = calcNormal(p)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if depth == 0:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Integration over solid angle (eye ray)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance = emittedRadiance(p, ro)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> else:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance = np.zeros(3)  # avoid using radiance before assignment</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Direct Lighting Term</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> pA, M, pdf_A = sampleAreaLight()</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> n_light = calcNormal(pA)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if visibilityTest(p, pA):</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> square_distance = np.sum(np.square(pA - p))</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> w_i = normalize(pA - p)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> dw_da = np.dot(n_light, -w_i)/square_distance # dw/dA</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance += (brdf(p, rd, w_i) * np.dot(n, w_i) * M) * dw_da / pdf_A</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Indirect Lighting Term</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> if depth < 3:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> # Integration over cosine-weighted solid angle</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> rd2 = sampleCosineWeightedHemisphere(n)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> Li = trace(p, rd2, depth+1)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> radiance += brdf(p, rd, rd2)*Li</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> return radiance</span></div>
<div>
<br /></div>
<div>
The <span style="font-family: "courier new" , "courier" , monospace;">sampleAreaLight()</span> function samples a point $p$ on an area light with emitted radiance $M$ and also computes the probability of choosing that sample (for a uniform emitter, it's just one over the area).</div>
<div>
<br /></div>
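For concreteness, here is a hedged sketch of what a <span style="font-family: "courier new" , "courier" , monospace;">sampleAreaLight</span>-style function might look like for a rectangular light; the corner, edge vectors, and radiance values below are illustrative assumptions, not the scene's actual geometry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rectangular light: a corner point, two edge vectors spanning
# the rectangle, and the radiance of a Lambertian emitter.
LIGHT_CORNER = np.array([-0.5, 1.99, -0.5])
LIGHT_EDGE_U = np.array([1.0, 0.0, 0.0])
LIGHT_EDGE_V = np.array([0.0, 0.0, 1.0])
LIGHT_RADIANCE = np.array([25.0, 25.0, 25.0]) / np.pi  # power / (pi * area)

def sampleAreaLight():
    # Uniform sampling over the rectangle: the pdf of a point is 1/area.
    u, v = rng.random(), rng.random()
    pA = LIGHT_CORNER + u * LIGHT_EDGE_U + v * LIGHT_EDGE_V
    area = np.linalg.norm(np.cross(LIGHT_EDGE_U, LIGHT_EDGE_V))
    return pA, LIGHT_RADIANCE, 1.0 / area

pA, M, pdf_A = sampleAreaLight()
print(pdf_A)  # 1.0 for this 1 m x 1 m light
```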
<div>
The cool thing about this path tracer implementation is that it features three different ways to integrate irradiance: solid angles, projected solid angles, and area light surfaces. Calculus is useful!</div>
</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<h3>
Ignoring Photometry</h3>
<div>
<br /></div>
<div>
<b>Photometry</b> is the study of how we convert radiometric quantities (the outputs of the path tracer) to the color quantities perceived by the human visual system. For this tutorial we will do a crude approximation of the radiometric-to-photometric conversion by simply clipping each R, G, B radiance value to a maximum of 1 and displaying the result directly in matplotlib.</div>
<div>
<br /></div>
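A minimal sketch of this clipping step (the 1×1 "image" below is a made-up example):

```python
import numpy as np

# Crude radiometric-to-photometric step: clip each RGB radiance channel
# to [0, 1] so matplotlib can display the array directly.
radiance_image = np.array([[[0.3, 1.7, 0.9]]])  # hypothetical 1x1 RGB image
display_image = np.clip(radiance_image, 0.0, 1.0)
print(display_image.min(), display_image.max())  # 0.3 1.0
```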
<div>
And voila! We get a beautifully path-traced image of a Cornell Box. Notice how colors from the walls "bleed" onto adjacent walls, and the shadows cast by the boxes are "soft".</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz7DiAZOQY5qO2F9sw8F8QxknCeFffVXjUbi659DzRuUO71CPLrdHhpYpIjEWSMipTqVZ975spapeYRH8bTwp57ep9z_ZzIwgsCc_yzaoKq7dPV_-Uo6ezvFG2TYY_94zNDUDOC00RjpQ/s1600/cornell_box.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="225" data-original-width="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz7DiAZOQY5qO2F9sw8F8QxknCeFffVXjUbi659DzRuUO71CPLrdHhpYpIjEWSMipTqVZ975spapeYRH8bTwp57ep9z_ZzIwgsCc_yzaoKq7dPV_-Uo6ezvFG2TYY_94zNDUDOC00RjpQ/s1600/cornell_box.png" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<h3>
Performance Benchmarks: P100 vs. TPUv2</h3>
<div>
<br /></div>
<div>
Copying data between accelerators (TPU, GPU) and host chips (CPU) is very slow, so we'll try to compile the path tracing code into as few XLA calls from Python as possible. We can do this by applying the <span style="font-family: "courier new" , "courier" , monospace;">jax.jit </span>operator to the entire trace() function, so the rendering happens completely on the accelerator. Because <span style="font-family: "courier new" , "courier" , monospace;">trace</span> is a recursive function, we need to tell the XLA compiler that we are actually compiling it with a statically fixed depth of 3, so that XLA can unroll the loop and make it non-recursive. The <span style="font-family: "courier new" , "courier" , monospace;">vmap </span>call then transforms the function into a vectorized version. </div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">trace = jit(trace, static_argnums=(3,)) # optional</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">render_fn = lambda rng_key, ro, rd : trace(rng_key, ro, rd, 0)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">vec_render_fn = vmap(render_fn)</span></div>
<div>
<br /></div>
<div>
According to <span style="font-family: "courier new" , "courier" , monospace;">jax.local_device_count()</span>, a Google Cloud TPU has 8 cores. The code above only performs SIMD vectorization across 1 device, so we can also parallelize across multiple TPU cores using JAX's <span style="font-family: "courier new" , "courier" , monospace;">pmap</span> operator to get an additional speed boost.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"># vec_render_fn = vmap(render_fn)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">vec_render_fn = jax.soft_pmap(render_fn)</span><br />
<br />
How fast does this path tracer run? I benchmarked the performance of (1) a manually-vectorized NumPy implementation, (2) a vmap-vectorized single-pixel implementation, and (3) a manually-vectorized JAX implementation (almost identical in syntax to NumPy). Jitting the recursive trace function was very slow to compile (occasionally it even crashed my notebook kernel), so I also implemented a version where the recursion happens in Python but the loop body of <span style="font-family: "courier new" , "courier" , monospace;">trace</span> (direct lighting, emission, sampling rays) is executed on the accelerator. </div>
<div>
<br /></div>
<div>
The plot below shows that JAX code is much slower to run on the first sample because the just-in-time compilation has to compile and fuse all the necessary XLA operations. I wouldn't read too much into this plot (especially when comparing GPU vs. TPU), because I encountered a huge amount of variance in compile times while running these experiments. NumPy doesn't have any JIT compilation overhead, so it runs much faster for a single sample, even on the CPU.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfoCsfhyphenhyphenKTvQnNfAK0WRAew0SePZfd6tNVp6ka69QOdWcfehrwJ_BreDX4yk-ObNBYd22coe_2lMI6TG2wL9185AuiIgZysRzNrknA4u7HVY_kP7s2DNIwHuCMtAxkiqQ5omMK3cbSM0g/s1600/1_sample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="371" data-original-width="600" height="394" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfoCsfhyphenhyphenKTvQnNfAK0WRAew0SePZfd6tNVp6ka69QOdWcfehrwJ_BreDX4yk-ObNBYd22coe_2lMI6TG2wL9185AuiIgZysRzNrknA4u7HVY_kP7s2DNIwHuCMtAxkiqQ5omMK3cbSM0g/s640/1_sample.png" width="640" /></a></div>
<br /></div>
<div>
<br /></div>
<div>
What about a multi-sample render? After the XLA kernels have been compiled, subsequent calls to the <span style="font-family: "courier new" , "courier" , monospace;">trace</span> function are very fast.</div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4LoDIDQn3dudDxBPzjMIFt91covuT4_A69Em4r0-NBpwm1CdajCpQj76B94XFgGhO6HwZ7n9MU0UQMIsE7PC6JL3Hk4LgRxHQrCNDBKJlKE0t9KDilQmdvJVt_hw-_ohFMPSiaum-iQA/s1600/100_samples.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="371" data-original-width="600" height="394" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4LoDIDQn3dudDxBPzjMIFt91covuT4_A69Em4r0-NBpwm1CdajCpQj76B94XFgGhO6HwZ7n9MU0UQMIsE7PC6JL3Hk4LgRxHQrCNDBKJlKE0t9KDilQmdvJVt_hw-_ohFMPSiaum-iQA/s640/100_samples.png" width="640" /></a></div>
<br />
<br /></div>
<div>
We see that there's a trade-off between compilation time and runtime: the more we compile, the faster things run when performing many samples. I haven't tuned the code to favor any accelerator in particular, and this is the first time I've measured TPU and GPU performance under a reasonable path tracing workload. Path tracing is an embarrassingly parallel workload (on the pixel level and image sample level), so it should be quite possible to get a linear speedup from using more TPU cores. My code currently does not do that because each <span style="font-family: "courier new" , "courier" , monospace;">pmap</span>'ed worker is blocked on rendering an entire image sample. If you have suggestions on how to accelerate the code further, I'd love to hear from you.</div>
<div>
<br /></div>
<h3>
Summary</h3>
<div>
<br /></div>
<div>
In this blog post we derived the principles of physically based rendering from scratch and implemented a differentiable path tracer in pure JAX. Three kinds of radiometric integrals (over solid angle, projected solid angle, and area) come up in a basic implementation of a path tracer, and we used all three: our final path tracer computes direct lighting from area lights separately from indirect lighting bouncing off non-light surfaces.</div>
<div>
<br /></div>
<div>
JAX provides us with a lot of useful features to implement this:</div>
<div>
<ul>
<li>You can write a one-pixel path tracer and<span style="font-family: "courier new" , "courier" , monospace;"> vmap</span> it into a vectorized version without sacrificing performance. You can parallelize trivially across devices using <span style="font-family: "courier new" , "courier" , monospace;">pmap</span>.</li>
<li>Code runs on GPU and TPU without modifications.</li>
<li>Automatic differentiation provides analytical surface normals of signed distance fields for free.</li>
<li>Lightweight enough to run in a Jupyter/Colaboratory notebook, making it ideal for trying out graphics research ideas without getting bogged down by software engineering abstractions.</li>
</ul>
</div>
<div>
There are still some sharp bits with JAX because graphics and rendering workloads are not its first-class customers. Still, I think there is a lot of promise and future work to be done with combining the programmatic expressivity of modern deep learning frameworks with the field of graphics.</div>
<div>
<br />
We didn't explore the differentiability of this path tracer, but rest assured that the combination of ray-marching and Monte Carlo path integration makes everything tractable. Stay tuned for the next part of the tutorial, when we mix differentiation of this path tracer with neural networks and machine learning.<br />
<br /></div>
<h3>
Acknowledgements</h3>
<div>
<br /></div>
<div>
Thanks to <a href="http://lukemetz.com/">Luke Metz</a>, <a href="https://jonathantompson.github.io/">Jonathan Tompson</a>, <a href="https://pharr.org/matt/">Matt Pharr</a> for interesting discussion a few years ago when I wrote the first version of this code in TensorFlow. Many thanks to Peter Hawkins, <a href="https://twitter.com/jekbradbury">James Bradbury</a>, and <a href="http://stephanhoyer.com/">Stephan Hoyer</a> for teaching me more about JAX and XLA. Thanks to <a href="https://www.yiningkarlli.com/">Yining Karl Li</a> for entertaining my dumb rendering questions and Vincent Vanhoucke for catching typos.</div>
</div>
<div>
<br /></div>
<div>
<h3>
Fun Facts</h3>
<div>
<ul>
<li>Jim Kajiya's first path tracer took 7 hours to render a 256x256 image on a 280,000 USD IBM computer. By comparison, this renderer takes about 10 seconds to render an image of similar size, and you can run it for free with Google's free hosted colab notebooks that come with JAX pre-installed.</li>
<li>I didn't discuss photometry much in this tutorial, but it turns out that the SI unit of photometric density, <a href="https://www.bipm.org/metrology/photometry-radiometry/units.html">the candela</a>, is the only SI base unit related to a biological process (human vision system).</li>
<li>Check out my <a href="https://blog.evjang.com/2018/01/nf1.html">blog post on normalizing flows</a> for more info on how "conservation of probability mass" is employed in deep learning research!</li>
<li><a href="https://github.com/mattloper/opendr/wiki">OpenDR</a> was one of the first general-purpose differentiable renderers, and was technically innovative enough to merit publishing in ECCV 2014. It's remarkable to see how easy writing a differentiable renderer has become with modern deep learning frameworks like JAX, Pytorch, and TensorFlow.</li>
</ul>
</div>
</div>
<div>
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-21589657371426444932019-11-06T07:18:00.004-08:002019-11-07T04:36:50.231-08:00Robinhood, Leverage, and Lemonade<i>DISCLAIMER: NO INVESTMENT OR LEGAL ADVICE<br />The Content is for informational purposes only, you should not construe any such information or other material as legal, tax, investment, financial, or other advice. Investing involves risk, please consult a financial professional before making an investment.</i><br />
<br />
<div>
<a href="https://robinhood.com/">Robinhood</a> is a zero-commission brokerage that was founded in 2013. It has a beautiful mobile user interface that game-ifies the gambling of your life savi—, er, makes it seamless for millennials to buy and sell stocks.<br />
<br />
I <a href="https://qr.ae/TWPulj">wrote on Quora</a> in Dec 2014 on why lowering the barrier to entry to this extent can cause retail investors to make trades without knowing what they are doing. That post turned out to be rather prescient, for reasons I’ll explain below. <br />
<br />
One of the ways Robinhood makes money is via margin lending: they loan you some extra money to invest in the stock market with, and later you pay back the loan with some interest (currently about 5%).<br />
<br />
If you are in the business of lending money, not only do you have to safeguard your brokerage system against technological vulnerabilities (e.g. C++ memory-safety bugs that expose users’ trades), but you also need to defend against financial vulnerabilities, which are portfolios that expose the lender or its customers to an irresponsible amount of investment risk.<br />
<br />
In the last few months it has come to light [<a href="https://www.reddit.com/r/explainlikeimfive/comments/ah578l/eli5_box_spreads_and_u1ronyman_50k_loss_on/">1</a>, <a href="https://www.reddit.com/r/wallstreetbets/comments/ckycr2/600k_yolo_in_fds_expiring_tmrw_if_i_die_remember/">2</a>, <a href="https://www.reddit.com/r/wallstreetbets/comments/dqg6xx/infinite_leverage_explained/">3</a>, <a href="https://www.reddit.com/r/wallstreetbets/comments/dsb0mz/robinhood_has_inbred_and_made_the_ultimate_autist">4</a>, <a href="https://www.reddit.com/r/wallstreetbets/comments/drt5tr/guh_of_fame_2019/">5</a>] that there are some serious financial vulnerabilities in Robinhood’s margin lending platform, whereby it is possible for users to borrow much, much more money from Robinhood than they are supposed to. <br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><span style="border: none; display: inline-block; height: 454px; overflow: hidden; width: 227px;"><br class="Apple-interchange-newline" /><img height="454" src="https://lh4.googleusercontent.com/BRfT0XuBcumCr-uXLpHyN2yIPfSsWCb72qz2-uMyNqBkHIc5Nv88pSCyvQFpoGeZH8gsqNOPKMbv9po84YueyJ5C0wYTWdGwBukrop6qtjf5eAmTSt6W3OYDJx1uFeJ3-g1RBWFZ" style="margin-left: 0px; margin-top: 0px;" width="227" /></span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="color: #1155cc; font-family: "arial"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><a href="https://www.reddit.com/r/wallstreetbets/comments/agovgl/only_invest_what_you_can_afford_to_lose_they_said/" style="text-decoration-line: none;">Reddit Discussion</a></span></div>
<br />
These users subsequently gamble huge amounts of borrowed money away in a coin toss, leaving Robinhood in a very bad spot, perhaps even at odds with <a href="https://www.investopedia.com/terms/r/regulationt.asp">Regulation T</a> laws (I am not a lawyer, just speculating here).<br />
<br />
“Leverage” is one of the most important concepts to understand in finance, and when used judiciously, is a net positive for everyone involved. It is important for everyone to understand how credit works, and how much leverage is too much. Borrowing more money than you can afford to pay back can take many forms, whether it is taking on college debt, credit card debt, or raising VC money. <br />
<br />
Here’s a tutorial on “financial leverage” in the form of a story about lemonade:<br />
<br />
<br />
<h3>
Lemonade Leverage</h3>
<br />
It’s a hot summer, and you decide to start a lemonade stand to make some money. You have 100<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>, with which you can buy enough ingredients to make 120<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> of lemonade for the summer. Your “return on investment”, or ROI, for the summer is 20%, since you ended up with 20% more money than you started with.<br />
<br />
You also figure that if you had another 200<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>, enough people want lemonade that you could sell three times as much lemonade and make 360<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>. But you don’t have 200<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> to spare! What do you do?<br />
<br />
You could use the 120<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> to build a slightly bigger lemonade operation next year. Assuming you could get a 20% ROI again next summer, you end up with 144<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>. But it will be many years before you even have 300<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>! By this time next year, lemonade might be out of fashion and kids might be juuling at home watching Netflix instead. You would much prefer to scale up your lemonade operation now, while you are confident that you can sell lemonade at a "profit margin" of 20%.<br />
<br />
Fortunately, your friend “Britney Banker” is very wealthy and can lend you 200€. Britney Banker doesn’t have your entrepreneurial spirit, so she lacks the ability to get a 20% ROI on her own money. She offers to give you 200€ today, in exchange for you giving her 210€ at the end of the year -- an interest rate of 5%. Your “<b>capital leverage ratio</b>” is 100 / 200 = 1:2, because for every euro you own, Britney is willing to lend you 2€.<br />
<br />
If things turn out well, you sell 360<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> worth of lemonade, pay Britney back 210<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>, and pocket the remaining 150<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>. Starting with 100<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>, you were able to use borrowed money to “magnify” your return to 50%.<br />
<br />
However, if you make your 360€ worth of lemonade and fail to sell any of it before it spoils and becomes worthless, you would be in a very sticky situation! You would have worthless lemonade and a 210€ debt to Britney. This is far worse than if you had lost your own 100€, because at least you wouldn’t owe anyone anything afterwards. So even though 1:2 leverage may amplify your gains from 20% → 50%, it may also amplify your potential losses from -100% → -310%!<br />
<br />
The only reason why Britney is willing to lend you the money in the first place is that Britney thinks this outcome (you losing all of the borrowed money on top of your own assets) is unlikely. If Britney thought that you were less reliable, she might offer you a smaller leverage ratio (e.g. 1 : 1.5). <br />
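The asymmetric payoff of leverage is easy to verify numerically. Here is a minimal Python sketch of the arithmetic in the story above (the function and variable names are my own, not standard finance terminology):

```python
# Leverage arithmetic from the lemonade story. All amounts in euros.
OWN = 100.0        # your own capital
BORROWED = 200.0   # Britney's loan (1:2 leverage)
INTEREST = 0.05    # 5% interest on the loan

def leveraged_roi(gross_return):
    """ROI on your own capital after repaying the loan with interest.

    gross_return is the fraction of invested money you recover:
    1.2 means you sold all the lemonade at a 20% margin,
    0.0 means it all spoiled.
    """
    proceeds = (OWN + BORROWED) * gross_return
    repayment = BORROWED * (1 + INTEREST)
    return (proceeds - repayment - OWN) / OWN

print(round(leveraged_roi(1.2), 4))  # 0.5  -> +50% instead of +20%
print(round(leveraged_roi(0.0), 4))  # -3.1 -> -310% instead of -100%
```

Note how the loan repayment is owed no matter what: leverage magnifies both directions of the outcome, but the downside includes a fixed debt that your own unlevered losses never would.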
<br />
<h3>
Lemonade Coupons</h3>
<br />
Suppose you make a big batch of lemonade (with Britney’s money) and then go door to door selling lemonade, but instead of giving customers a delicious drink right away, you give them a “deep-in-the-lemonade covered call option”. You take their money up front, and give them a coupon that allows them to “buy” a lemonade for free (0<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>).<br />
<br />
The "call option" is referred to as "covered" because you actually have the lemonade to go with the coupon, it's just that you're holding onto the lemonade until the buyer actually redeems the coupon.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVmnKrqqRrBmkACZQjxdLChKnHXtuKIBi0gHQyXLzIPUqiK1sTQFBuIHLDT17j_KFNOI4MSAkbT2dqD3JZkwPUGKRWYUWirBktOj9ozQZsfdtbviXSlV08fD5CTF0TByxSkJvFUm2WdJU/s1600/coupon-schematic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="532" data-original-width="754" height="450" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVmnKrqqRrBmkACZQjxdLChKnHXtuKIBi0gHQyXLzIPUqiK1sTQFBuIHLDT17j_KFNOI4MSAkbT2dqD3JZkwPUGKRWYUWirBktOj9ozQZsfdtbviXSlV08fD5CTF0TByxSkJvFUm2WdJU/s640/coupon-schematic.png" width="640" /></a></div>
<br />
You then go back to Britney and say “I have 360<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> of lemonade that I’ve made but haven’t sold, and 360<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> in cash from selling lemonade options to customers, and as for debts there’s 200<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> I’ve borrowed from you. That’s 520<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> in net assets, so can I please borrow 1040<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span>?”.<br />
<br />
Britney says “sure, that’s a 1:2 leverage ratio”, and writes you a check for 1040€, again with 5% interest. But Britney has made a tragic mistake here! The 360€ of lemonade she counted as part of your assets is not really yours to spend, because it is already owed to your coupon-holding customers.<br />
<br />
With 1240€ in borrowed money, you are now leveraged over 1:12!<br />
<br />
You repeat this process again, turning 1040€ of cash into 1248€ of lemonade and selling an additional 1248€ of deep-in-the-lemonade options. You now have 1608€ of lemonade, 1608€ in cash, and 1240€ of debt, for net assets of 1608 + 1608 - 1240 = 1976€.<br />
<br />
You go back to Britney and ask to borrow another 3952€, with 5% interest. Again, because Britney forgets to account for the 1608€ in lemonade “debt” that you may have to deliver to coupon-holders, she thinks that the leverage is still 1:2. You repeat this process one more time, and your new total position is roughly 6k€ in lemonade, 6k€ in cash, and 5k€ in net debt.<br />
<br />
If you were to successfully deliver 6k<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> of lemonade, you would make 1k<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> in profit, starting from only 100<span style="background-color: white; color: #222222; font-family: "roboto" , "arial" , sans-serif; font-size: 16px;">€</span> of your own cash. A 1000% return sounds too good to be true, right? That’s because it is.<br />
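The borrowing spiral can be simulated in a few lines of Python. This is my own sketch under the story's assumptions: Britney always lends 2x apparent net assets, lemonade is made at a 20% margin, and coupon obligations are invisible to her. We start from the state after the first round of coupon sales (360€ cash, 360€ lemonade, 200€ owed to Britney, 360€ of lemonade owed to coupon holders):

```python
# State after the first summer of coupon sales, in euros.
cash, lemonade, debt, coupons_owed = 360.0, 360.0, 200.0, 360.0

for _ in range(2):  # two more borrowing rounds
    # Britney's mistake: she nets out the cash debt, but the lemonade
    # already promised to coupon holders is not subtracted.
    apparent_net = cash + lemonade - debt
    loan = 2 * apparent_net
    debt += loan
    # Turn the fresh loan into lemonade at a 20% margin, then sell
    # that much again as prepaid coupons (cash in, lemonade owed).
    made = loan * 1.2
    lemonade += made
    coupons_owed += made
    cash += made

print(round(cash), round(lemonade), round(debt))  # 6350 6350 5192
```

Britney believes your net worth is cash + lemonade - debt, roughly 7.5k€, but the honest figure after subtracting coupons_owed is only about 1.2k€, and even that assumes every drop of lemonade gets made and delivered.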
<br />
One hot summer day, all of the coupon holders decide to exercise their coupons at the same time. You realize that your lemonade stand can’t actually fulfill 6k€ in lemonade orders and you are in way over your head. Desperate, you attempt to pivot and come up with a Billy McFarland-esque scheme to buy lemonade from a local grocery store and dilute it with some water. But due to inexperience with food handling operations, you accidentally contaminate half the batch, and are left with only 3k€ of lemonade. You have 6k€ in cash but still owe 3k€ in lemonade and 5k€ in cash. Your 1k€ profit opportunity has now become a 2k€ DEBT (an ROI of -2100%), and we haven't even factored in the interest! Because the debtors (lemonade coupon holders and Britney Banker) must be paid regardless of whether you successfully make lemonade or not, your leverage has an asymmetric payoff - the downsides are twice as bad as the upside!<br />
<br />
I wish I could say that this story was fictional, but to the best of my understanding this is more or less what /u/ControlTheNarrative and others attempted to do on Robinhood. Substitute "lemonade" for "AMD stock", and "lemonade coupon" for "deep-in-the-money covered call option". Theoretically, Robinhood shouldn't allow you to buy options on margin, but /u/ControlTheNarrative was very clever to sell covered call options, which meant that he bought AMD stock on margin (valid) and then created cash plus in-the-money AMD call option obligations (sort of like creating matter and antimatter from nothing). Robinhood failed to detect the "antimatter", allowing /u/ControlTheNarrative to mask his "debt", thereby doubling his apparent net assets.<br />
<br />
Ok, where did /u/ControlTheNarrative go wrong? It might be possible to still turn a profit by investing the vast amount of leverage in a “safe asset”, right? This seems unlikely: Robinhood’s interest rate of 5% far exceeds the risk-free rate of 1.88% currently offered by a 1-year Treasury note. In other words, it only makes sense to use Robinhood's leverage when you have the ability to deliver annualized returns that exceed 5%. When you have limited assets and a risky investment opportunity, you should instead carefully choose leverage so that you do not end up owing 10x your net worth should you encounter a stroke of bad luck.<br />
<br />
Instead of trying to find an investment that minimizes risk while maintaining >5% return, /u/ControlTheNarrative proceeded to then take his enormous leverage and bet all of that on a coin toss: out-of-the-money (OTM) put options against Apple (remember that he is able to buy these options with leveraged cash because it has been "laundered" using covered call options).<br />
<br />
Unfortunately for him, Apple proceeded to beat performance expectations for earnings, and subsequently the OTM options became worthless!<br />
<br />
<a href="https://youtu.be/A-tNkuYV4_Q?t=40">Guh</a>!<br />
<br />
<h3>
Acknowledgements</h3>
<br />
Thanks to Ted Xiao and Daniel Ho for insightful discussion. We had a good laugh. I found the following links helpful in my research:<br />
<br />
<ul>
<li><a href="https://www.reddit.com/r/wallstreetbets/comments/dqg6xx/infinite_leverage_explained/">https://www.reddit.com/r/wallstreetbets/comments/dqg6xx/infinite_leverage_explained/</a></li>
<li><a href="https://www.bloomberg.com/opinion/articles/2019-11-05/playing-the-game-of-infinite-leverage">https://www.bloomberg.com/opinion/articles/2019-11-05/playing-the-game-of-infinite-leverage</a></li>
</ul>
<br />
<br />
<br />
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-49379954521863571262019-07-06T12:10:00.000-07:002019-12-18T23:33:58.365-08:00Normalizing Flows in 100 Lines of JAX<a href="https://github.com/google/jax">JAX</a> is a great linear algebra + automatic differentiation library for fast experimentation with and teaching machine learning. Here is a lightweight example, in just 75 lines of JAX, of how to implement <a href="https://arxiv.org/abs/1605.08803">Real-NVP</a>.<br />
<br />
This post is based off of a tutorial on normalizing flows I gave at the ICML workshop on <a href="https://slideslive.com/38917907/tutorial-on-normalizing-flows">Invertible Neural Nets and Normalizing Flows</a>. I've already written about <a href="https://blog.evjang.com/2018/01/nf1.html">how to implement your own flows</a> in TensorFlow using <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/Bijector">TensorFlow Probability's Bijector API</a>, so to make things interesting I wanted to show how to implement Real-NVP a different way.<br />
<br />
By the end of this tutorial you'll be able to reproduce this figure of a normalizing flow "bending" samples from a 2D Normal distribution to samples from the "Two Moons" dataset. Real-NVP forms the basis of a lot of flow-based architectures (as of 2019), so this is a good template to start learning from.<br />
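Before diving in, here is a hedged sketch of the core idea behind Real-NVP, the affine coupling layer: half of the dimensions pass through unchanged and parameterize a scale-and-shift of the other half, which makes the transform trivially invertible. This is my own illustrative code, not the implementation from this tutorial (linked below), and <code>shift_and_log_scale_fn</code> is a hypothetical stand-in for the small neural network you would actually learn:

```python
import jax.numpy as jnp

def nvp_forward(x, shift_and_log_scale_fn, flip=False):
    """One Real-NVP affine coupling layer on (N, 2) points."""
    x1, x2 = x[:, :1], x[:, 1:]
    if flip:
        x1, x2 = x2, x1
    shift, log_scale = shift_and_log_scale_fn(x1)
    y2 = x2 * jnp.exp(log_scale) + shift  # transform one half...
    y1 = x1                               # ...pass the other through
    if flip:
        y1, y2 = y2, y1
    return jnp.concatenate([y1, y2], axis=-1)

def nvp_inverse(y, shift_and_log_scale_fn, flip=False):
    """Exact inverse: undo the scale-and-shift using the untouched half."""
    y1, y2 = y[:, :1], y[:, 1:]
    if flip:
        y1, y2 = y2, y1
    shift, log_scale = shift_and_log_scale_fn(y1)
    x2 = (y2 - shift) * jnp.exp(-log_scale)
    x1 = y1
    if flip:
        x1, x2 = x2, x1
    return jnp.concatenate([x1, x2], axis=-1)

# Sanity check with an arbitrary (non-learned) conditioner.
fn = lambda h: (jnp.sin(h), jnp.tanh(h))
x = jnp.array([[0.3, -1.2], [1.0, 2.0]])
assert jnp.allclose(nvp_inverse(nvp_forward(x, fn), fn), x, atol=1e-5)
```

Because one half passes through unchanged, the Jacobian is triangular and its log-determinant is just the sum of <code>log_scale</code> terms, which is what keeps maximum-likelihood training tractable.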
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3bVupQbJV28sPagUB_U5CF14He-eiWp4q_MfXjVOqjmGCRojb3z8TUSDixm1lGp8XUr2SOhpQEkKMtchaCAIw0wDKXm3ZcqG2V_a2Y4s3x4vvDaBG29q8ayrtghakh5asbtu6ZIpK4DM/s1600/forward.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="288" data-original-width="432" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3bVupQbJV28sPagUB_U5CF14He-eiWp4q_MfXjVOqjmGCRojb3z8TUSDixm1lGp8XUr2SOhpQEkKMtchaCAIw0wDKXm3ZcqG2V_a2Y4s3x4vvDaBG29q8ayrtghakh5asbtu6ZIpK4DM/s320/forward.gif" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
If you are not already familiar with flows at a high level, please check out the 2-part tutorial: <a href="https://blog.evjang.com/2018/01/nf1.html">[part 1]</a> <a href="https://blog.evjang.com/2018/01/nf2.html">[part 2]</a>, as this tutorial just focuses on how to implement flows in JAX. You can find all the code along with the slides for my talk <a href="https://github.com/ericjang/nf-jax">here</a>.<br />
<br />
<h3>
Install Dependencies</h3>
<div>
There are just a few dependencies required to reproduce this tutorial. We'll be running everything on the CPU, though you can also build the GPU-enabled versions of JAX if you have the requisite hardware.</div>
<div>
<br /></div>
<div>
<div class="cell border-box-sizing code_cell rendered" style="-webkit-box-align: stretch; -webkit-box-orient: vertical; align-items: stretch; background-color: white; border: 1px solid transparent; box-sizing: border-box; display: flex; flex-direction: column; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 14px; margin: 0px; outline: none; padding: 5px 5px 5px 0px; width: 700px;">
<div class="input" style="-webkit-box-align: stretch; -webkit-box-orient: horizontal; align-items: stretch; break-inside: avoid; display: flex; flex-direction: row; margin: 0px; padding: 0px;">
<div class="inner_cell" style="-webkit-box-align: stretch; -webkit-box-flex: 1; -webkit-box-orient: vertical; align-items: stretch; display: flex; flex-direction: column; flex: 1 1 0%; margin: 0px; padding: 0px;">
<div class="input_area" style="background: rgb(247, 247, 247); border-radius: 4px; border: 1px solid rgb(207, 207, 207); line-height: 1.21429em; margin: 0px; padding: 0px;">
<div class=" highlight hl-ipython3" style="background: transparent; border: none; margin: 0.4em; padding: 0px;">
<pre style="background-color: transparent; border-radius: 4px; border: none; color: #333333; font-size: inherit; line-height: inherit; overflow-wrap: break-word; padding: 0px; white-space: pre-wrap; word-break: break-all;"><span class="c1" style="color: #408080; font-style: italic; margin: 0px; padding: 0px;">pip install --upgrade jax jaxlib scikit-learn matplotlib</span></pre>
</div>
</div>
</div>
</div>
</div>
</div>
<div>
<br />
<h3>
Toy Dataset</h3>
<div>
<br /></div>
<div>
Scikit-Learn comes with some toy datasets that are useful for small scale density models.</div>
<div>
<br /></div>
<div>
<span id="docs-internal-guid-51afbc5c-7fff-7d95-4c9e-7f961d96d755"></span><br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> sklearn </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> cluster</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> datasets</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> mixture</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> sklearn</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">preprocessing </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">StandardScaler</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> matplotlib</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">pyplot </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">as</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> plt</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">n_samples </span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #c53929; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">2000</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">noisy_moons </span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> datasets</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">make_moons</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">n_samples</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">n_samples</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> noise</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">=.</span><span style="background-color: transparent; color: #c53929; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">05</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">X</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> y </span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> noisy_moons</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">X </span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">StandardScaler</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">().</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">fit_transform</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-size: 10pt; font-variant-east-asian: normal; font-variant-numeric: normal; vertical-align: baseline; white-space: pre-wrap;">X</span><span style="background-color: transparent; color: #616161; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
</td></tr>
</tbody></table>
</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM5oVUmCZ9d49EPL5mdxBKBXd7UkN4ZeWUeby5LkoEAt4RkLolk57A6EaC3tMgV5_fLmfP1SzqXno1g82CXFobf7nfKcIHuBbMADaA5T75rNKEpYfo4ghJdKxr-xXc89u05O4uyyRJLYM/s1600/download+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="388" height="207" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhM5oVUmCZ9d49EPL5mdxBKBXd7UkN4ZeWUeby5LkoEAt4RkLolk57A6EaC3tMgV5_fLmfP1SzqXno1g82CXFobf7nfKcIHuBbMADaA5T75rNKEpYfo4ghJdKxr-xXc89u05O4uyyRJLYM/s320/download+%25281%2529.png" width="320" /></a></div>
<h3>
Affine Coupling Layer in JAX</h3>
<br />
TensorFlow Probability defines an object-oriented API for building flows: a "TransformedDistribution" object wraps a base "Distribution" object together with a "Bijector" object that implements the invertible transformation. In pseudocode, it goes something like this:<br />
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">class</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">TransformedDistribution</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #3367d6; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Distribution</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> sample</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> x </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">base_distribution</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">sample</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: 
baseline; white-space: pre-wrap;">()</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">bijector</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">forward</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">x</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> log_prob</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> y</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> x </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">bijector</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">inverse</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">y</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> ildj </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">bijector</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">inverse_log_det_jacobian</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; 
vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">y</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">self</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">base_distribution</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">log_prob</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: 
baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">x</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">+</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> ildj</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
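To make the pseudocode above concrete, here is a minimal, self-contained sketch of that object-oriented API. The Normal base distribution and Exp bijector below are illustrative stand-ins written from scratch, not the actual TFP classes; composing them recovers a log-normal distribution.

```python
# Minimal sketch of the OO flow API described above (illustrative, not TFP).
import math
import random

class Normal:
    """Standard normal base distribution."""
    def sample(self):
        return random.gauss(0.0, 1.0)
    def log_prob(self, x):
        return -x * x / 2 - math.log(math.sqrt(2 * math.pi))

class Exp:
    """Bijector y = exp(x), with inverse x = log(y)."""
    def forward(self, x):
        return math.exp(x)
    def inverse(self, y):
        return math.log(y)
    def inverse_log_det_jacobian(self, y):
        # d/dy log(y) = 1/y, so log|det J| = -log(y).
        return -math.log(y)

class TransformedDistribution:
    def __init__(self, base_distribution, bijector):
        self.base_distribution = base_distribution
        self.bijector = bijector
    def sample(self):
        x = self.base_distribution.sample()
        return self.bijector.forward(x)
    def log_prob(self, y):
        x = self.bijector.inverse(y)
        ildj = self.bijector.inverse_log_det_jacobian(y)
        return self.base_distribution.log_prob(x) + ildj

# A log-normal distribution: push standard normal samples through exp().
lognormal = TransformedDistribution(Normal(), Exp())
```

The change-of-variables formula shows up entirely in log_prob: invert the bijector, add the inverse log-determinant of the Jacobian, and evaluate the base density at the inverted point.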
JAX, however, embraces a functional programming philosophy in which functions are stateless and classes are eschewed. That's okay: we can still build a similar API in a functional way. Because JAX's grad() operator differentiates with respect to a function's first argument by default, it's convenient to put the parameters we want gradients for as the first argument of every function. Here are the sample and log_prob implementations of the base distribution.<br />
<br />
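As a brief aside, here is a tiny standalone sketch of the params-first convention in action. The nll function and its parameters are hypothetical, not part of the flow code; the point is only that grad() hands back gradients with respect to whatever comes first.

```python
# Illustrative only: a hypothetical loss whose parameters come first, so
# jax.grad (which differentiates w.r.t. argument 0 by default) returns
# d(loss)/d(params) directly.
import jax.numpy as np  # the post aliases jax.numpy as np
from jax import grad

def nll(params, x):
    # Negative log-likelihood of x under Normal(shift, exp(log_scale)),
    # up to an additive constant.
    shift, log_scale = params
    z = (x - shift) / np.exp(log_scale)
    return 0.5 * z ** 2 + log_scale

# grad() differentiates nll w.r.t. its first argument, the params tuple.
dshift, dlog_scale = grad(nll)((0.0, 0.0), 1.5)
```

grad() returns the pair of partial derivatives with respect to shift and log_scale, because params is the first argument; the data x rides along untouched.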
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> sample_n01</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">N</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> D </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #c53929; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">2</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> random</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">normal</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">rng</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">N</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> D</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">))</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> log_prob_n01</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">x</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span></span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "trebuchet ms" , sans-serif;"><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> np</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">sum</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(-</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">np</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre-wrap;">square</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">x</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)/</span><span style="background-color: transparent; color: #c53929; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">-</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> np</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">log</span><span 
style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">np</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">sqrt</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #c53929; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">*</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">np</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; 
color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">pi</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)),</span><span style="background-color: transparent; color: black; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">axis</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">=-</span><span style="background-color: transparent; color: #c53929; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="background-color: transparent; color: #616161; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
</td></tr>
</tbody></table>
</div>
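The return statement above is just the log-density of a standard Gaussian, summed over the feature axis. As a quick sanity check (using plain NumPy in place of jax.numpy, and a made-up wrapper name "log_prob_z0" for illustration), the value at the origin of a 2D standard normal should be -log(2&pi;):

```python
import numpy as np

def log_prob_z0(z0):
    # log N(z0; 0, I): per-dimension Gaussian log-density, summed over the last axis
    return np.sum(-np.square(z0)/2 - np.log(np.sqrt(2*np.pi)), axis=-1)

z0 = np.zeros((1, 2))
print(log_prob_z0(z0))  # -log(2*pi), about -1.8379
```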
<span style="font-family: "trebuchet ms" , sans-serif;"><br /></span>
Below are the forward and inverse functions of Real-NVP, which operate on minibatches (we could also re-implement them to operate on single vectors and use JAX's vmap operator to auto-batch them). Because we are dealing with 2D data, the masking scheme for Real-NVP is very simple: we just swap which variable is masked in every other flow via the "flip" parameter.<br />
<br />
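As an aside, here is what that vmap route might look like, sketched with a made-up single-example coupling function ("coupling_single", with a constant shift and log-scale standing in for the conditioner network):

```python
import jax.numpy as jnp
from jax import vmap

# hypothetical single-example affine coupling: x is one flat 2-vector, not a minibatch
def coupling_single(x, shift, log_scale):
    d = x.shape[-1]//2
    x1, x2 = x[:d], x[d:]           # pass-through half and transformed half
    y2 = x2*jnp.exp(log_scale) + shift
    return jnp.concatenate([x1, y2], axis=-1)

# vmap maps over the leading (batch) axis of x; shift/log_scale are broadcast as-is
coupling_batched = vmap(coupling_single, in_axes=(0, None, None))
ys = coupling_batched(jnp.ones((4, 2)), 0.5, 0.1)  # shape (4, 2)
```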
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">def</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> nvp_forward</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> shift_and_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; 
vertical-align: baseline; white-space: pre;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">False</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> d </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">shape</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[-</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre;">]</span><span style="background-color: transparent; color: #455a64; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">//2</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x2 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[:,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre;">:</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">d</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">],</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[:,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> d</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">:]</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">if</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">:</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x2</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> shift</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> log_scale </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> shift_and_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; 
vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y2 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">*</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">exp</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">log_scale</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">+</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> shift</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">if</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">:</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y2 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> x1</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">concatenate</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">([</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">],</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> axis</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=-</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">return</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> y</span></div>
</td></tr>
</tbody></table>
</div>
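To see the coupling layer in action, here is a hedged usage sketch. The conditioner "toy_fn" below is made up for illustration (it ignores its parameters and returns constant shift/log-scale; in the actual model, shift_and_log_scale_fn is a small neural network over x1), and plain NumPy stands in for jax.numpy:

```python
import numpy as np  # stand-in for jax.numpy; the calls below are API-compatible

# nvp_forward as defined above
def nvp_forward(net_params, shift_and_log_scale_fn, x, flip=False):
    d = x.shape[-1]//2
    x1, x2 = x[:, :d], x[:, d:]
    if flip:
        x2, x1 = x1, x2
    shift, log_scale = shift_and_log_scale_fn(net_params, x1)
    y2 = x2*np.exp(log_scale) + shift
    if flip:
        x1, y2 = y2, x1
    return np.concatenate([x1, y2], axis=-1)

# made-up conditioner: constant shift and log-scale, ignoring net_params entirely
toy_fn = lambda params, x1: (np.full_like(x1, 0.5), np.full_like(x1, 0.1))

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = nvp_forward(None, toy_fn, x)
# the first column passes through untouched; the second is scaled by exp(0.1) and shifted by 0.5
```

Because x1 is copied through unchanged, the transformation is trivially invertible: given y, recompute shift and log_scale from y1 and solve for x2.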
<b style="font-weight: normal;"><br /></b>
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><pre style="font-family: Consolas, monospace; font-size: 10pt; margin: 0; white-space: pre-wrap;">
def nvp_inverse(net_params, shift_and_log_scale_fn, y, flip=False):
  d = y.shape[-1]//2
  y1, y2 = y[:, :d], y[:, d:]
  if flip:
    y1, y2 = y2, y1
  shift, log_scale = shift_and_log_scale_fn(net_params, y1)
  x2 = (y2 - shift)*np.exp(-log_scale)
  if flip:
    y1, x2 = x2, y1
  x = np.concatenate([y1, x2], axis=-1)
  return x, log_scale
</pre></td></tr>
</tbody></table>
</div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
The "forward" NVP transformation takes a callable shift_and_log_scale_fn (an arbitrary neural net whose inputs are the masked variables), applies it to recover the shift and log-scale parameters, transforms the un-masked inputs, and then stitches the masked and transformed halves back together in the right order. The inverse does the opposite: it recovers the same shift and log-scale from the untouched half, then subtracts the shift and divides by exp(log_scale) to undo the affine transform. </div>
</div>
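To make the forward/inverse relationship concrete, here is a self-contained round-trip check. The nvp_forward below restates the forward coupling just described, and toy_shift_and_log_scale_fn is a stand-in for a real neural net (its name and form are illustrative, not from the post; any function of the masked half works):

```python
import numpy as np

# Hypothetical stand-in for a neural net: any function of the masked half.
def toy_shift_and_log_scale_fn(net_params, x1):
    h = np.tanh(x1 @ net_params)
    return h, 0.5 * h  # (shift, log_scale)

def nvp_forward(net_params, shift_and_log_scale_fn, x, flip=False):
    # Forward affine coupling, mirroring the structure described above.
    d = x.shape[-1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    if flip:
        x2, x1 = x1, x2
    shift, log_scale = shift_and_log_scale_fn(net_params, x1)
    y2 = x2 * np.exp(log_scale) + shift
    if flip:
        x1, y2 = y2, x1
    return np.concatenate([x1, y2], axis=-1), log_scale

def nvp_inverse(net_params, shift_and_log_scale_fn, y, flip=False):
    # Exact inverse: undo the affine map, then undo the flip.
    d = y.shape[-1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    if flip:
        y1, y2 = y2, y1
    shift, log_scale = shift_and_log_scale_fn(net_params, y1)
    x2 = (y2 - shift) * np.exp(-log_scale)
    if flip:
        y1, x2 = x2, y1
    return np.concatenate([y1, x2], axis=-1), log_scale

rng = np.random.default_rng(0)
net_params = rng.normal(size=(1, 1))
x = rng.normal(size=(4, 2))
y, _ = nvp_forward(net_params, toy_shift_and_log_scale_fn, x, flip=True)
x_rec, _ = nvp_inverse(net_params, toy_shift_and_log_scale_fn, y, flip=True)
assert np.allclose(x, x_rec)  # round trip recovers the input exactly
```

Because the shift and log-scale are functions of the half that the coupling leaves untouched, the inverse can recompute them exactly, which is what makes the round trip exact.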
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
Here are the corresponding sampling (forward) and log-prob (inverse) implementations for a single RealNVP coupling layer. The ILDJ (inverse log-determinant of the Jacobian) term is computed directly, since for an affine coupling layer it is just the negative sum of the log_scale terms.</div>
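To see why the ILDJ reduces to -sum(log_scale): the coupling leaves y1 = x1 unchanged and maps y2 = x2 * exp(log_scale) + shift, so the Jacobian is triangular with diagonal entries exp(log_scale), and the log-determinant of the inverse map is the negated sum of log_scale. A small finite-difference sanity check (toy values, purely for illustration):

```python
import numpy as np

# Affine coupling on the transformed half: y2 = x2 * exp(log_scale) + shift.
log_scale = np.array([0.3, -0.7])
shift = np.array([1.0, 2.0])
f = lambda x2: x2 * np.exp(log_scale) + shift

# Finite-difference Jacobian of f at an arbitrary point. f acts elementwise,
# so the Jacobian is diagonal with entries exp(log_scale).
x2 = np.array([0.5, -1.2])
eps = 1e-6
jac = (f(x2 + eps * np.eye(2)) - f(x2)) / eps  # row i perturbs coordinate i
logdet_forward = np.log(np.abs(np.diag(jac))).sum()

# Forward log-det matches sum(log_scale); the ILDJ is its negation.
assert np.allclose(logdet_forward, log_scale.sum(), atol=1e-4)
ildj = -logdet_forward
```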
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br />
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><pre style="font-family: Consolas, monospace; font-size: 10pt; margin: 0; white-space: pre-wrap;">
def sample_nvp(net_params, shift_log_scale_fn, base_sample_fn, N, flip=False):
  x = base_sample_fn(N)
  return nvp_forward(net_params, shift_log_scale_fn, x, flip=flip)
</pre>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_nvp</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> y</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: 
baseline; white-space: pre-wrap;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">False</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_scale </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> nvp_inverse</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> y</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; 
white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> ildj </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">-</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">sum</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">log_scale</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> axis</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">=-</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">+</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> ildj</span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-ade1e43a-7fff-002f-36a3-4619e5ea6d0b">
</span></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
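The `nvp_forward` and `nvp_inverse` helpers referenced above were defined earlier in the post. As a sanity check on the coupling-layer math, here is a hypothetical self-contained NumPy sketch (closing over parameters instead of passing an explicit `net_params` argument) that verifies the inverse recovers the input, and that the log-prob adds the inverse log-det-Jacobian to the base log-density:

```python
import numpy as np

def coupling_forward(x, shift_log_scale_fn, flip=False):
    # Split input in half; x1 passes through and conditions the
    # affine transform applied to x2.
    d = x.shape[-1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    if flip:
        x1, x2 = x2, x1
    shift, log_scale = shift_log_scale_fn(x1)
    y2 = x2 * np.exp(log_scale) + shift
    if flip:
        x1, y2 = y2, x1
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, shift_log_scale_fn, flip=False):
    # Exact inverse: recompute shift/log_scale from the untouched half.
    d = y.shape[-1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    if flip:
        y1, y2 = y2, y1
    shift, log_scale = shift_log_scale_fn(y1)
    x2 = (y2 - shift) * np.exp(-log_scale)
    if flip:
        y1, x2 = x2, y1
    return np.concatenate([y1, x2], axis=-1), log_scale

def log_prob_coupling(y, shift_log_scale_fn, base_log_prob_fn, flip=False):
    # Change of variables: log p(y) = log p(x) + inverse log-det-Jacobian.
    x, log_scale = coupling_inverse(y, shift_log_scale_fn, flip=flip)
    ildj = -np.sum(log_scale, axis=-1)
    return base_log_prob_fn(x) + ildj
```

Swapping `numpy` for `jax.numpy` makes these functions differentiable and jittable, matching the versions above.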
<div align="left" dir="ltr" style="margin-left: 0pt;">
What should we use for our shift_and_log_scale_fn? I've found that for 2D data with NVP, wide and shallow neural nets tend to train more stably. We'll use JAX's helper libraries to build a function that returns both the initialized parameters and a callable for an MLP with two hidden layers of 512 units and ReLU activations. </div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-fedcd067-7fff-1fa4-e3f4-0e04f2a4c15f"></span><br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 38.25pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> jax</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">experimental </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> stax </span><span style="background-color: transparent; color: #455a64; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"># neural network library</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> jax</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">experimental</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">stax </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Dense</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; 
white-space: pre-wrap;">Relu</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #455a64; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"># neural network layers</span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-fedcd067-7fff-1fa4-e3f4-0e04f2a4c15f">
</span></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-78dfd974-7fff-ed28-1087-6323dce3abd7"><br /></span>
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> init_nvp</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">():</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> D </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">2</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_init</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_apply </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> stax</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">serial</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Dense</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">512</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">),</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Relu</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Dense</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: 
baseline; white-space: pre-wrap;">512</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">),</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Relu</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">Dense</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">D</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">  in_shape </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(-</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> D</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">//2)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> out_shape</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_params </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_init</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">rng</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> in_shape</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_and_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> s </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_apply</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">split</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">s</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> 
axis</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> net_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_and_log_scale_fn</span></div>
</td></tr>
</tbody></table>
</div>
</div>
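The stax MLP above can also be mimicked with a plain NumPy stand-in, a hypothetical sketch (the function and parameter names are illustrative, not from the post) that makes the shape contract of shift_and_log_scale_fn explicit: it maps the conditioning half x1 of shape (N, D//2) to a (shift, log_scale) pair, each of shape (N, D//2).

```python
import numpy as np

def init_mlp_shift_log_scale(rng, d_in, hidden=512):
    """NumPy stand-in for the stax MLP: two hidden ReLU layers of
    `hidden` units, final output of width 2*d_in split into
    (shift, log_scale)."""
    sizes = [d_in, hidden, hidden, 2 * d_in]
    # He-style initialization for the ReLU layers.
    params = [(rng.randn(m, n) * np.sqrt(2.0 / m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]

    def shift_and_log_scale_fn(x1):
        h = x1
        for W, b in params[:-1]:
            h = np.maximum(h @ W + b, 0.0)  # ReLU
        W, b = params[-1]
        out = h @ W + b
        # Split the last axis in half: first half is shift,
        # second half is log_scale.
        return np.split(out, 2, axis=-1)

    return params, shift_and_log_scale_fn
```

For D = 2 this gives d_in = 1, so each coupling layer's conditioner consumes one coordinate and emits a scalar shift and scalar log-scale per example, exactly the shapes the JAX version above produces.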
<h3 style="margin-left: 0pt;">
Stacking Coupling Layers</h3>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
TensorFlow Probability's object-oriented API is convenient because it allows us to "stack" multiple TransformedDistributions on top of each other, yielding more expressive yet still tractable transformations. </div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-3c5df3f9-7fff-3f19-3a17-2e756a32bafe"></span><br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">dist1 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">TransformedDistribution</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">base_dist</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> bijector1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">dist2 </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #3367d6; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">TransformedDistribution</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">dist1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> bijector2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">dist2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">sample</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">()</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #455a64; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"># member variables reference dist1, which references base_dist</span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-3c5df3f9-7fff-3f19-3a17-2e756a32bafe">
</span></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
For "bipartite" flows like Real-NVP which leave some variables untouched, it is critical to be able to stack multiple flows so that all variables get a chance to be "transformed". </div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
Here's the functional way to do the same thing in JAX. A function init_nvp_chain returns, for each flow in the stack, the neural net parameters, a callable shift_and_log_scale_fn, and a flip flag that alternates which half of the variables gets transformed. We then pass this big bag of parameters to the sample_nvp_chain function. </div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
In log_prob_nvp_chain, an iteration loop repeatedly rebinds log_prob_fn, which is initially set to base_log_prob_fn. This mirrors how TransformedDistribution.log_prob is defined in terms of the log_prob function of the base distribution beneath it. Python's late binding of closure variables makes this easy to get wrong: if each new function closes over the variable log_prob_fn rather than its current value, it ends up calling itself, resulting in an infinite loop. The fix is a function factory (make_log_prob_fn) that returns a function with the correct base log_prob_fn bound to the log_prob_nvp argument. Thanks to <a href="https://twitter.com/Bieber">David Bieber</a> for pointing this fix out to me.</div>
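As a standalone illustration of that late-binding pitfall, stripped of the flow machinery (the names below are invented for this sketch):

```python
def broken_chain(fns, base):
    """Builds f_n(...f_1(base)...), but incorrectly: recurses forever when called."""
    log_prob = base
    for f in fns:
        # BUG: the lambda looks up the *name* log_prob at call time, not its
        # value now. After the loop, every layer calls the final log_prob,
        # so the function calls itself and raises RecursionError.
        # (f=f binds f eagerly so only the log_prob bug is on display.)
        log_prob = lambda x, f=f: f(log_prob, x)
    return log_prob

def fixed_chain(fns, base):
    """Same chain, but a factory function freezes the current log_prob per layer."""
    log_prob = base
    for f in fns:
        def make_fn(f, prev):
            # prev is bound at definition time, so each layer wraps the
            # layer beneath it, just like make_log_prob_fn in the post.
            return lambda x: f(prev, x)
        log_prob = make_fn(f, log_prob)
    return log_prob

# Each "layer" just adds 1 to the result of the layer beneath it.
fns = [lambda prev, x: prev(x) + 1] * 3
fixed = fixed_chain(fns, lambda x: x)
fixed(0)  # 0 through the base, +1 per layer -> 3
```

Calling `broken_chain(fns, lambda x: x)(0)` raises RecursionError, which is the "infinite loop" failure mode described above.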
<div align="left" dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-4575d7a2-7fff-19b6-181b-53e366f8d61b"><br /></span>
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> init_nvp_chain</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">n</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">False</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">[],</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">[]</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> i </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> range</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">n</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> f </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> init_nvp</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">()</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">append</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">),</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">append</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">((</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">f</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">not</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> sample_nvp_chain</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_sample_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> N</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_sample_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">N</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> zip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> nvp_forward</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> make_log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">lambda</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">:</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_nvp</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> shift_log_scale_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; 
white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> x</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">flip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_nvp_chain</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> y</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> base_log_prob_fn</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> zip</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> configs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: 
pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> make_log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> config</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_fn</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">y</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
</td></tr>
</tbody></table>
</div>
</div>
<h3 style="margin-left: 0pt;">
Training Real-NVP</h3>
<div>
<br />
Finally, we are ready to train this thing! </div>
<div>
<br /></div>
<div>
We initialize our Real-NVP with 4 affine coupling layers (so that each variable is transformed twice) and define the optimization objective to be the model's negative log-likelihood over minibatches (<a href="https://blog.evjang.com/2019/07/likelihood-model-tips.html">more precisely, cross entropy</a>). </div>
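As a quick aside on that parenthetical: averaging the negative log-likelihood over samples drawn from the data distribution is a Monte Carlo estimate of the cross entropy between the data distribution and the model, which is minimized when the model matches the data. A standalone NumPy sketch of this (a toy Gaussian model, not the Real-NVP code):

```python
import numpy as np

def gaussian_nll(x, mu, sigma=1.0):
    # -log N(x; mu, sigma^2), evaluated elementwise.
    return 0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * ((x - mu) / sigma) ** 2

rng = np.random.RandomState(0)
data = rng.randn(10000)  # samples from the "data distribution" p = N(0, 1)

# Mean NLL over data samples estimates H(p, q_mu); it is smallest
# when the model q_mu matches the data distribution (mu = 0 here).
nll_good = gaussian_nll(data, mu=0.0).mean()
nll_bad = gaussian_nll(data, mu=2.0).mean()
assert nll_good < nll_bad
```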
<div>
<br /></div>
<div>
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> jax</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">experimental </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> optimizers</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> jax </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> jit</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> grad</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> numpy </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">as</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> onp</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> cs </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> init_nvp_chain</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">4</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> loss</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> batch</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">-</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">mean</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">log_prob_nvp_chain</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; 
white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> cs</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> log_prob_n01</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> batch</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">opt_init</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> opt_update</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> get_params </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> optimizers</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">.</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">adam</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">step_size</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; 
white-space: pre-wrap;">=</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">1e-4</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
</td></tr>
</tbody></table>
</div>
</div>
<div>
<div>
<br /></div>
<div>
Next, we declare a single optimization step: we retrieve the current parameters from the optimizer state, compute gradients of the loss with respect to our big list of Real-NVP parameters, and then apply the update. The cool thing about JAX is that we can "jit" (just-in-time compile) the step function into a single XLA computation, so the entire optimization step runs without returning to the (relatively slow) Python interpreter. We could even JIT the entire optimization loop if we wanted to!</div>
</div>
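On that last point, here is a minimal standalone sketch of jitting an entire optimization loop with <span style="font-family: "consolas" , sans-serif;">jax.lax.fori_loop</span>, using a toy quadratic loss rather than the Real-NVP model. (Note: the <span style="font-family: "consolas" , sans-serif;">jax.experimental.optimizers</span> module used in this post moved to <span style="font-family: "consolas" , sans-serif;">jax.example_libraries.optimizers</span> in later JAX releases, hence the guarded import.)

```python
import jax.numpy as np
from jax import jit, grad, lax

try:
    from jax.experimental import optimizers  # JAX versions contemporary with this post
except ImportError:
    from jax.example_libraries import optimizers  # newer JAX releases

def toy_loss(params):
    # A stand-in for the Real-NVP NLL: minimized at params == 3.
    return np.sum((params - 3.0) ** 2)

opt_init, opt_update, get_params = optimizers.adam(step_size=1e-1)

@jit
def train(opt_state):
    def body(i, opt_state):
        g = grad(toy_loss)(get_params(opt_state))
        return opt_update(i, g, opt_state)
    # The whole 500-step loop compiles into one XLA computation;
    # Python is not re-entered between steps.
    return lax.fori_loop(0, 500, body, opt_state)

final = get_params(train(opt_init(np.zeros(2))))
```

The carry of the <span style="font-family: "consolas" , sans-serif;">fori_loop</span> is the optimizer state itself, which works because it is an ordinary pytree with a fixed structure across iterations.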
<div>
<br /></div>
<div>
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">@jit</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> step</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">i</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> opt_state</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> batch</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">params</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> get_params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">opt_state</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> g </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> grad</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">loss</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)(</span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">params</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> batch</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> opt_update</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">i</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> g</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> opt_state</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">iters </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">=</span><span style="background-color: transparent; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">int</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">1e4</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">data_generator = (X[onp.random.choice(X.shape[0], 100)] for _ in range(iters))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">opt_state = opt_init(ps)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">for i in range(iters):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">  opt_state = step(i, opt_state, next(data_generator))</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre-wrap;">ps = get_params(opt_state)</span></div>
</td></tr>
</tbody></table>
</div>
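The loop above relies on the `opt_init`/`get_params`/`step` optimizer interface and the `X` dataset defined earlier in the post. To make the minibatch-generator pattern concrete on its own, here is a self-contained sketch using a toy NumPy dataset and a plain-SGD stand-in for the optimizer triple (the toy data and the quadratic loss inside `step` are illustrative assumptions, not the post's actual flow training objective):

```python
import numpy as onp

def opt_init(params):
    # For plain SGD the optimizer "state" is just the parameters themselves.
    return params

def get_params(opt_state):
    return opt_state

def step(i, opt_state, batch, lr=0.1):
    # Toy objective: fit the batch mean; gradient of 0.5 * ||p - mean||^2.
    params = get_params(opt_state)
    grad = params - batch.mean(axis=0)
    return params - lr * grad

rng = onp.random.RandomState(0)
X = rng.randn(500, 2) + 3.0  # toy dataset centered at (3, 3)
iters = 200
ps = onp.zeros(2)

# Same pattern as the snippet above: a generator expression that yields
# a fresh random minibatch of 100 rows for each training step.
data_generator = (X[rng.choice(X.shape[0], 100)] for _ in range(iters))
opt_state = opt_init(ps)
for i in range(iters):
    opt_state = step(i, opt_state, next(data_generator))
ps = get_params(opt_state)
# ps converges toward the data mean, roughly (3, 3).
```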
</div>
<h3 style="margin-left: 0pt;">
Animation</h3>
<div>
<br />
Here's the code snippet that will visualize each of the 4 affine coupling layers transforming samples from the base Normal distribution, in sequence. Is it just me, or does anyone else find themselves constantly having to Google "How to make a Matplotlib animation?"<br />
<br />
<br />
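(In case you are also Googling it right now: the bare-bones `FuncAnimation` recipe is just "make a figure, write an update function that mutates the artists, hand both to `FuncAnimation`". A minimal self-contained sketch with toy scatter data — not the post's snippet, which uses the actual coupling-layer outputs:)

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib import animation
import numpy as onp

# 8 frames of toy 2-D scatter data standing in for the flow's intermediate samples.
data = onp.random.RandomState(0).randn(8, 100, 2)

fig, ax = plt.subplots()
scat = ax.scatter(data[0][:, 0], data[0][:, 1], s=10)

def animate(i):
    scat.set_offsets(data[i])  # move the scatter points to frame i
    return (scat,)

anim = animation.FuncAnimation(fig, animate, frames=data.shape[0], interval=200)
```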
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">from matplotlib import animation, rc</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">from IPython.display import HTML, Image</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">x = sample_n01(1000)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">values = [x]</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">for p, config in zip(ps, cs):</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">  shift_log_scale_fn, flip = config</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">  x = nvp_forward(p, shift_log_scale_fn, x, flip=flip)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">  values.append(x)</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #455a64; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;"># First set up the figure, the axis, and the plot element we want to animate</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">fig, ax = plt.subplots()</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">ax.set_xlim(xlim)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">ax.set_ylim(ylim)</span></div>
<br />
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: consolas, sans-serif; font-size: 10pt; vertical-align: baseline; white-space: pre;">y = values[0]</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<pre style="font-family: Consolas, monospace; font-size: 10pt;">paths = ax.scatter(y[:, 0], y[:, 1], s=10, color='red')

def animate(i):
    # Linearly interpolate between consecutive entries of `values`,
    # spending 48 frames on each segment.
    l = i // 48
    t = float(i % 48) / 48
    y = (1 - t) * values[l] + t * values[l + 1]
    paths.set_offsets(y)
    return (paths,)

anim = animation.FuncAnimation(fig, animate, frames=48 * len(cs), interval=1, blit=False)
anim.save('anim.gif', writer='imagemagick', fps=60)</pre>
</div>
</td></tr>
</tbody></table>
</div>
<br />
<br />
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-55578764361868572672019-07-05T13:04:00.004-07:002019-12-18T23:33:39.358-08:00Tips for Training Likelihood Models<div>
This is a tutorial on common practices in training generative models that optimize likelihood directly, such as <a href="https://eigenfoo.xyz/deep-autoregressive-models/">autoregressive models</a> and <a href="https://blog.evjang.com/2018/01/nf1.html">normalizing flows</a>. Deep generative modeling is a fast-moving field, so I hope for this to be a newcomer-friendly introduction to the basic evaluation terminology used consistently across research papers, especially when it comes to modeling more complicated distributions like RGB images. This is a more in-depth version of the <a href="https://slideslive.com/38917907/tutorial-on-normalizing-flows">tutorial lecture</a> I gave on normalizing flows at ICML.</div>
<br />
This tutorial discusses the most mathematically straightforward generative models (tractable density estimation models) and covers some important design considerations when choosing how to model image pixels. By the end of this post, you will know how to quantitatively compare likelihood models, even ones that differ drastically in architecture and in the way pixels are modeled.<br />
<div>
<br /></div>
<h3>
Divergence Minimization: A General Framework for Generative Modeling</h3>
<div>
<br /></div>
<div>
The goal of generative modeling (all of statistical machine learning, really) is to take data sampled from some (possibly conditional) probability distribution $p(x)$ and learn a model $p_\theta(x)$ that approximates $p(x)$. Modeling allows us to extrapolate insight beyond the raw data we are given. Here are some versatile things you can do with generative models:</div>
<ul>
<li><a href="https://thispersondoesnotexist.com/">Draw new samples</a> from $p(x)$</li>
<li>Learn <a href="https://en.wikipedia.org/wiki/Latent_variable">hierarchical latent variables</a> $z$ that explain the observations $x$</li>
<li>Intervene on latent variables to examine the interventional distributions $p_\theta(x|do(z))$. Note that this only works properly if your conditional distribution models the correct causal relationship $z \to x$ and we assume <a href="http://www.fragilefamilieschallenge.org/causal-inference/">ignorability</a>.</li>
<li>Interrogate the likelihood of a new data point $x^\prime$ under our model distribution to <a href="https://arxiv.org/abs/1810.01392">detect anomalies</a></li>
</ul>
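The last bullet, anomaly detection, follows directly from having tractable likelihoods: score a new point under the model and flag it if its log-likelihood falls below a threshold calibrated on training data. Here is a minimal sketch of that idea, where a fitted Gaussian stands in for whatever likelihood model you have trained (all names and the 0.1% threshold are illustrative choices, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# "Training data" from the true distribution; in practice this is your dataset.
train = rng.normal(0.0, 1.0, size=5000)

# Fit a Gaussian density model by maximum likelihood (mean and std of the data).
mu, sigma = train.mean(), train.std()

def log_prob(x):
    # Log-density of the fitted Gaussian.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

# Calibrate an anomaly threshold: flag anything less likely than
# the 0.1% least-likely training points.
threshold = np.quantile(log_prob(train), 0.001)

assert log_prob(0.1) > threshold   # a typical point is not flagged
assert log_prob(8.0) < threshold   # a far-out point is flagged as anomalous
```

The same recipe applies unchanged to an autoregressive model or a normalizing flow; only the `log_prob` function differs.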
Modeling conditional distributions has an even broader set of direct applications, since we can interpret classification and regression problems as learning generative models:<br />
<ul>
<li>Machine Translation $p(\text{translated english sentence}|\text{french sentence})$</li>
<li>Captioning $p(\text{caption}|\text{image})$</li>
<li>Regression objectives like minimizing mean-squared error $\min_\mu \frac{1}{2}(x-\mu)^2$ are mathematically equivalent (up to additive constants) to maximum log-likelihood estimation of a Gaussian with fixed diagonal covariance: $\max_\mu -\frac{1}{2} (x-\mu)^2$. </li>
</ul>
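The regression-as-MLE equivalence in the last bullet is easy to check numerically: the Gaussian negative log-likelihood differs from the MSE by an additive constant, so both objectives have the same minimizer (the sample mean). A quick sketch, assuming a scalar target and unit variance (variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=1000)

mus = np.linspace(0.0, 6.0, 601)
# Mean-squared error as a function of the predicted mean mu.
mse = np.array([np.mean(0.5 * (x - mu) ** 2) for mu in mus])
# Negative log-likelihood of a unit-variance Gaussian with mean mu.
nll = np.array([np.mean(0.5 * (x - mu) ** 2 + 0.5 * np.log(2 * np.pi)) for mu in mus])

# The two curves differ only by a constant, so they share a minimizer:
# the sample mean.
assert np.allclose(nll - mse, nll[0] - mse[0])
assert abs(mus[np.argmin(mse)] - x.mean()) < 0.01
```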
<br />
In order to get $p_\theta(x)$ to match $p(x)$, we first have to come up with a notion of <a href="https://en.wikipedia.org/wiki/Statistical_distance#Generalized_metrics">distance</a> between the two distributions. In statistics, it is more common to devise a weaker notion of “distance” called a <a href="https://en.wikipedia.org/wiki/Divergence_(statistics)">divergence measure</a>, which, unlike a metric, need not be symmetric ($D(p, q) \neq D(q, p)$ in general). Once we have a formal divergence measure between distributions, we can attempt to minimize it via optimization.<br />
<br />
There are many, many divergences $D(p_\theta || p)$ that we can formulate, and these are often chosen to suit the generative modeling algorithm. Here are just a few:<br />
<ul>
<li>Maximum Mean Discrepancy (MMD)</li>
<li>Jensen-Shannon Divergence (JSD)</li>
<li>Kullback-Leibler divergence (KLD)</li>
<li>Reverse KLD</li>
<li>Kernelized Stein discrepancy (KSD)</li>
<li>Bregman Divergence</li>
<li>Hyvärinen score</li>
<li>Chi-Squared Divergence</li>
<li>Alpha Divergence</li>
</ul>
In the limit of infinite data and compute, all of these divergences arrive at the same answer: $D(p_\theta || p) = 0$ if and only if $p_\theta \equiv p$. Note that these divergences are distinct from perceptual evaluation metrics like <a href="https://sudomake.ai/inception-score-explained/">Inception Score</a> or Fréchet Inception Distance, which do not guarantee convergence to the same result in the high-data limit (but are useful metrics if you care about the visual quality of images).<br />
<br />
However, most experiments see a finite amount of data and compute, so the choice of divergence matters and can actually change the qualitative behavior of the learned generative distribution $p_\theta(x)$. For example, if the target density $p$ is multi-modal and the model distribution $q$ is not expressive enough, minimizing forward KL $D_{KL}(p||q)$ will learn mode-covering behavior, while minimizing reverse KL $D_{KL}(q||p)$ results in mode-dropping behavior. See this <a href="https://blog.evjang.com/2016/08/variational-bayes.html">blog post </a>for an explanation why.<br />
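The mode-covering/mode-seeking asymmetry is easy to reproduce in a toy experiment: fit a single Gaussian (too weak to represent two modes) to a bimodal mixture by brute-force search under each divergence direction. A minimal numpy sketch (the grid of candidate parameters is an arbitrary choice):

```python
import numpy as np

# Bimodal target p(x): mixture of two unit-variance Gaussians at -3 and +3.
xs = np.linspace(-10, 10, 2001)
dx = xs[1] - xs[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = 0.5 * gauss(xs, -3, 1) + 0.5 * gauss(xs, 3, 1)

def kl(a, b):
    # Discretized KL(a || b) on the grid; epsilon guards against log(0).
    eps = 1e-12
    return np.sum(a * (np.log(a + eps) - np.log(b + eps))) * dx

# Single-Gaussian model family q(x; mu, s), searched over a coarse grid.
candidates = [(mu, s) for mu in np.linspace(-4, 4, 17)
              for s in np.linspace(0.5, 5, 10)]
fwd = min(candidates, key=lambda c: kl(p, gauss(xs, *c)))  # forward KL(p||q)
rev = min(candidates, key=lambda c: kl(gauss(xs, *c), p))  # reverse KL(q||p)

print("forward KL picks (mu, sigma):", fwd)  # wide, centered near 0: mode-covering
print("reverse KL picks (mu, sigma):", rev)  # narrow, on one mode: mode-seeking
```

The forward-KL fit spreads a wide Gaussian across both modes, while the reverse-KL fit collapses onto a single mode, exactly the qualitative behaviors described above.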
<br />
Thinking about generative modeling in the framework of divergence minimization is useful because it allows us to map what properties we want from our generative models to our choice of divergence objective in a principled way. For instance, the model may be an implicit density model (GANs), where sampling is tractable but log-probabilities are not available, or an energy-based model, where sampling is not available but (unnormalized) log-probabilities are tractable.<br />
<br />
This blog post will cover models trained and evaluated using the most straightforward metric: the Kullback-Leibler Divergence. These models include Autoregressive Models, Normalizing Flows, and Variational Autoencoders (approximately). Optimizing KLD is equivalent to optimizing log-probability, and we'll derive why in the next section!<br />
<br />
<h3>
Average Log-Probability and Compression</h3>
<br />
We want to model $p(x)$, the probability distribution for some data-generating stochastic process. We typically assume that sampling from a sufficiently large dataset is approximately the same thing as sampling from the true data-generating process. For instance, sampling an image from the MNIST dataset is equivalent to drawing a sample from the true handwriting process that created the MNIST dataset.<br />
<br />
Given a test set of images $x^1,...,x^N$ sampled i.i.d from $p(x)$, and a likelihood model $p_\theta$ parameterized by $\theta$, we want to maximize the following objective:<br />
<br />
<br />
$$<br />
\mathcal{L}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\log p_\theta(x^i) \approx \int p(x) \log p_\theta(x) dx = -H(p, p_\theta)<br />
$$<br />
<br />
The average log-probability is a <a href="http://statweb.stanford.edu/~susan/courses/s208/node14.html">Monte Carlo estimator</a> of the negative <a href="https://en.wikipedia.org/wiki/Cross_entropy">cross entropy</a> between the true likelihood $p$ and model likelihood $p_\theta$; we need an estimator because we cannot actually integrate over all possible $x$. In plain language, the objective translates to "maximize average likelihood of data", or equivalently, "minimize cross-entropy between true distribution and model distribution".<br />
<br />
With <a href="https://stats.stackexchange.com/questions/12805/is-it-appropriate-to-use-the-term-bits-to-discuss-a-log-base-2-likelihood-rati">a little algebra</a>, the negative cross-entropy can be re-written in terms of KL divergence (relative entropy) and absolute entropy of $p$:<br />
<br />
$$-H(p, p_\theta) = -KL(p, p_\theta) - H(p)$$<br />
<br />
<a href="https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem">Shannon’s Source Coding Theorem</a> (1948) tells us that entropy $H(p)$ is the lower bound on average code length for any code you can construct to communicate samples from $p(x)$ losslessly. More entropy means more "randomness", which cannot be compressed. In particular, when we use the natural logarithm $\log_e$ to compute entropy, it takes on the "natural units of information", or <a href="https://en.wikipedia.org/wiki/Nat_(unit)">nats</a>. When computing entropy in $\log_2$, the resulting units are the familiar "bit". The $H(p)$ term is independent of $\theta$, so maximizing $\mathcal{L}(\theta)$ is really just equivalent to minimizing $KL(p, p_\theta)$. That is why maximum likelihood is also known as minimizing KL divergence.<br />
<br />
The KL divergence $KL(p, p_\theta)$, or relative entropy, is the number of "extra nats" you would need to encode data from $p(x)$ using an entropy coding scheme based on $p_\theta(x)$. Therefore, our Monte Carlo estimator $\mathcal{L}(\theta)$ of negative cross entropy is also expressed in nats.<br />
<br />
Putting the two together, the cross entropy is nothing more than the average code length required to communicate samples from $p$, using a codebook based on $p_\theta$. We pay a "base fee" of $H(p)$ nats no matter what (the optimal code), and we pay an additional "fine" of $KL(p, p_\theta)$ nats for any deviations of $p_\theta$ from $p$.<br />
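This base-fee/fine decomposition can be verified directly on a small discrete distribution, where all three quantities are exact sums. A minimal numpy sketch:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # true distribution
q = np.array([0.25, 0.25, 0.25, 0.25])    # model's codebook distribution

entropy = -np.sum(p * np.log(p))          # H(p): the "base fee" in nats
kl = np.sum(p * (np.log(p) - np.log(q)))  # KL(p||q): the "fine" in nats
cross_entropy = -np.sum(p * np.log(q))    # average code length under q

print(np.isclose(cross_entropy, entropy + kl))  # True
```

Here the uniform codebook $q$ pays the optimal $H(p)$ nats plus a KL penalty for ignoring the structure in $p$.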
<br />
We can compare cross entropies of two different models in a very interpretable way: suppose model $\theta_1$ has average likelihood $\mathcal{L}(\theta_1)$ and model $\theta_2$ has average likelihood $\mathcal{L}(\theta_2)$. Subtracting $\mathcal{L}(\theta_1) - \mathcal{L}(\theta_2)$ causes the entropy terms $H(p)$ to cancel out, resulting in $KL(p, p_{\theta_2})-KL(p, p_{\theta_1})$. This quantity is the reduction in penalty nats you pay when switching from code $p_{\theta_2}$ to code $p_{\theta_1}$.<br />
<br />
<a href="https://blog.evjang.com/2017/11/exp-train-gen.html">Expressivity, optimization, and generalization</a> are three important properties of a good generative model, and likelihoods offer an interpretable metric with which to debug these properties in our models. If a generative model cannot memorize the training set, it suggests there are difficulties with optimization (getting stuck) or expressivity (underfitting). <br />
<br />
The Cifar10 image dataset has 50000 training samples, so a model that memorizes the data perfectly by spreading its mass uniformly over the training set will assign a probability mass of exactly 1/50000 to each training image, thereby achieving a cross entropy of $-\log_2(\frac{1}{50000})$, or 15.6 bits per image (this is independent of how many pixels there are per image!). Of course, we usually don’t want our generative models to overfit to such extremes, but it’s useful to keep this upper bound on average training log-likelihood in mind as a sanity check when debugging your generative model.<br />
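The 15.6 number comes straight from the code length of a uniform distribution over the training set:

```python
import numpy as np

# A model that spreads its mass uniformly over the 50000 CIFAR-10 training
# images assigns probability 1/50000 to each, costing log2(50000) bits apiece.
bits_per_image = -np.log2(1 / 50000)
print(round(bits_per_image, 1))  # 15.6
```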
<br />
Comparing the difference between training and test likelihoods can tell us if the networks are memorizing the training set or learning something that generalizes to the test set, or whether there are semantically meaningful modes in the data that the model fails to capture.<br />
<br />
<h3>
Which Distribution Should You Use For Modeling Image Pixels?</h3>
<br />
There are plenty of ways to parameterize an image. For instance, you can represent an image via a 3D scene that is <a href="https://en.wikipedia.org/wiki/Ray_tracing_(graphics)">projected (rendered) into 2D</a>. Or you can parameterize images as <a href="http://blog.otoro.net/2017/05/19/teaching-machines-to-draw/">vector representations of sketches</a> (like SVG graphics), or <a href="https://arxiv.org/abs/1506.05751">Laplacian Pyramids</a>, or even motor torques for a robotic arm <a href="https://www.youtube.com/watch?v=eYIl6zi2wbo">that subsequently paints a picture</a>. However, for simplicity, researchers typically model image likelihoods as the joint distribution over RGB pixels - it is a general-purpose digital format that has proven effective for capturing natural data in the visible electromagnetic spectrum.<br />
<br />
<br />
Each RGB channel value is encoded as a uint8 integer, which can take on 256 possible values. Thus, an image with 3072 channel values (e.g. 32x32x3) can take on $256^{3072}$ possible configurations. Since there are a finite number of such images, we could technically represent the distribution over images with a single $256^{3072}$-sided die. But this number is far too large to represent in memory! Even modeling just 3 uint8-encoded values jointly as a Categorical results in $256^3=16777216$ possible categories, which is unwieldy even for modern computers. To make things computationally tractable, we must "factorize" the likelihood for the whole image into a combination of conditionally independent per-pixel distributions. One easy factorization is to make each pixel likelihood independent of one another:<br />
<br />
<br />
$$ p(x_1, ..., x_N) = p(x_1)p(x_2)...p(x_N)$$<br />
<br />
This is also known as a mean-field decoder (see <a href="https://blog.evjang.com/2016/08/variational-bayes.html?showComment=1476820915459#c94172995220378442">this comment</a> for where the name “mean-field” comes from). Each pixel-wise distribution has its own tractable density or mass function.<br />
<br />
Another choice is to make the pixel likelihood autoregressive, where each conditional distribution has its own tractable density or mass function. <br />
<br />
$$ p(x_1, ..., x_N) = p(x_1)p(x_2|x_1)...p(x_N|x_1,...,x_{N-1})$$<br />
<br />
We still have to figure out how to model each conditional distribution though. Here are some common choices along with an example paper that used them:<br />
<ol>
<li>Bernoulli probabilities over each channel (<a href="https://arxiv.org/abs/1502.04623">DRAW</a>)</li>
<li>256-way Categorical distribution over each channel (<a href="https://arxiv.org/abs/1601.06759">PixelRNN</a>, <a href="https://arxiv.org/pdf/1802.05751.pdf">Image Transformer</a>)</li>
<li>Continuous density on de-quantized data (<a href="https://arxiv.org/abs/1605.08803">Real-NVP</a>)</li>
<li>Discretized logistic mixture (<a href="https://arxiv.org/pdf/1701.05517.pdf">PixelCNN++</a>, <a href="https://arxiv.org/pdf/1802.05751.pdf">Image Transformer</a>)</li>
</ol>
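Whichever conditional family you choose, the autoregressive factorization means the joint log-likelihood is just a sum of conditional log-probabilities over pixels. A minimal sketch with 256-way categoricals, where <code>conditional_probs</code> is a hypothetical stand-in for a learned network:

```python
import numpy as np

def conditional_probs(prefix):
    # Hypothetical stand-in for a learned model p(x_i | x_1..x_{i-1}):
    # deterministic pseudo-random logits derived from the pixel prefix.
    rng = np.random.default_rng(1000 * len(prefix) + sum(prefix))
    logits = rng.normal(size=256)
    e = np.exp(logits - logits.max())  # softmax over the 256 pixel values
    return e / e.sum()

def joint_log_prob(pixels):
    # log p(x_1..x_N) = sum_i log p(x_i | x_1..x_{i-1})
    return sum(np.log(conditional_probs(pixels[:i])[pixels[i]])
               for i in range(len(pixels)))

print(joint_log_prob([3, 200, 17, 255]))  # a (negative) log-likelihood in nats
```

A real model replaces `conditional_probs` with an RNN, masked convolution, or Transformer, but the chain-rule summation of log-probabilities is the same.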
<h4>
Pixel Values as Bernoulli Emission Probabilities</h4>
<br />
The MNIST, FashionMNIST, NotMNIST datasets are good choices to start with when debugging your likelihood models:<br />
<br />
<ul>
<li>Those datasets can be stored completely in computer RAM</li>
<li>They do not require a lot of architecture tuning (allowing you to focus on the algorithmic aspects)</li>
<li>Small generative models for these datasets can train on modest hardware, such as a modern laptop lacking a GPU.</li>
</ul>
<br />
It is common to model each pixel likelihood $ p(x_i)$ as a Bernoulli random variable. For binarized data where pixel values are only 0 or 1 (heads or tails), this is fine. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="278" src="https://lh6.googleusercontent.com/uGH5ASA1xsOMQHQgv9mcUJ1-qnmXdGeO-D8aC8ZPTMX3-MHMh1I7KatPHpBnGS-4YQlUHai4Sx98fVCAw4SrtYJxDfQxMIQp5zkOW1oCIdeYWplJ7qoi8J_XKJdwVFD3V0UFx-8h" style="margin-left: auto; margin-right: auto;" width="400" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Example of a binarized MNIST image. Binarized digits are recognizable, but not so much for natural images.</span></td></tr>
</tbody></table>
<br />
However, MNIST and its friends are encoded as floating point values in the range [0, 1], where the 256 integers are normalized to lie between these boundaries. There is an expressivity problem here, because Bernoulli variables cannot sample values between 0 and 1! <br />
<br />
For papers that train on non-binarized MNIST, we must instead interpret the encoded values as emission probabilities for the corresponding Bernoulli variables: if we see a pixel value of 0.9, it represents a Bernoulli likelihood of the pixel being 1, not the sample value itself. The optimization objective is to minimize the cross entropy between the predicted distribution (parameterized by a scalar emission probability) and the stored emission probability in the data. The cross-entropy of two Bernoullis with emission probabilities $p(1)$ and $p_\theta(1)$ is given by:<br />
<br />
$$H(p, p_\theta) = -\left[(1-p(1)) \log (1-p_\theta(1)) + p(1) \log p_\theta(1)\right]$$<br />
<br />
Remember from the earlier section in this post that minimizing this cross entropy yields the same objective as maximizing likelihood! The average log-likelihood (negative cross entropy) of these toy image datasets is usually reported in units of nats.<br />
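The formula above is easy to sanity-check numerically; it is minimized exactly when the model's emission probability matches the one stored in the data. A minimal numpy sketch:

```python
import numpy as np

def bernoulli_cross_entropy(p_data, p_model):
    # H(p, p_theta) for two Bernoullis, given their emission probabilities (nats).
    return -((1 - p_data) * np.log(1 - p_model) + p_data * np.log(p_model))

# A stored "pixel value" of 0.9 is treated as an emission probability, not a sample.
print(bernoulli_cross_entropy(0.9, 0.9))  # the minimum achievable for this pixel
print(bernoulli_cross_entropy(0.9, 0.5))  # a mismatched model pays extra nats
```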
<br />
The <a href="https://arxiv.org/abs/1502.04623">DRAW</a> paper (Gregor et al. 2015) extends this idea to modeling per-channel colors. However, there is a serious drawback to interpreting color pixel data as emission probabilities. When we sample from our generative model, we get noisy, speckly images rather than natural-looking coherent images. Here’s a Python code snippet that reproduces this problem:<br />
<br />
<div align="left" dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #fafafa; border-bottom: solid #e0e0e0 1pt; border-left: solid #e0e0e0 1pt; border-right: solid #e0e0e0 1pt; border-top: solid #e0e0e0 1pt; overflow-wrap: break-word; overflow: hidden; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">import</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> tensorflow_datasets </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">as</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> tfds</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">import</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> numpy </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">as</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> np</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">import</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> matplotlib</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">pyplot </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">as</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> plt</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">builder </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> tfds</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">builder</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: #0f9d58; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">"cifar10"</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">builder</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">download_and_prepare</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">()</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">datasets </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> builder</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">as_dataset</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">()</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">np_datasets </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> tfds</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">as_numpy</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">datasets</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: 
baseline; white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">img </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #9c27b0; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">next</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">np_datasets</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[</span><span style="background-color: transparent; color: #0f9d58; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">'train'</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">])[</span><span style="background-color: transparent; color: #0f9d58; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">'image'</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">]</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">sample </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">random</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">binomial</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">p</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">img</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">astype</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: 
baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">float32</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)/</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">256</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">fig</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> arr </span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">=</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> plt</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">subplots</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">(</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">,</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">2</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">arr</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">0</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">].</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">imshow</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">img</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre;">)</span></div>
<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">arr</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">[</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">1</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">].</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">imshow</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">((</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">sample</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre;">*</span><span style="background-color: transparent; color: #c53929; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">255</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">).</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">astype</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">(</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">np</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">.</span><span style="background-color: transparent; color: black; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">uint8</span><span style="background-color: transparent; color: #616161; font-family: "consolas" , sans-serif; font-size: 10pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; 
vertical-align: baseline; white-space: pre;">))</span></div>
</td></tr>
</tbody></table>
</div>
<br />
<br />
<img src="https://lh4.googleusercontent.com/WTJ5G1fsJ0R_73D9980JAylK2SrynWrN79vTF0vWCEvBH8cUpwncr5hpOm-VOxGf9WuKKHiv35YNLSd4O9yizaNnBLl4jYF_KXNeGTd2zTfZfaQcHkAMCGwegx6vZL9o4_Gpwd7c" /><br />
<br />
Interpreting pixel values as ‘emission probabilities’ results in unrealistic-looking samples: while it is an acceptable assumption for handwritten digits and sprites, it doesn't hold for larger-scale, natural images. Papers that do use Bernoulli decoders will often showcase the emission probabilities (e.g. in a reconstruction or imputation task) rather than actual samples.<br />
<br />
<h4>
Pixel Values as Categorical Distributions</h4>
<div>
<br /></div>
Larger color datasets (SVHN, CIFAR10, CelebA, ImageNet) are encoded in 8-bit RGB color (each channel is a uint8 integer that ranges in value from 0 to 255, inclusive).<br />
<br />
Instead of interpreting their pixel values as Bernoulli emission probabilities, we can attempt to model the distribution over actual uint8 pixel values encoded in the image. One of the simplest choices is a 256-way categorical distribution. <br />
<br />
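To make the 256-way categorical treatment concrete, here is a minimal NumPy sketch. The random "model" logits are a hypothetical stand-in for a real network's output; the point is how the per-pixel categorical negative log-likelihood is computed, and how sampling draws actual uint8 values via the inverse CDF:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a model's output: unnormalized logits over the 256
# possible intensity values for each sub-pixel of a 32x32x3 image.
logits = rng.normal(size=(32, 32, 3, 256))

# Softmax over the last axis yields a categorical distribution per sub-pixel.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

# Categorical negative log-likelihood (in nats) of an observed uint8 image.
img = rng.integers(0, 256, size=(32, 32, 3))
nll = -np.log(np.take_along_axis(probs, img[..., None], axis=-1)).mean()

# Sampling draws actual uint8 values rather than "emission probabilities":
# for each sub-pixel, count how many CDF entries lie below a uniform draw.
u = rng.random(size=(32, 32, 3, 1))
sample = (u > probs.cumsum(axis=-1)).sum(axis=-1).astype(np.uint8)
```

With a trained model, `sample` is an image you can display directly, which is exactly the property the Bernoulli emission-probability approach lacked.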
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="320" src="https://lh5.googleusercontent.com/-ORNaiFzUDV-itsWKjKXeivFhXMKBRFCxANtiIumyR6KJuXtoYd2h9q5XkW303apXGVPqlBpVHNOWQT0waAaW5XVoIJwqA5jGfq3_JA1MjQKnfUBghq493c7rgombq7ulom0ZhSZ" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Sampling from categorical distributions allows the generative model to sample images rather than emission probabilities.</td></tr>
</tbody></table>
<br />
For color images, it is common to report cross entropies of individual pixels in log base 2, instead of log base e. If a test set with 3072 sub-pixel dimensions per image (e.g. CIFAR-10's 32&times;32&times;3) has an average log-likelihood (in nats) of $-H(p, q)$, the “bits-per-pixel” is just $H(p, q) \div \log(2) \div 3072$.<br />
<br />
This metric is motivated by the average-likelihood-as-compression interpretation we discussed earlier: for a pixel that is typically encoded using 8 bits, we can devise a lossless entropy coding scheme using our generative model $p_\theta$ that compresses the entire dataset with an average length of about 3 bits per pixel.<br />
<br />
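The nats-to-bits-per-dimension conversion above fits in a one-liner; the helper name `bits_per_dim` below is my own, not a standard API:

```python
import math

def bits_per_dim(avg_nll_nats, num_dims=3072):
    """Convert an average negative log-likelihood (nats per image) into
    bits per dimension, e.g. num_dims = 32*32*3 = 3072 for CIFAR-10."""
    return avg_nll_nats / math.log(2) / num_dims
```

For example, a CIFAR-10 model with an average per-image test NLL of about 5962 nats corresponds to roughly 2.80 bits per dimension.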
At the time of this writing, the best generative model for Cifar10, <a href="https://arxiv.org/abs/1904.10509">Sparse Transformers</a>, achieves a test likelihood of 2.80 bits per pixel. As a point of comparison, PNG and WebP -- widely used algorithms for lossless image compression -- achieve about 5.87 and 4.61 bits per pixel on Cifar10 images, respectively (PNG achieves 5.72 bpp if you don’t count the extra bytes like headers and CRC checksums).<br />
<br />
This is quite exciting, because it suggests that machine learning can be used for better content-aware entropy-encoding schemes than existing compression schemes. Efficient lossless compression can be used to improve hashing algorithms, make your downloads faster, and improve your Zoom calls, and all of that technology is probably quite feasible today.<br />
<br />
<h4>
Stochastic De-Quantization for Continuous Density Models</h4>
<br />
If we optimize a continuous density model (such as a mixture of Gaussians) to maximize log-likelihood on discrete data, we can end up with a degenerate solution where the model places a density spike on each of the possible discrete values {0, ..., 255}. Even with an infinitely large dataset, the model can achieve arbitrarily high likelihoods by simply squeezing the spikes narrower and narrower.<br />
<br />
To address this problem, it is quite common to de-quantize the data by adding noise to the integer pixel values. One such transformation is given by $y = x + u$, where $u$ is a sample from the random uniform $U(0,1)$. The first paper that I am aware of that motivates stochastic de-quantization for density modeling is <a href="https://arxiv.org/pdf/1306.0186.pdf">Uria et al. 2013</a>; it has since become common practice in <a href="https://arxiv.org/pdf/1410.8516.pdf">Dinh et al., 2014</a>, <a href="https://arxiv.org/abs/1701.05517">Salimans et al., 2017,</a> and the works that built on top of these papers.<br />
<br />
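As a concrete sketch of the $y = x + u$ transformation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete uint8 pixel data, e.g. a small batch of images.
x = rng.integers(0, 256, size=(4, 32, 32, 3))

# Uniform de-quantization y = x + u, u ~ U(0, 1): each discrete value is
# smeared uniformly over a unit interval, so y covers [0, 256) continuously
# and the degenerate density-spike solution is no longer rewarded.
y = x.astype(np.float64) + rng.random(size=x.shape)
```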
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img height="234" src="https://lh5.googleusercontent.com/wIVjndBQoE6VFhiw1qAMTtkgj-0gY_ALVwLFA9L5sWJH2F2E5AgK8fzVVwkhbI7XlehBoLJHo3zuHUYAuM1A3tZBrMcMSyq6L_giINFKWabUl4bUwDTcvdI5GtnYVLum78mlcSEx" style="margin-left: auto; margin-right: auto;" width="640" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Left: optimizing density models on discrete data can result in a degenerate solution where the model assigns a probability spike on a finite set of discrete values. Stochastic de-quantization is often applied so that we learn likelihood models on continuous data.</span></td></tr>
</tbody></table>
<br />
A discrete model assigns probability mass over an interval, while a continuous model assigns a density function. Let $P(x)$ and $p(x)$ represent the discrete probability masses and continuous densities of the true data distribution, and let $P_\theta(x)$ and $p_\theta(x)$ represent the same for the model density. We’ll derive below why optimizing the continuous likelihood model $p_\theta(y)$ over de-quantized data $y$ results in optimizing the lower-bound of the actual discrete probability model $P_\theta(x)$:<br />
<br />
Integrating the density over a unit interval gives us the total mass implied by the density function:<br />
<br />
$$ P_\theta(x) = \int_0^1 p_\theta(x+u) du $$<br />
<br />
Our model likelihood objective is trained on de-quantized data sampled from the true data distribution:<br />
<br />
$$ \mathbb{E}_{y \sim p}\left[ \log p_\theta(y) \right]$$<br />
<br />
By definition of expectation:<br />
<br />
$$ = \int p(y) \log p_\theta(y) dy $$<br />
<br />
Splitting the integral over $y$ into a sum of integrals over the unit intervals $[x, x+1)$, and noting that the de-quantized data density satisfies $p(y) = P(x)$ for $y \in [x, x+1)$:<br />
<br />
$$ = \sum_x P(x) \int_0^1 \log p_\theta(x+u) du $$<br />
$$ = \mathbb{E}_{x \sim P} \int_0^1 \log p_\theta(x+u) du $$<br />
<br />
<br />
Via Jensen’s inequality (applied to the uniform variable $u$), <br />
<br />
$$ \leq \mathbb{E}_{x \sim P} \log \int_0^1 p_\theta(x+u) du $$<br />
$$ = \mathbb{E}_{x \sim P} \log P_\theta(x) $$<br />
<br />
<br />
A recent paper, <a href="https://arxiv.org/abs/1902.00275">Flow++</a>, proposes using a learned de-quantization random variable to improve the tightness of the variational bound. The intuition here is that a single importance-sampled noise variate from $q(u|x)$ results in a lower-variance estimate of the integral $\int_0^1 p_\theta(x+u) du$ than a single sample from a uniform(0, 1) distribution. One consequence of this work is that, because the de-quantization distributions differ, density models with different architectures and de-quantization strategies cannot be compared in a controlled manner. <br />
<br />
One way to compare Flow++ and uniformly de-quantized generative models fairly is to permit researchers to use whatever variational bound they like at training time, but standardize the evaluation of likelihood at evaluation time to be some tight multi-sample bound. The intuition here is that as we integrate over more samples, we get a better approximation of the true log-likelihood of the corresponding discrete model $P_\theta(x)$.<br />
<br />
For instance, we could report the multi-sample bound from a fixed U(0, 1) de-quantization distribution, as is commonly done in the VAE literature with multi-sample IWAE bounds. A discussion of VAEs and IWAE bounds is out of the scope of this tutorial, and will be covered in the future.<br />
<br />
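As a sketch of what such a standardized multi-sample evaluation might look like, we can estimate $\log P_\theta(x) = \log \int_0^1 p_\theta(x+u) du$ by a log-mean-exp over K uniform noise draws; the helper name and the choice of K below are my own, not from any particular paper:

```python
import numpy as np

def multisample_log_mass(log_density_fn, x, k=64, rng=None):
    """Estimate log P(x) = log of the integral of p(x+u) over u in [0, 1)
    using K uniform de-quantization samples. This is a lower bound in
    expectation that tightens as K grows."""
    rng = rng or np.random.default_rng(0)
    logp = np.array([log_density_fn(x + u) for u in rng.random(size=k)])
    # Numerically stable log-mean-exp of the K density evaluations.
    m = logp.max()
    return m + np.log(np.mean(np.exp(logp - m)))
```

For a density that is constant over the unit interval around x, the estimate is exact regardless of K.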
<h4>
Side Note: Data Preprocessing for Normalizing Flows</h4>
<br />
<a href="https://blog.evjang.com/2018/01/nf1.html">Normalizing Flows</a> are a family of generative models that “transform” a base probability distribution into a more complicated probability distribution.<br />
<br />
<br />
<img src="https://lh3.googleusercontent.com/joXi6MzAiWxX9MtZR7QRMR6w7OFa5U1zQ9UR9odufiXQ3T6sVUXfcHDEcRjWm6NhWdG1TFhUEi22jxA0mv9pQ03A63uotmT1Sf2MUKaoYGaUgZ9H8Jj_M05nSx9rueAvfMK_70fU" /><br />
<br />
<br />
Normalizing Flows learn transformations that have tractable inverses and Jacobian determinants. Being able to compute these two quantities efficiently allows us to calculate the transformed distribution’s log-density using the change-of-variables rule:<br />
<br />
<br />
$$ \log p(y) = \log p(x) - \log |\det J(f)(x)| $$<br />
<br />
<br />
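A toy example of the change-of-variables rule, using a simple affine bijector $y = 2x + 1$ applied to a standard normal base density (this is purely illustrative, not any particular paper's flow):

```python
import numpy as np

def base_logpdf(x):
    """Log-density of a standard normal base distribution."""
    return -0.5 * (x**2 + np.log(2.0 * np.pi))

# Toy bijector y = f(x) = 2x + 1, with inverse x = (y - 1) / 2 and |det J| = 2.
def flow_logpdf(y):
    x = (y - 1.0) / 2.0
    # Change of variables: log p(y) = log p(x) - log |det J(f)(x)|.
    return base_logpdf(x) - np.log(2.0)
```

Here `flow_logpdf` is exactly the density of a Normal(1, 2&sup2;), as expected from pushing a standard normal through f.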
The vast majority of Normalizing Flows operate over continuous density functions (thus requiring the volume-tracking Jacobian determinant term), though there is some recent research on “discrete flows” that learn to transform probability mass functions rather than transforming density (<a href="https://arxiv.org/abs/1905.10347">Tran et al. 2019</a>, <a href="https://arxiv.org/pdf/1905.07376.pdf">Hoogeboom et al. 2019</a>). We won’t be discussing these flows much in this blog post, but suffice it to say that they work by devising bijective discrete transformations of discrete base distributions. <br />
<br />
<br />
In addition to the stochastic de-quantization mentioned earlier, there are a couple additional tricks employed when training normalizing flows for image data. <br />
<br />
Empirically, re-scaling the data from the range [0, 256] to the unit interval [0, 1] prior to maximum likelihood estimation helps stabilize training, as neural network biases are usually centered around zero.<br />
<br />
To prevent boundary issues where a sample from the base distribution could get mapped by the flow to a point outside of the re-scaled boundary $(0, 1)$, we can transform the re-scaled data to the range $(-\infty, \infty)$ via the logit function (the inverse of the sigmoid function).<br />
<br />
<br />
We can think of these re-scaling and logistic transforms as "preprocessing" flows at the beginning of the model, where just like any other bijector, we have to account for the change in volume induced by the transformation.<br />
<br />
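Here is a rough NumPy sketch of these preprocessing steps with the log-determinant bookkeeping made explicit. The padding value $\lambda = 0.05$ follows the convention in Dinh et al. 2016; the function name is my own:

```python
import numpy as np

LAMBDA = 0.05  # boundary padding, following the convention in Dinh et al. 2016

def preprocess(x, rng=None):
    """Map uint8 pixels to logit space, tracking the total log |det J|."""
    rng = rng or np.random.default_rng(0)
    y = x.astype(np.float64) + rng.random(size=x.shape)  # de-quantize to [0, 256)
    y = y / 256.0                                        # re-scale to (0, 1)
    logdet = -np.log(256.0) * x.size                     # per-element volume change
    y = LAMBDA + (1.0 - 2.0 * LAMBDA) * y                # squeeze into (lambda, 1-lambda)
    logdet += np.log(1.0 - 2.0 * LAMBDA) * x.size
    z = np.log(y) - np.log1p(-y)                         # logit (inverse sigmoid)
    logdet += (-np.log(y) - np.log1p(-y)).sum()          # d logit/dy = 1/(y(1-y))
    return z, logdet
```

To report likelihoods in the original de-quantized space, `logdet` must be subtracted back out of the model's log-density in logit space, just as with any other bijector in the flow.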
The important thing to realize here is that for evaluation purposes, pixel densities should always be computed in the continuous interval [0, 256], so that we can compare likelihoods from flows and autoregressive models over the same data (up to the variational gap induced by the stochastic de-quantization).<br />
<br />
Here is a diagram showing a standard normalizing flow for RGB images, with the original discrete data on the left and the base density (can be a Gaussian, a logistic, or whatever your favorite tractable density is) on the right.<br />
<br />
<br />
<img src="https://lh5.googleusercontent.com/Y79cc7ArMglzUe-OSMRi5FbDY_YkW77p8CBVJCqQlx51u6yqmjR03dDwY6nIz8UiWV36eCHEjM-ndKh0sQG04ca0ahABSjzwx-DlC_dUe3UtgGj0Cpj9hbu_d28EgyGV2ZZ1uC8E" /><br />
<br />
<br />
Generative model likelihoods are typically reported in de-quantized space (green). Starting with Dinh et al. 2016, many flow-based models re-scale pixels to $[\lambda, 1-\lambda]$ and apply the logit function (inverse sigmoid) to help with numerical stability at the boundaries.<br />
<br />
<br />
<h4>
Discretized Logistic Mixture Likelihood</h4>
<br />
One drawback of modeling pixels with categorical distributions is that a categorical cross entropy loss cannot tell us that a pixel value of 127 is closer to 128 than it is to 0. For an observed pixel value $p$, the gradient of the categorical cross entropy is constant with respect to pixel intensity (since the loss treats the categories as un-ordered). Although the cross entropy gradient is non-zero, it is said to be “sparse” because it does not provide information on how close (in pixel intensity) we are to the target distribution. Ideally, we would like gradients to be larger in magnitude when the predicted intensity is far away from the observed value, and smaller when the model is close. <br />
<br />
A more serious problem with modeling pixels as categorical distributions is that if we choose to represent more than 256 categories, we’d be in trouble. For example, we might want to model the R, G, and B channels jointly (that’s $256^3 \approx 16.8$ million categories!) or model higher-precision pixel encodings than uint8 for HDR images. We’d quickly run out of memory attempting to store the projection matrices needed to map neural net activations to logits for that many categories.<br />
<br />
Two concurrent papers, <a href="https://arxiv.org/pdf/1606.04934.pdf">Inverse Autoregressive Flow</a> and <a href="https://arxiv.org/pdf/1701.05517.pdf">PixelCNN++</a>, solve these problems by introducing a probability distribution for modeling RGB pixels as <a href="https://en.wikipedia.org/wiki/Ordinal_data">ordinal data</a>, for which gradient information from the cross entropy loss can push pixels in the right direction while still preserving a discrete probability model. <br />
<br />
We can model continuous pixel probability densities via a mixture of logistics, which is a continuous distribution. To recover probability mass for discrete pixel values, we can exploit a convenient property of the logistic distribution: its CDF is the sigmoid function. By subtracting two sigmoids, CDF(x+0.5) - CDF(x-0.5), we recover the total probability mass in the unit-width bin centered on the integer pixel value x.<br />
<br />
<br />
<img height="127" src="https://lh3.googleusercontent.com/QKJkJwJxgc4nNIcy5PgaKvWvqsr2dO0A-aLPWDo59wfLpu-1TV9F-rWGJFWSMBXxVtvl1d9NokswEL_5wPb0YA0m2TxBaQedJL7xiTscSZs-oaq_80mtORkcBpWER6yMggK4kJgv" width="640" /><br />
<br />
<br />
For example, the probability of a pixel having value=127 is modeled as the probability mass lying between 126.5 and 127.5 for a continuous mixture of logistic distributions. The probability model must also account for edge cases such that CDF(0-0.5) is 0 and CDF(255+0.5) is 1, as is required of probability distributions. <br />
<br />
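A minimal sketch of this discretized logistic probability mass for a single logistic component (PixelCNN++ uses a mixture of several such components, plus channel conditioning, which is omitted here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_pmf(x, loc, scale):
    """P(pixel = x) for x in {0, ..., 255}: the logistic CDF is a sigmoid,
    so the mass in the unit bin around x is CDF(x+0.5) - CDF(x-0.5),
    with the edge cases CDF(-0.5) -> 0 at x=0 and CDF(255.5) -> 1 at x=255."""
    upper = np.where(x == 255, 1.0, sigmoid((x + 0.5 - loc) / scale))
    lower = np.where(x == 0, 0.0, sigmoid((x - 0.5 - loc) / scale))
    return upper - lower
```

Because consecutive bins share an edge, the pmf telescopes and sums to exactly 1 over {0, ..., 255}, so this is a valid discrete probability model despite being built from a continuous density.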
Representing pixels in this way also affords the luxury of being able to handle a lot more categories, which means that PixelCNN++ can model the R, G, and B pixel channels at once. The caveat here is that you must tune the number of mixture components adequately (on Cifar-10, 5 seems to be enough).<br />
<br />
Analogous to how <a href="https://arxiv.org/abs/1905.10347">Tran et al. 2019</a> devise discrete flows on top of categorical distributions, <a href="https://arxiv.org/pdf/1905.07376.pdf">Hoogeboom et al. 2019</a> devise discrete flows for ordinal data by using this discretized logistic mixture likelihood as the base distribution. This gets the best of both worlds: we get to use normalizing flows for tractable inverses and sampling, while avoiding having to optimize a de-quantized likelihood objective (which may incur a variational lower-bound penalty on the likelihood). Both are very exciting papers that I hope to write about more in the future! <br />
<br />
<h3>
Perplexity</h3>
<div>
<br /></div>
Log-likelihood is also a common metric for evaluating generative models in the language modeling domain. A discrete alphabet without ordering makes Categorical distributions the most natural choice for density modeling.<br />
<br />
One quirk of the natural language processing (NLP) field is that language model likelihoods are often evaluated in units of <a href="https://en.wikipedia.org/wiki/Perplexity">Perplexity</a>, which is given by $2^{H(p, q)}$. The log of the inverse perplexity, $\log_2 2^{-H(p, q)}$, is nothing more than the average log-likelihood $-H(p, q)$. Perplexity is an intuitive quantity because it measures the effective "branching factor" of a random variable, i.e. the weighted average number of choices the random variable has. The relationship between perplexity and log-likelihood is so straightforward that some papers (e.g. <a href="https://arxiv.org/pdf/1802.05751.pdf">Image Transformer</a>) actually use the word “perplexity” interchangeably with log-likelihood.<br />
<br />
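The perplexity / log-likelihood relationship fits in a few lines; the uniform-distribution example illustrates the "branching factor" interpretation:

```python
import math

def perplexity(avg_nll_nats):
    """Perplexity is 2**H(p, q) with H measured in bits per token,
    which equals exp(H) when H is measured in nats per token."""
    return math.exp(avg_nll_nats)

# "Branching factor" intuition: a model that assigns uniform probability
# 1/1000 to each of 1000 tokens has an average NLL of log(1000) nats,
# and its perplexity is exactly 1000 -- the number of equally likely choices.
```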
<h3>
Closing Thoughts</h3>
<br />
In this blog post we derived the relationship between maximizing average log-likelihood and compression. We also mentioned several modeling choices between discrete and continuous likelihood models for individual pixels.<br />
<br />
There is a broader question of whether likelihood is even the right quantity to be measuring and optimizing. At <a href="https://blog.evjang.com/2017/01/nips2016.html">NIPS 2016</a> (now known as the NeurIPS conference), I recall a pretty lively debate at the generative modeling workshop about whether optimizing tractable-likelihood models was even a good idea.<br />
<br />
It turns out that optimizing and evaluating against likelihood was a good idea after all, because since then researchers have figured out how to build and scale up much more <a href="https://openai.com/blog/better-language-models/">flexible likelihood models</a> while keeping them computationally tractable. Models like Glow, GPT-2, WaveNet, and Image Transformer are trained with likelihood and can generate image, audio and text samples with stunning quality. On the other hand, one might argue that at the end of the day, generative modeling needs to be coupled to performance on an actual task, such as classification accuracy when the model is <a href="https://openai.com/blog/better-language-models/">fine-tuned on a labeled dataset</a>. My colleague Niki Parmar says the following about images vs text likelihood models:<br />
<blockquote class="tr_bq">
On text, there is generally a pattern where better likelihood leads to better performance on downstream tasks like GLUE. On images, I've heard from other practitioners that pixel prediction doesn't work as a pre-training task for downstream tasks like image classification. It could be because pixels don't mean much in terms of representations as compared to word-pieces or words in text. It's an open question but I find it fascinating that representation learning in images is quite different, almost difficult to establish and measure.</blockquote>
In a future blog post, I’ll build on top of this tutorial and discuss the evaluation of generative models that optimize variational lower bounds on log-likelihood (e.g. Variational Autoencoders, importance-weighted autoencoders).<br />
<br />
<h3>
Further Reading</h3>
<ul>
<li><a href="https://twitter.com/ericjang11/status/1141798416172797954">Twitter Thread</a> discussing alternative metrics to the commonly reported log-likelihood and IWAE bounds.</li>
<li>If you’re curious about what the compression ratios are for common lossless image compression algorithms, check out this <a href="https://github.com/fhkingma/bitswap/blob/master/benchmark_compress.py">script</a> and <a href="https://arxiv.org/pdf/1905.06845v3.pdf">paper</a>.</li>
<li>See the <a href="https://arxiv.org/pdf/1601.06759.pdf">PixelRNN paper</a> and this <a href="https://www.reddit.com/r/MachineLearning/comments/79mdd8/d_whats_the_intuition_behind_using_softmax_in/">Reddit thread</a> for further discussion on why categorical classification is preferable to continuous density modeling. </li>
<li><a href="https://projecteuclid.org/download/pdfview_1/euclid.aos/1336396183">Proper Local Scoring Rules</a> - Thanks to Ferenc Huszár for pointing this paper out to me.</li>
<li><a href="https://arxiv.org/pdf/1511.01844.pdf">A Note on the Evaluation of Generative Models</a> - A terrific paper by Theis et al. that is a must-read for anyone getting started in the field of generative modeling.</li>
<li><a href="https://arxiv.org/pdf/1611.04273.pdf">On the Quantitative Analysis of Decoder-based Generative Models</a></li>
<li><a href="https://towardsdatascience.com/perplexity-intuition-and-derivation-105dd481c8f3">Tutorial and Derivation of Perplexity</a>, and this <a href="https://stats.stackexchange.com/questions/10302/what-is-perplexity">Stack Exchange question on Perplexity</a>. Being unfamiliar with NLP myself, these links were super helpful for learning about the topic.</li>
<li>The generative modeling community is fairly consistent about reporting MNIST in nats and color images in bits-per-dim, though <a href="https://arxiv.org/pdf/1810.01367.pdf">some papers</a> report MNIST likelihoods in bits/dim and <a href="https://arxiv.org/pdf/1705.07057.pdf">others</a> will report CIFAR10 in nats.</li>
<li>This paper by <a href="https://arxiv.org/abs/1610.09033">Ranganath et al.</a> proposes a general framework for thinking about variational inference (e.g. VAEs) by moving away from the standard KL divergence objective. One first imagines the desiderata they’d like to see in their divergence measure (e.g. sampling, preventing mode collapse), and then the paper proposes a method to recover a variational objective for the desired divergence. Thanks to Dustin Tran for pointing this one out to me. </li>
</ul>
<div>
<h3>
Acknowledgements</h3>
Many thanks to <a href="http://dustintran.com/">Dustin Tran</a>, <a href="https://twitter.com/nikiparmar09">Niki Parmar</a>, and <a href="https://medium.com/@vanhoucke">Vincent Vanhoucke</a> for reviewing drafts of this blog post. As always, thank you for reading!<br />
<div>
<span id="docs-internal-guid-b422d2d6-7fff-a3fe-591b-09094846fc93"><br /></span></div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-9798086587453060562019-05-23T17:32:00.002-07:002019-06-27T11:49:52.419-07:00Lessons from AI Research Projects: The First 3 YearsTranslations: <a href="https://www.jianshu.com/p/1e19e17e826f">中文</a><br />
<br />
I've been at Google Brain robotics (now referred to as <a href="https://ai.google/research/teams/brain/robotics/">Robotics @ Google</a>) for nearly 3 years. It's helpful to reflect, from time to time, on the scientific, engineering and personal productivity takeaways gleaned from working on large research projects. Every researcher's unique experiences and experimentation can potentially become their personal competitive edge for thinking about new problems in unique ways. Here are mine (so far).<br />
<br />
These are ordered chronologically (earliest work first), so that the reader can see how my past experiences shape my current biases and beliefs (orange = first author).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg_gsQXc_BIeYquUEgSLlLxmA9K2y9AAOnFuxR6chU4Uw6H1fJVDBKQScdSvFEjVDGJCaxewmsZMIpTqSE0EWC4by-QcvU3nCky389XsHmfvAxxCVK-RL6fiRCjWKJ89rRvoS6z3hISCc/s1600/timeline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="960" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg_gsQXc_BIeYquUEgSLlLxmA9K2y9AAOnFuxR6chU4Uw6H1fJVDBKQScdSvFEjVDGJCaxewmsZMIpTqSE0EWC4by-QcvU3nCky389XsHmfvAxxCVK-RL6fiRCjWKJ89rRvoS6z3hISCc/s640/timeline.png" width="640" /></a></div>
<br />
<br />
<a href="https://arxiv.org/abs/1611.01144">Categorical Reparameterization with Gumbel-Softmax</a><br />
<br />
<ul>
<li>The importance of a work environment that encourages serendipitous discovery and 20% time (the inspiration for Gumbel-Softmax came to me in a water cooler conversation I was having with Shane Gu).</li>
<li>Research on very basic techniques (e.g. generative modeling) can have a huge impact through various downstream applications.</li>
<li>The simplest method to implement is the one that gets cited the most.</li>
</ul>
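For the curious, the trick itself fits in a few lines. Here is a minimal NumPy sketch of drawing a relaxed categorical sample (illustrative only; in practice this is paired with learned logits and, optionally, the straight-through estimator):

```python
import numpy as np

def sample_gumbel_softmax(logits, tau, rng):
    """Draw one relaxed one-hot sample from a categorical distribution.

    Adds i.i.d. Gumbel(0, 1) noise to the logits, then applies a
    temperature-controlled softmax. As tau -> 0 the sample approaches a
    discrete one-hot vector; larger tau gives smoother samples."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()              # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.7, 0.2, 0.1]))
sample = sample_gumbel_softmax(logits, tau=0.5, rng=rng)
assert abs(sample.sum() - 1.0) < 1e-9   # the sample lies on the probability simplex
```

Because the whole sampling path is differentiable in the logits, gradients can flow through discrete-looking choices.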
<br />
<a href="https://arxiv.org/abs/1707.01932">End-to-End Learning of Semantic Grasping</a><br />
<br />
<ul>
<li>The notion of a "class label" is meaningless, and is the wrong way to tackle goal-conditioned grasping.</li>
<li>ML can help robotics, but robotics can also help ML (i.e. retroactive labeling via present poses).</li>
<li>The importance of moving fast, investing in visualization and analysis tools (e.g. <a href="https://twitter.com/ericjang11/status/1100115587966033920">notebooks</a>) that do not require a robot.</li>
</ul>
<br />
<a href="https://sermanet.github.io/tcn/">Time Contrastive Networks</a><br />
<br />
<ul>
<li>All you need is high-quality data and a contrastive loss. Pierre Sermanet is fond of saying, tongue-in-cheek, that these two things will get us to AGI.</li>
<li>Dream big.</li>
</ul>
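As a concrete illustration of the second ingredient, here is a minimal triplet-style contrastive loss in NumPy. (TCN's actual loss operates on learned frame embeddings from multiple camera views; this toy sketch just shows the hinge structure.)

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor embedding toward the
    positive (e.g. the same moment seen from another camera) and push it
    away from the negative (e.g. a temporally distant frame)."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared Euclidean distance
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])    # nearby in embedding space, hinge inactive
negative = np.array([-1.0, 0.0])   # far away in embedding space
assert triplet_loss(anchor, positive, negative) == 0.0
```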
<br />
<a href="https://goo.gl/pyMd6p">Deep Reinforcement Learning for Vision-Based Robotic Grasping</a><br />
<br />
<ul>
<li>The importance of a fast prototyping environment and quick experiment turnaround times.</li>
<li>Q-Learning works and scales pretty well.</li>
</ul>
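To ground the claim that "Q-Learning works," here is a tabular Q-learning sketch on a toy chain environment. (The paper itself uses deep function approximation on camera images; this toy problem is purely my own illustration of the update rule.)

```python
import numpy as np

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right), reward 1 at the goal.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))
alpha, gamma, eps = 0.5, 0.9, 0.5   # high exploration for this tiny problem
for _ in range(500):                # episodes
    s = 0
    for _ in range(200):            # cap episode length
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # one-step TD backup toward r + gamma * max_a' Q(s2, a')
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

# The learned greedy policy moves right from every non-terminal state.
assert all(Q[s, 1] > Q[s, 0] for s in range(GOAL))
```

Because the toy environment is deterministic, the Q-values converge to the discounted optimal values (e.g. Q(0, right) approaches gamma^3 = 0.729).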
<br />
<a href="https://sites.google.com/corp/view/qtopt">QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation</a><br />
<br />
<ul>
<li>Most people don’t really care how QT-Opt is trained; they are excited about what a trained QT-Opt system can do.</li>
<li>All you need is scale, compute, and data.</li>
</ul>
<br />
<a href="https://sites.google.com/site/grasp2vec/">Grasp2Vec: Learning Object Representations from Self-Supervised Grasping</a><br />
<ul>
<li>Magical things can happen if you focus on innovations in better-structured data, instead of better algorithms (all you need is high-quality data and a contrastive loss).</li>
<li>The notion of a class label is meaningless.</li>
<li>Good reward functions are a very nice piece of "Software 2.0" infrastructure: modular functionality, quick to verify for correctness, and does not impose strong assumptions on upstream or downstream computations (in contrast to RL algorithms).</li>
<li>More on <a href="https://twitter.com/ericjang11/status/1056442461965430785?s=20">Twitter</a>.</li>
</ul>
<br />
<div>
<a href="https://arxiv.org/abs/1810.01392">Generative Ensembles for Robust Anomaly Detection</a></div>
<div>
<ul>
<li>Thinking deeply about the nature of the OoD problem and <a href="https://blog.evjang.com/2018/12/uncertainty.html">different types of uncertainty</a>. </li>
<li>The OoD problem is ill-posed, but still useful for practical applications.</li>
<li>OoD and generalization are two sides of the same coin.</li>
<li>I spent 10 days in Jeju mentoring DL camp students. Every day I woke up, ate 3 meals in the same cafeteria downstairs, had no meetings, and thought really hard about the research problem. This monastic working environment was tremendously useful for my creative "flow".</li>
</ul>
<a href="https://sites.google.com/corp/view/watch-try-learn-project/">Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards</a></div>
<div>
<ul>
<li>Optimal control theory says that we need RL to make robots work, but you can get surprisingly far with the original Deep Learning recipe: supervised learning + lots of data + architecture tuning.</li>
<li>Meta-Learning is all about pushing the burden of learning into the prior.</li>
<li>Generative modeling (e.g. principled approaches to density estimation, being able to fit multi-modal distributions) is important for scaling up robotics.</li>
<li>More on <a href="https://twitter.com/ericjang11/status/1138312486027816960?s=20">Twitter</a>.</li>
</ul>
<br />
<div>
General Lessons from Deep RL + Robotics</div>
</div>
<div>
<ul>
<li>I am increasingly of the opinion that the biggest wins in making an ML system work come from high-quality data. Many researchers in sub-fields of ML do not <i>prioritize the choice of data</i> when looking for ways to improve on benchmarks. Deep RL on real robots is a great way to do ML research, because the researcher is forced to gather their own dataset and contend with how data biases generalization outcomes.</li>
<li>Robotics is full-stack ML (gathering and serializing custom data, building a custom data pipeline, training and evaluation binaries, inference on a real robotic system), which increases iteration times & decreases opportunities for spontaneous creativity and discovery. Robotics projects tend to take ~1 FTE year to finish, while most DL papers can be completed in 2-3 months. One of the most important things to me right now is figuring out how we can achieve the same iteration speeds in robotics as achieved in other deep learning domains.</li>
<li>Best software engineering practices for de-risking Deep RL engineering are in their early days. How to keep a full-stack dev environment flexible and fast to iterate on (scientific, creative risk) while keeping technical debt from bubbling over (execution risk)? My colleagues and I designed <a href="https://github.com/google-research/tensor2robot">Tensor2Robot</a> to solve a lot of our large-scale ML + robotics problems, but this is just the beginning.</li>
</ul>
</div>
<br />
The scope of this post is limited to my own research projects. Of course, there are papers that I didn't work on but that shaped my views tremendously. I'll mention those in a follow-up blog post.

<h3>Fun with Snapchat's Gender Swapping Filter (2019-05-12)</h3>
Snapchat's new gender-bending filter is a source of endless fun and laughs at parties. The results are very pleasing to look at. As someone who is used to working with machine learning algorithms, I find it almost magical how robust this feature is.<br />
<br />
I was so duly impressed that I signed up for Snapchat and fiddled around with it this morning to try and figure out what's going on under the hood and how I might break it.<br />
<br />
N.B., this is not a serious exercise in reverse-engineering Snapchat's IPA file or in studying how other apps engineer similar features; it's just some basic hypothesis testing into when it works and when it doesn't, plus a little narcissistic bathroom selfie fun.<br />
<br />
<br />
<b>Initial Observations</b><br />
<b><br /></b>
The center picture is a standard bathroom selfie. To the left is the "male" filter, and on the right the "female" filter.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeLMR4TagMKFsjvQFMz-Tfd4hzsuOE1fZlortFmkLtjbA0p25urhO0SCxvhbQvIXEh97CRNSs-sDWxLESR-B1qp7MX_lgj9zGlRkCaGKZhb1F2ErZRGMVWiFEnkm0INLQxaOxMgIr1WZY/s1600/triple.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="575" data-original-width="1600" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeLMR4TagMKFsjvQFMz-Tfd4hzsuOE1fZlortFmkLtjbA0p25urhO0SCxvhbQvIXEh97CRNSs-sDWxLESR-B1qp7MX_lgj9zGlRkCaGKZhb1F2ErZRGMVWiFEnkm0INLQxaOxMgIr1WZY/s640/triple.png" width="640" /></a></div>
<br />
<div>
<br /></div>
<div>
The first thing most users probably notice is that the app works in <i>real time</i>, works with a few different face angles, and does not require an internet connection to run. Hair behaves very naturally when wearing a beanie.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMTz1sh033boULtFkSpPC9ALAALrXsn70VdAcvJ4wAR5BaAP75l2_lDNb8yryloWv9EFUAMIMVmY32kwBm21eSUG2LtQWmF61l1RNH7IOhcy38KRPj_iJC3UQB_VhvFNTekYYyiDuWLeg/s1600/beanie.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMTz1sh033boULtFkSpPC9ALAALrXsn70VdAcvJ4wAR5BaAP75l2_lDNb8yryloWv9EFUAMIMVmY32kwBm21eSUG2LtQWmF61l1RNH7IOhcy38KRPj_iJC3UQB_VhvFNTekYYyiDuWLeg/s320/beanie.gif" width="179" /></a></div>
</div>
<div>
<br /></div>
Here's a rotating profile shot. The app seems to detect whether the face is pointing in a permissible orientation, and only if that boolean is satisfied does the filter get applied.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF_NP3Yeru9Luo36spSHf6HvOaUh9EnKTEqZGOXnqYYJjqGkKFYZJAhrfR79ZLEaOA973S2lSlBt6Y9dKKHF1FcuEvyRIuEMt_FMena5qFHmV-719fPfkonW2AfCUpxuAL1XlvSsxN0uQ/s1600/turn.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF_NP3Yeru9Luo36spSHf6HvOaUh9EnKTEqZGOXnqYYJjqGkKFYZJAhrfR79ZLEaOA973S2lSlBt6Y9dKKHF1FcuEvyRIuEMt_FMena5qFHmV-719fPfkonW2AfCUpxuAL1XlvSsxN0uQ/s320/turn.gif" width="179" /></a></div>
<br />
<br />
<div>
Gender swap works in a variety of lighting conditions, though the hair does not seem to cast shadows.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju2jv_zWKD_uLc23OjHnWiTKXanXUUr3sZ5WukaX7dJ33XrYfiIaLJ6M4SHoRfIGRfLHILk8dGBGrG2QKXUiNG8q9T1gxrK68TvXnSyzEHobNYSO4bpXXDwr2SODAVe4PzJJHXZHq8ZCQ/s1600/girls.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="793" data-original-width="1600" height="197" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju2jv_zWKD_uLc23OjHnWiTKXanXUUr3sZ5WukaX7dJ33XrYfiIaLJ6M4SHoRfIGRfLHILk8dGBGrG2QKXUiNG8q9T1gxrK68TvXnSyzEHobNYSO4bpXXDwr2SODAVe4PzJJHXZHq8ZCQ/s400/girls.png" width="400" /></a></div>
<br />
Damn! I look cute.<br />
<br />
Here was an example that I thought was really cool - the hair captures the directional key lighting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjme6KNffauV8Lc8iMLNPw8ZQxm9Ong4hiqtM_UzYoR6YiUmsKzPIYfzxTFtqJ_w94sqBkGQyLUUnkilQpyQIzeJ-fpGW2591Ht8q86JapJSU_Jhzfwwt2Qd89jVAEv50lyYoGHAVC7xa4/s1600/IMG_2176.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="655" data-original-width="697" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjme6KNffauV8Lc8iMLNPw8ZQxm9Ong4hiqtM_UzYoR6YiUmsKzPIYfzxTFtqJ_w94sqBkGQyLUUnkilQpyQIzeJ-fpGW2591Ht8q86JapJSU_Jhzfwwt2Qd89jVAEv50lyYoGHAVC7xa4/s320/IMG_2176.jpg" width="320" /></a></div>
<br />
<br />
<b>Occlusion Tests</b><br />
<b><br /></b>
Ok, it works pretty well. Can we get it to fail? The app detects when the face is in the wrong pose, but what if there are things occluding the face? Do those occluding objects get "transformed" too?<br />
<br />
The answer is yes. Below is a test where I slide an object across my face. The app works when half the face is occluded, but it seems like if too much of the face is blocked, the "should I face swap" bit is set to False.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwkSkoI9VGzUSo_DtAaBO9CX0SO2e3ZJx_Aq7CwA-H_cIewyghuJfxlX1nMP6vndjFyFaIBjAyUA_xGFoGxIVo955PXxBB2G4aw5AN2iVtIeBXhg6Y9F_GRJWMn6hIAGKsIbbY_mHiGt4/s1600/occlusion.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwkSkoI9VGzUSo_DtAaBO9CX0SO2e3ZJx_Aq7CwA-H_cIewyghuJfxlX1nMP6vndjFyFaIBjAyUA_xGFoGxIVo955PXxBB2G4aw5AN2iVtIeBXhg6Y9F_GRJWMn6hIAGKsIbbY_mHiGt4/s320/occlusion.gif" width="179" /></a></div>
<br />
<br />
Here's vertical occlusion, where the bit seems to depend on "what percentage of the face real estate is occluded" rather than on which semantically important features (e.g. eyes, lips) are occluded. Right before the app flips the "should I face swap" bit to "False", you can see the blurring of the white bottle. Also, my hair turns blonde as I center the bottle in view.<br />
<br />
Very interesting. This suggests to me that there is <i>definitely</i> some machine learning going on here, and that it's picking up on some statistical artifact of the data it was trained on. Do blondes tend to make more makeup tutorials or something?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjh5Gtf2JmXazlNVrk097OUaaybHBOmuxZxdjhwDLM0hZuj_nOeTk26-KpmnMLXZmZpmMe67kUy5_8PU-mNZrPm2eLda9PVi9kqu76kERMNpdtA2QX4cCVp8L3H3X49iTY5BkNOeRDNjUs/s1600/blonde.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjh5Gtf2JmXazlNVrk097OUaaybHBOmuxZxdjhwDLM0hZuj_nOeTk26-KpmnMLXZmZpmMe67kUy5_8PU-mNZrPm2eLda9PVi9kqu76kERMNpdtA2QX4cCVp8L3H3X49iTY5BkNOeRDNjUs/s320/blonde.gif" width="179" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
I partially covered my face in a black charcoal masque, and things seemed pretty stable. The female filter does lighten the masque a bit. It's pretty easy to tell from this GIF that the "face swap" feature is confined to a rectangular region that tracks the head (note the sharp cutoff of the hair as it gets to my shoulders).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEpjABiXa_asaF-22FJHOeSNZM5eNCwKGxvZSiPmHhhB5NY4r7-26g8-LPmDiSJ8roQjZn1OdGgTdCGGUTCPdj3j3QE7kv3cDXLIgvRFb5TolQUZMgeea82hbOTMeEleLan_OFgk2in0A/s1600/partial_mask.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEpjABiXa_asaF-22FJHOeSNZM5eNCwKGxvZSiPmHhhB5NY4r7-26g8-LPmDiSJ8roQjZn1OdGgTdCGGUTCPdj3j3QE7kv3cDXLIgvRFb5TolQUZMgeea82hbOTMeEleLan_OFgk2in0A/s320/partial_mask.gif" width="179" /></a></div>
<br />
The filter stops working once I cover the rest of my face in the masque. Interestingly enough, the ovoid regions of my uncovered skin seem to be detected as faces, and the app proceeds to perform the style transform on that region. You can see the head and face templates flickering in and out like some <a href="https://en.wikipedia.org/wiki/Tomie">kind of Junji Ito horror story</a>.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0g2UOHn0OUiDgJoUHVAOpJIxmEb64NPPho96kZSdnM3zNvebIl_SLBl9BaB244H-9ESicUvLrYiSh1kszMDYBadYsPvBfGO2fTiR5tRAE27ZKzdBHwvVaOZh7g6yT-ZmYJTja7t64FhY/s1600/tomie.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0g2UOHn0OUiDgJoUHVAOpJIxmEb64NPPho96kZSdnM3zNvebIl_SLBl9BaB244H-9ESicUvLrYiSh1kszMDYBadYsPvBfGO2fTiR5tRAE27ZKzdBHwvVaOZh7g6yT-ZmYJTja7t64FhY/s320/tomie.gif" width="179" /></a></div>
<br />
Peeling off the masque is surprisingly stable.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-surkWjmTTwyJmExayFudDAszxWXwtgfANvZxDlaD4v1bbzFlTPgp8VCpnVvPN9LBQ5tuf4NPJmIjXF8h6SPaHVT0rIAPYWqRNAJOd9DP50r-7xvQFTFnkEsKYm7A1UhijR6XjkvvBkE/s1600/tear.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-surkWjmTTwyJmExayFudDAszxWXwtgfANvZxDlaD4v1bbzFlTPgp8VCpnVvPN9LBQ5tuf4NPJmIjXF8h6SPaHVT0rIAPYWqRNAJOd9DP50r-7xvQFTFnkEsKYm7A1UhijR6XjkvvBkE/s320/tear.gif" width="179" /></a></div>
<br />
<b>Hair Layer</b><br />
<b><br /></b>
I was most impressed by the realism of the hair, so I wanted to figure out whether there were any hair mesh models used for dynamic lighting, or whether it was all machine-learning based.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHWU189-UXjmfB5CufP5DZa43vJk-sqfw1SXmEvbG0LLQ9FjtfOG1m0DcQBXNDmUy7q2EyWyagSRznbu7RamkBwrn-pyd4iTeDNNoTGgvScuhZl8Ek97I7NyfyPHRGjnRNm65_-e62UlU/s1600/hair-occlude.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHWU189-UXjmfB5CufP5DZa43vJk-sqfw1SXmEvbG0LLQ9FjtfOG1m0DcQBXNDmUy7q2EyWyagSRznbu7RamkBwrn-pyd4iTeDNNoTGgvScuhZl8Ek97I7NyfyPHRGjnRNm65_-e62UlU/s320/hair-occlude.gif" width="179" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
The hair seems to be rendered as the topmost layer (like a Photoshop layer), but unlike your basic puppy ear/tongue filter, this hair layer has an alpha channel that is partially transparent. If you look closely there is also a clear segmentation mask for the hair that allows the face to show through. Snapchat is probably doing head tracking to figure out where the head is, computing the 2D alpha mask for the hair.<br />
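This "semi-transparent top layer" behavior is standard alpha-over compositing. A NumPy sketch of that rendering step (my guess at what happens, not Snapchat's actual code):

```python
import numpy as np

def alpha_over(fg_rgb, fg_alpha, bg_rgb):
    """Composite a foreground layer with a per-pixel alpha mask over a
    background: out = alpha * fg + (1 - alpha) * bg (the "over" operator)."""
    a = fg_alpha[..., None]          # broadcast alpha across the RGB channels
    return a * fg_rgb + (1.0 - a) * bg_rgb

# 2x2 "hair" layer: opaque top-left, half-transparent bottom-left, clear right.
hair = np.ones((2, 2, 3)) * np.array([0.6, 0.4, 0.2])   # brownish color
alpha = np.array([[1.0, 0.0], [0.5, 0.0]])
face = np.ones((2, 2, 3)) * 0.8                          # flat background

out = alpha_over(hair, alpha, face)
assert np.allclose(out[0, 0], [0.6, 0.4, 0.2])  # hair fully covers the face
assert np.allclose(out[0, 1], 0.8)              # background shows through
assert np.allclose(out[1, 0], [0.7, 0.6, 0.5])  # 50/50 blend at the hair's edge
```

The interesting (and presumably learned) part is producing the hair RGB and alpha mask per-frame; the compositing itself is trivial.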
<br />
<br />
<b>How does it work? A guess</b><br />
<b><br /></b>
At first glance, my mind jumped to some sort of <a href="https://junyanz.github.io/CycleGAN/">CycleGAN</a> architecture that maps the distribution of male faces to female faces, and vice versa. The dataset would be the billions of selfies Snap has, er, not deleted in the last 8 years.<br />
<br />
This does raise a lot of questions though:<br />
<br />
<ul>
<li>Are they training truly unpaired image translation? That would be incredibly impressive, given that CycleGAN is bonkers and shouldn't even work in the first place. I would bet they have an unpaired alignment objective that is regularized by a <a href="https://areeweb.polito.it/ricerca/cgvg/siblingsDB.html">limited dataset </a>of ground-truth pairs, such as pairs of images of male/female siblings, or even a hand-designed gender transform that acts as data augmentation (e.g. making the jawline rounder can be done without machine learning). </li>
<li>The hair and face transforms seem to be synthesized independently, given that they occupy different layers (or perhaps synthesized together and separated into different layers right before rendering). This is also the first instance I've seen of GANs being used to render the alpha channel. I am a bit dubious of whether the hair is even generated by a GAN at all. On one hand, there is clearly some smooth function that switches out highlights and hair colors as a function of the positioning of an occluding object, suggesting that colors are probably learned partially from data. On the other hand, the hair is so stable that I have a hard time believing it is synthesized completely with a GAN generator. I have seen a few examples of other East Asian male face swaps with similar hairdos, suggesting that maybe there is a large-ish template library of hairdos (that is refined with some ML model).</li>
<li>How do Snap's ML engineers know whether a CycleGAN has converged for such an enormous dataset?</li>
<li>How do they get these neural nets to run with such limited compute budgets? What sorts of image resolutions are they generating on the fly?</li>
</ul>
<ul>
<li>If it indeed is a CycleGAN, then applying the <i>male</i> filter to a <i>female-filtered</i> image of me should recover the original image, right? </li>
</ul>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKCEYcrWytB5TSOmNHMIwAmwQL7U6qA00K4GuBJ7w6JHSCr4gl7J13Yb2eGBLbDu3bO9NDVRKZhJdxPo7HTKrTLe22PTMdwH4L8KdSeE2pxX7IDcOK7mI9Z6ysZUbKzGr-O7rMdaVzoc0/s1600/zoom_f2m.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="569" data-original-width="320" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKCEYcrWytB5TSOmNHMIwAmwQL7U6qA00K4GuBJ7w6JHSCr4gl7J13Yb2eGBLbDu3bO9NDVRKZhJdxPo7HTKrTLe22PTMdwH4L8KdSeE2pxX7IDcOK7mI9Z6ysZUbKzGr-O7rMdaVzoc0/s320/zoom_f2m.gif" width="179" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<ul>
<li>The image is mostly scale invariant, but as we zoom in pretty close, the face does resemble mine more. I would guess that there is a preprocessing step that crops and resizes the canonical face image prior to feeding it to a neural net.</li>
<li>There are also probably other subroutines in the filter like jaw resizing that don't use a CycleGAN, but whose addition would cause the M2F and F2M filters to no longer be exact inverses of each other.</li>
</ul>
<br />
<br />
<br />
<b>Implications of Technology</b><br />
<b><br /></b>
I have a friend who does drag. It's a lot of work! I'm excited for technology like this, because it will make it easier for makeup artists, cosplayers, and drag artists to experiment with new ideas and identities cheaply and quickly.<br />
<br />
Technology such as face and voice changing enables a wider gap between public Internet personas and the real people managing those characters. This isn't necessarily a bad thing: if you are born a man but are passionate about being a cute anime girl on the internet, <a href="https://www.youtube.com/watch?v=DIFbgtiQnZY">who are we to judge</a>? Will gender fluidity & drag culture become more normalized in society as our daily social media normalize gender-bending?<br />
<br />
The future is quite exciting.<br />
<br />
<h3>What I Cannot Control, I Do Not Understand (2019-03-10)</h3>
<i>Xiaoyi Yin has graciously translated this blog post to <a href="https://www.jianshu.com/p/24fbbb58ccea">中文</a>.</i><br />
<br />
I often hear the remark around the proverbial AI watering hole that there are no examples of reinforcement learning (RL) deployed in commercial settings that couldn’t be replaced by simpler algorithms.<br />
<div>
<br /></div>
<div>
This is somewhat true. If one takes RL to mean “neural networks trained with DQN / PPO / Soft-Actor Critic etc.”, then indeed, there are no commercial products (yet!) whose success relies on Deep RL algorithmic breakthroughs in the last 5 years [1].</div>
<div>
<br /></div>
<div>
However, if one interprets “reinforcement learning” to mean the notion of “learning from repeated trial and error”, then commercial applications abound, especially in pharmaceuticals, finance, TV show recommendations, and other endeavors based on scientific experimentation and intervention.</div>
<div>
<br /></div>
<div>
I’ll explain in this post how Reinforcement Learning is a general approach to solving the Causal Inference problem, a desideratum of nearly all machine learning systems. In this sense, many high-impact problems are <i>already</i> tackled using ideas from RL, but under different terminology and engineering processes.<br />
<br />
<h3>
Doctor, Won’t You Help Me Live Longer</h3>
<br />
Let’s suppose you are a doctor tasked with helping your patients live longer. You know a thing or two about data science, so you fit a model on a lot of patient records to predict life expectancy, and make a shocking finding: people who drink red wine every day have a 90% likelihood of living over 80 years, compared to the base probability of 50% for non drinkers. <br />
<br />
In the <a href="https://www.inference.vc/untitled/">parlance of causal inference</a>, you’ve found the following observational distribution:<br />
<br />
<b>p(patient lives > 80 yrs | patient drinks red wine daily) = .9</b><br />
<br />
Furthermore, your model has high accuracy on holdout datasets, which increases your confidence that your model has discovered the secret to longevity. Elated, you start telling your patients to drink red wine daily. After all, as a doctor, it is insufficient to <i>predict</i>; we must also <i>prescribe!</i> And what’s not to like about living longer and drinking red wine on the daily? <br />
<br />
Many decades later, you follow up on your patients and -- with great disappointment -- observe the following interventional distribution:</div>
<div>
<br /></div>
<div>
<b>p(patient lives > 80 yrs | do(patient drinks red wine daily)) = .5</b><br />
<br />
The life expectancy of patients on the red wine has not increased! What gives? <br />
<br />
<h3>
Finding the Causal Model</h3>
The core problem here lies in confounding variables. When we decided to prescribe red wine to patients based on the observational model, we made a strong hypothesis about the causality diagram:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOyV_HhtltAChcfnTYHIqFXxq74zUJRsWEx9lwGocksLfAhdSWZca1HbJ0p_yFJC6m0xraB2b3zZ42KGKdZ7A9-ppLmwJccVeJrQ_B-HsAMxfLiV6qmgE4BeQcw8qptTlOfMBjGuztClc/s1600/causal1+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="201" data-original-width="377" height="170" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOyV_HhtltAChcfnTYHIqFXxq74zUJRsWEx9lwGocksLfAhdSWZca1HbJ0p_yFJC6m0xraB2b3zZ42KGKdZ7A9-ppLmwJccVeJrQ_B-HsAMxfLiV6qmgE4BeQcw8qptTlOfMBjGuztClc/s320/causal1+%25281%2529.png" width="320" /></a></div>
<br />
The directed edges between these random variables here denote causality, which can also be thought of as "the arrow of time". Changing the value of the “Drinks Red Wine” variable ought to have an effect on “Live > 80 years”, but changing “Lives > 80 years” has no effect on drinking red wine.<br />
<br />
If this causal diagram were correct, then our intervention should have increased the lifespan of patients. But the actual experiment does not support this, so we must reject this hypothetical causal model and reach for alternative hypotheses to explain the data. Perhaps one or more variables cause both a higher propensity for drinking red wine AND a longer life, thereby correlating those two variables?<br />
<br />
We make the educated guess that a confounding variable might be that wealthy people tend to simultaneously live longer and drink more wine. Combing through the data again, we find that P(drinks red wine | is wealthy) = 0.9 and P(lives > 80 | is wealthy) = 1.0. So our hypothesis now takes the form:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIp8bG-nrEtHE-BR8RfllBSoT1WlhoANE_JtYaBYC83N2YFR70xkmzfVXj8C5AP_09pvzUfSjYBqb4kCX7HUFlCAxYrGjScr7HsrKYTJhfvmHVLgrNr8tF8F_LdEPxJgA1zYBA4YLUpfc/s1600/causal2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="312" data-original-width="837" height="236" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIp8bG-nrEtHE-BR8RfllBSoT1WlhoANE_JtYaBYC83N2YFR70xkmzfVXj8C5AP_09pvzUfSjYBqb4kCX7HUFlCAxYrGjScr7HsrKYTJhfvmHVLgrNr8tF8F_LdEPxJgA1zYBA4YLUpfc/s640/causal2.png" width="640" /></a></div>
<br />
<br />
If our understanding of the world is correct, then do(is wealthy) should make people live > 80 years and drink more red wine. And indeed, we find that once we give patients $1M cash infusions to make them wealthy (by USA standards), they end up living longer and drinking red wine daily (this is a hypothetical result, fabricated for the sake of this blog post).<br />
<br />
<h3>
RL as Automated Causal Inference</h3>
ML models are increasingly used to drive decision making in recommender systems, self-driving cars, pharmaceutical R&D, and experimental physics. In many cases, we desire an outcome event $y$, for which we attempt to learn a model $p(y|x_1, \ldots, x_N)$ and then choose inputs $x_1, \ldots, x_N$ to maximize $p(y|x_1, \ldots, x_N)$. <br />
<br />
It should be quite obvious from the previous medical example that to avoid causality when building decision-making systems is to risk overfitting models that are not useful for prescribing intervention. Suppose we automated the causal model discovery process in the following manner:<br />
<ol>
<li>Fit an observational model to the data p(y|x_1, x_2, … x_N)</li>
<li>Assume the observational model captures the causal model. Prescribe the intervention do(x_i) that maximizes p(y|x_1, … x_N), and gather a new dataset in which 50% of the population receives the intervention on x_i and 50% does not.</li>
<li>Fit an observational model to the new data p(y|x_i)</li>
<li>Repeat steps 1-3 until observational model matches intervention model: p(y|do(x_i)) = p(y|x_i)</li>
</ol>
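Here is how this loop plays out on the red wine example, as a small simulation. The data-generating process and probabilities below are invented for illustration (with wealth as the sole true cause of longevity), so the exact numbers differ slightly from the story above:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def world(n, do_wine=None, do_wealthy=None):
    """Hypothetical data generator: wealth causes both wine drinking and
    living past 80; wine itself has no causal effect on lifespan."""
    wealthy = rng.random(n) < 0.5 if do_wealthy is None else np.full(n, do_wealthy)
    wine = np.where(wealthy, rng.random(n) < 0.9, rng.random(n) < 0.1)
    if do_wine is not None:
        wine = np.full(n, do_wine)   # do() severs the wealth -> wine edge
    lives_80 = wealthy.copy()        # p(lives > 80 | wealthy) = 1 in this toy world
    return wine, wealthy, lives_80

# Step 1: observational model on logged data.
wine, wealthy, lives = world(N)
obs_wine = lives[wine].mean()        # ~0.9: wine "predicts" longevity
# Step 2: randomized trial on the most predictive variable.
wine, _, lives = world(N, do_wine=True)
do_wine_p = lives.mean()             # ~0.5: the intervention fails
# Steps 3-4: observation != intervention, so reject wine as a cause and
# test the wealth confounder instead.
_, wealthy, lives = world(N)
obs_wealth = lives[wealthy].mean()   # 1.0
_, _, lives = world(N, do_wealthy=True)
do_wealth_p = lives.mean()           # 1.0: matches the observational model, so stop

assert abs(obs_wine - 0.9) < 0.01 and abs(do_wine_p - 0.5) < 0.01
assert obs_wealth == 1.0 and do_wealth_p == 1.0
```

The simulation makes the key point concrete: conditioning (filtering logged data) and intervening (overriding the variable) give very different answers whenever a confounder is present.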
</div>
<div>
To return to the red wine case study as a test case:<br />
<ol>
<li>You would initially have p(live > 80 years | drink red wine daily) = .9. </li>
<li>Upon gathering a new dataset, you would obtain p(live > 80 years | do(drink red wine daily)) = .5. Model is not converged, but at least your observational model no longer believes that drinking red wine explains living longer. Furthermore, it now pays attention to the right variable, that p(live > 80 years | is_wealthy) = 1.</li>
<li>The subsequent iteration of this procedure then finds that p(live > 80 years | do(is wealthy)) = 1, so we are done.</li>
</ol>
</div>
<div>
<br />
The act of gathering a randomized trial (the 50% split of intervention vs. non-intervention) and re-training a new observational model is one of the most powerful ways to do general causal inference, because it uses data from reality (which “knows” the true causal model) to stamp out incorrect hypotheses.<br />
<br />
Repeatedly training observational models and suggesting interventions is what RL algorithms are all about: solving <i>optimal control</i> for sequential decision-making problems. <i>Control</i> is the operative word here - the true test of whether an agent understands its environment is whether it can solve it.<br />
<br />
For ML models whose predictions are used to infer interventions (so as to manipulate some downstream random variable), I argue that the overfitting problem is nothing more than a causal inference problem. This also explains why RL tends to be much harder as a machine learning problem than supervised learning - not only are there <a href="https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae">fewer bits of supervision</a> per observation, but the RL agent must also figure out the causal, interventionist distribution required to behave optimally.</div>
<div>
<br />
One salient case of “overfitting” arises because RL algorithms can theoretically be trained “offline” -- that is, learning entirely from off-policy data without gathering new data samples from the environment. However, without periodically gathering new experience from the environment, agents can overfit to finite-size datasets or dataset imbalances, and propose interventions that do not generalize past their offline data. The best way to check if an agent is “learning the right thing” is to deploy it in the world and verify its hypotheses under the interventionist distribution. Indeed, for our robotic grasping research at Google, we often find that fine-tuning with “online” experience improves performance substantially. This is equivalent to re-training an observational model on new data p(grasp success | do(optimal_action)).<br />
<br />
<br />
<span style="font-size: 18.72px; font-weight: 700;">Production "RL"</span><br />
<span style="font-size: 18.72px; font-weight: 700;"><br /></span>
The <a href="https://en.wikipedia.org/wiki/A/B_testing">A/B testing</a> framework often used in production engineering is a manual version of the "automated causal inference" pipeline, where a random 50% of users (assumed to be identically distributed) are shown one intervention and the other 50% are shown the control.<br />
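A toy sketch of an A/B test (with made-up conversion rates) makes the connection to interventions concrete: because assignment is randomized, the difference between the two arms directly estimates the causal effect.

```python
import random

random.seed(1)

def simulate_ab(n=20_000, p_control=0.10, p_treatment=0.12):
    # Randomly assign each user to control (A) or treatment (B), 50/50.
    conversions = {"A": 0, "B": 0}
    counts = {"A": 0, "B": 0}
    for _ in range(n):
        arm = "B" if random.random() < 0.5 else "A"
        p = p_treatment if arm == "B" else p_control
        counts[arm] += 1
        conversions[arm] += random.random() < p  # user converts with prob. p
    return conversions["A"] / counts["A"], conversions["B"] / counts["B"]

rate_a, rate_b = simulate_ab()
lift = rate_b - rate_a  # estimated causal effect of the intervention
```

In practice one would also run a significance test on the observed lift before shipping the change; the sketch omits that step.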
<br />
This is the cornerstone of data-driven decision making, and is used widely at hedge funds, Netflix, StitchFix, Google, Walmart, and so on. Although this process has humans in the loop (specifically for proposing interventions and choosing the stopping criterion), many of the nuances of these methodologies also arise in the RL literature, such as data non-stationarity, the difficulty of obtaining truly randomized experiments, and long-term credit assignment. I’m just starting to learn about causal inference myself, and hope that in the next few years there will be more cross-fertilization of ideas between the RL, Data Science, and Causal Inference research communities.<br />
<br />
For a more technical introduction to Causal Inference, see this <a href="https://www.inference.vc/untitled/">great blog series </a>by Ferenc Huszar.</div>
<div>
<br />
<br />
<br />
[1] A footnote on why I think RL hasn’t had much commercial deployment yet. Feel free to clue me in if there are indeed companies using RL in production that I don’t know about!<br />
<br />
In order for a company to be justified in adopting RL technology, the problem at hand needs to 1) be commercially useful, 2) be feasible for current Deep RL algorithms, and 3) offer marginal utility from optimal control that outweighs the technical risks of Deep RL.<br />
<br />
Let’s consider deep image understanding by comparison: 1) everything from surveillance to self-driving cars to FaceID is highly commercially interesting 2) current models are highly accurate and scale well to a variety of image datasets 3) the models generally work as expected and do not require great expertise to train and deploy.<br />
<br />
As for RL, it doesn’t take a great imagination to realize that general RL algorithms would eventually enable robots to learn skills entirely on their own, or help companies make complex financial decisions like stock buybacks and hiring, or enable far richer NPC behavior in games. Unfortunately, these problem domains don’t meet criterion (2) - the technology simply isn’t ready and requires many more years of R&D.<br />
<br />
For problems where RL is plausible, it is difficult to justify being the first user of a technology whose marginal utility to your problem of choice is unproven. Example problems might include datacenter cooling or air traffic control. Even for domains where RL has been shown clearly to work (e.g. low-dimensional control or pixel-level control), RL still requires a lot of research skill to build a working system. </div>
Meta-Learning in 50 Lines of JAX (2019-02-21)<br />
<br />
Github repo here: <a href="https://github.com/ericjang/maml-jax">https://github.com/ericjang/maml-jax</a><br />
<br />
Adaptive behavior in humans and animals occurs at many time scales: when I use a new shower handle for the first time, it takes me a few seconds to figure out how to adjust the water temperature to my liking. Upon reading a news article, I obtain new information that I didn't have before. More difficult skills, such as mastering a musical instrument, are acquired over a lifetime of deliberate practice.<br />
<br />
Learning is hardly restricted to animal-level intelligence; it can be found in every living creature. Multi-cellular developmental programs are highly plastic and can even <a href="https://www.youtube.com/watch?v=RjD1aLm4Thg">store epigenetic “memories” between generations</a>. At the longest time-scales, evolution itself can be thought of as “learning” on the genomic level, whereby favorable genetic codes are discovered and remembered over the course of many generations. At the shortest of timescales, a single ion channel activating in response to a stimulus can also be thought of as “learning”, as it is an adaptive, stateful response to the environment. Biological intelligence blurs the boundaries between “<b>behavior</b>” (responding to the environment), “<b>learning</b>” (acquiring information about the world in order to improve fitness), and “<b>optimization</b>” (improving fitness). <br />
<br />
The focus of Machine Learning (ML) is to imbue computers with the ability to learn from data, so that they may accomplish tasks that humans have difficulty expressing in pure code. However, what most ML researchers call “learning” right now is but a very small subset of the vast range of behavioral adaptability encountered in biological life! Deep Learning models are powerful, but require a large amount of data and many iterations of stochastic gradient descent (SGD). This learning procedure is time-consuming and once a deep model is trained, its behavior is fairly rigid; at deployment time, one cannot really change the behavior of the system (e.g. correcting mistakes) without an expensive retraining process. Can we build systems that can learn faster, and with less data?<br />
<br />
“Meta-learning”, one of the most exciting ML research topics right now, addresses this problem by optimizing a model not just for the ability to “predict well”, but also the ability to “learn well”. Although Meta-Learning has attracted a lot of research attention in recent years, related ideas and algorithms have been around for some time (see Hugo Larochelle's <a href="https://t.co/Wjp8BvSBfp">slides</a> and Lilian Weng’s <a href="https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html">blog post</a> for an excellent overview of related concepts). <br />
<br />
This blog post won’t cover all the possible ways in which one can build a meta-learning system; instead, this is a practical tutorial on how to get your feet wet in meta-learning research. Specifically, I'll show you how to implement the <a href="https://arxiv.org/abs/1703.03400">MAML</a> meta-learning algorithm in about 50 lines of Python code, using Google's awesome JAX library.<br />
<div>
<br /></div>
<div>
You can find a self-contained Jupyter notebook <a href="https://github.com/ericjang/maml-jax/blob/master/maml.ipynb">here</a> reproducing this tutorial.</div>
<div>
<br /></div>
<h3>
An Operator Perspective on Learning and Meta-Learning</h3>
<div>
<br /></div>
“Meta-learning” is used in so many different research contexts nowadays that it's difficult to communicate to other researchers what exactly I’m working on when I say “Meta-Learning”. A source of this confusion stems from the blurred semantics between “optimization”, “learning”, “adaptation”, “memory”, and how these terms can be employed in wildly different applications.<br />
<br />
This section is my attempt to make the definition of “learning” and “meta-learning” more mathematically precise, and explain why seemingly different algorithms are all branded as “meta-learning” these days. Feel free to skip to the next section if you want to dive straight into the MAML+JAX coding tutorial.<br />
<br />
We define a <b>learning operator</b> $f : F_\theta \to F_\theta$ as a function that improves a model function $f_\theta$ with respect to some task. A common learning operator used in the deep learning and reinforcement learning literature is the stochastic gradient descent algorithm, with respect to a loss function. In standard DL contexts, learning occurs over hundreds of thousands or even millions of gradient steps, but generally, “learning” can also occur on shorter (conditioning) or longer timescales (hyperparameter search). In addition to explicit optimization, learning can also be implemented implicitly via a dynamical system (recurrent neural networks conditioning on the past) or probabilistic inference.<br />
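As a concrete toy instance (my own illustration, not from any paper), one gradient-descent step on a one-parameter model is a learning operator: it maps the current model to an improved one, and “learning” is just its repeated application.

```python
# A learning operator f: maps a model (here, a single scalar parameter theta)
# to an improved model, via one gradient step on a fixed task.
def make_sgd_operator(dloss_dtheta, alpha=0.1):
    def f(theta):
        return theta - alpha * dloss_dtheta(theta)
    return f

# Toy task: minimize L(theta) = (theta - 3)^2, so dL/dtheta = 2 * (theta - 3).
step = make_sgd_operator(lambda t: 2 * (t - 3))

theta = 0.0
for _ in range(100):
    theta = step(theta)  # repeated application converges toward the optimum, 3
```

The same operator view covers implicit learning too: an RNN update rule is also a map from one model state to an improved one.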
<br />
A <b>meta-learning operator</b> $f_o(f_i(f_\theta))$ is a composite operator of two learning operators: an “inner loop” $f_i \in F_i$ and an “outer loop” $f_o \in F_o$. Furthermore, $f_i$ is a model itself, and $f_o : F_i \to F_i$ is an operator over the inner learning rule $f_i$. In other words, $f_o$ learns the learning rule $f_i$, and $f_i$ learns a model for a given task, where we define “task” to be a self-contained family of problems for which $f_i$ can adequately update $f_\theta$ to solve. At <b>meta-training time</b>, $f_o$ is applied to select for $f_i$ across a variety of training tasks. At <b>meta-test time</b>, we evaluate the generalization properties of $f_i$ and $f_\theta$ to holdout tasks.<br />
<br />
The choice of $f_o$ and $f_i$ depends largely on the problem domain. In the architecture search literature (also called “<b>learning to learn</b>”), $f_i$ is a relatively slow training procedure of a neural network from scratch, while $f_o$ can be a neural controller, random search algorithm, or a Gaussian Process Bandit.<br />
<br />
A wide variety of machine learning problems can be formulated in terms of meta-learning operators. In <b>(meta) imitation learning</b> (or <b>goal-conditioned reinforcement learning</b>), $f_i$ is used to relay instructions to the RL agent, such as conditioning on a task embedding or human demonstrations. In <b>meta-reinforcement learning</b> (MRL), $f_i$ instead implements a “fast reinforcement learning” algorithm by which an agent improves itself after trying the task a couple of times. It’s worth re-iterating here that I don’t see a distinction between “learning” and “conditioning”, because they both rely on inputs that are supplied at test time (i.e. “new information provided by the environment”). <br />
<br />
MAML is a meta-learning algorithm that implements $f_i$ via SGD, i.e. $\theta := \theta - \alpha \nabla_{\theta}(\mathcal{L}(\theta))$. This SGD update is differentiable with respect to $\theta$, allowing $f_o$ to effectively optimize $f_i$ via backpropagation without requiring many additional parameters to express $f_i$.<br />
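The differentiability of the inner update is easy to see on a scalar toy problem (the numbers below are illustrative, not from the MAML paper): we take the gradient of a loss that is computed *after* an inner SGD step, backpropagating through that step.

```python
import jax.numpy as np
from jax import grad

def loss(theta, x, y):
    # simple scalar regression loss
    return np.mean((theta * x - y) ** 2)

def inner_update(theta, x, y, alpha=0.1):
    # f_i: one SGD step -- itself differentiable with respect to theta
    return theta - alpha * grad(loss)(theta, x, y)

def maml_loss(theta, x1, y1, x2, y2):
    # evaluate the *post-update* parameters on held-out data
    theta_prime = inner_update(theta, x1, y1)
    return loss(theta_prime, x2, y2)

x, y = np.array([1.0]), np.array([2.0])
# grad differentiates maml_loss w.r.t. theta, through the inner SGD step
meta_grad = grad(maml_loss)(1.0, x, y, x, y)
```

Here loss(theta) = (theta - 2)^2, the inner step gives theta' = 0.8*theta + 0.4, so maml_loss = 0.64*(theta - 2)^2 and its gradient at theta = 1 is -1.28.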
<br />
<div>
<div>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">
</span>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<h3>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">
Exploring JAX: Gradients</span></h3>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada"><br /></span>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">We begin the tutorial by importing JAX’s numpy drop-in and the gradient operator, grad. </span><br />
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada"><span id="docs-internal-guid-472840e5-7fff-942b-aad2-c8770ec17df1"><br /></span>
</span><br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax.numpy </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">as</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> grad</span></span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada"><span id="docs-internal-guid-472840e5-7fff-942b-aad2-c8770ec17df1">
</span></span></div>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">
</span>
<div dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada"><br /></span>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">The gradient operator grad transforms a python function into another function that computes the gradients. Here, we compute first, second, and third order derivatives of $e^x$ and $x^2$:</span></div>
<span id="docs-internal-guid-4c099eea-7fff-afe1-9be4-08124ea73ada">
<div dir="ltr" style="margin-left: 0pt;">
<br /></div>
<div dir="ltr" style="margin-left: 0pt;">
<span id="docs-internal-guid-7f2bf7c0-7fff-c722-584d-a8d03da74443"><br /></span>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">f = </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">lambda</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x : np.exp(x)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">g = </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">lambda</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x : np.square(x)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(grad(f)(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># = e^{1}</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(grad(grad(f))(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; 
print(grad(grad(grad(f)))(">
white-space: pre-wrap;">print(grad(grad(grad(f)))(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(grad(g)(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># 2x = 4</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(grad(grad(g))(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># 2</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(grad(grad(grad(g)))(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space:
pre-wrap;"># 0</span></span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-7f2bf7c0-7fff-c722-584d-a8d03da74443">
</span></div>
<div dir="ltr" style="margin-left: 0pt;">
<br /></div>
</span></div>
<h3>
<span id="docs-internal-guid-b0d8c8b5-7fff-e35e-5b1d-ce6ab8cff1d3">Exploring JAX: Auto-Vectorization with <span style="font-family: "courier new" , "courier" , monospace;">vmap</span></span></h3>
<div>
<br /></div>
<div>
Now let’s consider a toy regression problem in which we try to learn the function $f(x) = \sin(x)$ with a neural network $f_\theta$. The goal here is to get familiar with defining and training models. JAX provides some lightweight helper functions to make it easy to set up a neural network.</div>
<div>
<br /></div>
<div>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> vmap </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># for auto-vectorizing functions</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> functools </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> partial </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># for use with vmap</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jit </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># for compiling functions for speedup</span><span style="color: white; 
font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax.experimental </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> stax </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># neural network library</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> jax.experimental.stax </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> Conv, Dense, MaxPool, Relu, Flatten, LogSoftmax </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># neural network layers</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> matplotlib.pyplot </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">as</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> plt </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; 
white-space: pre-wrap;"># visualization</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
<br />
We’ll define a simple neural network with 2 hidden layers. We’ve specified an in_shape of (-1, 1), which means that the model takes in a variable-size batch dimension, and has a feature dimension of 1 scalar (since this is a 1-D regression task). JAX’s helper libraries all take on a functional API (unlike TensorFlow, which maintains a graph state), so we get back a function that initializes parameters and a function that applies the forward pass of the network. These callables return lists and tuples of numpy arrays - a simple and flat data structure for storing network parameters.</div>
<div>
<br /></div>
<div>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># Use stax to set up network initialization and evaluation functions</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">net_init, net_apply = stax.serial(</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> Dense(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">40</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">), Relu,</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> Dense(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">40</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">), Relu,</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> Dense(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; 
font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in_shape = (</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">,)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">out_shape, net_params = net_init(in_shape)</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
Next, we define the model loss to be Mean-Squared Error (MSE) across a batch of inputs.</div>
<div>
<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">loss</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(params, inputs, targets):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># Computes average loss for the batch</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> predictions = net_apply(params, inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np.mean((targets - predictions)**</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
We evaluate the randomly-initialized, untrained network across a range of inputs:</div>
<div>
<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># batch the inference across K=100</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">xrange_inputs = np.linspace(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">100</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">).reshape((</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">100</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># (k, 1)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">targets = np.sin(xrange_inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; 
vertical-align: baseline; white-space: pre-wrap;">predictions = vmap(partial(net_apply, net_params))(xrange_inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">losses = vmap(partial(loss, net_params))(xrange_inputs, targets) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># per-input loss</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.plot(xrange_inputs, predictions, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'prediction'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.plot(xrange_inputs, losses, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'loss'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.plot(xrange_inputs, targets, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'target'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br 
/></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.legend()</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
As expected, at random initialization, the model’s predictions (blue) are totally off the target function (green).</div>
<div>
<br /></div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRHyeggB7FDwLB1aviGCO1PW90jW6r7I5c4gfnl0jSJKN2B1LxotTAav-HHjeqMll0DEf33yMnUYnL5PeTOBEC0_wBTikVA8qDc_feLaaQAp28Xu0ZanWevHB3trMmscN3jDnEksQ0qss/s1600/download.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="374" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRHyeggB7FDwLB1aviGCO1PW90jW6r7I5c4gfnl0jSJKN2B1LxotTAav-HHjeqMll0DEf33yMnUYnL5PeTOBEC0_wBTikVA8qDc_feLaaQAp28Xu0ZanWevHB3trMmscN3jDnEksQ0qss/s400/download.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Let’s train the network via gradient descent. JAX’s random number generator works differently from NumPy’s (it requires explicitly passed PRNG keys), so to initialize network parameters we’ll use the original NumPy library (imported as onp) to generate random numbers. We’ll also import the tree_multimap utility to easily manipulate collections of per-parameter gradients (for TensorFlow users, this is analogous to nest.map_structure for Tensors).<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> numpy </span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">as</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> onp</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> jax.experimental </span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; 
white-space: pre-wrap;"> optimizers</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">from</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> jax.tree_util </span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">import</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> tree_multimap </span><span style="background-color: #333333; color: #888888; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"># Element-wise manipulation of collections of numpy arrays </span></span></div>
</td></tr>
</tbody></table>
</div>
<div class="separator" style="clear: both; text-align: left;">
<b style="font-weight: normal;"><br /></b></div>
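To make the role of tree_multimap concrete, here is a minimal pure-Python sketch of the idea (an illustrative toy, not JAX’s actual implementation, which handles arbitrary pytrees): it applies a function leaf-by-leaf across matching nested structures, which is exactly how a per-parameter update gets applied to a whole parameter tree at once.

```python
# Toy version of tree_multimap: apply a function leaf-by-leaf across
# two nested structures of the same shape. Illustration only.
def simple_tree_multimap(f, tree_a, tree_b):
    if isinstance(tree_a, (list, tuple)):
        return type(tree_a)(
            simple_tree_multimap(f, a, b) for a, b in zip(tree_a, tree_b)
        )
    return f(tree_a, tree_b)  # leaf case

# Parameters and per-parameter gradients as nested lists of floats:
params = [(1.0, 2.0), (3.0,)]
grads = [(1.0, 2.0), (4.0,)]
# An SGD-style update applied across the whole "parameter tree" at once:
new_params = simple_tree_multimap(lambda p, g: p - 0.5 * g, params, grads)
print(new_params)  # [(0.5, 1.0), (1.0,)]
```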
<br />
We initialize the parameters and optimizer, and run the curve fitting for 100 steps. Note that decorating the “step” function with @jit compiles the entire training step into machine code via XLA, with optimizations like fused accelerator kernels and memory and layout optimization. TensorFlow itself also uses XLA for <a href="https://developers.googleblog.com/2017/03/xla-tensorflow-compiled.html">accelerating statically defined graphs</a>. XLA makes the computation very fast and amenable to hardware acceleration because the entire thing can be executed without returning to a Python interpreter (or Graph interpreter in the case of TensorFlow sans XLA). The code in this tutorial will <i>just work</i> on CPU/GPU/TPU.<br />
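As a standalone illustration of @jit before the full training loop (a minimal sketch; only jax.numpy and jax.jit are assumed here), note that compiling a function changes nothing about its outputs, only how it executes:

```python
import jax.numpy as np  # the post imports jax.numpy as np
from jax import jit

def f(x):
    return np.sum(x * x + 2.0 * x)

fast_f = jit(f)  # traced and compiled by XLA on first call

x = np.linspace(-1.0, 1.0, 8)
# The compiled version returns the same values; after the first call,
# repeated invocations skip the Python interpreter almost entirely.
print(np.allclose(f(x), fast_f(x)))  # True
```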
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">opt_init, opt_update = optimizers.adam(step_size=</span><span style="background-color: #333333; color: #d36363; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">1e-2</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">opt_state = opt_init(net_params)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #888888; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"># Define a compiled update step</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #fc9b9b; font-size: 11pt; font-style: normal; font-variant: 
normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">@jit</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: #333333; color: #ffffaa; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">step</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">(i, opt_state, x1, y1):</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> p = optimizers.get_params(opt_state)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: 
normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> g = grad(loss)(p, x1, y1)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> opt_update(i, g, opt_state)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> i </span><span style="background-color: #333333; color: #fcc28c; font-size: 11pt; font-style: normal; font-variant: normal; 
font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> range(</span><span style="background-color: #333333; color: #d36363; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">100</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">):</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> opt_state = step(i, opt_state, xrange_inputs, targets)</span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="background-color: #333333; color: white; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">net_params = optimizers.get_params(opt_state)</span></span></div>
</td></tr>
</tbody></table>
</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
Evaluating our network again, we see that the sinusoid curve has been correctly approximated.<br />
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKZOKhHM9JWWElaax5TvJzA4Q7P91mLDvXMkGxfq_LLoB16kpKz7NvB1m8RSSfBsE4oGr6myxNxUM9cOiXls794ReIt7uH8yb4B4unMc1OzVNLMVkBsArV5iEee6jIxJQV2d0s4NllI0g/s1600/download+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="383" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKZOKhHM9JWWElaax5TvJzA4Q7P91mLDvXMkGxfq_LLoB16kpKz7NvB1m8RSSfBsE4oGr6myxNxUM9cOiXls794ReIt7uH8yb4B4unMc1OzVNLMVkBsArV5iEee6jIxJQV2d0s4NllI0g/s400/download+%25281%2529.png" width="400" /></a></div>
<br />
<br />
This result is nothing to write home about, but in just a moment we’ll re-use a lot of these functions to implement MAML.<br />
<div>
<br /></div>
<div>
<h3>
Exploring JAX: Checking MAML Numerics</h3>
<br />
<br />
When implementing ML algorithms, it’s important to unit-test implementations against test cases where the true values can be computed analytically. The following example does this for MAML on a toy objective $g$. Note that by default JAX computes gradients with respect to the first argument of the function.</div>
<div>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># gradients of gradients test for MAML</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># check numerics</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">g = </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">lambda</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x, y : np.square(x) + y</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">x0 = </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">2.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">y0 = </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'grad(g)(x0) = {}'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.format(grad(g)(x0, y0))) </span><span style="color: 
#888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># 2x = 4</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'x0 - grad(g)(x0) = {}'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.format(x0 - grad(g)(x0, y0))) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># x - 2x = -2</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">maml_objective</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(x, y):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> g(x - grad(g)(x, y), y)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: 
pre-wrap;">'maml_objective(x,y)={}'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.format(maml_objective(x0, y0))) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># x**2 + 1 = 5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">print(</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'x0 - maml_objective(x,y) = {}'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.format(x0 - grad(maml_objective)(x0, y0))) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># x - (2x) = -2.</span></span></div>
</td></tr>
</tbody></table>
</div>
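The expected values in the comments above can be cross-checked by hand: with $g(x, y) = x^2 + y$, the inner step gives $x - 2x = -x$, so the MAML objective is $g(-x, y) = x^2 + y$, whose gradient with respect to $x$ is again $2x$. A plain-Python version of that arithmetic (no JAX required):

```python
# Hand-computed MAML numerics for g(x, y) = x**2 + y, matching the
# analytic comments in the JAX snippet above.
def g(x, y):
    return x**2 + y

def dg_dx(x, y):  # dg/dx = 2x
    return 2.0 * x

x0, y0 = 2.0, 1.0
inner = x0 - dg_dx(x0, y0)     # 2 - 4 = -2
maml_obj = g(inner, y0)        # (-2)**2 + 1 = 5
# d/dx g(x - dg_dx(x, y), y) = d/dx (x**2 + y) = 2x
maml_grad = 2.0 * x0           # 4
print(inner, maml_obj, x0 - maml_grad)  # -2.0 5.0 -2.0
```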
<br />
<h3>
<br />Implementing MAML with JAX</h3>
<br />
Now let’s extend our sinusoid regression task to a multi-task problem, in which the sinusoid function can have varying phases and amplitudes. This task was proposed in the MAML paper as a way to illustrate how MAML works on a toy problem. Below are some points sampled from two different tasks, divided into “train” (used to compute the inner loss) and “validation” splits (sampled from the same task, used to compute the outer loss).</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKxkJvUUgEaS897rbUhA3wa316sH7aky6Z38qd7uehND7lsD9UcMahm5GheO5BNQOejErwfX2YVQZTlHDr68TnwXQTGHZcu0g4YbwcTo-w-kI244WLvlXWtfWLrVdIdxzGhuGcQzEuhs/s1600/download+%25282%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="383" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKxkJvUUgEaS897rbUhA3wa316sH7aky6Z38qd7uehND7lsD9UcMahm5GheO5BNQOejErwfX2YVQZTlHDr68TnwXQTGHZcu0g4YbwcTo-w-kI244WLvlXWtfWLrVdIdxzGhuGcQzEuhs/s400/download+%25282%2529.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div>
<span id="docs-internal-guid-313a0369-7fff-b1c4-535d-2c17f57be320"><br />Suppose a task loss function $\mathcal{L}$ is defined with respect to model parameters $\theta$, input features $X$, and output labels $Y$. Let $x_1, y_1$ and $x_2, y_2$ be identically distributed task-instance data sampled from $X, Y$. Then MAML optimizes the following (with inner learning rate $\alpha$):<br /><br /><br />$\mathcal{L}(\theta - \alpha \nabla_\theta \mathcal{L}(\theta, x_1, y_1), x_2, y_2)$<br /><br /><br />MAML’s inner update operator is just gradient descent on the regression loss. The outer loss, <span style="font-family: "courier new" , "courier" , monospace;">maml_loss</span>, is simply the original loss applied <i>after</i> the <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> operator has been applied. One interpretation of the MAML objective is that it is a differentiable estimate of a cross-validation loss with respect to a learner. Meta-training finds initial parameters $\theta$ for which <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> yields a low cross-validation loss.</span></div>
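<div><br />For reference, differentiating the MAML objective with the chain rule shows where the second-order term comes from. Writing $\theta' = \theta - \alpha \nabla_\theta \mathcal{L}(\theta, x_1, y_1)$ for the inner update with learning rate $\alpha$, the meta-gradient is<br /><br />$\nabla_\theta \mathcal{L}(\theta', x_2, y_2) = \left(I - \alpha \nabla^2_\theta \mathcal{L}(\theta, x_1, y_1)\right) \nabla_{\theta'} \mathcal{L}(\theta', x_2, y_2)$<br /><br />so backpropagating through <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> involves a Hessian-vector product, which JAX’s grad-of-grad machinery computes automatically.</div>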
<div>
<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">inner_update</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(p, x1, y1, alpha=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> grads = grad(loss)(p, x1, y1)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> inner_sgd_fn = </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">lambda</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> g, state: (state - alpha*g)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> tree_multimap(inner_sgd_fn, grads, p)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: 
white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">maml_loss</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(p, x1, y1, x2, y2):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> p2 = inner_update(p, x1, y1)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> loss(p2, x2, y2)</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
<br />
In each iteration of optimizing the MAML objective, we sample a new task, then sample separate sets of input features and labels for the training and validation splits.</div>
<div>
<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">opt_init, opt_update = optimizers.adam(step_size=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1e-3</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">) </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># this LR seems to be better than 1e-2 and 1e-4</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">out_shape, net_params = net_init(in_shape)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">opt_state = opt_init(net_params)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fc9b9b; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">@jit</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">step</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(i, opt_state, x1, y1, x2, y2):</span><span style="color: white; font-size: 11pt; 
vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> p = optimizers.get_params(opt_state)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> g = grad(maml_loss)(p, x1, y1, x2, y2)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> l = maml_loss(p, x1, y1, x2, y2)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> opt_update(i, g, opt_state), l</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">K=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">20</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">np_maml_loss = []</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br 
/></span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># Adam optimization</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> i </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> range(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">20000</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># define the task</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> A = onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0.1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; 
font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> phase = onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=np.pi)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># meta-training inner split (K examples)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x1 = onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, size=(K,</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> y1 = A * onp.sin(x1 + phase)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> 
</span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># meta-training outer split (1 example). Like cross-validating with respect to one example.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x2 = onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> y2 = A * onp.sin(x2 + phase)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> opt_state, l = step(i, opt_state, x1, y1, x2, y2)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np_maml_loss.append(l)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">if</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> i % 
</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1000</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> == </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">:</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> print(i)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">net_params = optimizers.get_params(opt_state)</span></span></div>
</td></tr>
</tbody></table>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_uHZDEeUyDzZdT5GCWGX-Z4pDSq8FRMwfmIZOcfd_-R0fUgqyWWhBcKyrFS9d0_Y-6V12Js3M7kAMcJfjXidoV7g-DaKek_J5s1TmCD8mmstLpEC9LYO_eewiKWoQhDFQ89lyAjRiHoo/s1600/download+%25283%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="384" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_uHZDEeUyDzZdT5GCWGX-Z4pDSq8FRMwfmIZOcfd_-R0fUgqyWWhBcKyrFS9d0_Y-6V12Js3M7kAMcJfjXidoV7g-DaKek_J5s1TmCD8mmstLpEC9LYO_eewiKWoQhDFQ89lyAjRiHoo/s400/download+%25283%2529.png" width="400" /></a></div>
<div>
<br /></div>
<br />
At meta-training time, the network learns to “quickly adapt” to <span style="font-family: "courier new" , "courier" , monospace;">x1, y1</span> in order to minimize cross-validation error on a new set of points <span style="font-family: "courier new" , "courier" , monospace;">x2</span>. At deployment time (shown in the plot above), when we are given a <i>new</i> task (an amplitude and phase not seen at training time), the model can apply the <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> operator to fit the target sinusoid much faster, and with far fewer data samples, than simply re-training the parameters with SGD.<br />
<br />
Why is <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> a more effective learning rule than retraining with SGD on a new dataset? The magic here is that by training in a multi-task setting, the <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> operator has <i>generalized</i> across tasks into a learning rule that is specially adapted for sinusoid regression tasks. In the standard data regime of deep learning, generalization is obtained from many examples of a single task (e.g. RL, image classification). In meta-learning, generalization is obtained from a few examples each from many tasks, and a shared learning rule is learned for the task distribution.<br />
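For reference, here is a minimal, self-contained sketch of the two operators discussed above. The toy linear model <span style="font-family: "courier new" , "courier" , monospace;">net_apply</span> is a stand-in for the MLP defined earlier in the post (an assumption for illustration), and <span style="font-family: "courier new" , "courier" , monospace;">alpha</span> is the inner-loop learning rate:

```python
import jax.numpy as np
from jax import grad

# Toy linear model standing in for the MLP from earlier in the post;
# params is just a (w, b) pair here, for illustration only.
def net_apply(params, x):
    w, b = params
    return x * w + b

def loss(params, x, y):
    return np.mean((net_apply(params, x) - y) ** 2)

# One step of gradient descent on the inner (support) set: this is the
# learned "fast adaptation" operator.
def inner_update(params, x1, y1, alpha=0.1):
    dw, db = grad(loss)(params, x1, y1)
    w, b = params
    return (w - alpha * dw, b - alpha * db)

# MAML loss: adapt on (x1, y1), then evaluate on the held-out (x2, y2).
def maml_loss(params, x1, y1, x2, y2):
    adapted = inner_update(params, x1, y1)
    return loss(adapted, x2, y2)
```

Meta-training pushes the initial <span style="font-family: "courier new" , "courier" , monospace;">params</span> toward a point from which a single <span style="font-family: "courier new" , "courier" , monospace;">inner_update</span> step makes large progress on any task in the distribution.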
<br /></div>
<div>
<span id="docs-internal-guid-27909428-7fff-4416-96ea-1572559d1674"><br /></span>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># batch the inference across K=100</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">targets = np.sin(xrange_inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">predictions = vmap(partial(net_apply, net_params))(xrange_inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.plot(xrange_inputs, predictions, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'pre-update predictions'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.plot(xrange_inputs, targets, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'target'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">x1 = 
onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, size=(K,</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">y1 = </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> * onp.sin(x1 + </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> i </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> range(</span><span style="color: #d36363; font-size: 11pt; 
vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">,</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> net_params = inner_update(net_params, x1, y1)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> predictions = vmap(partial(net_apply, net_params))(xrange_inputs)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> plt.plot(xrange_inputs, predictions, label=</span><span style="color: #a2fca2; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">'{}-shot predictions'</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.format(i))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">plt.legend()</span></span></div>
</td></tr>
</tbody></table>
</div>
<span id="docs-internal-guid-27909428-7fff-4416-96ea-1572559d1674">
</span></div>
<div>
<h3>
<span id="docs-internal-guid-69eed823-7fff-6a28-8986-809fe526b254"><br /></span><span id="docs-internal-guid-69eed823-7fff-6a28-8986-809fe526b254">Batching MAML Gradients Across Tasks with <span style="font-family: "courier new" , "courier" , monospace;">vmap</span></span></h3>
<span id="docs-internal-guid-69eed823-7fff-6a28-8986-809fe526b254">
We can compute MAML gradients across multiple tasks at once, which reduces the variance of the gradients of the learning operator. This was proposed in the MAML paper, and is analogous to how increasing the minibatch size in standard SGD reduces the variance of the parameter gradients (leading to more efficient learning).<br /><br />Thanks to the <span style="font-family: "courier new" , "courier" , monospace;">vmap</span> operator, we can automatically transform our single-task MAML implementation into a “batched version” that operates across tasks. From a software engineering & testing perspective, <span style="font-family: "courier new" , "courier" , monospace;">vmap</span> is extremely nice because the "task-batched" MAML implementation simply re-uses code from the non-task-batched MAML algorithm, without losing any vectorization benefits. This means that when unit-testing code, we can first verify the single-task MAML algorithm for numerical correctness, then scale up to the batched version for efficiency (e.g. for handling harder tasks such as robotic learning). </span></div>
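To make the unit-testing claim concrete, here is a small standalone check (the <span style="font-family: "courier new" , "courier" , monospace;">per_task_loss</span> function is a made-up stand-in, not from this post) showing that a <span style="font-family: "courier new" , "courier" , monospace;">vmap</span>-ed function can be verified against a plain Python loop over tasks:

```python
import jax.numpy as np
from jax import vmap

# A made-up per-task loss standing in for maml_loss; takes one task's data.
def per_task_loss(x, y):
    return np.mean((x - y) ** 2)

# Three "tasks", each with two data points.
xs = np.arange(6.).reshape(3, 2)
ys = np.ones((3, 2))

# Batched across tasks via vmap...
batched = vmap(per_task_loss)(xs, ys)
# ...versus an explicit loop over tasks. The two agree, so the batched
# code inherits the correctness of the single-task implementation.
looped = np.stack([per_task_loss(x, y) for x, y in zip(xs, ys)])
```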
<div>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># vmapped version of maml loss.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># returns scalar for all tasks.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">batch_maml_loss</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(p, x1_b, y1_b, x2_b, y2_b):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> task_losses = vmap(partial(maml_loss, p))(x1_b, y1_b, x2_b, y2_b)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np.mean(task_losses)</span></span><span style="color: white; font-family: "consolas"; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span></div>
</td></tr>
</tbody></table>
</div>
<br />
Below is a function that samples a batch of tasks, where <span style="font-family: "courier new" , "courier" , monospace;">outer_batch_size</span> is the number of tasks we meta-train on in each step, and <span style="font-family: "courier new" , "courier" , monospace;">inner_batch_size</span> is the number of data points per task. </div>
<div>
<br />
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">sample_tasks</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(outer_batch_size, inner_batch_size):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># Select amplitude and phase for the task</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> As = []</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> phases = []</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> _ </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> range(outer_batch_size): </span><span style="color: white; 
font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> As.append(onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0.1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">.5</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> phases.append(onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=np.pi))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">get_batch</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">():</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> xs, ys = [], []</span><span style="color: white; font-size: 
11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> A, phase </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> zip(As, phases):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x = onp.random.uniform(low=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">-5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, high=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">5.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, size=(inner_batch_size, </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">))</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> y = A * onp.sin(x + phase)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> xs.append(x)</span><span style="color: white; font-size: 11pt; 
vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> ys.append(y)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np.stack(xs), np.stack(ys)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x1, y1 = get_batch()</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x2, y2 = get_batch()</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x1, y1, x2, y2</span></span></div>
</td></tr>
</tbody></table>
</div>
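As a sanity check on the shapes this produces, here is a compact re-implementation of <span style="font-family: "courier new" , "courier" , monospace;">sample_tasks</span> in plain NumPy (a vectorized sketch, not the post's exact code): sampling 4 tasks with 20 points each yields arrays of shape <span style="font-family: "courier new" , "courier" , monospace;">(outer_batch_size, inner_batch_size, 1)</span>, which is the leading task axis that <span style="font-family: "courier new" , "courier" , monospace;">vmap</span> maps over.

```python
import numpy as onp

def sample_tasks(outer_batch_size, inner_batch_size):
    # One (amplitude, phase) pair per task.
    As = onp.random.uniform(0.1, 0.5, size=outer_batch_size)
    phases = onp.random.uniform(0., onp.pi, size=outer_batch_size)

    def get_batch():
        # Sample inputs for all tasks at once, then broadcast the
        # per-task amplitude/phase over the inner batch dimension.
        xs = onp.random.uniform(-5., 5.,
                                size=(outer_batch_size, inner_batch_size, 1))
        ys = As[:, None, None] * onp.sin(xs + phases[:, None, None])
        return xs, ys

    x1, y1 = get_batch()  # inner (support) split
    x2, y2 = get_batch()  # outer (query) split
    return x1, y1, x2, y2
```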
<div>
<br /></div>
<div>
Now for the training loop, which strongly resembles the single-task one above. As you can see, gradient-based meta-learning requires managing two sources of variance: that of the <i>intra-task</i> gradients for the inner loss, and that of the <i>inter-task</i> gradients for the outer loss.</div>
<br />
<div dir="ltr" style="margin-left: 0pt;">
<table style="border-collapse: collapse; border: none;"><colgroup></colgroup><tbody>
<tr style="height: 0pt;"><td style="background-color: #333333; padding: 5pt 5pt 5pt 5pt; vertical-align: top;"><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;">
<span style="font-family: "courier new" , "courier" , monospace;"><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">opt_init, opt_update = optimizers.adam(step_size=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1e-3</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">out_shape, net_params = net_init(in_shape)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">opt_state = opt_init(net_params)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># vmapped version of maml loss.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #888888; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"># returns scalar for all tasks.</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">batch_maml_loss</span><span style="color: white; font-size: 11pt; vertical-align: 
baseline; white-space: pre-wrap;">(p, x1_b, y1_b, x2_b, y2_b):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> task_losses = vmap(partial(maml_loss, p))(x1_b, y1_b, x2_b, y2_b)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np.mean(task_losses)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fc9b9b; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">@jit</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">def</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #ffffaa; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">step</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">(i, opt_state, x1, y1, x2, y2):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> p = optimizers.get_params(opt_state)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: 
pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> g = grad(batch_maml_loss)(p, x1, y1, x2, y2)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> l = batch_maml_loss(p, x1, y1, x2, y2)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">return</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> opt_update(i, g, opt_state), l</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">np_batched_maml_loss = []</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">K=</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">20</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">for</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> i </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">in</span><span style="color: white; font-size: 11pt; 
vertical-align: baseline; white-space: pre-wrap;"> range(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">20000</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">):</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> x1_b, y1_b, x2_b, y2_b = sample_tasks(</span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">4</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">, K)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> opt_state, l = step(i, opt_state, x1_b, y1_b, x2_b, y2_b)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> np_batched_maml_loss.append(l)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="color: #fcc28c; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">if</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> i % </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">1000</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> == </span><span style="color: #d36363; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">0</span><span style="color: white; 
font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">:</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"> print(i)</span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;"><br /></span><span style="color: white; font-size: 11pt; vertical-align: baseline; white-space: pre-wrap;">net_params = optimizers.get_params(opt_state)</span></span></div>
</td></tr>
</tbody></table>
</div>
<br />
When we plot the MAML objective as a function of training step, we see that the batched MAML trains much faster (as a function of gradient steps) and also has lower variance during training.</div>
<div>
<br />
<div class="separator" style="clear: both;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEl80tcGJjseyNw45g3qmoeodrfw28HX7KhDfs3J3dbsH6zU5MdpSwJplFm_iP7LpTQ-mzERmsEHw-PiLGGILjVfShcA4lMBbXRj2XcveN9q-YnudVBb-Rwv3Yl-uiD1qpuJbhGL4rZrc/s1600/download+%25284%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="252" data-original-width="381" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEl80tcGJjseyNw45g3qmoeodrfw28HX7KhDfs3J3dbsH6zU5MdpSwJplFm_iP7LpTQ-mzERmsEHw-PiLGGILjVfShcA4lMBbXRj2XcveN9q-YnudVBb-Rwv3Yl-uiD1qpuJbhGL4rZrc/s400/download+%25284%2529.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
<h3>
Conclusions</h3>
<br />
In this tutorial we explored the MAML algorithm and reproduced the Sinusoid regression task from the paper in about 50 lines of Python code. I was very pleasantly surprised to find how easy <span style="font-family: "courier new" , "courier" , monospace;">grad</span>, <span style="font-family: "courier new" , "courier" , monospace;">vmap</span>, and <span style="font-family: "courier new" , "courier" , monospace;">jit</span> made it to implement MAML, and I am excited to continue using it for my own meta-learning research.<br />
<br />
So, what are the distinctions between “optimization”, “learning”, “adaptation”, and “memory”? I believe they are all equivalent, because it is possible to implement memory capabilities with optimization techniques (MAML) and vice versa (e.g. RNN-based meta reinforcement learning). In reinforcement learning, imitating a teacher, conditioning on a user-specified goal, and recovering from a failure can all use the same machinery.<br />
<br />
Thinking about precise definitions of “learning” and “meta-learning”, and attempting to reconcile them with the capabilities of biological intelligence, have led me to realize that every process in Life itself, spanning molecular reactions, behavioral adaptation, and genetic evolution, is nothing more than learning happening at many time scales. I’ll have much more to say on the topic of Artificial Life and Machine Learning in the future, but for now, thank you for reading this humble tutorial on fitting sinusoidal functions!<br />
<br />
<h3>
Acknowledgements</h3>
Thanks to Matthew Johnson for helping to proofread this post and helping me to resolve JAX questions.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-1209759151577969272019-02-05T23:35:00.003-08:002019-07-05T13:42:57.808-07:00Thoughts on the BagNet PaperSome thoughts on the interesting <a href="https://openreview.net/forum?id=SkfMWhAqYQ">BagNet paper</a> (accepted at ICLR 2019) currently being circulated around the Machine Learning Twitter Community.<br />
<div>
<br />
<div>
Disclaimer: I wasn't a reviewer of this paper for ICLR. I think it was worthy of acceptance to the conference, and hope it prompts further investigation by the research community. Please feel free to email me if you spot any mistakes / misunderstandings in this post.<br />
<br />
<h3>
Paper Summary:</h3>
Deep Convolutional Networks (CNNs) work by aggregating local features via learned convolutions followed by spatial pooling. Successive application of these "convolutional layers" results in a "hierarchy of features" that integrate low-level information across a wide spatial extent to form high-level information. </div>
<div>
<br /></div>
<div>
As for algorithmic solutions, those aboard the deep learning hype train (myself included) believe that current deep CNNs perform global integration of information. There is a hand-wavy notion that intelligent visual understanding requires "seeing the forest for the trees." </div>
<div>
<br />
In the BagNet paper, the authors find that for the ImageNet classification task, the following algorithm (BagNet) works surprisingly well (86% top-5 accuracy) in comparison to the deep AlexNet model (84.7% top-5 accuracy):<br />
<div>
<div>
<br /></div>
<div>
1) Chop the input image into 33x33 patches.</div>
<div>
2) Run each patch through a deep net (1x1 convolutions) to get a class vector.</div>
<div>
3) Add up the resulting class vectors spatially (across all patches).</div>
<div>
4) Predict the class with the most counts.</div>
<div>
<br /></div>
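As a rough sketch, the four steps above can be written in a few lines of plain NumPy. Here patch_scorer is a hypothetical stand-in for the paper's patch-level network (a ResNet variant whose receptive field is limited to 33x33 pixels); any function mapping a patch to class logits will do:

```python
import numpy as np

def bagnet_predict(image, patch_scorer, patch_size=33, stride=8):
    """Sketch of BagNet-style inference: score local patches independently,
    then sum the per-patch class logits spatially.

    patch_scorer is a hypothetical stand-in for the patch-level network:
    any function mapping a (patch_size, patch_size, C) array to class logits.
    """
    H, W, _ = image.shape
    logit_sum = None
    # 1) chop the image into (possibly overlapping) 33x33 patches
    for i in range(0, H - patch_size + 1, stride):
        for j in range(0, W - patch_size + 1, stride):
            patch = image[i:i + patch_size, j:j + patch_size]
            # 2) run each patch through the patch-level model
            class_logits = patch_scorer(patch)
            # 3) sum the resulting class vectors across all patches
            logit_sum = class_logits if logit_sum is None else logit_sum + class_logits
    # 4) predict the class with the largest total
    return int(np.argmax(logit_sum))
```

Crucially, no non-linearity is applied after step 3: spatially distinct patches never interact except through the final sum.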
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE2dD_AonL8xFyvNBRtWS_VwXpj8nCaW388zJJsaIBJLfrtqJOYUg-Hejt6MhN-SbwpOLk2wRpYuM95iK3h3J4DbT6KcL46xyjyq0rzI1wl-7733h7OALHVQQdTtzmXvxes4SUjsfouJc/s1600/Screen+Shot+2019-02-05+at+9.10.58+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="875" data-original-width="1600" height="350" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE2dD_AonL8xFyvNBRtWS_VwXpj8nCaW388zJJsaIBJLfrtqJOYUg-Hejt6MhN-SbwpOLk2wRpYuM95iK3h3J4DbT6KcL46xyjyq0rzI1wl-7733h7OALHVQQdTtzmXvxes4SUjsfouJc/s640/Screen+Shot+2019-02-05+at+9.10.58+PM.png" width="640" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
By way of analogy, it suggests that for image classification, you don't need a non-linear model to integrate a bunch of local features into a global representation, you just need to <b>"count a bunch of trees to guess that it's a forest".</b></div>
<div>
<br />
<div>
Some other experimental conclusions:<br />
<ul>
<li>BagNet works slightly better when using 33x33 patches compared to 17x17 patches (80%). So deep nets <i>do</i> extract useful spatial information (9x9 vs. 17x17 vs. 33x33), just perhaps not to the global spatial extent we might have previously imagined (e.g. 112x112, 224x224).</li>
<li>Spatially distinct features from the BagNet model do not interact beyond the bagging step. This raises the question of whether most of the "power" of deep nets comes from merely examining local features. Are Deep Nets just BagNets? This would be quite concerning if that were the case! </li>
<li>VGG appears to approximate BagNets quite well (though I am a bit skeptical about the authors' methodology for showing this), while DenseNets and ResNets appear to be doing something totally different from BagNets (the authors explain in the rebuttal that this may come from "(1) a more non-linear classifier on top of the local features or (2) larger local feature sizes").</li>
</ul>
<h3>
Thoughts & Questions</h3>
<div>
Regardless of your beliefs on whether CNNs can/should take us all the way to Artificial General Intelligence or not, this paper offers a neat bit of evidence that we can build surprisingly powerful image classification models by only examining local features. It is often more helpful to tackle applied problems with a more interpretable model, and I'm glad to see such models doing surprisingly well for certain problems.</div>
<div>
<br /></div>
<div>
BagNet seems quite similar in principle to <a href="https://en.wikipedia.org/wiki/Generalized_additive_model">Generalized Additive Models</a>, which predate Deep Learning quite a bit. The basic idea of GAMs is to combine non-linear univariate features (i.e. $f(x_i)$, where each $x_i$ is a pixel and $f$ is a neural net) into simple, interpretable features, so that the marginal predictive distribution with respect to each variable can be interrogated. I'm particularly excited about ideas like <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/kdd13.pdf">Lou et al.</a>, which relax GAMs to support pairwise interactions between univariate feature extractors (2D marginals are still interpretable to humans).</div>
<div>
<br /></div>
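To make the additive structure concrete, here is a toy GAM-style predictor (the shape functions below are made up for illustration, not learned from data): the output is a plain sum of univariate non-linear terms, so each input's contribution to the prediction can be read off directly.

```python
import numpy as np

# Toy generalized additive model: prediction = f_1(x_1) + f_2(x_2) + f_3(x_3).
# These shape functions are hypothetical; a real GAM learns them from data.
feature_fns = [
    lambda x: np.sin(x),
    lambda x: 0.5 * x ** 2,
    lambda x: -np.abs(x),
]

def gam_predict(x):
    # Each feature contributes independently; there are no interaction terms.
    contributions = [f(xi) for f, xi in zip(feature_fns, x)]
    return contributions, sum(contributions)

contribs, y = gam_predict([np.pi / 2, 2.0, -1.0])
# Interpretability: the prediction decomposes exactly into per-input terms.
print(contribs, y)
```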
<div>
<div>
The authors do not claim this explicitly, but it's easy to skim the paper quickly and think "DNNs suck; they are nothing more than BagNets". That's not actually the case (and the authors' experiments suggest this).</div>
<div>
<br /></div>
<div>
One counterexample: adversarial examples are clear instances where local modifications (sometimes a single pixel) can change global feature representations. So it is clear that global shape integration is happening for test inputs. The remaining question is whether global shape integration is happening <i>where we think it should happen</i>, and on which <i>tasks</i> this happens. As someone who is deeply interested in AGI, I find ImageNet much less interesting now, precisely because it can be solved with models that have little global understanding of images.</div>
<div>
<br /></div>
</div>
<div>
The authors also say this much themselves, that we need harder tasks that <i>require</i> global shape integration.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBndXF8p1RuOyyIkCAWALDnRWCPctvF5AIQaXt-zbo6RkPnyLDMVA1KorYEUI5M3Xc-EcWLmwNFnMeO7IyG11Zphxh-1zaDx2299K6apTPgr5j9Y8zTDOWWTwuRJk_1okGbSw9epJV3Yo/s1600/Screen+Shot+2019-02-05+at+9.06.36+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="358" data-original-width="1276" height="177" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBndXF8p1RuOyyIkCAWALDnRWCPctvF5AIQaXt-zbo6RkPnyLDMVA1KorYEUI5M3Xc-EcWLmwNFnMeO7IyG11Zphxh-1zaDx2299K6apTPgr5j9Y8zTDOWWTwuRJk_1okGbSw9epJV3Yo/s640/Screen+Shot+2019-02-05+at+9.06.36+PM.png" width="640" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
Generative modeling of images (e.g. GANs) is a task where it's quite clear that linear interactions between patch features are insufficient to model the unconditional joint distribution across pixels. Or consider my favorite RL task, Life on Earth, in which agents clearly need to perform spatial reasoning to solve problems like chasing prey and running away from predators. It would be fun to design an artificial life setup and see if organisms using bag-of-features perception can actually compete with organisms that use non-linear global integration (I doubt it).</div>
<div>
<br /></div>
<div>
If we train a model that <i>should</i> do better by integrating global information (i.e. classification), and it ends up just overfitting to local features, then this is a truly interesting result - it means that we need an optimization objective that does not allow models to cheat in this way. I think "Life on Earth" is a great task for this, though I hope to find one that is less computationally intensive :)</div>
<div>
<br /></div>
<div>
Finally, a word on interpretability vs. causal inference. In the near term, I could see BagNet being useful for self-driving cars, where the parallelizability of considering each patch separately would give even better speedups for large images. Everyone wants ML models on self-driving cars to be interpretable, right? But there is also the psychological question of whether a human would prefer to get in a car that drives with a black box CNN that is "accurate, uninterpretable, and maybe wrong", or whether they want a car that makes decisions using Bag-of-Features: "accurate, interpretable, and <i>definitely</i> wrong". Lobbying for interpretability (as used by BagNet) seems to be at odds with demands for "causal inference" and "program induction" by means of achieving better generalizable machine learning, because a strong assumption of causal inference is that your model can express the true causal distribution. I'm curious how members of the community think we should reconcile this difference.<br />
<br />
<i>Update (Feb 9):</i> There is a more positive way to look at these methods for better causal inference. Methods like BagNet can serve as a very useful sanity check when designing end-to-end systems (like robotics, self-driving cars): if your deep net is <i>not</i> performing much better than a system only examining local statistical regularities (like BagNet), it is a good sign that your model may still yet benefit from better global information integration. One might even consider jointly optimizing <i>BagNet</i> and <i>Advantage(DeepNet, BagNet) </i>so that the DeepNet must explicitly extract strictly better information than what BagNet does. I have been thinking of how to better verify our ML systems for robotics and building such "null hypothesis" models can be a good way to check that they aren't learning something silly.</div>
</div>
</div>
</div>
</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-7539084631860414922018-12-28T10:14:00.000-08:002019-11-09T22:40:14.545-08:00Uncertainty: a TutorialA PDF version of this post can be found <a href="https://drive.google.com/open?id=1swsAR8q5nJMB1SE6cQBKHrA1tCAsU_EP">here</a>.<br />
<a href="https://www.jianshu.com/p/dc9128123afc">Chinese translation by Xiaoyi Yin</a><br />
<br />
Notions of <b>uncertainty </b>are tossed around in conversations around AI safety, risk management, portfolio optimization, scientific measurement, and insurance. Here are a few examples of colloquial use:<br />
<br />
<div>
<ul>
<li>"We want machine learning models to know what they don't know.''</li>
<li>"An AI responsible for diagnosing patients and prescribing treatments should tell us how confident it is about its recommendations.''</li>
<li>"Significant figures in scientific calculations represent uncertainty in measurements.''</li>
<li>"We want autonomous agents to explore areas where they are uncertain (about rewards or predictions) so that they may discover sparse rewards.''</li>
<li>"In portfolio optimization, we want to maximize returns while limiting risk.''</li>
<li>"US equity markets finished disappointingly in 2018 due to increased geopolitical uncertainty.''</li>
</ul>
</div>
<div>
<div>
<div>
<br /></div>
<div>
<br /></div>
<div>
What exactly then, is uncertainty? </div>
<div>
<b><br /></b></div>
<div>
<b>Uncertainty </b>measures reflect the amount of <a href="https://en.wikipedia.org/wiki/Statistical_dispersion">dispersion</a> of a random variable. In other words, it is a scalar measure of how "random" a random variable is. In finance, it is often referred to as <b>risk</b>.</div>
<div>
<br /></div>
<div>
There is no single formula for uncertainty because there are many different ways to measure dispersion: standard deviation, variance, value-at-risk (VaR), and entropy are all appropriate measures. However, it's important to keep in mind that a single scalar number cannot paint a full picture of "randomness'', as that would require communicating the entire random variable itself! </div>
<div>
<br /></div>
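As a quick illustration (with hypothetical samples standing in for, say, portfolio returns), here is how several of these dispersion measures can be computed for the same random variable:

```python
import numpy as np

# Hypothetical samples from a standard normal "return" distribution.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

std = samples.std()                       # standard deviation
var = samples.var()                       # variance
# 5% value-at-risk: the loss threshold exceeded only 5% of the time.
value_at_risk = -np.percentile(samples, 5)
# Differential entropy of a Gaussian: 0.5 * log(2 * pi * e * sigma^2).
entropy = 0.5 * np.log(2 * np.pi * np.e * var)

print(std, var, value_at_risk, entropy)
```

Each of these collapses the same distribution into a different scalar (here roughly 1.0, 1.0, 1.64, and 1.42 respectively), which is exactly why no single one tells the whole story.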
<div>
Nonetheless, it is helpful to collapse randomness down to a single number for the purposes of optimization and comparison. The important thing to remember is that "more uncertainty'' is usually regarded as "less good'' (except in simulated RL experiments).</div>
</div>
</div>
<div>
<br /></div>
<h3>
Types of Uncertainty</h3>
<div>
<div>
<br /></div>
<div>
Statistical machine learning concerns itself with the estimation of models $p(\theta|\mathcal{D})$, which in turn estimate unknown random variables $p(y|x)$. Multiple forms of uncertainty come into play here. Some notions of uncertainty describe inherent randomness that we should expect (e.g. outcome of a coin flip) while others describe our lack of confidence about our best guess of the model parameters.</div>
<div>
<br /></div>
<div>
To make things more concrete, let's consider a recurrent neural network (RNN) that predicts the amount of rainfall today from a sequence of daily barometer readings. A barometer measures atmospheric pressure, which often drops when it's about to rain. Here's a diagram summarizing the rainfall prediction model along with different kinds of uncertainty.</div>
</div>
<div>
<br /></div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhALWDfpsEhD1F-FqsOPU4tOsa9ll88uFcvpUSGo3lqNUhgsNVgxVkX8cu0yfeiasTZITAeb2tLq5dHDN8aYikLWAq806RboL2FnwlDWDj65C7LTHb-BQHGG75Z3O9HvqGi80sSShPs06o/s1600/barometer_uncertainty.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="710" data-original-width="1430" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhALWDfpsEhD1F-FqsOPU4tOsa9ll88uFcvpUSGo3lqNUhgsNVgxVkX8cu0yfeiasTZITAeb2tLq5dHDN8aYikLWAq806RboL2FnwlDWDj65C7LTHb-BQHGG75Z3O9HvqGi80sSShPs06o/s640/barometer_uncertainty.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Uncertainty can be understood from a simple machine learning model that attempts to predict daily rainfall from a sequence of barometer readings. Aleatoric uncertainty is irreducible randomness that arises from the data collection process. Epistemic uncertainty reflects confidence that our model is making the correct predictions. Finally, out-of-distribution errors arise when the model sees an input that differs from its training data (e.g. temperature of the sun, other anomalies).</td></tr>
</tbody></table>
<div>
<br /></div>
<h4>
Aleatoric Uncertainty</h4>
<div>
<div>
Aleatoric Uncertainty draws its name from the Latin root <i>aleatorius</i>, which means the <a href="https://en.wikipedia.org/wiki/Aleatoricism%7D">incorporation of chance into the process of creation</a>. It describes randomness arising from the data generating process itself; noise that cannot be eliminated by simply drawing more data. It is the coin flip whose outcome you cannot know.</div>
<div>
<br /></div>
<div>
In our rainfall prediction analogy, aleatoric noise arises from imprecision of the barometer. There are also important variables that the data collection setup does not observe: How much rainfall was there yesterday? Are we measuring barometric pressure in the present day, or the last ice age? These unknowns are inherent to our data collection setup, so collecting more data from that system does not absolve us of this uncertainty.</div>
<div>
<br /></div>
<div>
Aleatoric uncertainty propagates from the inputs to the model predictions. Consider a simple model $y = 5x$, which takes in normally-distributed input $x \sim \mathcal{N}(0,1)$. In this case, $y \sim \mathcal{N}(0, 25)$, so the aleatoric uncertainty of the predictive distribution can be described by $\sigma=5$. Of course, predictive aleatoric uncertainty is more challenging to estimate when the random structure of the input data $x$ is not known.</div>
<div>
<br /></div>
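This propagation is easy to verify with a small Monte Carlo sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1_000_000)  # input noise: x ~ N(0, 1)
y = 5.0 * x                               # the deterministic model y = 5x

# The input's aleatoric spread is scaled by the model:
# std(y) = 5 * std(x), so sigma = 5 for the predictive distribution.
print(y.std())  # ~5.0
```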
<div>
One might think that because aleatoric uncertainty is irreducible, one cannot do anything about it and so we should just ignore it. No! One thing to watch out for when training models is to choose an output representation capable of representing aleatoric uncertainty correctly. A standard LSTM does not emit probability distributions, so attempting to learn the outcome of a coin flip would just converge to the mean. In contrast, models for language generation emit a sequence of categorical distributions (words or characters), which can capture the inherent ambiguity in sentence completion tasks. </div>
</div>
<div>
<br /></div>
<h4>
Epistemic Uncertainty</h4>
<div>
<div>
<br /></div>
<div>
<i>"Good models are all alike; every bad model is wrong in its own way."</i></div>
<div>
<br /></div>
<div>
Epistemic Uncertainty is derived from the Greek root <i>epistēmē</i>, which pertains to <a href="https://en.wikipedia.org/wiki/Epistemology">knowledge about knowledge</a>. It measures our ignorance of the correct prediction arising from our ignorance of the correct model parameters.</div>
<div>
<br /></div>
<div>
Below is a plot of a Gaussian Process Regression model on some toy 1-dimensional dataset. The confidence intervals reflect epistemic uncertainty; the uncertainty is zero for training data (red points), and as we get farther away from training points, the model ought to assign higher standard deviations to the predictive distribution. Unlike aleatoric uncertainty, epistemic uncertainty can be reduced by gathering more data and "ironing out" the regions of inputs where the model lacks knowledge.</div>
</div>
<div>
<br /></div>
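This behavior can be reproduced with a minimal GP regression sketch (plain NumPy, squared-exponential kernel, illustrative hyperparameters): the posterior standard deviation collapses to nearly zero at the training inputs and grows back toward the prior far away from them.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel between 1-D input arrays a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train            # posterior predictive mean
    cov = K_ss - K_s.T @ K_inv @ K_s          # posterior predictive covariance
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

x_train = np.array([-2.0, 0.0, 1.5])
y_train = np.sin(x_train)
x_test = np.array([0.0, 6.0])  # one seen input, one far from the data
mean, std = gp_posterior(x_train, y_train, x_test)
print(std)  # small at x=0, close to the prior (1.0) at x=6
```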
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiISOiqcOPJfZgxyEcA4hVm1klSV8QJMk4SpQj5eT5yPBBoHjr4ITI1qauc1_kYs8bc_u-dRTFUOPUlYYrakVbvsTr0gX-eMbxuNcpo4GxiQ5vwNhw5OU5xxWbGA4DdQIY8jnsmCebIKIo/s1600/plot_gp_regression_001.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="600" data-original-width="800" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiISOiqcOPJfZgxyEcA4hVm1klSV8QJMk4SpQj5eT5yPBBoHjr4ITI1qauc1_kYs8bc_u-dRTFUOPUlYYrakVbvsTr0gX-eMbxuNcpo4GxiQ5vwNhw5OU5xxWbGA4DdQIY8jnsmCebIKIo/s640/plot_gp_regression_001.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">1-D Gaussian Process Regression Model showcasing epistemic uncertainty for inputs outside its training set.</td></tr>
</tbody></table>
<div>
<div>
There is a <a href="https://arxiv.org/abs/1810.05148">rich</a> <a href="https://arxiv.org/abs/1711.00165">line</a> <a href="https://arxiv.org/abs/1511.02222">of</a> <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.5292&rep=rep1&type=pdf">inquiry</a> connecting Deep Learning to Gaussian Processes. The hope is that we can extend the uncertainty-awareness properties of GPs with the representational power of neural networks. Unfortunately, GPs are challenging to scale to the uniform stochastic minibatch setting for large datasets, and they have fallen out of favor among those working on large models and datasets.</div>
<div>
<br /></div>
<div>
If one wants maximum flexibility in choosing their model family, a good alternative to estimating uncertainty is to use ensembles, which is just a fancy way of saying "multiple independently learned models''. While GP models analytically define the predictive distribution, ensembles can be used to compute the <i>empirical distribution</i> of predictions.</div>
<div>
<br /></div>
<div>
Any individual model will make some errors due to randomized biases that occur during the training process. Ensembling is powerful because other models in the ensembles tend to expose the idiosyncratic failures of a single model while agreeing with the correctly inferred predictions.</div>
</div>
<div>
<br /></div>
<div>
<div>
How do we sample models randomly to construct an ensemble? In <a href="https://en.wikipedia.org/wiki/Bootstrap_aggregating">Ensembling with bootstrap aggregation</a>, we start with a training dataset of size $N$ and sample $M$ datasets of size $N$ from the original training set (with replacement, so each bootstrapped dataset typically omits some examples and repeats others). The $M$ models are trained on their respective datasets, and their resulting predictions collectively form an empirical predictive distribution.</div>
<div>
<br /></div>
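A minimal sketch of this procedure (plain NumPy, with linear least-squares fits standing in for neural network training):

```python
import numpy as np

def fit_linear(x, y):
    # Least-squares fit of y = w*x + b; a stand-in for training one model.
    A = np.stack([x, np.ones_like(x)], axis=1)
    w, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return w, b

def bootstrap_ensemble(x, y, num_models=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    models = []
    for _ in range(num_models):
        idx = rng.integers(0, n, size=n)  # sample N indices with replacement
        models.append(fit_linear(x[idx], y[idx]))
    return models

def empirical_predictive(models, x_query):
    # Every member predicts; the spread across members is the uncertainty.
    preds = np.array([w * x_query + b for w, b in models])
    return preds.mean(), preds.std()

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, size=200)  # true model: y = 3x + 0.5
models = bootstrap_ensemble(x, y)
mean, spread = empirical_predictive(models, x_query=0.5)
print(mean, spread)  # mean near 2.0, with a small nonzero spread
```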
<div>
If training multiple models is too expensive, it is also possible <a href="https://arxiv.org/abs/1506.02142">to use Dropout training</a> to approximate a model ensemble. However, introducing dropout involves an extra hyperparameter and can compromise single model performance (often unacceptable for real world applications where calibrated uncertainty estimation is secondary to accuracy). </div>
<div>
<br /></div>
<div>
Therefore, if one has access to plentiful computing resources (as one does at Google), it is often easier to just re-train multiple copies of a model. This also yields the benefits of ensembling without hurting performance. This is the approach taken by the <a href="https://arxiv.org/pdf/1612.01474.pdf">Deep Ensembles</a> paper. The authors of this paper also mention that the random training dynamics induced by differing weight initializations was sufficient to introduce a diverse set of models without having to resort to reducing the training set diversity via bootstrap aggregation. From a practical engineering standpoint, it's smart to bet on risk estimation methods that do not get in the way of the model's performance or whatever other ideas the researcher wants to try.</div>
</div>
<div>
<br /></div>
<div>
<br /></div>
<h4>
Out-of-Distribution Uncertainty</h4>
<div>
<div>
<br /></div>
<div>
For our rainfall predictor, what if instead of feeding in the sequence of barometer readings, we fed in the temperature of the sun? Or a sequence of all zeros? Or barometer readings from a sensor that reports in different units? The RNN will happily compute away and give us a prediction, but the result will likely be meaningless.</div>
<div>
<br /></div>
<div>
The model is totally unqualified to make predictions on data generated via a different procedure than the one used to create the training set. This is a failure mode that is often overlooked in benchmark-driven ML research, because we typically assume that the training, validation, and test sets consist entirely of clean i.i.d. data. </div>
<div>
<br /></div>
<div>
Determining whether inputs are "valid'' is a serious problem for deploying ML in the wild, and is known as the Out of Distribution (OoD) problem. OoD is also synonymous with <a href="https://eng.uber.com/neural-networks-uncertainty-estimation/">model misspecification error</a> and <a href="https://arxiv.org/abs/1809.04729">anomaly detection</a>.</div>
<div>
<br /></div>
<div>
Besides its obvious importance for hardening ML systems, anomaly detection models are an intrinsically useful technology. For instance, we might want to build a system that monitors a healthy patient's vitals and alerts us when something goes wrong without necessarily having seen that pattern of pathology before. Or we might be managing the "health" of a datacenter and want to know whenever unusual activity occurs (disks filling up, security breaches, hardware failures, etc.)</div>
<div>
<br /></div>
<div>
Since OoD inputs only occur at test-time, we should not presume to know the distribution of anomalies the model encounters. This is what makes OoD detection tricky - we have to harden a model against inputs it never sees during training! This is exactly the standard attack scenario described in <a href="https://en.wikipedia.org/wiki/Adversarial_machine_learning">Adversarial Machine Learning</a>.</div>
<div>
<br /></div>
<div>
There are two ways to handle OoD inputs for a machine learning model: 1) catch the bad inputs before we even put them through the model, or 2) let the "weirdness'' of model predictions imply to us that the input was probably malformed.</div>
<div>
<br /></div>
<div>
In the first approach, we assume nothing about the downstream ML task, and simply consider the problem of whether an input is in the training distribution or not. This is exactly what discriminators in Generative Adversarial Networks (GANs) are supposed to do. However, a single discriminator is not completely robust because it is only good for discriminating between the true data distribution and whatever the generator's distribution is; it can give arbitrary predictions for an input that lies in neither distribution. </div>
<div>
<br /></div>
<div>
Instead of a discriminator, we could build a density model of the in-distribution data, such as a kernel density estimator or fitting a <a href="https://blog.evjang.com/2018/01/nf1.html">Normalizing Flow</a> to the data. Hyunsun Choi and I investigated this in our recent paper on using modern <a href="https://arxiv.org/abs/1810.01392">generative models to do OoD detection</a>.</div>
<div>
<br /></div>
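As a toy version of this idea, even a 1-D kernel density estimator over in-distribution data can flag anomalous inputs by their low likelihood (a sketch only; a real pipeline would fit a learned density model such as a normalizing flow to images):

```python
import numpy as np

def kde_log_density(x_query, x_train, bandwidth=0.5):
    # Gaussian kernel density estimate of log p(x_query), in 1-D.
    diffs = (x_query - x_train) / bandwidth
    kernel_vals = np.exp(-0.5 * diffs ** 2) / np.sqrt(2.0 * np.pi)
    return np.log(kernel_vals.mean() / bandwidth)

rng = np.random.default_rng(0)
x_train = rng.normal(0.0, 1.0, size=5_000)  # in-distribution training data

in_dist_score = kde_log_density(0.1, x_train)
ood_score = kde_log_density(9.0, x_train)   # far outside the training data
print(in_dist_score > ood_score)  # the OoD input gets much lower likelihood
```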
<div>
The second approach to OoD detection involves using the predictive (epistemic) uncertainty of the task model to tell us when inputs are OoD. Ideally, malformed inputs to a model ought to generate a "weird'' predictive distribution $p(y|x)$. For instance, <a href="https://arxiv.org/abs/1610.02136">Hendrycks and Gimpel</a> showed that the maximum softmax probability (the probability of the predicted class) for OoD inputs tends to be lower than that of in-distribution inputs. Here, uncertainty is inversely proportional to the "confidence'' as modeled by the max softmax probability. Models like Gaussian Processes give us these uncertainty estimates by construction, or we could compute epistemic uncertainty via Deep Ensembles.</div>
<div>
<br /></div>
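A sketch of this max-softmax-probability baseline (the threshold below is a hypothetical value; in practice it would be tuned on held-out in-distribution data, e.g. for a target false-positive rate):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    # Maximum softmax probability: higher suggests "more in-distribution".
    return softmax(logits).max()

def is_ood(logits, threshold=0.5):
    # threshold is a hypothetical value; tune it on held-out data in practice.
    return msp_score(logits) < threshold

confident = np.array([8.0, 0.5, 0.2])  # peaked prediction: likely in-distribution
flat = np.array([1.0, 1.1, 0.9])       # near-uniform prediction: suspicious
print(is_ood(confident), is_ood(flat))  # False True
```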
<div>
In reinforcement learning, OoD inputs are actually assumed to be a <i>good thing</i>, because they represent inputs from the world that the agent does not know how to handle yet. Encouraging the policy to find its own OoD inputs implements "intrinsic curiosity'' to <a href="https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/">explore regions the model predicts poorly in</a>. This is all well and good, but I do wonder what would happen if such curiosity-driven agents are deployed in real-world settings where sensors break easily and other experimental anomalies happen. How does a robot distinguish between "unseen states" (good) and "sensors breaking" (bad)? Might that result in agents that learn to interfere with their sensory mechanisms to generate maximum novelty?</div>
</div>
<div>
<br /></div>
<h3>
Who Will Watch the Watchdogs?</h3>
<div>
<div>
<br /></div>
<div>
As mentioned in the previous section, one way to defend ourselves against OoD inputs is to set up a likelihood model that "watchdogs" the inputs to a model. I prefer this approach because it de-couples the problem of OoD inputs from epistemic and aleatoric uncertainty in the task model. It makes things easy to analyze from an engineering standpoint.</div>
<div>
<br /></div>
<div>
But we should not forget that the likelihood model is also a function approximator, possibly with its own OoD errors! We show in our recent work on <a href="https://arxiv.org/abs/1810.01392">Generative Ensembles</a> (as was also shown in <a href="https://arxiv.org/abs/1810.09136">concurrent work by DeepMind</a>) that under a CIFAR likelihood model, natural images from SVHN can actually be more likely than the in-distribution CIFAR images themselves!</div>
</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheVKkvQFGFgR9q9KYg8fm3qpW1AmeOybQe5VZKnXbAT5Gn1YwgrcNi_KvQaRAnPd0imuXqhs9d0M_-w6M4MJ8EEhog1rEJD2gPkxPNcN-oStwTD6WeF6vcrl8ZzTiJO1EFcEinmd1BV5A/s1600/cifar-glow-ood.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="566" data-original-width="738" height="490" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheVKkvQFGFgR9q9KYg8fm3qpW1AmeOybQe5VZKnXbAT5Gn1YwgrcNi_KvQaRAnPd0imuXqhs9d0M_-w6M4MJ8EEhog1rEJD2gPkxPNcN-oStwTD6WeF6vcrl8ZzTiJO1EFcEinmd1BV5A/s640/cifar-glow-ood.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Likelihood estimation involves a function approximator that can itself be susceptible to OoD inputs. A likelihood model of CIFAR assigns higher probabilities to SVHN images than CIFAR test images!</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
However, all is not lost! It turns out that the <i>epistemic uncertainty</i> of likelihood models is an excellent OoD detector for the likelihood model itself. By bridging epistemic uncertainty estimation with density estimation, we can use ensembles of likelihood models to protect machine learning models against OoD inputs in a model-agnostic way.</div>
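A toy sketch of that idea, with bootstrapped 1-D Gaussian density models standing in for the deep generative models used in the paper: the OoD score is the ensemble's <i>disagreement</i> about the log-likelihood, which blows up where the density estimate itself is unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=2000)  # in-distribution 1-D data

def gaussian_loglik(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Ensemble of density models, each fit on a bootstrap resample of the data.
# (A toy stand-in: the paper ensembles deep generative models instead.)
members = []
for _ in range(10):
    boot = rng.choice(train, size=len(train), replace=True)
    members.append((boot.mean(), boot.std()))

def ood_score(x):
    # Variance of log-likelihoods across the ensemble: epistemic
    # uncertainty of the likelihood model itself.
    logliks = [gaussian_loglik(x, mu, s) for mu, s in members]
    return np.var(logliks)
```

Near the training data the members agree closely, so the score is small; far from the data their density estimates diverge and the score grows, even if any single member happens to assign the point a high likelihood.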
<div>
<br /></div>
<h3>
Calibration: the Next Big Thing?</h3>
<div>
<div>
<br /></div>
<div>
A word of warning: just because a model is able to spit out a confidence interval for a prediction doesn't mean the interval reflects the actual probabilities of outcomes in reality! </div>
<div>
<br /></div>
<div>
Confidence intervals (e.g. $2\sigma$) implicitly assume that your predictive distribution is Gaussian-distributed, but if the distribution you're trying to predict is multi-modal or heavy-tailed, then your model will not be well calibrated!</div>
<div>
<br /></div>
<div>
Suppose our rainfall RNN tells us that there will be $\mathcal{N}(4, 1)$ inches of rain today. If our model is <i>calibrated</i>, then if we were to repeat this experiment over and over again under identical conditions (possibly re-training the model each time), we really would observe empirical rainfall to be distributed exactly $\mathcal{N}(4, 1)$. </div>
<div>
<br /></div>
<div>
Machine Learning models developed by academia today mostly optimize for test accuracy or some fitness function. Researchers are not performing model selection by deploying the model in repeated identical experiments and measuring calibration error, so unsurprisingly, our models <a href="https://arxiv.org/abs/1706.04599">tend to be poorly calibrated</a>.</div>
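One common way to quantify miscalibration is expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its empirical accuracy. A minimal sketch, using the standard equal-width binning scheme for illustration:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: weighted average, over confidence bins, of the gap between
    # mean confidence and empirical accuracy within the bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of points in the bin
    return ece
```

A model that says "90% confident" but is right only half the time contributes a gap of 0.4 from that bin, whereas a model whose 70%-confidence predictions are right 70% of the time contributes nothing.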
<div>
<br /></div>
<div>
Going forward, if we are to trust ML systems deployed in the real world (robotics, healthcare, etc.), I think a much more powerful way to "prove our models understand the world correctly'' is to test them for statistical calibration. Good calibration also implies good accuracy, so it would be a strictly higher bar to optimize against. </div>
</div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<h3>
Should Uncertainty be Scalar?</h3>
<div>
<br /></div>
<div>
As useful as they are, scalar uncertainty measures will never be as informative as the random variables they describe. I find methods like particle filtering and Distributional Reinforcement Learning very cool because they are algorithms that operate on entire distributions, freeing us from resorting to simple normal distributions to keep track of uncertainty. Instead of shaping ML-based decision making with a single scalar of "uncertainty", we can now query the full structure of distributions when deciding what to do. </div>
<div>
<br /></div>
<div>
The <a href="https://arxiv.org/pdf/1806.06923.pdf">Implicit Quantile Networks</a> paper (Dabney et al.) has a very nice discussion on how to construct "risk-sensitive agents" from a return distribution. In some environments, one might favor an opportunistic policy that prefers to explore the unknown, while in other environments unknown things may be unsafe and should be avoided. The choice of <a href="https://en.wikipedia.org/wiki/Risk_measure">risk measure</a> essentially determines how to map the distribution of returns to a scalar quantity that can be optimized against. All risk measures can be computed from the distribution, so predicting full distributions lets us combine multiple definitions of risk easily. Furthermore, supporting flexible predictive distributions seems like a good way to improve model calibration.</div>
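As a concrete example of mapping a return distribution to a scalar, here is a sketch of one popular risk measure, Conditional Value-at-Risk: the mean of the worst α-fraction of sampled returns. (This is an illustration over empirical samples; IQN works with distortion risk measures over learned quantile functions.)

```python
import numpy as np

def cvar(returns, alpha=0.05):
    # Conditional Value-at-Risk: average of the worst alpha-fraction of
    # sampled returns. A risk-averse agent maximizes this instead of the
    # mean, ignoring how good the upside is.
    sorted_r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_r))))
    return sorted_r[:k].mean()
```

Two policies with identical mean return can have very different CVaR; only access to the full return distribution (or its samples/quantiles) makes this distinction visible to the optimizer.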
<div>
<br /></div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBRLhc_MKH_R4i9bE7mu1ROs1fxnqWE5N8oXhVU76uDgDad_wyz7xsNKgEEY1NiVLCms1oiybWNOPVQb42sgZJBcq2HvYTuo5xAUDKsEq2Ua1YOh2HjicGFcRmgnfityj0Omuhz0k1oNs/s1600/iqn_snap.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="452" data-original-width="727" height="396" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBRLhc_MKH_R4i9bE7mu1ROs1fxnqWE5N8oXhVU76uDgDad_wyz7xsNKgEEY1NiVLCms1oiybWNOPVQb42sgZJBcq2HvYTuo5xAUDKsEq2Ua1YOh2HjicGFcRmgnfityj0Omuhz0k1oNs/s640/iqn_snap.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Performance of various risk measures on Atari games as reported by the <a href="https://arxiv.org/abs/1806.06923">IQN paper</a>.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
Risk measures are a deeply important research topic for financial asset managers. The vanilla Markowitz portfolio objective minimizes a weighted variance of portfolio returns, $\frac{1}{2}\lambda w^T \Sigma w$. However, variance is an unintuitive choice of "risk" in financial contexts: most investors don't mind returns exceeding expectations, but rather wish to minimize the probability of small or negative returns. For this reason, risk measures like Value-at-Risk, Shortfall Probability, and Target Semivariance, which only pay attention to the likelihood of "bad" outcomes, are more useful objectives to optimize.<br />
<br />
Unfortunately, they are also more difficult to work with analytically. My hope is that research into distributional RL, Monte Carlo methods, and flexible generative models will allow us to build differentiable relaxations of risk measures that can play nicely with portfolio optimizers. If you work in finance, I highly recommend reading the IQN paper's "Risks in Reinforcement Learning" section.</div>
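To make two of these measures concrete, here are minimal sample-based sketches of Value-at-Risk and target semivariance. These are illustrations over empirical return samples; practitioners typically compute them with more care (parametric models, backtesting, etc.).

```python
import numpy as np

def value_at_risk(returns, alpha=0.05):
    # VaR_alpha: the loss threshold that is exceeded with probability alpha.
    # Reported as a positive number when the alpha-quantile return is negative.
    return -np.percentile(np.asarray(returns, dtype=float), 100 * alpha)

def target_semivariance(returns, target=0.0):
    # Only downside deviations below the target count as "risk";
    # returns exceeding the target are not penalized at all.
    downside = np.minimum(np.asarray(returns, dtype=float) - target, 0.0)
    return np.mean(downside ** 2)
```

Note how both measures are indifferent to the upside tail, unlike variance: doubling the best outcomes leaves VaR and target semivariance unchanged.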
</div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<h3>
Summary</h3>
<div>
<br /></div>
<div>
Here's a recap of the main points of this post:</div>
<div>
<ul>
<li>Uncertainty/risk measures are scalar measures of "randomness". Collapsing a random variable to a single number is done for optimization and mathematical convenience.</li>
<li>Predictive uncertainty can be decomposed into aleatoric (irreducible noise arising from data collection process), epistemic (ignorance about true model), and out-of-distribution (at test time, inputs may be malformed).</li>
<li>Epistemic uncertainty can be mitigated by softmax prediction thresholding or ensembling.</li>
<li>Instead of propagating OoD uncertainty to predictions, we can use a task-agnostic filtering mechanism that safeguards against "malformed inputs".</li>
<li>Density models are a good choice for filtering inputs at test time. However, it's important to recognize that density models are merely approximations of the true density function, and are themselves susceptible to out-of-distribution inputs.</li>
<li>Self-plug: <a href="https://arxiv.org/abs/1810.01392">Generative Ensembles</a> reduce epistemic uncertainty of likelihood models so they can be used to detect OoD inputs. </li>
<li>Calibration is important and underappreciated in research models.</li>
<li>Some algorithms (Distributional RL) extend ML algorithms to models that emit flexible distributions, which provides more information than a single risk measure.</li>
</ul>
<div>
<br /></div>
</div>
</div>
<div>
<h3>
Further Reading</h3>
<div>
<ul>
<li>I especially recommend Chapter 3 ("Risk Measurement") of <a href="https://www.amazon.com/Modern-Investment-Management-Equilibrium-Approach/dp/0471124109">Modern Investment Management </a>by Litterman et al. to learn about risk in a concrete way.</li>
<li><a href="http://uqpm2017.usacm.org/sites/default/files/DStarcuzzi_UQConf.pdf">http://uqpm2017.usacm.org/sites/default/files/DStarcuzzi_UQConf.pdf</a></li>
<li><a href="http://mlg.eng.cam.ac.uk/yarin/blog_2248.html">http://mlg.eng.cam.ac.uk/yarin/blog_2248.html</a></li>
</ul>
</div>
</div>
<h2>Machine Learning Memes (2018-11-30)</h2>A periodically-updated list of my favorite Deep Learning memes. Enjoy!<br />
<br />
content warning: may contain crude humor.<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh978w2dqgMioBTNc_qfSwyPbNWjre9MxjJ5jxDODc9zBL28Os0zVLm7Sh66jU3Xsy6EBqxW8kjtlZtKIEWRNDOn2ioKqIRkQ_RF8QhU-ooM10ajpEp0KZlZJHo-aRLHVxqZO9l4vgPHnk/s675/E3OEwuMWUAwfU1I.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="675" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh978w2dqgMioBTNc_qfSwyPbNWjre9MxjJ5jxDODc9zBL28Os0zVLm7Sh66jU3Xsy6EBqxW8kjtlZtKIEWRNDOn2ioKqIRkQ_RF8QhU-ooM10ajpEp0KZlZJHo-aRLHVxqZO9l4vgPHnk/w400-h296/E3OEwuMWUAwfU1I.jpg" width="400" /></a></div><br /><div><br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj00hrYn0oE2e6Xh4c_QgcAQph0flRf__tl6SY_9Jthx7yPRIkmAhsRNFxlkCpq-eELIgDBKDauKWomV4Z2fJz-TJlTDFKCcyyqb0fBc-pPut5tsev3_BHiDLrNLRw7m09JnjcIYl1Y_s0/s1231/163598846_3954156447999234_7937625739367812292_o.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1231" data-original-width="1080" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj00hrYn0oE2e6Xh4c_QgcAQph0flRf__tl6SY_9Jthx7yPRIkmAhsRNFxlkCpq-eELIgDBKDauKWomV4Z2fJz-TJlTDFKCcyyqb0fBc-pPut5tsev3_BHiDLrNLRw7m09JnjcIYl1Y_s0/w351-h400/163598846_3954156447999234_7937625739367812292_o.jpeg" width="351" /></a></div><br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgugN6IosAsBH4j0T2IhrqFfyQJzF6bAwQVdLlai5jO9Xx9mgpPo-EB25Q9FCbFIqH0Z3Wdu5R02HFhx2gbR29Vd3DEkJfFrsY108PKKdYPqNbd6NyU_HTzxKmrxQhldF0MIOyrwMb5D_M/s829/118823936_1170125176699612_3119017095284167246_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="829" 
data-original-width="829" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgugN6IosAsBH4j0T2IhrqFfyQJzF6bAwQVdLlai5jO9Xx9mgpPo-EB25Q9FCbFIqH0Z3Wdu5R02HFhx2gbR29Vd3DEkJfFrsY108PKKdYPqNbd6NyU_HTzxKmrxQhldF0MIOyrwMb5D_M/s320/118823936_1170125176699612_3119017095284167246_n.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8JToF5esEsbH3aSldwT04zhPTIwN1fyJVQtRfB3pIekaTRdyi-QQgaumHS6LDh7VIjEKH3pYACa8rm90dLV1TGsk0D8QgcZAV1s2EEamZGmTnX8pvEsdyriumhyphenhyphenHT2kNb8kfdR0bIa6I/s473/74682758_1014128165708350_6690467315135069753_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="424" data-original-width="473" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8JToF5esEsbH3aSldwT04zhPTIwN1fyJVQtRfB3pIekaTRdyi-QQgaumHS6LDh7VIjEKH3pYACa8rm90dLV1TGsk0D8QgcZAV1s2EEamZGmTnX8pvEsdyriumhyphenhyphenHT2kNb8kfdR0bIa6I/s320/74682758_1014128165708350_6690467315135069753_n.png" width="320" /></a></div><div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheQ4QyfcWqMmDvVAIZJ4SjzXd05LUu0ABTzUsITdBhVQtJho451DT8drSycHopP7-Q44fg3dGRkF1-GVk_Uel-FD5A2w6wVa_m2l9l16DgZ5DtBD9IjAZKMbJtJWbEEgID9k-xflE9YUY/s1600/104219183_1107034316342032_2629298085766284942_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheQ4QyfcWqMmDvVAIZJ4SjzXd05LUu0ABTzUsITdBhVQtJho451DT8drSycHopP7-Q44fg3dGRkF1-GVk_Uel-FD5A2w6wVa_m2l9l16DgZ5DtBD9IjAZKMbJtJWbEEgID9k-xflE9YUY/s400/104219183_1107034316342032_2629298085766284942_o.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKD5IB70gr6lJQGSEiq6TiG9beFC8EH_dRo6WN3kbQqqPWcj_a9cjO10WqJ_izmYf0MmHu4FECszkzl4f2MkcOMj2MOcw93_mDa_IJ_yOSmN_RvWSpBbT70Y_LZy25XGMEeysPckedYfU/s1600/97882351_981209505666883_4768195812104601600_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="447" data-original-width="682" height="261" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKD5IB70gr6lJQGSEiq6TiG9beFC8EH_dRo6WN3kbQqqPWcj_a9cjO10WqJ_izmYf0MmHu4FECszkzl4f2MkcOMj2MOcw93_mDa_IJ_yOSmN_RvWSpBbT70Y_LZy25XGMEeysPckedYfU/s400/97882351_981209505666883_4768195812104601600_n.png" width="400" /></a></div>
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvRVEug-oudeg_KA9m0GZTMLFSemji5oGOU_TRYpSH5gOeiv5YT7Tc7Tclgf5n7Y8e5bbjjmXDcOBKC2XpugNRGeHimZuHjoRBIIwt_4XPXQWBhlN8MTCiDuOv7SJhO7NpzVi3Ef6Cmhk/s1600/91871146_3064945716905784_5111201993536307200_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="632" data-original-width="622" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvRVEug-oudeg_KA9m0GZTMLFSemji5oGOU_TRYpSH5gOeiv5YT7Tc7Tclgf5n7Y8e5bbjjmXDcOBKC2XpugNRGeHimZuHjoRBIIwt_4XPXQWBhlN8MTCiDuOv7SJhO7NpzVi3Ef6Cmhk/s320/91871146_3064945716905784_5111201993536307200_n.jpg" width="314" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Ebkd0resz350xPUNl8IxmEKAQiAGIk9SFejja13rcGJrLVku8IN46Wzv153hIZ9PURdS0EF7BGDu9aqVA8rnYrZf2xD2QJT-YQFkxkvzJW-xpDKH1oUiV8DXC6Jn-vNAMUBmhmhTKuA/s1600/82974126_891042984683536_5384042625693122560_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="505" data-original-width="646" height="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Ebkd0resz350xPUNl8IxmEKAQiAGIk9SFejja13rcGJrLVku8IN46Wzv153hIZ9PURdS0EF7BGDu9aqVA8rnYrZf2xD2QJT-YQFkxkvzJW-xpDKH1oUiV8DXC6Jn-vNAMUBmhmhTKuA/s640/82974126_891042984683536_5384042625693122560_n.png" width="640" /></a></div>
<br />
<br />
<div style="text-align: center;">
Caption: The Gary Marcus/Yoshua Bengio debate. (Thanks Jackie Kay for sending me this)</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdy4z8HKzjNl3ZeFeNHy5SVJaXEd9m88ThHQQzVhEEry8LXDOa5ezIXPwpn002daD7RJS3CkmmBqSbq-7V-nxW3d5r2oYO2vSBWDUhINYJYhHSOQdROYFOefbHFU76oHhQJeMD0l0KJZw/s1600/image+%25282%2529.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1406" data-original-width="500" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdy4z8HKzjNl3ZeFeNHy5SVJaXEd9m88ThHQQzVhEEry8LXDOa5ezIXPwpn002daD7RJS3CkmmBqSbq-7V-nxW3d5r2oYO2vSBWDUhINYJYhHSOQdROYFOefbHFU76oHhQJeMD0l0KJZw/s640/image+%25282%2529.png" width="226" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaqcJg0701YTca6MJxrTUyX5MBA7xtbrHfzob4TOEPnT7DCl1iDMyU82PTn9kpoQKrvTWhZgm0bBXWCTssu841uf5XsIgwOZx7wsLAhxsm1Jk6aBs9Jw9JTimFx7wDa0rAROLKImifIoE/s1600/47446306_2163002007046355_8400391397395398656_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="584" data-original-width="681" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaqcJg0701YTca6MJxrTUyX5MBA7xtbrHfzob4TOEPnT7DCl1iDMyU82PTn9kpoQKrvTWhZgm0bBXWCTssu841uf5XsIgwOZx7wsLAhxsm1Jk6aBs9Jw9JTimFx7wDa0rAROLKImifIoE/s400/47446306_2163002007046355_8400391397395398656_n.png" width="400" /></a></div>
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRzfTNbBJjgpYKAtkv90ugePZiE8y-RJPiz-AAAjtqMsMXKMkvQBo1sOldASe8ZwNaNE8D04PwTfxIMMmXpKtTkyuwVOhosKPAPgIuKpNRefzzgz8DFY73m2USBJQC_YTgvVo3CI1mXcQ/s1600/31841956_473125679808604_8560490817364426752_n.png" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="600" data-original-width="480" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRzfTNbBJjgpYKAtkv90ugePZiE8y-RJPiz-AAAjtqMsMXKMkvQBo1sOldASe8ZwNaNE8D04PwTfxIMMmXpKtTkyuwVOhosKPAPgIuKpNRefzzgz8DFY73m2USBJQC_YTgvVo3CI1mXcQ/s640/31841956_473125679808604_8560490817364426752_n.png" width="512" /></a><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMvkqAS2o_igy8Ykm95tMG1LFcIjUrRR6qCTlFgyTD_hyuQVV2t9_RngX-isCZBUmM1Vsxq2nj_A1d8d09wZJxdc_xEpG8cEOJXQS196EN4Fh82YLPx7sLhewVhbpio-NGNBckktC23GA/s1600/n9fgba8b0qr01.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1461" data-original-width="1600" height="584" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMvkqAS2o_igy8Ykm95tMG1LFcIjUrRR6qCTlFgyTD_hyuQVV2t9_RngX-isCZBUmM1Vsxq2nj_A1d8d09wZJxdc_xEpG8cEOJXQS196EN4Fh82YLPx7sLhewVhbpio-NGNBckktC23GA/s640/n9fgba8b0qr01.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI0nMpbMIQ-mCu_32DdeXGSHwE_FEBwadd3pV0AkABcq33hMajZwoJAYC3_7aucnjAgLzpmO0TCRwRofM_T6bHYY7TKUflYYLSxRWOlTc_ZRf3P1Rgox_N9xJdw8uuarBFAhf4Awt4lq0/s1600/45518665_488880224939890_6413623840768786432_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="480" data-original-width="713" height="430" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI0nMpbMIQ-mCu_32DdeXGSHwE_FEBwadd3pV0AkABcq33hMajZwoJAYC3_7aucnjAgLzpmO0TCRwRofM_T6bHYY7TKUflYYLSxRWOlTc_ZRf3P1Rgox_N9xJdw8uuarBFAhf4Awt4lq0/s640/45518665_488880224939890_6413623840768786432_n.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvvYIlDR47B8n_2gevteVW-enrdoo3tyoRPUurYJ75tJtHl_1hA0WXhtZAO5V5LxjA4z1nHwhcs3kR-OWiCc2a1nfBPc3VgT_UwL8ywb_B9YORgNkKT0oVFpG95y9PnlcRu81zDFMYsqw/s1600/28056576_10213577221682063_7572084637958860851_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="481" data-original-width="725" height="424" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvvYIlDR47B8n_2gevteVW-enrdoo3tyoRPUurYJ75tJtHl_1hA0WXhtZAO5V5LxjA4z1nHwhcs3kR-OWiCc2a1nfBPc3VgT_UwL8ywb_B9YORgNkKT0oVFpG95y9PnlcRu81zDFMYsqw/s640/28056576_10213577221682063_7572084637958860851_n.jpg" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidqRqA4QYUNJRQHp8vm6ws5mNTUYf35Thbt7xG3ipPyu2iQWpqLZCD3EmAwkYCxFIXdEX8Ykh4RtSru2mjOuBxkmDD1U4dUdontI3FPHI4G93xHnBxkfJ1Hhr1fxF0S4TtCWeKdBYQ1kI/s1600/28378825_443284266126079_709577461142126592_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="632" data-original-width="732" height="552" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidqRqA4QYUNJRQHp8vm6ws5mNTUYf35Thbt7xG3ipPyu2iQWpqLZCD3EmAwkYCxFIXdEX8Ykh4RtSru2mjOuBxkmDD1U4dUdontI3FPHI4G93xHnBxkfJ1Hhr1fxF0S4TtCWeKdBYQ1kI/s640/28378825_443284266126079_709577461142126592_n.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie-JPTKGl-s8ipN0laksHESEsTOpUzAwj5iKGJyKxBIqUW9pfZ37KO_wApfTtA6BzHxkbItPC-bNBUeyMn8Hc3IVIhTTGPMd9EUCpnNT2tKnlLqGUQgKlub5jUauIJLmsgrZYxqtzqb_Y/s1600/29744258_10156132126725833_1805731393868640735_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1060" data-original-width="352" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie-JPTKGl-s8ipN0laksHESEsTOpUzAwj5iKGJyKxBIqUW9pfZ37KO_wApfTtA6BzHxkbItPC-bNBUeyMn8Hc3IVIhTTGPMd9EUCpnNT2tKnlLqGUQgKlub5jUauIJLmsgrZYxqtzqb_Y/s1600/29744258_10156132126725833_1805731393868640735_o.jpg" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGnSMT59uv23Yg_FyZOi2jUlkBjiTpGf6OQGd9Am6LtnMpoUmEkMFZr9tB5frVqcTDPw9rQAFPR_3MQmMGLKpmGUbLQhZaNb_vkvQQJOxgjlIw0TSJ0BpiEUCd57XvwKy5VLdA8O_3Fm0/s1600/30740891_1998911613681516_3746944342502146048_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="583" data-original-width="750" height="496" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGnSMT59uv23Yg_FyZOi2jUlkBjiTpGf6OQGd9Am6LtnMpoUmEkMFZr9tB5frVqcTDPw9rQAFPR_3MQmMGLKpmGUbLQhZaNb_vkvQQJOxgjlIw0TSJ0BpiEUCd57XvwKy5VLdA8O_3Fm0/s640/30740891_1998911613681516_3746944342502146048_n.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCmDSz295jweOQkQc-2JdFZ2VbT85uZRz-bwZFFMnFZxFAnO64rjPKVD2frtQpoiz4x4DyC9p_-bG7TXSHPa4YO8vui-Boku43ApcljERaM2i2FeW6QynMvHmpjBEsG6fJgXqBSrEvStQ/s1600/30712185_1848335415179684_317972711143899136_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1368" data-original-width="488" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCmDSz295jweOQkQc-2JdFZ2VbT85uZRz-bwZFFMnFZxFAnO64rjPKVD2frtQpoiz4x4DyC9p_-bG7TXSHPa4YO8vui-Boku43ApcljERaM2i2FeW6QynMvHmpjBEsG6fJgXqBSrEvStQ/s1600/30712185_1848335415179684_317972711143899136_o.jpg" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEuO_CCGzmPVbYvFPxeibvUZPH1T0wColhVL2_AHvCIZ4hpm9R-nz1O2WQB5uGF9_YA-aH6Gf3pPxFFEoGShlDySo87J9w9ScuRJ9XbnWFFFzKgDHJ86xIrwybuehOLEBY6EFE1oxtbjg/s1600/35432952_1914587155221176_3839196486218809344_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1592" data-original-width="1166" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEuO_CCGzmPVbYvFPxeibvUZPH1T0wColhVL2_AHvCIZ4hpm9R-nz1O2WQB5uGF9_YA-aH6Gf3pPxFFEoGShlDySo87J9w9ScuRJ9XbnWFFFzKgDHJ86xIrwybuehOLEBY6EFE1oxtbjg/s640/35432952_1914587155221176_3839196486218809344_o.jpg" width="468" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEileuKACi-cD1NpGeLNlUwMofRu_WEFfT0T5P3IadTCjza0D4GTV6Su1NhrvuaYDTtnkTXItBSjvwvpqPGvmiddilVbaEGu_CXjT8S4kgefpKYks8jaUnmbjDEWNH_zbKAQF4c_g-Mxfa4/s1600/35525799_495641047557067_5934774092942016512_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="280" data-original-width="296" height="605" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEileuKACi-cD1NpGeLNlUwMofRu_WEFfT0T5P3IadTCjza0D4GTV6Su1NhrvuaYDTtnkTXItBSjvwvpqPGvmiddilVbaEGu_CXjT8S4kgefpKYks8jaUnmbjDEWNH_zbKAQF4c_g-Mxfa4/s640/35525799_495641047557067_5934774092942016512_n.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAx4QEwr1Ew-RBOWz5rdrr6st5U0SAk_0vhMPAmLJxc5iiO4Gjjapnj9qIaueGNBs2E0g9_n-DgIBTYtooozLv6wrzheD8KmBUhgJMsbQYC1BorbxnKokNvq7y8_qS-VMKLNZyDhjW0Es/s1600/44621116_593081611146343_5300561797332336640_n.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="556" data-original-width="499" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAx4QEwr1Ew-RBOWz5rdrr6st5U0SAk_0vhMPAmLJxc5iiO4Gjjapnj9qIaueGNBs2E0g9_n-DgIBTYtooozLv6wrzheD8KmBUhgJMsbQYC1BorbxnKokNvq7y8_qS-VMKLNZyDhjW0Es/s640/44621116_593081611146343_5300561797332336640_n.png" width="574" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9Iq_5ha5oKp11EuqFKVN_yyelQ7yCzo0e8YsmvqlaFUJJyciffA4Ce7rAweiCRRNd_MD7ylxryiBJW3Rt059nzqqYpE7fERAUjGWv0x1uL0eRni_NlzI-Bou6qeOAihsK_iwPVT13Y6c/s1600/47038510_613994469055057_6712890182632210432_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1426" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9Iq_5ha5oKp11EuqFKVN_yyelQ7yCzo0e8YsmvqlaFUJJyciffA4Ce7rAweiCRRNd_MD7ylxryiBJW3Rt059nzqqYpE7fERAUjGWv0x1uL0eRni_NlzI-Bou6qeOAihsK_iwPVT13Y6c/s640/47038510_613994469055057_6712890182632210432_o.jpg" width="570" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjepgwG95oLTqW6dqkgtLN7tQmB2HqgItcdCyi1TVTiNTHQt5QKt4DLf_rFruI6jxw7F9uJzD5hyLrRjkulxP-SnX34zJWwxcVbKejoOF5w3Jn__0HfyKQPZu5h2apGohugrA1QXfk6z3E/s1600/42999639_10215243114928353_3213818431530860544_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="534" data-original-width="573" height="596" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjepgwG95oLTqW6dqkgtLN7tQmB2HqgItcdCyi1TVTiNTHQt5QKt4DLf_rFruI6jxw7F9uJzD5hyLrRjkulxP-SnX34zJWwxcVbKejoOF5w3Jn__0HfyKQPZu5h2apGohugrA1QXfk6z3E/s640/42999639_10215243114928353_3213818431530860544_n.jpg" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgptP6fXKEiJeEcSB-cuhJxiYClBSsya4cJwH2o0tRHtwC1QpmaHRzgcFrrDaRCH1pa4s9qc-A4mgDNuuUzSuhrApR4jqWMZ066bKGgwMSmVFlKcOmA9NvpjbZZ7sC6eBtxOxiv6DEa8wM/s1600/46517288_609450632842774_3062612295398981632_o.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="991" data-original-width="800" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgptP6fXKEiJeEcSB-cuhJxiYClBSsya4cJwH2o0tRHtwC1QpmaHRzgcFrrDaRCH1pa4s9qc-A4mgDNuuUzSuhrApR4jqWMZ066bKGgwMSmVFlKcOmA9NvpjbZZ7sC6eBtxOxiv6DEa8wM/s640/46517288_609450632842774_3062612295398981632_o.png" width="516" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZSVijKkFghalQo8167_INTrFZMYr-9e_ztRuJ_IsSImw1PidhHIbJiF3iEEg1EH6B24tO5DRKuAnShI2oxHcJOoksHs-f4E7Evnp_klu2-pzXLLAsHpRWAajtI4ww29N5q7S66liMI1I/s1600/52386901_10157143758983669_1120348777576660992_o.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1511" data-original-width="1079" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZSVijKkFghalQo8167_INTrFZMYr-9e_ztRuJ_IsSImw1PidhHIbJiF3iEEg1EH6B24tO5DRKuAnShI2oxHcJOoksHs-f4E7Evnp_klu2-pzXLLAsHpRWAajtI4ww29N5q7S66liMI1I/s640/52386901_10157143758983669_1120348777576660992_o.jpg" width="456" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmzLU2-vUNgI6BMSBp3EGF0cr8VRbYEpaXVsG3i1i3MX4-zQvLwBBR8gDEetxJovLH4gar5I9jABWQicxyxsq7qLG9RaE6yrSZ9QVS63LTW9UvdjEEn5bcm-eb2FlLjpQYkpsCCkC28QQ/s1600/Screen+Shot+2019-02-05+at+9.15.26+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1176" data-original-width="1214" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmzLU2-vUNgI6BMSBp3EGF0cr8VRbYEpaXVsG3i1i3MX4-zQvLwBBR8gDEetxJovLH4gar5I9jABWQicxyxsq7qLG9RaE6yrSZ9QVS63LTW9UvdjEEn5bcm-eb2FlLjpQYkpsCCkC28QQ/s400/Screen+Shot+2019-02-05+at+9.15.26+PM.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgAfET7S5liNr_Bvd9W2TN0sVx9YelDhVB4LY7CAgKbdMVd8anFdxQKNunekUPZC4fiS9mqdn0kUByuKhy28nKUcNCjl51HEGcpOTeTf9009yUX8x9XgxBCOt8EV7IGQ5N5R6aUPGAxy8/s1600/IMG_2294.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="639" data-original-width="480" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgAfET7S5liNr_Bvd9W2TN0sVx9YelDhVB4LY7CAgKbdMVd8anFdxQKNunekUPZC4fiS9mqdn0kUByuKhy28nKUcNCjl51HEGcpOTeTf9009yUX8x9XgxBCOt8EV7IGQ5N5R6aUPGAxy8/s640/IMG_2294.JPG" width="480" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjB10lx_VOGXhyphenhyphen4r9JCJAfr0l2h_GNKd9uYO_lzmcjOUFgu9BajRpAset4RHO5f3TorH3Yufl6pNGB4Ppd-hjOD7UctNNZx8WdZLhVODgb_VfcWzaZBA0TnLibtNJHb8iotV8eVBiIt24o/s1600/64369119_2391298877618340_4851478784206962688_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="485" data-original-width="720" height="430" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjB10lx_VOGXhyphenhyphen4r9JCJAfr0l2h_GNKd9uYO_lzmcjOUFgu9BajRpAset4RHO5f3TorH3Yufl6pNGB4Ppd-hjOD7UctNNZx8WdZLhVODgb_VfcWzaZBA0TnLibtNJHb8iotV8eVBiIt24o/s640/64369119_2391298877618340_4851478784206962688_n.jpg" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFoVPjapoJ9ROXwozQwqb8mOMxNp7mmsi-tTffl9JFw7nHlKStDSNsPLRIlgGAeh6OLSk_O75pOisKFNBYDjK-TXHrdY0ZN8RSpa_Qbe1I2ViSp11snbmUW7rBQoUh9OdEn2yj51vszxU/s1600/66402516_10216059248052711_1560386357648424960_n.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="942" data-original-width="720" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFoVPjapoJ9ROXwozQwqb8mOMxNp7mmsi-tTffl9JFw7nHlKStDSNsPLRIlgGAeh6OLSk_O75pOisKFNBYDjK-TXHrdY0ZN8RSpa_Qbe1I2ViSp11snbmUW7rBQoUh9OdEn2yj51vszxU/s640/66402516_10216059248052711_1560386357648424960_n.jpg" width="488" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br /></div></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-842965756326639856.post-82080407378142917062018-08-08T01:09:00.001-07:002018-08-15T13:49:54.621-07:00Dijkstra's in Disguise<div>
You can find a PDF version of this blog post <a href="https://drive.google.com/open?id=13Y_7tPvfvkNXcA_jkD2RMcNc0VQyRPvF">here</a>.</div>
<div>
<br /></div>
A weighted graph is a data structure consisting of some vertices and edges, and each edge has an associated cost of traversal. Let's suppose we want to compute the shortest distance from vertex $u$ to every other vertex $v$ in the graph, and we express this cost function as $\mathcal{L}_u(v)$.<br />
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIVZfNBwaYQsP0cMN-iqtyuy5rC_QZhy-o512xuiZH_bYMQQAGg-UQMzHHQ-K9GEKGQhC6ejfiYz6tfVV9DlOVpTtgsYB3L1w0MGt8YycqqUHg1UprtXEDwtioPlgmEcsDXaNt6QTOPVo/s1600/graph.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="484" data-original-width="600" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIVZfNBwaYQsP0cMN-iqtyuy5rC_QZhy-o512xuiZH_bYMQQAGg-UQMzHHQ-K9GEKGQhC6ejfiYz6tfVV9DlOVpTtgsYB3L1w0MGt8YycqqUHg1UprtXEDwtioPlgmEcsDXaNt6QTOPVo/s320/graph.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">For example, if each edge in this graph has cost $1$, $\mathcal{L}_u(v) = 3$.</td></tr>
</tbody></table>
<br />
<div>
Dijkstra's, Bellman-Ford, Johnson's, and Floyd-Warshall are all good algorithms for solving the shortest-paths problem. They all share the principle of <b>relaxation</b>, whereby costs are initially <i>overestimated</i> for all vertices and gradually corrected using a <b><a href="https://en.wikipedia.org/wiki/Consistent_heuristic">consistent heuristic</a></b> on edges (the term "relaxation" in the context of graph traversal is not to be confused with "relaxation" as used in an optimization context, e.g. integer linear programs). The heuristic can be expressed in plain language as follows: <br />
<div>
<br /></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlVggdbSx5N9rS86o_NjGnMPdCa3PSvVVJHX4BWrs7mXVcwVqgUPL1iaJ0AsAd7-3wYVzYfLa2cayktH7eK7E1RYOac509KdQMwJvv886zi8OZUIMjDep9P783qu5ylADbGTNpF01pC_g/s1600/bellmanford.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1045" data-original-width="1432" height="466" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlVggdbSx5N9rS86o_NjGnMPdCa3PSvVVJHX4BWrs7mXVcwVqgUPL1iaJ0AsAd7-3wYVzYfLa2cayktH7eK7E1RYOac509KdQMwJvv886zi8OZUIMjDep9P783qu5ylADbGTNpF01pC_g/s640/bellmanford.png" width="640" /></a></div>
<div>
<div>
It turns out that many algorithms I've encountered in my <b>computer graphics</b>, <b>finance</b>, and <b>reinforcement learning</b> studies are all variations of this relaxation principle in disguise. It's quite remarkable (embarrassing?) that so much of my time has been spent on such a humble technique taught in introductory computer science courses!</div>
<div>
<br /></div>
<div>
This blog post is a gentle tutorial on how all these varied CS topics are connected. No prior knowledge of finance, reinforcement learning, or computer graphics is needed. The reader should be familiar with undergraduate probability theory, introductory calculus, and be willing to look at some math equations. I've also sprinkled in some insights and questions that might be interesting to the AI research audience, so hopefully there's something for everybody here.</div>
</div>
</div>
<div>
<br /></div>
<h2>
Bellman-Ford</h2>
<div>
<br /></div>
<div>
<div>
Here's a quick introduction to Bellman-Ford, which is actually easier to understand than the famous Dijkstra's Algorithm.</div>
<div>
<br /></div>
<div>
Given a graph with $N$ vertices and costs $\mathcal{E}(s, v)$ associated with each directed edge $s \to v$, we want to find the cost of the shortest path from a source vertex $u$ to each other vertex $v$. The algorithm proceeds as follows: The cost to reach $u$ from itself is initialized to $0$, and all the other vertices have distances initialized to infinity. </div>
<div>
<br /></div>
<div>
The relaxation step (described in the previous section) is performed across all edges, in any order, during each iteration. The correct distances from $u$ are guaranteed to have propagated completely to all vertices after $N-1$ iterations, since the longest of the shortest paths contains at most $N$ unique vertices. If the relaxation condition still finds shorter paths on an $N$-th iteration, it implies the presence of a cycle whose total cost is negative. You can find a nice animation of the Bellman-Ford algorithm <a href="https://visualgo.net/en/sssp">here</a>.</div>
<div>
<br /></div>
<div>
Below is the pseudocode:</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnu_-M_ZBKkZ9vtik-OS7p7HnPbbvX8rBQg_6WCM9BPynG3YRNhC4T3ZCzVw36zHR8Fzbdu0DdMJsF_6wGxBc99DMbf80aDD70k2LKd_6_-nEpR6iMjTROJpUxUw5UnUOhnUacsWhsG10/s1600/bellmanford_pseudocode.png" imageanchor="1"><img border="0" data-original-height="570" data-original-width="834" height="272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnu_-M_ZBKkZ9vtik-OS7p7HnPbbvX8rBQg_6WCM9BPynG3YRNhC4T3ZCzVw36zHR8Fzbdu0DdMJsF_6wGxBc99DMbf80aDD70k2LKd_6_-nEpR6iMjTROJpUxUw5UnUOhnUacsWhsG10/s400/bellmanford_pseudocode.png" width="400" /></a></div>
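In Python, the pseudocode above translates almost line-for-line (the graph representation and function name below are my own choices, not from the original pseudocode):

```python
import math

def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths by repeated relaxation.

    edges: list of (s, v, cost) tuples, one per directed edge s -> v.
    Returns (dist, has_negative_cycle).
    """
    dist = [math.inf] * num_vertices
    dist[source] = 0.0
    # N-1 rounds of relaxing every edge suffice, since the longest
    # shortest path visits at most N unique vertices.
    for _ in range(num_vertices - 1):
        for s, v, cost in edges:
            if dist[s] + cost < dist[v]:
                dist[v] = dist[s] + cost
    # If an N-th round would still relax some edge, a negative cycle exists.
    has_negative_cycle = any(dist[s] + cost < dist[v] for s, v, cost in edges)
    return dist, has_negative_cycle

# Example: the 4-vertex path graph 0 -> 1 -> 2 -> 3 with unit edge costs.
dist, neg = bellman_ford(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)], 0)
assert dist == [0.0, 1.0, 2.0, 3.0] and not neg
```

Note that the negative-cycle check is exactly the same relaxation condition, run one extra time.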
<div>
<h2>
<br />Currency Arbitrage</h2>
<div>
<br /></div>
<div>
Admittedly, all this graph theory seems sort of abstract and boring at first. But would it still be boring if I told you that <i>efficiently detecting negative cycles in graphs is a multi-billion dollar business</i>? </div>
<div>
<br /></div>
<div>
The foreign exchange (FX) market, where one currency is traded for another, is the largest market in the world, with about 5 trillion USD being traded every day. This market determines the exchange rate for local currencies when you travel abroad. Let's model a currency exchange's order book (the ledger of pending transactions) as a graph:</div>
<div>
<ul>
<li>Each vertex represents a currency (e.g. JPY, USD, BTC).</li>
<li>Each directed edge represents the conversion of currency $A$ to currency $B$.</li>
</ul>
</div>
<div>
<br /></div>
<div>
An <b>arbitrage opportunity</b> exists if the product of exchange rates in a cycle exceeds $1$, which means that you can start with 1 unit of currency $A$, trade your way around the graph back to currency $A$, and then end up with more than 1 unit of $A$!</div>
<div>
<br /></div>
<div>
To see how this is related to the Bellman-Ford algorithm, let each currency pair $(A, B)$ with conversion rate $\frac{B}{A}$ be represented as a directed edge from $A$ to $B$ with edge weight $\mathcal{E}(A,B) = \log \frac{A}{B}$. Rearranging the terms,<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieGIoGPuqc8IrmmKQZEh2hx5kkAyfWoxgYynch9iqIMxjlFTNyu8RJ98pbgWAp1_D5An7FnPOSNCyEjmdT5WCNkMVO5z51CPXA3E5TmgtZdijX1FBK-YFW-hw2FkH2DSQtBtTgfBKCWG4/s1600/arb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="599" data-original-width="1432" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieGIoGPuqc8IrmmKQZEh2hx5kkAyfWoxgYynch9iqIMxjlFTNyu8RJ98pbgWAp1_D5An7FnPOSNCyEjmdT5WCNkMVO5z51CPXA3E5TmgtZdijX1FBK-YFW-hw2FkH2DSQtBtTgfBKCWG4/s640/arb.png" width="640" /></a></div>
<br /></div>
</div>
<div>
<br /></div>
<div>
<div>
The above algebra shows that if the sum of edge weights in a cycle is negative, it is equivalent to the product of exchange rates exceeding $1$. The Bellman-Ford algorithm can be directly applied to detect currency arbitrage opportunities! This also applies to all fungible assets in general, but currencies tend to be the most strongly-connected vertices in the graph representing the financial markets.</div>
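To make the log-transform concrete, here is a small sketch in Python. The exchange rates below are made-up illustrative numbers, not real market data:

```python
import math

# Hypothetical exchange rates: rates[(A, B)] = units of B received per unit of A.
rates = {
    ("USD", "EUR"): 0.90,
    ("EUR", "JPY"): 125.0,
    ("JPY", "USD"): 0.0095,
}

# Each rate r becomes an edge weight of -log(r).
edges = {pair: -math.log(r) for pair, r in rates.items()}

# Total weight around the USD -> EUR -> JPY -> USD cycle.
cycle_weight = sum(edges[pair] for pair in rates)
cycle_product = 0.90 * 125.0 * 0.0095  # product of rates around the cycle

# Negative total weight is equivalent to the product of rates exceeding 1,
# i.e. an arbitrage opportunity for Bellman-Ford to detect.
assert (cycle_weight < 0) == (cycle_product > 1)
```

With these toy numbers the cycle multiplies your capital by roughly 1.07, so the summed log-weights come out negative.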
<div>
<br /></div>
<div>
In my sophomore year of college, I caught the cryptocurrency bug and set out to build an automated arbitrage bot for scraping these opportunities in exchanges. Cryptocurrencies - being unregulated speculative digital assets - are ripe for cross-exchange arbitrage opportunities:</div>
<div>
<br /></div>
<div>
<ul>
<li>Inter-exchange transaction costs are low (assets are ironically centralized into hot and cold wallets).</li>
<li>Lots of speculative activity, whose bias generates lots of mispricing.</li>
<li>Exchange APIs expose much more order book depth and require no license to trade cryptos. With a spoonful of Python and a little bit of initial capital, you can trade nearly any crypto you want across dozens of exchanges.</li>
</ul>
</div>
<div>
Now we have a way to automatically detect mispricings in markets and end up with more money than we started with. Do we have a money printing machine yet? </div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5Z-QqEVZPrk7Kfog8s_1uPDNvSMh4s97kw7VmeMKdUERlXwSWLdYRfP9huGHXSi6vL-ANFtQNwlos3s6a2Q_QbWcSBQif2xsPnDrBy6Y7OkKMyKcxm7K38C_2IoySs9Zoj1J92F2aj48/s1600/scrooge.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="562" data-original-width="1000" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5Z-QqEVZPrk7Kfog8s_1uPDNvSMh4s97kw7VmeMKdUERlXwSWLdYRfP9huGHXSi6vL-ANFtQNwlos3s6a2Q_QbWcSBQif2xsPnDrBy6Y7OkKMyKcxm7K38C_2IoySs9Zoj1J92F2aj48/s640/scrooge.jpg" width="640" /></a></div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
Not so fast! A lot of things can still go wrong. Exchange rates fluctuate over time and other people are competing for the same trade, so the chances of executing all legs of the arbitrage are by no means certain. </div>
<div>
<br /></div>
<div>
Execution of trading strategies is an entire research area on its own, and can be likened to crossing a frozen lake as quickly as possible. Each intermediate currency position, or "leg'', in an arbitrage strategy is like taking a cautious step forward. One must be able to forecast the stability of each step and know which steps follow, or else one can get "stuck'' holding a lot of a currency that gives out like thin ice and becomes worthless. Often the profit opportunity is not big enough to justify the risk of crossing that lake.</div>
<div>
<br /></div>
<div>
Simply taking the greedy minimum among all edge costs does not take into account the probability of various outcomes happening in the market. The <i>right</i> way to structure this problem is to think about edge weights being random variables that change over time. In order to compute the expected cost, we need to integrate over all possible path costs that can manifest. Hold this thought, as we will need to introduce some more terminology in the next few sections.</div>
<div>
<br /></div>
<div>
While the arbitrage system I implemented was capable of detecting arb opportunities, I never got around to fully automating the execution and order confirmation subsystems. Unfortunately, I got some coins stolen and lost interest in cryptos shortly after. To execute arb opportunities quickly and cheaply I had to keep small BTC/LTC/DOGE positions in each exchange, but sometimes exchanges would just vanish into thin air. Be careful of what you wish for, or you just might find your money "decentralized'' from your wallet! </div>
<div>
<br /></div>
</div>
<h2>
Directional Shortest-Path</h2>
<div>
<br /></div>
<div>
<div>
Let's introduce another cost function, the <b>directional shortest path</b> $\mathcal{L}_u(v, s \to v)$, that computes the shortest path from $u$ to $v$, where the last traversed edge is from $s \to v$. Just like making a final stop at the bathroom $s$ before boarding an airplane $v$.</div>
<div>
<br /></div>
<div>
Note that the original <i>shortest path cost</i> $\mathcal{L}_u(v)$ is equivalent to the smallest directional <i>shortest path cost</i> among all of $v$'s neighboring vertices, i.e. $\mathcal{L}_u(v) = \min_{s} \mathcal{L}_u(v, s \to v)$ </div>
<div>
<br /></div>
<div>
Shortest-path algorithms typically associate edges with <i>costs</i>, and the objective is to <i>minimize</i> the total cost. This is also equivalent to trying to maximize the <i>negative cost</i> of the path, which we call $\mathcal{Q}_u(v) = -\mathcal{L}_u(v)$. Additionally, we can re-write this max-reduction as a sum-reduction, where each $\mathcal{Q}_u$ term is multiplied by an indicator function that is $1$ when its $\mathcal{Q}_u$ term is the largest and $0$ otherwise.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-EUCnRUwV0DT0eQIQqrOOAqvnmhlVUgG3hbM_k_ll20QJg8vjyaTBGzpnarmVNlccqfuq-5VhPtHR0sEqpWZtiWMLKAf_T1ssUN0Qu1Myxf4O4o9t1VrmQ3BZFs1cKAR_uLWi1zVQq7E/s1600/bellmanford_rewrite.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1121" data-original-width="1432" height="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-EUCnRUwV0DT0eQIQqrOOAqvnmhlVUgG3hbM_k_ll20QJg8vjyaTBGzpnarmVNlccqfuq-5VhPtHR0sEqpWZtiWMLKAf_T1ssUN0Qu1Myxf4O4o9t1VrmQ3BZFs1cKAR_uLWi1zVQq7E/s640/bellmanford_rewrite.png" width="640" /></a></div>
<br />
<br /></div>
</div>
<div>
Does this remind you of any well-known algorithm? </div>
<div>
<br /></div>
<div>
If you guessed "Q-Learning", you are absolutely right! </div>
<div>
<br /></div>
<h2>
Q-Learning</h2>
<div>
<br /></div>
<div>
Reinforcement learning (RL) problems entail an agent interacting with its environment such that the total expected reward $R$ it receives is maximized over a multi-step (maybe infinite) decision process. In this setup, the agent will be unable to take further actions or receive additional rewards after transitioning to a terminal (absorbing) state.</div>
<div>
<div>
<br /></div>
<div>
There are many ways to go about solving RL problems, and we'll discuss just one kind today: <b>value-based</b> <b>algorithms</b>, which attempt to recover a value function $Q(s,a)$ that computes the maximum total reward an agent can possibly obtain if it takes an action $a$ at state $s$.</div>
<div>
<br /></div>
<div>
Wow, what a mouthful! Here's a diagram of what's going on along with an annotated mathematical expression.</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib9e78Qtrw-_aJ1q4NMM688DCmgt-M_GI-1ArV0vUdWfBgC_9T16sU6nsD2cE_XM5ScaCMsST_w8uiXqHITAG0NvdxKpqhHZXk3XZPdaqVymOY4PClpgILat6jOen0yP46bdzj6XBtfUQ/s1600/q.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="977" data-original-width="1432" height="436" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib9e78Qtrw-_aJ1q4NMM688DCmgt-M_GI-1ArV0vUdWfBgC_9T16sU6nsD2cE_XM5ScaCMsST_w8uiXqHITAG0NvdxKpqhHZXk3XZPdaqVymOY4PClpgILat6jOen0yP46bdzj6XBtfUQ/s640/q.png" width="640" /></a></div>
<div>
<div>
Re-writing the shortest path relaxation procedure in terms of a directional path cost recovers the Bellman Equality, which underpins the Q-Learning algorithm. It's no coincidence that Richard Bellman of Bellman-Ford is also the same Richard Bellman of the Bellman Equality! Q-learning is a classic example of dynamic programming. </div>
<div>
<br /></div>
<div>
For those new to Reinforcement Learning, it's easiest to understand Q-Learning in the context of an environment that yields a reward only at the terminal transition:</div>
</div>
<div>
<br /></div>
<div>
<div>
<ul>
<li>The value of state-action pairs $(s_T, a_T)$ that transition to a terminal state are easy to learn - it is just the sparse reward received as the episode ends, since the agent can't do anything afterwards. </li>
<li>Once we have all those final values, the values for the $(s_{T-1}, a_{T-1})$ pairs leading to <i>those states</i> are "backed up'' (backwards through time) to the states that transition to them. </li>
<li>This continues all the way to the state-action pairs $(s_1, a_1)$ encountered at the beginning of episodes. </li>
</ul>
</div>
</div>
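The backup process above can be sketched with a toy chain MDP of my own construction (not any particular benchmark environment), where value iteration propagates a single terminal reward backwards through the chain:

```python
# Toy MDP: states 0, 1, 2 form a chain; state 3 is terminal. Reward 1.0 is
# given only on the transition into the terminal state. A discount gamma < 1
# makes the backed-up value decay with distance from the reward.
gamma = 0.9
states, actions = [0, 1, 2], ["forward", "stay"]

def step(s, a):
    s_next = s + 1 if a == "forward" else s
    reward = 1.0 if s_next == 3 else 0.0
    return s_next, reward

Q = {(s, a): 0.0 for s in states for a in actions}
for _ in range(50):  # sweep Bellman backups until values converge
    for s in states:
        for a in actions:
            s_next, r = step(s, a)
            # Terminal state has value 0: the agent can do nothing afterwards.
            v_next = 0.0 if s_next == 3 else max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] = r + gamma * v_next

# The sparse terminal reward has been "backed up" through the chain,
# decaying by a factor of gamma per step: 1.0, 0.9, 0.81.
assert abs(Q[(2, "forward")] - 1.0) < 1e-6
assert abs(Q[(1, "forward")] - 0.9) < 1e-6
assert abs(Q[(0, "forward")] - 0.81) < 1e-6
```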
<h2>
Handling Randomness in Shortest-Path Algorithms</h2>
<div>
<br /></div>
<div>
<div>
Remember the "thin ice'' analogy from currency arbitrage? Let's take a look at how modern RL algorithms are able to handle random path costs. </div>
<div>
<br /></div>
<div>
In RL, the agent's <b>policy distribution</b> $\pi(a|s)$ is a conditional probability distribution over actions, specifying how the agent behaves randomly in response to observing some state $s$. In practice, policies are made to be random in order to facilitate exploration of environments whose dynamics and set of states are unknown (e.g. imagine the RL agent opens its eyes for the first time and must learn about the world before it can solve a task). Since the agent's sampling of an action $a \sim \pi(a|s)$ from the policy distribution is immediately followed by computation of the environment dynamics $s^\prime = f(s, a)$, it's equivalent to view the randomness as coming from a stochastic policy distribution or from stochastic transition dynamics. We redefine a notion of Bellman consistency for <i>expected</i> future returns:</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFbGR0dAieIpxmJAn9AKOAVYhjSW41E1HfntpfKFiTi6XsnPdswk6VSRN_sUM7WhMgAV-50kkvj8gVj6IN0D08UDdWSdWpm8vPHO6RA906hY6Qg6dOSH3tDUtn25pxPwfVLNav9JKdAZo/s1600/q_expected.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="146" data-original-width="1428" height="64" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFbGR0dAieIpxmJAn9AKOAVYhjSW41E1HfntpfKFiTi6XsnPdswk6VSRN_sUM7WhMgAV-50kkvj8gVj6IN0D08UDdWSdWpm8vPHO6RA906hY6Qg6dOSH3tDUtn25pxPwfVLNav9JKdAZo/s640/q_expected.png" width="640" /></a></div>
<div>
<div>
By propagating expected values, Q-learning allows shortest-path algorithms to be aware of the expected path length, taking the transition probabilities of the dynamics/policy into account.</div>
</div>
<div>
<br /></div>
<h2>
Modern Q-Learning</h2>
<div>
<br /></div>
<div>
This section discusses some recent breakthroughs in RL research, such as <b>Q-value overestimation</b>, <b>Softmax Temporal Consistency</b>, <b>Maximum Entropy Reinforcement Learning</b>, and <b>Distributional Reinforcement Learning</b>. These cutting-edge concepts are put into the context of shortest-path algorithms as discussed previously. If any of these sound interesting and you're willing to endure a bit more math jargon, read on -- otherwise, feel free to skip to the next section on computer graphics.</div>
<div>
<div>
<br /></div>
<div>
Single-step Bellman backups during Q-learning turn out to be rather sensitive to random noise, which can make training unstable. Randomness can come from imperfect optimization over actions during the Bellman Update, poor function approximation in the model, random label noise (e.g. human error in assigning labels to a robotic dataset), stochastic dynamics, or uncertain observations (partial observability). All of these can violate the Bellman Equality, which may cause learning to diverge or get stuck in a poor local minimum.</div>
</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItl49gPBnTR-qj92HNN60ahBxzldyRqwXSAVuH0NmNQGjVYjr3pX1kCyMCcCfjclxJB6pvICSBwUiCmyd0V0nvHn_oAApKfx0RwS-suGpVWHi9ORQtoaCDKqfg1YJ_L_hbjDnI3yAXhQ/s1600/noise_q.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="300" data-original-width="866" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItl49gPBnTR-qj92HNN60ahBxzldyRqwXSAVuH0NmNQGjVYjr3pX1kCyMCcCfjclxJB6pvICSBwUiCmyd0V0nvHn_oAApKfx0RwS-suGpVWHi9ORQtoaCDKqfg1YJ_L_hbjDnI3yAXhQ/s640/noise_q.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Sources of noise that arise in Q-learning which violate the hard Bellman Equality.</td></tr>
</tbody></table>
<div>
<div>
A well-known problem among RL practitioners is that Q-learning suffers from over-estimation; during off-policy training, predicted Q-values climb higher and higher but the agent doesn't get better at solving the task. Why does this happen?</div>
<div>
<br /></div>
<div>
Even if $Q_\theta$ is an unbiased estimator of the true value function, any variance in the estimate is converted into upward bias during the Bellman update. A sketch of the proof: assuming Q-values are uniformly or normally distributed about the true value function, the Fisher–Tippett–Gnedenko theorem tells us that the maximum of multiple such noisy estimates is approximately Gumbel-distributed, with a positive mean. Therefore the updated Q function, after the Bellman update is performed, will acquire some positive bias! One way to deal with this is double Q-learning, which re-evaluates the optimal next-state action value using an i.i.d. $Q$ function. Assuming the Q-value noise is independent of the max action, using an i.i.d. Q function to score the best action makes max-Q estimation unbiased again.</div>
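The overestimation effect is easy to reproduce numerically. The following toy simulation (my own construction) gives every action a true value of zero, then compares the standard noisy max against a double-Q-style estimate that selects the action with one noisy estimator and evaluates it with an independent one:

```python
import random
import statistics

random.seed(0)
num_actions, trials, noise = 10, 5000, 1.0
single, double = [], []
for _ in range(trials):
    # The true Q-value of every action is 0; we only observe noisy estimates.
    q1 = [random.gauss(0.0, noise) for _ in range(num_actions)]
    q2 = [random.gauss(0.0, noise) for _ in range(num_actions)]
    single.append(max(q1))  # standard max over one estimator: biased upward
    best = max(range(num_actions), key=q1.__getitem__)
    double.append(q2[best])  # double Q: select with q1, evaluate with q2

# E[max of noisy estimates] is well above 0 even though all true values are 0...
assert statistics.mean(single) > 0.5
# ...while the independent evaluator is unbiased (mean near 0).
assert abs(statistics.mean(double)) < 0.1
```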
<div>
<br /></div>
<div>
Dampening Q values can also be accomplished crudely by decreasing the discount factor (0.95 is common for environments like Atari), but $\gamma$ is kind of a hack as it is not a physically meaningful quantity in most environments.</div>
<div>
<br /></div>
<div>
Yet another way to decrease overestimation of Q values is to "smooth'' the greediness of the max-operator during the Bellman backup, by taking some kind of weighted average over Q values, rather than a hard max that only considers the best expected value. In discrete action spaces with $K$ possible actions, the weighted average is also known as a "softmax'' with a temperature parameter:</div>
<div>
<br /></div>
<div>
$$\verb|softmax|(x, \tau) = \mathbf{w}^T \mathbf{x}$$</div>
<div>
<br /></div>
where <br />
<br />
<div>
$$\mathbf{w}_i = \frac{e^{\mathbf{x}_i/\tau}}{\sum_{j=1}^{K}{e^{\mathbf{x}_j/\tau}}}$$</div>
</div>
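A quick sanity check of the temperature-controlled softmax (a sketch with made-up Q-values), showing that it interpolates between the hard max and the plain mean:

```python
import math

def softmax_value(q_values, tau):
    """Weighted average with w_i = exp(q_i / tau) / sum_j exp(q_j / tau)."""
    m = max(q_values)  # subtract the max for numerical stability
    w = [math.exp((q - m) / tau) for q in q_values]
    z = sum(w)
    return sum(wi * qi for wi, qi in zip(w, q_values)) / z

q = [1.0, 2.0, 4.0]
# Low temperature approaches the hard max; high temperature approaches the mean.
assert abs(softmax_value(q, 1e-3) - max(q)) < 1e-6
assert abs(softmax_value(q, 1e6) - sum(q) / len(q)) < 1e-3
# Intermediate temperatures sit strictly between the two.
assert max(q) > softmax_value(q, 1.0) > sum(q) / len(q)
```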
<div>
<br /></div>
<div>
<div>
Intuitively, the "softmax'' can be thought of as a confidence penalty on how likely we believe $\max Q(s^\prime, a^\prime)$ to be the actual expected return at the next time step. Larger temperatures in the softmax drag the mean away from the max value, resulting in more pessimistic (lower) Q values. Because of this temperature-controlled softmax, our reward objective is no longer simply to "maximize expected total reward''; rather, it is more similar to "maximizing the top-k expected rewards''. In the infinite-temperature limit, all Q-values are averaged equally and the softmax becomes a mean, corresponding to the return of a <i>completely random policy</i>. Hold that thought, as this detail will be visited again when we discuss computer graphics!</div>
<div>
<br /></div>
<div>
This modification to the standard Hard-Max Bellman Equality is known as <b>Softmax Temporal Consistency</b>. In continuous action spaces, the backup through an entire episode can be thought of as repeatedly backing up expectations over integrals.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcgfLzLiuKWu1eofP8QYL04YAg1hmkoe0d9PdXxj2uxKIyAgRB4btbnb6C9ExLJvO8hgQEqOHc1CNSFbHpSCra8W_JNSATNGdNzbW2mIWJWsPlFXyd-3tM3Ru8PWM3e1csO32W9e7UoWQ/s1600/softq.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="178" data-original-width="1428" height="78" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcgfLzLiuKWu1eofP8QYL04YAg1hmkoe0d9PdXxj2uxKIyAgRB4btbnb6C9ExLJvO8hgQEqOHc1CNSFbHpSCra8W_JNSATNGdNzbW2mIWJWsPlFXyd-3tM3Ru8PWM3e1csO32W9e7UoWQ/s640/softq.png" width="640" /></a></div>
<br /></div>
</div>
<div>
<br /></div>
<div>
<div>
By introducing a confidence penalty as an implicit regularization term, our optimization objective is no longer optimizing for the cumulative expected reward from the environment. In fact, if the policy distribution has the form of a Boltzmann Distribution:</div>
<div>
<br /></div>
<div>
$$\pi(a|s) \sim \exp Q(s, a)$$</div>
<div>
<br /></div>
<div>
This softmax regularization has a very explicit, information-theoretic interpretation: it is the optimal solution for the <b>Maximum-Entropy RL objective</b>:</div>
</div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
$$\pi_{\mathrm{MaxEnt}}^* = \arg\!\max_{\pi} \mathbb{E}_{\pi}\left[ \sum_{t=0}^T r_t + \mathcal{H}(\pi(\cdot | \mathbf{s}_t)) \right]$$</div>
<div>
<br /></div>
<div>
An excellent explanation for the maximum entropy principle is reproduced below from <a href="http://www.cs.cmu.edu/~bziebart/publications/thesis-bziebart.pdf">Brian Ziebart's PhD thesis</a>:</div>
</div>
<div>
<br />
<blockquote class="tr_bq">
When given only partial information about a probability distribution, $\tilde{P}$, typically many different distributions, $P$, are capable of matching that information. For example, many distributions have the same mean value. The principle of maximum entropy resolves the ambiguity of an under-constrained distribution by selecting the single distribution that has the least <i>commitment</i> to any particular outcome while matching the observational constraints imposed on the distribution.</blockquote>
<div>
<br /></div>
<div>
This is nothing more than "Occam's Razor'' in the parlance of statistics. The Maximum Entropy Principle is a framework for limiting overfitting in RL models, as it limits the amount of information (in nats) contained by the policy. The more entropy a distribution has, the less information it contains, and therefore the less "assumptions'' about the world it makes. The combination of Softmax Temporal Consistency with Boltzmann Policies is known as <b>Soft Q-Learning</b>.</div>
<div>
<br /></div>
<div>
To draw a connection back to currency arbitrage and the world of finance, limiting the number of assumptions in a model is of paramount importance to quantitative researchers at hedge funds, since hundreds of millions of USD could be at stake. Quants have developed a rather explicit form of Occam's Razor by tending to rely on models with as few statistical priors as possible, such as linear models and Gaussian Process Regression with simple kernels.
</div>
<div>
<br /></div>
<div>
<div>
Although Soft Q-Learning can regularize against model complexity, updates are still backed up over single timesteps. It is often more effective to integrate rewards along a "path'' of samples actually gathered at data collection time than to back up expected Q-values one edge at a time and hope that softmax temporal consistency remains consistent when accumulating multiple backups. </div>
<div>
<br /></div>
<div>
Work from <a href="https://arxiv.org/pdf/1702.08892.pdf">Nachum et al. 2017</a>, <a href="https://arxiv.org/abs/1611.01626">O’Donoghue et al. 2016</a>, <a href="https://arxiv.org/pdf/1704.06440.pdf">Schulman et al. 2017</a> explores the theoretical connections between multi-step return optimization objectives (policy-based) and temporal consistency (value-based) objectives. The use of a multi-step return can be thought of as a path-integral solution to marginalizing out random variables occurring during a multi-step decision process (such as random non-Markovian dynamics). In fact, long before Deep RL research became popular, control theorists were using path integrals for optimal control to tackle the problem of integrating multi-step stochastic dynamics [<a href="https://arxiv.org/pdf/physics/0505066.pdf">1</a>, <a href="http://www.jmlr.org/papers/volume11/theodorou10a/theodorou10a.pdf">2</a>]. A classic example is the use of the <a href="https://en.wikipedia.org/wiki/Viterbi_algorithm">Viterbi Algorithm</a> in stochastic planning.</div>
<div>
<br /></div>
<div>
Once trained, the value function $Q(s,a)$ implies a sequence of actions an agent must do in order to maximize expected reward (this sequence does not have to be unique). In order for the $Q$ function to be correct, it must also implicitly capture knowledge about the expected dynamics that occur along the sequence of actions. It's quite remarkable that all this "knowledge of the world and one's own behavior'' can be captured into a single scalar.</div>
<div>
<br /></div>
<div>
However, this representational compactness can also be a curse!</div>
<div>
<br /></div>
<div>
Soft Q-learning and PGQ/PCL successfully back up <i>expected</i> values over some return distribution, but it's still a lot to ask of a neural network to capture all the knowledge about expected future dynamics and marginalize all the randomness into a single statistic. </div>
<div>
<br /></div>
<div>
We may be interested in propagating other statistics like variance, skew, and kurtosis of the value distribution. What if we did Bellman backups over entire distributions, without having to throw away the higher-order moments? </div>
<div>
<br /></div>
<div>
This actually recovers the motivation of <b>Distributional Reinforcement Learning</b>, in which "edges'' in the shortest path algorithm propagate distributions over values rather than collapsing everything into a scalar. The main contribution of the seminal <a href="https://arxiv.org/abs/1707.06887">Bellemare et al. 2017 paper</a> is defining an algebra that generalizes the Bellman Equality to operate on distributions rather than scalar statistics of them. Unlike the path-integral approach to Q-value estimation, this framework avoids marginalization error by passing richer messages in the single-step Bellman backups. </div>
<div>
<br /></div>
<div>
Soft-Q learning, PGQ/PCL, and Distributional Reinforcement Learning are "probabilistically aware'' reinforcement learning algorithms. They appear to be tremendously beneficial in <a href="http://bair.berkeley.edu/blog/2017/10/06/soft-q-learning/">practice</a>, and I would not be surprised if by next year it becomes widely accepted that these techniques are the "physically correct'' thing to do, and hard-max Q-learning (as done in standard RL evaluations) is discarded. Given that multi-step Soft-Q learning (PCL) and Distributional RL take complementary approaches to propagating value distributions, I'm also excited to see whether the approaches can be combined (e.g. policy gradients over distributional messages).</div>
</div>
<div>
<br /></div>
<h2>
Physically-Based Rendering</h2>
<div>
<br /></div>
<blockquote class="tr_bq">
Ray tracing is not slow, computers are. -- James Kajiya</blockquote>
<div>
<div>
<br /></div>
<div>
A couple of the aforementioned RL works make heavy use of the terminology "path integrals''. Do you know where else path integrals and the need for "physical correctness'' arise? Computer graphics!</div>
<div>
<br /></div>
<div>
Whether it is done by an illustrator's hand or a computer, the problem of rendering asks: "Given a scene and some light sources, what is the image that arrives at a camera lens?" Every rendering procedure -- from the first abstract cave painting to Disney's modern Hyperion renderer -- is a depiction of light transported from the world to the eye of the observer.</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiLuUdXSLNke9njhq-m1zCuRFxR2foa7FYAID7VL90VzlJfUoYaE_7dlx6peGT50PIP_Mw2izgLp7w2ZPPunrdhLYmGFoynjtmlnk4-8SCUYuMwobjUSp9x5M0Q3z9r3CRtoPlGvXIWTY/s1600/cgi_2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="433" data-original-width="1600" height="172" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiLuUdXSLNke9njhq-m1zCuRFxR2foa7FYAID7VL90VzlJfUoYaE_7dlx6peGT50PIP_Mw2izgLp7w2ZPPunrdhLYmGFoynjtmlnk4-8SCUYuMwobjUSp9x5M0Q3z9r3CRtoPlGvXIWTY/s640/cgi_2.jpg" width="640" /></a></div>
<div>
<br /></div>
<div>
Here are some examples of the enormous strides rendering technology has made in the last 20 years:</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgblTF_rYuFyU0zrsYonD44ZSlAXd0R-N67QFnD8c-jFncN3fRkWLPlhVsS4qpVJGGfDIu_O5QZ0QlqEmykiJNJ7E3K4-9BxJ5h52UDGeEf7MUOxyG397Wo6gv6irDm02nv_YW1awVq5L4/s1600/cgi_reel.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="780" data-original-width="1600" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgblTF_rYuFyU0zrsYonD44ZSlAXd0R-N67QFnD8c-jFncN3fRkWLPlhVsS4qpVJGGfDIu_O5QZ0QlqEmykiJNJ7E3K4-9BxJ5h52UDGeEf7MUOxyG397Wo6gv6irDm02nv_YW1awVq5L4/s640/cgi_reel.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">From top left, clockwise: <a href="https://www.artstation.com/artwork/big-city-sensory-overstimulation">Big City Overstimulation</a> by Gleb Alexandrov. Pacific Rim: Uprising. The late Peter Cushing, resurrected for a Star Wars movie. Henry Cavill's mustache, digitally removed to re-shoot some scenes because he needed it for another movie.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
<div>
Photorealistic rendering algorithms are made possible thanks to accurate physical models of how light behaves and interacts with the natural world, combined with the computational resources to actually represent the natural world in a computer. For instance, a seemingly simple object like a butterfly wing has an insane amount of geometric detail, and light interacts with this geometry to produce some macroscopic effect like iridescence.</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiuFfP8JP3b1x4Hk6VoxcMfcrQDv7riyOc6EdMviJ-VsQ_H06tlK72CeF0F4daU6xMCXSC7OE_7GgPyue9mxQAkjdOCQh2AcjGdE7XDV3bhJsiSuoF4qvmLWHbEKvDQktRojrrVDT0WJQ/s1600/butterfly_wing.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="481" data-original-width="700" height="219" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiuFfP8JP3b1x4Hk6VoxcMfcrQDv7riyOc6EdMviJ-VsQ_H06tlK72CeF0F4daU6xMCXSC7OE_7GgPyue9mxQAkjdOCQh2AcjGdE7XDV3bhJsiSuoF4qvmLWHbEKvDQktRojrrVDT0WJQ/s320/butterfly_wing.jpg" width="320" /></a></div>
<div>
<br /></div>
<div>
<div>
<br /></div>
<div>
Light transport involves far too many calculations for a human to do by hand, so the old master painters and illustrators came up with a lot of rules about how light behaves and interacts with everyday scenes and objects. Here are some examples of these rules:</div>
<div>
<br /></div>
<div>
<ul>
<li>Cold light has a warm shadow, warm light has a cool shadow.</li>
<li>Light travels through tree leaves, resulting in umbras that are less "hard" than a platonic sphere or a rock.</li>
<li>Clear water and bright daylight result in caustics.</li>
<li>Light bounces off flat water like a billiard ball with a perfectly reflected incident angle, but choppy water turns white and no longer behaves like a mirror.</li>
</ul>
</div>
<div>
<br /></div>
<div>
You can get quite far on a big bag of heuristics like these. Here are some majestic paintings from the Hudson River School (19th century).</div>
</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD8uu_MVErRUo6ONtHeYghw0LuU06FyntQ89dU92FcoOPAr5RN5-z4nxmZ5PF9FHNrfq7umLg2TjP4k8h_BKhMB8GtIazZQyWDYo2LqsRIvc24UK_prx1ggblxLuDDY8M_JyEgZIkEO1E/s1600/hudson_1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1064" data-original-width="1600" height="424" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD8uu_MVErRUo6ONtHeYghw0LuU06FyntQ89dU92FcoOPAr5RN5-z4nxmZ5PF9FHNrfq7umLg2TjP4k8h_BKhMB8GtIazZQyWDYo2LqsRIvc24UK_prx1ggblxLuDDY8M_JyEgZIkEO1E/s640/hudson_1.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Albert Bierstadt, <a href="https://artsandculture.google.com/asset/scenery-in-the-grand-tetons/tQGS1SjveuPfFg">Scenery in the Grand Tetons</a>, 1865-1870</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO_w1-tGbe53L2JhyphenhyphenDmnLQ_XOWMhH5SZDDkC19N9NbGmjCJDZCv4KexAFnbr2QyqDSO1pZS3l3erX__SUd9pZxNbmgLNZ0Z46wjDUVyy0k76P9yDuRr4PB5tkM2e-6GW4XCT1FvmFdHqs/s1600/hudson_2.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="957" data-original-width="1600" height="382" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO_w1-tGbe53L2JhyphenhyphenDmnLQ_XOWMhH5SZDDkC19N9NbGmjCJDZCv4KexAFnbr2QyqDSO1pZS3l3erX__SUd9pZxNbmgLNZ0Z46wjDUVyy0k76P9yDuRr4PB5tkM2e-6GW4XCT1FvmFdHqs/s640/hudson_2.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Albert Bierstadt, <a href="https://artsandculture.google.com/asset/among-the-sierra-nevada-california/IQE1CY9y_Rfy5A">Among the Sierra Nevada Mountains</a>, California, 1868</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSipXZ2sqSkqk7k0b-sWeJw6ajo7jENGiPN6YHLkufShyWQIKOaE_esLIUAGMiNTUvTHl1pNmUQTccy0O9stlXdUh87sVa5MpA3bSPfXjviLCsKHzB9L3ORuWZUfIIp9knEFiowwa0uCs/s1600/hudson_3.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="937" data-original-width="1600" height="374" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSipXZ2sqSkqk7k0b-sWeJw6ajo7jENGiPN6YHLkufShyWQIKOaE_esLIUAGMiNTUvTHl1pNmUQTccy0O9stlXdUh87sVa5MpA3bSPfXjviLCsKHzB9L3ORuWZUfIIp9knEFiowwa0uCs/s640/hudson_3.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Mortimer Smith: <a href="https://www.dia.org/art/collection/object/winter-landscape-61811">Winter Landscape</a>, 1878</td></tr>
</tbody></table>
<div>
<div>
However, a lot of this painterly understanding -- though breathtaking -- was non-rigorous and physically inaccurate. Scaling this up to animated sequences was also very laborious. It wasn't until 1986, with the independent discovery of the rendering equation by David Immel et al. and James Kajiya, that we obtained physically-based rendering algorithms.</div>
<div>
<br /></div>
<div>
Of course, the scene must conserve energy: the electromagnetic energy fed into the scene (via radiating objects) must equal the total electromagnetic energy absorbed, reflected, or refracted within it. Here is the <b>rendering equation</b> explained in an annotated equation:</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghz_BaVdCZtNK0SiVduZBiS57vSzAhLIcVUPqInt7HQKCuFYxnKYyLg785RIKsU_LUX8A4bjP7yD9qIOs5YW1nmiP2USQot0BwWzHiqLzmkvT4yDw2qGdSPc7O6jzXaRVoBV6nGLbN-QI/s1600/pt.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1070" data-original-width="1432" height="478" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghz_BaVdCZtNK0SiVduZBiS57vSzAhLIcVUPqInt7HQKCuFYxnKYyLg785RIKsU_LUX8A4bjP7yD9qIOs5YW1nmiP2USQot0BwWzHiqLzmkvT4yDw2qGdSPc7O6jzXaRVoBV6nGLbN-QI/s640/pt.png" width="640" /></a></div>
<div>
<br /></div>
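For reference, the annotated equation above is commonly written (in its hemisphere form, with the standard symbols) as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Outgoing radiance $L_o$ at point $x$ equals emitted radiance $L_e$ plus an integral, over incoming directions $\omega_i$, of the BRDF $f_r$ times incoming radiance $L_i$ times a cosine foreshortening term.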
<div>
<div>
A <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo estimator</a> approximates a high-dimensional integral by averaging many independent samples of an unbiased estimator of that integral. Path tracing is the simplest possible Monte Carlo approximation to the rendering equation. I've borrowed some screenshots from Disney's excellent <a href="https://www.youtube.com/watch?v=frLwRLS_ZR0">tutorial on production path tracing</a> to explain how "physically-based rendering" works. </div>
</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtLrFznxXHJvD67dxU-1-MYSupV-pGYKifY938C-ORI_d6FWy31zVCEanIwU8kMSfYrLvGitUD1X-_Esi7exc58AJA9AcI0JgLGM_3J_DIvoV9xU4KI5rlkbGk9vBtaRh_VYjzImFK3xM/s1600/disney1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="731" data-original-width="1600" height="292" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtLrFznxXHJvD67dxU-1-MYSupV-pGYKifY938C-ORI_d6FWy31zVCEanIwU8kMSfYrLvGitUD1X-_Esi7exc58AJA9AcI0JgLGM_3J_DIvoV9xU4KI5rlkbGk9vBtaRh_VYjzImFK3xM/s640/disney1.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Initially, the only thing visible to the camera is the light source. Let there be light!</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrAdcDAxiVGuaEjEOv0GPNWJxRHAiQgrizr8W0WpsWLEU5Z8NLfu8DQgETHzTcVG__SODoQm5JpCbhy3zTpYUvDrv5pJQJffowtgcF5zygJX1AKOtBYxSNHoNPyJ2MiybAvBQeCjBVEg0/s1600/disney2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="715" data-original-width="1600" height="284" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrAdcDAxiVGuaEjEOv0GPNWJxRHAiQgrizr8W0WpsWLEU5Z8NLfu8DQgETHzTcVG__SODoQm5JpCbhy3zTpYUvDrv5pJQJffowtgcF5zygJX1AKOtBYxSNHoNPyJ2MiybAvBQeCjBVEg0/s640/disney2.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A stream of photons is emitted from the light and strikes a surface (in this case, a rock). It can be absorbed into non-visible energy, reflected off the object, or refracted into the object.</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIelt91pyV4z6pBwCfEZGK6iPvKNy8NdMq-scAWBNnJ_6dVXSlljOYateIh6xCM8vfpiUwo6hR4Hur0XfvPINxiv74kEowl9kUGMQJ8MZNQNYsKrsNaYEMg2O4wBPx8SOdIehH0rWjPpQ/s1600/disney3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="699" data-original-width="1600" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIelt91pyV4z6pBwCfEZGK6iPvKNy8NdMq-scAWBNnJ_6dVXSlljOYateIh6xCM8vfpiUwo6hR4Hur0XfvPINxiv74kEowl9kUGMQJ8MZNQNYsKrsNaYEMg2O4wBPx8SOdIehH0rWjPpQ/s640/disney3.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Any reflected or refracted light leaves the surface and continues in another random direction; the process repeats until no photons remain or they are absorbed by the camera lens.</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz0rO6xEgrkTvIQhlx-uNiOOjmcvhwWFfX6ibcxol0ltoEhOIiRKWHEAWJ3_87RreQaXKSd4An2BuDQx193Vt3OankLRE876zTihMAUiCdHlVHNV-d3XkV6SGj5AWhAv8ZoGllh3DPSgs/s1600/disney4.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="677" data-original-width="1600" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz0rO6xEgrkTvIQhlx-uNiOOjmcvhwWFfX6ibcxol0ltoEhOIiRKWHEAWJ3_87RreQaXKSd4An2BuDQx193Vt3OankLRE876zTihMAUiCdHlVHNV-d3XkV6SGj5AWhAv8ZoGllh3DPSgs/s640/disney4.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This process is repeated ad infinitum for many rays until the inflow and outflow of photons reach equilibrium, or the artist decides that the computer has been rendering for long enough. <span style="font-size: 12.8px;">The total light contribution to a surface is a path integral over all these light bounce paths.</span></td></tr>
</tbody></table>
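The bounce-and-average procedure illustrated above is, at heart, plain Monte Carlo integration: average many independent unbiased samples. Here is a toy sketch on a one-dimensional integral (the integrand, interval, and sample count are illustrative; a real path tracer samples ray paths rather than scalars):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    average f at uniform samples, then scale by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Estimate the integral of x^2 on [0, 1]; the exact answer is 1/3.
est = mc_integrate(lambda x: x * x, 0.0, 1.0)
print(est)
```

The estimate's error shrinks like $1/\sqrt{n}$ regardless of dimension, which is why Monte Carlo remains tractable for the very high-dimensional path integrals of light transport.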
<div>
<div>
This equation has applications beyond entertainment: the inverse problem is studied in astrophysics simulations (given the observed radiance of a supernova, what are the properties of its nuclear reactions?) and in <a href="https://www.amazon.com/Principles-Neutron-Transport-Problems-Mathematics/dp/0486462935">the neutron transport problem</a>. In fact, Monte Carlo methods for solving integral equations were developed for studying fissile reactions in the Manhattan Project! The rendering integral is also an <b><a href="https://en.wikipedia.org/wiki/Fredholm_theory">inhomogeneous Fredholm equation</a> of the second kind</b>, which has the general form:</div>
<div>
<br /></div>
<div>
$${\displaystyle \varphi (t)=f(t)+\lambda \int _{a}^{b}K(t,s)\varphi (s)\,\mathrm {d} s.}$$</div>
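Equations of this form invite solution by fixed-point iteration: substitute the current estimate of $\varphi$ back into the right-hand side, yielding the Neumann series. A runnable sketch on a uniform grid (the rectangle rule, grid size, and the constant-kernel example below are my own illustrative choices):

```python
def solve_fredholm(f, K, lam, a, b, n=200, iters=50):
    """Solve phi(t) = f(t) + lam * integral_a^b K(t, s) phi(s) ds by
    fixed-point iteration (Neumann series) on a uniform midpoint grid."""
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]   # grid midpoints
    phi = [f(t) for t in ts]                     # zeroth-order term: phi = f
    for _ in range(iters):
        # Substitute the current phi into the right-hand side.
        phi = [f(t) + lam * h * sum(K(t, s) * p for s, p in zip(ts, phi))
               for t in ts]
    return ts, phi

# Toy case: f = 1, K = 1, lam = 0.5 on [0, 1]. The solution is the
# constant c satisfying c = 1 + 0.5 * c, i.e. phi(t) = 2.
ts, phi = solve_fredholm(lambda t: 1.0, lambda t, s: 1.0, 0.5, 0.0, 1.0)
print(round(phi[0], 6))  # → 2.0
```

Each iteration adds one more "bounce" of the kernel, which is exactly the recursive expansion a path tracer performs on the rendering equation.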
<div>
<br /></div>
<div>
Take another look at the rendering equation. Déjà vu, anyone?</div>
<div>
<br /></div>
<div>
Once again, path tracing is nothing more than the Bellman-Ford heuristic encountered in shortest-path algorithms! The rendering integral is taken over the $4\pi$ steradians of solid angle on the unit sphere, which cover all directions an incoming light ray can come from. If we interpret this integration probabilistically, it is nothing more than the expectation (mean) over directions sampled uniformly from a sphere.</div>
<div>
<br /></div>
<div>
This equation takes the same form as the high-temperature softmax limit for Soft Q-learning! Recall that as $\tau \to \infty$, softmax converges to an expectation over a uniform distribution, i.e. a policy distribution with maximum entropy and no information. Light rays have no agency; they merely bounce around the scene like RL agents taking completely random actions! </div>
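This limit is easy to check numerically. Below is a sketch of the normalized soft maximum $\tau \log(\frac{1}{N}\sum_i e^{q_i/\tau})$, which approaches the hard max as $\tau \to 0$ and the uniform average as $\tau \to \infty$; the function name and Q-values are made up for illustration:

```python
import math

def soft_max_backup(qs, tau):
    """Normalized soft maximum tau * log(mean(exp(q / tau))): approaches
    max(qs) as tau -> 0 and the plain average of qs as tau -> infinity."""
    m = max(q / tau for q in qs)  # subtract the max for numerical stability
    return tau * (m + math.log(sum(math.exp(q / tau - m) for q in qs) / len(qs)))

qs = [1.0, 2.0, 3.0]
print(round(soft_max_backup(qs, 1e-3), 2))  # → 3.0, the hard max
print(round(soft_max_backup(qs, 1e6), 2))   # → 2.0, the uniform mean
```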
<div>
<br /></div>
<div>
The astute reader may wonder whether there is also a corresponding "hard-max" version of rendering, just as the hard-max Bellman Equality is to the Soft Bellman Equality in Q-learning. </div>
<div>
<br /></div>
<div>
The answer is yes! The <b>recursive raytracing</b> algorithm (invented before path tracing, actually) was a non-physical approximation of light transport that assumes the largest lighting contributions reflected off a surface come from one of the following sources:</div>
</div>
<div>
<br /></div>
<div>
<div>
<ol>
<li>Emitting material</li>
<li>Direct exposure to light sources</li>
<li>Strongly reflected light (i.e. surface is a mirror)</li>
<li>Strongly refracted light (i.e. surface is made of glass or water).</li>
</ol>
</div>
<div>
<br /></div>
<div>
In the case of reflected and refracted light, recursive trace rays are branched out to perform further ray intersection, usually terminating at some fixed depth.</div>
</div>
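The branching recursion can be sketched as a short function. Everything about the scene interface here (`scene.intersect`, the hit record's `emission`, `direct_light`, `reflectivity`, `transmissivity`, and pre-computed branch rays) is a hypothetical API chosen only to show the shape of the recursion, not any real renderer's interface:

```python
def trace(ray, scene, depth, max_depth=4):
    """Whitted-style recursive ray tracing skeleton: at each hit, keep only
    the strongest contributions (emission, direct light, mirror reflection,
    refraction) and recurse on the branched rays up to a fixed depth."""
    if depth > max_depth:
        return 0.0                      # terminate the recursion
    hit = scene.intersect(ray)          # hypothetical intersection query
    if hit is None:
        return scene.background
    color = hit.emission + hit.direct_light        # local, non-recursive terms
    if hit.reflectivity > 0.0:                     # mirror branch
        color += hit.reflectivity * trace(hit.reflected_ray, scene,
                                          depth + 1, max_depth)
    if hit.transmissivity > 0.0:                   # glass/water branch
        color += hit.transmissivity * trace(hit.refracted_ray, scene,
                                            depth + 1, max_depth)
    return color
```

Compare with path tracing: instead of branching only along the argmax-like mirror and refraction directions, a path tracer recurses along a randomly sampled direction and averages, which is precisely the soft (expectation) counterpart of this hard-max recursion.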
<div>
<br /></div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinBN60mza0n3AQxdwm27i6u-21T1o5k1kzbuaYVdXh1kYIlS_XFvytOj7RVgiIeCVaj3ZHkljqwJBloOGUq_vIvKf0up2jQe-lBG25O6lr_hXgPYqSVEKENK5vWeXmTmuGRivyk6530Lg/s1600/raytrace.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="688" data-original-width="1432" height="306" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinBN60mza0n3AQxdwm27i6u-21T1o5k1kzbuaYVdXh1kYIlS_XFvytOj7RVgiIeCVaj3ZHkljqwJBloOGUq_vIvKf0up2jQe-lBG25O6lr_hXgPYqSVEKENK5vWeXmTmuGRivyk6530Lg/s640/raytrace.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Raytracing approximation to the rendering equation.</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
<div>
Because ray tracing only considers the maximum-contribution directions, it is not able to model indirect light, such as light bouncing off a bright wall and bleeding into an adjacent wall. Although these contributions are minor in toy setups like Cornell Boxes, they play a dominant role in rendering pictures of snow, flesh, and food. </div>
<div>
<br /></div>
<div>
Below is a comparison of a ray-traced image and a path-traced image. The difference is like night and day:</div>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0tr2NVe4vK1CFDCy0E1hNmjgDHS2S8BpdNvXiJLcBy6cMhlItSCkc4Gf3LXIVpMgLVQq_XbkvO2vcYmGAkapojYf05AdilwagXc-uj-BcJevUusV2ryXtalhsKFebWF_3IYtppK3x66E/s1600/cbox.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="126" data-original-width="256" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0tr2NVe4vK1CFDCy0E1hNmjgDHS2S8BpdNvXiJLcBy6cMhlItSCkc4Gf3LXIVpMgLVQq_XbkvO2vcYmGAkapojYf05AdilwagXc-uj-BcJevUusV2ryXtalhsKFebWF_3IYtppK3x66E/s640/cbox.png" width="640" /></a></div>
<div>
<br /></div>
<div>
<div>
Prior work has drawn connections between light transport and value-based reinforcement learning, and in fact <a href="https://arxiv.org/pdf/1701.07403v1.pdf">Dahm and Keller 2017</a> leverage Q-learning to learn optimal selection of "ray bounce actions" to accelerate importance sampling in path tracing. Much of the physically-based rendering literature considers the problem of optimal importance sampling to minimize variance of the path integral estimators, resulting in less "noisy" images. </div>
<div>
<br /></div>
<div>
For more information on physically-based rendering, I highly recommend Benedikt Bitterli's <a href="https://benedikt-bitterli.me/tantalum">interactive tutorial on 2D light transport</a>, Pat Hanrahan's book chapter on <a href="http://www.graphics.stanford.edu/courses/cs348b-01/course29.hanrahan.pdf">Monte Carlo Path Tracing</a>, and the authoritative <a href="http://www.pbrt.org/">PBR textbook</a>.</div>
<div>
<br /></div>
<h2>
<b>Summary and Questions</b></h2>
<div>
<br /></div>
<div>
We have three well-known algorithms (currency arbitrage, Q-learning, path tracing) that independently rediscovered the principle of relaxation used in shortest-path algorithms such as Dijkstra's and Bellman-Ford. Remarkably, each of these disparate fields of study arrived at notions of hard and soft optimality, which become relevant in the presence of noise or high-dimensional path integrals. Here is a table summarizing the equations we explored:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdHH9MNJj5KjK5ExWOoe8X6FPGfPFNE3STPti15W3senr8os9Q4M8WFPkQcZRgxf8ndRwqOn8gmY-sh0l2J7TB3LdsKVu-kZFQs6_BzQ3E0KGyv2XUJg4pBkT5VBQX5tkB5ZR4dHn8cAQ/s1600/eq_table_standalone.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="190" data-original-width="1600" height="76" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgdHH9MNJj5KjK5ExWOoe8X6FPGfPFNE3STPti15W3senr8os9Q4M8WFPkQcZRgxf8ndRwqOn8gmY-sh0l2J7TB3LdsKVu-kZFQs6_BzQ3E0KGyv2XUJg4pBkT5VBQX5tkB5ZR4dHn8cAQ/s640/eq_table_standalone.png" width="640" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
These different fields have quite a lot of ideas that could be cross-fertilized. Just to toss some ideas out there (a request for research, if you will):</div>
<div>
<br /></div>
<div>
<ul>
<li>There has been some preliminary work on using optimal control to reduce sample complexity of path tracing algorithms. Can sampling algorithms used in rendering be leveraged for reinforcement learning?</li>
<li>Path tracing integrals are fairly expensive because states and actions are continuous and each bounce requires ray-intersecting a geometric data structure. What if we do light transport simulations on a point cloud with a precomputed visibility matrix between all points, and use that as an approximation for irradiance caching / final-gather?</li>
<li>Path tracing is to Soft Q-Learning as Photon Mapping is to ...?</li>
<li>Has anyone ever tried using the Maximum Entropy principle as a regularization framework for financial trading strategies?</li>
<li>The selection of a proposal distribution for importance-sampled Monte Carlo rendering could utilize Boltzmann Distributions with soft Q-learning. This is nice because the proposal distribution over recursive ray directions has infinite support by construction, and Soft Q-learning can be used to tune random exploration of light rays.</li>
<li> Is there a distributional RL interpretation of path tracing, such as polarized path tracing?</li>
<li>Given the equivalence between Q-learning and shortest-path algorithms, it's interesting to note that in Deep RL research, we carefully initialize weights but leave the Q-function values fairly arbitrary. However, shortest-path algorithms rely on initializing distances to infinity (equivalently, values to negative infinity in the reward-maximization view), so that estimates propagated during relaxation correspond to actually realizable paths. Why aren't we initializing all Q-function values to very negative numbers?</li>
</ul>
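On the initialization point in the last bullet, the classic relaxation loop makes the convention explicit. A minimal Bellman-Ford sketch (the toy graph and variable names are my own):

```python
import math

def bellman_ford(n, edges, src):
    """Classic relaxation: distances start at +infinity (i.e. values at
    -infinity in the reward-maximization view), so every finite estimate
    corresponds to a path that has actually been discovered."""
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(n - 1):                 # at most n-1 relaxation sweeps
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax edge (u, v)
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
print(bellman_ford(3, edges, 0))  # → [0.0, 1.0, 3.0]
```

A Q-table initialized to arbitrary (e.g. zero or random) values has no such guarantee: early backups can propagate optimistic estimates of states never actually reached.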
<div>
<br /></div>
</div>
<h2>
Acknowledgements</h2>
<div>
<br /></div>
<div>
<div>
I'm very grateful to Austin Chen, Deniz Oktay, Ofir Nachum, and Vincent Vanhoucke for proofreading and providing feedback to this post. All typos/factual errors are my own; please write to me if you spot additional errors. And finally, thank you for reading!</div>
</div>
</div>
<div>
<br /></div>