Suppose you are in a new city, looking to get dinner with friends. You pull out Google Maps to help with the decision. To make the process easier, you decide to compare ratings: a nearby Cambodian restaurant boasts 4.8 stars, while the equally close Italian place is a close competitor at 4.6 stars but has five times as many reviews. Surely that must make a difference? You reach for your scratchpad, promising your friends that you have the situation under control.
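One way to make the review counts bite is a quick Bayesian estimate. The sketch below is a minimal illustration (the prior strength and the concrete review counts are assumptions, not data from any real listing): treat the prior as a handful of pseudo-reviews at an average rating and shrink each observed average toward it.

```python
def posterior_mean(avg_stars, n_reviews, prior_mean=3.5, prior_count=10):
    """Shrink an observed average rating toward a prior mean.

    The prior acts like `prior_count` pseudo-reviews at `prior_mean`;
    both values are illustrative assumptions, not calibrated ones.
    """
    return (prior_count * prior_mean + n_reviews * avg_stars) / (prior_count + n_reviews)

# Assumed counts honoring the five-fold gap: 40 reviews vs. 200 reviews.
cambodian = posterior_mean(4.8, 40)
italian = posterior_mean(4.6, 200)
print(f"Cambodian: {cambodian:.2f}, Italian: {italian:.2f}")
```

With these made-up numbers the shrunken estimates land at about 4.54 versus 4.55: the five-fold review advantage almost exactly cancels the 0.2-star gap, which is precisely why the count matters.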
Is random behavior helpful in any situation? By definition, random actions are the most uninformed ones, and if anything better is known, they should be suboptimal. Yet the issue is more subtle. Reinforcement learning and game theory offer paradigms to reason about this.
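As a concrete instance from reinforcement learning, here is a minimal epsilon-greedy bandit sketch (the two-armed setup and all constants are assumptions for illustration): acting randomly a small fraction of the time is exactly what lets the agent discover that its current best guess is wrong.

```python
import random

# Two-armed bandit: true mean rewards, unknown to the agent (assumed setup).
true_means = [0.3, 0.7]
estimates, counts = [0.0, 0.0], [0, 0]
epsilon = 0.1  # probability of acting uniformly at random

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore: uninformed, random
    else:
        arm = estimates.index(max(estimates))  # exploit: current best guess
    reward = random.gauss(true_means[arm], 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates)  # both arms get sampled, so both estimates converge
```

Game theory makes the same point through mixed strategies: in matching pennies, any deterministic strategy can be exploited, so the only equilibrium is to randomize.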
Autoregressive models use their own output to arrive at predictions. In machine learning, this amounts to “training on the output”, i.e., on generated data. More broadly, intelligent behavior is often accompanied by deep thought or even dreaming between actions. In both cases, the system is decoupled from the ground truth. Despite this apparent conundrum, there seems to be a benefit.
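To make the feedback loop concrete, here is a toy sketch (a bigram character of model chosen purely for illustration, not the models discussed in the post): once fitted, it conditions each prediction on its own previous output rather than on the training text.

```python
import random
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat ran"
words = text.split()

# Fit a bigram model on ground-truth data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

# Autoregressive generation: each step is conditioned on the model's
# own previous output; the ground truth no longer enters the loop.
word = "the"
output = [word]
for _ in range(8):
    choices = bigrams[word]
    if not choices:  # dead end: this word never had a successor in training
        break
    word = random.choices(list(choices), weights=list(choices.values()))[0]
    output.append(word)
print(" ".join(output))
```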
Meta-learning refers to learning a more general framework for learning, hence the name. Yet this concept subsumes a range of related notions, including transfer learning, few-shot learning, continual learning, and fine-tuning. We develop an abstract framework that unifies them. This extends beyond parametric models.
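The common skeleton behind these notions is a bilevel loop: an inner loop adapts to a task, an outer loop improves the starting point for adaptation. Here is a minimal MAML-style sketch of that structure (a toy one-parameter regression with assumed learning rates, not the framework developed in the post):

```python
import random

# Each task is "fit y = a * x" for a task-specific slope a; the meta-learner
# learns an initialization theta that adapts well after one gradient step.
alpha, beta = 0.1, 0.01   # inner and outer learning rates (assumed)
theta = 0.0               # meta-learned initialization

for _ in range(2000):
    a = random.uniform(1.0, 3.0)   # sample a task
    x = random.uniform(-1.0, 1.0)  # one support example

    # Inner loop: one gradient step on the task loss L = (theta*x - a*x)^2.
    grad = 2 * (theta - a) * x * x
    adapted = theta - alpha * grad

    # Outer loop: differentiate the post-adaptation loss w.r.t. theta;
    # by the chain rule, d(adapted)/d(theta) = 1 - 2*alpha*x*x.
    x_q = random.uniform(-1.0, 1.0)  # one query example
    outer_grad = 2 * (adapted - a) * x_q * x_q * (1 - 2 * alpha * x * x)
    theta -= beta * outer_grad

print(theta)  # drifts toward the middle of the task distribution, ~2.0
```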
In statistical physics, we often deal with systems comprising many components. To calculate their statistics, high-dimensional integrals over the variables $\boldsymbol{x}\in\mathbb{R}^{N}$ with $N\gg1$ are required. A typical form is the partition function

$$
Z=\int_{\mathbb{R}^{N}}\mathrm{d}\boldsymbol{x}\,e^{-\beta H(\boldsymbol{x})},
$$

where $H(\boldsymbol{x})$ is the energy of a configuration $\boldsymbol{x}$ and $\beta$ the inverse temperature.
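Such integrals are rarely tractable in closed form. As a minimal illustration (an assumed separable Gaussian Hamiltonian, chosen because the exact answer is known), here is a plain Monte Carlo estimate of $Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, n_samples = 5, 5.0, 500_000

# Hamiltonian H(x) = |x|^2 / 2 with beta = 1; then Z = (2*pi)^(N/2) exactly.
x = rng.uniform(-L, L, size=(n_samples, N))
H = 0.5 * np.sum(x**2, axis=1)

# Plain Monte Carlo: Z ~ volume of the box times the mean integrand.
Z_mc = (2 * L) ** N * np.exp(-H).mean()
Z_exact = (2 * np.pi) ** (N / 2)
print(f"MC: {Z_mc:.2f}  exact: {Z_exact:.2f}")
```

Already at $N=5$ the integrand is appreciable only in a tiny corner of the sampling box, and for $N\gg1$ plain Monte Carlo breaks down entirely, which is exactly why more refined methods are needed.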
Workflow for blog posts:
- Write the document in a Jupyter Notebook, using MyST markdown for equation references (see the example after this list).
- Produce daunting equations in a temporary LyX document.
- Render it out to Markdown using render_nb.sh and use it in the blog as an HTML+JS combo.
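For the equation references in the first step, MyST's labeled dollar-math syntax looks like this (a generic example, not taken from an actual post):

```markdown
$$
Z = \int_{\mathbb{R}^N} \mathrm{d}\boldsymbol{x}\, e^{-\beta H(\boldsymbol{x})}
$$ (partition-function)

As equation {eq}`partition-function` shows, ...
```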