Rotating Features For Object Discovery
Published:
Structure is a useful but underleveraged inductive bias for representation learning.
Cover your bases.
Disentanglement is a concept rooted in geometric deep learning.
Graphs don’t tell us about the nature of dependence, only about its (non-)existence.
In the previous post, we dived deep into abstract algebra to motivate why Geometric Deep Learning is an interesting topic. Now we begin the journey to show that it is also useful in practice. In summary, we know that symmetries constrain our hypothesis class, making learning simpler—indeed, they can make learning a tractable problem. How does this happen?
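One minimal illustration of the point above (my own sketch, not code from the post): averaging a predictor over a symmetry group, here the permutations of its input coordinates, forces it into the invariant part of the hypothesis class. The function names are hypothetical.

```python
from itertools import permutations

def symmetrize(f):
    """Average f over all permutations of its input tuple,
    producing a permutation-invariant function."""
    def f_inv(x):
        perms = list(permutations(x))
        return sum(f(p) for p in perms) / len(perms)
    return f_inv

# A non-invariant predictor: a weighted sum that favors the second coordinate.
f = lambda x: 1.0 * x[0] + 2.0 * x[1]

g = symmetrize(f)
print(g((3.0, 5.0)))  # 12.0 — and g((5.0, 3.0)) gives the same value
```

The symmetrized `g` no longer distinguishes input orderings, so the space of functions we can represent (and must search) is strictly smaller — the sense in which symmetry makes learning simpler.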
Yes, abstract algebra is actually useful for machine learning.
Improve typesetting and save space in your submissions. Who doesn’t want that?
It’s again a statistics deck.
Two ways to shut the door before confounding enters the scene.
Interventions in disguise.
Hitting the nail on its arrowhead, a.k.a. when does $X$ cause $Y$?
We will talk about IC, $IC$, and ${IC}^*$ in this post. You get the difference.
DAGs like to play hide-and-seek. But we are more clever.
The model zoo of Markovian conditions is fascinatingly confusing. Let there be light!
What you won’t find in this post are unconditional claims about the superiority of causal inference.
This blog discusses causal inference. Why is this post about Bayesian statistics, then?
Asking a causal question is not casual.
No one told me that I need a dictionary for learning causal inference. Indeed, there was none before. Now there is.
Not just parameter learning, but learning about parameter learning got easier today.
A top-secret guide to d-separation. We will go deep, ready?
This post deliberately (wink) tries to confuse you about the grand scheme of DAG equivalence. A good deal, isn’t it?
If your goal is to be able to recall Sum-Product Belief Propagation even at 3 a.m., this is the post you are looking for.
d-separation is the bread and butter of deciding conditional independence in DAGs. What is a DAG, anyway?
A causality blog cannot exist without discussing Judea Pearl’s Causality book. Thus, I am paying my debt.
To make learning probabilistic graphical models frictionless and more fun.
Good resources matter, a lot.
A PhD student’s casual journey with causal inference.