Casual Causality

I am a final-year Ph.D. student in the Robust Machine Learning group at the Max Planck Institute for Intelligent Systems in Tübingen, supervised by Wieland Brendel, Ferenc Huszár, Matthias Bethge, and Bernhard Schölkopf. I am part of the ELLIS and IMPRS-IS programs. I have also spent time at Mila, the Vector Institute, and the University of Cambridge.

The main motivation for my research is to advance our understanding of how and why deep learning models work. My research toolkit currently centers on identifiable causal and self-supervised representation learning and out-of-distribution (OOD) generalization, with a focus on compositionality in language models. During my Ph.D., I realized that current machine learning theory is insufficient to explain precisely the most interesting and useful properties of deep neural networks. I aim to help close this gap by focusing on:

  • extending machine learning theory to understand the role of inductive biases (e.g., model architecture or optimization algorithm)
  • grounding machine learning in the physical world via (causal) principles and humanity’s prior knowledge
  • extending our understanding of out-of-distribution and compositional generalization
  • uncovering overarching patterns across different fields in machine learning

I am also passionate about AI for research: I believe AI can fundamentally transform how we do science. I outlined this vision in a position paper at the P-AGI Workshop @ ICLR 2026 and built the Research Agora, an open-source skills marketplace for AI-assisted research.

I completed both my B.Sc. and M.Sc. in electrical engineering at the Budapest University of Technology and Economics, specializing in control engineering and intelligent systems. In my free time, I enjoy being outdoors and often bring my camera with me.

Recent Publications

Recent Projects

  • Tools for managing BibTeX bibliographies: automatically update preprints to their published versions and filter a bibliography to only the cited references. (updated 2026-04-05)
  • HALLMARK: a citation-hallucination detection benchmark for ML papers, with 2,525 entries, 14 hallucination types, 3 difficulty tiers, and 10 baselines, including LLMs and verification tools. (updated 2026-04-02)
  • A personal website for Patrik Reizinger; the template is courtesy of academicpages. (updated 2026-04-02)
  • Research Agora: Claude Code skills, benchmarks, and tools for ML researchers, covering paper writing, citation verification, experiment tracking, and LaTeX automation. (updated 2026-04-02)
  • Code for "Estimating Treatment Effects with Independent Component Analysis" (arXiv:2507.16467). (updated 2026-03-31)
  • CV (TeX). (updated 2026-03-22)