Photo of Nathan Young

Nathan (N.C.) Young

PhD Student · Strong AI Lab, University of Auckland

I'm an AI researcher working at the intersection of theoretical computer science and machine learning. My current work asks whether Transformers can be understood as bounded approximations of Solomonoff Induction, a theoretically optimal sequence prediction algorithm.

I'm also interested in prediction markets, improving organisational incentives and decision-making, and sea shanties.

My PhD thesis is founded on a simple hypothesis: that Transformers, the architecture behind Large Language Models (and therefore the best existing sequence-prediction algorithm), might approximate Solomonoff Induction — the theoretically optimal (but computationally intractable) sequence predictor. An explicit model of how this approximation works could shed light on why large language models work as well as they do, and help us construct AI systems as powerful as LLMs but explicitly programmed rather than trained as black boxes.
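For readers unfamiliar with Solomonoff Induction, its core object is the universal prior: the probability assigned to a string is the total weight of all programs that produce it on a fixed universal prefix machine, with shorter programs weighted exponentially more. A standard statement (notation follows the usual convention, with U a universal prefix Turing machine) is:

```latex
% Universal prior over finite strings x:
% sum over programs p whose output on U begins with x
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% Prediction is then by conditioning:
M(x_{n+1} \mid x_{1:n}) = \frac{M(x_{1:n} x_{n+1})}{M(x_{1:n})}
```

The sum over all programs is what makes the predictor optimal in a precise sense, and also what makes it incomputable — hence the interest in whether trained Transformers realise a bounded approximation of it.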

I'm based at the Strong AI Lab (SAIL) at the University of Auckland.

Recent writing

All posts →


Find me elsewhere