Wayne Wang

Building systems that think.

Questioning systems that predict.

Living between grief and nothing

Education

MS Computer Science @ UC San Diego

2024 - 2026

BS Computer Science @ NYU Tandon · Summa Cum Laude

2021 - 2025

Previously

ByteDance

Jun 2025 - Sep 2025

Software Engineer Intern

NYU Research

Jun 2024 - May 2025

Research Assistant

CITIC Poly Fund

Jun 2023 - Aug 2023

Data Engineering Intern

Contact

My research asks: how do we move LLMs beyond pattern matching toward genuine understanding?

I think of it like Taylor expansion. Prompt engineering gives us first-order approximation—linear workflows. RL and fine-tuning add second-order terms—reasoning chains. But true creativity lives in higher-order terms.
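In symbols (one reading of the analogy, mapping each Taylor term to a capability level):

\[
f(x) \approx \underbrace{a_0}_{\text{base model}}
+ \underbrace{a_1 (x - x_0)}_{\text{prompt engineering}}
+ \underbrace{a_2 (x - x_0)^2}_{\text{RL / reasoning}}
+ \underbrace{\textstyle\sum_{k \ge 3} a_k (x - x_0)^k}_{\text{mental models}}
\]

Each added order buys a qualitatively different capability; the higher-order terms are the hard part.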

My work focuses on mental models—structured ways of understanding that could give LLMs higher-order capabilities. Current focus: agentic deep research. End goal: machines that genuinely predict, not recall.

Approximating Intelligence

[Interactive visualization: the target f(x), approximated order by order. 0th order: baseline (f(x) ≈ a₀). 1st order: Prompt Engineering. 2nd order: RL / Reasoning. Higher order: Mental Models.]

01 · Temporal Leakage in Search-Engine Date-Filtered Web Retrieval

71% of date-filtered queries return post-cutoff data

arXiv (under review for ACL 2026)