Hi! My name is Alex and I'm a co-founder at Luma AI, where I work on generative models, including Genie, a text-to-3D model.
For those interested: please see this link for open positions or this link for internships.

Previously, I graduated from UC Berkeley, where I majored in CS and applied math and worked on 3D computer vision research related to NeRFs with Prof. Angjoo Kanazawa in BAIR. While at Berkeley, I interned at Google and Adobe, helped search for aliens at SETI: Breakthrough Listen, participated in ICPC, and released various open source projects on GitHub, among other things. Before Berkeley, I lived in Vancouver, Canada for many years. In my free time, I enjoy hiking and stargazing.

Luma AI
2022-
Co-Founder
Series B AI start-up based in Palo Alto
Berkeley AI Research
Summer 2020-Fall 2021
Research Assistant
Advisor: Angjoo Kanazawa; worked on research related to NeRFs (see below)
Adobe Research
Summer 2021
Research Intern
Host: Oliver Wang; worked on NeRF without COLMAP
CS 61A @ UC Berkeley
Fall 2019
Teaching Assistant
Previously ran the traditional Hog Contest for several semesters.
Google
Summer 2019
SWE Intern
Built banking and trading features for Google Assistant
SETI: Breakthrough Listen
Spring 2019
URAP Apprentice
Searched for alien technosignatures.
FHL Vive Center
Fall 2017-Fall 2019
Research Assistant
Worked on human and hand projects in OpenARK with Dr. Allen Yang

Research Works

For projects at Luma, see Genie, Interactive Scenes, etc. At Luma, I now work on generative models. Before that, I started doing research right around the time that NeRFs (neural radiance fields) came out, and my focus was on making them more practical for real-world use, including work that sped up rendering (PlenOctrees) and training (Plenoxels), as well as one of the first attempts at sparse-view NeRFs (pixelNeRF).

Plenoxels: Radiance Fields Without Neural Networks

CVPR 2022 Oral

We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization, without any neural components. On standard benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields with no loss in visual quality.
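
Below is a minimal PyTorch sketch of the core idea, not the official implementation: the scene is nothing but a grid of optimizable parameters (one density value plus spherical-harmonic color coefficients per voxel), trained by plain gradient descent. The grid size, dense storage, and placeholder loss are illustrative assumptions; the real system stores the grid sparsely and supervises it by volume-rendering rays against ground-truth pixels.

```python
import torch
import torch.nn.functional as F

G = 64          # grid resolution (illustrative)
SH_DIM = 9      # degree-2 spherical harmonics: 9 coefficients per color channel

# Learnable grid (dense here for simplicity; the paper's grid is sparse):
# 1 density value + 3 * 9 SH color coefficients per voxel. No neural network.
voxels = torch.zeros(G, G, G, 1 + 3 * SH_DIM, requires_grad=True)

def sample(points):
    """Trilinearly interpolate voxel values at points in [-1, 1]^3."""
    coords = points.view(1, -1, 1, 1, 3)                  # (1, N, 1, 1, 3)
    vol = voxels.permute(3, 0, 1, 2).unsqueeze(0)         # (1, C, G, G, G)
    out = F.grid_sample(vol, coords, align_corners=True)  # (1, C, N, 1, 1)
    return out.view(voxels.shape[-1], -1).T               # (N, C)

# One gradient step. A real trainer volume-renders rays through the grid and
# compares to ground-truth pixels (plus regularization); this target is a stand-in.
opt = torch.optim.Adam([voxels], lr=1e-1)
pts = torch.rand(1024, 3) * 2 - 1             # stand-in for ray sample points
target = torch.rand(1024, 1 + 3 * SH_DIM)     # placeholder supervision
loss = (sample(pts) - target).pow(2).mean()
loss.backward()
opt.step()
```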

PlenOctrees for Real-time Rendering of Neural Radiance Fields

ICCV 2021 Oral

We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800x800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo.
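
As a concrete illustration of the spherical-harmonic factorization, here is a small sketch (my own tensor layout assumptions, not the paper's code) of evaluating view-dependent color from degree-2 SH coefficients, so radiance can be queried for any viewing direction without running a network:

```python
import torch

# Real spherical-harmonic basis constants for degrees 0-2.
C0 = 0.28209479177387814
C1 = 0.4886025119029199
C2 = [1.0925484305920792, -1.0925484305920792, 0.31539156525252005,
      -1.0925484305920792, 0.5462742152960396]

def sh_to_rgb(coeffs, dirs):
    """coeffs: (N, 3, 9) SH coefficients per RGB channel; dirs: (N, 3) unit view vectors."""
    x, y, z = dirs[:, 0:1], dirs[:, 1:2], dirs[:, 2:3]
    basis = torch.cat([
        C0 * torch.ones_like(x),                    # degree 0
        -C1 * y, C1 * z, -C1 * x,                   # degree 1
        C2[0] * x * y, C2[1] * y * z,
        C2[2] * (2 * z * z - x * x - y * y),
        C2[3] * x * z, C2[4] * (x * x - y * y),     # degree 2
    ], dim=-1)                                      # (N, 9)
    # Weighted sum over the basis gives view-dependent radiance per channel;
    # sigmoid squashes it to valid colors (a common choice).
    return torch.sigmoid((coeffs * basis.unsqueeze(1)).sum(-1))  # (N, 3)
```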

pixelNeRF: Neural Radiance Fields from One or Few Images

CVPR 2021

We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields involves optimizing the representation to every scene independently, requiring many calibrated views and significant compute time. We take a step towards resolving these shortcomings by introducing an architecture that conditions a NeRF on image inputs in a fully convolutional manner. This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
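
A rough sketch of the conditioning mechanism (heavily simplified: no positional encoding, no view directions, a single input view, and a toy one-layer encoder; the projection function is a hypothetical stand-in): each query point is projected into the input image, a CNN feature is sampled at that pixel, and the NeRF MLP consumes the feature alongside the point.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedNeRF(nn.Module):
    """Toy pixelNeRF-style model: an image-conditioned radiance field."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Fully convolutional encoder (stand-in for the paper's backbone),
        # so features stay spatially aligned with image pixels.
        self.encoder = nn.Conv2d(3, feat_dim, 3, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 4),  # RGB + density
        )

    def forward(self, image, points, project):
        # project: hypothetical callable mapping (N, 3) world points to
        # (N, 2) normalized pixel coordinates in [-1, 1].
        feats = self.encoder(image)                        # (1, F, H, W)
        uv = project(points).view(1, -1, 1, 2)             # (1, N, 1, 2)
        f = F.grid_sample(feats, uv, align_corners=True)   # (1, F, N, 1)
        f = f.view(feats.shape[1], -1).T                   # (N, F)
        return self.mlp(torch.cat([points, f], dim=-1))    # (N, 4)
```

Because the encoder and MLP are shared across scenes rather than fit to one scene, training on many scenes lets the model infer a new scene feed-forward from as little as a single image.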

The Breakthrough Listen Search for Intelligent Life: Data Formats, Reduction and Archiving

Matthew Lebofsky et al.
Publications of the Astronomical Society of the Pacific (PASP)

Breakthrough Listen is the most comprehensive and sensitive search for extraterrestrial intelligence (SETI) to date, employing a collection of international observational facilities including both radio and optical telescopes. During the first three years of the Listen program, thousands of targets have been observed with the Green Bank Telescope (GBT), Parkes Telescope and Automated Planet Finder. At GBT and Parkes, observations have been performed ranging from 700 MHz to 26 GHz, with raw data volumes averaging over 1 PB/day... In this paper, we describe the hardware and software pipeline used for collection, reduction, archival, and public dissemination of Listen data.


Misc

My Chinese name is 余思贤 / 余思賢.