PIXL Group Lunch (Princeton Computer Science)

The PIXL lunch meets every Monday during the semester at noon in room 402 of the Computer Science building. To receive announcements, sign up for the "pixl-talks" mailing list.

Upcoming Talks

Monday, April 13, 2020
Deniz Oktay

Monday, April 20, 2020
Yuting Yang

Monday, April 27, 2020
Jiaqi Su

Monday, May 04, 2020
Fangyin Wei

Monday, May 11, 2020
Angela Dai

Monday, May 18, 2020
Zeyu Wang

Previous Talks

Monday, February 10, 2020
Ethan Tseng

Monday, February 17, 2020
Data + Art: better science communication
Kirell Benzi

What is Data Art? Can a spreadsheet be transformed into a beautiful, emotional visualization that communicates knowledge and information more deeply than either the dry spreadsheet or the traditional charts we use daily? What additional insight can these complex visualizations provide? Inspired by data visualization and creative coding, we will discuss their main differences as we explore interesting (and entertaining) data science research projects on topics such as biodiversity, the strength of collaboration, the Montreux Jazz Festival, Wikipedia, and the Star Wars expanded universe. This seminar will show that Data Art can serve as a new communication medium for telling impactful, insightful stories and analyses by connecting scientific rigor with creativity.

Kirell Benzi is a data artist, speaker and data visualization lecturer. He holds a Ph.D. in Data Science from EPFL (Ecole Polytechnique Fédérale de Lausanne). His unique work, mixing data visualization and abstract aesthetics, has been shown in museums, newspapers, magazines and on over 100 websites in 10 languages. In 2018, he gave a keynote at a TEDx symposium in Annecy, France on the combination of data and art. He regularly tours the world, using art to inspire greater data literacy by showing the positive outcomes technology can have for society.

Monday, March 23, 2020
Encoder-Free 3D Deep Learning for Shape Recognition and Registration
Yi Fang (Assistant Professor, NYU Abu Dhabi and NYU Tandon)

With the availability of both large 3D datasets and unprecedented computational power, researchers have shifted their focus to applying deep learning to challenges in specific tasks such as 3D classification, registration, recognition, and correspondence. Deep learning models typically take grid-structured input so they can use discrete convolutions as their fundamental building blocks. However, irregular, non-Euclidean 3D data representations pose a great challenge for directly applying standard convolutional neural networks to 3D applications such as object recognition from 3D point clouds, 3D shape registration and matching, and 3D localization and mapping. In this talk, to address the challenges of learning with irregular 3D data representations, I will discuss our lab's recent efforts in developing an encoder-free deep neural network architecture, which we apply to 3D deep learning for shape recognition and registration. Mainstream 3D deep learning efforts require explicitly designed encoders to extract deep shape features and/or spatial-temporal correlation features from irregular 3D data representations. By contrast, we acknowledge the difficulty of designing an explicit encoder to extract deep features from unstructured 3D data, and we propose approaches that work around this issue by extracting the deep features implicitly for various 3D tasks. Our key novelty is a unified concept of the task-specific latent code (TSLC), which takes different forms depending on the nature of the task: a 3D shape descriptor for shape recognition, a 3D shape spatial correlation tensor for shape alignment, or a spatial-temporal descriptor for 3D group registration.
The TSLC captures the geometric information from unstructured 3D data essential to each task and is used as input to a task-specific decoder that produces the desired output. Our approach starts with a randomly initialized TSLC. At training time, we jointly optimize the latent shape code and the decoder's weights to minimize a task-specific loss; at inference time, we hold the decoder's weights fixed and optimize only the TSLC. This encoder-free approach brings two unique advantages: (1) it avoids an explicit 3D feature encoder for irregular 3D data representations, and (2) it enhances the flexibility of feature learning for unseen data. The design centers on combining optimization and learning, enabling further fine-tuning on the test data for better generalization; a conventional neural network has no such flexibility at test time. We conducted experiments on a variety of tasks, including unsupervised learning of 3D registration, 3D correspondence, and 3D recognition. Qualitative and quantitative comparisons on these experiments demonstrate that our proposed method achieves superior performance over existing methods.
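The training/inference split described above (jointly optimize code and decoder during training; freeze the decoder and optimize only the code at test time) can be sketched in a few lines. The following is a toy illustration only, assuming a linear decoder and synthetic low-rank vectors in place of 3D shapes; all names, dimensions, and learning rates are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_obs, n_train = 4, 16, 20

# Hypothetical low-rank generator standing in for a family of 3D signals.
W_true = rng.normal(size=(d_obs, d_latent))
X_train = W_true @ rng.normal(size=(d_latent, n_train))
x_test = W_true @ rng.normal(size=d_latent)   # unseen sample

def err(W, z, x):
    r = W @ z - x
    return float(np.mean(r * r))

# Training: jointly optimize decoder weights W and one latent code per sample,
# both starting from small random initializations.
W = rng.normal(scale=0.1, size=(d_obs, d_latent))
Z = rng.normal(scale=0.1, size=(d_latent, n_train))
lr = 0.005
for _ in range(20000):
    R = W @ Z - X_train              # reconstruction residuals
    W = W - lr * 2.0 * (R @ Z.T)     # decoder update (training only)
    Z = Z - lr * 2.0 * (W.T @ R)     # per-sample latent code updates

# Inference on unseen data: W is frozen; only the latent code moves.
z = np.zeros(d_latent)
before = err(W, z, x_test)
for _ in range(5000):
    r = W @ z - x_test
    z = z - 0.002 * 2.0 * W.T @ r    # gradient step on the code alone
after = err(W, z, x_test)
print(f"test reconstruction MSE: {before:.4f} -> {after:.6f}")
```

Because the decoder is never fed the raw input, no explicit encoder for the irregular data is needed; the test-time optimization is exactly the "fine-tuning on the test data" the abstract refers to.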

Monday, March 30, 2020
Image-Based Acquisition and Modeling of Polarimetric Reflectance
Seung-Hwan Baek

Realistic modeling of the bidirectional reflectance distribution function (BRDF) is a vital prerequisite for forward and inverse rendering used in graphics, vision, and optics. Over the last decades, the availability of databases containing real-world material measurements has fueled considerable innovation in the development of BRDF models. However, previous datasets ignored the polarization state of light, as it is imperceptible to the human eye. While subtle to human observers, polarization is easily perceived by any optical sensor, providing a wealth of additional information about the shape and material properties of the object. In this work, we present the first polarimetric BRDF (pBRDF) dataset that captures the pBRDF of real-world materials over the full angular domain, and at multiple wavelengths. We propose a system combining image-based acquisition with spectroscopic ellipsometry to perform measurements in a realistic amount of time. We demonstrate usage of our database in a physically-based renderer that accounts for polarized interreflection, and we investigate the relationship of polarization and material appearance, providing insights into the behavior of characteristic real-world pBRDFs.
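For intuition about what "polarimetric" adds: in Mueller calculus, light is a 4-component Stokes vector and each interaction is a 4x4 Mueller matrix, so a pBRDF generalizes the scalar BRDF value to such a matrix per incident/outgoing direction and wavelength. A minimal, self-contained sketch of this machinery (standard optics, not the paper's dataset or renderer) using an ideal linear polarizer:

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1, c,     s,     0],
        [c, c * c, c * s, 0],
        [s, c * s, s * s, 0],
        [0, 0,     0,     0],
    ])

# Stokes vector of unpolarized light: intensity 1, no polarization.
unpolarized = np.array([1.0, 0.0, 0.0, 0.0])

# A pBRDF plays the role of such a 4x4 matrix per (w_i, w_o, wavelength):
# outgoing Stokes vector = M @ incoming Stokes vector.
after_first = linear_polarizer(0.0) @ unpolarized          # horizontal filter
after_second = linear_polarizer(np.pi / 4) @ after_first   # 45-degree filter

print(after_first[0])   # ~0.5: half the intensity survives, now fully polarized
print(after_second[0])  # ~0.25: Malus's law, 0.5 * cos^2(45 deg)
```

A camera only records the first Stokes component (intensity), which is why an ordinary BRDF dataset discards the rest of the vector that the pBRDF dataset measures.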

Seung-Hwan is a post-doctoral research associate working with Prof. Felix Heide at Princeton. His research interests lie in computer graphics and computer vision, with a particular focus on computational imaging. He has jointly developed optics and algorithms to solve graphics and vision problems, especially using the wave properties of light. He is a recipient of the SIGGRAPH Asia Doctoral Consortium, the Microsoft Research Asia Ph.D. Fellowship, and best paper/demo awards at ACCV 2014.

Monday, April 06, 2020
Image Generalization Through Camera Model Design
Voicu Popescu

Most computer graphics and visualization applications rely on images rendered with the planar pinhole camera, which approximates the human eye well. Whereas for some applications it is important to generate images that closely resemble what users would actually see if they explored a physical replica of the dataset, for many other applications the constraints of the planar pinhole camera are unnecessarily restrictive. In this talk we give an overview of our work to remove the uniform sampling rate and the single viewpoint constraints of conventional images. The image generalization is implemented through an intervention at the camera model level. We present the camera model design paradigm that abandons the conventional rigidity of the camera model in favor of designing and optimizing the camera model for each application, for each dataset, and for each view. The resulting generalized image is more effective than a conventional image, yet it remains easy to compute, continuous, and non-redundant. We illustrate the benefits of image generalization in a wide range of applications such as focus+context, remote, and multiperspective visualization, visibility computation, rendering acceleration, virtual and augmented reality navigation, and diminished reality.
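One minimal way to think about relaxing the single-viewpoint constraint: treat the camera model as a function from pixel to ray, of which the pinhole is just one instance. The toy sketch below (illustrative only, not any of the speaker's specific camera models) slides the center of projection across image columns, so a single image blends a continuum of viewpoints:

```python
import numpy as np

def pinhole_ray(u, v, width, height, fov_deg=60.0):
    """Conventional planar pinhole camera: every pixel's ray shares one origin."""
    aspect = width / height
    t = np.tan(np.radians(fov_deg) / 2)
    d = np.array([(2 * (u + 0.5) / width - 1) * t * aspect,
                  (1 - 2 * (v + 0.5) / height) * t,
                  -1.0])
    return np.zeros(3), d / np.linalg.norm(d)

def multiperspective_ray(u, v, width, height, left_eye, right_eye, fov_deg=60.0):
    """A toy generalized camera: the center of projection interpolates
    between two eye positions as a function of the image column."""
    _, d = pinhole_ray(u, v, width, height, fov_deg)
    alpha = (u + 0.5) / width          # 0 at leftmost column, ~1 at rightmost
    origin = (1 - alpha) * left_eye + alpha * right_eye
    return origin, d

left, right = np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
o_l, d_l = multiperspective_ray(0, 0, 64, 64, left, right)
o_r, _ = multiperspective_ray(63, 0, 64, 64, left, right)
print(o_l, o_r)   # per-column origins differ: no single viewpoint
```

Because the mapping stays per-pixel and smooth in (u, v), such an image remains cheap to render with ray casting and continuous, while showing surfaces no single pinhole view could capture at once.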

Voicu Popescu is an Associate Professor of Computer Science at Purdue University. He holds a Computer Science Ph.D. from the University of North Carolina at Chapel Hill and a Computer Science B.Sc. from the Technical University of Cluj-Napoca, Romania. His research interests lie at the confluence of computer graphics, visualization, human-computer interaction, and computer vision, with applications in defense, healthcare, and education.