Georgia Tech to Present Nine Poster Papers at ECCV 2018

Next week, a group of Georgia Tech students and faculty will travel to Munich, Germany, to attend the European Conference on Computer Vision (ECCV) 2018.

More than 700 organizations from industry, academia, and government are represented at the 2018 conference, which is held every two years. Georgia Tech will present nine papers during poster sessions at the premier event and is among the top 3 percent of participating institutions based on accepted research.

Along with presenting several papers, Georgia Tech faculty members have also participated in organizing ECCV 2018. Devi Parikh, Irfan Essa, Dhruv Batra, and Fuxin Li served as area chairs for the event.

“ECCV is an exciting conference to participate in. There’s a lot of good work that gets presented from top computer vision labs in the world, and it is great that Georgia Tech is one of them! It is a great venue to share our latest ideas and hear what others in the research community are thinking about these days,” said Devi Parikh, assistant professor in Georgia Tech’s School of Interactive Computing.

Georgia Tech organized the first Visual Dialog Challenge, designed to find methods for artificial intelligence agents to hold a meaningful dialog with humans in natural, conversational language about visual content. Winners will be announced at the conference.

The conference takes place Sept. 8 through 14 in the heart of Munich at the Gasteig Cultural Center.

To see an interactive visualization of the entire ECCV 2018 program, please click here.

For an interactive visualization of ECCV 2018 by institutions with accepted research, please click here.

An interactive visualization of ECCV 2018 by people and institutions can be viewed here.

Below are the titles of Georgia Tech’s research being presented at the conference.

Georgia Tech at ECCV 2018

Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation

By Zhaoyang Lv*, Georgia Tech; Kihwan Kim, NVIDIA; Alejandro Troccoli, NVIDIA; Deqing Sun, NVIDIA; Jan Kautz, NVIDIA; James Rehg, Georgia Tech

Read our blog post about this paper on the ML@GT blog here.

Multi-object Tracking with Neural Gating Using Bilinear LSTMs

By Chanho Kim*, Georgia Tech; Fuxin Li, Oregon State University; James Rehg, Georgia Tech

In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Vision

By Yin Li*, Carnegie Mellon University; Miao Liu, Georgia Tech; James Rehg, Georgia Tech

Choose Your Neuron: Incorporating Domain Knowledge through Neuron Importance

By Ramprasaath Ramasamy Selvaraju*, Georgia Tech; Prithvijit Chattopadhyay, Georgia Tech; Mohamed Elhoseiny, Facebook; Tilak Sharma, Facebook; Dhruv Batra, Georgia Tech & Facebook AI Research; Devi Parikh, Georgia Tech & Facebook AI Research; Stefan Lee, Georgia Tech

Read our blog post about this paper on the ML@GT blog here.

Visual Coreference Resolution in Visual Dialog using Neural Module Networks

By Satwik Kottur*, Carnegie Mellon University; José M. F. Moura, Carnegie Mellon University; Devi Parikh, Georgia Tech & Facebook AI Research; Dhruv Batra, Georgia Tech & Facebook AI Research; Marcus Rohrbach, Facebook AI Research

Graph R-CNN for Scene Graph Generation

By Jianwei Yang*, Georgia Tech; Jiasen Lu, Georgia Tech; Stefan Lee, Georgia Tech; Dhruv Batra, Georgia Tech & Facebook AI Research; Devi Parikh, Georgia Tech & Facebook AI Research

Read our blog post about this paper on the ML@GT blog here.

SEAL: A Framework Towards Simultaneous Edge Alignment and Learning

By Zhiding Yu*, NVIDIA; Weiyang Liu, Georgia Tech; Yang Zou, Carnegie Mellon University; Chen Feng, Mitsubishi Electric Research Laboratories (MERL); Srikumar Ramalingam, University of Utah; B. V. K. Vijaya Kumar, Carnegie Mellon University; Jan Kautz, NVIDIA

SwapNet: Image Based Garment Transfer

By Amit Raj, Georgia Tech; Patsorn Sangkloy, Georgia Tech; Huiwen Chang, Princeton University; James Hays, Georgia Tech; Duygu Ceylan, Adobe; Jingwan Lu, Adobe

Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency

By Eunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, and James M. Rehg, Georgia Tech

Related Media


  • ECCV 2018 will be held in Munich, Germany

For More Information Contact

Allie McFadden

Communications Officer

allie.mcfadden@cc.gatech.edu