3D recreation of live events from mobile videos

Ever been to a rock concert and wished you could experience it again, just as it was… or better? This may be possible soon thanks to an EU-funded research project, SCENENET, which is developing the technology to combine mobile video feeds from different spectators around the arena to reconstruct the event in 3D.

Chen and Nizan Sagiv had the idea when they were at a Depeche Mode concert in Tel Aviv five years ago. "While I was busy looking at the show, Nizan was watching the crowds," explained Chen, SCENENET project coordinator. "He could not help noticing the huge number of faint lights from mobile phone screens. People were taking videos of the show. Nizan thought that combining all the videos taken by individuals into a synergetic, enhanced and possibly 3D video could be an interesting idea. We discussed the concept for many months, but it looked too futuristic, risky and complicated."

Israel–Europe collaboration

They went for advice to ISERD, the Israel-Europe R&D Directorate, and contacted Prof. Peter Maass, of the University of Bremen in Germany, and Prof. Pierre Vandergheynst, of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, with whom Chen had worked on an earlier 7th Framework Programme project, UNLOCX.

The result is the SCENENET project, awarded € 1.33 million by the European Commission, and coordinated by Chen's and Nizan's Ra'anana-based company, SagivTech, specialists in computer vision and parallel computing.

SCENENET, which is funded by the Future and Emerging Technologies (FET) scheme, just as UNLOCX was, runs until January 2016 and consists of four European partners: the University of Bremen, Steinbeis Innovation and European Research Services, all in Germany, and Switzerland's EPFL.

The first year of the project has seen the team develop the mobile infrastructure for the video feeds, a mechanism for tagging them, and their transmission to a cloud server. They’ve also developed basic tools for a human-computer interface that will allow users to view the 3D video from any vantage point 'in the arena' and edit the film themselves. This, they believe, will help create online communities to share the content, reliving the concert experience together. With this in mind, the partners are to study privacy and intellectual property rights issues during the next two years of the project.
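To make the capture-and-upload stage described above more concrete, here is a minimal sketch, in Python, of how a tagged clip might be pushed from a handset to a cloud server. The endpoint URL, field names and metadata scheme are hypothetical illustrations, not details taken from the project.

```python
import json
import time
import requests  # widely used HTTP client; the endpoint below is hypothetical


def upload_clip(clip_path: str, device_id: str, lat: float, lon: float,
                server_url: str = "https://example.org/scenenet/upload"):
    """Tag a recorded clip with basic metadata and push it to a cloud server.

    The tagging fields (device id, timestamp, GPS position) are illustrative
    assumptions; the project's actual tagging mechanism is not described here.
    """
    metadata = {
        "device_id": device_id,
        "timestamp": time.time(),  # capture time, used later to align feeds
        "latitude": lat,           # rough position of the spectator in the arena
        "longitude": lon,
    }
    with open(clip_path, "rb") as clip:
        response = requests.post(
            server_url,
            data={"metadata": json.dumps(metadata)},
            files={"video": clip},
        )
    response.raise_for_status()
    return response.status_code
```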

"We have at the end of the first year, and sooner than expected, built the entire SCENENET pipeline based on current state-of-the-art components," said Chen. Indeed, the accelerated computer vision algorithms they are creating for mobiles is a pioneering effort worldwide and leading chipset manufacturers are following the project closely.

SCENENET involves several technological challenges: on-device pre-processing that requires immense computing power, efficient transmission of the video streams, development of accurate and fast methods for registration between the video streams, and the 3D reconstruction. All of these tasks have to run at near real-time rates.
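As an illustration of the registration step, the sketch below shows a common feature-based approach to aligning two frames taken by different spectators, using OpenCV's ORB features and a RANSAC-fitted homography. This is a generic technique offered only as an example, not the project's actual algorithm, and it ignores the on-device and real-time constraints mentioned above.

```python
import cv2
import numpy as np


def register_frames(frame_a, frame_b, min_matches=10):
    """Estimate a homography mapping frame_a onto frame_b.

    Generic feature-based registration (ORB + brute-force matching + RANSAC);
    SCENENET's own registration method may differ.
    """
    # ORB works on single-channel images, so convert colour frames first.
    if frame_a.ndim == 3:
        frame_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    if frame_b.ndim == 3:
        frame_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards mismatches caused by crowd motion, lighting changes, etc.
    homography, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    return homography
```

A full multi-camera 3D reconstruction would go further, estimating each phone's pose and triangulating points across many such registered views, but the pairwise alignment above is the basic building block.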

"We believe that the various components that make up SCENENET, e.g. registration of images and 3D reconstruction, have great potential for mobile computing and cloud computing, Thus SCENENET offers a huge technological breakthrough – in its whole and also via each of its components," she added.

Moving beyond concerts

Myriad possible uses for SCENENET are emerging as the project develops. Rights and privacy concerns permitting, the technology might also be used to recreate other events in 3D, such as breaking news or sports, or in the tourism or surveillance sectors. The partners are also looking at shooting static as well as moving objects from various angles to create instructions that can be sent to 3D printers. The mobile-cloud server model could also be used for a host of other applications, say the researchers. But for the moment they are concentrating on music fans.

"SCENENET revolves around mobile cameras and 3D vision. The invasion of mobile cameras and their continuously improved quality has meant we are flooded with images we want to enhance and show off. Many devices that 'understand' visual inputs are being developed – Google Glass, for instance - where most of this work is based on image processing and computer vision. 3D vision is becoming more important for better visualization of the world on one hand, and easier analysis of the world on the other hand," the coordinator explained.

SCENENET is a good example of collaboration in research between the EU and Israel. Some 1 600 Israeli scientists have benefited from the 7th Framework Programme in over 800 projects funded by the EC to the tune of € 634 million, in fields ranging from cutting-edge research and ICT through nanotechnology to energy and health.

"This is an opportunity to be a part of the thriving European scientific and industrial communities, and collaborate with leading academy partners and companies in Europe," Chen summarised.

SCENENET received research funding under the European Union's Seventh Framework Programme (FP7).

Link to project on CORDIS:

- FP7 on CORDIS

Link to project's website:

- SCENENET website

Other links:

- European Commission's Digital Agenda website

- "Uncertainty principles versus localization properties"

- HORIZON 2020 - Future and Emerging Technologies

published: 2015-01-21