D5.1 Draft plugins for layered depth-image based rendering and satellite-based hole filling

Deliverable D5.1 describes the first rendering plugin. The plugin supports trifocal input material, and its output can be generated in 2D, stereo 3D and autostereoscopic formats. Different filling modes let the user select which source views are used for rendering and whether additional cameras are used to fill disoccluded areas.
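To make the filling modes concrete, the following is a minimal 1-D sketch of depth-image based rendering with hole filling from an additional camera. It is an illustration only, not the plugin's actual implementation; the function names and the single-scanline simplification are assumptions.

```python
import numpy as np

def render_novel_view(colour, disparity, shift=1.0):
    """Forward-warp a 1-D scanline by its disparity map; disoccluded
    pixels (never hit by any source pixel) remain NaN."""
    w = colour.shape[0]
    out = np.full(w, np.nan)
    depth_buf = np.full(w, -np.inf)
    for x in range(w):
        xt = int(round(x + shift * disparity[x]))
        # z-buffer test: larger disparity means nearer, so it wins
        if 0 <= xt < w and disparity[x] > depth_buf[xt]:
            out[xt] = colour[x]
            depth_buf[xt] = disparity[x]
    return out

def fill_holes(rendered, satellite):
    """Fill disoccluded (NaN) pixels from an already-warped satellite
    view, mimicking a filling mode that draws on an extra camera."""
    holes = np.isnan(rendered)
    filled = rendered.copy()
    filled[holes] = satellite[holes]
    return filled
```

With a trifocal rig, the two outer cameras can play the role of the satellite view for holes opening to either side.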

The plugin has been tested with MambaFX on Windows 7, Mistika on SUSE Linux Enterprise 11, Natron on Windows 7 and Ubuntu 14.04, and Nuke on Windows 7. Currently only spatial multiplex is supported as the input format.

With the next release, additional plugins for video-plus-depth and stereo input will be provided.

D4.6 Draft plugin for advanced rotoscoping using depth maps

This deliverable reports the activities carried out in task T4.2 “Advanced rotoscoping in single and multiple views”, as defined in the work plan of work package WP4 “Depth enabled post-production tools”. The expected outcome of this task is an OFX plug-in for the Mocha platform by Imagineer Systems for refining the contours of an object during rotoscoping, taking depth information into account along with the colour information. This deliverable describes the draft OpenFX plugin developed, which is able to refine the contour of an object by searching in a restricted area around an existing rough solution.

In the document we describe the current state of the plug-in, including the core algorithm, the input/output format, the user interface and the installation procedure. Moreover, we present an evaluation of its performance compared to manually processed images.

D4.3 Draft plugin for semi-automatic depth map refinement

Disparity estimation and creation are a crucial part of the 3FLEX workflow, because mistakes made at this stage adversely affect all downstream operations. Automated disparity estimation naturally does not succeed for every pixel of an image: difficulties such as noise, similar-looking objects or occlusions can result in wrong estimates. A semi-automatic plugin helps to identify and correct critical pixels. The depth map correction plugin comprises three steps: detection, identification and correction.

Critical pixels or regions are detected by inspecting the estimated result and the corresponding points across all pictures. The user can see areas in the picture that will cause problems and gets the corresponding areas as a reference. The number of references differs between a stereo workflow and a trifocal workflow.

Inconsistent correspondences between the S3D or H3D images are identified as errors and marked accordingly. The user then has the possibility to correct estimation errors manually by inspecting the correct references.
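A standard way to flag such inconsistent correspondences is a left-right consistency check: a match is trusted only if the round trip from one view to the other and back lands near the starting pixel. This 1-D sketch illustrates the idea; it is a generic technique, not necessarily the plugin's exact criterion, and the sign convention for disparity is an assumption.

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1.0):
    """Flag pixels whose left -> right -> left round trip disagrees
    by more than tol pixels (likely occlusions or wrong estimates).

    Convention assumed here: disp_left[x] maps left-image pixel x to
    x - disp_left[x] in the right image.
    """
    w = disp_left.shape[0]
    bad = np.zeros(w, dtype=bool)
    for x in range(w):
        xr = int(round(x - disp_left[x]))
        if xr < 0 or xr >= w:
            bad[x] = True          # projects outside the other view
        elif abs(disp_left[x] - disp_right[xr]) > tol:
            bad[x] = True          # inconsistent match
    return bad
```

In a trifocal workflow the same check can be run against both outer views, giving the user two references instead of one.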

Even if the technical quality of the disparity maps is sufficient, they may not fulfil artistic requirements in terms of depth distribution and resolution. Therefore, depth map manipulation tools have been developed.

With the depth mapping plugin the user is able to change depth just like colour or contrast in a 2D image. By using a freely definable transfer function together with a scaling and an offset parameter, adjustments can be made easily and quickly.
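The operation described above amounts to remapping each depth value through a curve and then applying a global scale and offset, exactly like a colour grade. A minimal sketch, with an illustrative gamma-style curve that is not part of the plugin:

```python
import numpy as np

def remap_depth(depth, transfer=None, scale=1.0, offset=0.0):
    """Remap a depth map like a colour curve: apply a free transfer
    function, then a global scale and offset."""
    d = depth if transfer is None else transfer(depth)
    return scale * d + offset

# Example: expand the foreground depth range with a square-root curve,
# then compress and shift the result into [0.1, 0.9].
depth = np.linspace(0.0, 1.0, 5)
graded = remap_depth(depth, transfer=lambda d: d ** 0.5,
                     scale=0.8, offset=0.1)
```

With `transfer=None`, `scale=1.0` and `offset=0.0` the map passes through unchanged, which is a convenient neutral default for such a node.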

The depth scribbling plugin allows the generation of a depth map from a few scribbles. These scribbles are made manually with a brush tool. With just a few scribbles the plugin is able to generate a depth map which already leads to a visually pleasing 3D image when used with the standard DIBR (Depth Image Based Rendering) plugin.
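As a toy illustration of spreading sparse scribble values across unlabelled pixels, the following propagates each unknown pixel from its nearest scribbled pixel (1-D for brevity). The deliverable does not specify the actual propagation algorithm; real scribble-to-depth methods typically also weight by colour similarity.

```python
import numpy as np

def propagate_scribbles(scribbles):
    """Fill NaN pixels with the depth value of the nearest scribbled
    pixel. A naive nearest-neighbour sketch of scribble propagation."""
    known = np.flatnonzero(~np.isnan(scribbles))
    out = scribbles.copy()
    for x in np.flatnonzero(np.isnan(scribbles)):
        nearest = known[np.argmin(np.abs(known - x))]
        out[x] = scribbles[nearest]
    return out
```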

D3.2 Draft plugin for spatiotemporal post-processing of video+depth maps and related streams

This deliverable explains the improvements of the previously developed prototype plugins from D3.1 and introduces new plugins for the 3FLEX workflow.

A new plugin for Stereo Registration has been developed, based on a new internal plugin structure that will also be used by the other plugins in the future. Disparity Estimation has been extended for better integration into the 3FLEX workflow. Furthermore, enhancements for better temporal stability were implemented.

A plugin for time-of-flight filtering allows processing of depth data originating from time-of-flight sensors. A new plugin converts disparity to depth and vice versa, allowing both types of information to be integrated into the 3FLEX workflow.
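For rectified stereo, the disparity-depth conversion mentioned above follows the standard relation depth = f · B / d, where f is the focal length in pixels, B the camera baseline and d the disparity in pixels. A sketch with illustrative parameter names (the plugin's actual interface is not specified here):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: depth = f * B / d.
    Units must be consistent: pixels for f and d, metres for B."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Inverse mapping: d = f * B / depth."""
    if depth_m <= 0:
        raise ValueError("depth must be positive")
    return focal_px * baseline_m / depth_m
```

Because the relation is reciprocal, uniform disparity steps translate into non-uniform depth steps, which is one reason a workflow benefits from carrying both representations.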

Plugins now also support Mistika as the host platform.

D4.5 Recorded/Extracted depth maps for VFX in 2D and 3D productions

This deliverable presents the advances in Task 4.6, devoted to the integration of depth and label maps into existing modules that can take advantage of them in the Mistika post-production and MambaFX compositing software. Specifically, the work has focused on the modifications needed to allow colour grading, filtering and compositing to exploit depth information for common VFX tasks in 2D and 3D productions. These extensions are primarily adaptations made on the Mistika and Mamba platforms to support the 3FLEX workflow and data. They include an enhancement of the colour grading module and tools with the inclusion of a new Spatial Keyer Toolset, and the development of a new node, the Depth Grade node, for depth-based compositing purposes.

In this report we detail these extensions and show examples of the new possibilities in VFX tasks enabled by these enhancements. The report also defines the next steps for task T4.6.

D4.4 Draft plugin for entity labelling from colour and depth information

This deliverable reports the activities carried out in task T4.4 “Entity labelling from colour and depth information”, as defined in the work plan of work package WP4 “Depth enabled post-production tools”. The expected outcome of this task is an OFX plug-in for the Mamba/Mistika platforms by SGO for labelling the various segments of an image, taking depth information into account besides the colour texture. This deliverable describes the draft plugin developed, which implements the labelling for a given frame.
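The benefit of joint colour-and-depth labelling can be illustrated with a toy clustering on combined features, where a weight controls how strongly depth separates segments that colour alone cannot. This is an illustration only, not the deliverable's actual algorithm, and all names are assumptions.

```python
import numpy as np

def label_pixels(colour, depth, k=2, depth_weight=1.0, iters=10):
    """Toy k-means labelling on joint (colour, depth) features.
    depth_weight balances how much depth contributes next to colour."""
    feats = np.column_stack(
        [colour, depth_weight * np.asarray(depth)]).astype(float)
    # deterministic init: k centres spread across the pixel order
    centres = feats[np.linspace(0, len(feats) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each pixel to its nearest centre in feature space
        labels = ((feats[:, None, :] - centres[None, :, :]) ** 2
                  ).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = feats[labels == j].mean(axis=0)
    return labels
```

Two objects with nearly identical colour but different depth end up in different segments, which is precisely the case where depth information pays off.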

In the document we describe the current state of the plug-in, including the core algorithm, the input/output format, the user interface and the installation procedure. Moreover, we present an evaluation of its performance compared to manually processed images.

D7.2 Planning for exploitation 1

This deliverable reports the results of the market analysis and monitoring activities and the definition of plans for exploitation and dissemination carried out in the tasks of WP7 during the first 9 months of the 3FLEX project.

The market analysis gives an overview of the post-production market in the EU and worldwide, including a closer look at the Chinese market given its growing importance. It covers current trends and issues both in the market and at a scientific level, complemented by a benchmarking table, and analyses the impact of the 3FLEX results on the industry and on the Consortium. Current trends and challenges show, on the one hand, that 3D has lost some of the interest of past years but remains of interest to the cinema industry and broadcasters, who still demand more cost-effective ways of generating 3D content. This is a clear opportunity for the 3FLEX workflow and tools, which aim precisely to reduce costs in 3D productions while making the process more flexible. On the other hand, the analysis of competitors and scientific trends has not identified any potential threat so far, although several tools have appeared that are starting to exploit depth in post-production processes.

Based on all that information, the first exploitation and business plan, which updates initial considerations stated in the DoW, is presented.

Finally, this deliverable reviews the dissemination actions already carried out and presents the plan for future dissemination events, taking each partner's activities into consideration.

D6.2 Acquisition and Provision of Novel Test Material

The goal of this deliverable is to acquire novel test material using the trifocal camera setup, the image+depth camera setup from the SCENE project, or a combined camera setup, in order to extend the existing dataset described in deliverable D6.1. More specifically, the goal is to acquire missing scenarios or conditions specifically needed for the development and evaluation of the 3FLEX workflow and tools.

The analysis of the existing dataset conducted during the past months has shown that the available test material is representative enough for the first development and evaluation phase of the 3FLEX plugins. The project has therefore focused on the definition of an initial video subset including stereoscopic and trifocal footage.

The 3FLEX consortium will take the opportunity to gain a better understanding of how the new technology works using the existing material. Based on that experience, acquisition of novel test material will be carried out before the beginning of the experimental production in M20. This guarantees that the novel material is optimally suited to the needs of the experimental production and, if needed, provides additional test cases that might be identified throughout the tests during the next months. A description will be provided in an additional deliverable, D6.4.

The SCENE and 3FLEX projects used the final SCENE shoot in July 2014 as an opportunity for collaboration. 3FLEX provided a trifocal camera system to enhance the SCENE setup, which comprised a “Motion SCENE” camera with a time-of-flight sensor and a wide-baseline stereo camera. Several sequences were recorded, providing fully synchronized time-of-flight, stereo and trifocal images. This additional data will enable the 3FLEX project to evaluate and compare the workflow and tools for a wide range of input formats.

D4.2 Draft plugin for automatic clean-plate creation for planar backgrounds

This deliverable reports the activities carried out in task T4.3 “Automatic clean-plate creation for planar backgrounds”, as defined in the work plan of work package WP4 “Depth enabled post-production tools”. The expected outcome of this task is a plug-in for clean-plate creation for planar backgrounds that can take advantage of depth information, targeting the Mocha platform by Imagineer Systems. This deliverable describes the first software module implemented: a stand-alone executable, backed by a library that implements an image inpainting algorithm that works on planar backgrounds.

In the document we introduce the inpainting algorithm that was implemented. We then describe the software and the API of the library that we programmed. In the final sections of the report we present the evaluation of the method and state our conclusions.
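To make the inpainting problem concrete, the following is a naive diffusion baseline that fills masked pixels by iteratively averaging their neighbours. This is deliberately not the planar method the deliverable describes, which can instead copy real texture consistently across the plane; a diffusion fill blurs structure and serves only as a point of comparison.

```python
import numpy as np

def diffuse_inpaint(image, mask, iters=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.
    A naive diffusion baseline, not the planar-background algorithm."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initialisation
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]              # only update the hole
    return out
```

On a textureless region diffusion is adequate, but on a textured planar background it smears detail, which is exactly the failure case that motivates a geometry-aware method.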

D4.1 Draft plugin for automatic clean-plate creation in complex scenes

This deliverable reports the activities carried out in task T4.5 “Automatic clean-plate creation for complex scenes”, as defined in the work plan of work package WP4 “Depth enabled post-production tools”. The expected outcome of this task is a plug-in for clean-plate creation for complex scenes that can take advantage of depth information, targeting the Mistika/Mamba platform by SGO. This deliverable describes the first software module implemented: a stand-alone executable, backed by a library that implements an image inpainting algorithm capable of dealing with piecewise planar backgrounds.

In the document we introduce the inpainting algorithm that was implemented. We then describe the software and the API of the library that we programmed. In the final sections of the report we present the evaluation of the method and state our conclusions.