Patent abstract:
Method of encoding a 3D video data signal, method of decoding a 3D video data signal, encoder for encoding a 3D video data signal, decoder for decoding a 3D video data signal, computer program product for encoding a video data signal, computer program product for decoding a video data signal, 3D video data signal, and digital data carrier. A method of encoding a video data signal (15) is provided, the method comprising providing at least a first image (21) of a scene (100) as seen from a first panorama, providing rendering information (22) to allow the generation of at least one rendered image of the scene (100) as seen from a rendering panorama, providing a preferred direction indicator (23) defining a preferred orientation of the rendering panorama relative to the first panorama, and generating (24) the video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator.
Publication number: BR112012007115A2
Application number: R112012007115-7
Filing date: 2010-09-21
Publication date: 2020-02-27
Inventors: Bernardus Maria Klein Gunnewiek Reinier; Hendrikus Alfonsus Bruls Wilhelmus; Luc E. Vandewalle Patrick
Applicant: Koninklijke Philips Electronics N.V.
Primary IPC class:
Patent description:

METHOD OF ENCODING A 3D VIDEO DATA SIGNAL, METHOD OF DECODING A 3D VIDEO DATA SIGNAL, ENCODER TO ENCODE A 3D VIDEO DATA SIGNAL, DECODER TO DECODE A 3D VIDEO DATA SIGNAL,
COMPUTER PROGRAM PRODUCT TO ENCODE A VIDEO DATA SIGNAL, COMPUTER PROGRAM PRODUCT TO DECODE A VIDEO DATA SIGNAL, 3D VIDEO DATA SIGNAL, AND, DIGITAL DATA CARRIER
FIELD OF THE INVENTION
This invention relates to a method of encoding a video data signal, the method comprising providing at least a first image of a scene as seen from a first panorama, providing rendering information to allow the generation of at least one rendered image of the scene as seen from a rendering panorama, and generating the video data signal comprising encoded data representing the first image and the rendering information.
This invention further relates to a method of decoding the video data signal, an encoder, a decoder, computer program products for encoding or decoding, the video data signal itself, and a digital data carrier.
BACKGROUND OF THE INVENTION
In the emerging three-dimensional (3D) video technique, there are several methods for encoding a third dimension in the video data signal. This is usually done by providing a viewer's eyes with different views of the scene being watched. A popular approach to representing 3D video is to use one or more two-dimensional (2D) images plus a depth representation providing information on the third dimension. This approach also allows 2D images to be generated from panoramas and viewing angles other than those of the 2D images included in the 3D video signal. Such an approach provides a number of advantages, including allowing further views to be generated with relatively low complexity and providing an efficient data representation, thereby reducing, for example, the storage and communication resource requirements for 3D video signals. Preferably, the video data is extended with data that is not visible from the available panoramas but becomes visible from a slightly different panorama. Such data is referred to as occlusion or background data. In practice, the occlusion data is generated from multi-view data obtained by capturing a scene with multiple cameras at different panoramas.
It is a problem with the approaches described above that the availability of data to reconstruct de-occluded objects in newly generated views may differ from frame to frame, and even within a frame. As a result, the quality of the images generated for different panoramas may vary.
OBJECTIVE OF THE INVENTION
It is an object of the invention to provide a method for encoding a video data signal as described in the opening paragraph which allows higher-quality images to be generated from different panoramas.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, this object is achieved by providing a method of encoding a video data signal, the method comprising providing at least a first image of a scene as seen from a first panorama, providing rendering information to allow the generation of at least one rendered image of the scene as seen from a rendering panorama, providing a preferred direction indicator defining a preferred orientation of the rendering panorama relative to the first panorama, and generating the video data signal comprising encoded data representing the first image, the rendering information and the preferred direction indicator.
As explained above, the quality of images generated from different panoramas is related to the availability of the data necessary for the reconstruction of de-occluded objects. Consider the situation where data is available for shifting the viewpoint to the left, but not for shifting the viewpoint to the right. Consequently, shifting the panorama to the left can lead to a generated image of different quality than shifting the panorama to the right. A similar difference in quality can occur when insufficient or no de-occlusion information is available to fill the de-occluded areas. In such a case the de-occluded areas can be filled using so-called hole-filling algorithms. Generally, such algorithms interpolate information from the direct proximity of the de-occluded area. Consequently, shifting the panorama to the left can lead to a generated image of different quality than shifting the panorama to the right.
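As a minimal illustration of such hole filling, de-occluded pixels in a single scanline can be filled by interpolating from the nearest known neighbours (a one-dimensional sketch for illustration only; the function `fill_holes_row` and its inputs are invented here and are not the patent's algorithm):

```python
import numpy as np

def fill_holes_row(row, hole):
    """Fill de-occluded pixels in one scanline by linear interpolation
    from the nearest valid neighbours (basic hole-filling sketch)."""
    valid = np.where(~hole)[0]   # indices of known pixels
    holes = np.where(hole)[0]    # indices to be filled
    filled = row.astype(float).copy()
    # np.interp clamps to the edge values for holes outside the valid range
    filled[holes] = np.interp(holes, valid, row[valid].astype(float))
    return filled

row = np.array([10, 20, 0, 0, 0, 50], dtype=float)
hole = np.array([False, False, True, True, True, False])
print(fill_holes_row(row, hole))  # → [10. 20. 27.5 35. 42.5 50.]
```

Real hole-filling operates in 2D and typically prefers background (far-depth) neighbours, but the interpolation-from-proximity idea is the same.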
Such differences in quality are influenced not only by the availability of the necessary data, but also by the size and nature of the area being de-occluded when shifting the panorama. As a result, the quality of a 3D video or image may vary depending on the newly selected panorama. It may, for example, make a difference whether a new view is generated to the left or to the right of the panorama of the first image already available.
According to the invention, the solution is to generate a preferred direction indicator and include it in the video signal. The preferred direction indicator defines a preferred orientation of the rendering panorama for an additional view relative to the original panorama of the image already included in the video signal.
When the video signal is decoded, the preferred direction indicator can be used to select the rendering panorama and generate the rendered image of the scene from the selected panorama.
The video data signal may comprise a preferred direction indicator for each frame, each group of frames, each scene, or even for an entire video sequence. Encoding such information on a per-frame basis, rather than at a coarser granularity, allows random access, for example to aid playback control. As the preferred direction indicator is often constant over several frames, and as the size of the encoded video signal is typically relevant, the number of duplicate indicators can be reduced by instead encoding the information on a per-group-of-frames basis. An even more efficient encoding is obtained by encoding the preferred direction on a per-scene basis; as the preferred rendering direction is preferably kept the same throughout a scene, this also ensures continuity within the scene.
Optionally, when the size of the encoded signal is less critical, the information can be encoded at frame level, group-of-frames level and scene level alike, as long as all indicators are set in accordance with one another.
As changes in the choice of the rendering panorama can affect the perceived continuity of the rendered content, the preferred direction indicator is preferably kept constant over several frames. It can, for example, be kept constant over a group of frames or, alternatively, over a scene. Note that even when the preferred direction indicator is kept constant over several frames, it can still be advantageous to encode the constant preferred direction indicator at a finer granularity than strictly necessary, so as to facilitate random access.
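As a hypothetical illustration of this trade-off, a per-scene indicator that is constant over many frames can still be repeated in every frame record, so that a decoder entering the stream at an arbitrary frame immediately recovers it (all field and function names here are invented for the sketch, not taken from the patent):

```python
# Hypothetical sketch: repeating a per-scene preferred-direction indicator
# in every frame record to allow random access mid-scene.
LEFT, RIGHT = 0, 1

def encode_scene(frames, preferred_direction):
    """Attach the (constant) scene-level indicator to each frame record."""
    return [{"frame": f, "pref_dir": preferred_direction} for f in frames]

def decode_from(stream, start):
    """Random access: the indicator is recoverable from any entry point."""
    return stream[start]["pref_dir"]

stream = encode_scene(frames=["f0", "f1", "f2", "f3"], preferred_direction=LEFT)
print(decode_from(stream, start=2))  # → 0 (LEFT), even when joining mid-scene
```

Signalling the indicator once per scene instead would shrink the stream slightly, at the cost of a decoder having to locate the scene header first.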
The preferred orientation can be left, right, up, down, or any combination of these directions. In addition to an orientation, a preferred distance or a maximum preferred distance can be provided with the preferred direction indication. If sufficient information about the occluded objects and/or the depth values of the objects in the first image is available, it may be possible to generate multiple additional high-quality views from panoramas that lie further away from the original panorama.
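A possible payload for such an indicator could be sketched as follows (a hypothetical layout, not the signal format defined by the patent; the single-bit form corresponds to the simplest left/right case, and the optional fields to the extensions mentioned above):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical preferred-direction payload (field names invented):
# one bit for left/right in the simplest case, optionally extended with
# an up/down component and a maximum preferred panorama offset.
@dataclass
class PreferredDirection:
    horizontal: int                      # 0 = left, 1 = right
    vertical: Optional[int] = None       # optional: 0 = down, 1 = up
    max_offset: Optional[float] = None   # optional: max preferred distance

    def to_bits(self) -> int:
        """Pack the minimal single-bit form."""
        return self.horizontal & 1

ind = PreferredDirection(horizontal=0, max_offset=0.05)
print(ind.to_bits())  # → 0, i.e. render additional views to the left
```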
The rendering information can, for example, comprise occlusion data representing background objects being occluded by foreground objects in the first image, a depth map providing the depth values of the objects in the first image, or transparency data for those objects.
The preferred direction indicator indicates for which possible rendering panorama the best rendering information is available. For more information regarding the rendering of layered depth images, see, for example, International Patent Application WO2007/063477, incorporated herein by reference. For more information regarding hole-filling algorithms for use in filling de-occluded regions when rendering layered depth images, see, for example, WO2007/099465, incorporated herein by reference.
According to another aspect of the invention, a method of decoding a video data signal is provided, the video data signal comprising encoded data representing a first image of a scene as seen from a first panorama, rendering information to allow the generation of at least one rendered image of the scene as seen from a rendering panorama, and a preferred direction indicator defining a preferred orientation of the rendering panorama relative to the first panorama. The decoding method comprises receiving the video data signal, selecting the rendering panorama in dependence on the preferred direction indicator, and generating the rendered image of the scene as seen from the selected rendering panorama.
These and other aspects of the invention are apparent from, and will be elucidated with reference to, the embodiments described below.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
Fig. 1 shows a block diagram of a system for encoding video data according to the invention;
Fig. 2 shows a flow diagram of a coding method according to the invention;
Fig. 3 shows a block diagram of a system for decoding video data according to the invention, and
Fig. 4 shows a flow diagram of a decoding method according to the invention.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 shows a block diagram of a system for encoding video data according to the invention. The system comprises two digital video cameras 11, 12 and an encoder 10. The first camera 11 and the second camera 12 record the same scene 100, but from a slightly different position and also from a slightly different angle. The digital video signals recorded by both video cameras 11, 12 are sent to the encoder 10. The encoder may, for example, be part of a dedicated encoding box, a video card in a computer, or a software function to be executed by a general purpose microprocessor. Alternatively, the video cameras 11, 12 are analog video cameras and the analog video signals are converted to digital video signals before being supplied as input to the encoder 10. If the video cameras are coupled to the encoder 10, the encoding may take place while scene 100 is being recorded. It is also possible to record scene 100 first and provide the recorded video data to the encoder 10 later.
The encoder 10 receives the digital video data from the video cameras 11, 12, either directly or indirectly, and combines both digital video signals into one 3D video signal 15. Note that both video cameras 11, 12 may be combined into a single 3D video camera. It is also possible to use more than two video cameras to capture scene 100 from more than two panoramas.
Figure 2 shows a flow diagram of an encoding method according to the invention. This encoding method can be performed by the encoder 10 of the system of figure 1. The encoding method uses the digital video data recorded by the cameras 11, 12 and provides a video data signal 15 according to the invention. In the base image providing step 21, at least a first image of the scene is provided for inclusion in the video data signal 15. This base image can be standard 2D video data from one of the two cameras 11, 12. The encoder 10 may also use two base images: one from the first camera 11 and one from the second camera 12. From these base images, the color values of all pixels in each frame of the recorded video can be derived. The base images represent the scene at a certain point in time as seen from a specific panorama. In the following, this specific panorama will be called the base panorama.
In the 3D enabling step 22, the video data resulting from the video cameras 11, 12 is used to add information to the base image. This added information should allow a decoder to generate a rendered image of the same scene from a different panorama. In the following, this added information is called rendering information. The rendering information may, for example, comprise depth information or transparency values of the objects in the base image. The rendering information may also describe objects that are blocked from view from the base panorama by objects visible in the base image. The encoder uses known, preferably standardized, methods to derive this rendering information from the recorded regular video data.
In the direction indicating step 23, the encoder 10 also adds a preferred direction indicator to the rendering information. The preferred direction indicator defines a preferred orientation of the rendering panorama for an additional view relative to the base panorama. When the video signal is later decoded, the preferred direction indicator can be used to select the rendering panorama and generate the rendered image of scene 100 from the selected panorama. As described above, the quality of a 3D video or image may vary depending on the newly selected panorama. It may, for example, make a difference whether a new view is generated to the left or to the right of the panorama of the first image already available. The preferred direction indicator that is added to the rendering information can, for example, be a single bit indicating a left or right direction. A more advanced preferred direction indicator may also indicate an up or down direction and/or a maximum or preferred distance of the new panorama relative to the base panorama.
Alternatively, the preferred direction indicator can provide a preferred orientation and/or distance for multiple rendering panoramas relative to the first panorama. For example, it may be preferable to generate two rendering panoramas on the same side of the first panorama, or one on each side. The preferred position(s) of the rendering panorama(s) relative to the first panorama may depend on the distance between those two panoramas. For example, it may be better to render an image from a panorama on the left side of the first panorama when both panoramas are close together, while for greater distances both panoramas on the left side of the first panorama may be more suitable for generating the rendered views.
The decision as to which particular direction is most favorable can be made automatically. For example, when encoding a stereo pair as a layered depth image, using an image, occlusion and depth representation, it is possible to use either the left or the right image as the layered depth image and to reconstruct the other image based on it. Subsequently, a difference metric can be computed for both alternatives, and the preferred encoding, and with it the preferred direction, can be determined based on these differences.
Preferably, the differences are weighted based on a model of the human visual perception system. Alternatively, in particular in a professional setting, the preferred direction could be selected based on user interaction.
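The automatic decision described above can be sketched as follows: reconstruct each view from the other candidate anchor image, compare the reconstruction errors, and keep the direction belonging to the better anchor (a toy sketch in which `warp_view` stands in for a real depth-image-based renderer; all names and the simple mean-squared-error metric are invented for this illustration):

```python
import numpy as np

def warp_view(anchor, depth, direction):
    """Stand-in for depth-image-based rendering: shift each pixel by a
    disparity derived from its depth (toy model, not a real renderer)."""
    h, w = anchor.shape
    out = np.zeros_like(anchor)
    for y in range(h):
        for x in range(w):
            dx = direction * int(depth[y, x])  # +1 shifts right, -1 left
            if 0 <= x + dx < w:
                out[y, x + dx] = anchor[y, x]
    return out

def pick_preferred_direction(left, right, depth_left, depth_right):
    """Reconstruct the right view from the left image and vice versa;
    the anchor with the lower error determines the preferred direction."""
    err_from_left = np.mean((warp_view(left, depth_left, +1) - right) ** 2)
    err_from_right = np.mean((warp_view(right, depth_right, -1) - left) ** 2)
    return "right" if err_from_left <= err_from_right else "left"
```

A production implementation would replace the mean-squared error with a perceptually weighted metric, in line with the preference stated above.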
In the signal generating step 24, the information provided in the previous steps 21, 22, 23 is used to generate a video data signal 15 according to the invention. The video data signal 15 represents at least the first image, the rendering information and the preferred direction indicator. A preferred direction indicator can be provided for each frame, for a group of frames, for a complete scene, or even for a complete video. Changing the position of the rendering panorama during a scene can have a negative influence on the perceived 3D image quality, but may, on the other hand, be necessary if the availability of rendering information for different directions changes considerably.
Figure 3 shows a block diagram of a system for decoding video data according to the invention. The system comprises a decoder 30 for receiving the video data signal 15 and converting it into a display signal suitable for display on a display 31. The video data signal 15 may reach the decoder 30 as part of a signal transmitted, for example, via cable or satellite. The video data signal 15 may also be provided on demand, for example, via the Internet or via a video-on-demand service. Alternatively, the video data signal 15 is provided on a digital data carrier such as a DVD or Blu-ray disc.
The display 31 is capable of providing a 3D presentation of the scene 100 that has been captured and encoded by the encoder 10 of the system of figure 1. The display 31 may comprise the decoder 30 or may be coupled to the decoder 30. For example, the decoder 30 may be part of a 3D video receiver to be coupled to one or more normal computer or television displays. Preferably, the display is a dedicated 3D display 31 capable of providing different views to different eyes of a viewer.
Figure 4 shows a flow diagram of a decoding method as can be performed by the decoder 30 of figure 3. In the video data receiving step 41, the video data signal 15 encoded by the encoder 10 is received at an input of the decoder 30. The received video data signal 15 comprises encoded data representing at least the first image of the scene 100, the rendering information and the preferred direction indicator.
In the additional panorama selecting step 42, the preferred direction indicator is used to select at least one additional panorama for a respective additional view. In the additional view rendering step 43, one or more individual views from the selected panorama or panoramas are generated. In the display step 44, two or more views from different panoramas can then be provided to the display 31 to show the scene 100 in 3D.
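Steps 41 to 44 can be outlined schematically: unpack the signal, offset the rendering panorama in the direction given by the indicator, and synthesize the additional view (all field and function names are invented for this outline; `render_from` is a placeholder for real view synthesis):

```python
# Schematic decoder flow for steps 41-44 (names invented for illustration).
LEFT, RIGHT = -1, +1

def render_from(first_image, rendering_info, panorama_offset):
    """Placeholder for view synthesis from image + rendering information."""
    return {"view_at": panorama_offset, "based_on": first_image}

def decode(signal, baseline=0.05):
    # Step 41: receive the signal and unpack its three components.
    image, info, pref_dir = signal["image"], signal["info"], signal["pref_dir"]
    # Step 42: select the rendering panorama in the preferred direction.
    offset = pref_dir * baseline
    # Step 43: generate the rendered view (step 44 would send both views
    # from the different panoramas to the 3D display).
    return image, render_from(image, info, offset)

base, extra = decode({"image": "frame0", "info": "depth0", "pref_dir": LEFT})
print(extra["view_at"])  # → -0.05, an additional view to the left
```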
It will be appreciated that the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program can be in the form of source code, object code, a code intermediate between source code and object code, such as a partially compiled form, or any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, program code implementing the functionality of the method or system according to the invention may be subdivided into one or more subroutines. Many different ways of distributing the functionality among these subroutines will be apparent to the person skilled in the art. The subroutines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (for example, Java interpreter instructions). Alternatively, one or more or all of the subroutines may be stored in at least one external library file and linked with a main program either statically or dynamically, for example at run time. The main program contains at least one call to at least one of the subroutines. The subroutines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each of the means of at least one of the systems and/or products set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program can be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example a CD-ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disk or a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via an electrical or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or to be used in the performance of, the relevant method.
It should be noted that the embodiments described above illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Claims:
Claims (15)
[1]
1. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), characterized by the method comprising:
providing at least a first image (21) of a scene (100) as seen from a first panorama, providing rendering information (22) to allow a decoder to generate at least one rendered image of the scene (100) as seen from a rendering panorama being different from the first panorama, providing a preferred direction indicator (23) defining a preferred orientation of the rendering panorama relative to the first panorama, and generating (24) the 3D video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator.
[2]
2. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to claim 1, characterized in that the preferred direction indicator comprises a single bit defining whether the preferred orientation of the rendering panorama relative to the first panorama is to the left or to the right of the first panorama.
[3]
3. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15) according to claim 1, characterized in that a preferred direction indicator is encoded in the 3D video data signal (15) using at least one of the following options:
- a preferred direction indicator is coded for each frame;
- a preferred direction indicator is coded for each group of frames; and
- a preferred direction indicator is coded for each scene.
[4]
4. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to claim 3, characterized in that the value of the preferred direction indicator is constant for one of:
- a group of frames; and
- a scene.
[5]
5. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to any one of claims 1 to 4, characterized in that the preferred orientation is dependent on a distance between the first panorama and the rendering panorama.
[6]
6. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to any one of claims 1 to 4, characterized in that the rendering information comprises depth indication values for pixels in the first image.
[7]
7. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to any one of claims 1 to 4, characterized in that the rendering information comprises alpha values for pixels in the first image, alpha values indicating a transparency of the respective pixels.
[8]
8. METHOD OF ENCODING A 3D VIDEO DATA SIGNAL (15), according to any one of claims 1 to 4, characterized in that the rendering information comprises occlusion data representing data being occluded from the first panorama.
[9]
9. METHOD OF DECODING A 3D VIDEO DATA SIGNAL (15), characterized in that the 3D video data signal (15) comprises encoded data representing a first image of a scene (100) as seen from a first panorama, rendering information to allow the generation of at least one rendered image of the scene (100) as seen from a rendering panorama being different from the first panorama, and a preferred direction indicator defining a preferred orientation of the rendering panorama relative to the first panorama, the method comprising:
receiving the video data signal (41), selecting (42) the rendering panorama in dependence on the preferred direction indicator, and generating (43) the rendered image of the scene (100) as seen from the selected rendering panorama.
[10]
10. ENCODER (10) TO ENCODE A 3D VIDEO DATA SIGNAL (15), characterized in that the encoder comprises:
means for providing at least a first image of a scene (100) as seen from a first panorama, rendering information to enable a decoder to generate a rendered image of the scene (100) as seen from a rendering panorama being different from the first panorama, and a preferred direction indicator defining a preferred orientation of the rendering panorama relative to the first panorama, means for generating the 3D video data signal (15) comprising encoded data representing the first image, the rendering information and the preferred direction indicator, and an output for providing the 3D video data signal (15).
[11]
11. DECODER (30) FOR DECODING A 3D VIDEO DATA SIGNAL (15), characterized in that the decoder (30) comprises:
an input for receiving a 3D video data signal (15), the 3D video data signal (15) comprising encoded data representing a first image of a scene (100) as seen from a first panorama, rendering information to allow the generation of a rendered image of the scene (100) as seen from a rendering panorama being different from the first panorama, and a preferred direction indicator defining a preferred orientation of the rendering panorama relative to the first panorama, means for selecting, in dependence on the preferred direction indicator, the rendering panorama, means for generating the rendered image of the scene (100) as seen from the selected rendering panorama, and an output for providing the rendered image.
[12]
12. COMPUTER PROGRAM PRODUCT FOR ENCODING A VIDEO DATA SIGNAL, characterized in that the program is operative to cause a processor to perform the method as defined in claim 1.
[13]
13. COMPUTER PROGRAM PRODUCT FOR DECODING A VIDEO DATA SIGNAL, characterized in that the program is operative to cause a processor to perform the method as defined in claim 9.
[14]
14. 3D VIDEO DATA SIGNAL (15), characterized by comprising encoded data representing:
at least a first image of a scene (100) as seen from a first panorama, rendering information to allow a decoder to generate a rendered image of the scene (100) as seen from a rendering panorama being different from the first panorama, and a preferred direction indicator, defining a preferred orientation of the rendering panorama relative to the first panorama.
[15]
15. DIGITAL DATA CARRIER, characterized in that a 3D video data signal (15) as defined in claim 14 is encoded thereon.
Similar technologies:
Publication number | Publication date | Patent title
BR112012007115A2|2020-02-27|METHOD OF ENCODING A 3D VIDEO DATA SIGNAL, METHOD OF DECODING A 3D VIDEO SIGNAL, ENCODER FOR ENCODING A 3D VIDEO DATA SIGNAL, DECODER FOR DECODING A 3D VIDEO DATA SIGNAL, COMPUTER PROGRAM PRODUCT FOR PRODUCT ENCODE A VIDEO DATA SIGNAL, COMPUTER PROGRAM PRODUCT TO DECODE A VIDEO SIGNAL, 3D VIDEO DATA SIGNAL, AND DIGITAL DATA HOLDER
JP5567562B2|2014-08-06|Versatile 3D image format
JP5173028B2|2013-03-27|Method and apparatus for providing a layered depth model of a scene and signal having a layered depth model of a scene
JP4879326B2|2012-02-22|System and method for synthesizing a three-dimensional image
ES2700227T3|2019-02-14|Entry points for 3D playback mode
BRPI0911014B1|2021-08-17|METHOD OF CREATING A THREE-DIMENSIONAL IMAGE SIGNAL FOR RENDING ON A DISPLAY, DEVICE FOR CREATING A THREE-DIMENSIONAL IMAGE SIGNAL FOR RENDING ON A DISPLAY, METHOD OF PROCESSING A THREE-DIMENSIONAL IMAGE SIGNAL, AND DEVICE FOR PROCESSING A THREE-DIMENSIONAL IMAGE
BRPI1005691B1|2021-03-09|method of combining three-dimensional image data [3d] and auxiliary graphic data, information carrier comprising three-dimensional image data [3d] and auxiliary graphic data, 3d generation device to combine three-dimensional image data [3d] and auxiliary graphic data , 3D display device to combine three-dimensional image data [3d] and auxiliary graphic data
US20110149037A1|2011-06-23|Method and system for encoding a 3D video signal, encoder for encoding a 3-D video signal, encoded 3D video signal, method and system for decoding a 3D video signal, decoder for decoding a 3D video signal.
TW201123841A|2011-07-01|Recording medium, playback device, integrated circuit
TW201842765A|2018-12-01|Method and apparatus for mapping virtual-reality image to a segmented sphere projection format
RU2632426C2|2017-10-04|Auxiliary depth data
EP2875636A1|2015-05-27|Metadata for depth filtering
JP6231125B2|2017-11-15|Method for encoding a video data signal for use with a multi-view stereoscopic display device
EP2437501A2|2012-04-04|Image-processing method and apparatus
BR112019027116A2|2020-07-07|apparatus for generating an image, apparatus for generating an image signal, method for generating an image, method for generating an image signal and image signal
TW201921918A|2019-06-01|Image processing device and file generation device
EP2685730A1|2014-01-15|Playback device, playback method, and program
KR20220011180A|2022-01-27|Method, apparatus and computer program for volumetric video encoding and decoding
Patent family:
Publication number | Publication date
WO2011039679A1|2011-04-07|
KR101727094B1|2017-04-17|
CN102549507B|2014-08-20|
KR20120093243A|2012-08-22|
US9167226B2|2015-10-20|
JP5859441B2|2016-02-10|
EP2483750B1|2018-09-12|
CN102549507A|2012-07-04|
RU2551789C2|2015-05-27|
RU2012117572A|2013-11-10|
JP2013507026A|2013-02-28|
US20120188341A1|2012-07-26|
EP2483750A1|2012-08-08|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

US6327381B1|1994-12-29|2001-12-04|Worldscape, Llc|Image transformation and synthesis methods|
US6088006A|1995-12-20|2000-07-11|Olympus Optical Co., Ltd.|Stereoscopic image generating system for substantially matching visual range with vergence distance|
US6222551B1|1999-01-13|2001-04-24|International Business Machines Corporation|Methods and apparatus for providing 3D viewpoint selection in a server/client arrangement|
RU2237283C2|2001-11-27|2004-09-27|Самсунг Электроникс Ко., Лтд.|Device and method for presenting three-dimensional object on basis of images having depth|
JP4174001B2|2002-09-27|2008-10-29|シャープ株式会社|Stereoscopic image display apparatus, recording method, and transmission method|
EP1431919B1|2002-12-05|2010-03-03|Samsung Electronics Co., Ltd.|Method and apparatus for encoding and decoding three-dimensional object data by using octrees|
JP4188968B2|2003-01-20|2008-12-03|Sanyo Electric Co., Ltd.|Stereoscopic video providing method and stereoscopic video display device|
JP4324435B2|2003-04-18|2009-09-02|Sanyo Electric Co., Ltd.|Stereoscopic video providing method and stereoscopic video display device|
US20060200744A1|2003-12-08|2006-09-07|Adrian Bourke|Distributing and displaying still photos in a multimedia distribution system|
GB0415223D0|2004-07-07|2004-08-11|Sensornet Ltd|Intervention rod|
JP2006041811A|2004-07-26|2006-02-09|Kddi Corp|Free visual point picture streaming method|
EP1686554A3|2005-01-31|2008-06-18|Canon Kabushiki Kaisha|Virtual space generating system, image processing apparatus and information processing method|
EP1934945A4|2005-10-11|2016-01-20|Apple Inc|Method and system for object reconstruction|
KR100714672B1|2005-11-09|2007-05-07|Samsung Electronics Co., Ltd.|Method for depth based rendering by using splats and system of enabling the method|
US8094928B2|2005-11-14|2012-01-10|Microsoft Corporation|Stereo video for gaming|
CN101395634B|2006-02-28|2012-05-16|皇家飞利浦电子股份有限公司|Directional hole filling in images|
US8594180B2|2007-02-21|2013-11-26|Qualcomm Incorporated|3D video encoding|
GB0712690D0|2007-06-29|2007-08-08|Imp Innovations Ltd|Image processing|
WO2009009465A1|2007-07-06|2009-01-15|Christopher William Heiss|Electrocoagulation reactor and water treatment system and method|
JP2009075869A|2007-09-20|2009-04-09|Toshiba Corp|Apparatus, method, and program for rendering multi-viewpoint image|
MY162861A|2007-09-24|2017-07-31|Koninklijke Philips Electronics N.V.|Method and system for encoding a video data signal, encoded video data signal, method and system for decoding a video data signal|
JP5575650B2|2007-10-11|2014-08-20|Koninklijke Philips N.V.|Method and apparatus for processing a depth map|
JP4703635B2|2007-12-12|2011-06-15|Hitachi, Ltd.|Stereoscopic image generation method, apparatus thereof, and stereoscopic image display apparatus|
CN101257641A|2008-03-14|2008-09-03|Tsinghua University|Method for converting plane video into stereoscopic video based on human-machine interaction|
JP4827881B2|2008-04-30|2011-11-30|Sanyo Electric Co., Ltd.|Video file processing method and video transmission / reception playback system|
AU2006225115B2|2005-03-16|2011-10-06|Lucasfilm Entertainment Company Ltd.|Three-dimensional motion capture|
JP2011523538A|2008-05-20|2011-08-11|Pelican Imaging Corporation|Image capture and processing using monolithic camera arrays with different types of imagers|
US8866920B2|2008-05-20|2014-10-21|Pelican Imaging Corporation|Capturing and processing of images using monolithic camera array with heterogeneous imagers|
US8514491B2|2009-11-20|2013-08-20|Pelican Imaging Corporation|Capturing and processing of images using monolithic camera array with heterogeneous imagers|
JP5848754B2|2010-05-12|2016-01-27|Pelican Imaging Corporation|Architecture for imager arrays and array cameras|
US8878950B2|2010-12-14|2014-11-04|Pelican Imaging Corporation|Systems and methods for synthesizing high resolution images using super-resolution processes|
KR101973822B1|2011-05-11|2019-04-29|FotoNation Cayman Limited|Systems and methods for transmitting and receiving array camera image data|
US20130070060A1|2011-09-19|2013-03-21|Pelican Imaging Corporation|Systems and methods for determining depth from multiple views of a scene that include aliasing using hypothesized fusion|
EP2761534B1|2011-09-28|2020-11-18|FotoNation Limited|Systems for encoding light field image files|
EP2817955B1|2012-02-21|2018-04-11|FotoNation Cayman Limited|Systems and methods for the manipulation of captured light field image data|
US9210392B2|2012-05-01|2015-12-08|Pelican Imaging Corporation|Camera modules patterned with pi filter groups|
CN104508681B|2012-06-28|2018-10-30|Fotonation开曼有限公司|For detecting defective camera array, optical device array and the system and method for sensor|
US20140002674A1|2012-06-30|2014-01-02|Pelican Imaging Corporation|Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors|
EP3869797A1|2012-08-21|2021-08-25|FotoNation Limited|Method for depth detection and correction in images captured using array cameras|
WO2014032020A2|2012-08-23|2014-02-27|Pelican Imaging Corporation|Feature based high resolution motion estimation from low resolution images captured using an array source|
CZ308335B6|2012-08-29|2020-05-27|Awe Spol. S R.O.|The method of describing the points of objects of the subject space and connection for its implementation|
EP2901671A4|2012-09-28|2016-08-24|Pelican Imaging Corp|Generating images from light fields utilizing virtual viewpoints|
US9143711B2|2012-11-13|2015-09-22|Pelican Imaging Corporation|Systems and methods for array camera focal plane control|
RU2640357C2|2013-02-06|2017-12-28|Koninklijke Philips N.V.|Method of encoding video data signal for use with multi-view stereoscopic display device|
US9462164B2|2013-02-21|2016-10-04|Pelican Imaging Corporation|Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information|
US9374512B2|2013-02-24|2016-06-21|Pelican Imaging Corporation|Thin form factor computational array cameras and modular array cameras|
WO2014138697A1|2013-03-08|2014-09-12|Pelican Imaging Corporation|Systems and methods for high dynamic range imaging using array cameras|
US8866912B2|2013-03-10|2014-10-21|Pelican Imaging Corporation|System and methods for calibration of an array camera using a single captured image|
US9106784B2|2013-03-13|2015-08-11|Pelican Imaging Corporation|Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing|
WO2014164550A2|2013-03-13|2014-10-09|Pelican Imaging Corporation|System and methods for calibration of an array camera|
US9519972B2|2013-03-13|2016-12-13|Kip Peli P1 Lp|Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies|
WO2014164909A1|2013-03-13|2014-10-09|Pelican Imaging Corporation|Array camera architecture implementing quantum film sensors|
WO2014153098A1|2013-03-14|2014-09-25|Pelican Imaging Corporation|Photometric normalization in array cameras|
WO2014159779A1|2013-03-14|2014-10-02|Pelican Imaging Corporation|Systems and methods for reducing motion blur in images or video in ultra low light with array cameras|
US9497429B2|2013-03-15|2016-11-15|Pelican Imaging Corporation|Extended color processing on pelican array cameras|
US9426451B2|2013-03-15|2016-08-23|Digimarc Corporation|Cooperative photography|
US9445003B1|2013-03-15|2016-09-13|Pelican Imaging Corporation|Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information|
US10122993B2|2013-03-15|2018-11-06|Fotonation Limited|Autofocus system for a conventional camera that uses depth information from an array camera|
JP2016524125A|2013-03-15|2016-08-12|Pelican Imaging Corporation|System and method for stereoscopic imaging using a camera array|
ITRM20130244A1|2013-04-23|2014-10-25|MAIOR Srl|METHOD FOR THE REPRODUCTION OF A FILM|
US9826212B2|2013-05-10|2017-11-21|Koninklijke Philips N.V.|Method of encoding a video data signal for use with a multi-view rendering device|
WO2015048694A2|2013-09-27|2015-04-02|Pelican Imaging Corporation|Systems and methods for depth-assisted perspective distortion correction|
WO2015070105A1|2013-11-07|2015-05-14|Pelican Imaging Corporation|Methods of manufacturing array camera modules incorporating independently aligned lens stacks|
US9641592B2|2013-11-11|2017-05-02|Amazon Technologies, Inc.|Location of actor resources|
US9578074B2|2013-11-11|2017-02-21|Amazon Technologies, Inc.|Adaptive content transmission|
US9604139B2|2013-11-11|2017-03-28|Amazon Technologies, Inc.|Service for generating graphics object data|
US9582904B2|2013-11-11|2017-02-28|Amazon Technologies, Inc.|Image composition based on remote object data|
US9413830B2|2013-11-11|2016-08-09|Amazon Technologies, Inc.|Application streaming service|
US9634942B2|2013-11-11|2017-04-25|Amazon Technologies, Inc.|Adaptive scene complexity based on service quality|
US9805479B2|2013-11-11|2017-10-31|Amazon Technologies, Inc.|Session idle optimization for streaming server|
US10119808B2|2013-11-18|2018-11-06|Fotonation Limited|Systems and methods for estimating depth from projected texture using camera arrays|
EP3075140B1|2013-11-26|2018-06-13|FotoNation Cayman Limited|Array camera configurations incorporating multiple constituent array cameras|
US10089740B2|2014-03-07|2018-10-02|Fotonation Limited|System and methods for depth regularization and semiautomatic interactive matting using RGB-D images|
EP2960864B1|2014-06-23|2018-12-05|Harman Becker Automotive Systems GmbH|Device and method for processing a stream of video data|
CN113256730A|2014-09-29|2021-08-13|快图有限公司|System and method for dynamic calibration of an array camera|
US9942474B2|2015-04-17|2018-04-10|Fotonation Cayman Limited|Systems and methods for performing high speed video capture and depth estimation using array cameras|
US10204449B2|2015-09-01|2019-02-12|Siemens Healthcare Gmbh|Video-based interactive viewing along a path in medical imaging|
EP3273686A1|2016-07-21|2018-01-24|Thomson Licensing|A method for generating layered depth data of a scene|
US10353946B2|2017-01-18|2019-07-16|Fyusion, Inc.|Client-server communication for live search using multi-view digital media representations|
US10482618B2|2017-08-21|2019-11-19|Fotonation Limited|Systems and methods for hybrid depth regularization|
RU2744699C1|2017-12-14|2021-03-15|Canon Kabushiki Kaisha|Generating apparatus, method of generation and program for a three-dimensional model|
US10965928B2|2018-07-31|2021-03-30|Lg Electronics Inc.|Method for 360 video processing based on multiple viewpoints and apparatus therefor|
CN112335258A|2018-11-12|2021-02-05|英特尔公司|Automatic field of view estimation from the perspective of a play participant|
US11270110B2|2019-09-17|2022-03-08|Boston Polarimetrics, Inc.|Systems and methods for surface modeling using polarization cues|
Legal status:
2020-04-14| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-04-22| B25D| Requested change of name of applicant approved|Owner name: KONINKLIJKE PHILIPS N.V. (NL) |
2020-05-12| B25G| Requested change of headquarter approved|Owner name: KONINKLIJKE PHILIPS N.V. (NL) |
2020-06-09| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-11-24| B11B| Dismissal acc. art. 36, par 1 of ipl - no reply within 90 days to fulfil the necessary requirements|
2021-10-13| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Application date | Patent title
EP09172096.1|2009-10-02|
EP09172096|2009-10-02|
PCT/IB2010/054251|WO2011039679A1|2009-10-02|2010-09-21|Selecting viewpoints for generating additional views in 3d video|