Patent abstract:
SYSTEM, METHOD FOR REGISTERING A HYPERSPECTRAL IMAGE AND VIEWING THE HYPERSPECTRAL IMAGE IN VISIBLE LIGHT, AND COMPUTER PROGRAM PRODUCT. The imaging system comprises a light field camera (3) for recording a hyperspectral light field (CLF). The system also comprises a light field projector (4) for projecting a light field in visible light (PLF). The camera and the projector share a common optical axis. The projector projects a light field (PLF) based on the hyperspectral light field (CLF) captured by the light field camera.
Publication number: BR112014028811B1
Application number: R112014028811-9
Filing date: 2014-03-12
Publication date: 2020-11-17
Inventors: Frederik Jan De Bruijn; Remco Theodorus Johannes Muijs; Jorrit Ernst De Vries; Bernardus Hendrikus Wilhelmus Hendriks; Drazenko Babic
Applicant: Koninklijke Philips N.V.
IPC primary class:
Patent description:

FIELD OF THE INVENTION
[001] The invention relates to a hyperspectral imaging system that comprises a camera for recording a hyperspectral image of an object and a display device for viewing the recorded hyperspectral image in visible light, and to a method for recording a hyperspectral image and viewing the hyperspectral image in visible light.
BACKGROUND OF THE INVENTION
[002] Hyperspectral imaging is known to reveal details that are impossible or difficult for the human eye to observe, such as, for example, tissue differences in a human. In hyperspectral imaging, an image of an object is obtained in one or more wavelength bands, where at least one wavelength band is at least partially invisible to the human eye, or at least very difficult to observe. This image is then converted into a visible image, which is provided in visible light to a viewer. Hyperspectral imaging can be based either on spectrally selective illumination (that is, illuminating the object with light in a given wavelength band) or on spectrally selective filtering prior to image capture (that is, using a filter that transmits only light in a given wavelength band). In both cases, image processing is required to generate a result image that reveals the structural contrast of interest.
[003] In such a system, the hyperspectral image (for example, an image extending beyond the visible spectrum) is conventionally acquired and the result is shown on a display screen. Looking at the display screen, the viewer, interested in the invisible or barely visible details of the object under observation, can study the image on the screen in visible light, as it would appear in, for example, UV or IR light.
[004] Although using a screen is a very useful technique, the possibilities are limited. It has been proposed to project a hyperspectral image onto the object under study, for example, in R.K. Miyake, H.D. Zeman, F.H. Duarte, R. Kikuchi, E. Ramacciotti, G. Lovhoiden, C. Vrancken, "Vein imaging: A new method of near infrared imaging where a processed image is projected onto the skin for the enhancement of vein treatment", Dermatologic Surgery, vol. 32, pp. 1031-1038, 2006. The projection is performed with a laser projector.
[005] It is difficult, if not nearly impossible, with the known technique to provide a sharp projection that coincides with the object to a relatively high degree of alignment, unless the object (in the known prior art, the skin) is immobile and flat to a high degree.
[006] It is an object of the invention to provide a system and a method for directly observing the hyperspectral details of an object under observation, in correct alignment.
SUMMARY OF THE INVENTION
[007] For this purpose, the system of the invention is characterized in that the camera is a light field camera and the display device is a light field projector, in which the camera and the projector share a coaxial optical path, and in which the camera is arranged to capture a hyperspectral light field and comprises an output for sending data on the captured hyperspectral light field to an input of the light field projector, and the light field projector is arranged to project a light field in visible light onto the object, based on the data received from the camera.
[008] For this purpose, the method of the invention is characterized in that a light field in a hyperspectral radiation range of an object is captured by a light field camera, the data on the light field captured by the camera are processed to provide projection image data for a light field projector, and the light field projector projects a light field, based on the projection image data, onto the object, in which the camera and the projector share a coaxial optical path and a light field in visible light is projected onto the object by the light field projector.
[009] The light field camera captures a light field in a hyperspectral band of the spectrum, that is, in a spectral radiation band at least partially invisible to the human eye, and the light field projector projects a light field in visible light. The light field projector forms a display device for viewing the recorded hyperspectral image in visible light. The projected light field creates a projected three-dimensional image that is superimposed on the object, said three-dimensional image being sharp over a wide range of depths. The shared coaxial optical path provides relatively easy alignment of the captured and projected light fields. This allows a precise, real-time projection in visible light of the hyperspectral image onto the object under observation, from which the camera captured the hyperspectral light field, even if the object under observation is not flat but has a three-dimensional shape.
[010] A camera that captures a light field has, compared to a normal two-dimensional or even three-dimensional camera, the advantage that a complete light field is obtained, with the possibility of obtaining sharp images over a wide range of depths. A normal two-dimensional camera does not provide a wide depth of view and, although a three-dimensional camera can provide some depth information, neither is able to provide a sharp image over the entire depth range. A light field camera is also called a plenoptic camera. A light field camera is a camera that captures light field information about a scene using plenoptic imaging. The plenoptic image captures an incident light field, preserving the intensity and direction of the incident light. The implementation of a plenoptic imaging system can be based on several techniques: a microlens array, according to M. Levoy et al., "Light field microscopy", ACM Trans. on Graphics, vol. 25, no. 3, pp. 924-934, July 2006; dappled photography with a continuously graded attenuation mask, as in A. Veeraraghavan et al., "Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing", ACM Trans. on Graphics (Proc. SIGGRAPH 2007), vol. 26, no. 3, July 2007; a coded aperture mask, as in A. Levin et al., "Image and depth from a conventional camera with a coded aperture", ACM Trans. on Graphics (Proc. SIGGRAPH 2007), vol. 26, no. 3, July 2007; a wavefront encoder, as in E.R. Dowski et al., "Extended depth of field through wave-front coding", Applied Optics, vol. 34, no. 11, pp. 1859-1866, Apr. 1995; or focus-sweep imaging, as in H. Nagahara et al., "Flexible Depth of Field Photography", in Proc. ECCV 2008, Oct. 2008. The plenoptic image stores spatial information of the incident light field. The captured light field is, in fact, four-dimensional, since each ray of light is characterized by a two-dimensional location on the sensor and a horizontal and vertical angle of incidence, adding two more dimensions. The projected light field creates an image on the object that is sharp across a wide range of optical depths.
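By way of illustration only (this sketch is not part of the original disclosure; the array sizes, function names, and the shift-and-add refocusing scheme are assumptions), such a four-dimensional light field and its synthetic refocusing over a range of depths could be represented as follows:

```python
import numpy as np

# A captured light field L[y, x, v, u]: (y, x) is the spatial location (the microlens
# or "superpixel" position) and (v, u) the angular sample, i.e. the horizontal and
# vertical direction of incidence. The resolutions below are hypothetical.
ny, nx, nv, nu = 120, 160, 9, 9
L = np.zeros((ny, nx, nv, nu), dtype=np.float32)

def sub_aperture_view(L, v, u):
    """Image formed by a single viewing direction (one angular sample)."""
    return L[:, :, v, u]

def refocus(L, alpha):
    """Shift-and-add synthetic refocusing: shift each angular view in proportion to
    its angle and average the views, bringing a chosen depth plane into sharp focus."""
    ny, nx, nv, nu = L.shape
    out = np.zeros((ny, nx), dtype=np.float32)
    for v in range(nv):
        for u in range(nu):
            dy = int(round(alpha * (v - nv // 2)))
            dx = int(round(alpha * (u - nu // 2)))
            out += np.roll(sub_aperture_view(L, v, u), shift=(dy, dx), axis=(0, 1))
    return out / (nv * nu)
```

Sweeping the parameter alpha corresponds to focusing the same captured data at different depths, which is what makes a sharp result available over the whole depth range.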
[011] Of the techniques described above, the use of a microlens array is preferred. With a continuously graded attenuation mask or a coded aperture mask, some of the light passing through the mask is attenuated, leading to a loss of intensity. With a microlens array, a higher percentage of the available light is used.
[012] A microlens array is located between a lens and an image sensor of the plenoptic camera. The microlens array refocuses the light captured by the main lens onto the image sensor, thus creating many small images obtained from slightly different points of view. Three-dimensional information is stored in these small images, each of which is produced by a single microlens. Each of the small images has a relatively low spatial resolution.
[013] Another type of light field camera, which does not use a microlens array, is a system that uses a so-called sweeping-lens or sweeping-focus technique. In such cameras, the position of the focus lens and/or the sensor is changed during image capture. This technique gives rise to an image that integrates over the focus sweep (that is, over a particular range of depths of field). The resulting image comprises all image information within the focus sweep and also captures all available light. The obtained image can be processed to provide sharp images at different depths and to reconstruct a plenoptic light field for projection. The use of a microlens array is preferred, since the light field can then be obtained instantaneously. With a microlens array it is also relatively easy to align the light field captured by the camera and the light field projected by the projector.
[014] Preferably, the camera and the projector share a common chain of optical imaging elements along the shared coaxial optical path. This provides the best alignment of the captured light field and the projected light field.
[015] Preferably, the system comprises an element that provides a plenoptic function, positioned in the shared coaxial optical path.
[016] Providing the element that provides a plenoptic function in the shared coaxial path increases the ease of alignment of the captured and projected light fields.
[017] Such an element can be a microlens array, a coded aperture, or a wavefront encoder.
[018] Of these elements, the microlens array is preferred.
[019] In embodiments, the system comprises a beam splitter for splitting the light paths to and from the common optical axis, to the light field camera and from the light field projector respectively, the beam splitter having a spectrally selective dichroic property. The dichroic beam splitter passes or reflects light in the hyperspectral range toward the camera, while reflecting or passing light in the visible range from the projector.
[020] In another embodiment, spectrally selective lighting is used.
[021] In preferred embodiments, the system is a mobile system, preferably portable, for example, a handheld system. This allows, for example, the doctor to view the veins immediately and on the spot. When needles need to be inserted into a vein, such an on-site inspection is a great advantage.
[022] In another preferred embodiment, the system is part of a surgical luminaire.
[023] In another embodiment, the system is part of a larger system, the larger system further comprising a secondary imaging system for providing secondary image data on an internal image of the object under observation, in which the system comprises a processor to provide depth information based on the captured hyperspectral light field data, and means to format, based on the depth information, the secondary data into an image projected onto the object. The secondary imaging system can be, for example, an X-ray, magnetic resonance, computed tomography, positron emission tomography, or ultrasound system.
BRIEF DESCRIPTION OF THE DRAWINGS
[024] These and other objects and advantageous aspects will become apparent from the exemplary embodiments described below with reference to the figures.
[025] Fig. 1 illustrates an embodiment of a system, according to the invention;
[026] Fig. 2 illustrates another embodiment of a system, according to the invention;
[027] Fig. 3 illustrates another embodiment of a system, according to the invention;
[028] Fig. 4 illustrates a hand-held device comprising a system, according to the invention;
[029] Fig. 5 illustrates vein image enhancement using a hand-held system, as shown in Fig. 4;
[030] Fig. 6 illustrates a surgical or dental lamp, which comprises a system according to the invention;
[031] Figures 7 and 8 illustrate a radiography system, which comprises a system according to the invention;
[032] Fig. 9 illustrates the principle of using microlenses to capture a light field and project a light field;
[033] Fig. 10 illustrates a method for fine-tuning the correspondence between the captured and projected light fields.
[034] The figures are not drawn to scale. Generally, identical components are designated by the same reference numbers in the figures.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[035] It is an object of the invention to provide a result image as a projection on the observed tissue, in such a way that the projection is always in correct focus on the tissue, regardless of the surface curvature of the tissue or its orientation in relation to the capture/projection device.
[036] The hyperspectral image provides contrast, for example, tissue contrast that is invisible to the naked eye. The enhanced contrast can, for example, be used to reveal blood vessels and nerves during surgery or to insert needles into the veins. It can also be used to identify malignant tissue.
[037] Hyperspectral image capture can be based on a monochromatic, non-spectrally-selective image sensor combined with spectrally selective filtering in front of the image sensor, similar to a normal RGB camera but with more color channels and with different filter characteristics. Alternatively, hyperspectral image capture can also be based on spectrally selective (controlled) illumination in combination with an unfiltered image sensor. A combination of 'filtered illumination' and 'filtered acquisition' is also possible.
[038] Differences in spectral response between different materials are generally converted into a visible contrast (black-and-white or pseudocolor) through a weighted linear combination of the different spectral input values for the same spatial location. Several different predetermined weighted combinations lead to different tissue contrasts. In this way, the result of the hyperspectral image capture is generally an image with enhanced contrast of the material (liquid or tissue) of interest. Thus, for example, it is possible to reveal the position of veins and arteries based on their subtle but distinct spectral response compared to, for example, the surrounding skin. The corresponding result image depicts the structure of the blood vessels directly under the observed skin area. It is an object of the invention to project the result image onto the observed tissue in real time and in constant, correct alignment with the observed tissue.
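As a purely illustrative sketch (the weight values, band count, and function name are hypothetical and not taken from the disclosure), such a weighted linear combination of the spectral samples at each location can be written as:

```python
import numpy as np

def spectral_contrast(cube, weights):
    """cube: (H, W, B) hyperspectral image with B spectral samples per location.
    weights: length-B predetermined weight vector chosen to enhance one material.
    Returns an (H, W) grayscale contrast image normalized to [0, 1]."""
    w = np.asarray(weights, dtype=np.float32)
    out = np.tensordot(cube.astype(np.float32), w, axes=([2], [0]))
    out -= out.min()
    return out / max(float(np.ptp(out)), 1e-8)

# Different predetermined weight vectors yield different tissue contrasts, e.g. one
# (hypothetical) set tuned to veins and another to arteries:
# vein_map = spectral_contrast(cube, vein_weights)
# artery_map = spectral_contrast(cube, artery_weights)
```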
[039] In general, the disadvantage of current hyperspectral imaging systems is that the result data appear separately on a display screen, in such a way that the geometric relationship with the actual tissue is easily lost. Currently, the use of augmented reality glasses is a popular method to keep the result data, generated on the glasses, which form the display screens, in constant alignment with the tissue that is observed by the surgeon. The main disadvantage is that this requires a head-mounted device whose location and orientation must be tracked relative to the position of the work area, which adds to the complexity of such solutions. In addition, it forces the specialist to wear special glasses. Moreover, in operating rooms many people are present; if only the specialist wears the special glasses, assistants will not be able to observe what the specialist is seeing unless they also wear glasses whose position and orientation are also tracked, adding considerable complexity to the system.
[040] It is an object of the invention to provide a system and a method that allow direct observation of the hyperspectral details of an object under observation, in correct alignment, without the need for three-dimensional object tracking or the use of special glasses.
[041] Fig. 1 illustrates an embodiment of a system and method, according to the invention.
[042] Onto object 1, in this example a human tissue, hyperspectral light is shone from a hyperspectral source 2. The light source can be part of the system and, in preferred embodiments, it is, or it can be provided separately. The hyperspectral source causes the object to form a hyperspectral image, for example, in IR or UV. Alternatively, the object itself, regardless of the presence of a hyperspectral light source, can provide a hyperspectral image, that is, an image at a wavelength that is difficult or impossible to observe with the human eye. For example, the object can be provided with a substance that, after having been previously illuminated, phosphoresces at a particular wavelength.
[043] Alternatively or in addition, tissue 1 can, even without a light source shining on it, provide an IR image that shows details at an IR wavelength that are invisible at visible wavelengths. Alternatively or in addition, the object can be illuminated with a source that provides visible light as well as, for example, UV and/or IR light, and a wavelength-selective element is provided in the light path to the camera, or at the camera, so that the camera records the hyperspectral image.
[044] Alternatively or in addition, the camera can be provided with pixel sensors that electronically record the image in visible light and pixel sensors that record the image in a hyperspectral radiation band, and the data from the hyperspectral-sensitive pixels are used for the hyperspectral light field.
[045] It is also possible to use a light field camera that comprises pixels sensitive both to visible light and to hyperspectral radiation (for example, the IR and/or UV part of the spectrum) and, sequentially over time, to place wavelength-selective filters in front of a source that provides visible light as well as hyperspectral radiation, where the filters alternately pass visible light or a hyperspectral part of the spectrum, and to synchronize the data acquisition of the light field camera with the sequential illumination, so as to provide light field data in the hyperspectral range and possibly also in the visible part of the spectrum.
[046] In embodiments, the hyperspectral image is obtained in a UV or IR band of the electromagnetic spectrum. These embodiments are preferred.
[047] However, the hyperspectral image can also be obtained in other bands of the electromagnetic spectrum, for example, by means of X-ray or Terahertz imaging.
[048] For such embodiments, the light field camera is an X-ray or Terahertz imaging device that provides the light field data in the X-ray or Terahertz portion of the electromagnetic spectrum.
[049] The four-dimensional light field provided by the tissue is captured by the light field camera 3 by means of a lens system 5. The lens system 5 comprises a beam splitter 6 and a microlens array 7. The captured light field is indicated by CLF in Figure 1. The light field camera comprises a sensor on which the light field is captured. The data on the captured light field are provided, by means of an image processor 8, to a light field projector 4. The output of the camera therefore provides the data for the input of the projector. "Providing data" should not, however, be interpreted to mean that the data from the camera are supplied directly to the projector, but that the data from the camera form a basis for the data supplied to the projector. Data processing can be provided between the output of the light field camera 3 and the input of the light field projector 4. The light field projector projects the light field PLF onto the tissue 1 via the beam splitter 6 and the microlens array 7. It is preferred that the light source forms part of the system. This makes it possible to control the intensity of the light that shines on object 1. The embodiment of Figure 1 shows a system in which a hyperspectral image is obtained in UV or IR. As explained above, such an image can be obtained in several ways. For the sake of simplicity, no wavelength-selective element is shown in the figure. Such a wavelength-selective element can, for example, be placed in front of the source or in front of the camera or, if the camera comprises different pixels for visible light than for UV or IR, the data can be filtered electronically, i.e., through a data filter applied to the data acquired by the light field camera.
[050] Due to the generally short focal length of the microlenses in the microlens array, the array tends to create a set of micro-images focused very closely behind the lens array. The optical lens system between the microlens array 7 and the beam splitter 6, and also behind the beam splitter, relays this (micro-)image plane in such a way that the micro-image plane coincides with the camera's sensor plane and with the plane of the imaging element in the projector. The imaging element can be, for example, a set of light-emitting elements, a set of switching mirrors (a DLP-type element), or a set of LCD light shutters.
[051] Projector 4 and camera 3 share a common coaxial optical axis. The common optical axis is illustrated in Figure 1 by the fact that the rays of light are parallel. The advantage of using a common optical path for image capture and projection is that the projected overlay is in good alignment with the underlying tissue. Apart from scaling for differences in the size of the sensor and of the projection element, no complex three-dimensional processing is required.
[052] Each microlens can be considered a superpixel that stores not only the intensity but also the angular information of the light incident at the location of this 'superpixel'. Similarly, a projector that generates the same micro-images in association with a microlens array will lead to a projection in which the focal plane coincides with the original surface plane, regardless of its curved shape. The use of a common optical path and the alignment of the sensor and projector pixels lead to a projection that is always in focus on the surface that is captured with the camera. The use of a microlens array is preferred, since a microlens array does not attenuate the light field.
[053] The system can be called a hyperspectral augmented reality system that provides range-invariant capture and projection.
[054] Depending on the application, beam splitter 6 can also provide spectral selectivity. In particular, when the image capture is primarily in an invisible light domain, such as IR, the beam splitter may have a dichroic property. In this case, the incident IR light follows a direct path toward the camera, while the visible light from the projector is deflected by the beam splitter.
[055] Figure 2 also illustrates an embodiment of a system according to the invention. Here a mirror is used to fold the path of the projected light field. This allows, in some circumstances, a more compact design of the system.
[056] Figure 3 illustrates a further embodiment. In this embodiment, the camera and the projector comprise separate microlens arrays. The systems of Figures 1 and 2 are preferred; however, if, for example, the spectral wavelength of the hyperspectral image requires a specific microlens material that is less suitable for visible wavelengths, separate microlens arrays can be used. In Figures 1 to 3, the camera and the projector share common imaging elements along the common optical axis.
[057] Figures 4 and 5 illustrate a preferred embodiment of the system. In this embodiment, the system is a mobile, preferably portable, system; here it is a hand-held system. The system comprises a hyperspectral source inside the hand-held device, the camera, and the projector, and the portable device is used to capture the tissue region and provide a projection of otherwise invisible data, for example, on the position of the veins, as shown in Figure 5. Adequate image capture and a sharp, correctly projected image of, for example, veins using a portable device provide great advantages in situations where it is important or even vital to find a vein quickly. When inserting a needle into a vein in, for example, an emergency situation such as an accident, it can be critical or even a matter of life and death to act quickly and accurately, and to need only a relatively simple device that can easily be operated and brought along in emergency situations. Existing systems do not provide the possibility of producing, accurately and in real time, and at the accident site, an image of the position of the veins or other hyperspectral details. The portable system of Figures 4 and 5 provides this possibility. In this example the system is hand-held. The system can also be worn on a helmet or on a sleeve, so that the hands are free to insert a needle or perform other clinical procedures.
[058] Fig. 6 illustrates the use of a system, according to the invention, in a surgical lamp or a dental lamp. The lamp can optionally provide spectrally selective lighting as part of the hyperspectral image capture.
[059] In yet another embodiment, the invention can be incorporated into a system comprising a secondary imaging system, for example, an X-ray imaging system or, more generally, a system that produces an internal image of the object under observation, for example, a system as described in patent application WO2010067281.
[060] In Figure 7a, a schematic drawing of a system for such an embodiment is shown.
[061] The system comprises an X-ray C-arm with two attached cameras sensitive to UV, visible, or infrared wavelengths. The illustrated C-arm X-ray system consists of a base structure 72, movable on wheels 71, on which a C-arm 73 is installed in such a way that it can rotate around the axis 74 (angulation) and can also be rotated around an axis 75 in the direction of the double arrow 76 (orbital rotation). Although a mobile system is described here, the X-ray system can also be attached to the wall, as in a catheterization laboratory. An X-ray source 77 and a detector 81, preferably a flat rectangular detector, which are located 180 degrees opposite each other, are attached to the C-arm 73 near its ends.
[062] The X-ray C-arm is capable of acquiring an internal three-dimensional image of the patient. Camera system 82 is attached to the side of detector 81 and is capable of capturing images of the patient's operating field. In a particular embodiment, the camera system is capable of taking a three-dimensional image of the patient. In addition, the hyperspectral imaging system 83, according to the invention, is also attached to detector 81 and is able to project information in visible light back onto the patient in such a way that the images are in focus on the patient's curved surfaces. For example, structures such as tumor margins are better delineated in the hyperspectral image and can be projected back onto the patient in visible light, according to the invention. This makes the tumor margins more visible to the surgeon. In addition to this back-projection of the hyperspectral image, it is also possible to back-project images obtained by the X-ray system and converted into visible images by system 83. For example, the depth position of a tumor within the body, visible in the X-ray image, is projected back onto the patient's body. In this way, the surgeon has a much better indication of where the tumor is located. In addition, important structures such as large blood vessels that are just below the surface and are not visible to the eye can be indicated. In this way, the surgeon knows in advance that he needs to be careful when making incisions at this position. Instead of an X-ray system, a similar approach can also be applied with an MRI, CT, positron emission tomography, or ultrasound system. A Terahertz imaging system can also be used. All of these systems provide an internal image of the object under observation and, in all cases, the data sources produce a stream of two-dimensional images that form a set of secondary data in addition to the data based on the camera acquisitions.
[063] In the system of Figure 7, the relative positions of the hyperspectral imaging system and the secondary imaging system (the X-ray system in Figure 7) are known and fixed. This allows relatively simple matching of the hyperspectral and internal images.
[064] In systems where the relative positions of the hyperspectral imaging system and the secondary internal imaging system are variable to a greater or lesser extent, means are preferably provided for determining the relative positions of the hyperspectral imaging system and the secondary imaging system. This can be done automatically, for example, by providing electronic means to measure the X, Y and Z coordinates of both imaging systems and preferably also the orientation or axes of the imaging systems, if this information is relevant. Of course, this can also be accomplished by manually entering such data. Alternatively or in addition, image features, whether naturally occurring or specifically inserted into the field of view of the respective images and present in both the hyperspectral and secondary images, can be used to align the hyperspectral and secondary images. For example, small metal objects placed on the patient at various points, which would show up on the hyperspectral images as well as being visible on the X-ray images, could be used for this purpose.
[065] Figure 8 further illustrates the system of Figure 7. The use of such secondary image data, from, for example, X-ray data, requires the explicit calculation of a depth map d(x, y) that describes the distance d between the plenoptic camera/projector and the tissue surface for each pixel (x, y) of the projector. This is in contrast to the data from the plenoptic camera itself, which require only spatial interpolation to match the input pixel grid of the plenoptic camera to the output pixel grid of the projector.
[066] The captured light field comprises depth information. To recover the distance profile from the captured light field data, several solutions have been proposed, for example, by Bishop et al. in T. Bishop, P. Favaro, "Plenoptic depth estimation from multiple aliased views", in: 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), IEEE, pp. 1622-1629, Los Alamitos, 2009, and by Wanner et al. in S. Wanner, J. Fehr, B. Jaehne, "Generating EPI representations of 4D light fields with a single lens focused plenoptic camera", in: Proc. ISVC 2011, G. Bebis et al., eds., pp. 90-101, 2011. This then becomes an extra task that is performed by processing block 8 in Figure 8. The recovered depth map d(x, y) is then used in part 9 to reformat the image from the secondary data source into a set of micro-images. Given proper alignment with the microlens array, the secondary data will then also be projected in proper focus onto the tissue surface, regardless of its shape and orientation. Although not shown, part 9 can also provide an input for data on the relative positions and/or orientations of the hyperspectral imaging and X-ray systems.
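Purely as an illustration of this reformatting step (the inverse-depth disparity model, the gain parameter, and the function name are assumptions made for the sketch, not the patent's implementation), the use of the depth map to build per-microlens micro-images might look as follows:

```python
import numpy as np

def reformat_to_microimages(S, d, nv=9, nu=9, gain=0.05):
    """S: (H, W) secondary image, already resampled to the projector's microlens grid.
    d: (H, W) depth map d(x, y), distance from the plenoptic camera/projector to the
    tissue surface. Returns an (H, W, nv, nu) set of micro-images for the projector,
    whose angular samples are shifted in proportion to the local disparity so that
    the projected rays re-converge on the tissue surface."""
    H, W = S.shape
    lf = np.zeros((H, W, nv, nu), dtype=np.float32)
    disparity = gain / np.maximum(d, 1e-6)   # assumed inverse-depth disparity model
    ys, xs = np.indices((H, W))
    for v in range(nv):
        for u in range(nu):
            yy = np.clip(np.rint(ys + disparity * (v - nv // 2)), 0, H - 1).astype(int)
            xx = np.clip(np.rint(xs + disparity * (u - nu // 2)), 0, W - 1).astype(int)
            lf[:, :, v, u] = S[yy, xx]
    return lf
```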
[067] Fig. 9 illustrates the principle of using microlenses to capture a light field and project a light field. The upper part of Fig. 9 illustrates the capture of a light field. The plenoptic image stores spatial information of the incident light field. In the case where a microlens array is used, three-dimensional information is stored in small micro-images, each of which is produced by a single microlens of the microlens array. The captured light field is, in fact, four-dimensional, since each ray of light is characterized by a two-dimensional location on the sensor and a horizontal and vertical angle of incidence, adding two more dimensions.
[068] Each microlens can be considered a superpixel that stores not only the intensity but also the angular information of the light incident at the location of this 'superpixel'.
[069] The lower part of Fig. 9 illustrates the projection of a light field from the pixels of the projector 4. The light rays are reversed. A projector that generates the same micro-images in association with a microlens array will lead to a projection in which the focal plane coincides with the original surface plane, regardless of its curved shape. The use of a common optical path and the alignment of the sensor and projector pixels lead to a projection that is always in focus on the surface that is captured with the camera. If all elements are exactly the same, with the same size, same position, etc., there is a simple one-to-one relationship between the camera pixels and the projector pixels. In reality, the two may differ in size or exact location. The relationship, however, remains a simple matter of translation (T) and scaling (S). This is performed in processor 8.
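As an illustrative sketch only (the function and parameter names are hypothetical), the translation-and-scale mapping from camera pixel coordinates to projector pixel coordinates performed in processor 8 can be expressed as:

```python
import numpy as np

def camera_to_projector(points, S=(1.0, 1.0), T=(0.0, 0.0)):
    """points: (N, 2) array of (x, y) camera pixel coordinates.
    S: per-axis scale factors; T: per-axis translation in projector pixels.
    Returns the corresponding (N, 2) projector pixel coordinates."""
    p = np.asarray(points, dtype=np.float32)
    return p * np.asarray(S, dtype=np.float32) + np.asarray(T, dtype=np.float32)

# With identical, perfectly aligned elements, S = (1, 1) and T = (0, 0), i.e. a
# one-to-one pixel correspondence; otherwise S and T absorb the size/position offset.
```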
[070] The translation could also be performed mechanically, by providing the projector or the camera with means for translating the sensor or projection surface in the x and y directions.
[071] Having common optical elements, and in particular a common element that provides the plenoptic function (in Figure 9, the microlens array 7), increases the correspondence between the optical paths of image registration and projection, thus simplifying the processing.
[072] Figure 10 illustrates a method for finding the required translation and scale factors.
[073] In Figure 10, a test image T is provided; this test image is registered by camera 3, which sends the data about the registered image to processor 8; processor 8 applies an initial transformation T and S, found, for example, by a computer-generated optical ray trace that assumes known characteristics of the camera and the projector, to the data and sends them to projector 4. The projected image is compared to the test image, which can, for example, be done with a separate camera capable of recording both the hyperspectral image and the projected image. If the test image and the projected image coincide, the present values for T and S are used; if not, the values of T and S are varied until the test image and the projected image coincide. This is one way to find the T and S values. Thus, Figure 10 shows a method for aligning the light field camera and the light field projector of a system according to the invention, by adjusting the translation and scale factors T and S to align a test image with the projected light field image. In preferred methods according to the invention, this test and alignment procedure is performed prior to the acquisition of light field images and the projection of light field images.
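The following sketch illustrates one possible form of such an adjustment loop; the mean-squared-error metric, the simple coordinate search, and the project_and_capture helper are assumptions made for illustration and are not taken from the disclosure.

```python
import numpy as np

def calibrate(test_image, project_and_capture, T0=(0.0, 0.0), S0=(1.0, 1.0),
              iters=200, step=1.0):
    """project_and_capture(T, S) is an assumed helper: it drives the projector with
    the test image transformed by (T, S) and returns the projection as recorded by a
    separate reference camera, on the same pixel grid as test_image."""
    T = np.array(T0, dtype=np.float32)
    S = np.array(S0, dtype=np.float32)

    def error(T, S):
        return float(np.mean((project_and_capture(T, S) - test_image) ** 2))

    best = error(T, S)
    for _ in range(iters):
        improved = False
        for i in (0, 1):                      # x and y axes
            for dT, dS in ((step, 0.0), (-step, 0.0),
                           (0.0, 0.01 * step), (0.0, -0.01 * step)):
                T_try, S_try = T.copy(), S.copy()
                T_try[i] += dT
                S_try[i] += dS
                e = error(T_try, S_try)
                if e < best:
                    best, T, S, improved = e, T_try, S_try, True
        if not improved:
            step *= 0.5                       # refine once no perturbation helps
            if step < 1e-3:
                break
    return T, S
```

Once the test image and the projected image coincide, the found values of T and S are kept and used for subsequent light field acquisition and projection.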
[074] In summary, the invention can be briefly described as follows:
[075] An imaging system comprising a light field camera (3) for recording a hyperspectral light field (CLF). The system also comprises a light field projector (4) for projecting a light field in visible light (PLF). The camera and the projector share a common optical axis. The projector projects a light field (PLF) based on the hyperspectral light field (CLF) captured by the light field camera.
[076] The invention is not restricted to the exemplary embodiments shown in the figures or described above. It will be clear to a person skilled in the art that many variations are possible.
[077] The term "comprising" does not exclude the presence of other elements or steps to those listed in a claim. The use of the article "one" or "one" prior to an element does not exclude the presence of a plurality of such elements.
[078] The term "means" includes any means, whether in the form of software, hardware and any combination thereof to carry out the indicated function.
[079] The different elements of a system can be, and preferably are, in a single device, but several elements can be at different physical positions, for example, when the light field data are sent from the light field camera to part 8 to be processed into projection light field data for projector 4. This part 8 can be in the same device as the camera and the projector, and preferably is, but it can also be a CPU at a location on the internet or shared by several systems. Data can be transmitted from camera 3 to part 8 by any means for data transmission, wired or wireless. The same applies to the data from part 8 to projector 4.
[080] For those embodiments in which the invention is carried out, in whole or in part, by means of software, the invention also relates to a computer program product comprising program code means stored on a computer-readable medium for carrying out a method according to the invention, and to a computer program product to be loaded by a computer arrangement, comprising instructions for a method according to the invention.
Claims:
Claims (13)
[0001]
1. IMAGING SYSTEM, characterized by: - a light field capture camera configured to record an image of an object in an X-ray or Terahertz spectral radiation band; and - a display device configured to display the recorded image in visible light, where the display device includes a light field projector, where the light field capture camera and the light field projector share a coaxial optical path, and where the light field capture camera comprises an output for sending data on the captured light field to an input of the light field projector, and the light field projector is configured to project a light field in visible light onto the object, based on the data received from the light field capture camera; the system further comprising: - a secondary imaging system configured to provide secondary image data on an internal image of the object under observation; and - a processor configured to provide depth map information that describes the distance between a pixel of the light field projector and the object's surface, based on the data on the light field captured by the light field capture camera, and to format, based on the depth map information, the secondary image data into an image projected onto the object's surface.
[0002]
2. SYSTEM, according to claim 1, characterized in that the light field capture camera and the light field projector share a common chain of optical imaging elements along the coaxial optical path.
[0003]
3. SYSTEM, according to claim 1, characterized in that it additionally comprises a microlens array, a coded aperture, or a wavefront encoder configured to provide a plenoptic function and positioned in the shared coaxial optical path.
[0004]
4. SYSTEM, according to claim 1, characterized in that it additionally comprises a microlens array configured to provide a plenoptic function.
[0005]
5. SYSTEM, according to claim 4, characterized in that the microlens array is an element common to the light field capture camera and the light field projector.
[0006]
6. SYSTEM, according to claim 1, characterized in that it further comprises a beam splitter configured to split the light paths, the beam splitter having a spectrally selective dichroic property.
[0007]
7. SYSTEM, according to claim 1, characterized in that the imaging system is mobile and portable.
[0008]
8. SYSTEM, according to claim 1, characterized in that the light field capture camera records the light field in an IR or UV part of the electromagnetic spectrum.
[0009]
9. SYSTEM, according to claim 1, characterized in that the secondary imaging system is an X-ray, magnetic resonance, computed tomography, positron emission tomography, or ultrasound imaging system.
[0010]
10. METHOD FOR RECORDING AN IMAGE OF AN OBJECT IN AN X-RAY OR TERAHERTZ SPECTRAL RADIATION BAND AND VIEWING THE IMAGE IN VISIBLE LIGHT, characterized by comprising: capturing a light field in the X-ray or Terahertz spectral radiation band with a light field capture camera so that the image of the object is obtained; processing the data on the light field captured by the light field capture camera to provide projection image data for a light field projector; and projecting, by the light field projector, a light field based on the projection image data onto the object, in which the light field capture camera and the projector share a coaxial optical path and a light field in visible light is projected onto the object by the light field projector; in which the data on the light field captured by the light field capture camera are processed to provide depth map information describing a distance between a pixel of the light field projector and the surface of the object; and in which secondary image data are provided on an internal image of the object, and said secondary image data are reformatted using the depth map information, and said reformatted data are provided to the light field projector for projection onto the object's surface.
[0011]
11. METHOD, according to claim 10, characterized in that the light field is captured in an IR or UV part of the electromagnetic spectrum.
[0012]
12. METHOD, according to claim 10, characterized in that the secondary image data are provided by an X-ray, magnetic resonance, computed tomography, positron emission tomography, or ultrasound system.
[0013]
13. NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, characterized by containing a program that causes a computer to: record an image of an object in an X-ray or Terahertz spectral radiation band and display the image in visible light by: capturing a light field in a radiation range at least partially invisible to the human eye with a light field capture camera, so that the image of the object is obtained; processing the data on the light field captured by the light field capture camera to provide projection image data for a light field projector; and projecting a light field based on the projection image data onto the object, where the light field capture camera and the projector share a coaxial optical path, and a light field in visible light is projected onto the object by the light field projector; where the data on the light field captured by the light field capture camera are processed to provide depth map information that describes a distance between a pixel of the light field projector and a surface of the object; and where secondary image data are provided on an internal image of the object, and said secondary image data are reformatted using the depth information, and said reformatted data are provided to the light field projector for projection onto the object's surface.
Patent family:
Publication number | Publication date
EP2976609A1|2016-01-27|
RU2014153621A|2016-07-20|
WO2014147515A1|2014-09-25|
US20150381908A1|2015-12-31|
RU2014153621A3|2018-03-19|
EP2976609B1|2022-01-05|
CN104380066A|2015-02-25|
JP5974174B2|2016-08-23|
RU2655018C2|2018-05-23|
CN104380066B|2018-12-21|
BR112014028811A2|2017-06-27|
US9736402B2|2017-08-15|
JP2015529482A|2015-10-08|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

JP2001521772A|1997-10-30|2001-11-13|ハイパーメッド・イメジング・インコーポレーテッド|Multispectral / hyperspectral medical instruments|
JP4625956B2|2004-08-27|2011-02-02|国立大学法人東京工業大学|Image processing apparatus and image processing method|
CA2631564A1|2004-11-29|2006-06-01|Hypermed, Inc.|Medical hyperspectral imaging for evaluation of tissue and tumor|
JP5149015B2|2004-12-28|2013-02-20|ハイパーメツド・イメージング・インコーポレイテツド|Hyperspectral / multispectral imaging in the determination, evaluation and monitoring of systemic physiology and shock|
US8838210B2|2006-06-29|2014-09-16|AccuView, Inc.|Scanned laser vein contrast enhancer using a single laser|
GB0602137D0|2006-02-02|2006-03-15|Ntnu Technology Transfer As|Chemical and property imaging|
US20080298642A1|2006-11-03|2008-12-04|Snowflake Technologies Corporation|Method and apparatus for extraction and matching of biometric detail|
US20100177184A1|2007-02-14|2010-07-15|Chrustie Medical Holdings, Inc.|System And Method For Projection of Subsurface Structure Onto An Object's Surface|
EP2075616A1|2007-12-28|2009-07-01|Möller-Wedel GmbH|Device with a camera and a device for mapping and projecting the picture taken|
WO2009118671A1|2008-03-28|2009-10-01|Koninklijke Philips Electronics N.V.|Object localization in x-ray images|
WO2010067281A1|2008-12-11|2010-06-17|Koninklijke Philips Electronics N.V.|System and method for generating images of a patient's interior and exterior|
EP2429398A1|2009-05-13|2012-03-21|Koninklijke Philips Electronics N.V.|System for detecting global patient movement during imaging procedures|
US20120200829A1|2011-02-09|2012-08-09|Alexander Bronstein|Imaging and projecting devices and methods|
US8897522B2|2012-05-30|2014-11-25|Xerox Corporation|Processing a video for vascular pattern detection and cardiac function analysis|JPH0711005B2|1988-09-09|1995-02-08|昭和アルミパウダー株式会社|Size-controlled metal powder for metallic pigment and method for producing size-controlled metal powder|
DE102012222375B3|2012-12-06|2014-01-30|Siemens Aktiengesellschaft|Magnetic coil device for investigation on head of patient, has light field camera element which is provided in camera unit of magnetic coil assembly, such that camera element is arranged within receiving region surrounding shell unit|
US10107747B2|2013-05-31|2018-10-23|Ecole Polytechnique Federale De Lausanne |Method, system and computer program for determining a reflectance distribution function of an object|
DE102014210938A1|2014-06-06|2015-12-17|Siemens Aktiengesellschaft|Method for controlling a medical device and control system for a medical device|
US20160086380A1|2014-09-22|2016-03-24|Invuity, Inc|Hyperspectral imager|
US20160205360A1|2014-12-04|2016-07-14|Stephen Allen|Systems and methods for facilitating placement of labware components|
US9906759B2|2015-04-09|2018-02-27|Qualcomm Incorporated|Combined processing and display device package for light field displays|
CN104887181A|2015-04-29|2015-09-09|浙江大学|Portable vein projector|
US10722200B2|2015-06-04|2020-07-28|Siemens Healthcare Gmbh|Apparatus and methods for a projection display device on X-ray imaging devices|
CN106331442B|2015-07-02|2021-01-15|松下知识产权经营株式会社|Image pickup apparatus|
US10317667B2|2015-07-04|2019-06-11|The Regents Of The University Of California|Compressive plenoptic microscopy for functional brain imaging|
CN105158888B|2015-09-29|2020-09-11|南京理工大学|Programmable microscope condenser device based on LCDpanel and imaging method thereof|
JP2017080159A|2015-10-29|2017-05-18|パイオニア株式会社|Image processing apparatus, image processing method, and computer program|
CN109074674A|2016-02-26|2018-12-21|南加州大学|The optimization volume imaging detected with selective volume irradiation and light field|
DE102016207501A1|2016-05-02|2017-11-02|Siemens Healthcare Gmbh|Method for operating a magnetic resonance device and magnetic resonance device|
EP3284396B1|2016-08-16|2020-02-12|Leica InstrumentsPte. Ltd.|Observation apparatus and method for visual enhancement of an observed object|
CN109640868A|2016-09-09|2019-04-16|直观外科手术操作公司|Simultaneous with the imaging system of white light and EO-1 hyperion light|
JP2020507436A|2017-02-14|2020-03-12|アトラクシス エス・アー・エール・エル|High-speed optical tracking using compression and / or CMOS windowing|
GB201713512D0|2017-08-23|2017-10-04|Colordyne Ltd|Apparatus and method for projecting and detecting light on a 2D or 3D surface, e.g. for semantic lighting based therapy|
CN108836506A|2018-07-20|2018-11-20|东北大学|A kind of black light for operation shows that equipment and optics instruct system|
CN108937992B|2018-08-06|2020-10-23|清华大学|In-situ visualization system for X-ray perspective imaging and calibration method thereof|
GB201902668D0|2019-02-27|2019-04-10|Colordyne Ltd|Appoaratus for selectively illuminating a target field, for example, in a self dimming headlight system|
KR102222076B1|2019-03-19|2021-03-03|한국광기술원|Optical System for Realizing Augmented Reality and Medical Augmented Reality Apparatus Including the Same|
CN112001998B|2020-09-02|2021-02-19|西南石油大学|Real-time simulation ultrasonic imaging method based on OptiX and Unity3D virtual reality platforms|
Legal status:
2018-11-13| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-02-18| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-07-14| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2020-11-17| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 12/03/2014, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US201361803169P| true| 2013-03-19|2013-03-19|
US61/803,169|2013-03-19|
PCT/IB2014/059652|WO2014147515A1|2013-03-19|2014-03-12|System for hyperspectral imaging in visible light, method for recording a hyperspectral image and displaying the hyperspectral image in visible light|