Patent abstract:
A method (10) for processing local information acquired by means of a virtual representation and a device comprising an inertial unit and an image sensor, comprising the following steps: capture (11) of at least one image of a real environment of the device, location (12) of the device in the virtual representation, corresponding to the location of the device in the real environment, by correlation of parts of a captured image with parts of the virtual representation, determination (13) of the displacement of the device at least by means of the inertial unit and modification (15) of the location of the device in the virtual representation as a function of the displacement determined by the inertial unit, so that the actual position of the device corresponds, during the displacement, to the location of the device in the virtual representation.
Publication number: FR3021442A1
Application number: FR1454556
Filing date: 2014-05-21
Publication date: 2015-11-27
Inventors: Nicolas Chevassus;Denis Marraud;Antoine Tarault;Xavier Perrotton
Applicant: Airbus Group SAS;
IPC main class:
Patent description:

[0001] FIELD OF THE INVENTION The present invention relates to a local information processing method, a device for implementing such a method and a communicating portable terminal comprising such a device. The invention applies in particular in the field of assistance to industrial inspection. More particularly, the invention applies to assembly, maintenance and installation assisted by mixed or augmented reality, and to training aids. STATE OF THE ART Augmented reality in an industrial environment requires very robust localization methods. Currently, the position of the device used is estimated using markers. This technique is robust only while a marker is visible. In addition, the device is located in the reference frame of the marker. Furthermore, there are few devices that can be located without the use of markers. To locate a device robustly, existing methods require a long and unintuitive calibration step. This calibration step prevents rapid use of the device and requires a certain qualification of the users. In addition, certain positioning techniques reconstruct the environment in real time, for example SLAM ("Simultaneous Localization And Mapping"). The device is then located with respect to a reconstructed environment, which has several disadvantages. First, it is not possible to detect differences between what has been built and what has been designed. In addition, the reconstructed environment may contain element-detection errors, for example. This technique is therefore unreliable, especially in the field of maintenance or wherever high accuracy is desired. Augmented reality devices are, in most cases, devices for displaying information superimposed on an image or video.
[0002] European patent application EP 2201532 is known in the state of the art; it describes a local positioning device configured to determine the relative position of the device with respect to a target object. The device is mounted on a controlled and graduated ball joint fixed on a tripod. The ball joint is used to determine the azimuth and elevation angles, which must be entered manually to define the position of the device. This device is difficult to position and move in an industrial environment. OBJECT OF THE INVENTION The present invention aims to remedy all or part of these disadvantages. To this end, according to a first aspect, the present invention aims at a method of processing local information acquired by means of a virtual representation and a device comprising an inertial unit and an image sensor, which comprises the following steps: - capture of at least one image of a real environment of the device, - location of the device in the virtual representation, corresponding to the location of the device in the real environment, by correlation of parts of a captured image and parts of the virtual representation, - determination of the displacement of the device at least by means of the inertial unit and - modification of the location of the device in the virtual representation as a function of the displacement determined by the inertial unit so that the actual position of the device corresponds, during the displacement, to the location of the device in the virtual representation.
[0003] First, the invention has the advantage that the device can be located from a single captured image. The method is therefore highly robust from the device-locating step onward. This step is also fast and requires no special skills on the part of the user of the device.
[0004] The use of a virtual representation created beforehand makes it possible to locate the device in the reference frame of the virtual representation. Unlike a reconstruction, the representation makes it possible to detect elements missing from the real environment. The analyzed information is therefore more reliable. The virtual representation can also be modified according to observations made of the real environment. The correlation step may be performed by an image analysis that does not require a target object. In addition, the device performing the method of the invention is not fixed and is easily movable in an industrial environment. In embodiments, the step of determining the displacement comprises the following steps: - estimation of a movement by the inertial unit, - estimation of a relative movement between the images captured at a given instant and the images captured at a subsequent instant and - combination of the motion estimates. These embodiments have the advantage of providing a position that is as accurate as possible. The estimation of the relative movement between two captured images, together with the estimation of the movement by the inertial unit, minimizes the position estimation error. Indeed, since industrial environments can be visually ambiguous, correlation-based localization can confuse two similar situations. The two other motion estimates make it possible to avoid abrupt changes of location. The process is therefore more robust.
[0005] In embodiments, during the step of locating the device, the correlation is performed by recognizing, in at least one captured image, discriminant semantic structures predefined in the virtual representation. This recognition has the advantage of increasing the robustness of the process: the more discriminant semantic structures are recognized in the captured image, the more accurate the process. In embodiments, the method that is the subject of the present invention comprises a step of attenuating the displacement determined during the step of determining the displacement.
[0006] These embodiments have the advantage of providing a precise location in the event of a momentary loss of the image, or if the image is unusable. The user does not have to wait for the device to stabilize in order to know its position relative to the reference frame of the virtual representation. The user is therefore more efficient. This also helps to mitigate the micromovements to which the device may be subjected. In embodiments, the method that is the subject of the present invention comprises a step of jointly displaying a captured image and a portion of the virtual representation corresponding to the displayed captured image. The advantage of these embodiments is a better visualization of the differences between the virtual representation and the real environment. The user is therefore more efficient. In embodiments, the method that is the subject of the present invention comprises a step of editing information located on the virtual representation. The advantage of these embodiments is to have precisely localized information. In addition, since the information is recorded on the virtual representation, it is easily transferred and accessed. The information recorded during previous uses of the device is accessible and modifiable. According to a second aspect, the present invention aims at a local information processing device which comprises: - an image sensor, which provides at least one image of the real environment of the device, and - a means of access to a virtual representation, and which further comprises: - means for locating the device in the virtual representation, corresponding to the location of the device in the real environment, by correlation of parts of a captured image and parts of the virtual representation, - an inertial unit which determines a displacement of the device and - means for modifying the location of the device in the virtual representation as a function of the displacement determined by the inertial unit so that the actual position of the device corresponds, during the displacement, to the location of the device in the virtual representation. Since the advantages, aims and particular characteristics of the device that is the subject of the present invention are similar to those of the method that is the subject of the present invention, they are not repeated here.
[0007] In embodiments, the device of the present invention comprises a display means configured to jointly display a captured image and a portion of the virtual representation corresponding to the displayed captured image.
[0008] These embodiments have the advantage of allowing the real environment to be compared with the virtual representation in order to detect anomalies. In embodiments, the device that is the subject of the present invention comprises means for editing information located on the virtual representation.
[0009] The advantage of these embodiments is the possibility of creating and modifying annotations precisely located directly on the virtual representation. According to a third aspect, the present invention aims at a communicating portable terminal comprising a device that is the subject of the present invention.
[0010] Thanks to these provisions, the invention is compact and easily transportable in industrial settings, which are often difficult to access. BRIEF DESCRIPTION OF THE FIGURES Other advantages, aims and features of the invention will become apparent from the following nonlimiting description of at least one particular embodiment of the method and device for processing local information, and of the communicating portable terminal having such a device, with reference to the accompanying drawings, in which: Figure 1 shows, in logic diagram form, an embodiment of the method that is the subject of the present invention, Figure 2 shows, schematically, an embodiment of the device that is the subject of the present invention, and Figure 3 represents, schematically, an embodiment of the communicating portable terminal that is the subject of the present invention.
[0011] DESCRIPTION OF EXAMPLES OF EMBODIMENT OF THE INVENTION It should be noted from the outset that the figures are not to scale. FIG. 1 shows a particular embodiment of the method 10, the subject of the present invention, which comprises: - a step 11 of capturing images representative of the real environment, - a step 12 of locating a device in a virtual model, corresponding to the location of the device in the real environment, - a step 13 of determining the displacement of the device, comprising three sub-steps: a step 13-1 of motion estimation by the inertial unit, a step 13-2 of estimating the relative movement between the images captured at a given instant and the images captured at a later instant, and a step 13-3 of combining the motion estimates determined in steps 13-1 and 13-2, determining a displacement of the device, - a step 14 of attenuating the determined displacement, - a step 15 of modifying the position of the device in the virtual representation as a function of the attenuated displacement, - a step 16 of jointly displaying captured images and a portion of the virtual representation corresponding to the displayed captured image and - a step 17 of editing localized information on the virtual representation.
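Purely by way of illustration, the sequence of steps 11 to 17 can be summarized in the following Python sketch. All names here are hypothetical: the patent defines the steps, not an implementation, so each step is injected as a callable and a pose is reduced to a translation vector for brevity.

```python
# Illustrative sketch only: the steps of FIG. 1 are injected as callables,
# and a pose is reduced to a translation vector for brevity.
def run_method_10(capture, locate, estimate_imu_motion, estimate_visual_motion,
                  combine, attenuate, display, edit):
    image = capture()                                   # step 11
    pose = locate(image)                                # step 12: correlation with the model
    previous_image = image
    while True:
        image = capture()                               # continuous video mode
        motion_imu = estimate_imu_motion()              # step 13-1
        motion_img = estimate_visual_motion(previous_image, image)  # step 13-2
        displacement = combine(motion_imu, motion_img)  # step 13-3
        pose = pose + attenuate(displacement)           # steps 14 and 15
        display(image, pose)                            # step 16: joint display
        edit(pose)                                      # step 17: localized information
        previous_image = image
```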
[0012] Step 11 is performed by means of an image capture device. The image capture device is, for example, a still camera, a video camera or a scanner. In the remainder of the description, "camera" designates any image capture device. The camera can be monocular, stereoscopic, RGB-D or plenoptic. The camera performing the image capture in step 11 can be used in two modes: - a video mode for continuous shooting and - a still image mode configured for further analysis of certain shots, when shooting conditions are difficult for example. The video mode may include a sub-step of image quality processing by stabilization, denoising and super-resolution. This step is used in detailed local views, for example. The still image mode differs from the video mode in the absence of time constraints. The image can be of better quality and it is possible to implement a global localization strategy by jointly optimizing the location of all the images, for example by taking into account known characteristics of the shooting, such as the characteristics of panoramas. Step 12 is a calibration step. When the virtual representation is created, discriminant semantic structures are designated in it. The virtual representation can be a digital model, also called a "Digital MockUp" or DMU. The digital model is preferably produced using computer-aided design (CAD) software. The virtual representation may include: assembly or inspection information, tests and measurements, annotations, elements to be checked, and non-conformities. The information can be of different types: - text, such as metadata associated with the objects of the scene for example, - image, - geometry, - video or - a 3D scan that may have been acquired on the objects of the scene during a previous use. Preferably, the virtual representation is a simplification of the raw digital model, made during the design of the object represented. The raw digital model is filtered and organized in order to: select the objects relevant to the task to be performed, extract the metadata to be displayed or edited, organize the geometry simplification data and define the discriminant semantic structures by automatic analysis of the geometric structures present in the digital model. Preferentially, these discriminant semantic structures: minimize the natural ambiguity of the scene, maximize their probability of detection and possibly take reference structures into account. Discriminant semantic structures take reference structures into account when tolerance constraints so require. In this case, the discriminant semantic structures are selected exclusively on the reference elements. Discriminant semantic structures are preferably geometric structures such as points, lines, circles, ellipses, surfaces, parametric volumes, texture-rich elements or contours. Discriminant semantic structures can be: - multimodal and unambiguous visual landmarks or - easily detectable calibration patterns. The term "multimodal" means that they correspond to primitives extracted from the different available images, whatever their nature. Unambiguity means that they correspond to unique configurations or descriptions within a nearby neighborhood. Visual landmarks allow calibration without user intervention. Calibration patterns require the intervention of a user, who must locate the pattern approximately in the virtual representation. The pattern can be positioned on a surface to define a normal to that surface. The user can then quickly reproduce the normal and the positioning in the virtual representation.
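As a purely illustrative sketch of such a recognition, assuming OpenCV is available, ORB keypoints can stand in for the discriminant semantic structures (the patent names points, lines, circles, contours and so on, and does not prescribe a particular detector):

```python
import cv2
import numpy as np

def match_structures(rendered_view: np.ndarray, captured: np.ndarray):
    """Match candidate structures between a rendering of the virtual
    representation and a captured image of the real environment."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_model, des_model = orb.detectAndCompute(rendered_view, None)
    kp_image, des_image = orb.detectAndCompute(captured, None)
    if des_model is None or des_image is None:
        return []
    # Cross-checked brute-force matching discards ambiguous candidates,
    # a crude stand-in for the "unambiguity in a nearby neighborhood"
    # criterion described in the text.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_model, des_image), key=lambda m: m.distance)
    return [(kp_model[m.queryIdx].pt, kp_image[m.trainIdx].pt) for m in matches]
```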
In addition, a subsequent addition of automatically located patterns on the digital representation makes it possible to further increase the robustness of the method and the accuracy of the position of the device in the digital representation. The location is automatic when the newly positioned pattern is captured by the camera in an image containing at least one semantic structure already referenced.
[0013] After the location of the first pattern, a correlation between the discriminant semantic structures of the captured images and those of the virtual representation is performed. Preferentially, the correlation is performed from the contours extracted from the virtual representation, which are then aligned with the contours extracted from the image. This reduces drift during use in video mode. The initialization step 12 can be fully automated if the number of visual landmarks is sufficient to avoid having to place a pattern. In addition, the selection of the landmarks or patterns to be extracted can be performed according to different criteria: the proximity of an object of interest identified in the virtual representation, or the size of the landmarks or patterns, for example. During the location step 12, the initial position of the inertial unit is defined.
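A minimal sketch of the localization itself, assuming pinhole intrinsics K and 2D-3D correspondences between image detections and structures of the virtual representation; solvePnPRansac is one standard way to recover a camera pose from such correspondences, not the specific method of the patent:

```python
import cv2
import numpy as np

def locate_device(points_3d: np.ndarray, points_2d: np.ndarray, K: np.ndarray):
    """points_3d: Nx3 structures in the frame of the virtual representation,
    points_2d: Nx2 corresponding detections in the captured image,
    K: 3x3 camera intrinsics. Returns rotation and position in the model frame."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("not enough consistent correspondences")
    R, _ = cv2.Rodrigues(rvec)        # model-to-camera rotation
    position = (-R.T @ tvec).ravel()  # camera center in the model frame
    return R, position
```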
[0014] The step 13 of determining the displacement can be carried out in three sub-steps, 13-1, 13-2 and 13-3. Step 13-1, motion estimation by the inertial unit, is a step of calculating the displacement between the initial position of the inertial unit defined in step 12 and the estimated position of the inertial unit after displacement of the device. Step 13-2, estimating the relative movement between the images captured at a given instant and the images captured at a later instant, is an image processing step. More particularly, semantic structures are recognized in both images. By comparing the positions of these structures in the images, an estimate of the motion of the device can be determined. Step 13-2 can be iterative and exploits the capabilities of the virtual representation, for example a textured rendering, a depth map or a map of normals. Preferentially, this step pairs the 3D primitives extracted from the renderings mentioned above with 2D or 3D primitives extracted from the image acquired by the camera. The selection of visible 3D primitives is thereby handled inherently and a model preparation step is avoided. Step 13-3, combining the motion estimates determined in steps 13-1 and 13-2, determines a displacement of the device. Preferably, an estimate of the position using the correlation of step 12 is combined with the motion estimates to determine the position of the device at a later instant. A confidence level can be assigned to each estimate. The displacement is then preferably determined by weighting the estimates by the corresponding confidence levels. The process is therefore more robust and precise. For example, when the image is blurred, the confidence level assigned to the correlation position estimate is low. Step 13-3 is configured to: - limit the registration effort by reducing the number of reference landmarks, and thus the number of primitives defined in step 13-2, and - increase the robustness of re-registration in the event of a temporary loss of visual registration, if the camera is hidden for example. The step 14 of attenuating the displacement determined in step 13-3 is configured to attenuate the impression of floating between the virtual representation and the captured images. Step 14 may be a filtering of the rotation and translation data from the inertial unit. For example, the motion estimate from the inertial unit is filtered to minimize the impact of vibrations or small movements due to the user. Depending on the selection mode of the semantic structures, the small movements can be attenuated while maintaining alignment on the points of interest of the image. The step 15 of modifying the position of the device in the virtual representation as a function of the determined displacement is configured to precisely update the position of the device according to the displacement determined in step 13-3 and attenuated in step 14.
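A minimal sketch of the combination of step 13-3 and the attenuation of step 14, restricted to translations for brevity. The patent specifies weighting by confidence levels but no particular filter, so a simple exponential low-pass stands in here for the attenuation of vibrations and micromovements:

```python
import numpy as np

def combine_estimates(estimates):
    """estimates: list of (translation, confidence) pairs, e.g. from the
    inertial unit (13-1), image-to-image matching (13-2) and the
    correlation of step 12; confidences are scalars in [0, 1]."""
    weights = np.array([c for _, c in estimates], dtype=float)
    if weights.sum() == 0.0:
        return np.zeros(3)  # no usable estimate, e.g. camera hidden
    motions = np.array([m for m, _ in estimates], dtype=float)
    return (weights[:, None] * motions).sum(axis=0) / weights.sum()

class DisplacementFilter:
    """Exponential smoothing of the determined displacement (step 14):
    attenuates vibrations and small user-induced movements while still
    following genuine displacements."""
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.state = np.zeros(3)

    def update(self, displacement):
        d = np.asarray(displacement, dtype=float)
        self.state = self.alpha * d + (1.0 - self.alpha) * self.state
        return self.state
```

When the image is blurred, the confidence of the visual estimate can simply be lowered before calling combine_estimates, which reproduces the weighting behavior described above.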
[0015] Step 16, the joint display of captured images and a portion of the virtual representation corresponding to the displayed captured image, uses the position and orientation of the device, as modified in step 15 and located in the reference frame of the virtual representation, to define the part of the virtual representation to display. The joint display may be a display: - in juxtaposition, in which the images are placed next to each other, to allow a comparison for example, or - in superposition, in which the virtual representation is shown in transparency, in order to display information added to the raw digital model and related to the industrial process.
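A minimal sketch of the two joint display modes, assuming the relevant part of the virtual representation has already been rendered to an image the same size as the captured frame (OpenCV is used purely for illustration):

```python
import cv2
import numpy as np

def superpose(captured: np.ndarray, rendered: np.ndarray, transparency: float = 0.4):
    """Superposition mode: the virtual representation is blended in
    transparency over the captured image."""
    return cv2.addWeighted(captured, 1.0 - transparency, rendered, transparency, 0.0)

def juxtapose(captured: np.ndarray, rendered: np.ndarray):
    """Juxtaposition mode: the two views are placed next to each other
    to allow a comparison."""
    return cv2.hconcat([captured, rendered])
```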
[0016] The display may be: - a global view, which is a wide-field view giving an overall view of the scene to be analyzed, or - a local view, which is a detailed view or a zoom onto part of a global view, the local view being indicated in a global view.
[0017] The step 17 of editing localized information on the virtual representation is performed from the information display produced in step 16. Precisely localized information can be associated automatically or manually with the virtual representation. This information is, for example, provided by: - the operator, such as an indication of non-compliance in the real environment, - the camera, such as a photo, a video or a 3D scan, or - an external device providing, for example, a measurement of pressure, temperature or illumination level. This information is saved in the virtual representation and can be used during a future use. It can be edited to reflect the evolution of a local situation, such as the correction of a non-compliance, for example.
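A minimal sketch of a record for such localized information, assuming a simple annotation attached to a 3D point of the virtual representation; all field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, List, Tuple

@dataclass
class LocalizedAnnotation:
    position: Tuple[float, float, float]  # point in the frame of the virtual representation
    kind: str                             # e.g. "non-compliance", "measurement", "photo"
    payload: Any                          # text, image path, measured value...
    author: str = ""
    created: datetime = field(default_factory=datetime.now)
    history: List[Tuple[datetime, Any]] = field(default_factory=list)

    def edit(self, new_payload: Any) -> None:
        """Record the evolution of a local situation (e.g. a corrected
        non-compliance) while keeping the previous state."""
        self.history.append((self.created, self.payload))
        self.payload = new_payload
        self.created = datetime.now()
```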
[0018] The editing step can also be a user check of the local alignment. This check is preferably performed by means of the superposition display defined above. Preferably, the virtual representation is displayed in transparency and the alignment is performed from a still image. The operator can move a point position in 3D without calling the global alignment into question. The user can precisely align the real on the virtual with a zoom centered on the point to be aligned. The method 10 is carried out by means of a device 20. FIG. 2 shows a particular embodiment of the device 20 that is the subject of the present invention.
[0019] A camera 205 captures an image 210 and an image 215 subsequent to the image 210. The images 210 and 215 represent a real environment. The image 210 is transmitted to a device 225. The device 225 also takes into account a virtual representation 220 of the real environment. The device 225 extracts discriminant semantic structures from the virtual representation and from the captured image and performs a correlation configured to obtain a location 230 of the device in the reference frame of the digital representation, in accordance with step 12 of the method 10. The device 225 is a microprocessor, for example. The image 210 is also transmitted, with the image 215, to a device 235. The device 235 extracts the discriminant semantic structures of the two images and compares them to deduce an estimate of the motion that the device 20 has made between the instant when the image 210 was captured and the subsequent instant when the image 215 was captured. The device 235 is, for example, a microprocessor that performs the estimation according to step 13-2 of the method 10. The information 240 is the estimate of the movement of the device 20 at the output of the device 235.
[0020] An inertial unit 245 is initialized when the image 210 is captured. The inertial unit provides motion estimation information 250 to a device 255 in accordance with step 13-1 of the method 10. The device 255 may be a microprocessor. The device 255 takes into account the information 230 on the initial location of the device 20 and, according to the motion estimates 240 and 250, transmits a signal 260 corresponding to the new position of the device 20 in the virtual representation. The device 255 performs steps 13-3, 14 and 15 of the method 10. A device 270 is a display device. The device 270 jointly displays the image 215 and the virtual representation 220 at the position 260 corresponding to the image 215. The device 270 can also take into account information 265 to be edited on the virtual representation. The information 275 at the output of the device 270 is the edited virtual representation. FIG. 3 shows a particular embodiment of the communicating portable terminal 30 that is the subject of the present invention. The communicating portable terminal 30 comprises a display screen 270 connected to the rest of the device 20. The communicating portable terminal 30 is preferably: a digital tablet, a smartphone-type device, glasses, a helmet or a computer.
Claims:
Claims (10)
[0001]
CLAIMS 1. Method (10) for processing local information acquired by means of a virtual representation (220) and a device (20) comprising an inertial unit (245) and an image sensor (205), characterized in that it comprises the following steps: - capture (11) of at least one image (210) of a real environment of the device, - location (12) of the device in the virtual representation, corresponding to the location of the device in the real environment, by correlation of parts of a captured image and parts of the virtual representation, - determination (13) of the displacement of the device at least by means of the inertial unit and - modification (15) of the location of the device in the virtual representation as a function of the displacement determined by the inertial unit so that the actual position of the device corresponds, during the displacement, to the location of the device in the virtual representation.
[0002]
2. Method (10) for processing local information according to claim 1, wherein the step (13) of determining the displacement comprises the following steps: - estimation (13-1) of a movement by the inertial unit, - estimation (13-2) of a relative movement between the images captured at a given instant and the images captured at a later instant and - combination (13-3) of the motion estimates.
[0003]
3. Method (10) for processing local information according to one of claims 1 or 2, wherein, during the step (12) of locating the device, the correlation is performed by recognition, in at least one captured image (210), of semantic structures predefined in the virtual representation (220).
[0004]
4. Method (10) for processing local information according to one of claims 1 to 3, which comprises a step (14) of attenuating the displacement determined during the step (13) of determining the displacement.
[0005]
5. Method (10) for processing local information according to one of claims 1 to 4, which comprises a step (16) of jointly displaying a captured image (210) and a portion of the virtual representation (220) corresponding to the displayed captured image.
[0006]
6. Method (10) for processing local information according to one of claims 1 to 5, which comprises a step (17) of editing localized information (265) on the virtual representation (220).
[0007]
7. Local information processing device (20) which comprises: - an image sensor (205), which provides at least one image (210) of the real environment of the device, and - means for accessing a virtual representation (220), characterized in that it further comprises: - means (225) for locating the device in the virtual representation, corresponding to the location of the device in the real environment, by correlation of parts of a captured image and parts of the virtual representation, - an inertial unit (245) which determines a displacement (250) of the device and - means (255) for modifying the location of the device in the virtual representation as a function of the displacement determined by the inertial unit so that the actual position of the device corresponds, during the displacement, to the location of the device in the virtual representation.
[0008]
8. Local information processing device (20) according to claim 7, which comprises display means (270) configured to jointly display a captured image (210) and a portion of the virtual representation (220) corresponding to the displayed captured image.
[0009]
9. Local information processing device (20) according to one of claims 7 or 8, which comprises means (270) for editing localized information on the virtual representation (220).
[0010]
10. Communicating portable terminal (30), characterized in that it comprises a device (20) according to one of claims 7 to 9.
Similar technologies:
Publication number | Publication date | Patent title
EP2947628B1|2018-04-25|Method for processing local information
US10134196B2|2018-11-20|Mobile augmented reality system
US11003943B2|2021-05-11|Systems and methods for processing images with edge detection and snap-to feature
US10404969B2|2019-09-03|Method and apparatus for multiple technology depth map acquisition and fusion
US9576183B2|2017-02-21|Fast initialization for monocular visual SLAM
US10242454B2|2019-03-26|System for depth data filtering based on amplitude energy values
WO2016029939A1|2016-03-03|Method and system for determining at least one image feature in at least one image
WO2017041740A1|2017-03-16|Methods and systems for light field augmented reality/virtual reality on mobile devices
JP2008203991A|2008-09-04|Image processor
US9232132B1|2016-01-05|Light field image processing
CN111105462A|2020-05-05|Pose determination method and device, augmented reality equipment and readable storage medium
KR101863647B1|2018-06-04|Hypothetical line mapping and verification for 3D maps
Mori et al.2018|Visualization of the Past-to-Recent Changes in Cultural Heritage Based on 3D Digitization
FR3013492A1|2015-05-22|METHOD USING 3D GEOMETRY DATA FOR PRESENTATION AND CONTROL OF VIRTUAL REALITY IMAGE IN 3D SPACE
TWI516744B|2016-01-11|Distance estimation system, method and computer readable media
Schreyvogel et al.2019|Dense point cloud generation of urban scenes from nadir RGB images in a remote sensing system
JP2015005200A|2015-01-08|Information processing apparatus, information processing system, information processing method, program, and memory medium
FR3041747A1|2017-03-31|METHOD OF COLLABORATIVE REFERENCE
FR3032053A1|2016-07-29|METHOD FOR DISPLAYING AT LEAST ONE WINDOW OF A THREE-DIMENSIONAL SCENE, COMPUTER PROGRAM PRODUCT, AND DISPLAY SYSTEM THEREOF
FR3018362A1|2015-09-11|METHOD AND SYSTEM FOR ASSISTING THE LOCATION OF AN ELEMENT OF A COMPLEX SYSTEM, AND METHOD AND ASSEMBLY FOR SUPPORTING THE MAINTENANCE OF A COMPLEX SYSTEM
Patent family:
Publication number | Publication date
US9940716B2|2018-04-10|
RU2015119032A|2016-12-10|
BR102015011234A2|2016-08-09|
EP2947628B1|2018-04-25|
EP2947628A1|2015-11-25|
FR3021442B1|2018-01-12|
SG10201503996XA|2015-12-30|
CN105091817A|2015-11-25|
JP2015228215A|2015-12-17|
CN105091817B|2019-07-12|
US20150339819A1|2015-11-26|
CA2891159A1|2015-11-21|
JP6516558B2|2019-05-22|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US20080177411A1|2007-01-23|2008-07-24|Marsh Joseph C|Method and apparatus for localizing and mapping the position of a set of points on a digital model|
WO2011104167A1|2010-02-23|2011-09-01|Airbus Operations Limited|Recording the location of a point of interest on an object|
US8044991B2|2007-09-28|2011-10-25|The Boeing Company|Local positioning system and method|
JP5434608B2|2010-01-08|2014-03-05|トヨタ自動車株式会社|Positioning device and positioning method|
US9600933B2|2011-07-01|2017-03-21|Intel Corporation|Mobile augmented reality system|
JP5867176B2|2012-03-06|2016-02-24|日産自動車株式会社|Moving object position and orientation estimation apparatus and method|
EP2904417A1|2012-09-27|2015-08-12|Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image|
FR3022065B1|2014-06-04|2017-10-13|European Aeronautic Defence & Space Co Eads France|METHOD FOR GENERATING AN ENRICHED DIGITAL MODEL|
CA3032812A1|2016-08-04|2018-02-08|Reification Inc.|Methods for simultaneous localization and mappingand related apparatus and systems|
JP6799077B2|2016-12-15|2020-12-09|株式会社ソニー・インタラクティブエンタテインメント|Information processing system, controller device, controller device control method, and program|
JP6626992B2|2016-12-15|2019-12-25|株式会社ソニー・インタラクティブエンタテインメント|Information processing system, vibration control method, and program|
US10963055B2|2016-12-15|2021-03-30|Sony Interactive Entertainment Inc.|Vibration device and control system for presenting corrected vibration data|
WO2018193514A1|2017-04-18|2018-10-25|株式会社ソニー・インタラクティブエンタテインメント|Vibration control device|
US11145172B2|2017-04-18|2021-10-12|Sony Interactive Entertainment Inc.|Vibration control apparatus|
JP6887011B2|2017-04-19|2021-06-16|株式会社ソニー・インタラクティブエンタテインメント|Vibration control device|
JP6771435B2|2017-07-20|2020-10-21|株式会社ソニー・インタラクティブエンタテインメント|Information processing device and location information acquisition method|
Legal status:
2015-05-21| PLFP| Fee payment|Year of fee payment: 2 |
2015-11-27| PLSC| Publication of the preliminary search report|Effective date: 20151127 |
2016-05-20| PLFP| Fee payment|Year of fee payment: 3 |
2017-05-23| PLFP| Fee payment|Year of fee payment: 4 |
2017-07-28| CA| Change of address|Effective date: 20170622 |
2017-07-28| CD| Change of name or company name|Owner name: AIRBUS GROUP SAS, FR Effective date: 20170622 |
2018-05-22| PLFP| Fee payment|Year of fee payment: 5 |
2020-02-14| ST| Notification of lapse|Effective date: 20200108 |
Priority:
Application number | Filing date | Patent title
FR1454556A|FR3021442B1|2014-05-21|2014-05-21|METHOD OF PROCESSING LOCAL INFORMATION|
FR1454556|2014-05-21|FR1454556A| FR3021442B1|2014-05-21|2014-05-21|METHOD OF PROCESSING LOCAL INFORMATION|
EP15167278.9A| EP2947628B1|2014-05-21|2015-05-12|Method for processing local information|
CA2891159A| CA2891159A1|2014-05-21|2015-05-13|Treatment process for local information|
BR102015011234A| BR102015011234A2|2014-05-21|2015-05-15|local information processing process|
US14/714,273| US9940716B2|2014-05-21|2015-05-16|Method for processing local information|
RU2015119032A| RU2015119032A|2014-05-21|2015-05-20|METHOD FOR PROCESSING LOCATION DATA|
CN201510259721.9A| CN105091817B|2014-05-21|2015-05-20|Method for handling local message|
JP2015102494A| JP6516558B2|2014-05-21|2015-05-20|Position information processing method|
SG10201503996XA| SG10201503996XA|2014-05-21|2015-05-21|Method for processing local information|