METHOD AND SYSTEM FOR AUTOMATED MODELING OF A ROOM
Patent abstract:
The present invention relates to a method (100) for automatically modeling a room, comprising the following steps: acquisition (104) of several images in said room, each comprising at least one piece of color information and at least one piece of depth information, according to a plurality of acquisition positions and/or locations in said room; obtaining (126) from said images a point cloud relating to said room; detection (130), from said point cloud, of at least one so-called reference surface of said room; determination (132-142) of a two- or three-dimensional representation of said room by analyzing said point cloud slice by slice along a reference axis passing through said reference surface.

Publication number: FR3025918A1
Application number: FR1458550
Filing date: 2014-09-11
Publication date: 2016-03-18
Inventors: Jeremy Guillaume; Damien Dous
Applicant: Snapkin
Patent description:
[0001] The present invention relates to a method for the automated modeling of a room, in order to obtain a two- or three-dimensional representation of part or all of the room. It also relates to a system implementing such a method.

[0002] The field of the invention is the field of the representation of a room, or part of a room, in two or three dimensions, for example in order to provide an architectural plan of the room.

STATE OF THE ART

[0003] The modeling of a three-dimensional structure, and in particular of a room, for example in order to obtain a two- or three-dimensional representation of the room, can currently be carried out in several ways.

[0004] First of all, there are manual or semi-manual methods, which consist in measuring the dimensions of the various elements constituting the room and their relative positioning, and then carrying out a layout, purely by hand or by means of a computer tool, in order to obtain a representation of the room.

[0005] There are also automated methods which, from images acquired inside the room, perform a reconstruction of the various surfaces of the room to give a two- or three-dimensional representation. Manual processes are very time-consuming, on the one hand for measuring the dimensions and positions of the various elements of the room, and on the other hand for producing the layout of the room from these measurements. Automated processes are less time-consuming than manual processes. However, the currently known modeling methods suffer from other disadvantages. On the one hand, the currently known automated methods do not give a sufficiently accurate representation of the room, and often require the intervention of an operator to complete or correct the representation they provide. On the other hand, since the representation of a room is obtained from an analysis of images taken in the room, the currently known automated methods require rather heavy and expensive computing and processing resources. Finally, the currently known methods and systems model a room identically regardless of its nature, that is to say without taking into account the complexity of the surfaces constituting the room.

[0006] An object of the present invention is to overcome the aforementioned drawbacks. It is an object of the present invention to provide a method and a system for automatically modeling a room that provide a more accurate two- or three-dimensional representation of the room. Another object of the present invention is to provide a method and a system for modeling a room in an automated manner requiring fewer resources. Yet another object of the present invention is to provide a method and a system for automatically modeling a room that adapt to the complexity of the room.
DISCLOSURE OF THE INVENTION

The invention makes it possible to achieve at least one of these objectives by a method for automatically modeling a room, said method comprising the following steps:
- acquisition of a plurality of images in said room, comprising at least one piece of color information and at least one piece of depth information;
- obtaining from said images a point cloud relating to said room;
- detecting, from said point cloud, at least one so-called reference surface of said room;
- determination of a two- or three-dimensional representation of said room by analyzing said point cloud slice by slice along a reference axis passing through said reference surface, each slice comprising points of said cloud selected along said reference axis.

The method according to the invention thus bases the modeling of a room on a slice-by-slice analysis of a 3D point cloud of the room. In other words, according to the invention, the modeling of a room is carried out gradually, layer by layer. Such slice analysis allows modeling with lower computational resources compared to currently known modeling methods and systems, in which the room data are analyzed as a whole. In addition, a slice-by-slice analysis of the points representing the room allows a more accurate modeling of the room, for example by decreasing the thickness of each slice. In addition, the method according to the invention makes it possible to carry out a modeling personalized according to the type of room, by adapting the thickness of the slices to the complexity of the modeled room. When the room consists of relatively continuous surfaces with no features or objects, it is possible to increase the thickness of the slices, reducing the modeling time without losing precision. When the room consists of relatively complex surfaces with many features or objects, it is possible to reduce the thickness of the slices to take into account the complexity of the surfaces and thus achieve an accurate modeling of the room.

In the present invention, the term "color image" means a two-dimensional image representing the colorimetric information of the targeted region. The term "depth image" means a two-dimensional image representing the depth information of the targeted region, i.e., for each point represented, the distance between this point and an acquisition point. The term "point cloud" means a three-dimensional representation comprising the colorimetric information of the color image and the depth information of the depth image.

Advantageously, the image acquisition step may comprise an acquisition of at least one image, comprising at least one piece of color information and at least one piece of depth information, according to a plurality of so-called acquisition positions about an axis of rotation passing through a so-called acquisition location in said room. Preferably, for at least one acquisition location, the axis of rotation is a vertical axis, that is to say perpendicular to the floor of the room.

[0007] According to the invention, for each acquisition position, the method according to the invention may preferentially comprise an acquisition of two images, one of said images being a color image comprising color data, and the other being a depth image comprising depth data.
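To make the "point cloud" definition above concrete, the sketch below back-projects a depth image through a pinhole camera model and attaches the color of each pixel to the resulting 3D point. It is a minimal illustration in Python/NumPy, not the patented implementation; the function name is illustrative, and the intrinsics fx, fy, cx, cy are assumed to come from a prior calibration of the camera.

```python
import numpy as np

def rgbd_to_point_cloud(color, depth, fx, fy, cx, cy):
    """Back-project a depth image into a colored 3D point cloud.

    color: (H, W, 3) uint8 image; depth: (H, W) distances in metres;
    fx, fy, cx, cy: pinhole intrinsics from a calibration step.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # lateral coordinate per pixel
    y = (v - cy) * depth / fy
    valid = depth > 0                  # drop pixels with no depth return
    points = np.stack([x, y, depth], axis=-1)[valid]   # (N, 3) geometry
    colors = color[valid]                              # (N, 3) colorimetry
    return points, colors
```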
[0008] Alternatively, if the acquisition means allows it, for each acquisition position the method according to the invention may comprise the acquisition of a single image comprising both color data and depth data.

The method according to the invention may comprise a step of concatenating at least two images, preferably all the images, acquired for at least two acquisition positions, preferably for all the acquisition positions. A concatenation of the images taken for two acquisition positions, in particular consecutive ones, can be achieved by determining a displacement matrix describing the rotation(s) and/or translation(s) performed to pass from one of these positions to the other. This displacement matrix is then used to concatenate (or merge) the images taken for these two positions. The concatenation step can be performed to concatenate/merge the color images with one another, or the depth images with one another, or preferentially both.

Alternatively or in addition, the image acquisition step may comprise an acquisition of at least one image, comprising at least one piece of color information and at least one piece of depth information, according to a plurality of different acquisition locations in the room. This feature is particularly advantageous for large rooms, for which it still allows accurate modeling. The method according to the invention may then comprise a step of concatenating or merging the results of two acquisition steps performed at two different acquisition locations. Such a concatenation/merging step may preferentially comprise the following steps:
- obtaining a point cloud for each acquisition location;
- detecting the same surface of the room, or one or more correspondence points, in each point cloud, for example the ceiling or the floor of the room;
- determining a linear transformation connecting this surface, respectively these correspondence points, in one of the two point clouds, to this same surface, respectively these correspondence points, in the other of the two point clouds; and
- registration of one of the two point clouds with said linear transformation.

Such a concatenation is relative, that is to say it registers one of the two point clouds relative to the other. [0009] The concatenation can also be absolute, performing a registration of each point cloud with respect to a previously fixed absolute reference frame.

The method according to the invention may comprise an acquisition step combining the two variants that have just been defined, namely: at least two acquisition locations in the room and, for each acquisition location, at least one acquisition position about an axis of rotation passing through said acquisition location.

Advantageously, the acquisition step may comprise, for at least one acquisition position or at least one acquisition location, the acquisition of images of at least two different regions of the room along at least two different viewing directions, so that two adjacent regions have a common area. In particular, the two viewing directions are not parallel to each other. Such an acquisition may preferentially be performed by at least two cameras positioned so as to have an identical orientation in a horizontal plane in the room and different orientations in a vertical plane in the room. Alternatively, such an acquisition can also be achieved by a single camera repositionable both in a vertical plane and in a horizontal plane.
In this case, the acquisition step can perform an acquisition of images:
- in a first direction for all the acquisition positions or locations, then in another direction for all the acquisition positions or locations, and so on; or
- in all directions for one acquisition position or location, then in all directions for the next acquisition position or location, and so on.

The method according to the invention may further comprise, for at least one acquisition position or location, a step of concatenating/merging the acquisitions made for at least two adjacent regions, said step comprising the following steps:
- estimating a linear transformation between the two adjacent regions using the positions of the acquisition tool or tools used for said regions;
- detecting at least one correspondence point in the area common to the two adjacent regions;
- improving the estimation of said linear transformation according to said at least one correspondence point;
- refining the linear transformation using a predetermined minimization method; and
- concatenating or merging the point clouds of said regions using said linear transformation.

[0010] The correspondence point or points can be detected by analyzing the color image, and/or the depth image, of each zone, and can correspond to an object in the room, or to a feature of a surface of the room, visible in the images acquired for each of the regions. [0011] Detection of a correspondence point in two images can be achieved by any known algorithm, such as for example the Scale Invariant Feature Transform (SIFT) algorithm. The matching of two images can be performed by the technique known as FLANN (Fast Library for Approximate Nearest Neighbors).
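As an illustration of the two preceding paragraphs, the sketch below detects SIFT points of interest in two overlapping images and matches them with FLANN, keeping only matches that pass Lowe's ratio test. It assumes Python with OpenCV (SIFT requires OpenCV ≥ 4.4 or the contrib package); the function name and thresholds are illustrative.

```python
import cv2

def find_correspondences(img_a, img_b, ratio=0.7):
    """Return matched pixel coordinates between two overlapping images."""
    sift = cv2.SIFT_create()
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    # FLANN with a KD-tree index (algorithm=1) over the SIFT descriptors
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]   # correspondence points
    pts_b = [kp_b[m.trainIdx].pt for m in good]   # in each image
    return pts_a, pts_b
```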
Advantageously, the method according to the invention may comprise, before the concatenation or merging step, a step of rebalancing the detected linear transformation(s). [0012] Indeed, when several linear transformations are detected in order to concatenate or merge several images of several regions, there are uncertainties about the linear transformations detected individually, especially when the number of correspondence points between two adjacent regions is small, for example in the case of a surface without variation. In this case, the method according to the invention makes it possible, during this rebalancing step, to modify at least one, and preferably each, of the linear transformations so as to reduce the uncertainties over all the detected linear transformations. Such a rebalancing step can be performed by any known algorithm, such as for example the technique known as SLAM (Simultaneous Localization and Mapping).

The refinement of the linear transformation can be carried out by any known algorithm, such as for example the minimization algorithm known as ICP (Iterative Closest Point). According to the invention, the detection of a plane or a surface in a point cloud can be carried out using a known shape recognition algorithm, such as for example the RANSAC (RANdom SAmple Consensus) algorithm.

Advantageously, the method according to the invention may comprise an adaptation of the brightness of the images acquired during the acquisition step. [0013] This makes the process robust against variations in brightness between two acquisition positions, and/or two acquisition locations, and/or between two acquisition moments during the day. Such an adaptation may, in a non-limiting manner, implement the HDRI (high-dynamic-range imaging) technique.

According to a particularly preferred version of the invention, the reference surface may be a horizontal surface of the room, and more preferably a ceiling or a floor of said room, the detection step comprising a search in the point cloud for a horizontal plane. This version of the method according to the invention is particularly advantageous because the ceiling of a room, or more generally a surface of a room whose normal is directed towards the floor of the room, is the surface with the fewest features or objects. Its detection and characterization are therefore simpler, faster and more accurate than for the other surfaces composing the room. The detection of such a surface can be carried out, in a non-limiting manner, by the RANSAC algorithm applied to the point cloud in order to detect a horizontal surface. [0014] The recognition of such a surface makes it possible to align the point cloud so that the reference axis is vertical, in order to obtain vertical walls and a horizontal ceiling and floor.

Advantageously, the analysis of at least one slice of the point cloud may comprise:
- a conversion of the points belonging to said slice into a two-dimensional image in a plane perpendicular to the reference axis, such as for example a binary or gray-level image; and
- a search for at least one segment, preferably all segments, present on said image.

Converting the points belonging to a slice into a two-dimensional image can be achieved by removing, for each of these points, the coordinate of the point along the reference axis (or an axis parallel to the reference axis) and keeping for that point only the coordinates along the other two axes. The search and identification of the segment or segments in the two-dimensional image can be performed by any algorithm known to those skilled in the art, such as for example the Hough technique.
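A minimal sketch of this slice analysis, assuming Python with NumPy and OpenCV: the points of one slice are rasterized into a binary image in the plane perpendicular to the reference axis, and the probabilistic Hough transform extracts the segments. The pixel resolution and the Hough thresholds are illustrative values.

```python
import numpy as np
import cv2

def analyze_slice(points, z_min, z_max, res=0.01):
    """Rasterize one slice of a point cloud and extract its line segments.

    points: (N, 3) array with z along the reference axis; res: metres/pixel.
    """
    sl = points[(points[:, 2] >= z_min) & (points[:, 2] < z_max)]
    if len(sl) == 0:
        return []                        # empty slice: the analysis can stop
    pix = ((sl[:, :2] - sl[:, :2].min(axis=0)) / res).astype(int)
    img = np.zeros(pix.max(axis=0) + 1, dtype=np.uint8)
    img[pix[:, 0], pix[:, 1]] = 255      # binary image of the slice
    lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1,y1,x2,y2)
```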
The method according to the invention may further comprise, for at least one detected object, a step of characterizing said object, with a view to determining a parameter relating to said object, namely at least one dimension of the object such as the length , the width, the depth of the object or a shape that makes up the object. According to a particular and nonlimiting limiting embodiment, the characterization of an object may advantageously comprise the following steps for each object: determination of an approximate position, and possibly approximate dimensions, of said object by analysis of at least one two-dimensional image provided for at least one slice; depending on said approximate position, and possibly on said approximate dimensions, selecting a color image and / or a depth image representing a zone encompassing said object; detecting contours relating to said object on said color image and / or on said depth image. [0018] Preferably, the contour detection is performed on both the binary images representing each of the slices and on the color image, and / or the depth image. This makes it possible to obtain more data on the object. For example, the binary images representing each of the slices can be used to determine the size of the object and the color image and / or the depth image can determine whether the object has visual / textural characteristics / or depth similar or different from the wall in which it is located. [0019] Contour detection can be performed by any known method, such as for example using the watershed segmentation technique. [0020] Advantageously, the method according to the invention may comprise, for at least one object, a step of determining the nature of said object by a neuron network with supervised learning. The neural network may be previously trained with known objects, and take into account the results of the method according to the invention in the prior modeling of parts. The neural network may take as input at least one of the data, in particular provided by the characterization step, namely: at least one dimension of the object, a position of the object, a color of the object, - a bi- or three-dimensional contour of the object, - etc. [0021] According to another aspect of the invention there is provided a system for automatically modeling a part. This system can in particular be configured to implement all the steps of the method according to the invention. The system according to the invention may comprise: at least one acquisition means for taking a plurality of images in said part comprising at least one color information and at least one depth information; and calculating means for implementing the steps of the method according to the invention to obtain from the acquired images a two- or three-dimensional representation of the part. The at least one acquisition means may be configured to implement the acquisition step as described above according to the different variants described. In particular, and in a manner that is in no way limitative, the at least one acquisition means may comprise at least two cameras rotating about a vertical axis, and oriented along two non-parallel and mutually intersecting viewing directions. substantially at a vertical axis. [0022] The at least one acquisition means may be rotated by any known means, such as a stepper motor. 
[0020] Advantageously, the method according to the invention may comprise, for at least one object, a step of determining the nature of said object by a neural network with supervised learning. The neural network may be previously trained with known objects, and take into account the results of the method according to the invention in prior modelings of rooms. The neural network may take as input at least one of the data items provided, in particular, by the characterization step, namely:
- at least one dimension of the object,
- a position of the object,
- a color of the object,
- a two- or three-dimensional contour of the object,
- etc.

[0021] According to another aspect of the invention, there is provided a system for automatically modeling a room. This system can in particular be configured to implement all the steps of the method according to the invention. The system according to the invention may comprise:
- at least one acquisition means for taking a plurality of images in said room, comprising at least one piece of color information and at least one piece of depth information; and
- calculation means for implementing the steps of the method according to the invention to obtain, from the acquired images, a two- or three-dimensional representation of the room.

The at least one acquisition means may be configured to implement the acquisition step as described above, according to the different variants described. In particular, and in a manner that is in no way limiting, the at least one acquisition means may comprise at least two cameras rotating about a vertical axis and oriented along two non-parallel viewing directions intersecting substantially on the vertical axis. [0022] The at least one acquisition means may be rotated by any known means, such as a stepper motor.

DESCRIPTION OF THE FIGURES AND EMBODIMENTS

Other advantages and features will appear on examining the detailed description of non-limiting examples and the appended drawings, in which:
- FIGURE 1 is a schematic representation of the steps of a non-limiting example of a method according to the invention;
- FIGURES 2a to 2h are schematic representations of the results provided by certain steps of the method 100 of FIGURE 1;
- FIGURES 3a and 3b are schematic representations of a non-limiting example of a neural network used in the method according to the invention; and
- FIGURE 4 is a schematic representation of a non-limiting example of a system according to the invention.

It is understood that the embodiments which will be described in the following are in no way limiting. It will be possible, in particular, to imagine variants of the invention comprising only a selection of the characteristics described subsequently, isolated from the other described characteristics, if this selection of characteristics is sufficient to confer a technical advantage or to differentiate the invention with respect to the prior art. This selection comprises at least one, preferably functional, characteristic without structural details, or with only a part of the structural details if this part alone is sufficient to confer a technical advantage or to differentiate the invention from the state of the prior art. In particular, all the variants and all the embodiments described can be combined with one another if nothing prevents this combination from the technical point of view. In the figures, the elements common to several figures retain the same reference.

FIGURE 1 is a schematic representation of a non-limiting example of a method according to the invention. [0023] The method 100 shown in FIGURE 1 includes a step 102 of calibrating the camera or cameras used for acquiring images in the room. In addition, if there is more than one camera, this step includes calibrating the cameras together. This calibration step can be performed by any known method.

[0024] Then, the method 100 comprises an acquisition phase 104 carrying out an acquisition of a plurality of images in the room to be modeled. The acquisition phase 104 comprises a step 106 of acquiring images at an acquisition position about a vertical axis of rotation passing through an acquisition location in the room. This step takes a color image and a depth image of two different regions of the room, along at least two different viewing directions. The two viewing directions are chosen so that the two imaged regions comprise a common area, that is to say an area appearing in the images of each region. This step 106 therefore provides, for an acquisition position:
- a color image, a depth image and a point cloud of a first region, and
- a color image, a depth image and a point cloud of a second region comprising a common area with the first region.

Of course, this step 106 can acquire images of a single region of the room, or of more than two regions. Each color image and/or depth image is acquired according to a digital imaging technique with a high dynamic range, so as to make the image taken independent of differences in brightness. Each image taken is stored in association with:
- an identifier of the region, for example region 1, region 2, and so on;
- an identifier of the acquisition position, for example position 1, position 2, and so on; and
- an identifier of the acquisition location, for example location 1, location 2, and so on.
Then, when step 106 has completed the image acquisition at the current acquisition position, a step 108 determines whether the current acquisition position is the last acquisition position for that acquisition location. If, in step 108, the current acquisition position is not the last acquisition position for this acquisition location, then a step 110 performs a move, namely a rotation, to the next acquisition position at this acquisition location. The process resumes at step 106, and so on. If, in step 108, the current acquisition position is the last acquisition position for that acquisition location, then a step 112 determines whether the current acquisition location is the last acquisition location. [0025] If, in step 112, the current acquisition location is not the last acquisition location, then a step 114 performs a move, namely a translation, to the next acquisition location. The method resumes at step 106 for the new acquisition location. If, in step 112, the current acquisition location is the last acquisition location, then the acquisition phase 104 is complete.

The method 100 then comprises a phase 116 for obtaining a point cloud representing the room as a function of the acquisitions made during the acquisition phase 104. [0026] This phase 116 comprises a step 118 of concatenating the images of at least two different acquisition regions for the same acquisition position. This concatenation is performed by considering adjacent regions two by two. For two adjacent regions comprising a common zone, step 118 carries out the following operations:
- estimation of the linear transformation between the two adjacent regions by means of the positions and orientations of the camera(s) for the two regions;
- detection, by the SIFT algorithm, of several points of interest in each of the images in the common area, and creation of the correspondence points, using the FLANN technique, from the points of interest between the two images;
- extraction of the three-dimensional points corresponding to the correspondence points in each of the two images;
- estimation of the linear transformation by the least squares method; and
- refinement of the linear transformation using the ICP (Iterative Closest Point) method.

[0027] This step 118 provides, for an acquisition position, the linear transformations that make it possible to concatenate/merge the point clouds of the adjacent regions into a single point cloud comprising all the regions imaged at this acquisition position. This step 118 is performed independently for each of the acquisition positions of each of the acquisition locations.
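As an illustration of the last operation of step 118, the refinement of a coarse linear transformation by ICP can be sketched as follows, assuming Python with the Open3D library; the function name, the correspondence distance and the initial transform are illustrative.

```python
import open3d as o3d

def refine_with_icp(source_pts, target_pts, init_transform, max_dist=0.05):
    """Refine a coarse rigid transform between two overlapping point clouds.

    source_pts, target_pts: (N, 3) NumPy arrays; init_transform: 4x4 matrix
    estimated from the camera poses and the correspondence points.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # refined 4x4 homogeneous transform
```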
The obtaining phase 116 then comprises a step 120 performing a calculation of the linear transformations that allow the concatenation/merging of the point clouds for all the acquisition positions of an acquisition location. To do this, the acquisition positions are considered in pairs, and this step 120 comprises the following operations:
- estimation of the linear transformation between the two adjacent positions using the position of the acquisition tool;
- detection, by the SIFT algorithm, of several points of interest in each of the images in a zone common to the two adjacent positions, and creation of the correspondence points, using the FLANN technique, from the points of interest between the two images;
- extraction of the three-dimensional points corresponding to the correspondence points in each of the two images;
- estimation of the linear transformation by the least squares method; and
- refinement of the linear transformation using the ICP (Iterative Closest Point) method.

This step 120 provides, for an acquisition location, the linear transformations that make it possible to concatenate/merge the point clouds for all the acquisition positions. [0028] This step 120 is performed for each acquisition location independently.

The obtaining phase 116 then comprises a step 122 performing a calculation of the linear transformations that allow the concatenation/merging of the point clouds for all the acquisition locations. To do this, the acquisition locations are considered two by two, and this step 122 comprises the following operations:
- detection, by the RANSAC algorithm, of the same surface of the room, for example the ceiling, in each of the two depth images, or point clouds, associated with the two locations; and
- determination of a linear transformation connecting this surface in one of the two point clouds to this same surface in the other of the two point clouds.

[0029] The obtaining phase 116 then comprises a step 124 carrying out an organization and a rebalancing of the different linear transformations with respect to one another, so as to concatenate/merge the point clouds with respect to one another precisely and without offset. [0030] To do this, the method of the invention uses the technique known as SLAM (Simultaneous Localization and Mapping). The obtaining phase 116 then comprises a step 126 carrying out a concatenation/merging of the point clouds using said linear transformations. When step 126 is completed, the method has a point cloud representing the room. A representation of a point cloud obtained by the obtaining phase 116 is visible in FIGURE 2a.

The method 100 comprises, after the phase 116, a phase 128 for determining a two- or three-dimensional representation of the room from the point cloud provided by the phase 116. The phase 128 comprises a step 130 of detecting a reference surface, which in the present, in no way limiting, example is the ceiling of the room. [0031] To do this, the RANSAC algorithm is applied to the points of the point cloud representing the room that have a downward-facing normal, in order to identify a surface whose normal is directed downwards, more precisely towards the floor of the room. A representation of step 130 is visible in FIGURE 2b.
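A minimal sketch of this detection in Python/NumPy: a hand-rolled RANSAC that fits planes to random point triplets, keeps only candidates whose normal is close to vertical, and returns the plane with the most inliers. Among several horizontal planes, the ceiling can then be taken as the one with the highest mean z. Parameter values are illustrative.

```python
import numpy as np

def detect_horizontal_plane(points, n_iter=500, dist=0.02, normal_tol=0.95):
    """RANSAC search for the dominant horizontal plane in a point cloud."""
    rng = np.random.default_rng(0)
    best_plane, best_inliers = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        n /= norm
        if abs(n[2]) < normal_tol:
            continue                     # normal not vertical enough
        d = -n @ p0
        inliers = np.abs(points @ n + d) < dist
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers      # plane (n, d) and inlier mask
```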
[0032] When the reference surface has been detected, a step 132 performs a slice-by-slice analysis of the point cloud representing the room along an axis perpendicular to the reference surface, that is to say to the ceiling of the room, called the reference axis. In the following description, we will adopt a reference frame (x, y, z), and the reference axis will be the z axis.

[0033] Concretely, step 132 first selects all the points in the point cloud representing the room whose z coordinate is between z1 and z2, with z1 the coordinate along the z axis of the reference surface. A representation of a slice of the point cloud is visible in FIGURE 2c. The selected points are projected onto the (x, y) plane to obtain a two-dimensional image of this first slice. This two-dimensional image is, in this example, a black-and-white image. A representation of the two-dimensional image of said slice is visible in FIGURE 2d. The two-dimensional image is analyzed with the Hough technique to determine the segments in this image. Then a second slice, corresponding to the set of points of the point cloud whose z coordinate is between z2 and z3, is selected and then analyzed in the same way, and so on until a slice comprising no points is reached.

[0034] Each slice may have a predetermined thickness along the z axis, denoted Δz. This thickness may be identical for all the slices, or different for at least one slice. The thickness Δz of one or more slices can be adapted according to the nature of the surfaces of the room: the more complex the surfaces, the smaller the thickness, and vice versa. Step 132 provides, for each slice analyzed, the segments detected in this slice and, for each segment, the segment equation, the length of the segment and the position of the segment. A representation of step 132 is visible in FIGURE 2e.

[0035] A step 134 performs a detection of the surfaces of the room. To do this, step 134 compares the equations of the segments identified for each of the slices and merges the segments having the same equation. Step 134 thus makes it possible to obtain a plurality of unique equations, in the (x, y) plane, each corresponding to a surface of the room. The width of each identified surface corresponds to the length of the segment, its position to the position of the segment, and its height to the sum of the thicknesses of the slices on which this segment has been identified. Step 134 thus provides a plurality of surfaces and, for each surface, its equation in the (x, y, z) frame.
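A minimal sketch of step 134, in plain Python: the segments coming from successive slices are grouped by their supporting line in the (x, y) plane, described here in (theta, rho) form, and the thicknesses of the slices carrying the same line are accumulated to obtain the height of each surface. The tolerances and the data layout are illustrative.

```python
def merge_segments_into_surfaces(slices, angle_tol=0.05, offset_tol=0.03):
    """Group per-slice segments by supporting line to recover wall surfaces.

    slices: list of (thickness, segments) pairs, each segment given as
    (theta, rho, length) describing its line x*cos(theta) + y*sin(theta) = rho.
    """
    surfaces = []
    for thickness, segments in slices:
        for theta, rho, length in segments:
            for s in surfaces:
                if (abs(s["theta"] - theta) < angle_tol
                        and abs(s["rho"] - rho) < offset_tol):
                    s["height"] += thickness          # same line: same wall
                    s["length"] = max(s["length"], length)
                    break
            else:                                     # no matching surface yet
                surfaces.append({"theta": theta, "rho": rho,
                                 "length": length, "height": thickness})
    return surfaces
```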
[0036] Then a step 136 carries out a detection of object(s), each object possibly being, without limitation:
- a door located in a wall of the room,
- a window located in a wall of the room,
- an object hung on a wall of the room,
- an object placed on the ground against a wall of the room, or
- a shape made in a wall of the room.

To do this, each unique equation obtained during step 134 is analyzed for each of the slices:
- if a discontinuity appears in a segment, this means that an object is present in front of said surface;
- if two segments are aligned, this means that an object is present in said surface.

By analyzing all the slices, the method according to the invention detects the height, the width and the depth of said objects: since each object is found on several slices, grouping this information makes it possible to deduce its height, width and depth. [0037] All the objects are thus identified and, for each object, step 136 provides the approximate position of the object. A representation of step 136 is visible in FIGURE 2f, in which one window object and two cabinet objects have been detected.

A step 138 performs a precise characterization of each object. For each object, knowing its approximate position, a contour detection is performed in the color image. This contour detection provides, for each object, its precise contours, and therefore its position and dimensions, and possibly at least one characteristic of the object, for example its color. [0038] A representation of step 138 is visible in FIGURE 2g, in which the window object has been identified on one of the surfaces of the room.

The method 100 shown in FIGURE 1 includes a step 140 determining the nature of each object. To do this, step 140 uses a previously "trained" neural network with supervised learning. For each object, the characteristics obtained during step 138 are supplied to the neural network which, according to its prior learning, provides the nature of the object. [0039] When all the surfaces and objects of the room have been identified and characterized, a step 142 provides a two- or three-dimensional representation of the room according to a previously indicated point of view. A representation of step 142 is visible in FIGURE 2h.

FIGURE 3a is a schematic representation of a non-limiting example of a neuron 300 and its connections that can be implemented in the method according to the invention, in particular during step 140 of the method 100 of FIGURE 1. [0040] FIGURE 3b shows an example of a supervised learning model 302 to which the neural network consisting of several neurons 300 is subjected. The learning of such a neural network is achieved by providing it with the characteristics of a multitude of input objects and by indicating, for each object, the nature it must provide as output.
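To illustrate step 140 and the learning of FIGURES 3a and 3b, the sketch below trains a small supervised neural network on labelled objects and then predicts the nature of a new object from its characteristics. It assumes Python with scikit-learn; the feature vector (dimensions and mean color) and the training examples are entirely hypothetical.

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical training set: one feature vector per already-labelled object,
# here (width m, height m, depth m, mean R, mean G, mean B).
X_train = [
    [0.90, 2.10, 0.05, 120, 80, 40],    # door
    [1.20, 1.00, 0.05, 200, 220, 255],  # window
    [0.60, 1.80, 0.40, 90, 70, 60],     # cabinet
]
y_train = ["door", "window", "cabinet"]

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)               # supervised learning phase

# At run time, the characteristics from step 138 are fed to the network,
# which returns the nature of the object (e.g. "window").
print(net.predict([[1.15, 0.95, 0.05, 210, 215, 250]]))
```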
FIGURE 4 is a schematic representation of a non-limiting example of a system according to the invention. The system 400 shown in FIGURE 4 comprises two cameras 402 and 404 arranged on a support 406. The support 406 is rotatably mounted on a numerically controlled stepper motor 408. [0041] The system further includes a computer module 410 connected to the stepper motor 408 and to each of the cameras 402 and 404. The computer module 410 includes a control output for the stepper motor 408 and a control output for each of the cameras 402 and 404, to trigger an image capture. [0042] The computer module 410 further comprises inputs, namely:
- an input for each camera 402 and 404, for receiving from each camera the images it has taken; and
- optionally, an input for the motor 408, used by the motor 408 to indicate to the computer module the rotation it is performing.

The computer module 410 further comprises storage means and calculation means for implementing the steps of the method according to the invention, such as for example steps 118 to 142 of the method 100 described with reference to FIGURE 1. The cameras 402 and 404, the support 406, the motor 408 and the computer module are arranged on a tripod 412, whose height is adjustable. Preferably, the height of the tripod 412 is adjusted so that the cameras 402 and 404 are substantially at mid-height between the floor and the ceiling of the room. The motor 408 rotates the support 406 about an axis of rotation 414 substantially perpendicular to the floor of the room. The camera 402 is positioned along a viewing direction materialized by the descending axis 416 and makes it possible to take images of a first region 418 around the axis 416. The camera 404 is positioned along a viewing direction materialized by the ascending axis 420 and makes it possible to take images of a second region 422 around the axis 420. More particularly, the axes 416 and 420 each form an angle of substantially equal absolute value with a horizontal plane, parallel to the ground, passing through the center of the support 406. The regions 418 and 422, imaged by the cameras 402 and 404 respectively, comprise a common area 424.

The computer module 410 is equipped with instructions for:
- controlling the stepper motor 408 to position the support 406 at a plurality of acquisition positions about the vertical axis 414, the change from one acquisition position to the next being performed solely by rotation;
- triggering an image capture by each camera 402 and 404 for each acquisition position, each camera 402 and 404 being equipped with a color image sensor, providing a color image of the region associated with it, and an infrared sensor, providing a depth image of that region; and
- memorizing the color image and the depth image taken by each camera for each acquisition position.

Alternatively, the steps of the method according to the invention can be carried out by a computer module other than the computer module 410, such as for example a remote server. In this case, according to a first example, the computer module 410 may be in communication with the remote server via a wireless communication network, such as the mobile telephone network. According to a second example, the data stored by the computer module 410 can be transferred to the remote server "manually", that is to say by wired connection of the computer module 410 to the remote server, or via USB-type memory storage means.

[0043] Of course, the invention is not limited to the examples which have just been described, and numerous adjustments can be made to these examples without departing from the scope of the invention.
Claims (15)

1. A method (100) for automatically modeling a room, said method (100) comprising the following steps:
- acquisition (104) of a plurality of images in said room comprising at least one piece of color information and at least one piece of depth information;
- obtaining (116), from said images, a point cloud relating to said room;
- detection (130), from said point cloud, of at least one so-called reference surface of said room;
- determination (132-142) of a two- or three-dimensional representation of said room by slice-by-slice analysis of said point cloud along a reference axis passing through said reference surface, each slice comprising points of said cloud selected along said reference axis.

2. Method (100) according to claim 1, characterized in that the image acquisition step (104) comprises an acquisition of at least one image, comprising at least one piece of color information and at least one piece of depth information, according to a plurality of so-called acquisition positions about an axis of rotation passing through a so-called acquisition location in said room.

3. Method (100) according to any one of the preceding claims, characterized in that the image acquisition step (104) comprises an acquisition of at least one image, comprising at least one piece of color information and at least one piece of depth information, according to a plurality of different acquisition locations in the room.

4. Method (100) according to claim 2 or 3, characterized in that the acquisition step (104) comprises, for at least one acquisition position, respectively at least one acquisition location, an acquisition (106) of image(s) of at least two different regions (418, 422) along at least two different viewing directions (416, 420), so that two adjacent regions (418, 422) comprise a common area (424).

5. Method (100) according to the preceding claim, characterized in that it comprises, for at least one acquisition position, respectively at least one acquisition location, a step (118, 126) of concatenating/merging the acquisition made for at least two adjacent regions (418, 422), said step (118, 126) comprising the following steps:
- estimating a linear transformation between said adjacent regions using the positions of the acquisition tool(s) used for said regions;
- detecting at least one correspondence point in the area common to the two adjacent regions;
- improving the estimation of said linear transformation according to said at least one correspondence point;
- refining the linear transformation using a predetermined minimization method; and
- concatenation or merging of the point clouds of said regions using said linear transformation.

6. Method (100) according to any one of the preceding claims, characterized in that it comprises an adaptation of the brightness of the images acquired during the acquisition step (106).

7. Method (100) according to any one of the preceding claims, characterized in that the reference surface corresponds to a horizontal surface having a vertical normal, in particular a ceiling or a floor of said room, the detection step (130) comprising a search in the point cloud for a plane having a downward-directed normal, in particular towards the floor of the room.

8.
Method (100) according to any one of the preceding claims, characterized in that the analysis (132) of at least one slice of the point cloud comprises:
- a conversion of the points belonging to said slice into a two-dimensional image in a plane perpendicular to the reference axis; and
- a search for at least one segment, preferably all segments, present on said image.

9. Method (100) according to any one of the preceding claims, characterized in that the determining step (132-142) comprises an identification (134) of a surface of said room according to the results provided by the analysis of a plurality of slices.

10. Method (100) according to any one of the preceding claims, characterized in that the determining step (132-142) comprises a detection (136) of at least one object at at least one surface of said room according to results provided by the analysis of a plurality of slices.

11. Method (100) according to the preceding claim, characterized in that it comprises, for at least one object, a step (138) of characterizing said object, said characterization comprising the following steps for each object:
- determination of an approximate position, and possibly approximate dimensions, of said object;
- depending on said approximate position, and possibly said approximate dimensions, selection of a color or depth image representing a zone encompassing said object;
- detection of contours relating to said object on said color or depth image.

12. Method (100) according to claim 10 or 11, characterized in that it comprises, for at least one object, a step (140) of determining the nature of said object by a neural network (300) with supervised learning.

13. Method (100) according to any one of claims 10 to 12, characterized in that at least one object corresponds to:
- a door located in a wall of the room,
- a window located in a wall of the room,
- an object hung on a wall of the room,
- an object placed on the ground against a wall of the room, or
- a shape made in a wall of the room.

14. System (400) for automatically modeling a room, in particular configured to implement all the steps of the method (100) according to any one of the preceding claims, said system (400) comprising:
- at least one acquisition means (402, 404) for taking a plurality of images in said room comprising at least one piece of color information and at least one piece of depth information; and
- calculation means (410) for carrying out the steps of the method (100) according to any one of the preceding claims to obtain, from the acquired images, a two- or three-dimensional representation of the room.

15. System (400) according to the preceding claim, characterized in that the at least one acquisition means comprises at least two cameras (402, 404) rotatable about a vertical axis (414) and oriented along two non-parallel viewing directions (416, 420) intersecting substantially on the vertical axis (414).
|