Optical Characterization of the Beams Generated by 3-D LiDARs: Proposed Procedure and Preliminary Results on MRS1000

In recent years, the automotive industry has been pushing the continuous development of better performing and less expensive measuring systems to support advanced driver-assistance systems (ADAS). In this scenario, light detection and ranging (LiDAR) systems are one of the key enabling technologies. This has led, and will continue to lead, to the availability of more and more LiDAR systems and manufacturers. Given the relevance of the topic, many studies and some national and international standards have recently been proposed or updated. However, such methods and standards focus on the analysis of the overall system performance and do not allow specific aspects of LiDAR systems to be investigated. As an example, despite the relevance that spot size and divergence have in the evaluation of LiDAR system performance, such parameters are not always, if ever, fully provided by LiDAR manufacturers and, to the best of our knowledge, no standard or measurement method has been previously proposed for their analysis. In this article, we propose novel methods for the characterization and comparison of LiDAR systems, with particular focus on the analysis of the spatiotemporal arrangement of beam spots and of beam divergences and profiles. The proposed method has been exploited for the analysis of the MRS1000 LiDAR system by Sick. The obtained results indicate that the MRS1000 simultaneously emits three triangularly arranged spots. As an example, at a 6 m distance, such a triangle has a height of ≈ 4 cm and a base that varies from ≈ 6 to ≈ 11 cm depending on the LiDAR settings.


I. INTRODUCTION

In recent years, many studies have been proposed on both the design [1]-[12] and the characterization [13]-[21] of LiDAR systems and subsystems. National and international standards have also been recently proposed or updated [22]-[26]. In this article, we focus our attention on the development of a measurement method for the spatiotemporal analysis of the spots generated by 3-D-LiDAR beams on the targets. The spots produced by LiDAR beams, also referred to as footprints, play a key role in determining the performance of LiDAR systems. Indeed, the footprint is the "basic unit of information": the area of the target from which the LiDAR collects information.
Despite the relevance that spots have in the evaluation of LiDAR system performance, such parameters are not always, if ever, fully provided by manufacturers and, to the best of our knowledge, no other measurement method has been previously proposed for the analysis of parameters such as spot pattern, waist, and divergence. However, there are many situations in which such parameters are required to properly estimate system performance. According to Thakur [27], range and resolution-the smallest size of an object the system is able to detect-are the two key system requirements for scanning LiDAR systems. Indeed, the footprint of the LiDAR determines the area over which the system averages to estimate a single point of the point cloud; thus, the knowledge of its size allows a better definition of the capability of the LiDAR to distinguish small targets from the background. If an object has a cross section much smaller than the LiDAR footprint, the LiDAR will likely not be able to detect it. This is relevant not only in autonomous driving but also in many other automation fields, ranging from industrial applications to precision agriculture. For instance, apart from autonomous driving, in automotive applications, LiDARs should be used to detect potential safety hazards such as a large piece of tire on the road or a pothole [27]. The knowledge of the beam spot size on the road plane is fundamental to estimate the LiDAR performance in detecting pieces of tire and potholes, and in analyzing road unevenness.
In precision agriculture, it is known that the LiDAR footprint plays a key role in determining the capability of the LiDAR system to properly assess the vegetative state [28], [29]. Unfortunately, as previously introduced, information such as beam spot pattern, waist, and divergence is not always, if ever, fully available.
For instance, it is known that several manufacturers exploit more than one pulse to estimate a single point of the point cloud. However, very little information is generally provided about this, and it is usually not known how such multiple pulses are spatially and temporally arranged. To give an example, as described in more detail in Section III, thanks to the proposed characterization method, we showed that the MRS1000 by Sick exploits three spots with a particular spatial distribution, and that the beam divergence declared by the manufacturer does not refer to the divergence of a single "elementary" spot, but to the divergence of a specific pattern of spots consecutively fired by the LiDAR.
For the analysis of the waist and divergence of laser beams, several methods, both "electronic" and "nonelectronic," have been proposed [30]. However, the analysis of the beam generated by (spinning) LiDARs is quite peculiar, since the optical head rotates while emitting short-duration pulses (generally a few nanoseconds). Thus, classic methods based on mechanical scanning (knife-edge, slit, or pinhole) cannot be used. Similarly, classic camera-based methods, where the (attenuated) beam is pointed directly at the photodetector array, are able to investigate the beams only in the very first part of the measuring interval of LiDARs, since the size of the spots quickly becomes greater than that of the photodetector array. Similar considerations also apply to both MEMS and optical phased array (OPA) LiDARs.
To overcome such limitations, we propose a measurement method based on what was probably the first method for the analysis of a laser beam profile, i.e., the observation of the spot reflected by a flat surface. Indeed, thanks to a reflective target and a camera system, we have been able to investigate information such as spot pattern, waist, and divergence, thus providing relevant information for estimating the performance of the LiDAR system by both analytical and numerical methods.
In the following, Section II describes the proposed test methods. The obtained results are reported in Section III, and conclusions are drawn in Section IV.

II. MATERIALS AND METHODS
It is known that several manufacturers exploit more than one beam to estimate a single point of the point cloud. Indeed, in general, a single point in the point cloud is obtained from the average of several pulses impinging on different positions of the target and emitted at different time instants. As a result, in the following, we will use the words "elementary spot" or simply "spot" to refer to the spot generated by a single beam produced by the instrument under test (IUT). The set of spots simultaneously emitted by the IUT will then be referred to as the "overall LiDAR spot" (OLS).
For the analysis of the OLS, we propose the following tests: 1) warm-up and stability; 2) spots number and space-time arrangement; and 3) spots profile and divergence. Sections II-A, II-B, II-C, and II-D describe such tests in detail.
Most of the characterizations have been performed exploiting the custom rail system shown in Figs. 1 and 2. Based on aluminum extrusion profiles, the rail system allows translating the target along the rail, thus modifying the IUT-to-target distance d. During the tests, the IUT was placed on a multiaxis stage that allowed aligning it to the rail system. Tests were performed in a closed environment, where both temperature and lighting were controlled. Note that the rail system has the sole purpose of positioning the target in front of the IUT and that, before each acquisition, it is possible to check the alignment between the target and the IUT. Thus, the rail system is supposed to have an extremely modest effect on the accuracy of the measurements. The proposed methods have been tested by analyzing the LiDAR model MRS1000 by Sick. Such a LiDAR emits laser beams at 850 nm using internally rotating sender-receiver units [31].

Fig. 1. Picture of the custom rail system for target positioning. The system is based on aluminum extrusion profiles fixed on H-shaped supports that allow the height of the profiles to be adjusted with respect to the ground. The setup also includes a target and a camera (plus objective-OBJ) mounted on a sliding carriage that can be translated along the rail system. During the tests, the IUT was placed in front of the rail system; thus, by translating the target along the rail, it was possible to modify the IUT-to-target distance d.

Fig. 2. The sliding carriage can be translated along the rail system to modify the distance d between the IUT and the target while keeping the camera-target distance unchanged. (x, y, z) are the Cartesian coordinates whose origin coincides with the origin of the IUT, i.e., the point in space that corresponds to a range value of zero. Numbers 1, 2, 3, and 4 refer to the channels (layers) of the IUT. The red dots in the zoom represent the points of the point cloud composing P(t).

A. Warm-Up and Stability
To estimate the warm-up and stability, we placed the IUT in front of a plane target at a distance d = 7 m (the maximum distance analyzed in the subsequent tests). The target was a 24″ × 24″ hardboard (model TB4 by Thorlabs) whose spectral reflectance at the MRS1000 laser wavelength (850 nm) is about 67%. The target was then aligned to the IUT following a procedure similar to the one described in Appendix X1 of ASTM E2938-15 [23], in order to center it along the measurement axis and tilt it so that the plane of the target was perpendicular to the measurement axis.
Then, we switched the IUT on and recorded a point cloud every $t_{scan}$ = 1 min over a period of about 700 min. The instrument settings exploited for the analysis of the IUT are summarized in Table I. For each acquisition, the points measured on the plate target have been manually segmented from the point cloud relative to a single channel (in particular, the reported results are relative to channel 2, as shown in Fig. 2). The resulting sets of points after segmentation are referred to as P(t), where t is the time from the start of the warm-up and stability test. Each point in P(t) is defined by its (x, y, z) Cartesian coordinates (see Fig. 2). Then, denoting by $z_i(t)$ the value along the z-axis of the i-th point in the point set P(t), for each P(t) we analyzed the mean and the relative experimental standard deviation of the mean:

$\bar{z}(t) = \frac{1}{N}\sum_{i=1}^{N} z_i(t), \qquad s_{\bar{z}}(t) = \sqrt{\frac{1}{N(N-1)}\sum_{i=1}^{N}\left[z_i(t)-\bar{z}(t)\right]^2} \qquad (1)$

where N = 13 is the number of points composing each P(t). The number N = 13 was determined by the number of points of the point cloud relative to the target.
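As a minimal numerical sketch of (1) (Python; the values in the usage example are hypothetical, not taken from the actual dataset):

```python
import numpy as np

def mean_and_sdom(z):
    """Mean and experimental standard deviation of the mean, eq. (1),
    for the z coordinates of one segmented point set P(t)."""
    z = np.asarray(z, dtype=float)
    n = z.size
    z_bar = z.mean()                      # sample mean of the N points
    s_z_bar = z.std(ddof=1) / np.sqrt(n)  # s of the mean = s / sqrt(N)
    return z_bar, s_z_bar
```

For example, feeding the N = 13 z values of one P(t) returns the mean target distance and the standard deviation of that mean used for the guard bands below.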
Warm-up is generally defined as the time it takes for the measured value of a property to stay within a "tolerance interval" (TI) defined by the upper ($T_U$) and lower ($T_L$) tolerance limits. However, as it will be shown in Section III-A, the magnitude of the obtained experimental standard deviation of the mean $s_{\bar{z}}(t)$ is not negligible compared to the magnitude of the variations of the measured value of the property $\bar{z}(t)$. Hence, to estimate the upper ($t_{warm-U}$) and lower ($t_{warm-L}$) limits of the warm-up time, we implemented decision rules similar to the "guarded acceptance" and "guarded rejection" used in conformity assessment, as described in JCGM 106:2012 [32]. As shown in Fig. 3, we introduce two guard bands whose width is equal to $2 \cdot s_{\bar{z}}(t)$. Then, $t_{warm-U}$ is the time it takes for the measured value of the property $\bar{z}(t)$ to stay within the upper acceptance interval (UAI) defined by the upper ($UA_U$) and lower ($UA_L$) acceptance limits obtained by reducing the TI by $2 \cdot s_{\bar{z}}(t)$. Similarly, $t_{warm-L}$ is the time it takes for $\bar{z}(t)$ to stay within the lower acceptance interval (LAI) defined by the upper ($LA_U$) and lower ($LA_L$) acceptance limits obtained by enlarging the TI by $2 \cdot s_{\bar{z}}(t)$.

Fig. 3. Decision rules for the estimation of the warm-up time. Supposing the TI to be defined by the upper ($T_U$) and lower ($T_L$) tolerance limits, two guard bands of width $2 \cdot s_{\bar{z}}$ are introduced. $t_{warm-U}$ is the time it takes for $\bar{z}(t)$ to stay within the UAI defined by the upper ($UA_U$) and lower ($UA_L$) acceptance limits; $t_{warm-L}$ is the time it takes for $\bar{z}(t)$ to stay within the LAI defined by the upper ($LA_U$) and lower ($LA_L$) acceptance limits.
Assuming that the warm-up ends within the first $n_w$ samples obtained considering $t_{warm-U}$, the stability has been investigated in terms of the experimental standard deviation

$s_{stab} = \sqrt{\frac{1}{n_s - 1}\sum_{k=n_w}^{n_{TOT}} \left[\bar{z}(t_k) - \bar{z}_{steady}\right]^2} \quad (2), \qquad \bar{z}_{steady} = \frac{1}{n_s}\sum_{k=n_w}^{n_{TOT}} \bar{z}(t_k) \quad (3)$

where $n_s = n_{TOT} - n_w + 1$ is the number of P(t) sets considered for the stability, $n_{TOT}$ being the total number of recorded point clouds.
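The guarded-acceptance decision rule described above can be sketched as follows (Python; the sample series in the usage example is hypothetical, not the measured data):

```python
import numpy as np

def warmup_times(z_bar, s_bar, t, T_L, T_U):
    """Guarded-acceptance estimate of the warm-up limits.
    z_bar, s_bar: per-scan mean and standard deviation of the mean;
    t: acquisition times; (T_L, T_U): tolerance limits on z_bar.
    Returns (t_warm_L, t_warm_U): first times after which z_bar stays
    within the lower/upper acceptance intervals (guard band 2*s_bar)."""
    z_bar, s_bar, t = (np.asarray(a) for a in (z_bar, s_bar, t))
    # Upper acceptance interval: TI shrunk by the guard band.
    in_UAI = (z_bar >= T_L + 2 * s_bar) & (z_bar <= T_U - 2 * s_bar)
    # Lower acceptance interval: TI enlarged by the guard band.
    in_LAI = (z_bar >= T_L - 2 * s_bar) & (z_bar <= T_U + 2 * s_bar)

    def first_stay(mask):
        # earliest time from which every subsequent sample satisfies mask
        ok = np.flip(np.cumprod(np.flip(mask))).astype(bool)
        return t[np.argmax(ok)] if ok.any() else None

    return first_stay(in_LAI), first_stay(in_UAI)
```

For instance, with tolerance limits at ±1.5% of 1.0, a series rising from 0.90 to 1.00 enters the (wider) LAI before it enters the (narrower) UAI, so t_warm_L ≤ t_warm_U, as in Fig. 3.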

B. Spots Number and Space Arrangement
Beams number and space arrangement have been investigated using the setup shown in Fig. 2. The planar reflective target was fixed on the sliding carriage, which was translated along the rail system. As shown in Fig. 2, a charge-coupled device (CCD) camera was fixed on the same sliding carriage as the target. To increase the signal-to-noise ratio of the images acquired by the camera, the target was composed of a rigid plane support covered by a reflective fabric by 3M (Product Number 8906). The target was then aligned to the IUT following a procedure similar to the one described in Appendix X1 of ASTM E2938-15 [23], in order to center it along the measurement axis and tilt it so that the plane of the target was perpendicular to the measurement axis.
The focusing of the objective (OBJ) of the CCD camera was performed by placing a graph paper on the target and illuminating it with LED light sources having the same wavelength as the IUT (850 nm) to minimize the effects due to chromatic aberration. Thanks to the graph paper, we also estimated the relationship between the object and image planes (the relation between the object dimensions and the pixels). In particular, in our setup, the projected pixel size in the object plane was equal to ≈ 66.7 μm/pixel.
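The object-to-image scale can be recovered with a one-line computation; the numbers below are illustrative assumptions (a 10 mm graph-paper division spanning 150 pixels) chosen to reproduce the ≈ 66.7 μm/pixel of our setup:

```python
def pixel_scale_um(known_length_mm, n_pixels):
    """Projected pixel size in the object plane (micrometers/pixel), from a
    graph-paper feature of known length spanning n_pixels in the image."""
    return known_length_mm * 1000.0 / n_pixels
```

All pixel distances measured on the spot images are then multiplied by this scale to obtain object-plane dimensions.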
Once the target was aligned and the warm-up of the IUT was complete, we exploited the sliding carriage and the CCD camera to acquire pictures of the OLS at different distances d. As an example, Fig. 4 shows an image of the spots obtained from the MRS1000 by Sick at d = 2 m. The IUT settings were previously reported in Table I, whereas the CCD settings are reported in Table II.

C. Spots Time Arrangement
As declared by the manufacturer [31], the MRS1000 has four channels (layers) and an "overall" scanning frequency $f_{scan}$ = 50 Hz. The following activities have been aimed at verifying: 1) the "overall" scanning frequency $f_{scan}$; 2) how the different channels are acquired (as it will be shown in Section III-C, with each rotation the system acquires only one layer); 3) the time $t_{fire}$ between consecutive fires; and 4) whether the three elementary spots shown in Fig. 4 are emitted simultaneously, and the duration of each single fire.

Indeed, it is known that, in general, a single point in the point cloud is obtained from the average of several OLSs, each of them composed of multiple elementary spots (see Fig. 4). Since the IUT must be able to distinguish the echoes arising from consecutive fires of the laser source/s, the time $t_{fire}$ between two consecutive fires must satisfy

$t_{fire} > \frac{2 \cdot n \cdot NAR}{c} \qquad (4)$

where n ≈ 1 is the refractive index of the medium (air), NAR (nonambiguous range) is the upper limit of the measuring interval of the IUT (for the MRS1000 the manufacturer declares 64 m [31]), and c is the speed of light. From (4), for the MRS1000, $t_{fire}$ > 430 ns. Hence, given $t_{CCD}$ = 9 μs (see Table II), the three spots shown in Fig. 4 may be due to consecutive firings of the laser source/s. On the other hand, the MRS1000 has a declared angular resolution δφ = 0.25° and a scanning frequency $f_{scan}$ = 50 Hz. Thus, consecutive points of the point cloud are theoretically obtained every

$t_{point-cloud} = \frac{\delta\varphi}{360°} \cdot \frac{1}{f_{scan}} \approx 13.9\ \mu s. \qquad (5)$

Note that, if not provided by the manufacturer, δφ can be easily estimated from the analysis of adjacent points in the point cloud, whereas $f_{scan}$ can be estimated as described in the following. Activities A, B, C, and D did not exploit the target shown in Figs. 1 and 2, but were performed using photodiodes and a digital storage oscilloscope (DSO, model DSO6052A by Agilent; sample rate 4 GSa/s, bandwidth 500 MHz) as shown in Fig. 5. The photodiodes were placed on a mount positioned in front of the IUT as shown in Fig. 6.

Fig. 5. Schematic representation of the measuring setup used for analyzing the spots time arrangement. The photodiodes PD_A and PD_B were reverse biased with a voltage of 12 V with respect to ground. The photogenerated signals were collected by means of 50 Ω coaxial cables and analyzed using the DSO (DSO input channels set to 50 Ω). Note that activities A, B, and C used only one photodiode, simply referred to as PD.
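The numerical values above follow directly from (4) and (5); a small check in Python:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def t_fire_min(nar_m, n=1.0):
    """Lower bound (4) on the time between consecutive fires: the echo
    from the nonambiguous range NAR must return before the next fire."""
    return 2.0 * n * nar_m / C

def t_point_cloud(delta_phi_deg, f_scan_hz):
    """Theoretical time (5) between consecutive point-cloud points."""
    return (delta_phi_deg / 360.0) / f_scan_hz
```

For the MRS1000, `t_fire_min(64)` gives ≈ 427 ns (hence the stated t_fire > 430 ns after rounding up), and `t_point_cloud(0.25, 50)` gives ≈ 13.9 μs.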
In particular, activity A has been performed using a "large-area" photodiode PD, model S1336-8BK by Hamamatsu (square photosensitive area with a side of 5.8 mm). As shown in Fig. 6, during activity A, the photodiode was contacted to the housing (the optics cover) of the IUT to collect the spots relative to all the channels of the IUT.

Fig. 6. The setup used for analyzing the spots time arrangement. During activities A, B, and C, only one "large-area" photodiode was used. In particular, during activity A, the photodiode was contacted to the housing (the optics cover) of the IUT to collect the spots relative to all the channels of the IUT. In activities B and C, the same photodiode was moved a few centimeters from the housing to collect the spots relative to one channel only (i.e., channel 2). Lastly, in activity D, two "fast" photodiodes were used to investigate the timing of the elementary spots composing the OLS relative to a single channel (i.e., channel 2). In particular, to investigate the timing between the elementary spots (ES, red rounded rectangles) ES_1 and ES_2, the photodiode PD_B was moved to the left until the signal on channel 2 of the DSO (CH2) disappeared. Then, PD_B was slightly moved back to the right until the CH2 signal reappeared. Finally, using the CCD camera, we verified that each photodiode received a single elementary spot. A similar procedure was used to record the timing between spots ES_1 and ES_3.
In activities B and C, the same photodiode PD was moved a few centimeters away from the housing to collect the spots relative to one channel only. Specifically, the distance of the PD from the IUT was nominally the same in both B and C; what changed was the time base of the oscilloscope. In activity B, the time base was set to record the impulses due to more than one rotation of the LiDAR, while in C, the time base was set so as to analyze the time interval between the emission of subsequent OLSs.
Lastly, in activity D, two "fast and small" photodiodes PD_A and PD_B (model SSO-PDQ0.25-5 by Roithner Lasertechnik; square photosensitive area with a side of 0.5 mm, rise time 0.4 ns) were used. In particular, the photodiodes were placed at a distance of some meters from the IUT, such that each photodiode received a single elementary spot, as shown in Fig. 6. Then, exploiting the CCD camera previously shown in Fig. 2, we verified that each photodiode received a single elementary spot. To detect the timing between the elementary spots ES_1 and ES_2, the photodiode PD_B was moved to the left until the signal on CH2 of the DSO disappeared. Then, PD_B was slightly moved back to the right until the CH2 signal reappeared. A similar procedure was used to record the timing between spots ES_1 and ES_3. Note that, even if the OLS is composed of three spots (see Fig. 4), in activity D we recorded only two spots at a time due to the limited number of input channels of the DSO.

D. Spots Profile and Divergence
Spots profile and divergence have been investigated exploiting the setup previously shown in Fig. 2, thus acquiring images of the spots at different distances d by translating the sliding carriage.
As previously shown in Fig. 4, each OLS is composed of three elementary spots simultaneously emitted by the IUT (as it will be shown in Section III-C). Moreover, the IUT emits a new OLS every $t_{fire}$ ≈ 1.17 μs (see Section III-C). Since consecutive points of the point cloud are theoretically obtained every $t_{point-cloud}$ ≈ 13.9 μs (see (5)), it is reasonable to suppose that the IUT exploits more than one OLS to estimate a single point in the point cloud. As a result, the "dimensions of the spot" (the footprint) used by the IUT to estimate a single point in the point cloud depend on: 1) the dimensions of the elementary spots composing the OLS; 2) the distances between the three elementary spots composing the OLS; 3) the number $n_{OLS}$ of OLSs used by the IUT to estimate a single point in the point cloud; 4) the IUT angular velocity $2\pi f_{scan}$; and 5) the distance d between the IUT and the target. Indeed, according to Fig. 7, the distance $\delta_x$ in the (x, y) plane between the centers of consecutive elementary spots is

$\delta_x = 2\pi f_{scan} \cdot t_{fire} \cdot d. \qquad (6)$

Hence, according to Fig. 7, the horizontal (width, w) and vertical (height, h) dimensions of the footprint due to $n_{OLS}$ OLSs are

$w \approx w_{OLS} + (n_{OLS} - 1) \cdot \delta_x, \qquad h \approx h_{OLS} \qquad (7)$

where $w_{OLS}$ and $h_{OLS}$ are the width and height of the OLS, obtained from the distances between the centers of the elementary spots enlarged by $2\sigma_{e-x}$ and $2\sigma_{e-y}$ on each side, $\sigma_{e-x}$ and $\sigma_{e-y}$ being the standard deviations of the elementary spot along x and y. The approximation $\sigma_{e-x} \approx \sigma_{e-y}$ in (7) was made supposing circular Gaussian beams. Indeed, for our camera $t_{CCD} > 7 \cdot t_{fire}$ (see Section III-C); hence, each image is relative to more than one OLS, and with the available camera it is not easy to estimate $\sigma_{e-x}$. Actually, for the MRS1000, $\delta_x < 2\sigma_{e-x}$; hence, consecutive spots partially overlap, as shown in Fig. 7, making it difficult to estimate $\sigma_{e-x}$. Therefore, in order to prevent the images of the two lower spots ES_2 and ES_3 from merging with each other as shown in Fig. 7, the images of the spots have been acquired by setting $t_{CCD}$ = 9 μs (9 μs is the lowest settable value).
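Equations (6) and (7) can be evaluated numerically; a sketch in Python, using the f_scan and t_fire measured in Section III-C (w_OLS is left as a parameter since it depends on d):

```python
import math

F_SCAN = 50.0      # overall scanning frequency, Hz (Section III-C)
T_FIRE = 1.17e-6   # time between consecutive OLS emissions, s

def delta_x(d_m):
    """Eq. (6): in-plane shift between the centers of consecutive OLSs
    due to the head rotation during t_fire, at target distance d (m)."""
    return 2.0 * math.pi * F_SCAN * T_FIRE * d_m

def footprint_width(w_ols_m, n_ols, d_m):
    """Eq. (7): footprint width produced by n_ols consecutive OLSs."""
    return w_ols_m + (n_ols - 1) * delta_x(d_m)
```

As a sanity check, the model predicts that HDDM+ (n_OLS = 37) widens the footprint with respect to the standard mode (n_OLS = 13) by (37 − 13) · δ_x/d ≈ 8.8 mrad, in agreement with the difference between the slopes measured in Fig. 15 (19.0 − 10.2 = 8.8 mm/m).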
For each acquired image, the coordinates of the centers of the elementary spots (ES) have been estimated by fitting the image with elliptical Gaussian surfaces. Indeed, given the symmetry of the problem, the distances between the centers of the fitted Gaussian functions are substantially equal to the distances between the centers of the elementary spots. Then, $\sigma_{e-y}$ has been estimated as the mean value of the "vertical" standard deviations of the three spots composing the acquired image. Finally, $\delta_x$ has been estimated according to (6).
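The fitting step can be sketched with SciPy's `curve_fit`; this is a minimal, axis-aligned version for a single cropped spot (a simplification: the real images contain three partially overlapping spots):

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(xy, amp, x0, y0, sx, sy, offset):
    """Axis-aligned elliptical Gaussian surface over a pixel grid."""
    x, y = xy
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

def fit_spot(image):
    """Estimate center (x0, y0) and standard deviations (sx, sy)
    of one elementary spot from its (cropped) camera image."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    p0 = (image.max() - image.min(), w / 2, h / 2, 3.0, 3.0, image.min())
    popt, _ = curve_fit(elliptical_gaussian, (x, y), image.ravel(), p0=p0)
    return popt  # amp, x0, y0, sx, sy, offset
```

The fitted sy values (in pixels) are converted to object-plane units through the ≈ 66.7 μm/pixel scale of Section II-B.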

III. RESULTS

A. Warm-Up and Stability
Warm-up and stability tests have been performed according to the procedure described in Section II-A. During the test, the ambient temperature remained in the range [18.2, 20.2] °C, with a mean value of 18.7 °C. Considering the thermal expansion coefficient of the aluminum rail system to be ≈ 2.4 · 10⁻⁵ K⁻¹, this gives rise to a fractional change in length of approximately ±4 · 10⁻³ %. Fig. 8 shows the normalized distances $\bar{z}(t)/\bar{z}_{steady}$ obtained according to (1) and (3). As shown in Fig. 8, fixing the upper ($T_U$) and lower ($T_L$) tolerance limits at ±0.15% of the steady-state value $\bar{z}_{steady}$, we obtained $t_{warm-L}$ = 3 min and $t_{warm-U}$ = 20 min.

B. Spots Number and Space Arrangement
Beam spots number and space arrangement have been investigated according to the procedure previously reported in Section II-B. As an example, Fig. 9 shows the images of the spots recorded at different distances.

Fig. 10. Activity A: signal obtained using the "large-area" photodiode contacted to the housing of the IUT to collect the spots relative to all four channels of the IUT. Every time the LiDAR laser beam/s strike the sensitive area of the photodiode, a pulse appears in the oscilloscope trace. Since the LiDAR rotates, the oscilloscope shows a pulse only when the LiDAR beam is oriented toward the photodiode. Thus, the rotation frequency of the LiDAR can be estimated by analyzing the time delay between successive pulses: $f_{scan}$ = (20 ms)⁻¹ = 50 Hz. The different amplitudes of the pulses are due to the different optical couplings between the photodiode PD and the four channels.
During the test, the ambient temperature remained in the range [18.6, 20.3] • C, with a mean value of 19.0 • C.

C. Spots Time Arrangement
Spots time arrangement has been investigated according to the procedure reported in Section II-C. Fig. 10 shows the signal obtained by using the "large-area" photodiode contacted to the housing of the IUT to collect the spots relative to all the four channels of the IUT (activity A). The different amplitudes of the pulses shown in Fig. 10 are due to different optical couplings between the photodiode and the four channels. As shown in Fig. 10, the "overall" scanning frequency is f scan = 50 Hz as declared by the manufacturer.
Figs. 11 and 12 show the signals obtained by moving the "large-area" photodiode a few centimeters away from the housing to collect the spots relative to one channel only (activities B and C). Comparing Figs. 10 and 11, it is easy to observe that with each rotation the system acquires only one layer. Hence, each layer has a scanning frequency of 12.5 Hz. On the other hand, from Fig. 12, it is easy to observe that the IUT emits a spot every $t_{fire}$ ≈ 1.17 μs.
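The frequencies above can be extracted from the DSO traces by averaging the spacing of the detected pulses; a sketch (Python, with illustrative pulse time stamps, not the recorded ones):

```python
import numpy as np

def scan_frequency(pulse_times_s):
    """Estimate the scanning frequency from the time stamps of the
    pulses detected by the photodiode (one pulse per pass over it)."""
    t = np.sort(np.asarray(pulse_times_s, dtype=float))
    return 1.0 / np.diff(t).mean()
```

Pulses every 20 ms (activity A, all channels) give f_scan = 50 Hz; pulses every 80 ms (activity B, single channel) give the per-layer 12.5 Hz.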
Lastly, Fig. 13 shows the signals recorded using the two "fast and small" photodiodes placed at a distance from the IUT such that each photodiode received a single elementary spot, as shown in Fig. 6. Activity D was aimed at verifying whether the three spots are simultaneously emitted and, according to Fig. 13, they are. As described by Donati [33], the shape and duration of the pulses may influence the uncertainty in the estimate of the time of flight (ToF), and short-duration pulses allow obtaining good visibility while complying with the energy limits for Class 1 laser sources. However, all such aspects are beyond the purposes of the proposed methods.

Fig. 11. Activity B: signal obtained by moving the "large-area" photodiode a few centimeters away from the housing to collect the spots relative to one channel only. Once the photodiode is illuminated by only one of the four channels of the LiDAR, there is a peak only every 80 ms. By comparing Figs. 10 and 11, it is evident that at each rotation the MRS1000 acquires only one channel.

Fig. 12. Activity C: zoom of the signal obtained by moving the "large-area" photodiode a few centimeters away from the housing to collect the spots relative to one channel only. By reducing the time base of the DSO, it is possible to see that each peak in Fig. 11 (as well as in Fig. 10) is composed of several peaks due to the emission of subsequent OLSs. As shown in the figure, the IUT emits a spot every $t_{fire}$ ≈ 1.17 μs.
Note that, given the exposure time $t_{CCD}$ = 9 μs (see Table II) and $t_{fire}$ ≈ 1.17 μs (Fig. 12), the pictures shown in Figs. 4 and 9 are the result of about seven OLSs emitted by the IUT while rotating (see Fig. 7). The use of a better performing camera (shorter $t_{CCD}$ and better signal-to-noise ratio) would allow both acquiring a single OLS and improving the visibility of the acquired OLS.
During the test, the ambient temperature remained in the range [18.8, 19.3] °C, with a mean value of 19.1 °C.

Fig. 13. Activity D: signals recorded by the two "fast and small" photodiodes (see Fig. 6). Data have been normalized with respect to the peak value to facilitate the comparison. As shown in the figure, the three spots of which the OLS is composed are reasonably synchronously emitted. The obtained full-width at half-maximum of a single spot is about [3, 4] ns (the manufacturer declares about 3.5 ns [31]). The pulse distortion and undershoots are due to the incomplete impedance matching between the source (the photodiode) and the 50 Ω load (the cable and the DSO).

D. Spots Profile and Divergence
Fig. 14 shows the dimensions $w_{OLS}$, $h_{OLS}$, $2\sigma_{e-y}$, and $\delta_x$ as a function of the distance d. Fig. 15 shows the horizontal dimension w, both with and without HDDM+, estimated according to (7). In particular, w without HDDM+ has been estimated using $n_{OLS}$ = 13 ($n_{OLS} \approx t_{point-cloud}/t_{fire} + 1$), whereas w with HDDM+ has been estimated using $n_{OLS}$ = 37 ($n_{OLS} \approx 3 \cdot t_{point-cloud}/t_{fire} + 1$). Note that the manufacturer declares the following spot size [31]:

$w_{nom} = d \cdot 10.4\ \mathrm{mrad} + 7\ \mathrm{mm}$ (without HDDM+), $\quad w_{nom} = d \cdot (10.4 + 8.7)\ \mathrm{mrad} + 7\ \mathrm{mm}$ (with HDDM+) $\qquad (8)$

whereas the fittings in Fig. 15 have slopes equal to 10.2 mm/m and 19.0 mm/m, respectively.
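The values $n_{OLS}$ = 13 and $n_{OLS}$ = 37, and the nominal widths (8), can be reproduced as follows (Python; the rounding convention in `n_ols` is our assumption):

```python
def n_ols(t_point_cloud_s, t_fire_s, passes=1):
    """OLSs per point-cloud point: fires emitted during the (possibly
    HDDM+-tripled) point acquisition time, plus the initial one."""
    return round(passes * t_point_cloud_s / t_fire_s) + 1

def nominal_width_mm(d_m, hddm=False):
    """Manufacturer spot-size model (8), in mm, at distance d (m)."""
    divergence_mrad = 10.4 + (8.7 if hddm else 0.0)
    return d_m * divergence_mrad + 7.0
```

For instance, at d = 6 m the nominal width without HDDM+ is 6 · 10.4 + 7 = 69.4 mm.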
Finally, Fig. 16 shows the vertical dimension h as a function of the distance d, calculated as described in (7). Note that the manufacturer makes no distinction between the vertical and horizontal dimensions of the spot (see (8)); thus, it is not possible to compare the obtained results with the nominal ones.
As expected from geometrical optics, $w_{OLS}$ and $h_{OLS}$ vary linearly as a function of the distance. The same holds for $\delta_x$, according to (6), and for $\sigma_{e-y}$, given the far-field condition (see Fig. 14). Thus, according to (7), w and h vary linearly with d.
During the test, the ambient temperature remained in the range [18.6, 20.3] • C, with a mean value of 19.0 • C.

IV. DISCUSSION AND CONCLUSION
In recent years, the automotive industry has been pushing the market toward the continuous development of better performing and less expensive measuring systems to support advanced driver-assistance systems (ADAS). This has led, and will continue to lead, to the availability of more and more LiDAR measuring systems and manufacturers.
Given the relevance of the topic, in recent years, many studies and some national and international standards have been proposed or updated, allowing both the evaluation of measurement performance and the comparison of performance among different instruments. In this article, we focused our attention on the spatiotemporal analysis of the divergence and footprint of the beams, proposing novel methods for their analysis. Despite the relevance that such parameters have in the evaluation of LiDAR system performance, they are not always, if ever, fully provided by manufacturers and, to the best of our knowledge, no other measurement method has been previously proposed for the analysis of parameters such as spot pattern, waist, and divergence.
As previously described, range and resolution-the smallest size of an object the system is able to detect-are the two key system requirements for scanning LiDAR systems [27]. For instance, LiDARs should be used in ADAS to detect potential safety hazards such as a large piece of tire on the road or a pothole [27]. Similarly, in precision agriculture, it is known that the LiDAR footprint plays a key role in determining the capability of the LiDAR system to properly assess the vegetative state [28], [29]. Indeed, according to the previous discussions, LiDARs are reasonably not able to detect objects having a cross section much smaller than their footprint; thus, parameters such as beam dimensions, divergence, arrangement, and timing are important to estimate the capability of the IUT to detect a certain object at a given distance or, similarly, the maximum distance at which a certain object can reasonably be detected before disappearing into the background. The proposed methods are thus aimed at obtaining an estimate of the area over which the system averages to estimate a single point of the point cloud.
The described methods have been tested on the MRS1000 by Sick, allowing us to fully characterize the beams generated by such a LiDAR, hence obtaining relevant new information. As an example, according to the results reported in Section III-B, we discovered the peculiar footprint used by the IUT. Moreover, according to the results reported in Section III-D, we discovered that the divergence declared by the manufacturer refers only to the "horizontal" dimension of the footprint, whereas the "vertical" dimension of the footprint is considerably smaller. Then, according to Sections III-C and III-D, we discovered that the IUT reasonably makes use of about 13 or 37 (HDDM+) fires to estimate a single point in the point cloud.
In conclusion, the proposed method is extremely flexible and versatile, and it allows the analysis of the beams of substantially any LiDAR, thus providing relevant information for estimating the performance of LiDARs by both analytical and numerical methods.