
5. EXPERIMENTS

5.3. STAGE-3: Multi-Camera Experiments

5.3.2. Multi-Camera Experiment with Conf-1

Camera positions are shown in Fig. 77. The cameras are assigned the IDs C1, C2, C3 and C4. The camera viewing area of C1 is indicated with the blurred region; C1-X and C1-Y are the length and height of the rectangular C1 camera viewing area (CVA). The other cameras have similar viewing areas according to their positions. The blue and red areas represent the intersection areas common to two webcams, and the middle square area represents the intersection area of all four webcams. Black lines represent guidelines.

  Fig. 77. Camera positions and camera intersection areas

The stitching errors are negligible, since they are all too small to affect the path planning and visual servoing tasks.

Fig. 79. The stitched image to acquire Configuration-1 (Conf-1)

After acquiring the stitched environment, the obstacle detection task is executed as in the single-camera configuration (Fig. 80). The obstacles are detected and the environment is converted to a binary map by assigning ‘1’ to obstacle pixels and ‘0’ to the remaining area. This task is known as ‘Binary Image Acquisition’. The robot and target positions are also detected and stored. To increase safety, object dilation is used to re-scale the detected obstacles.
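The binary-map acquisition and safety dilation steps can be sketched as follows (a minimal NumPy sketch; the threshold value and dilation radius are illustrative assumptions, not the thesis parameters):

```python
import numpy as np

def binary_map(gray, obstacle_thresh=100):
    # 'Binary Image Acquisition': assign '1' to obstacle pixels
    # (assumed dark here) and '0' to the remaining free space.
    return (gray < obstacle_thresh).astype(np.uint8)

def dilate(bmap, r=1):
    # Grow every obstacle by r pixels in all directions as a safety margin.
    h, w = bmap.shape
    padded = np.pad(bmap, r)
    out = np.zeros_like(bmap)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```

The dilation radius trades clearance for free space: a larger `r` keeps the planned path farther from obstacle borders at the cost of narrowing passable corridors.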

Fig. 80. Obstacle map acquired from the stitched image

The adaptive artificial potential field (A-APF) method performs the path planning process on the acquired map. The extracted path crosses the viewing areas of three cameras, and 192 image frames have been processed; therefore, 192 position samples have been taken on the acquired path. These positions are used to implement the vision-based control process with the designed controllers. The simulation takes about 11.2 s, so 17.142 frames per second is obtained. The simulation path cost is found to be 1037.53 px. The Gaussian controller with the triangle positioning scheme successfully manages to drive the robot toward the target position, calculating the next suitable position in each iteration. The path formed by A-APF is given in Fig. 81.
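The potential-field computation underlying this simulation can be sketched with a standard attractive field and a FIRAS-style repulsive field (a hedged sketch with illustrative gains; the thesis' adaptive variant additionally re-scales the gains at run time):

```python
import math

def attractive_force(pos, goal, zeta=1.0):
    # A-PF: pulls the robot toward the goal, proportional to distance.
    return (zeta * (goal[0] - pos[0]), zeta * (goal[1] - pos[1]))

def repulsive_force(pos, obs, eta=100.0, d0=50.0):
    # R-PF: pushes the robot away from an obstacle inside the influence
    # radius d0; zero outside it (classic FIRAS form).
    dx, dy = pos[0] - obs[0], pos[1] - obs[1]
    d = math.hypot(dx, dy)
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)
    mag = eta * (1.0 / d - 1.0 / d0) / (d * d)
    return (mag * dx / d, mag * dy / d)

def total_force(pos, goal, obstacles):
    # T-PF: the attractive force combined with all repulsive forces.
    fx, fy = attractive_force(pos, goal)
    for o in obstacles:
        rx, ry = repulsive_force(pos, o)
        fx, fy = fx + rx, fy + ry
    return (fx, fy)
```

Far from all obstacles the repulsive terms vanish and the total force reduces to the attractive force, which matches the observation below that the T-PF curve closely mirrors the A-PF curve.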

Fig. 81. Simulation path with A-APF

The attractive potential field (A-PF), repulsive potential field (R-PF) and total potential field (T-PF) force values against the number of processed image frames are given in Fig. 82. The A-PF force increases over the first several frames, then decreases until the target position is reached. The R-PF forces, on the other hand, show a changing pattern until the assigned task is completed. The T-PF force is formed by combining the attractive and repulsive forces; as can be seen, the total forces closely mirror the A-PF forces in the opposite direction.

  Fig. 82. Potential force change

The changes of the attractive and repulsive gain values (aps = ζ and rps = η) are given in Fig. 83. The ‘aps’ increases for a while from the start point, then decreases at small rates as the iterations continue. The ‘rps’ increases aggressively at first, then drops almost vertically to a point and stays near a stable state with small fluctuations until the end of the simulation. The potential calculating order, on the other hand, shows only small changes, and the minimum calculating order shows no change.
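The exact adaptation law is not reproduced here, but the qualitative gain behavior in Fig. 83 can be illustrated with a rule of the following shape (purely illustrative names, rates and thresholds; this is not the thesis update law):

```python
def adapt_gains(zeta, eta, d_goal, d_obs, d_safe=40.0, rate=0.05):
    # Illustrative adaptation: raise the repulsive gain aggressively when
    # an obstacle is close, let it decay toward a floor otherwise.
    if d_obs < d_safe:
        eta *= 1.0 + 4.0 * rate
    else:
        eta = max(eta * (1.0 - rate), 1.0)
    # Let the attractive gain grow while far from the goal and decay
    # slowly once the goal is near.
    if d_goal > d_safe:
        zeta *= 1.0 + rate / 2.0
    else:
        zeta = max(zeta * (1.0 - rate / 2.0), 0.5)
    return zeta, eta
```

Iterating such a rule along the path reproduces the reported pattern: an early rise in both gains, an aggressive repulsive spike near obstacles, and a near-stable state toward the end.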

Fig. 83. Potential scaling factors change

Real implementation frames under C4 are given in Fig. 84. The ‘f1, f2 … f8’ frames show different robot positions at different times. In Fig. 85, C2-s, C2-f and C1-s, C1-f represent the starting and final positions under C2 and C1, respectively. The simulated and real paths are given in Fig. 85 as well. In total, 153 frames are processed (with all sub-paths). Moreover, 14.57 FPS is achieved with 10.5 s of processing time for Conf-1. Only one-third of the total frames are stored to keep performance stable.

  Fig. 84. Sample frames from visual based control task under C4 camera

Fig. 85. (I) Robot positions under C2 and C1 cameras (II) Simulation and Real paths

The acquired path plan has been used as the reference path that the mobile robot has to follow. The robot is triggered to make motions according to the reference path in real time. At the first starting frame, the three angle values of the triangle positioning scheme are calculated as 73.81°, 69.19° and 37.0° with respect to the intermediate target. At the end of the control task, these values are calculated as 59.29°, 61.41° and 59.29°, respectively. The robot has successfully reached the pre-defined target in about 10.5 s.
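The three reported angle values sum to 180°, consistent with the interior angles of the triangle formed by the two wheel control points and the intermediate target. That computation can be sketched via the law of cosines (point names are illustrative):

```python
import math

def triangle_angles(left_wheel, right_wheel, target):
    # Interior angles (degrees) of the triangle formed by the two wheel
    # control points and the target, via the law of cosines.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Side length opposite each vertex.
    opp_l = dist(right_wheel, target)
    opp_r = dist(left_wheel, target)
    opp_t = dist(left_wheel, right_wheel)
    def angle(opp, s1, s2):
        return math.degrees(math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    return (angle(opp_l, opp_r, opp_t),   # angle at the left wheel
            angle(opp_r, opp_l, opp_t),   # angle at the right wheel
            angle(opp_t, opp_l, opp_r))   # angle at the target
```

When the robot faces the target symmetrically the triangle tends toward equilateral, which matches the three angles converging toward roughly 60° at the end of the task.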

The starting and finishing positions of the mobile robot are given in Fig. 86.

  Fig. 86. (a) Starting position and (b) finishing position of the mobile robot

Sample frames from the visual control process in the whole working environment are given in Fig. 87. The ‘f1, f2 … f8’ frames demonstrate different robot positions at different times.

  Fig. 87. Sample frames from visual based control task

The path created by the robot motions is given in Fig. 88. The controller has tried to keep the mobile robot on the acquired path throughout the control process. The path created by the robot motions up to the target position has emerged slightly shorter than the simulation path. The main reason behind this is the dynamically changing local targets used to track the simulation path. A local target is extracted from the simulation path within a pre-defined threshold value and is periodically updated until the main/final target is reached. In this way, the controller generally smooths sharp turns.
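The local-target mechanism can be sketched as a pure-pursuit-style lookup over the ordered simulation path (function and parameter names are illustrative assumptions, not the thesis code):

```python
import math

def next_local_target(path, robot_pos, threshold=30.0):
    # Advance along the ordered path and return the farthest point that
    # is still within the threshold radius of the robot. Re-computing
    # this point every iteration cuts corners slightly, which is why
    # sharp turns get smoothed and the real path ends up shorter.
    best = path[0]
    for p in path:
        if math.hypot(p[0] - robot_pos[0], p[1] - robot_pos[1]) <= threshold:
            best = p
        else:
            break  # the path is ordered, so stop at the first point outside
    return best
```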

Fig. 88. Simulated path and starting position of robot

The path and robot motions from selected frames are demonstrated in Fig. 89. In addition to the starting and finishing positions of the robot, several intermediate positions are given. Eventually, the mobile robot has smoothly tracked the input path. There may be some error between the simulation and real paths; however, this error is so small in terms of path cost that it is negligible.

Fig. 89. Simulation path and mobile robot motions

The real path formed by the mobile robot is given in Fig. 90. As can be seen, the real path has emerged slightly shorter than the simulation path; its length is found to be about 995.16 px. Therefore, there is only about a 4% difference between the paths.
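With the path costs reported above, the difference can be checked directly (`path_length` is a generic pixel-polyline length helper, not the thesis code):

```python
import math

def path_length(points):
    # Total Euclidean length of a pixel polyline.
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

sim_cost, real_cost = 1037.53, 995.16          # values reported in the text
diff_pct = 100.0 * (sim_cost - real_cost) / sim_cost
# diff_pct is about 4.08, i.e. roughly the 4% difference stated above
```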

  Fig. 90. Simulation path (red) and Real path (blue)

The angle value changes of the control points (the mobile robot wheels) and the target are given in Fig. 91. The local target point is checked in each iteration and, if required, its position is updated. The angle changes increase dramatically when the mobile robot starts to perform turning motions. At the end of the control process, the three angle values approach each other very closely, which means that the robot has gradually converged to the target position.

  Fig. 91. Angle changes of control points

The velocity changes of the left and right wheels are given in Fig. 92. The changes in the velocity values resemble the changes in the wheel angle values, with different magnitudes, since the angle values directly affect the velocity values of the mobile robot wheels. Both the angle and velocity changes are slightly jagged; this is caused by the sensitivity of the controller and by the storing of selected sample frames to disk.
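The coupling between angle and velocity values can be illustrated with a standard differential-drive steering law (gains and saturation limits are illustrative assumptions, not the thesis controller):

```python
def wheel_velocities(heading_error, v_base=0.2, k=0.5, v_max=0.4):
    # A heading error (rad) toward the local target speeds up one wheel
    # and slows the other, so the velocity curves mirror the angle
    # curves with a different magnitude.
    v_left = v_base - k * heading_error
    v_right = v_base + k * heading_error
    clamp = lambda v: max(-v_max, min(v_max, v))
    return clamp(v_left), clamp(v_right)
```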

  Fig. 92. Left and Right velocity changes of WMR wheels

5.3.3. Additional Experiments with Different Configurations (Conf-2/3)
