

4.5. The Case of a Single Lane Line

4.5.2. A single lane line on the left

The original images are shown in Figure 4.13. Figure 4.14 shows all of the Hough lines obtained for these original images, and Figure 4.15 shows the lane lines remaining after elimination.


Figure 4.13: Two sample images captured with a single lane line on the left

Figure 4.14: All Hough lines obtained from the two sample images for the single-lane-line-on-the-left case

Figure 4.15: Lane lines detected after elimination from the two sample images for the single-lane-line-on-the-left case

As can be seen, the algorithm works successfully even when there is only a single lane line on the road, and the mobile robot does not cross the detected line.
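The elimination step described above can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis' exact code: the angle and duplicate thresholds, and the integer (rho, theta) representation, are assumptions.

```c
#include <stdlib.h>

/* Sketch of Hough-line elimination for lane detection.
 * A line is kept if it is steep enough to be a lane boundary
 * (near-horizontal lines are discarded) and it is not a
 * near-duplicate of a line already accepted.
 * Thresholds (75 deg, 5 deg, 10 px) are illustrative only. */
int eliminate_lines(const int *theta, const int *rho, int n,
                    int *theta_out, int *rho_out)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        /* discard near-horizontal lines */
        if (abs(theta[i]) > 75)
            continue;
        /* discard near-duplicates of already accepted lines */
        int dup = 0;
        for (int k = 0; k < kept; k++)
            if (abs(theta[i] - theta_out[k]) < 5 &&
                abs(rho[i] - rho_out[k]) < 10)
                dup = 1;
        if (dup)
            continue;
        theta_out[kept] = theta[i];
        rho_out[kept]   = rho[i];
        kept++;
    }
    return kept; /* number of surviving lane-line candidates */
}
```

With two near-identical peaks and one near-horizontal peak in the input, only the two distinct lane candidates survive.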


CONCLUSIONS AND RECOMMENDATIONS

This chapter discusses the results obtained with the autonomously moving mobile robot platform developed during the thesis work, and what can be done in the future to improve the system.

In terms of software, the autonomous mobile robot platform developed during the thesis work was observed to perform reliably in all test environments without losing the lane line. The system does, however, have some mechanical problems. This chapter briefly describes these problems and suggests ways to solve them.

Because the DC motors that drive the platform are not very powerful, the robot's maximum speed is 4 km/h. The speed could be increased by fitting more powerful DC motors.

The shock-absorbing mechanism built to let the robot pass over obstacles works very well: the robot easily clears the obstacles it encounters and can even climb and descend stairs. However, the height of the obstacles it can pass is limited by the wheel radius; using larger-diameter wheels would increase this capability.

The robot's main chassis and motor mounts are made of aluminium, and these parts are joined by aluminium welding. Because aluminium welding is a delicate process, the axes could not be fixed precisely and some axis misalignment occurred. As a result, when the drive motors are given PWM signals with the same duty cycle, the robot veers to the right. To compensate, a PWM signal with a higher duty cycle is sent to the motors on the right side. This problem could be solved by using steel instead of aluminium and laser-welding the steel parts.
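The duty-cycle compensation described above can be sketched as follows. The gain value and the 0-100 duty range are illustrative assumptions, not the values actually used on the robot.

```c
/* Sketch of the right-motor PWM compensation for the rightward drift.
 * RIGHT_GAIN is an assumed compensation factor; the real value would
 * be tuned on the hardware. Duty cycles are taken as 0..100 percent. */
#define RIGHT_GAIN 1.05f

typedef struct {
    int left_duty;  /* duty cycle for the left motors, 0..100 */
    int right_duty; /* duty cycle for the right motors, 0..100 */
} pwm_cmd;

pwm_cmd compensate(int base_duty)
{
    pwm_cmd cmd;
    cmd.left_duty = base_duty;
    /* boost the right side slightly, rounding to the nearest integer */
    cmd.right_duty = (int)(base_duty * RIGHT_GAIN + 0.5f);
    if (cmd.right_duty > 100)
        cmd.right_duty = 100; /* clamp to the PWM range */
    return cmd;
}
```

A commanded duty of 60% on the left then maps to about 63% on the right, and the boosted value saturates at full duty.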

The robot is powered by a 24 V, 5 Ah LiPo battery. The image processing board alone draws 2 A, and each motor draws 1 A on its own, so the 5 Ah battery drains quickly. Using a higher-capacity battery would remedy this problem.
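A quick back-of-the-envelope check of the figures above illustrates why the battery drains quickly. The number of drive motors is a hypothetical assumption here (four is used below); the text only states the per-motor and board currents.

```c
/* Estimate battery runtime in hours from capacity and current draw.
 * All parameters are taken from the text except n_motors, which is
 * an assumption for illustration. */
float runtime_hours(float capacity_ah, float board_a,
                    float motor_a, int n_motors)
{
    float total_a = board_a + motor_a * (float)n_motors;
    return capacity_ah / total_a;
}
```

With a 5 Ah pack, a 2 A board, and four 1 A motors, the total draw is 6 A, giving well under an hour of runtime at full load.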

In future work, a radar could be mounted on the robot so that it detects and avoids obstacles. Various sensors could also be used to determine the robot's position, so that the robot can be given a target and reach it autonomously.


APPENDICES

APPENDIX A: The Developed Image Processing Code

#include "main.h"
#include "stdio.h"

// Horizontal and vertical blankings (in NTSC/PAL frame) in bytes.
#define H_BLANK (unsigned int)(ADI_ITU656_BLANKING+ADI_ITU656_EAV_SIZE+ADI_ITU656_SAV_SIZE)
#define V_BLANK (unsigned int)(ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH)

void convertToGrayScale(volatile short *ptrImage, unsigned char *ptrOut);
void extractCb(volatile short *ptrImage, unsigned char *ptrOut);
void extractCr(volatile short *ptrImage, unsigned char *ptrOut);
void Sobel_Edge(unsigned char *ptr);
void Initialize_Hough_Array(unsigned volatile short *ptrHT_Array);
void Hough_Transform(unsigned char *ptrIn, unsigned volatile short *ptrHT_Array);
void Hough_Transform_Non_Optimized(unsigned char *ptrIn, unsigned volatile short *ptrHT_Array);
void Hough_Peaks(unsigned volatile short *ptrHT_Array);
void Find_Lane_Marks(void);
void Draw_Hough_Lines(volatile short *ptrOut, signed volatile char *theta, signed volatile short *rho, unsigned volatile short nlines, unsigned char color);
void copy_ProcessedImage2Buffer(unsigned char *ptrIn, volatile short *ptrOut);
void deinterlaceFrame(volatile short *ptrIn);
void Imag_Thr(unsigned char *ptr, unsigned char thr);
void Navigate_Robot(void);
void Perspective_Correction(void);

void processFrame(volatile short *ptrImage)
{
    unsigned short peak_count; // # of peaks detected in Hough space

    // The NTSC/PAL frame is resized and converted to a gray-scale image.
    // Only active field 1 is kept.
    // # of pixels per line (columns) is reduced by 2.
    convertToGrayScale(ptrImage, &processedImage[0]);
    //extractCb(ptrImage, &processedImage[0]);
    //Sobel_Edge(&processedImage[0]);

    // initialize the accumulator array
    Initialize_Hough_Array(&houghArray[0]);

    // compute the Hough transform of the image
    Hough_Transform_Non_Optimized(&processedImage[0], &houghArray[0]);
    //Hough_Transform(&processedImage[0], &houghArray[0]);

    // search the Hough space for peaks
    Hough_Peaks(&houghArray[0]);

    // find lane marks among the peaks located
    Find_Lane_Marks();

    // if dispVideo is set to 1 in system.h:
    // draw the lane marks over the image
    if (dispVideo==1) {
        Draw_Hough_Lines(&sFrame0[0], &theta_peaks[0], &rho_peaks[0], N_peaks, 0);
        //Draw_Hough_Lines(&sFrame0[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[0], &rho_peaks[0], N_peaks, 0);
        //Draw_Hough_Lines(&sFrame1[0], &theta_peaks[0], &rho_peaks[0], N_peaks, 0);
        //Draw_Hough_Lines(&sFrame1[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[0], &rho_peaks[0], N_peaks, 0);
    }

    // perspective correction for the lanes detected
    Perspective_Correction();

    // if dispVideo is set to 1 in system.h:
    // draw the lane marks (perspective corrected) over the image,
    // and draw the robot's heading vector, too
    if (dispVideo==1) {
        Draw_Hough_Lines(&sFrame0[0], &theta_peaks[0], &rho_peaks[0], N_peaks, 1);
        //Draw_Hough_Lines(&sFrame0[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[0], &rho_peaks[0], N_peaks, 1);
        //Draw_Hough_Lines(&sFrame1[0], &theta_peaks[0], &rho_peaks[0], N_peaks, 1);
        //Draw_Hough_Lines(&sFrame1[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[0], &rho_peaks[0], N_peaks, 1);
        Draw_Hough_Lines(&sFrame0[0], &theta_peaks[2], &rho_peaks[2], 1, 2);
        //Draw_Hough_Lines(&sFrame0[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[2], &rho_peaks[2], 1, 2);
        //Draw_Hough_Lines(&sFrame1[0], &theta_peaks[2], &rho_peaks[2], 1, 2);
        //Draw_Hough_Lines(&sFrame1[0]+(ADI_ITU656_ILF2_START-ADI_ITU656_ILF1_START)*(ADI_ITU656_LINE_WIDTH/2), &theta_peaks[2], &rho_peaks[0], 1, 2);
    }

    // navigate
    //Navigate_Robot();
}

/******************************
 The NTSC/PAL frame is resized and converted to a gray-scale image.
 Only active field 1 is kept.
 # of pixels per line (columns) is reduced by 2.
 For an NTSC frame the resulting image is a 240x360 gray-scale image.
 ******************************/
void convertToGrayScale(volatile short *ptrImage, unsigned char *ptrOut)
{
    // ptrImage is a pointer to the NTSC/PAL frame
    // ptrOut is a pointer to the output image
    unsigned int i, j;
    u8 *ptrIn;
    int rows = N_Rows;
    int cols = N_Cols;

    // Skip vertical blanking.
    ptrIn = ((u8 *)ptrImage) + 1 + V_BLANK;
    // Get every other luminosity (Y) value.
    for (i=0; i<rows; i++) {
        ptrIn += H_BLANK; // skip horizontal blanking
        for (j=0; j<cols; j++) {
            *(ptrOut++) = *(ptrIn);
            ptrIn += 4; // Y values are bytes 2-4-6-8... (every other Y is kept)
        }
    }
}

/******************************
 The NTSC/PAL frame is resized and the blue-difference (Cb) values are extracted.
 Only active field 1 is kept.
 # of pixels per line (columns) is reduced by 2.
 For an NTSC frame the resulting image is a 240x360 gray-scale image.
 ******************************/
void extractCb(volatile short *ptrImage, unsigned char *ptrOut)
{
    // ptrImage is a pointer to the NTSC/PAL frame
    // ptrOut is a pointer to the output image
    unsigned int i, j;
    u8 *ptrIn;
    int rows = N_Rows;
    int cols = N_Cols;

    // Skip vertical blanking.
    ptrIn = ((u8 *)ptrImage) + V_BLANK;
    // Extract the blue-difference (Cb) values.
    for (i=0; i<rows; i++) {
        ptrIn += H_BLANK; // skip horizontal blanking
        for (j=0; j<cols; j++) {
            *(ptrOut++) = *(ptrIn);
            ptrIn += 4; // Cb values are bytes 1-5-9-...
        }
    }
}

/******************************
 The NTSC/PAL frame is resized and the red-difference (Cr) values are extracted.
 Only active field 1 is kept.
 # of pixels per line (columns) is reduced by 2.
 For an NTSC frame the resulting image is a 240x360 gray-scale image.
 ******************************/
void extractCr(volatile short *ptrImage, unsigned char *ptrOut)
{
    // ptrImage is a pointer to the NTSC/PAL frame
    // ptrOut is a pointer to the output image
    unsigned int i, j;
    u8 *ptrIn;
    int rows = N_Rows;
    int cols = N_Cols;

    // Skip vertical blanking.
    ptrIn = ((u8 *)ptrImage) + 2 + V_BLANK;
    // Extract the red-difference (Cr) values.
    for (i=0; i<rows; i++) {
        ptrIn += H_BLANK; // skip horizontal blanking
        for (j=0; j<cols; j++) {
            *(ptrOut++) = *(ptrIn);
            ptrIn += 4; // Cr values are bytes 3-7-11-...
        }
    }
}

/******************************
 Function        : Sobel_Edge
 Input Argument  : (*ptr) pointer to the input image
 Return Argument : the Sobel-filtered image
 Purpose         : Filters an image with a Sobel filter.
                   The output is written over the input image.
 ******************************/
void Sobel_Edge(unsigned char *ptr)
{
    int i; // row index
    int j; // column index
    signed char Gx, Gy;
    unsigned char preline[N_Cols];
    unsigned char left_pixel;
    unsigned char current_pixel;
    int rows = N_Rows; // # of rows
    int cols = N_Cols; // # of columns

    // Threshold used on the Sobel-filtered image to find edges.
    // EDGE_THRESH is defined in system.h
    unsigned char edge_thr = EDGE_THRESH;
    // Only edge pixels with a gray level below this threshold are considered.
    // GRAY_THRESH is defined in system.h
    unsigned char gray_thr = GRAY_THRESH;

    for (i=0;i<cols;i++)
        preline[i] = *ptr++;
    left_pixel = *ptr;
    for (i=1;i<rows-1;i++) {
        for (j=1;j<cols-1;j++) {
            current_pixel = *(++ptr);
            Gy = (signed char) left_pixel-*(ptr+1); // vertical filter [1 0 -1]
            //Gx = (signed char) preline[j]-*(ptr+cols); // horizontal filter [1 ; 0 ; -1]
            if (current_pixel<=gray_thr && abs(Gy)>=edge_thr)
                *ptr = 255;
            else
                *ptr = 0;
            preline[j-1] = left_pixel;
            left_pixel = current_pixel;
        }
        preline[cols-2] = left_pixel;
        preline[cols-1] = *(++ptr);
        ptr++;
    }
}

/******************************
 Function         : Initialize_Hough_Array
 Input Argument   : (*ptrHT_Array) pointer to the Hough accumulator array
 Return Arguments : none
 Purpose          : Initializes all values of the array to zero.
 ******************************/
void Initialize_Hough_Array(unsigned volatile short *ptrHT_Array)
{
    unsigned short n_rho = N_RHO;
    unsigned short n_theta = N_THETA;
    int i, j;

    for (i=0;i<n_rho;i++)
        for (j=0;j<n_theta;j++)
            *ptrHT_Array++ = 0;
}

/******************************
 Function         : Hough_Transform
 Input Argument   : (*ptrIn) pointer to the input gray-scale image
 Return Arguments : (*ptrHT_Array) pointer to the Hough accumulator array
 Purpose          : Computes the HT of a gray-scale image, returning the
                    accumulator array.
 ******************************/
void Hough_Transform(unsigned char *ptrIn, unsigned volatile short *ptrHT_Array)
{
    unsigned short int i; // row index
    unsigned short int j; // column index
    unsigned short houghArrayInd;
    unsigned short rows = N_Rows; // # of rows
    unsigned short cols = N_Cols; // # of columns

    // define the region to be processed
    unsigned short row_min = rows/2;
    unsigned short row_max = rows-1;
    unsigned short col_min = 10;
    unsigned short col_max = cols-col_min;

    signed char Gx, Gy;     // gradient in x and y directions
    float slope;            // Gy/Gx
    signed short int_slope;

    // Hough parameters defined in system.h
    signed short rho_min = RHO_MIN;
    signed short rho_max = RHO_MAX;
    unsigned char rho_res = RHO_RES;
    signed char theta_min = THETA_MIN;
    signed char theta_max = THETA_MAX;
    unsigned char theta_res = THETA_RES;
    unsigned short n_rho = N_RHO;
    unsigned short n_theta = N_THETA;

    // rho and theta indices and values
    unsigned short rho_ind, theta_ind;
    signed short rho;
    signed char theta;
    signed char theta_lower_limit, theta_upper_limit;

    // parameters for the trig-function look-up tables
    unsigned short LUT_ind;
    signed short atan_max = ATAN_MAX;
    signed short atan_min = ATAN_MIN;
    float atan_res = ATAN_RES;

    // x and y values in the line equation
    unsigned short x, y;
    // max y value
    short ymax = rows-1;

    // Threshold used on the Sobel-filtered image to find edges.
    // EDGE_THRESH is defined in system.h
    unsigned char edge_thr = EDGE_THRESH;
    // Only edge pixels with a gray level below this threshold are considered.
    // GRAY_THRESH is defined in system.h
    unsigned char gray_thr = GRAY_THRESH;

    // set the pointer to the start of the second half of the image
    // (the first line is not processed)
    ptrIn += row_min*cols;
    for (i=row_min;i<row_max;i++) {
        ptrIn += col_min;
        for (j=col_min;j<col_max;j++) {
            if (*ptrIn<=gray_thr) // for each pixel below the gray threshold:
            {
                Gy = (signed char) *(ptrIn-1) - *(ptrIn+1); // vertical filter [1 0 -1]
                //Gy = (signed char) *(ptrIn) - *(ptrIn-1);
                //Gy = ((signed char) *(ptrIn-1-cols)-*(ptrIn+1-cols)) + ((signed char) *(ptrIn-1)-*(ptrIn+1)) + ((signed char) *(ptrIn-1+cols)-*(ptrIn+1+cols));
                Gx = (signed char) *(ptrIn-cols) - *(ptrIn+cols); // horizontal filter [1 ; 0 ; -1]
                //Gx = (signed char) *(ptrIn) - *(ptrIn-cols);
                //Gx = ((signed char) *(ptrIn-1-cols)-*(ptrIn-1+cols)) + ((signed char) *(ptrIn-cols)-*(ptrIn+cols)) + ((signed char) *(ptrIn+1-cols)-*(ptrIn+1+cols));
                if (abs(Gy)>=edge_thr) // if the current pixel is an edge pixel:
                {
                    if (Gx==0) {
                        theta_lower_limit = -6;
                        theta_upper_limit = 6;
                    } else {
                        slope = ((float) Gy)/Gx;
                        int_slope = slope;
                        if (int_slope>=atan_max || int_slope<=atan_min) {
                            theta_lower_limit = -6;
                            theta_upper_limit = 6;
                        } else {
                            LUT_ind = (slope+atan_max)/atan_res;
                            theta = atan_LUT[LUT_ind];
                            theta_lower_limit = theta-6;
                            theta_upper_limit = theta+6;
                        }
                    }

                    // x and y values from the indices;
                    // the origin (0,0) is the lower-left corner of the image
                    x = j;
                    y = ymax-i;

                    for (theta=theta_lower_limit; theta<=theta_upper_limit; theta+=theta_res)
                    //for (theta=theta_min; theta<=theta_max; theta+=theta_res)
                    {
                        // compute the theta index
                        //theta_ind = (theta+theta_max)/theta_res; // comment out if theta_res = 1
                        theta_ind = theta + theta_max; // comment out if theta_res != 1

                        // compute the corresponding rho value from the line equation
                        LUT_ind = theta+90;
                        rho = (x*cos_LUT[LUT_ind] + y*sin_LUT[LUT_ind])/32767;
                        rho_ind = (rho+rho_max)/rho_res; // comment out if rho_res = 1
                        //rho_ind = rho + rho_max; // comment out if rho_res != 1

                        // update the accumulator array
                        houghArrayInd = rho_ind * n_theta + theta_ind;
                        ptrHT_Array[houghArrayInd]++;
                        //ptrHT_Array[houghArrayInd-1]++;
                        //ptrHT_Array[houghArrayInd+1]++;
                        ptrHT_Array[houghArrayInd-n_theta]++;
                        ptrHT_Array[houghArrayInd+n_theta]++;
                    }
                }
            }
            ptrIn++;
        }
        ptrIn += col_min;
    }
}

/******************************
 Function         : Hough_Transform_Non_Optimized
 Input Argument   : (*ptrIn) pointer to the input gray-scale image
 Return Arguments : (*ptrHT_Array) pointer to the Hough accumulator array
 Purpose          : Computes the HT of a gray-scale image, returning the
                    accumulator array (non-optimized version).
 ******************************/
void Hough_Transform_Non_Optimized(unsigned char *ptrIn, unsigned volatile short *ptrHT_Array)
{
    unsigned short int i; // row index
    unsigned short int j; // column index
    unsigned short houghArrayInd;
    unsigned short rows = N_Rows; // # of rows
    unsigned short cols = N_Cols; // # of columns

    // define the region to be processed
    unsigned short row_min = rows/2;
    unsigned short row_max = rows-1;
    unsigned short col_min = 10;
    unsigned short col_max = cols-col_min;

    signed char Gx, Gy;     // gradient in x and y directions
    float slope;            // Gy/Gx
    signed short int_slope;

    // Hough parameters defined in system.h
    signed short rho_min = RHO_MIN;
    signed short rho_max = RHO_MAX;
    unsigned char rho_res = RHO_RES;
    signed char theta_min = THETA_MIN;
    signed char theta_max = THETA_MAX;
    unsigned char theta_res = THETA_RES;
    unsigned short n_rho = N_RHO;
    unsigned short n_theta = N_THETA;

    // rho and theta indices and values
    unsigned short rho_ind, theta_ind;
    signed short rho;
    signed char theta;
    signed char theta_lower_limit, theta_upper_limit;

    // parameters for the trig-function look-up tables
    unsigned short LUT_ind;
    signed short atan_max = ATAN_MAX;
    signed short atan_min = ATAN_MIN;
    float atan_res = ATAN_RES;

    // x and y values in the line equation
    unsigned short x, y;
    // max y value
    short ymax = rows-1;

    // Threshold used on the Sobel-filtered image to find edges.
    // EDGE_THRESH is defined in system.h
    unsigned char edge_thr = EDGE_THRESH;
    // Only edge pixels with a gray level below this threshold are considered.
    // GRAY_THRESH is defined in system.h
    unsigned char gray_thr = GRAY_THRESH;

    // set the pointer to the start of the second half of the image
    // (the first line is not processed)
    ptrIn += row_min*cols;
    for (i=row_min;i<row_max;i++) {
        ptrIn += col_min;
        for (j=col_min;j<col_max;j++) {
            if (*ptrIn<=gray_thr) // for each pixel below the gray threshold:
            {
                Gy = (signed char) *(ptrIn-1) - *(ptrIn+1); // vertical filter [1 0 -1]
                //Gy = (signed char) *(ptrIn) - *(ptrIn-1);
                //Gy = ((signed char) *(ptrIn-1-cols)-*(ptrIn+1-cols)) + ((signed char) *(ptrIn-1)-*(ptrIn+1)) + ((signed char) *(ptrIn-1+cols)-*(ptrIn+1+cols));
                Gx = (signed char) *(ptrIn-cols) - *(ptrIn+cols); // horizontal filter [1 ; 0 ; -1]
                //Gx = (signed char) *(ptrIn) - *(ptrIn-cols);
                //Gx = ((signed char) *(ptrIn-1-cols)-*(ptrIn-1+cols)) + ((signed char) *(ptrIn-cols)-*(ptrIn+cols)) + ((signed char) *(ptrIn+1-cols)-*(ptrIn+1+cols));
                if (abs(Gy)>=edge_thr) // if the current pixel is an edge pixel:
                {
                    // do not limit the theta range from the gradient (before optimization)
                    if (Gx==0) {
                        theta_lower_limit = -6;
                        theta_upper_limit = 6;
                    } else {
                        slope = ((float) Gy)/Gx;
                        int_slope = slope;
                        if (int_slope>=atan_max || int_slope<=atan_min) {
                            theta_lower_limit = -6;
