Monday, July 23, 2018

Are you planning to use drones for your business? READ THIS FIRST.



A lot has been happening in the drone industry. A new company comes up every month with a new concept, and new ideas are constantly being tested for feasibility. Many of these concepts are yet to be commercialized, let alone see widespread use. But there is one thing small drones have proved they can do very well: aerial photography, or simply put, capturing photos and videos using a basic camera setup. Couple this setup with an intelligent drone capable of autonomous navigation and you have a powerful tool for aerial surveys and inspections.
As a prospective business owner or a manager who wishes to explore the use of drones in your business, you probably have some common questions in mind. And too often, drone service providers steer people away from reality with flashy presentations and videos, or by promising metrics that they themselves are not sure of in the first place.
In this article I hope to give a clearer picture of what to expect when using a drone in your field. I may not cover every possible application, but I will cover the ones that are widely used.
You need to answer these questions for yourself, starting with:
INSPECTIONS:-
1. Does your business involve assets that have to be monitored?
2. Are those assets hard to reach, i.e. are they tall, far away, or tricky to access?
3. Do photos (and their derivatives) alone solve your inspection problem?
4. What kind of data do you expect: visual, NIR, or thermal?
If all your answers are yes, then maybe using a drone could be useful in your business. The "maybe" is because of the cost associated with using the drone. As you explore further, you will end up with some decisive metrics worth worrying about.
1) Resolution of the images:- Most service providers use off-the-shelf DJI products, so take note of which equipment will be used. The main things to check are the camera specifications: resolution (generally, the higher the better), sensor size (again, higher the better), and zoom. Choose based on how close the drone can fly to the asset, and ask for zoom capabilities if you are sure the drone will not be able to fly very close.
2) Range:- The maximum limit of height or distance. Most drones available today have enough range for most applications, but if you know your asset is far away, do check with the vendor.
3) Operation time:- Ask how much time would be needed to inspect a single asset. Maximize the drone's usage by asking the service provider to carry extra batteries, so you get the most data collected in a day.

4) Analytics:- This is the analysis required on the imagery, like extracting details, assessing damage, and generating reports. Service providers will almost always throw in some confusing words about AI and machine learning. Strictly speaking, it doesn't matter whether the provider uses AI or does the work manually, as long as the assessment is within your time bounds and the cost is the same. Don't choose a vendor based just on what he says; check sample reports personally to be sure of what you will finally get.
5) Costing:- Pricing varies with the kind of drone the service provider brings. Make a note of the quoted value from different vendors with similar drones. Also, don't cut corners on analytics; drone data in the form of raw imagery alone is of no use. Either make provisions in-house or make sure the service provider does the necessary pre-processing so that you get concise reports.

SURVEYING:-
Drones seem to have taken the surveying industry by storm. With the reduction in cost of photogrammetric processing software and access to cloud-based processing, the industry seems to be blooming. It may seem like you no longer have to use traditional ground-based surveying techniques. Is that so? NO, ABSOLUTELY NOT.
Although photogrammetric surveying is fast and relatively accurate, it doesn't solve all your problems. To name a few gaps: you don't get terrain information under trees, in grasslands, or in dense forests.
Again, let's get back to the questions one should ask before opting for such a survey.
1) Is your land parcel huge?
2) Do you need the surveying to be done at a fast pace?
3) Is a high-resolution orthophoto a must for your project?
4) Are you flexible about accuracy varying over the entire parcel? [Yes, it's true! The data will be more accurate near the GCPs.]
5) Is your accuracy requirement between 10-20 cm (elevation) and not less? [A lot of vendors will happily promise an accuracy of 1-2 cm (elevation); be careful with such claims. If accuracy is of utmost importance for your project, do cross-check carefully.]
6) Do you need dense point data for cut/fill or volume calculations?

If all or some of your answers are yes, then you can think of using drones for your project. Some key metrics to take note of:

1) Ground Sampling Distance (GSD):- This is the most important metric. It defines the resolution, and indirectly the accuracy, of your data. In simple terms, it is the length or breadth of ground covered by one pixel. The smaller the GSD, the better the quality of the data. But that doesn't mean a very low GSD is always the way to go: the chosen GSD indirectly defines the time required for surveying and processing. To get a small GSD (a clearer picture), the drone has to fly low and make many passes over the site, which increases acquisition time, and more images mean more time consumed in processing. Typically, a GSD of 2-3 cm gives an elevation accuracy in the range of 10-15 cm.
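To see how these numbers hang together, here is a back-of-the-envelope GSD calculator using the standard photogrammetric relation. The camera figures below are placeholders (roughly those of a popular 1-inch-sensor survey drone); take the real values from your camera's datasheet.

#include <stdio.h>

int main(void)
{
    double sensor_width_mm = 13.2;   /* placeholder: a 1-inch sensor */
    double focal_length_mm = 8.8;    /* placeholder */
    double image_width_px  = 5472;   /* placeholder */
    double height_m        = 80;     /* flying height above ground */

    /* GSD (cm/pixel) = sensor width x height x 100 / (focal length x image width) */
    double gsd_cm = (sensor_width_mm * height_m * 100.0) /
                    (focal_length_mm * image_width_px);

    printf("GSD at %.0f m: %.2f cm/pixel\n", height_m, gsd_cm);
    return 0;
}

With these numbers the GSD works out to about 2.2 cm/pixel, right in the 2-3 cm band mentioned above; halving the flying height halves the GSD but roughly doubles the number of passes needed.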
2) GCPs (Ground Control Points):- As stated earlier, accuracy depends on the GSD and on the control points laid on the ground. Control points are simply markers on the ground whose positions have been measured using a total station or DGPS. Around 2-5 points per square kilometer are sufficient for most cases. These points are used by the photogrammetric processing software to correct model errors. This is a very critical step: the final data you receive may be incorrect simply because of incorrect GCP readings or because too few were used.
There is also a new module that vendors offer with the survey package, called PPK. It is basically a GPS receiver on the drone whose data is later corrected using data gathered from a static GPS on the ground. But that doesn't mean you won't need any GCPs; you will still need a few of them. Be very careful while using PPK and do the necessary cross-checks on the ground. Also, the static base station should be placed on a known point for the data to match the surrounding benchmark elevations.
3) Sidelap and overlap between photos:- This part is usually the vendor's headache, but a poor dataset with low sidelap and overlap will lead to highly inaccurate data. The general rule of thumb is a sidelap greater than 60% and an overlap greater than 80%.
4) Camera quality:- A better camera with a larger sensor leads to a better outcome, especially if you are planning to use the orthophoto for feature extraction and other purposes. Always check which drone/camera the vendor is using.
5) Flight time / Range:- Make sure you know how much time the vendor will take to cover your project. Typically a drone can cover 5-10 sq km of area, or 10-15 km of linear survey, in a day. You can gather much more depending on the type of drone and the expected GSD.
6) Processing time:- Always ask for the time taken for processing. It can run into days depending on the number of images and the GSD. Photogrammetric processing does sometimes generate errors, so make sure you personally verify the outputs against data gathered on the ground.
7) DSM / DTM:- The key point to note with a drone survey is that you get a Digital Surface Model (DSM): the elevation values represent the top of a tree or a structure, not the ground beneath it. In most cases an automatic workflow is used to remove structures and generate the terrain model (DTM). This is an approximation, and almost always some error is introduced by it. So be very clear about how the DSM-to-DTM conversion will affect your calculations.

8) Costing:- Pricing is generally defined per unit area or unit length along with the GSD requirements, plus the cost of laying and recording ground control points. Don't just blindly go with the vendor who quotes the least. Get into the details: the number of GCPs laid, the equipment to be used, the processing software, and the workflow to be followed. Always ask for the mission plans beforehand to check whether the vendor knows what he is going to do in the field.
Apart from the above metrics, nothing beats an educated and knowledgeable service provider. No matter how powerful the tool, a poorly informed vendor will not be able to deliver the excellent outcome you were expecting from drones.
Be very wary of vendors that charge extremely low prices. They almost always have very limited experience with drones, and chances are they will cut corners on a lot of things. This can lead to bigger problems if your designs or calculations are based on incorrect information.

VIDEO MONITORING:-
Lastly, there is plain old video recording of the project. Nothing fancy here: if your project can be shown better in an aerial photo or video and the service cost is justifiable, go for it.
Points to be noted:
1) Resolution:- It could be HD, Full HD, or 4K. Most DJI drones offer 4K; higher is better.
2) Annotations:- Overlay critical information on the video, such as location, asset, metrics, etc.
3) Good pilot:- A good pilot is a must, unless you are planning to capture the video using autonomous navigation.
This service is usually available very cheaply. Do cross-check between vendors before finalizing anyone.

Hopefully I was able to give you some clarity regarding the use of drones in your domain. If you are still uncertain whether drones can help your business, you can reach me personally; I would be happy to help you out.
Thank you for reading.

Aniket A Tatipamula

Director | Airpix

E: aniket@airpix.in

M: 9028208536

Tuesday, December 11, 2012

Ball Tracking / Detection using OpenCV

Ball detection is pretty easy in OpenCV. To start with, let's describe the steps we will go through.

                       LINK TO THE CODE




1. Load an image / start a video capture.




2. Convert the image from RGB space to HSV space. HSV (hue, saturation, value) space gives us better results when doing color-based segmentation.

3. Separate the image into its 3 component images (i.e. H, S, and V, each of which is a one-dimensional intensity image).
H component

S component

V component


4. Apply a condition on the intensity values of the image to get a binary image.
For example, take the H intensity image. If our ball is red, we will find that the values of the pixels where the ball is present lie in a specific range. So we define a condition for every pixel: if (pixel > threshold_min && pixel < threshold_max), the corresponding output pixel is 1; otherwise it is 0.

NOTE: for the purpose of calibration, there are two sliders on each component image to set the lower and upper limits of the pixel values.

H component after condition


We do the same for the other components, i.e. for S and V.


S component after condition

V component after condition

5. Now we have three binary images (black and white only), in which the region of the ball is 1, along with any other regions whose intensity values happened to fall within the thresholds. Pixels that do not pass the condition are 0.


6. We then combine the three binary images (i.e. we AND them all). Only the pixels that are white in all three images will be white in the output of this step. There will still be some regions of 1's, but with smaller areas and random shapes.

Combined image

7. Now we use the Hough transform on the output of the last operation to find the regions which are circular in shape.

8. Then we draw a marker on the detected circles and display the center and radius of each circle.
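Putting the steps together, here is a minimal sketch of the whole pipeline in the old OpenCV C API. One shortcut: cvInRangeS applies the per-channel range test and ANDs the results, so it covers steps 3-6 in a single call. The six threshold values are the ones you would otherwise tune with the sliders; the numbers below are just placeholders for a red ball.

#include <stdio.h>
#include <cv.h>
#include <highgui.h>

int main(void)
{
    CvCapture* cap = cvCaptureFromCAM(0);            /* step 1: start capture */
    CvMemStorage* storage = cvCreateMemStorage(0);
    if (!cap) return 1;
    cvNamedWindow("ball", CV_WINDOW_AUTOSIZE);

    for (;;) {
        IplImage* frame = cvQueryFrame(cap);         /* owned by the capture */
        if (!frame) break;

        IplImage* hsv  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
        IplImage* mask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, hsv, CV_BGR2HSV);          /* step 2 */

        /* steps 3-6: range test on H, S, V; the results are ANDed into mask */
        cvInRangeS(hsv, cvScalar(160, 100, 100, 0),
                        cvScalar(179, 255, 255, 0), mask);
        cvSmooth(mask, mask, CV_GAUSSIAN, 9, 9, 0, 0); /* suppress small specks */

        /* step 7: Hough transform for circles */
        CvSeq* circles = cvHoughCircles(mask, storage, CV_HOUGH_GRADIENT,
                                        2, mask->height / 4, 100, 40, 10, 200);

        /* step 8: draw markers and print center/radius */
        for (int i = 0; i < circles->total; i++) {
            float* c = (float*)cvGetSeqElem(circles, i);
            cvCircle(frame, cvPoint(cvRound(c[0]), cvRound(c[1])),
                     cvRound(c[2]), CV_RGB(0, 255, 0), 2, 8, 0);
            printf("center (%.0f, %.0f)  radius %.0f\n", c[0], c[1], c[2]);
        }

        cvShowImage("ball", frame);
        cvReleaseImage(&hsv);
        cvReleaseImage(&mask);
        cvClearMemStorage(storage);
        if (cvWaitKey(10) == 27) break;              /* Esc quits */
    }

    cvReleaseCapture(&cap);
    return 0;
}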





Thursday, February 2, 2012

Setting up opencv on DEV C++



This is really simple if you know what to do.

  1. Download OpenCV: install it and note the path where it is installed, e.g. C:\OpenCV2.x
  2. Download Dev-C++ and install it.

Once done with both,
open Dev-C++ and go to Tools - Compiler Options. Add a new compiler (click the plus sign).
Name it OpenCV.

Tick the option "Add the following commands when calling the compiler" and add:
-L"C:\OpenCV\lib" -lcxcore210 -lcv210 -lcvaux210 -lhighgui210 -lml210

While doing so, change the lib path (C:\OpenCV\lib) according to where you installed OpenCV. Go to the lib folder in the OpenCV directory and check the names of the files being linked; e.g. for cxcore210, check whether there is a different name and replace it accordingly.


Tick the option "Add these commands to the linker command line" and add:
-lcxcore210 -lcv210 -lcvaux210 -lhighgui210 -lml210


Now go to Directories.

First, under Binaries, add the path to the OpenCV bin folder:
C:\OpenCV\bin
(change it according to your bin path)

Then go to Libraries and add the path to the OpenCV lib folder:
C:\OpenCV\lib
(change it according to your lib path)

Then go to C Includes and add the path to the OpenCV include folder:
C:\OpenCV\include
(change it according to your include path)

Then go to C++ Includes and add the same include folder:
C:\OpenCV\include


Now go to Environment Variables, edit the Path variable, add the OpenCV bin folder to it, and save.
Again, the bin path should match your OpenCV install directory; change it accordingly.




Click OK and you are done.
Go to the samples and run them.
If you get errors, make sure you have selected OpenCV as the project's compiler:

Project - Project Options - Compiler
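If you would rather not hunt through the samples, here is a minimal smoke test; it assumes a test.jpg sitting next to the executable (the file name is just a placeholder).

#include <stdio.h>
#include "cv.h"
#include "highgui.h"

int main(void)
{
    IplImage* img = cvLoadImage("test.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img) {
        printf("could not load test.jpg\n");
        return 1;
    }
    cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
    cvShowImage("test", img);
    cvWaitKey(0);                 /* wait for any key */
    cvReleaseImage(&img);
    cvDestroyWindow("test");
    return 0;
}

If this compiles, links, and shows the image, the compiler, linker, and PATH settings above are all correct.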

cheers

Hand gesture using opencv


Hi! In this post I will describe the code for hand gesture recognition using OpenCV. The code is written in C on Dev C++; for installing the necessary libraries on Dev C++ you can check my previous post. To start with, I had to extract the hand region, which can be done in many ways, for example:
1) segment the hand region using RGB values, i.e. the R, G, B values of the hand will be different from the background,
OR
2) use edge detection,
OR
3) use background subtraction.

I have used a background subtraction model. OpenCV provides several background subtraction models; I chose the codebook model (no specific reason). What it does is calibrate for some time, to be exact for some number of frames, during which, for all the images it acquires, it calculates the average and deviation of each pixel and designates boxes accordingly. For more information please refer to a book.

So at this stage we have removed the background, and in the foreground we only have our hand. For those who are new to CV, it is like a black and white image with only the hand as white.

  
In the next part we intend to recognise the gesture. Here we use the convex hull to find the fingertips. The convex hull is basically the convex set enclosing the hand region.


The red line bounding the hand is the convex hull. It is basically a convex set, meaning that if we take any two points inside the red region and join them with a line, the line lies entirely inside the set.



The yellow dot is a defect point, and there will be many such defect points, i.e. every valley has a defect point. Depending on the number of defect points we can calculate the number of fingers unfolded.



Summary:
  • The hand region extraction has been done using background subtraction with the codebook method.
  • For tip points I have used cvConvexHull2, and for depth points, convexity defects.
The main code for extracting the contour and detecting the convexity points is in the function
void detect(IplImage* img_8uc1,IplImage* img_8uc3);

Place the camera in front of a steady background, run the code, and wait for a while. Once the calibration has been done, you will see the connected-component image showing some disturbance. Bring your hand into the camera's view. Enjoy!

VIDEOS:-




CODES:-

Link 1 : Convex Hull2 usage

Link 2 : Hand gesture recognition

                    FOR OPENCV 2.4


Background subtraction has been done using the codebook method.
My code is written over the basic codebook example available in the OpenCV samples, so everything I have added is contained in a new function named detect().

void detect(IplImage* img_8uc1,IplImage* img_8uc3) {

//img_8uc1 is the binary image with the hand as white; img_8uc3 is the original image


CvMemStorage* storage = cvCreateMemStorage();
CvSeq* first_contour = NULL;
CvSeq* maxitem=NULL;
double area=0,areamax=0;
int maxn=0;


//function to find the white objects in the image and return the object boundaries

int Nc = cvFindContours(
img_8uc1,
storage,
&first_contour,
sizeof(CvContour),
CV_RETR_LIST // Try all four values and see what happens
);


int n=0;
//printf( "Total Contours Detected: %d\n", Nc );


//Here we find the contour with maximum area

if(Nc>0)
{
for( CvSeq* c=first_contour; c!=NULL; c=c->h_next )
{
//cvCvtColor( img_8uc1, img_8uc3, CV_GRAY2BGR );
area=cvContourArea(c,CV_WHOLE_SEQ );
if(area>areamax)
{areamax=area;
maxitem=c;
maxn=n;
}

n++;
}



CvMemStorage* storage3 = cvCreateMemStorage(0);
//if (maxitem) maxitem = cvApproxPoly( maxitem, sizeof(maxitem), storage3, CV_POLY_APPROX_DP, 3, 1 );


if(areamax>5000) // check for area greater than a certain value, then find the convex hull
{
maxitem = cvApproxPoly( maxitem, sizeof(CvContour), storage3, CV_POLY_APPROX_DP, 10, 1 );
CvPoint pt0;
CvMemStorage* storage1 = cvCreateMemStorage(0);
CvMemStorage* storage2 = cvCreateMemStorage(0);
CvSeq* ptseq = cvCreateSeq( CV_SEQ_KIND_GENERIC|CV_32SC2, sizeof(CvContour),
sizeof(CvPoint), storage1 );
CvSeq* hull;
CvSeq* defects;
for(int i = 0; i < maxitem->total; i++ )
{ CvPoint* p = CV_GET_SEQ_ELEM( CvPoint, maxitem, i );
pt0.x = p->x;
pt0.y = p->y;
cvSeqPush( ptseq, &pt0 );
}
hull = cvConvexHull2( ptseq, 0, CV_CLOCKWISE, 0 );
int hullcount = hull->total;
defects= cvConvexityDefects(ptseq,hull,storage2 );
//printf(" defect no %d \n",defects->total);

CvConvexityDefect* defectArray;
int j=0;
//int m_nomdef=0;
// This cycle marks all defects of convexity of current contours.
for(;defects;defects = defects->h_next)
{
int nomdef = defects->total; // defect amount
//outlet_float( m_nomdef, nomdef );
//printf(" defect no %d \n",nomdef);
if(nomdef == 0)
continue;
// Alloc memory for defect set.
//fprintf(stderr,"malloc\n");
defectArray = (CvConvexityDefect*)malloc(sizeof(CvConvexityDefect)*nomdef);
// Get defect set.
//fprintf(stderr,"cvCvtSeqToArray\n");
cvCvtSeqToArray(defects,defectArray, CV_WHOLE_SEQ);
// Draw marks for all defects.
for(int i=0; i<nomdef; i++)
{ printf(" defect depth for defect %d %f \n",i,defectArray[i].depth);
cvLine(img_8uc3, *(defectArray[i].start), *(defectArray[i].depth_point),CV_RGB(255,255,0),1, CV_AA, 0 );
cvCircle( img_8uc3, *(defectArray[i].depth_point), 5, CV_RGB(0,0,164), 2, 8,0);
cvCircle( img_8uc3, *(defectArray[i].start), 5, CV_RGB(0,0,164), 2, 8,0);
cvLine(img_8uc3, *(defectArray[i].depth_point), *(defectArray[i].end),CV_RGB(255,255,0),1, CV_AA, 0 );
}
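// display (number of defects - 1) as a rough count of unfolded fingers:
// each valley between two unfolded fingers produces one deep defect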
char txt[]="0";
txt[0]='0'+nomdef-1;
CvFont font;
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0, 0, 5, CV_AA);
cvPutText(img_8uc3, txt, cvPoint(50, 50), &font, cvScalar(0, 0, 255, 0));
j++;
// Free memory.
free(defectArray);
}

cvReleaseMemStorage( &storage );
cvReleaseMemStorage( &storage1 );
cvReleaseMemStorage( &storage2 );
cvReleaseMemStorage( &storage3 );
//return 0;
}
}
}


thank you!! :)

Tuesday, February 8, 2011

Line Tracker with PID

LINE TRACKER

https://docs.google.com/open?id=0B7lDtwez94H3ZWUxNDM0MjktNjE5Mi00ZDFhLWI2ZTAtZjI4MmUwZDcwMzFh

This was my first project, and I did it in my first year of college. A line tracker is the best way to get your hands into robotics. In this post I will teach you to make a line tracker, along with an advanced PID controller.






Firstly, let's define a line tracker: it is a robot which follows a line, so we need to program the robot to track the line. To do this we need to give the robot some kind of input to let it know where the line is; this is where line sensors come in. And to drive the robot we need some kind of actuators (motors).

Materials required

Mechanical:
2 DC geared motors 100rpm Rs 125 each
2 L shaped clamps to hold motors Rs 15 each
1 Castor wheel Rs 15
some wood, acrylic, or aluminium to build a chassis (for my first bot I used a plastic box)

Electronics:

Dev board

1 Atmega16 microcontroller
1 40 pin mount
berg strips
connecting wires
parallel port connector DB25 male
330 ohms resistor
IC 7805
IC L293D
PCB

Sensor Board

8 pairs of IR LEDs and photodiodes (Tx, Rx)
330 ohms resistor
10k ohms resistor
PCB

Circuit Diagram


D (1, 3, 5, 7, 9, 11) - Tx; D (2, 4, 6, 8, 10) - Rx; R (1, 3, 5, 7, 9, 11) - 330 ohms; R (2, 4, 6, 8, 10) - 10 kohms.
In the above circuit, the first one is the transmitter circuit and below it is the receiver circuit.
The transmitter and receiver should be placed one below the other.

Working:
A transmitter is a simple infrared LED; it emits infrared light when forward biased.
The receiver is a photodiode, used in the reverse-biased state: when infrared light falls on it, the resistance across the reverse-biased diode decreases. This property is used to detect white and black surfaces. Now suppose your sensor pair (Tx and Rx) is over the white line. In this case the IR light emitted by the LED is reflected back by the white surface onto the Rx, the resistance across the Rx decreases, and the outputs (lf, l2, l1, r1, r2, rf) vary accordingly: under normal conditions the resistance of the Rx is close to infinity, so the voltage across the 10k resistor is close to 0 V; when IR light falls on the Rx, that voltage rises because the diode resistance drops drastically and becomes comparable to 10k.
With the above information we can fairly judge the output over white and black surfaces:
Black: output will be high
White: output will be low


Here are some pictures of the sensor board (these pictures are of a different sensor board). It has 8 sensor pairs, but there is never a need for 8; 6 should suffice for us.





OK! Now that we are done with the sensor board, it's time to test it. How?
It's really simple: take a multimeter, connect one probe (black) to GND and the other (red) to a sensor output (use the multimeter in voltage-measuring mode, with a range below 10/20 V).
Now measure the output with your hand just above the sensors and very close to them (around 1 cm above); call it V1. Then remove your hand completely, leave the area above the sensors open, and take the reading; call it V2.
The sensors work properly if there is a substantial difference between V1 and V2. V1 should always be greater than V2, and V1 - V2 should be around 1 volt. This may differ with the environment: you may get false readings if you test in sunlight, since sunlight contains a lot of IR. So if you face problems during the day and everything works fine at night, sunlight is the problem; the only solution is to cover the sensors.

The above board has 8 sensors, but it is not necessary to use all of them; we can use 4 or 6 and still follow a line perfectly well.

Now, moving on to the drive system: we need a motor-driver circuit to drive the robot's 2 motors.
You can Google for more info about motor driver circuits. I used the L293D as the driver IC; it is a simple H-bridge driver.
The IC can drive 2 motors and takes 4 inputs, 2 for each motor, which allows controlling each motor in both directions.

And finally we have a microcontroller which does the job of controlling the motors depending on the inputs from the sensor board, as in the sketch below.
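To show how the PID part ties the sensor board and the L293D together, here is a minimal sketch of the control loop. read_line_position() and set_motors() are hypothetical helpers you would write for your own wiring (reading the six sensor outputs and driving the L293D inputs/PWM); the gains are only starting points to tune on your own bot.

/* Minimal PID line-follower loop -- a sketch, not drop-in firmware. */

#define KP 1.0f          /* start with P only, then add D, then a small I */
#define KI 0.0f
#define KD 0.5f
#define BASE_SPEED 150   /* base PWM duty, 0..255 */

/* hypothetical: returns a weighted line-position error, e.g. -5..+5,
   0 when the line is centred under the sensor array */
extern int read_line_position(void);

/* hypothetical: clips each value to 0..255 and drives the L293D inputs/PWM */
extern void set_motors(int left, int right);

void line_follow(void)
{
    float integral = 0.0f, prev_error = 0.0f;

    for (;;) {
        float error      = (float)read_line_position();
        float derivative = error - prev_error;

        integral  += error;
        prev_error = error;

        float correction = KP * error + KI * integral + KD * derivative;

        /* positive error = line is to the right, so speed up the left wheel */
        set_motors(BASE_SPEED + (int)correction,
                   BASE_SPEED - (int)correction);
    }
}

Tuning is easiest in that order: raise KP until the bot follows the line but oscillates, add KD to damp the oscillation, and add a small KI only if the bot consistently sits off-centre on long straights.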


Tuesday, January 19, 2010

Speakjet





Hey guys! This post is for those who are interested in controlling the SpeakJet.
The SpeakJet is a sound-synthesizing chip.

Its features:
· Programmable, 5 channel synthesizer.
· Natural phonetic speech synthesis.
· DTMF and other sound effects.
· Programmable control of pitch, rate, bend and volume.
· Programmable power – up or reset announcements.
· Multiple modes of operation.
· Simple interface to microcontrollers.
· Simple “Stand Alone” operation.
· Three programmable digital outputs.
· Internal 64 Byte input buffer.
· Internal programmable EEPROM.
· Extremely low power consumption.
· Low pin count.
· Multiple case styles available.


In the beginning you don't have to know much about how it synthesizes sound internally.
It can be controlled easily over a serial interface.
NOTE: the SpeakJet works on TTL logic, not RS232. So to control it from a PC you need a level-converter IC like the MAX232.
It generates sound using basic units of speech called allophones; a combination of the desired allophones generates the required sound. The SpeakJet is preconfigured with 72 speech elements (allophones), 43 sound effects, and 12 DTMF touch tones. For more information, read the user manual:

http://www.magnevation.com/pdfs/speakjetusermanual.pdf

There you will come across different methods of controlling the SpeakJet, but frankly you will not be interested in event control. In this post we will learn how to control it over the serial interface; you can interface the SpeakJet with any microcontroller that has a UART.
Before starting to work with it, we have to set up the circuitry for the IC.

DEMO /TEST MODE



In demo mode the pins M0, M1, and RST are held at logic 1, i.e. they are connected to VCC (2-5 V).
In this mode the IC plays all the allophones and special sounds inside it. All pins on the LHS are grounded.




SPEAKER
For a speaker we used a headphone, connecting one of its pins to GND and the other to Vout of the IC.


NORMAL MODE

In normal mode we connect M0 to GND, and M1 and RST to VCC.
For normal mode we will need an amplifier (to hear the output clearly); I used the commonly available
LM386 low-power audio amplifier.



Bypass capacitor: 0.1 uF.
Gain = 200.


SETTING BAUD RATE
The SpeakJet has a factory-set baud rate of 9600, but you can change it whenever you wish. There is a simple routine to set the baud rate; we follow this routine every time we connect the SpeakJet to a microcontroller.





The code for the ATmega16 microcontroller:
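A minimal sketch along those lines: configure the ATmega16 UART for 9600 baud, 8N1, then push allophone codes into the SpeakJet's 64-byte input buffer. The clock frequency and the allophone values below are placeholders; take the real codes from the phoneme table in the user manual.

#define F_CPU 8000000UL          /* assumption: 8 MHz clock, change to match yours */

#include <avr/io.h>
#include <stdint.h>

static void uart_init(void)
{
    uint16_t ubrr = F_CPU / 16 / 9600 - 1;   /* 9600 baud */
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)ubrr;
    UCSRB = (1 << TXEN);                     /* transmit only */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  /* 8 data bits, 1 stop */
}

static void uart_send(uint8_t c)
{
    while (!(UCSRA & (1 << UDRE)))           /* wait until the data register is free */
        ;
    UDR = c;
}

int main(void)
{
    /* placeholder allophone codes -- replace with real ones from the manual */
    uint8_t phrase[] = { 128, 129, 130 };
    uint8_t i;

    uart_init();
    for (i = 0; i < sizeof(phrase); i++)
        uart_send(phrase[i]);

    for (;;)
        ;                                    /* done: loop forever */
}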


Wish you good luck with your SpeakJet project.