Nitesh Kadyan

CHALKAAT


Present-day CNC laser cutters have an indirect interface: designs are prepared on an external computer and then sent to the machine. Chalkaat is a direct-manipulation laser cutter that is aware of the strokes being drawn on the workpiece. It offers a pen-stroke-based UI in which users express themselves by drawing directly on the material: a camera mounted on top tracks the strokes on the paper, and markers of different colors trigger different commands. We built the laser cutter from the ground up, with the computer vision and mechanics optimized for this interface.

Project homepage : http://chalkaat.sbalabs.co
Video : https://vimeo.com/137483326

3D Printing

After working on my 2D plotter I decided to add one more dimension to it, so that I could start printing real objects in 3D instead of images in 2D. So I built my own 3D printer using an open-source design. Apart from plastic, it can print with materials like chocolate and halwa. Here are some of the things we have printed.

Fabric Printer

During the MIT Design and Innovation Workshop held in Ahmedabad, my friends Shreyas and Kaustubh and I hacked our 3D printer into a fabric printer. Here is a video.

LED Art and Persistence of Vision

Persistence of vision is the phenomenon in which an afterimage is thought to persist on the retina for approximately one twenty-fifth of a second. This can be used to display text with a strip of LEDs rotating at high speed. Here are a few snapshots.

Pen Plotter @HillHacks

During HillHacks Dharamshala 2014 I hacked an old junk printer and a DVD writer into a pen/pencil plotter. It can draw anything using a pencil. It was showcased at DIFF 2014 (Dharamshala International Film Festival).

Painting Robot

One of my friends wanted to teach students how to paint. So I built this 2D plotter, which can sketch any image on paper using a pencil. Here is my selfie :)
Source code : https://github.com/niteshKadyan/BeagleBone

Face Tracking with Raspberry Pi and OpenCV

As part of a hobby project I implemented face tracking on a Raspberry Pi using Haar feature classification in OpenCV.

Though the frame rate was quite low, it was fun! You can watch a video below.


Lobula Giant Movement Detector Neural Model on Raspberry Pi

The lobula giant movement detector (LGMD) is a neuron found in the brain of a locust and is believed to respond to looming objects, for example an approaching predator.

I have successfully implemented the LGMD model on a Raspberry Pi using OpenCV and a Logitech camera.

The prototype robot (Alice) for testing the algorithms is ready!!

Currently it has just a camera and a Raspberry Pi, and it avoids obstacles using the LGMD neural model.

Alice, now running with a quad-core processor and two cameras, can avoid obstacles using a neural model of a grasshopper's visual system. Here is a video.

Carolo Cup 2013

I spent six months with team Berlin United at Freie Universität Berlin. Under the guidance of Prof. Raul Rojas, we built an autonomous car for CaroloCup.

Some pics from Long Night of Science, Berlin

Iterative Closest Point (ICP) SLAM for mapping the tracks

During my six months with team Berlin United I worked primarily on localization and mapping. For mapping, I implemented Iterative Closest Point (ICP) Simultaneous Localization and Mapping (SLAM) for the car. ICP works by aligning two point clouds; this alignment can be applied repeatedly to correct the odometry of the car, as shown in the figures below.

Map generated using odometry


Map generated by correcting the odometry using ICP 

Force Field Method for Localization

For the localization task I implemented a force-field method. The idea is that each point in an incoming scan is attracted toward the nearest point in the map, with a force of attraction proportional to the distance to that point. Once the map has been generated by ICP, these force values (green lines in the image below) are precalculated before localization starts.

After the force values have been calculated, a resulting force is computed on an incoming point cloud (red dots in the image below). This resulting force pulls and rotates the point cloud until it aligns with the map (blue dots in the image below). In this way the odometry is corrected and the car is localized.

Precalculated force values


Odometry Correction using force field method

Parking Spot Detection

For the parking event of CaroloCup, I programmed the car to detect a parking spot. As shown in the figure, the car detects a parking spot of suitable length from an omnidirectional image.

A video of the car doing a parking manoeuvre is shown below. (In the video the first attempt failed because the microcontroller crashed)

Kalman Filter for Lane Tracking

I implemented a Kalman Filter to model the lane tracks using a parabolic function. As shown in the image, a parabola is fitted to the right lane of the track in front.

A Kalman filter consists of two steps: state prediction and measurement update. The state vector in this case is [a b c], where a, b and c are the parameters of the parabola y = ax^2 + bx + c. These parameters are then used to calculate the steering angle of the car while driving autonomously.
A video is also shown below.


Particle Filter for Localization and Tracking

As a semester project I implemented a particle filter to localize and track a robot. As shown in the image, the robot moves along a circular path, and the golden dots (particles) represent the belief about its current position. The path and the particles together give rise to a ring-shaped figure.

A video is also shown below.


Blinking Rakhi

Didn't get my rakhi this time, so I built one on my own. (Year 2013)

Presenting to all, the world's first re-programmable electronic Rakhi. (Year 2014)
