Tuesday, February 14, 2017

The Aftermath

please ignore al's unphotogenic appearance



After 7 weeks of blood, sweat, and tears, we were pretty satisfied with the end result of our app (okay, no blood or sweat was shed, though al may have cried tears of joy). Everything we had hoped would be in the project ended up in it.

We finished implementing image processing just days before today's presentation. It was a bit of a rush to get it working, and to get it working alongside moving our robot without thread-locking (we had a lot of issues with threads too). Originally we wanted to utilize our phone's quad-core processor to process the images, but thanks to Dr. Stonedahl, we decided to just resample the to-be-processed image instead, which sped us up by about 25 times rather than the 3-4 times that multithreading would have. To resample the image, instead of iterating pixel by pixel, Stonedahl suggested we sample every 5th pixel in each dimension, which is a very small change in the grand scheme of things but means we touch only 1/25 of the pixels.
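A minimal sketch of that resampling idea (illustrative only, not our actual code): stepping through every 5th pixel in each dimension visits 1/5 of the rows and 1/5 of the columns, hence 1/25 of the pixels and the ~25x speedup.

```python
STEP = 5  # assumed sampling stride; our write-up only says "every 5 pixels"

def count_target_pixels(image, target, step=STEP):
    """Count pixels equal to `target`, visiting only every `step`-th pixel."""
    count = 0
    for y in range(0, len(image), step):      # every step-th row
        row = image[y]
        for x in range(0, len(row), step):    # every step-th column
            if row[x] == target:
                count += 1
    return count

# Tiny synthetic 10x10 "image": left half is the target color (1).
img = [[1 if x < 5 else 0 for x in range(10)] for y in range(10)]
full = count_target_pixels(img, 1, step=1)  # visits all 100 pixels
fast = count_target_pixels(img, 1, step=5)  # visits only 4 pixels (25x fewer)
```

The subsampled count is proportionally smaller, but since we only compare counts (not use absolute values), the cheaper estimate is good enough.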

Originally, we wanted to utilize a sort of shape/edge detection. After some pseudocode involving stacks and a search algorithm (it would have been awesome to use some A.I.), we decided we were too time-limited to implement it, so we left it out. Instead, we track a heuristic based on the number of pixels matching a target color and move our robot according to that, which turned out pretty well.
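One plausible way to turn that color-count heuristic into movement (a hypothetical sketch, not our actual implementation; the function names and threshold are made up): count target-color pixels in the left and right halves of the frame and steer toward whichever side sees more.

```python
def choose_move(image, target, min_pixels=3):
    """Pick 'left', 'right', 'forward', or 'search' from color counts."""
    width = len(image[0])
    left = right = 0
    for row in image:
        for x, px in enumerate(row):
            if px == target:
                if x < width // 2:
                    left += 1
                else:
                    right += 1
    if left + right < min_pixels:
        return "search"     # target barely visible: spin and look for it
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "forward"        # roughly centered: drive toward the target

# Target color (1) concentrated on the left half of a 4x4 frame.
frame = [[1, 1, 0, 0] for _ in range(4)]
move = choose_move(frame, 1)
```

In practice this would run on each resampled camera frame, so the robot continuously re-aims at the blob of target color.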

Aside from that last-minute feature, we felt our presentation turned out well. There's much, much more that can be done in the future, and probably many more things we could implement in the robot too.
