Tuesday, February 14, 2017

The Aftermath

please ignore al's unphotogenic appearance



After 7 weeks of blood, sweat, and tears, we were pretty satisfied with the end result of our app (okay, no blood or sweat was shed, though al may have cried (tears of joy)). Everything we had hoped to include in the project ended up making it into the project.

We finished implementing image processing just days before today's presentation. It was a bit of a rush to get it working, and to get it working alongside moving our robot without thread-locking (we had a lot of issues with threads too). Originally we wanted to utilize our phone's quad-core processor to process the images in parallel, but thanks to Dr. Stonedahl, we decided to just resample the image to be processed, which speeds things up by about 25 times instead of the 3-4 times we'd expect from extra cores. To resample the image, instead of iterating pixel by pixel, Stonedahl suggested we sample every 5th pixel, which is a very small change in the grand scheme of things.
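
To put a number on it: stepping by 5 in both x and y touches only about 1/25th of the pixels, which is where the ~25x figure comes from. The change itself is tiny; here's a sketch (illustrative names, not our exact loop, using android.graphics.Bitmap):

    int stride = 5;
    for (int y = 0; y < bitmap.getHeight(); y += stride) {
        for (int x = 0; x < bitmap.getWidth(); x += stride) {
            int pixel = bitmap.getPixel(x, y);
            // ...same per-pixel work as before, just on far fewer pixels...
        }
    }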

Originally, we wanted to use some sort of shape/edge detection. After some pseudocode involving stacks and a search algorithm (it would have been awesome to use some A.I.), we decided we were too time-limited to implement it, so we left it out. Instead, we track a heuristic based on the number of pixels matching a target color and make our robot move according to that, which turned out pretty well.
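
A rough sketch of that heuristic (the tolerance value and method name are assumptions, not our exact code; it uses android.graphics.Bitmap and android.graphics.Color): the count is just "how many sampled pixels are close enough to the target color".

    private int countTargetPixels(Bitmap frame, int targetColor) {
        int stride = 5, tolerance = 40, count = 0;  // assumed values
        for (int y = 0; y < frame.getHeight(); y += stride) {
            for (int x = 0; x < frame.getWidth(); x += stride) {
                int p = frame.getPixel(x, y);
                // A pixel counts if every channel is within the tolerance.
                if (Math.abs(Color.red(p) - Color.red(targetColor)) <= tolerance
                        && Math.abs(Color.green(p) - Color.green(targetColor)) <= tolerance
                        && Math.abs(Color.blue(p) - Color.blue(targetColor)) <= tolerance) {
                    count++;
                }
            }
        }
        return count;
    }

The bigger that count, the more of the target color the robot is looking at, and the movement decisions key off of that.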

Aside from that last-minute feature, we felt our presentation turned out well. There's much, much more that could be done in the future, and there are probably many more things we could implement on the robot too.

Monday, February 6, 2017

Dev Update 2/6/2017



It's a wondrous day. We got our feet wet in image processing (our final "big" feature), we can access a camera through our application, we can save an image file to a custom directory that gets overwritten on each capture, and, not to mention, we got a project extension.

Ah, yes. Our main focus as of late has been camera functionality and image processing. But before we can get to the image processing, we need to work out how to use the Android device's camera and its image files. We ran into so many problems just getting a camera feed into an ImageView in the main app: the camera preview was sideways, the captured image was sideways, an unreachable file path was being accessed, and so many more issues (no, not on GitHub). After hours and hours of tutorials and API workarounds (because we just love deprecated code), we finally managed to take a correctly oriented picture and store it in a custom directory. This will run as a separate application on the phone mounted on our robot, which will communicate with the main client app.
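
For anyone fighting the same fight, here's roughly the shape of what worked for us. This is a sketch under assumptions, not our exact code: the directory and file names are made up, only the 90-degree case is handled, and it assumes the camera has already written the JPEG to the file.

    // Uses java.io.File/FileOutputStream, android.os.Environment, android.media.ExifInterface,
    // and android.graphics.Bitmap/BitmapFactory/Matrix.
    private void saveUprightPhoto() throws IOException {
        File dir = new File(Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES), "RobotCam");  // assumed directory name
        if (!dir.exists()) dir.mkdirs();
        File photo = new File(dir, "latest.jpg");  // same name every capture, so it overwrites

        // Read the orientation the camera recorded in the JPEG's EXIF data.
        ExifInterface exif = new ExifInterface(photo.getAbsolutePath());
        int orientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);

        // Rotate the decoded bitmap to compensate for the sideways capture.
        Matrix matrix = new Matrix();
        if (orientation == ExifInterface.ORIENTATION_ROTATE_90) matrix.postRotate(90);

        Bitmap bitmap = BitmapFactory.decodeFile(photo.getAbsolutePath());
        Bitmap upright = Bitmap.createBitmap(
                bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);

        // Write the corrected image back over the original file.
        FileOutputStream out = new FileOutputStream(photo);
        upright.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.close();
    }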


Camera attachment acquired

On the client side, we made a test app that retrieves a picture from the phone's image gallery and converts it into a bitmap for some basic image processing. Our current method, which increases brightness, loops through each pixel of the image, bumps up its RGB values, and replaces the old pixel.

a simple pixel replace method from the infamous "Do stuff" button
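
A minimal version of that kind of brightness pass looks something like the sketch below (not our exact code; the method name and bump amount are illustrative, and it uses android.graphics.Bitmap and android.graphics.Color):

    private Bitmap brighten(Bitmap src, int amount) {
        // Work on a mutable copy so we can overwrite pixels.
        Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < out.getHeight(); y++) {
            for (int x = 0; x < out.getWidth(); x++) {
                int p = out.getPixel(x, y);
                // Bump each channel, clamp at 255, and replace the old pixel.
                int r = Math.min(255, Color.red(p) + amount);
                int g = Math.min(255, Color.green(p) + amount);
                int b = Math.min(255, Color.blue(p) + amount);
                out.setPixel(x, y, Color.rgb(r, g, b));
            }
        }
        return out;
    }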





In the end, we aim to have our robot scan for a specified color and autonomously move toward it. This is done by going through the bitmap's pixels with for loops, row by row and column by column, until a pixel has the color we're looking for. It's not enough to just find one pixel of the color and go toward it, though. We plan to look around the X and Y coordinates of that pixel (bitmaps are 2D) and determine whether the color persists within the surrounding region, signifying that the color blob is probably an object. We then aim to divide our robot's vision into left, right, and center regions. If our robot detects the goal color in one of the side regions, we will focus on turning the robot so the goal color sits in the center region. Afterwards, we approach the goal color and hope that the region of interest grows in size, which tells us we've actually found the color.
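
Put together, the plan could look something like the sketch below. The command strings, stride, and tolerance values are assumptions rather than the final design; it samples pixels with a stride instead of visiting every one.

    private String pickDirection(Bitmap frame, int goalColor) {
        int stride = 5, tolerance = 40;  // assumed values
        int third = frame.getWidth() / 3;
        int left = 0, center = 0, right = 0;

        for (int y = 0; y < frame.getHeight(); y += stride) {
            for (int x = 0; x < frame.getWidth(); x += stride) {
                int p = frame.getPixel(x, y);
                boolean match = Math.abs(Color.red(p) - Color.red(goalColor)) <= tolerance
                        && Math.abs(Color.green(p) - Color.green(goalColor)) <= tolerance
                        && Math.abs(Color.blue(p) - Color.blue(goalColor)) <= tolerance;
                if (!match) continue;
                // Tally which vertical third of the frame the match fell in.
                if (x < third) left++;
                else if (x < 2 * third) center++;
                else right++;
            }
        }

        // Keep the goal color centered; otherwise turn toward the side that saw more of it.
        if (center >= left && center >= right) return "FORWARD";
        return (left > right) ? "TURN_LEFT" : "TURN_RIGHT";
    }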

There will be plenty to do within this final week. We hope everyone has a nice and as-stressless-as-possible week 10!

Friday, February 3, 2017

Dev Update 2/3/2017


A lot of retouching has been done app-side lately, though not without a few deprecation problems. We figured that since we've been focusing so much on our robot, we should probably double back for our poor, empty app.

Our first approach was to redo our Android Studio activity classes and rework them into fragments for better organization of our controls and better user flow. The current plan for our app flow goes something like this:
  • Launch the starting activity, which prompts the user to type in an IP address and port number and saves them using Android's SharedPreferences (see the sketch after this list). If the app detects a previously used IP/port, it skips straight to the next activity.
  • The next activity serves as our Fragment container, with all the options to perform our various actions with the robot (without the need to press the back button).
  • Added to our option Fragments will be an Options button that allows the IP to be edited or the connection to be dropped, which sends the user back to the main activity.
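
A rough sketch of that first bullet (the preference file and key names are made up for illustration; these methods would live in the starting activity):

    private static final String PREFS = "robot_connection";  // assumed preference file name

    private void saveConnection(String ip, int port) {
        getSharedPreferences(PREFS, MODE_PRIVATE).edit()
                .putString("ip", ip)
                .putInt("port", port)
                .apply();
    }

    private boolean hasSavedConnection() {
        // If an IP was stored on a previous run, skip straight to the fragment container.
        return getSharedPreferences(PREFS, MODE_PRIVATE).contains("ip");
    }
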
One issue we encountered was the uselessness of the old Fragment labs we'd done as references. We're running our app on a higher API level, which unfortunately blocked us from using the Fragment strategy from that previous lab. This forced us to find another method of displaying multiple fragments within a single activity.

Our answer to that was Android's ViewPager class. The ViewPager lets us swipe through our different fragments while keeping track of the current fragment at the top of the screen. The only current issue is that it overlaps our action/status bar. Implementing the ViewPager was a bit tricky, though. We had to restructure most of our fragment classes almost completely, because we lost the references to the original activity classes and had to remove most of the onClick methods from the XML.
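
The core of it is a FragmentPagerAdapter that hands the ViewPager one fragment per page. Here's a trimmed-down sketch; the fragment class names are placeholders, not our real ones, and it uses the support-library Fragment/FragmentManager/FragmentPagerAdapter/ViewPager classes:

    private class OptionsPagerAdapter extends FragmentPagerAdapter {
        OptionsPagerAdapter(FragmentManager fm) {
            super(fm);
        }

        @Override
        public Fragment getItem(int position) {
            // One fragment per swipeable page.
            switch (position) {
                case 0:  return new DriveFragment();   // placeholder names
                case 1:  return new CameraFragment();
                default: return new OptionsFragment();
            }
        }

        @Override
        public int getCount() {
            return 3;
        }
    }

    // In the container activity's onCreate:
    // ViewPager pager = (ViewPager) findViewById(R.id.pager);
    // pager.setAdapter(new OptionsPagerAdapter(getSupportFragmentManager()));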

It was a pain to deal with, but we managed through. (: