Monday, February 6, 2017

Dev Update 2/6/2017



It's a wondrous day. We got our feet wet in image processing - our final "big" feature - we can access the camera through our application, we can write an image file to a custom directory (overwriting the old one each time), and, not to mention, we got a project extension.

Ah, yes. Our main focus as of late has been camera functionality and image processing. But before we can get to the image processing itself, we need to work out how to use the Android device's camera and its image files. We ran into so many problems just getting a camera feed into an ImageView in the main app. The camera preview was sideways, the captured image was sideways, the app kept trying to access an unreachable file path, and many more issues besides (no, not on GitHub). After hours and hours of tutorials and API workarounds (because we just love deprecated code), we finally managed to take a correctly oriented picture and store it in a custom directory. This code will run as a separate app on a phone mounted on our robot, which will communicate with the main client app.
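The sideways-preview fix ultimately comes down to the rotation arithmetic described in the Android Camera docs. Here's a rough pure-Java sketch of just that math, with no Android dependencies (the class and method names are ours, not part of any API):

```java
// Sketch of the preview-orientation math from Android's
// Camera.setDisplayOrientation documentation.
public class CameraOrientation {
    /**
     * Degrees the preview must be rotated to appear upright.
     * sensorOrientation comes from CameraInfo.orientation (usually 90
     * on phones); displayRotation is the screen rotation in degrees
     * (0, 90, 180, or 270).
     */
    public static int displayOrientation(int sensorOrientation,
                                         int displayRotation,
                                         boolean frontFacing) {
        if (frontFacing) {
            // Front camera: add the rotations, then flip to undo mirroring.
            int result = (sensorOrientation + displayRotation) % 360;
            return (360 - result) % 360;
        }
        // Back camera: subtract the display rotation from the sensor's.
        return (sensorOrientation - displayRotation + 360) % 360;
    }
}
```

With the typical back camera (sensor mounted at 90°) held in portrait (display rotation 0), this yields the 90° rotation that un-sideways-es the preview.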


Camera attachment acquired

On the client side, we made a test app that retrieves a picture from the phone's image gallery and converts it into a bitmap for some basic image processing. Our current method, which increases brightness, loops through each pixel of the image, bumps up its RGB values, and replaces the old pixel.

a simple pixel replace method from the infamous "Do stuff" button
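A minimal version of that loop, operating on the `int[]` of ARGB pixels you'd get from `Bitmap.getPixels` (the class name here is just for illustration):

```java
public class Brightness {
    /**
     * Brighten every pixel of an ARGB_8888 pixel array by `amount`,
     * clamping each channel at 255 so bright areas don't wrap around.
     */
    public static int[] brighten(int[] pixels, int amount) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int a = (p >>> 24) & 0xFF;                          // alpha untouched
            int r = Math.min(255, ((p >> 16) & 0xFF) + amount); // red
            int g = Math.min(255, ((p >> 8) & 0xFF) + amount);  // green
            int b = Math.min(255, (p & 0xFF) + amount);         // blue
            out[i] = (a << 24) | (r << 16) | (g << 8) | b;      // repack
        }
        return out;
    }
}
```

The clamp matters: without it, a near-white pixel overflows its channel and comes out dark.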





In the end, we aim to have our robot scan for a specified color and find it autonomously. This means going through the bitmap's pixels with nested for loops, row by row and column by column, until a pixel matches the color we're looking for. Though it's not enough just to find a color and drive toward it. We plan to look around the X and Y coordinates of the matching pixel (bitmaps are 2D) and check whether the color persists within that region, signifying that the color blob is probably an object. We then plan to divide our robot's vision into left, right, and center regions. If the robot detects the goal color in one of the side regions, we will turn until the goal color sits in the center region. From there, we approach it and expect the color region of interest to grow, confirming we've found the color.
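The scan-and-classify plan above can be sketched like this (a pure-Java approximation; the 3x3 "blob" test and the thirds-based region split are simplified assumptions, not our final tuning):

```java
public class ColorTracker {
    /**
     * Scan row by row, column by column for the first pixel matching
     * `target` whose neighbourhood is mostly the same colour (a crude
     * blob test), then report which third of the frame it falls in.
     * Expects width * height == pixels.length.
     * Returns "left", "center", "right", or "none".
     */
    public static String locate(int[] pixels, int width, int height, int target) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (pixels[y * width + x] == target
                        && isBlob(pixels, width, height, x, y, target)) {
                    if (x < width / 3) return "left";
                    if (x < 2 * width / 3) return "center";
                    return "right";
                }
            }
        }
        return "none";
    }

    // True if more than half of the 3x3 neighbourhood (clipped to the
    // frame edges) matches the target colour.
    private static boolean isBlob(int[] pixels, int width, int height,
                                  int x, int y, int target) {
        int matches = 0, total = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                total++;
                if (pixels[ny * width + nx] == target) matches++;
            }
        }
        return matches * 2 > total;
    }
}
```

A real version would match colors within a tolerance rather than exactly, since camera pixels are noisy, but the control flow is the same.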

There will be plenty to do within this final week. We hope everyone has a nice and as-stressless-as-possible week 10!
