Tuesday, February 14, 2017

The Aftermath

please ignore al's unphotogenic appearance



After 7 weeks of blood, sweat, and tears, we're pretty satisfied with the end result of our app (okay, no blood or sweat was shed, though Al may have cried (tears of joy)). Everything we had hoped would make it into the project ended up in the project.

We finished implementing image processing just days before today's presentation. It was a bit of a rush to get it working, and to get it working alongside the robot's movement without thread-locking (we had a lot of issues with threads too). Originally we wanted to utilize our phone's quad-core processor to process the images in parallel, but thanks to Dr. Stonedahl, we decided to just resample the to-be-processed image, which speeds things up by roughly 25 times versus the 3-4 times we'd get from multithreading. To resample the image, instead of iterating pixel by pixel, Stonedahl suggested we iterate every 5 pixels, which is a very small change in the grand scheme of things.

Originally, we wanted to utilize a sort of shape/edge detection. After sketching some pseudocode involving stacks and a search algorithm (it would have been awesome to use some A.I.), we decided we were too time-limited to implement it, so we left it out. Instead, we track a heuristic based on the number of instances of a target color and make our robot move according to that, which turned out pretty well.
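To make that concrete, here's a minimal sketch (not our actual class; the method name, tolerance, and target color handling are made up for illustration) of the stride-of-5 sampling combined with the color-count heuristic, using Android's Bitmap API. Checking every 5th pixel in each direction visits roughly 1/25th of the pixels, which is where the ~25x speedup comes from.

import android.graphics.Bitmap;
import android.graphics.Color;

public class ColorCounter {
    private static final int STRIDE = 5;   // sample every 5th pixel in each direction

    // Counts sampled pixels that are "close enough" to the target color.
    public static int countTargetPixels(Bitmap bitmap, int targetColor, int tolerance) {
        int count = 0;
        for (int y = 0; y < bitmap.getHeight(); y += STRIDE) {
            for (int x = 0; x < bitmap.getWidth(); x += STRIDE) {
                int pixel = bitmap.getPixel(x, y);
                if (Math.abs(Color.red(pixel) - Color.red(targetColor)) <= tolerance
                        && Math.abs(Color.green(pixel) - Color.green(targetColor)) <= tolerance
                        && Math.abs(Color.blue(pixel) - Color.blue(targetColor)) <= tolerance) {
                    count++;
                }
            }
        }
        return count;
    }
}

The count itself is the heuristic: the robot moves so that the number of target-colored samples grows.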

Aside from that last-minute feature, we felt our presentation turned out well. There's much, much more that could be done in the future, and there are probably many more things we could add to the robot too.

Monday, February 6, 2017

Dev Update 2/6/2017



It's a wondrous day. We're getting our feet wet in image processing - our final "big" feature - we can access a camera through our application, we can create an overwriting image file in a custom directory, and, not to mention, we got a project extension.

Ah, yes. Our main focus as of late relates to camera functionality and image processing. But before we can get to the image processing, we need to work out using the Android device's camera and its image files. We ran into so many problems just getting a camera feed into an ImageView in the main app. The camera preview was sideways, the captured image was sideways, an unreachable file path was being accessed, and so many other issues (no, not the GitHub kind). After hours and hours of tutorials and API workarounds (because we just love deprecated code), we finally managed to take a correctly oriented picture and store it in a custom directory. This code will run as a separate application on the phone that will be mounted onto our robot, and it will communicate with the main client app.


Camera attachment acquired

On the client-side app, we made a test app that retrieves a picture from the image gallery on the phone and converts it into a bitmap for some basic image processing. Our current method, which increases brightness, loops through each pixel of the image, increases its RGB values, and replaces the old pixel.

a simple pixel replace method from the infamous "Do stuff" button
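For reference, a minimal sketch of that kind of brightness loop (not the exact code behind the "Do stuff" button; the method name and the mutable ARGB_8888 copy are just for illustration):

import android.graphics.Bitmap;
import android.graphics.Color;

public class Brightness {
    public static Bitmap brighten(Bitmap source, int amount) {
        // Work on a mutable copy so the original gallery image stays untouched.
        Bitmap result = source.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < result.getHeight(); y++) {
            for (int x = 0; x < result.getWidth(); x++) {
                int pixel = result.getPixel(x, y);
                int r = Math.min(255, Color.red(pixel) + amount);
                int g = Math.min(255, Color.green(pixel) + amount);
                int b = Math.min(255, Color.blue(pixel) + amount);
                result.setPixel(x, y, Color.argb(Color.alpha(pixel), r, g, b));
            }
        }
        return result;
    }
}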





In the end, we aim to have our robot scan for a color and autonomously drive toward it. This is done by going through the bitmap's pixels using for loops, row by row and column by column, until a pixel has the color we're looking for. It's not enough to just find a color and go toward it, though. We plan to look around the X and Y coordinates of the found pixel (bitmaps are 2D) and determine whether the color persists within the region, signifying that the color blob is probably an object. We then aim to divide our robot's vision into a left, center, and right region. If our robot detects the goal color in one of the side regions, we will focus on turning the robot so the goal color sits in the center region. After that, we approach the goal color and hope that the region of interest grows in size, confirming we've found the colored object.
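A rough sketch of how that left/center/right split could look (the class and method names here are placeholders, not our real code):

import android.graphics.Bitmap;
import android.graphics.Color;

public class RegionSteering {
    public enum Direction { LEFT, CENTER, RIGHT, NONE }

    // Count target-colored pixels in each third of the bitmap and steer toward the biggest count.
    public static Direction pickDirection(Bitmap bitmap, int targetColor, int tolerance) {
        int width = bitmap.getWidth();
        int third = width / 3;
        int left = 0, center = 0, right = 0;
        for (int y = 0; y < bitmap.getHeight(); y++) {
            for (int x = 0; x < width; x++) {
                if (matches(bitmap.getPixel(x, y), targetColor, tolerance)) {
                    if (x < third) left++;
                    else if (x < 2 * third) center++;
                    else right++;
                }
            }
        }
        if (left == 0 && center == 0 && right == 0) return Direction.NONE;
        if (center >= left && center >= right) return Direction.CENTER;
        return (left > right) ? Direction.LEFT : Direction.RIGHT;
    }

    private static boolean matches(int pixel, int target, int tolerance) {
        return Math.abs(Color.red(pixel) - Color.red(target)) <= tolerance
                && Math.abs(Color.green(pixel) - Color.green(target)) <= tolerance
                && Math.abs(Color.blue(pixel) - Color.blue(target)) <= tolerance;
    }
}

CENTER means "drive forward"; LEFT or RIGHT means "turn until the color is centered."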

There will be plenty to do within this final week. We hope everyone has a nice and as-stressless-as-possible week 10!

Friday, February 3, 2017

Dev Update 2/3/2017


A lot of retouching has been done app-side lately, though with a few deprecation problems along the way. We figured since we've been focusing so much on our robot, we should probably double back for our poor, empty app.

Our first approach was to redo our Android Studio activity classes and rework them into Fragments for better organization of our controls and better user flow. The current plan for our app flow goes something like this:
  • Launch the starting activity to prompt the user to type in an IP address and port number and save it using Android's SharedPreferences (see the sketch after this list). If the app detects a previously used IP/port, it skips to the next activity
  • The next activity will serve as our Fragment container, with all the options to perform our various actions with the robot (without the need to press the back button)
  • Added to our option Fragments will be an Options button that allows the IP to be edited or the connection to be dropped, which sends the user back to the main activity.
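For the first bullet, a minimal sketch of the SharedPreferences save/restore (the preferences file name and keys here are placeholders, not necessarily what we'll ship):

import android.content.Context;
import android.content.SharedPreferences;

public class ConnectionPrefs {
    private static final String PREFS_NAME = "robot_connection";   // assumed file name
    private final SharedPreferences prefs;

    public ConnectionPrefs(Context context) {
        prefs = context.getSharedPreferences(PREFS_NAME, Context.MODE_PRIVATE);
    }

    public void save(String ip, int port) {
        prefs.edit().putString("ip", ip).putInt("port", port).apply();
    }

    // If this returns true, the starting activity can skip straight to the Fragment container.
    public boolean hasSavedConnection() {
        return prefs.contains("ip") && prefs.contains("port");
    }

    public String getIp() { return prefs.getString("ip", ""); }
    public int getPort()  { return prefs.getInt("port", 0); }
}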
Some issues we encountered came from trying to use our old Fragment labs as references. We're running our app on a higher API level, which unfortunately blocked us from using the Fragment strategy from that previous lab. This forced us to find another way of displaying multiple Fragments within a single activity.

Our answer to that was Android's ViewPager class. The ViewPager lets us swipe through our different Fragments while keeping track of the current Fragment at the top of the screen. The only current issue is that it overlaps our action/status bar. Implementing the ViewPager was a bit tricky, though. We had to restructure most of our Fragment classes almost completely because we lose the references to the original activity classes, and we had to remove most of the onClick methods from the XML.
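A minimal sketch of the ViewPager wiring (the fragment class names below are placeholders, not our actual Fragments):

import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;
import android.support.v4.app.FragmentPagerAdapter;

public class ControlPagerAdapter extends FragmentPagerAdapter {
    public ControlPagerAdapter(FragmentManager fm) {
        super(fm);
    }

    @Override
    public Fragment getItem(int position) {
        switch (position) {
            case 0:  return new ManualControlFragment();   // placeholder fragment classes
            case 1:  return new VoiceControlFragment();
            default: return new CameraFragment();
        }
    }

    @Override
    public int getCount() {
        return 3;
    }
}

// In the host activity, assuming the layout has a ViewPager with id "pager":
//   ViewPager pager = (ViewPager) findViewById(R.id.pager);
//   pager.setAdapter(new ControlPagerAdapter(getSupportFragmentManager()));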

It was a pain to deal with, but we managed through. (:

Monday, January 23, 2017

Dev Update 1/23/17: Al learns to actually use the blog title field

We'll feel bad if Gregory is A.I. enough to feel this way



Did I forget to mention that we named our robot Gregory? Ah yes, Gregory. He's coming along quite well.

Following the Alpha demos

We were fairly satisfied with the results of our presentation of our robot. We got to display our app, showcase the state of our robot's movement, and show how easily the robot can't understand what I'm telling it (okay, blame Google for that, though I do tend to enunciate poorly sometimes). It's all baby steps, but we're getting there. We definitely got to complete a lot more than we thought we would have ready come alpha presentation day.
We definitely had a hiccup during the presentation, though. There was a point where our robot failed to respond to our controller, prompting us to halt the presentation for a bit to restart ol' Greg, after which the robot worked on the following attempt.

So what exactly happened?

One of our biggest fears, that's what happened. A dreaded race condition from multiple active threads. A race condition occurs in multi-threading when two or more threads can access shared data (our robot) and change it. The way our robot works is that every time a button is pressed, a thread is started for the corresponding movement. Upon release of the button, a stop command is sent, killing the aforementioned thread. Somehow a stop command was reaching our robot before the move command even started, causing the robot to be stuck within a stop command thread.

It was thanks to Professor Mueller that we were able to find the root of the problem (she turned out to be an excellent client after all). She broke our program within seconds of being handed the Android device, leaving our heads spinning in utter confusion. We soon realized that if you quickly tapped a movement command, the intended move command was never sent, but the release command (the stop command) still registered, forcing the robot to be stuck in the stop thread and never able to start a new thread. This called for us to change the way our movement works.

And thus a fix

To combat clashing threads, we decided to change the way our robot moves. Previously, it moved in short stutters, listening for another command during the pauses. This was because the robot moved a set short distance inside a loop, stuttering along while a movement command was held. Our new approach avoids leJOS's move commands: we synchronize both motors and directly power them on. This eliminates our pesky race conditions and lets us tap movement commands without fear of breaking anything. As an added bonus, we get smooth-sailing movement (yay).
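Roughly, the direct-drive idea looks like this in leJOS (a sketch assuming the motors sit on ports A and B; not our exact class):

import lejos.hardware.motor.EV3LargeRegulatedMotor;
import lejos.hardware.port.MotorPort;
import lejos.robotics.RegulatedMotor;

public class DirectDrive {
    private final EV3LargeRegulatedMotor leftMotor  = new EV3LargeRegulatedMotor(MotorPort.A);
    private final EV3LargeRegulatedMotor rightMotor = new EV3LargeRegulatedMotor(MotorPort.B);

    public void forward() {
        leftMotor.synchronizeWith(new RegulatedMotor[] { rightMotor });
        leftMotor.startSynchronization();
        leftMotor.forward();
        rightMotor.forward();
        leftMotor.endSynchronization();   // both motors switch on together here
    }

    public void stop() {
        leftMotor.startSynchronization();
        leftMotor.stop(true);             // true = return immediately instead of blocking
        rightMotor.stop(true);
        leftMotor.endSynchronization();
    }
}

The motors keep spinning until stop() is called, so there's no loop to fight with and nothing for a stray stop command to get stuck behind.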
 And... for the future??

So as we all know, we have pretty important individual projects due Friday. We decided to set some short-term goals. I'll keep it short since I've been rambling on long enough:
  • We plan to do a UI overhaul on our app. It's just bland with only 2 edit fields and 4 buttons. We want to implement Fragments, or just a better way of organizing our options
  • Joysticks. Currently we use four arrow images for the controller. We want to make a more dynamic joystick that allows us to control the robot more freely, with better accuracy and fluidity
  • Saving preferences. We plan on having a way to keep "default" settings without manually entering the IP address and port number every time. We're aware that this could be done using Android's SharedPreferences or by storing the settings in a read-only file

And to close:

 another Rick and Morty gif. enjoy (:
 


Monday, January 16, 2017

01100100 01100101 01110110 00100000 01110101 01110000 01100100 01100001 01110100 01100101 00001010
(that's binary for dev update) 

 It moves! a  poorly-recorded demonstration of our manual controller option


Today is the day of our project alpha presentations. We have been hard at work all weekend to get our robot into a pretty good, presentable position. We're fairly excited to demonstrate what we've been working on for the past few weeks. A lot has been done since our last update.


Getting Tangled in Threads
One of our goals was to make movement controls somewhat real-time for our robot. Doing so was a huge issue for us because we needed a way to override an infinite loop of "move forward" commands with a "turn left" command. While a button is held, we want to send constant requests for the robot to move, but at the same time listen for a new action that briefly halts the robot and starts a completely new command. This is all done with the Thread class.

Well, we didn't totally remember how to use them at first, so we pulled up some old labs with interacting threads from CS332 (Operating Systems). It was mostly a syntax check, but one meaningful takeaway from the labs is to store the created threads in a thread-safe object, such as a Vector. With some tinkering, for every new button press a new thread is created and the old thread is terminated. It is important to keep only one thread running at a time; otherwise we get some nasty race conditions between two active threads (I got yelled at by Abby for trying to get the robot to move forward and backwards at the same time).
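In sketch form, the one-thread-at-a-time rule looks something like this (sendCommand() is a placeholder for our socket write, and the 100 ms resend interval is made up):

public class MovementController {
    private Thread currentMove;

    public synchronized void startMove(final String command) {
        stopMove();                                  // make sure only one movement thread runs
        currentMove = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    sendCommand(command);            // placeholder: socket write to the robot
                    try {
                        Thread.sleep(100);           // keep resending while the button is held
                    } catch (InterruptedException e) {
                        return;                      // stop requested
                    }
                }
            }
        });
        currentMove.start();
    }

    public synchronized void stopMove() {
        if (currentMove != null) {
            currentMove.interrupt();
            currentMove = null;
        }
        sendCommand("stop");                         // placeholder stop message
    }

    private void sendCommand(String command) { /* socket write happens elsewhere */ }
}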

It Understands Us!

Our current robot setup demo with voice recognition and refined app UI

Voice recognition is functional on our robot now! This is done using a SpeechRecognizer object (with an API provided by the Android Studio gods) and a handful of Intents. It wasn't too difficult to implement. All Android devices have a speech-to-text converter from Google that works with an online dictionary Google maintains. Since we have the misfortune of dealing with Augustana's strict network, we were lucky to find out that most Android devices also have offline English voice recognition. This completely avoids the need to pair another device to Abby's wifi hotspot.

To tie this in with our robot controls, we created a new activity layout in Android Studio that has a microphone option and a stop button. Upon tapping the microphone, a command is spoken to the phone. Only five words are currently accepted in our dictionary: "move", "back", "left", "right", and "stop". These words directly correlate with the words used to command our robot. After a command has been heard, the app checks if the word is an accepted term. If it's a valid command, the command is printed on the screen and the robot receives the message to move via a socket object.
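A minimal sketch of that flow with Android's SpeechRecognizer (the class name and sendToRobot() placeholder are made up; the five-word dictionary is the real one, and RECORD_AUDIO permission is needed in the manifest):

import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VoiceControl {
    private static final List<String> COMMANDS =
            Arrays.asList("move", "back", "left", "right", "stop");

    private final SpeechRecognizer recognizer;

    public VoiceControl(Context context) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (matches == null || matches.isEmpty()) return;
                String heard = matches.get(0).toLowerCase();
                if (COMMANDS.contains(heard)) {
                    sendToRobot(heard);   // placeholder: write the word to the robot socket
                }
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    // Called when the microphone button is tapped.
    public void listen() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    private void sendToRobot(String command) { /* socket write happens elsewhere */ }
}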

Our only caveat is that sending a voice command such as "move" will not also tell the robot to end the "move" thread when a new command arrives. To combat that, a stop must be called in between actions to prevent thread races. To save our precious vocal cords from saying "stop" so often (and to avoid the lag of sending an extra voice command), we implemented a stop button, so the button can be pressed instead of voiced in between every command.

Our Alpha App

Just a quick show of our current app design. It's pretty rough at the moment, so bear with us. Hopefully we'll have something more elegant to look at eventually, but what we currently have is just enough to get the job done.




And Lastly, Looking into the Future

Some things we want to look at for the future of our robot (and for the final product) potentially include mounting a phone on the robot to use its camera and streaming the feed to another Android device. After doing so, maybe we'll include some image processing, e.g. face recognition.

Possibly another thing to do is make our robot artificially intelligent. With the robot dev team having taken the A.I. course in the past, we could possibly find a way to implement it in our robot.
 

Monday, January 9, 2017

Back into the Grind

look, there's a snowflake on the table (Abby made that)


Hope everyone had a nice holiday break! This post will cover our pre-break and over-break advancements for our robot.

So What's new?

We've recently had a change of plans for communicating with our robot from an Android device. We have decided to use a wifi connection to send text to our robot. However, due to Augustana's strict wifi settings, our robot cannot connect to the campus wifi because a form of authentication is required. To work around that, we decided to use a phone as a wifi hotspot to connect the other phone and our Lego robot.

We have successfully sent a string of text to our robot and displayed it on its screen using a socket connection class. The class declares a new Socket object and uses an OutputStreamWriter to send text to the robot with its write() and flush() methods.

hey look, it's some code. 


The Socket object requires an IP address and port number to be passed in when it is created.
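A minimal sketch of that client-side send (the host and port come from the user's input; on Android this has to run off the UI thread):

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.Socket;

public class RobotMessenger {
    public static void send(String host, int port, String message) throws IOException {
        Socket socket = new Socket(host, port);        // e.g. the brick's IP on the hotspot
        try {
            OutputStreamWriter writer =
                    new OutputStreamWriter(socket.getOutputStream());
            writer.write(message + "\n");              // newline so the robot can read line by line
            writer.flush();
        } finally {
            socket.close();
        }
    }
}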

Now that we can send text to the robot, a class has been created on the leJOS side that translates input Strings from the Android device into actual robot movement using its pilot object. How it's currently set up: default movement commands for front, back, left, and right move a preset distance. There is also a format for basic movement denoted with an F or B, followed by a speed and a distance, such as F1305, which moves the robot forward at a speed of 130 for 5 units.

 a snippet of our robot movement class in Eclipse
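A rough sketch of how an F1305-style string could be parsed (assuming a one-character direction, a three-digit speed, and the rest as distance; this matches the F1305 example but isn't necessarily our exact format):

public class MoveCommand {
    public final char direction;   // 'F' or 'B'
    public final int speed;        // e.g. 130
    public final int distance;     // e.g. 5

    public MoveCommand(char direction, int speed, int distance) {
        this.direction = direction;
        this.speed = speed;
        this.distance = distance;
    }

    public static MoveCommand parse(String raw) {
        char direction = raw.charAt(0);                        // "F1305" -> 'F'
        int speed = Integer.parseInt(raw.substring(1, 4));     //            130
        int distance = Integer.parseInt(raw.substring(4));     //            5
        return new MoveCommand(direction, speed, distance);
    }
}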

And in the future?

Our project alpha demos are coming up fairly soon. We plan on having our app contain two activities: the first, default one sets up a socket, asking the user to input an IP address and port number, and then uses an Intent to start the main layout, which has directional buttons for movement and fields for a movement angle and distance. We are looking into creating a stop button - which may have to run on a separate thread - that will override and cease all functions in the robot and clear its movement queue (for which we will use a Vector data structure to manage all the move commands).
For our final version of the application, we are currently researching voice recognition and video processing (with the phone mounted on the robot). Until then, stay tuned!

Monday, December 12, 2016

Week 2 Development Update



https://heavyeditorial.files.wordpress.com/2014/12/184661628.jpg?quality=65&strip=all
Tim, Abby, and Al working hard on their robot project

It's been a busy week for Team Robot as everyone is finishing up with the assigned lab work and transitioning into group projects.

#goalzzzz

Our current goal is to command our robot to do simple things such as moving forward and simple turns, using an Android device as a brain/controller. So far, we have set up a soft blueprint of how we're going to communicate with our EV3 using leJOS and Android Studio.

but.. but how?

We plan to use a Bluetooth connection between the EV3 and an Android device. Between the two devices, we plan to set up a language that will "translate" input from the Android device into output the EV3 can understand, which will command and (hopefully) operate the EV3. On a lower level, we brainstormed that a .txt file with move commands sent as Strings, such as "MOVE", from the Android device will be received on the EV3, which will perform a function if its document reader receives the string "MOVE".

Though, if we map our movements like this, our EV3 will just have a long list of if-else statements over case-sensitive strings (which doesn't seem too elegant). To resolve that, we'll create a class that does all the translating for us, so hopefully we can send a command such as "MOVE, 30, FORWARD", which will be delimited by commas and split into the parameters "MOVE", specifying a move command, "30", which correlates to a distance for the EV3, and "FORWARD", which tells the EV3 which direction to go in (based on where its front is facing, or perhaps we'll impose our own coordinate system of North, South, East, and West).
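In sketch form, that translator class could look something like this (the class and field names are just for illustration, not a final design):

public class CommandTranslator {
    public static class Command {
        public final String action;     // e.g. "MOVE"
        public final int distance;      // e.g. 30
        public final String direction;  // e.g. "FORWARD"

        public Command(String action, int distance, String direction) {
            this.action = action;
            this.distance = distance;
            this.direction = direction;
        }
    }

    // Splits "MOVE, 30, FORWARD" on commas instead of chaining if-else over raw strings.
    public static Command translate(String raw) {
        String[] parts = raw.split(",");
        return new Command(parts[0].trim(),
                           Integer.parseInt(parts[1].trim()),
                           parts[2].trim());
    }
}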

What we've done so far
So far, Tim and Al have found ways to programmatically establish a Bluetooth connection between two Android devices and even send files between the two. This involves using Android's Intent objects to send and receive files. We are hoping we can just send an Intent with a .txt file to the leJOS brick and have it read and translate the file sent to it.
Abby has been researching how to read an input stream from a file on the leJOS side. Finding help material is hard, though, as much of the documentation is for the NXT model of the Lego robot rather than ours (the EV3). Abby has managed to find some documentation on reading input files, but will have to find a way to adapt it for the EV3.



tl;dr everything's lookin' pretty good so far.