Monday, January 23, 2017

Dev Update 1/23/17: Al learns to actually use the blog title field

We'll feel bad if Gregory is A.I. enough to feel this way



Did I forget to mention that we named our robot Gregory? Ah yes, Gregory. He's coming along quite well.

Following the Alpha demos

We were fairly satisfied with how our presentation of the robot went. We got to display our app, showcase the state of our robot's movement, and show how easily the robot fails to understand what I'm telling it (okay, blame Google for that, though I do tend to enunciate poorly sometimes). It's all baby steps, but we're getting there. We definitely completed a lot more than we thought we would have ready come alpha presentation day.
We did have one hiccup during the presentation, though. At one point our robot failed to respond to our controller, prompting us to halt the presentation for a bit to restart ol' Greg, after which the robot worked on the following attempt.

So what exactly happened?

One of our biggest fears, that's what happened. A dreaded race condition from multiple active threads. A race condition occurs in multi-threading when two or more threads can access shared data (our robot) and change it. The way our robot works is that every time a movement button is pressed, a thread is started for the corresponding movement. Upon release of the button, a stop command is sent, killing the aforementioned thread. Somehow a stop command was reaching our robot before we had even started a movement command, leaving the robot stuck in a stop command thread.

It was thanks to Professor Mueller that we were able to find the root cause (she turned out to be an excellent client after all). She broke our program within seconds of us handing her the Android device, leaving our heads spinning in utter confusion. We soon realized that if you quickly tapped a movement command, the intended move command was never sent but the release command (the stop command) still registered, forcing the robot to be stuck in the stop threads and never able to start a new thread. This called for us to change the way our movement works.

And thus a fix

To combat clashing threads, we decided to change the way our robot moves. Previously, it moved in short stutters, listening for another command during the pauses. This was because the robot moved a set short distance inside a loop, stuttering along as a movement command was held. Our new approach avoids leJOS's move commands: we synchronize both motors and directly power them on. This eliminates all of our pesky race conditions and lets us tap movement commands without fear of breaking anything. As an added bonus, we get smooth sailing movement (yay).
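For the curious, here's roughly what that looks like in code (a minimal sketch, assuming leJOS EV3's regulated-motor API; the port choices and class name are just placeholders):

import lejos.hardware.motor.EV3LargeRegulatedMotor;
import lejos.hardware.port.MotorPort;

public class DriveTrain {
    private final EV3LargeRegulatedMotor left = new EV3LargeRegulatedMotor(MotorPort.B);
    private final EV3LargeRegulatedMotor right = new EV3LargeRegulatedMotor(MotorPort.C);

    public DriveTrain() {
        // run both motors as one synchronized unit so neither gets ahead of the other
        left.synchronizeWith(new EV3LargeRegulatedMotor[] { right });
    }

    // power both wheels until a stop comes in: no loops, no stuttering movement threads
    public void forward() {
        left.startSynchronization();
        left.forward();
        right.forward();
        left.endSynchronization();
    }

    public void stop() {
        left.startSynchronization();
        left.stop(true); // true = return immediately so both motors are told to stop together
        right.stop(true);
        left.endSynchronization();
    }
}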
 And... for the future??

So as we all know, we have pretty important individual projects due Friday. We decided to set some short-term goals. I'll keep this short since I've been rambling on for long enough:
  • We plan to do a UI overhaul on our app. It's bland at the moment with just 2 edit fields and 4 buttons. We want to implement fragments, or at least a better way of organizing our options
  • Joysticks. Currently we use four arrow images for the controller. We want to make a more dynamic joystick that allows us to control the robot more freely, with better accuracy and fluidity
  • Saving preferences. We plan on having a way to keep "default" settings without manually entering the IP address and port number every time. We're aware that this could be done using Android's Shared Preferences or by just storing the settings in a read-only file (a rough sketch follows below this list)
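For the Shared Preferences option, a minimal sketch of what we have in mind (the key names and ipField/portField are placeholders of ours):

// save the connection settings once they've worked (inside an Activity)
SharedPreferences prefs = getSharedPreferences("robot_settings", MODE_PRIVATE);
prefs.edit()
     .putString("ip", ipField.getText().toString())
     .putInt("port", Integer.parseInt(portField.getText().toString()))
     .apply();

// on the next launch, pre-fill the fields with whatever was saved last time
ipField.setText(prefs.getString("ip", ""));
portField.setText(String.valueOf(prefs.getInt("port", 1234)));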

And to close:

 another Rick and Morty gif. enjoy (:
 


Monday, January 16, 2017

01100100 01100101 01110110 00100000 01110101 01110000 01100100 01100001 01110100 01100101 00001010
(that's binary for dev update) 

It moves! A poorly-recorded demonstration of our manual controller option


Today is the day of our project alpha presentations. We have been pretty hard at work all weekend to get our robot into a pretty good, presentable position. We're fairly excited to demonstrate what we've been working on for the past few weeks. A lot has been done since our last update.


Getting Tangled in Threads
One of our goals was to make movement controls somewhat real-time for our robot. Doing so is a huge issue for us because we need a way to override an infinite loop of "move forward" commands with a "turn left" command. While a button is pressed, we want to send constant requests for the robot to move, but at the same time listen for a new action that briefly halts the robot and starts a completely new command. This is all done with the Thread class.

Well, we didn't totally remember how to use them at first, so we pulled some old labs with interacting threads from CS332 (Operating Systems). It was more of a syntax check, but one meaningful takeaway from the labs is to store a count of the threads created in a thread-safe object, such as a Vector. With some tinkering, for every new button press a new thread is created and the old thread is terminated. It is important to keep only one thread running at a time, otherwise we'll get some nasty race conditions between two active threads (I got yelled at by Abby for trying to get the robot to move forward and backwards at the same time).
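The gist of that one-thread-at-a-time rule looks something like this (a bare-bones sketch; the class and method names are made up for illustration):

public class MovementManager {
    private Thread activeMove;

    // called on button press: kill whatever was running, then start the new movement
    public synchronized void startMove(Runnable moveLoop) {
        stopMove();
        activeMove = new Thread(moveLoop);
        activeMove.start();
    }

    // called on button release: interrupt the current movement thread, if any
    public synchronized void stopMove() {
        if (activeMove != null) {
            activeMove.interrupt();
            activeMove = null;
        }
    }
}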

It Understands Us!

Our current robot setup demo with voice recognition and refined app UI

Voice recognition is functional on our robot now! This is done using a SpeechRecognizer object (with an API given to us by the Android Studio gods) and a handful of intents. It wasn't too difficult to implement. All Android devices have a speech-to-text converter provided by Google that works with an online dictionary Google keeps up to date. Since it's our misfortune to have to deal with Augustana's strict network, we were lucky to find out that most Android devices also have an offline English voice recognition capability. This completely avoids the need to pair up another device to Abby's wifi hotspot.
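For anyone curious, the setup is roughly this (a sketch using the standard SpeechRecognizer/RecognizerIntent APIs; handleCommand is our own helper, shown a bit further down, and the offline flag needs Android 6.0+):

// inside the controller activity (RECORD_AUDIO permission is needed in the manifest)
SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(this);
recognizer.setRecognitionListener(new RecognitionListener() {
    @Override
    public void onResults(Bundle results) {
        ArrayList<String> guesses =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (guesses != null && !guesses.isEmpty()) {
            handleCommand(guesses.get(0)); // most likely transcription comes first
        }
    }
    // the remaining listener callbacks are no-ops for us
    public void onReadyForSpeech(Bundle params) {}
    public void onBeginningOfSpeech() {}
    public void onRmsChanged(float rmsdB) {}
    public void onBufferReceived(byte[] buffer) {}
    public void onEndOfSpeech() {}
    public void onError(int error) {}
    public void onPartialResults(Bundle partialResults) {}
    public void onEvent(int eventType, Bundle params) {}
});

// fired when the microphone button is tapped
Intent listen = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
listen.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
listen.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true); // keep it off the network (Android 6.0+)
recognizer.startListening(listen);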

To tie this into our robot controls, we created a new layout activity in Android Studio that has a microphone option and a stop button. Upon tapping the microphone, a command is spoken to the phone. Only five words are currently accepted in our dictionary: "move", "back", "left", "right", and "stop". These words directly correlate with the words used to command our robot. After a command has been spoken, the app checks whether the word is an accepted term. If it's a correct command, the command is reprinted on the screen and the robot receives the message to move via the socket object.
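That check is basically a whitelist lookup before the text heads out over the socket. Roughly, our handleCommand helper from above could look like this (commandText and commandSocket are placeholder names, and in the real app the network write happens off the UI thread):

private static final Set<String> ACCEPTED =
        new HashSet<>(Arrays.asList("move", "back", "left", "right", "stop"));

private void handleCommand(String spoken) {
    String word = spoken.trim().toLowerCase();
    if (!ACCEPTED.contains(word)) {
        return; // not one of our five command words, ignore it
    }
    commandText.setText(word); // echo the accepted command back on screen
    try {
        OutputStreamWriter out = new OutputStreamWriter(commandSocket.getOutputStream());
        out.write(word);
        out.flush();
    } catch (IOException e) {
        Log.e("RobotController", "failed to send voice command", e);
    }
}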

Our only caveat is that sending a voice command such as "move" will not also tell the robot to end the "move" thread when a new command is called. To combat that, a stop must be called in between actions to prevent thread races. To save our precious vocal cords from saying stop so often (and to dodge the lag of sending an extra voice command), we implemented a stop button, so the button can be pressed instead of voicing "stop" in between every command.

Our Alpha App

Just a quick show of our current app design. It's pretty rough at the moment, so bear with us. Hopefully we'll have something more elegant to look at later, but what we currently have is just enough to get the job done.




And Lastly, Looking into the Future

Some things we want to look at for the future of our robot (and for the final product) potentially include mounting a phone to the robot to use its camera and stream the feed to another Android device. After doing so, maybe we can include some image processing, e.g. face recognition.

Another possibility is making our robot artificially intelligent. With the robot dev team having taken the A.I. course in the past, we could possibly find a way to work that into our robot.
 

Monday, January 9, 2017

Back into the Grind

look, there's a snowflake on the table (Abby made that)


Hope everyone had a nice holiday break! This post will cover our pre-break and over-break advancements for our robot.

So What's new?

We've recently had a change in plans for communicating with our robot from an Android device. We have decided to use a wifi connection to send text to our robot. However, due to Augustana's strict wifi settings, our robot cannot connect to the campus wifi because a form of authentication is required. To combat that struggle, we decided to use a phone as a wifi hotspot to connect another phone and our lego robot.

We have successfully sent a string of text to our robot to be displayed on its screen using a socket connection class. The class declares a new socket object and uses an output stream writer to send text to the robot with a write() method followed by a flush() method.

hey look, it's some code. 


The socket object requires an IP address and port number to be passed in when it is created.
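If the screenshot is hard to read, the idea is something along these lines (a simplified sketch with our own names, not a copy of the actual class):

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.Socket;

public class RobotConnection {
    private final Socket socket;
    private final OutputStreamWriter writer;

    // the socket needs the robot's IP address and port number when it's created
    public RobotConnection(String ip, int port) throws IOException {
        socket = new Socket(ip, port);
        writer = new OutputStreamWriter(socket.getOutputStream());
    }

    // send one piece of text to the robot and push it out right away
    public void send(String text) throws IOException {
        writer.write(text);
        writer.flush();
    }
}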

Now that we can send text to the robot, a class has been created on the leJOS side that translates input Strings from an Android device into actual robot movement using its pilot object. As it's currently set up, default movement directions for front, back, left, and right move a preset distance. There is also a format for basic movement denoted with an F or B, followed by a speed and distance, such as F1305, which moves the robot forward at a speed of 130 for 5 units.

 a snippet of our robot movement class in Eclipse
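In rough code form, the translation looks something like this (a sketch assuming leJOS's DifferentialPilot methods; DEFAULT_DISTANCE, the exact command words, and the three-digit-speed reading of F1305 are our own placeholders):

// runs on the EV3: turns an incoming String into a pilot movement
public void execute(String command, DifferentialPilot pilot) {
    switch (command) {
        case "move":  pilot.travel(DEFAULT_DISTANCE);  return;
        case "back":  pilot.travel(-DEFAULT_DISTANCE); return;
        case "left":  pilot.rotate(90);                return;
        case "right": pilot.rotate(-90);               return;
        case "stop":  pilot.stop();                    return;
    }
    // formatted commands like "F1305": direction, three-digit speed, then distance
    char direction = command.charAt(0);
    int speed = Integer.parseInt(command.substring(1, 4));
    int distance = Integer.parseInt(command.substring(4));
    pilot.setTravelSpeed(speed);
    pilot.travel(direction == 'F' ? distance : -distance);
}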

And in the future?

Our project alpha demos are coming up fairly soon. We plan on having our app use two activities: the first, default one sets up a socket by asking the user to input an IP address and port number, then fires an intent that starts the main layout, with directional buttons for movement and fields for an angle and a distance. We are also looking into creating a stop button - which may have to run on a separate thread - that will override and cease all function in the robot and clear its movement queue (for which we will use a vector data structure to manage all the move commands).
For our final version of the application, we are currently researching voice recognition and video processing (with the phone mounted on the robot). Until then, stay tuned!
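The hand-off between those two activities would probably be something like this (a quick sketch; ControllerActivity and the extra names are placeholders we made up):

// in the setup activity, once the user has typed in the connection info
Intent intent = new Intent(this, ControllerActivity.class);
intent.putExtra("ip", ipField.getText().toString());
intent.putExtra("port", Integer.parseInt(portField.getText().toString()));
startActivity(intent);

// in ControllerActivity's onCreate(), read the values back out
String ip = getIntent().getStringExtra("ip");
int port = getIntent().getIntExtra("port", 1234);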