Tuesday 26 August 2014

Controlling a Quadcopter like a Jedi Using a Leap Motion (TM)

Recently I went to an event called 'NodeBots'. If you've ever heard of Node.js, you might already see where this is going. The event describes itself as "Robots powered by JavaScript". Anyone even vaguely familiar with JavaScript will know that it is mainly used for websites, specifically for client-side scripting. Plainly, this means a web page can run code on your computer to power the fancy transition effects of drop-down menus and all the other dynamic actions of Web 2.0(!). JavaScript also has a bit of a controversial reputation: as a language, it has very little static checking and will keep on chugging even after an error is encountered, which can make it quite difficult to debug. So it might seem a little strange that the idea of this event is to control robots using JavaScript. The trick is that participants use a popular platform called 'Node.js', which allows JavaScript programs to run natively, eschewing the requirement for a web browser. Explaining why anyone would want native JavaScript is a bit complicated and beyond the scope of this post.
I took the NodeBots opportunity to get familiar with JavaScript and use it to control a quadcopter. A quadcopter is essentially a miniature helicopter with four rotors instead of the traditional set-up of a main rotor and a tail. The four rotors are placed at the corners of the 'drone' so that it can tilt in any direction to control pitch and roll, drifting in the tilted direction.
Note that when I use 'drone' above, I mean an unmanned aerial vehicle. The military uses much larger and deadlier 'drones' but the one I was playing with was the size of a garbage can lid and completely unarmed. Also note that 'unmanned' does not imply automated, since I was in fact controlling the drone with a 'Leap Motion'.

Quadcopter drone used at the NodeBots event.

Leap Motion

One of the main components in this build was the Leap Motion. The Leap Motion is a small device, slightly larger than a Tic-Tac container, that can detect the orientation and pose of a user's hands. Using a Leap Motion really feels like we have recently slipped seamlessly into the future. It can detect a swipe, a fist, count fingers, whatever I can think of. Its one problem is accuracy: the readings are a bit shaky, and it will often jitter a finger out of existence for a split second. On top of that, since it projects its sensor field upward, it cannot figure out the orientation of my hands if they overlap.
Despite all of this, it worked perfectly fine for flying a drone. No finicky hand movements were required, no fingers need be counted. All that was needed was for the Leap Motion to track the orientation and rough position of the palm of my hand.

Example of Leap Motion(TM) detecting hand orientation and pose.
The Leap Motion hardware can be seen as the black box at the bottom of the image.

Controlling the Drone

I set up the following to control the drone:
  • The drone will take off when the leap motion detects hands in its field of view.
  • The drone will land when there are no longer hands in its field of view.
  • Left hand controls the pitch and roll. Hold the left hand out and tilt the hand in the relevant direction.
  • Right hand controls the height and yaw. The height is controlled by raising the right hand higher or lower relative to the left hand.
  • The yaw is controlled by moving the right hand forward or backward relative to the left hand.
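The control scheme above can be sketched in code. The actual project ran on Node.js with the Leap Motion JavaScript API; the Python below is an illustrative sketch only, and the field names ('pos', 'pitch', 'roll') are hypothetical stand-ins for the Leap Motion frame data:

```python
# Illustrative sketch of the hand-to-drone mapping described above.
# The real project used Node.js; the hand dict fields here are
# hypothetical stand-ins for Leap Motion frame data.

def hands_to_command(left, right):
    """Convert two palm readings into a drone command dict.

    Each hand is a dict with 'pos' (x, y, z) and 'pitch'/'roll' angles,
    or None if that hand is out of the field of view.
    """
    if left is None and right is None:
        return {"action": "land"}          # no hands visible: land

    cmd = {"action": "fly"}
    if left:
        cmd["pitch"] = left["pitch"]       # tilt left hand forward/back
        cmd["roll"] = left["roll"]         # tilt left hand side to side
    if left and right:
        # right hand height relative to left controls climb/descent
        cmd["lift"] = right["pos"][1] - left["pos"][1]
        # right hand forward/back relative to left controls yaw
        cmd["yaw"] = right["pos"][2] - left["pos"][2]
    return cmd
```

Each frame from the sensor would be converted this way and sent to the drone client.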

    Diagram showing the controls used for the drone.
    A leap motion would detect hand movements and convert them to commands for the drone.

The Experience

Controlling the drone was surprisingly difficult. One would think that controlling things with a wave of the hand would be the most natural and easy way to do so. It would be reasonable to expect the drone to act like an extension of the body, transcending the unnatural button mashing of a more traditional controller scheme. Unfortunately, the Leap Motion simply turned my hands into the controller; the same difficulty of learning a new control scheme persisted. Although by the day's end I was quite skilled at flying the drone around without too many accidents, a standard controller would likely win out on usability.

Code and Credits

Code is available on my GitHub account.
Credits to Robert Wood, and Brandon Surmanski (me).

Thursday 10 July 2014

Neural Networks: Like Custom Virtual Computers


Trained for a specific task, Neural Networks are like specialized virtual computers

I've been thinking about neural networks quite a bit lately. I recently read an article (this one actually) that did an excellent job at explaining exactly how neural networks work, and went on to give an example of optical character recognition (OCR) using a neural network.


Neural networks really aren't as complicated as I previously thought. There are a few different types of digital neurons used in neural networks, the most popular being perceptrons and sigmoid neurons. The idea behind a perceptron is that it can have any number of weighted inputs and has a single output, which acts like a step function dependent on the weighted sum of the inputs.
http://upload.wikimedia.org/wikipedia/commons/8/8c/Perceptron_moj.png
The function of a perceptron artificial neuron.
The output (t) is only on if the sum of its weighted inputs (x1 .. xn),
plus some bias (w0), is greater than some threshold.

(Image from Wikimedia Commons)

One example might be the case where there are two inputs 'A' and 'B', both with weights of 1, a bias of 0, and a threshold of 0. Normally, perceptrons allow analog inputs, but if we limit the inputs to the digital values 0 and 1, this perceptron forms a boolean function. Below is the truth table:
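A minimal sketch of such a perceptron in Python. Note that under the strict 'greater than' rule from the figure, weights of 1 and 1 with a threshold of 0 produce OR:

```python
# A single perceptron: fire when the weighted input sum exceeds the
# threshold. With weights of 1 and 1, a bias of 0, and a strict
# 'greater than' threshold of 0, it computes the boolean OR function.

def perceptron(inputs, weights, threshold, bias=0):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Print the truth table: 0 0 -> 0, then 1 for every other input pair.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (1, 1), 0))
```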

http://www.diracdelta.co.uk/science/source/t/r/truth%20table/image002.gif
OR function truth table


If you're familiar with computers and boolean logic, you might recognize the function this truth table forms: the boolean OR function. There are a handful of basic boolean functions: OR, NOR, XOR, AND, NAND, NOT, and perhaps a handful of other less useful ones. Nearly all of these functions can be implemented with a single perceptron, just by modifying the perceptron's weights and threshold; for example, a NAND function is made with weights of -1 and -1 and a threshold of -2. (XOR is the famous exception: it cannot be computed by a single perceptron, though a small network of them handles it fine.)

Those familiar with computer design might already realize why this is so important. These functions are often implemented in discrete computer chips, or in integrated circuits where an instance is known as a 'gate'. It has been proven that any computable operation can be produced by some combination of these boolean logic gates. For example, addition of 8-bit numbers can be done by connecting together 8 single-bit adder circuits, where each single-bit adder contains only 5 logic gates, for a grand total of 40 logic gates for 8-bit addition. This addition circuit scales up simply by adding more single-bit adders. This is basically what every modern CPU does (with various optimisations added). And it's not just addition: the entire functional component of a CPU is created by wiring together a great number of these logic gates. Since these artificial neurons, perceptrons, can act as logic gates, it is thus possible to create an entire computer out of perceptrons.
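To make this concrete, here is a sketch (in Python, since the gates are the point rather than the language) of a NAND gate built from a single perceptron, and 8-bit addition built from chained full adders made only of those NAND gates:

```python
# Perceptrons as logic gates, as described above. NAND alone is
# functionally complete, so a full adder (and from it, multi-bit
# addition) can be wired entirely out of perceptron NAND gates.

def perceptron(inputs, weights, threshold):
    # fire when the weighted sum exceeds the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def nand(a, b):
    # NAND as a single perceptron: weights -1 and -1, threshold -2
    return perceptron((a, b), (-1, -1), -2)

def full_adder(a, b, carry_in):
    # the classic full adder built from 9 NAND gates
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    s1 = nand(t2, t3)          # s1 = a XOR b
    t4 = nand(s1, carry_in)
    t5 = nand(s1, t4)
    t6 = nand(carry_in, t4)
    total = nand(t5, t6)       # total = s1 XOR carry_in
    carry_out = nand(t4, t1)   # carry = (a AND b) OR (s1 AND carry_in)
    return total, carry_out

def add8(x, y):
    # chain 8 single-bit adders, exactly as described above
    carry, result = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

(This version spends 9 NAND gates per bit rather than the 5 mixed gates mentioned above, but the principle is the same.)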

To actually get a neural network to do what you want, it is 'trained'. First, a really large network of these perceptrons is connected together, forming a web. Then the network is trained by defining some target and tweaking the weighted edges between the perceptrons until the network as a whole approximates the desired function. With the single-perceptron example, if we wanted to create a NAND function, the trainer would be provided the truth table for NAND and would modify the two input weights and the threshold until the perceptron itself acted like the truth table. This scales up to do very complicated things. A common example use for neural networks is character recognition: creating a neural network that recognizes which character is scribbled in an image.
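The single-perceptron NAND training described above can be sketched with the classic perceptron learning rule; the learning rate and epoch count here are arbitrary choices:

```python
# Train a single perceptron toward a target truth table (here, NAND)
# by nudging the weights and bias whenever the output is wrong.

def train_nand(epochs=20, lr=0.1):
    table = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (a, b), target in table:
            out = 1 if w[0] * a + w[1] * b + bias > 0 else 0
            err = target - out            # -1, 0, or +1
            w[0] += lr * err * a          # nudge each weight toward
            w[1] += lr * err * b          # the correct answer
            bias += lr * err
    return w, bias
```

Since NAND is linearly separable, this loop is guaranteed to settle on a working set of weights.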

http://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/300px-Colored_neural_network.svg.png
A simple neural network with 3 inputs and two outputs.
A trainer program would strengthen or weaken the connections
between nodes to closer approximate the target function.


Essentially, a neural network is a custom virtual computer designed to compute exactly what you tell it to. And I think that is very cool.

Monday 14 April 2014

The Difference Between All Those Computer Oriented Engineering Fields

Computer Engineering vs Software Engineering vs Computer Science vs Electrical Engineering

CEG vs SEG vs CS vs ELG

Computer Engineering, Software Engineering, Computer Science, Electrical Engineering. As a Computer Engineering student at the University of Ottawa, I've been asked more than once what the difference is. Hopefully I will be able to clarify.

Software Engineering (SEG)

Let's start with Software Engineering. Software Engineering focuses on the high-level architecture of software. Software engineers do not worry too much about actual code or data, and instead work with levels of abstraction and the organization of code and data. That said, there is still plenty of technical material: software engineering students will still have programming courses, and will learn about object-oriented design patterns, data structures and algorithms, databases, and software construction.

Computer Science (CSI)

Next up, Computer Science. Depending on the university, computer science may not be part of the faculty of engineering, but it is very closely related. Computer science approaches software from a mathematical point of view. Computer science students will learn about things like data structures, algorithms and measures of algorithmic efficiency, and linear algebra.

Traditionally in Canada (and now at some schools in the United States), engineering graduates receive an 'iron ring' as part of their graduation ceremony. This ring represents the heavy burden that engineers face when making important decisions: if an engineer builds a shoddy bridge, it may fall down and kill people. The ring is worn as a mark of pride and honour. Since computer science isn't technically part of engineering, CS graduates may not get this ring, or may not be able to become 'professional engineers'. So, if you really want that ring, you may need to check your specific school's rules to be sure.

Electrical Engineering (ELG)

Electrical Engineering is all about hardware. Electrical engineers work with a lot of circuits and electricity, as the title may suggest. It doesn't stop at simple circuits, though: ELG students will take courses in signal analysis and processing, communication systems, control systems, and the like. At my university, electrical engineering students can specialize in one of the following:

  • Communication Systems
  • Systems Engineering
  • Electronics
  • Microwaves and Photonics
  • Power and Sustainable Energy

Computer Engineering (CEG)

Computer Engineering is the most confusing of the bunch. Computer engineering is a bit of a hodgepodge of the other fields mentioned. It focuses mostly on the design of computer systems and the low-level interface with software. Differentiating it from Computer Science and Software Engineering, there is a considerable hardware aspect to the field, although unlike Electrical Engineering the focus is on digital electronics instead of analogue. Taking computer engineering, you can expect to learn about assembly code, CPU design and architecture, as well as a general overview of the contents of the other fields mentioned.

Conclusion

So, in conclusion, software engineering is nice if you would like to focus on software design; computer science would be better if you are interested in the mathematics of computing; Electrical Engineering is for those that wish to learn mostly about hardware; and computer engineering is a balancing act between the other fields with focus on the interface between hardware and software.

Saturday 5 April 2014

Bomberman and Unicorns at the Shopify Lounge

Bomberman At the Shopify Lounge

Unicorns and things

Last Sunday, I attended another AI competition, this one was hosted at the Shopify lounge in Ottawa.

Firstly, wow, the Shopify lounge is big. It even has a slide between floors!

Check out that slide on the left! Not pictured: an arcade machine!


Among the other impressive features, an arcade machine, full wall mural of a unicorn knight battle, and of course, a free espresso machine. It looked like a delightful place to work on a day to day basis.

Unicorn knights face off!

Bomberman

Anyways, I wasn't there just to drink fancy lattes and ride slides. The goal of this AI challenge was to create an AI for a Bomberman clone. If you have never tried Bomberman, go try it; it's quite a fun game. There is a grid of destructible rocks and indestructible walls, and the point is to lay down bombs to destroy rocks and catch the enemy players in an explosion.

http://199.101.98.242/media/shots/38646-Mega_Bomberman_(USA)-3.png
Place bombs, explode stuff (except yourself)


The Bomberman implementation used can be found on GitHub.
This version of Bomberman has only 2 powerups, number of bombs and radius of explosions, which made the logistics easier.
Three clients were provided to choose from: one for Python, one for Java, and one for Ruby.
Since I knew Java best, I ended up using the provided Java client, also found on GitHub.

It had been a long time since I last used Java for any project, since most of my side-projects are written in C or C++. I think it might have been over 2 years since last touching Java, but it was easy to slip back into. Fortunately since I only needed to hack out something quickly, I didn't need to approach much of the more annoying side of Java; No AbstractSingletonProxyFactoryBeans were needed, but checked exceptions of course made their unwelcome appearance in such a simple application.

The Implementation

To keep things simple, I created two classes: a 'Bomberman' main class to connect to the server, get updates and send back responses, and a Player class with a single externally important method 'update', which would return a single move computed for the turn.

Each turn, the server sends over a JSON snippet containing the contents of each cell of the map, and it was the Java client's responsibility to throw all of that information into a 'GameState' object that could be queried by my player.

The GameState contained a 2D array of cells representing the game board. Each cell could contain one of a set of choices:
  • a wall
  • ground (aka nothing)
  • a rock
  • a bomb
  • flame
  • a powerup to increase the amount of placable bombs
  • a powerup to increase radius of placable bombs
Below is an image of the Bomberman implementation used.
Green cells are unbreakable walls, brown are breakable rocks. Also visible are the players (marked as p1, p2, p4), bombs (marked as BB), and flames from an explosion (marked as +).

Ideas and Things

With everything ready to go, I needed to think of a strategy for my AI. One idea I had was to have a 'grenade' AI. In the traditional game of bomberman, there is a powerup to pick up and throw bombs. This would have allowed me to create an AI that would cook a bomb and throw it at a nearby enemy for a near guaranteed kill. Unfortunately, with the simplified powerup list, that would not be possible.

Another possibility would be to do a full minimax search for the best possible move. The basic concept of a minimax AI is that the AI predicts the game state multiple turns into the future, scores each choice on a decision criterion, and chooses the move that produces the best likely outcome. The term 'minimax' comes from smashing together the two words 'minimize' (the opponent minimizing our positive outcomes) and 'maximize' (us maximizing our own). For a game as simple as Bomberman this might have worked, but I only had 12 hours to work on my AI and had never implemented a minimax AI before, so I thought it easier to stick with what I know. For a more in-depth explanation of minimax AI, check out this link.
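For the curious, the core of minimax fits in a few lines. This is a generic Python sketch against a toy game interface of my own invention, not anything from the contest:

```python
# Generic minimax: recursively score future states, assuming the
# opponent always picks the move that is worst for us.

def minimax(state, depth, maximizing, game):
    if depth == 0 or game.is_over(state):
        return game.score(state)
    results = (minimax(game.apply(state, m), depth - 1, not maximizing, game)
               for m in game.moves(state))
    return max(results) if maximizing else min(results)

class ListGame:
    """Toy game for demonstration: a state is either a score (a leaf)
    or a list of child states reachable in one move."""
    def is_over(self, state):
        return isinstance(state, int)
    def score(self, state):
        return state
    def moves(self, state):
        return range(len(state))
    def apply(self, state, move):
        return state[move]
```

For example, `minimax([[3, 5], [2, 9]], 2, True, ListGame())` picks the branch whose worst case is best: the opponent would hold us to 3 in the first branch and 2 in the second, so the search returns 3.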

Instead of minimaxing, maybe I could aim to trap an enemy, or simply run up to an enemy, surround them with as many bombs as possible, and run away: the hit-and-run approach. This seemed like a good idea for its simplicity. It would be kind of like Blinky's AI from Pac-Man, except with more explosions.

(Here's a good explanation of the Pac-Man ghost AI; check out the red ghost.)

With a basic concept in mind, the first order of business for my AI was to get some sort of simple path finding working. Without path finding, the AI would be a sitting duck and wouldn't be able to hit nor run.
I decided the best way to get things working was to annotate the provided Cells. For every cell of the map, I would find the distance to the player, the number of rocks that would need to be destroyed to reach that position, and the next step the player would need to take to head toward that cell. For this, I used a simple graph traversal: I created a graph of 'AnnotationCells', initialized their distances to infinity, and recursively visited all neighbouring cells, noting their current distances. It was a simple depth-first graph traversal in which I noted down the required parameters. Once this was done, I was able to inspect any cell (along with its annotation) and know the next step needed to reach it. Yes, the solution is pretty computationally expensive, but I figured that in a hackathon, keeping it simple is more important than performance.
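The same annotation pass can be sketched with a breadth-first traversal, which reaches every cell at its shortest distance first. The string-grid encoding here is made up for illustration; the real client worked with annotated Cell objects:

```python
from collections import deque

# Annotate every reachable cell with its distance from the start and
# the first step to take toward it. Grid cells: '.' ground, '#' wall.

def annotate(grid, start):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    first_step = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc, step in ((-1, 0, "up"), (1, 0, "down"),
                             (0, -1, "left"), (0, 1, "right")):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                # every cell inherits the first move taken out of the start
                first_step[(nr, nc)] = (step if (r, c) == start
                                        else first_step[(r, c)])
                queue.append((nr, nc))
    return dist, first_step
```

After one pass, looking up any cell gives both how far it is and which single move heads toward it, which is exactly what the hit-and-run behaviour needs each turn.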

Bombs and Not Blowing Up

Next, I needed a way through rocks while pathing. I figured the easiest way would be to deal with them when I reach them: if the pathing algorithm tells me to path into a rock, place a bomb.
Now I had the issue of dodging bombs. The solution I decided on was to add another field to my annotation: a 'danger' factor. A cell is dangerous if it is within 5 direct spaces of a bomb. Then I added another behaviour to my pathing: if the cell the AI is in, or the one it is going to, is dangerous, run to the closest safe place. This required yet another graph traversal, this time limited to the few cells around the AI, to find the closest 'safe' tile.
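The 'danger' annotation might look something like this sketch, marking everything within blast range of a bomb and stopping rays at walls (again with an illustrative string-grid encoding):

```python
# Mark every cell within blast range of any bomb as dangerous.
# Grid cells: '.' ground, '#' wall. Bombs are (row, col) tuples.

def danger_cells(grid, bombs, blast=5):
    rows, cols = len(grid), len(grid[0])
    danger = set()
    for br, bc in bombs:
        danger.add((br, bc))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            for i in range(1, blast + 1):
                r, c = br + dr * i, bc + dc * i
                if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == "#":
                    break  # the blast doesn't pass through walls
                danger.add((r, c))
    return danger
```

Any pathing step that lands in this set triggers the run-to-safety behaviour described above.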

I could place more than one bomb at once to blow up more walls, but I figured it would be easiest to place a single bomb at a time, to guarantee a path to a safe place.

Rounding Corners

At this point, I had a bomberman that correctly pathed to a given square and avoided bombs. Fairly promising, and it had only taken the first 5 or 6 hours to get there. But one thing I noticed was that he sometimes had problems rounding corners.

The corner-rounding issue took way too much time. Over hours of experimentation and debugging, I discovered that the cause was the way the client and server communicated. The server would enqueue updates for future frames and buffer moves. So if I told the server I wanted to go forward this turn, it might take 3 or 4 turns to actually move. In the meantime, I would see myself at the same position and send more messages to go forward. This caused my character to overshoot the turning target, try to turn around, and overshoot the turn again. The character would oscillate around the turn and never get anywhere.
I was not able to fix this issue properly, since it was unknown how many turns it would take for a move to be scheduled by the server, and sometimes moves would be missed altogether. In the end, I had to compromise by having the AI send updates less often.

The Dance Off

Finally, it was time for the versus round. Four AIs were deemed complete, and they were set up to all face each other at once, running on a single laptop that also hosted the server. After a few minutes of messing around with configurations, the server and all of the AIs were running.

I expected my AI to be able to path, drop bombs, and maybe get stuck on a corner or two, but my AI, along with all of the others, seemed to have cold feet. My AI danced back and forth between two starting squares. Two of the other AIs managed to blow themselves up, and one other AI managed to join me dancing.

Conclusion

So, it was a tie in the end. Despite all our hard work, none of the AIs managed to work correctly. I tried changing some parameters before running him again on the competition computer, but to no avail. In retrospect, I believe my AI was stuck not on path finding, but because his target kept changing. I set the path target to be the first character in a list, but perhaps as he moved forward the list was generated in a different order, even though I sorted the list before using it. He hadn't danced strangely in testing because there were only 2 other characters on the board.

Anyways, it was a pretty fun event regardless of my silly AI clamming up for his big performance. Hopefully there will be another AI event in the future.

The Source

Full source code of my AI can be found on GitHub, along with the Go server and the Java client.

Thursday 27 March 2014

CS Games: AI Challenge



Last weekend I attended the 'CS Games'. The CS Games are a Canadian computer science competition that, this year, was hosted at ETS in Montreal. In this post, I will be looking at one of the challenges I participated in: The AI Challenge.

Being my first attempt at a CS Games event, I had no idea what to expect. Three people from our school's team worked on this challenge, and upon arrival we were told that we would be writing an AI for a two-team capture-the-flag snowball fight. Two teams, each controlled by a different university's AI, face off. Each team has seven players, and the goal is to steal the flag in the middle of the course and bring it back to the starting position, or to knock out all of the players on the other team. To prevent a rush for the flag, at least 20% of the players in the game must be knocked out before the flag can be picked up. Teams fight a succession of games across a tournament bracket to see who is the best.

The AI Challenge. Two snowballer gangs enter, one leave.

The system provided was a Kubuntu machine with no internet access. Documentation was provided on a local server. We had two computers to work on, both with the file system synced for the logged-in user (uottawa); if we changed a file on one computer, it would change on the other.
Standard software provided included gedit, Java, Python, NetBeans, Eclipse, Emacs, Vim, and anything else you could want for a 3-hour programming challenge. We were allowed to choose between two languages, Java or Python. To keep things simple and quick, we decided to develop in Python using gedit.

To implement the AI, we were provided with a class representing our 'gang' and needed to implement a single function named 'compute', which took a world state as a parameter and returned a list of actions that our snowballer team would take. The available actions were: throw a snowball at a destination, move to a destination, pick up the flag (if close enough and unlocked), drop the flag, or do nothing. Each snowballer can take only a single action per frame. Throwing a snowball takes more frames the farther it is thrown, but faster-moving snowballs also do more damage. The files provided implemented a server to host a game, and two clients. The clients came with a sample AI that moved around randomly and threw snowballs in random directions.
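The shape of the required interface was roughly as follows. This is an illustrative Python sketch rather than the contest's actual API; the signature, the tuple-based actions, and the THROW_RANGE constant are all my own stand-ins:

```python
import math

THROW_RANGE = 10.0  # made-up range within which a throw is worthwhile

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def compute(team, enemies, flag_pos):
    """Return one action per snowballer: throw at the nearest enemy
    in range, otherwise advance on the flag."""
    actions = []
    for pos in team:
        in_range = [e for e in enemies if distance(pos, e) < THROW_RANGE]
        if in_range:
            target = min(in_range, key=lambda e: distance(pos, e))
            actions.append(("throw", target))
        else:
            actions.append(("move", flag_pos))
    return actions
```

The real 'compute' received a richer world-state object, but the one-action-per-player-per-frame shape was the same.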

Each challenge at the CS Games is allotted 3 hours. "Plenty of time!" we thought, and started thinking up a solution. It seemed there were two basic philosophies to choose from: either the team splits up and moves independently, or the team works together, focused on the same thing.
One idea that came up was to treat the snowballers as boids. Boids are a simple flocking model patterned after the movement of swimming fish and birds in flight. Boid behaviour has three simple rules:

1) Alignment: all boids want to align their movement direction with the boids around them;
2) Cohesion: all boids want to keep close to other boids, and move closer if too far away;
3) Separation: all boids want to keep space between themselves and other boids to avoid crowding, and will move away if too close.
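The three rules can be sketched directly for point boids, with positions and velocities as 2D tuples; the radii and weights here are illustrative tuning knobs, not values from any real implementation:

```python
import math

def boid_step(boids, radius=50.0, min_dist=10.0,
              align_w=0.05, cohere_w=0.01, separate_w=0.05):
    """One update of the three boid rules. Each boid is ((x, y), (vx, vy))."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    updated = []
    for i, (pos, vel) in enumerate(boids):
        near = [(p, v) for j, (p, v) in enumerate(boids)
                if j != i and dist(p, pos) < radius]
        vx, vy = vel
        if near:
            n = len(near)
            # 1) Alignment: steer toward the average neighbour velocity
            avx = sum(v[0] for _, v in near) / n
            avy = sum(v[1] for _, v in near) / n
            vx += align_w * (avx - vx)
            vy += align_w * (avy - vy)
            # 2) Cohesion: steer toward the neighbours' centre of mass
            cx = sum(p[0] for p, _ in near) / n
            cy = sum(p[1] for p, _ in near) / n
            vx += cohere_w * (cx - pos[0])
            vy += cohere_w * (cy - pos[1])
            # 3) Separation: steer away from neighbours that are too close
            for p, _ in near:
                if dist(p, pos) < min_dist:
                    vx += separate_w * (pos[0] - p[0])
                    vy += separate_w * (pos[1] - p[1])
        updated.append(((pos[0] + vx, pos[1] + vy), (vx, vy)))
    return updated
```

Run once per frame over the whole flock, these three nudges are all it takes to produce the swarming behaviour in the demo linked below.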

Although the rules are very simple, the behaviour that results is surprisingly complex. Here is a fairly good demonstration of what boids are capable of:
XnaBoids - Swarm Intelligence Demonstration
and for more reading on boids, see:
http://www.red3d.com/cwr/boids/

Although boids might be a good candidate for this game, seven snowballers probably wouldn't be enough for proper flocking; and it would probably be best to spread out to prevent being hit by stray snowballs intended for a nearby teammate.

Taking inspiration from our favourite algorithms, we concluded that a divide-and-conquer solution would be a good plan. Our snowballers would split into squads and work on picking off a spread-out enemy team, or surrounding a grouped-up one. We decided on 3 squads: two squads of two snowballers, and one squad of three. This way each squad could also have a unique AI, increasing the chance of at least one behaviour surviving. Having to implement the 'gang' class, we started rearchitecting the solution so that the gang contained multiple squads, and each squad could have its own AI.

At first the going was slow. Although we had two computers and could use Git to collaborate, we were both trying to add features to the same files and had to wait for the other to commit. By the time we figured out our strategy and got the basic classes up, half our time had already passed! Further time was spent getting basic controls going and figuring out how to use the provided API.

One problem that took way too long to solve was throwing. If a snowballer was told to throw while already throwing, the new throw would interrupt the current one. Potentially a desired behaviour, but when you have three hours to make something, the 10 minutes needed to figure out why it's not working, plus the further time required to fix it, is time spent in frustration. By the end of the first AI challenge, we had a single two-person squad that moved up to the same y-coordinate as the flag and threw like floppy fishes; not a successful AI. We submitted the sample AI unmodified and hoped for the best.
Perhaps we over-engineered for a three-hour challenge.

The next day, when it was time for the second round, we were determined to at least get something working properly. Focusing on the squad that stood next to the flag, we decided to make it a 'defence squad' by adding logic to have the squad orbit the flag, stopping only to throw snowballs at any visible enemies. The two squad members would trace a circle around the flag, each standing on opposite sides. This behaviour was added as an experiment, but it ended up being surprisingly effective.
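The orbit behaviour boils down to spacing squad members evenly around a circle and advancing the angle each frame. A sketch, with a made-up radius and angular step:

```python
import math

# Place squad member `member_index` of `squad_size` on a circle around
# the flag, advancing the whole formation a little each frame.
# (Radius and step are illustrative parameters, not contest values.)

def orbit_position(flag, member_index, squad_size, frame,
                   radius=5.0, step=0.1):
    angle = frame * step + member_index * (2 * math.pi / squad_size)
    return (flag[0] + radius * math.cos(angle),
            flag[1] + radius * math.sin(angle))
```

With a squad of two, member 1's angle is offset by pi, which puts the pair on exactly opposite sides of the flag at every frame.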

After getting the throwing logic working correctly, the squad members were able to peg enemy units with consistency. The two members of our 'defence' squad were able to take out the entire enemy team of the sample AI. To be fair, the sample AI was mostly driven by 'math.random', but this still felt like a fair achievement for our previously faltering AI.

In the end, the 'defence' AI worked so well that all squads were assigned it, with modifications so that each squad orbited at a different distance from the flag but otherwise followed the same behaviour.

Furthermore, this flag-grabbing 'defence' AI was told to delay rushing for the flag, in anticipation of other AIs running straight for the flag as soon as it became available, only to be mowed down by our AI on guard.

As a result, our AI took second place out of about thirty teams! Going from nothing at the end of the first challenge block to second place at the end of the second, I would say the AI challenge was a success.

Hello World!

It seems appropriate that the first entry should be a simple introduction; A summary of where I am right now. As of the time of this writing I, Brandon Surmanski, am a 4th year Computer Engineering (CEG) student at the University of Ottawa, and I am a serial programmer.

The word serial in the context of computers usually refers to low-level networking stuff, or marshalling objects into a flat representation. I am not that kind of programmer. I am the kind that always has some side project on my mind and can't stop myself from jamming at the keyboard until something runs. Sometimes it is nothing more than a prototype and I get bored and move on to the next project, and sometimes I end up with something cool. Either way, this blog is intended to log the progress of my meandering consciousness. From the point of view of this blog, even abandoned projects will yield new insight into whatever has caught my attention.

Soon to come:
1) I will throw down a post or two summarizing my experience with the 2014 'CS Games' computer science competition, which I attended this previous weekend.
2) I will ramble about the development of a compiler I am working on.
3) I may even reminisce on old projects lost in limbo from distraction.

Stay tuned.