CyberPeople Presentation at DevXS

Posted on November 13th, 2011 by John Christopher Murray

John Dyer’s awesome super robot presentation at DevXS!

MindBodySoul Robot

Posted on November 13th, 2011 by John Christopher Murray

The MindBodySoul Robot is a robot that accepts input in a wide range of formats – mind control, body movement, and voice ('soul'). The robot is built on a large Roomba-style base and carries a laptop running a web server (lighttpd), which accepts commands via a PHP script and triggers movement scripts written in C. Several applications can be used to control it:

A Kinect, to control it using body movement: gestures can be used to move the robot, and we plan to mount the Kinect on the robot itself.

The robot can also be controlled with a mind controller – yes, really. A neural headset connected to a laptop, together with a companion application, can send commands to the robot, though it is limited to 'forwards' and 'turn'.

A voice recognition app for Android can be used to control the robot. It relies on Android's speech-to-text service (used by some Android keyboards and some of Google's search widgets) to translate speech into text, then matches the result against the closest word in its list of commands.
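That 'closest word' step can be sketched with fuzzy string matching. The following is an illustrative Python sketch rather than the project's actual code (the real app runs on Android), and the command vocabulary shown is an assumption:

```python
import difflib

# Hypothetical command vocabulary - the real app's list may differ.
COMMANDS = ["forward", "backward", "left", "right", "stop"]

def match_command(heard, commands=COMMANDS, cutoff=0.6):
    """Map a speech-to-text result onto the closest known command.

    Returns None if nothing is close enough, so a misheard phrase
    doesn't send the robot off in a random direction.
    """
    matches = difflib.get_close_matches(heard.lower(), commands,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(match_command("Forwards"))   # "forward"
print(match_command("lift"))       # "left"
print(match_command("banana"))     # None
```

The cutoff matters: too low and noise becomes movement; too high and slightly off transcriptions are ignored.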

All of these communicate with the robot's web server to pass on commands, which the robot then turns into movement; more recently the same commands also control a quadrocopter in tandem with the robot, so that they move together. All of the parts use Wi-Fi to communicate with each other, with the exception of the modules mounted on the robot itself.
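As a rough sketch of that command path: each client sends a command to the laptop's web server, which launches the corresponding movement program. The real stack is lighttpd plus a PHP script calling the C movement scripts; the stand-in below uses Python's standard-library HTTP server instead, and the endpoint path, query parameter, and script names are all assumptions for illustration:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical mapping from command names to movement programs; the
# real robot uses a PHP script that calls movement scripts written in C.
MOVEMENT_SCRIPTS = {
    "forward": ["./move", "forward"],
    "turn":    ["./move", "turn"],
    "stop":    ["./move", "stop"],
}

def parse_command(path):
    """Extract a known command from a request path like /command?cmd=forward."""
    cmd = parse_qs(urlparse(path).query).get("cmd", [""])[0]
    return cmd if cmd in MOVEMENT_SCRIPTS else None

class CommandHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = parse_command(self.path)
        if cmd is None:
            self.send_response(400)
            self.end_headers()
            self.wfile.write(b"unknown command")
            return
        subprocess.Popen(MOVEMENT_SCRIPTS[cmd])  # fire and forget
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

# To run on the robot's laptop (blocks forever):
# HTTPServer(("0.0.0.0", 8080), CommandHandler).serve_forever()
```

A client, whether the Kinect app, the Android voice app, or the mind-control app, would then just request something like `http://<laptop>:8080/command?cmd=forward` over Wi-Fi.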

James Lendrem on the Kinect

Over this weekend I have been attending DevXS, a developer conference for students. It was a prototype run of the idea of a conference for students as producers, and I just wanted to give some of my views on the event and what I thought of it.

I heard about DevXS a few months ago from a friend. His university had sent out emails about the development conference, asking if people wanted to go and sign up. I was intrigued when he mentioned it, as I had heard of codeathons before but never attended one.

At first I was somewhat wary of signing up to the convention. It was in the Lincoln area and I was in the Newcastle area; needless to say, that was a long way for me to travel. After deliberating for a while I decided to go with one of the best lines in the world: 'F— it!' I signed up and waited for the conference to arrive.

When I first arrived I must admit I was somewhat tired after my long drive. However, we soon got signed into our hotel and the event itself. The first night was just a basic meet-and-greet: we drank beer and played a game of Developer Bingo. (Developer Bingo is where you are given a board of questions related to development and have to find people who have done each of the things; once all the questions are signed off, you win.)

I met a lot of new people that night and found it rather amusing. I must say that, for once, I think I had found people who shared similar interests to my own: interests relating to code, problems, solutions, and just general fun with computers.

The next day we had to get into a variety of teams. I joined a team doing development work on robots. The idea was to make a robot that could be controlled by three different methods: motion via the Kinect, sound via an Android phone, and mind via a device that could pick up concentration levels.

Needless to say this was a rather interesting project overall, and it presented us with a variety of challenges. I was working on the motion control using the Kinect, and I must admit this was rather difficult.

First of all we had to find the libraries and APIs we could utilise to make the Kinect work. Being something of a Free Software nut, I started out looking for open source APIs that we could use.

The first API I found was OpenKinect, which could connect straight to the Kinect and pull its multiple data streams. I started playing around with this and found it rather cool: from the Kinect we could get video streams and depth information, not to mention rotate and tilt the sensor and control its LEDs.

However, there was a problem with the API: it only did raw data streams. That is all very well if raw data is all you want, but we needed motion control, and that meant being able to track different parts of the body. This library, while neat, was useless for that unless we could implement tracking ourselves, which is a rather large problem we didn't have the time to solve. Being a computing scientist, and therefore very lazy, it was time to look for a new library that could do this for us.
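To give a flavour of what 'raw data streams' means in practice, here is an illustrative sketch (not the project's code) of working with a raw Kinect depth frame: you get one depth value per pixel, and even a question like 'where is the user?' is left entirely to you. The frame below is synthetic (a real one would come from a library such as libfreenect), and the threshold is an arbitrary assumption:

```python
import numpy as np

def nearest_blob_centre(depth_frame, max_mm=1500):
    """Find the centre of all pixels closer than max_mm.

    With only raw depth data, this crude threshold-and-average is
    about as far as you get without a proper skeletal-tracking
    library - which is exactly the wall we hit.
    """
    mask = (depth_frame > 0) & (depth_frame < max_mm)  # 0 = no reading
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())

# Synthetic 480x640 frame: background at 3 m, a "person" blob at 1 m.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[100:300, 200:400] = 1000
print(nearest_blob_centre(frame))  # -> (299, 199), the blob's centre
```

Distinguishing a hand from a head from this alone is the hard part, which is why we went hunting for a library with tracking built in.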

The next library we tried to utilise was OpenNI. This one looked rather promising, as it offered motion-tracking capabilities. However, this is where we encountered problems with open source products.

Open source can be a rather bad place for documentation. Not many programmers like documenting features; they would much rather be adding new ones or improving the existing feature set with better code. This causes problems for people coming from outside the project, as they have no idea what it does or how to utilise it. Due to the severe lack of up-to-date documentation, we couldn't get any of the library examples to run.

The next library I had a look at was OpenCV, as I had heard of people utilising it with the Kinect to do tracking. This, however, turned out to be non-viable due to the interconnections between the two different libraries we would have had to utilise.

After doing this I decided to make a leap to the unthinkable: I booted into Windows and downloaded the Kinect for Windows SDK. This SDK had everything we needed; it did raw data streams and it also did skeletal tracking. The only problem was that it only allowed C# or C++ development, but this wasn't too great a problem.
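With skeletal tracking, the SDK hands you 3D joint positions, and turning those into robot commands becomes simple geometry. The post doesn't describe the actual gesture logic, so the sketch below is purely illustrative: the joint layout, thresholds, and command names are all assumptions, and it is written in Python for brevity rather than the SDK's C#.

```python
# Illustrative gesture-to-command logic. A skeletal-tracking API
# (such as the Kinect for Windows SDK) supplies 3D joint positions;
# joint names, thresholds, and commands here are assumptions.

def gesture_command(skeleton):
    """Turn a dict of joint positions (metres, y pointing up) into a command.

    Right hand raised above the head -> "forward";
    right hand held out to the side  -> "turn";
    otherwise                        -> "stop".
    """
    hand = skeleton["hand_right"]
    head = skeleton["head"]
    shoulder = skeleton["shoulder_right"]
    if hand[1] > head[1]:            # hand above head
        return "forward"
    if hand[0] - shoulder[0] > 0.4:  # hand well out to the side
        return "turn"
    return "stop"

skeleton = {"head": (0.0, 0.6, 2.0),
            "shoulder_right": (0.2, 0.4, 2.0),
            "hand_right": (0.25, 0.9, 2.0)}
print(gesture_command(skeleton))  # "forward" - the hand is above the head
```

This is the step that raw depth data alone couldn't give us: without the SDK's skeletal tracking, there are no `hand_right` or `head` coordinates to compare in the first place.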

Overall I have thoroughly enjoyed my time at DevXS and I would definitely love to come to another developer conference. It was great fun just being able to do what I love to do while listening to different talks on rather interesting subject matters. If I do come to another conference I am going to try and be a bit more on form with my programming and maybe have a rapid development language at my disposal.

While Java can be a useful language, it is not very useful for prototyping, as it requires so much skeleton code to get anything up and running. Something like Python, Ruby, or Perl would be much more useful.

The thing I enjoyed most about my time here has to be meeting people who shared similar interests to me. Being able to just sit down and write code in a group is great fun.

The thing I disliked was probably my need to sleep on the Saturday. I really wanted to go the whole night, but the need to drive home today made the risk of not sleeping too great. The mats we got to sleep on were really uncomfortable, and my back was aching like mad in the morning. However, I did manage to get some kip, which was good.

I hope they host more of these in the future. Coming to DevXS has taught me a lot and generally it got me back to the reason I got into coding in the first place. The pure enjoyment of just sitting down and programming stuff. Making things. Breaking things. Just generally finding solutions and problems.

The best leaps in innovation do not come from the 'Eureka' moments but from the 'F—! That isn't supposed to happen!' moments. DevXS to me encompasses everything great about development: not the Eureka, but the 'That's interesting… why did that happen?'

The Roomba Moves!

Posted on November 12th, 2011 by John Christopher Murray

The aim

Posted on November 12th, 2011 by John Christopher Murray

Our aim is to control an iRobot Create (a Roomba robot vacuum cleaner without the cleaning bit) using voice control, mind control and gesture control. If we get time we'd also like to get a Parrot AR.Drone to fly above the robot and follow it, and to teach a Kondo humanoid robot some gestures.

For voice input we'll be using Google's Android on a smartphone and its voice recognition API. For mind control we are using the MindWave brain-computer interface. For the gestures we'll be using a Microsoft Xbox 360 Kinect.

Hello world!

Posted on November 12th, 2011 by John Christopher Murray

Welcome to University of Lincoln Blogs. This is your first post. Edit or delete it, then start blogging!