It's second nature for us to follow an airplane across the sky, or to walk around a rock we see in our path. It's not so easy for robots – you just have to watch $16,000 robots play football to realise how hard it is for them to kick a rolling ball. In contrast, our brains handle streams of visual information seamlessly, picking out obstacles and navigating us around them.

So how do we make robot brains more like ours? One way might be to change the type of processor they use. Until now, robots have always been fitted with central processing units (CPUs), just like most PCs. Such units are very good at crunching small streams of data fast, but they can only do one thing at a time.

In contrast, graphics processing units (GPUs), which are heavily used in supercomputers and gaming, can take in much larger streams of data and process several of them at once. This is closer to how the human brain works: even though we handle some tasks millions of times more slowly than a computer does, the amount of information our brains can absorb is vast. Until quite recently, though, GPUs have been too big and expensive to use in robots.
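The serial-versus-parallel contrast above can be sketched in a few lines of Python. Everything here is invented for illustration (the depth values, the threshold, the function names): the loop visits one pixel at a time, CPU-style, while the NumPy version applies one comparison to every pixel at once, which is the data-parallel style GPUs are built around.

```python
import numpy as np

# Hypothetical 4x4 "depth image" from a rover's camera:
# values below the threshold count as obstacles.
depth = np.array([
    [9.0, 8.5, 1.2, 9.1],
    [8.8, 0.9, 8.7, 9.0],
    [9.2, 8.9, 8.6, 1.1],
    [8.4, 9.3, 8.8, 9.0],
])
THRESHOLD = 2.0

def obstacles_serial(img):
    """CPU-style: examine one pixel at a time, in order."""
    hits = []
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] < THRESHOLD:
                hits.append((r, c))
    return hits

def obstacles_parallel(img):
    """GPU-style: one comparison applied to all pixels at once."""
    rows, cols = np.nonzero(img < THRESHOLD)
    return list(zip(rows.tolist(), cols.tolist()))

print(obstacles_serial(depth))    # [(0, 2), (1, 1), (2, 3)]
print(obstacles_parallel(depth))  # same obstacles, found in one pass
```

On a CPU running NumPy the two versions differ only in speed, but the second maps naturally onto thousands of GPU cores, each testing its own pixel simultaneously.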

Now a neuroscience and robotics start-up called Neurala in Cambridge, Massachusetts, has built robot brains using GPUs. It says they run roughly 10 times as fast as those built on CPUs.

I watched a software simulation at the Neuromorphics Lab at Boston University. A virtual rover is given a basic route across the surface of a digital Mars and it sets off without hesitation, spotting the rocks in its way as it goes.
