Super ‘Human Like’ Computer


Andrea Macdonald, founder of ideaXme, interviews Stephen Furber, ICL Professor of Computer Engineering at the University of Manchester and creator of SpiNNaker.

SpiNNaker is short for Spiking Neural Network Architecture. It is a super ‘human like’ computer, which has been in the planning for 20 years. It is a neural network computer, the current main focus of which is to advance knowledge of neural processing in the brain.

Steve Furber Professor Computer Engineering and Andrea Macdonald
Andrea Macdonald, founder of ideaXme Ltd (left), Photo Credit: ideaXme. Stephen Furber, ICL Professor of Computer Engineering at the University of Manchester (right), Photo Credit: the University of Manchester

Macdonald and Furber talk of SpiNNaker, which recently reached a milestone of 1 million processors.

Why is SpiNNaker considered to be a breakthrough in computer science?

Read, listen to, or watch the interview to discover what distinguishes this particular neuromorphic computer from all other neural networks. In short, why this super ‘human like’ computer is so exciting!

You’ll learn that SpiNNaker is more flexible than other neuromorphic computers and quite unlike traditional computers. You’ll also gain an understanding of how it avoids sending large amounts of information from point A to point B over a standard network, and how it instead mimics the parallel communication architecture of the brain by sending small amounts of information to many different destinations simultaneously.

SpiNNaker, a parallel computing platform

The SpiNNaker machine, a super ‘human like’ computer, is a parallel computing platform with a focus on neuroscience and the brain, robotics and computer science. Within neuroscience: it focuses on understanding how the brain works, a Grand Challenge of 21st century science. Here, SpiNNaker is intended to help neuroscientists unravel the mysteries of how the brain functions. The 1 million processors reached at the recent milestone are capable of simulating 1 billion simple neurons, or millions of neurons with complex structure and internal dynamics.

Within robotics: SpiNNaker could be a valuable research tool for researchers in robotics, who need mobile, low power and flexible computation. A small SpiNNaker board makes it possible to simulate a network of tens of thousands of spiking neurons, process sensory input and generate motor output, all in real time and in a low power system.

In computer science: SpiNNaker breaks the rules of traditional supercomputers that rely on deterministic, repeatable communications and reliable computation. SpiNNaker nodes communicate using simple messages, that is spikes, that are inherently unreliable. This break with determinism offers new challenges, but also the potential to discover powerful new principles of massively parallel computation.
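To make the spiking model concrete, below is a minimal sketch of how such a network is typically described for SpiNNaker through the PyNN interface commonly used with the machine. It is illustrative only: the population sizes, rates and weights are arbitrary, and exact module and class names vary between PyNN and sPyNNaker releases, so treat it as an assumption-laden outline rather than a definitive recipe.

    # Minimal, illustrative sketch of a spiking network described via PyNN
    # (the simulator-independent interface commonly used with SpiNNaker).
    # Module and parameter names are assumptions and vary between releases.
    import pyNN.spiNNaker as sim

    sim.setup(timestep=1.0)  # 1 ms timestep; SpiNNaker runs such models in real time

    # A Poisson spike source driving a small population of integrate-and-fire neurons.
    stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0), label="input")
    neurons = sim.Population(256, sim.IF_curr_exp(), label="excitatory")

    # Sparse random connectivity: each source neuron reaches many targets,
    # mirroring the one-to-many spike communication described above.
    sim.Projection(stimulus, neurons,
                   sim.FixedProbabilityConnector(p_connect=0.1),
                   synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

    neurons.record(["spikes"])
    sim.run(1000)  # simulate one second of biological time
    spikes = neurons.get_data("spikes")
    sim.end()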

Funding of SpiNNaker and the human journey and connections that made it possible

Further, learn here the behind-the-scenes details of how this project has been funded, whether Brexit poses any challenges to its progress, and the plans under way for creating SpiNNaker 2.

Discover, on a personal level, Professor Furber’s background and human journey to creating a super ‘human like’ computer that moves the human story forward. Moreover, hear of the deep human-to-human connections that Stephen Furber has made to move his innovations and the human story forward. Towards the end of the interview, despite all of the connections Stephen has already made, you can find out who else he would like to meet and why.

Super ‘human like’ computer

Below, the full text interview

Andrea Macdonald, founder ideaXme [00:05:54] Who are you?

Stephen Furber, ICL Professor [00:05:58] I’m ICL Professor of Computer Engineering, in the School of Computer Science at The University of Manchester in the UK. I was brought up in Manchester, in North Cheshire, South Manchester. I went to school at Manchester Grammar School and then I spent 10 years at Cambridge, going initially to study mathematics and then for a PhD in aerodynamics, and then I got drawn into the computer industry via a little company in Cambridge called Acorn. In my 10 years at Acorn, which was pretty much the whole of the 1980s, I was involved in some very exciting early developments in personal computing. So, I had a hand in the BBC Micro development, which introduced computing into schools, and following that we developed the first Acorn RISC Machine, or ARM, processor, which has since come to dominate the mobile and embedded world. After ten years at Acorn I moved to Manchester, and my title, computer engineering, suggests that my position is related to the hardware side of computers, so my core research interest is in microchip design, and for the last 20 years this has been focused on building large scale computing machinery for real time modelling of biological brains, to try to understand the grand scientific challenge of getting to the bottom of how the brain processes information.

Super ‘human like’ computer to model the human brain

Andrea Macdonald, founder ideaXme [00:07:35] Why do you specifically want to understand how the brain processes information?

Steve Furber ICL Professor Computer Engineering
Stephen Furber, ICL Professor Computer Engineering, at the University of Manchester, Photo Credit: the University of Manchester

Stephen Furber, ICL Professor [00:07:40] Well, because it is one of the great frontiers of science, and when people talk about scientific frontiers they are usually thinking about the unimaginably small, such as subatomic particles, or the unimaginably large, such as the Square Kilometre Array, which is looking into the origins of the universe and the far reaches of space. But we all carry this thing, which is very human sized, around inside our heads – if we take it out, we can hold it in our hands – and we don’t know how it works. And this seems to me to be a big gap in our knowledge, and a gap that’s probably very important to our future, because we know, for example, that diseases of the brain cost the developed economies more than cancer, heart disease and diabetes put together. So, medically it’s extremely important. And we lack treatments because we don’t understand it. So, we don’t know how to design drugs to interrupt the disease pathways. Also, of course, it impinges on my professional area of computing, and computing is increasingly moving towards artificial intelligence. I personally don’t think we’re there yet – what people call AI, I would prefer to badge as machine learning – but understanding more about the brain will certainly help us work out how to make computers a bit more intelligent.

Andrea Macdonald, founder ideaXme [00:09:10] Can you talk to us about the specifics of SpiNNaker?

SpiNNaker Super Computer
The SpiNNaker Supercomputer, Photo Credit: the University of Manchester

Stephen Furber, ICL Professor [00:09:16] So, the SpiNNaker computer is, in one sense, a supercomputer that’s been developed for brain modelling. We started the project about 20 years ago and we considered what we might be able to contribute, as computer engineers, to the scientific quest to understand the brain. And we wondered what we could do if we built a machine with about a million mobile phone processors in it and got them all working together, so that we could support large scale brain models. We realized very early on that even with a million processors we’re not approaching even 1 percent of the scale of the human brain. But we are at a scale where we could possibly model whole mouse brains, which would be a significant step forward. We built the machine around several principles, but we had to customize its architecture to the problem of brain modelling, because conventional computers really struggle to support large scale brain modelling in anything like real time, because the brain is hugely connected.

 [00:10:25] Each neuron inside your head – and you have about a hundred billion of them – connects on average to about 10,000 others. And this requires the communications inside the machine to send messages not from one place to one other place, which is typically what’s required in computers, but from one place to many thousands of other places. So, we built a bespoke communication infrastructure inside the machine, tuned to this problem.
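To see why this one-to-many delivery dominates the design, here is a back-of-envelope sketch. The neuron count and fan-out are the round figures used in the interview; the mean firing rate is an illustrative assumption, not a number from the conversation.

    # Back-of-envelope arithmetic: spike deliveries per second a real-time
    # model must sustain.  Neuron count and fan-out are the round figures
    # quoted in the interview; the firing rate is an assumed illustrative value.
    neurons = 1_000_000_000      # ~1 billion simple neurons on the million-core machine
    fan_out = 10_000             # average number of targets per neuron
    mean_rate_hz = 10            # assumed mean firing rate (not from the interview)

    spikes_per_second = neurons * mean_rate_hz
    deliveries_per_second = spikes_per_second * fan_out
    print(f"{spikes_per_second:.1e} spikes/s -> {deliveries_per_second:.1e} deliveries/s")
    # Each spike is a tiny packet, but it must be copied to ~10,000 destinations,
    # which is why a bespoke multicast fabric is needed rather than
    # conventional point-to-point networking.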

 

SpiNNaker’s connectedness

Andrea Macdonald, founder ideaXme [00:10:57] Could you talk about the specifics of the actual silicon chip and how the different components work? And the thing that is special about your chip in terms of connectedness: how the different components connect with each other?

SpiNNaker silicon chip
Image Credit: Stephen Furber

Stephen Furber, ICL Professor [00:11:16] Each SpiNNaker chip is about a square centimetre of silicon. That’s not an atypical size for today’s chips. It’s made on a fairly old silicon process technology, and so we can manufacture about a hundred million transistors on a chip. Those are divided into eighteen processing regions, and each processing region has a relatively small and energy-efficient ARM core – that is the processor I helped develop back at Acorn in the 1980s, though of course it has evolved through the hands of many thousands of people in the meantime. We use ARM cores with memory and various devices, and you can see 18 physical copies of that on the chip if you know what you’re looking at. In the centre of the chip there is the thing that implements this connectivity.

Andrea Macdonald, founder ideaXme [00:12:16] Is that the router?

Spiking Neural Network Architecture creates ‘human like’ processing

Stephen Furber, ICL Professor [00:12:16] Yes, it is the router in the middle of the chip. Biologically, neurons communicate principally by sending spikes. They go “ping” every so often. So, one thing to ponder is that all your thoughts are spatiotemporal patterns of pings flowing between the neurons in your head. A ping is in effect a pure asynchronous event.

 [00:12:39] So, there’s no information in its size or shape. The information is purely in its timing. So, in SpiNNaker each ping becomes a tiny packet of information. All that packet of information says is that neuron number 320 just went ping. We communicate that packet around the machine from chip to chip, across the machine which occupies 10 data center sized rack cabinets. And we deliver it in a small fraction of a millisecond to every destination to which it has to go. And that’s the real time requirement.

 [00:13:15] So, the key to SpiNNaker is this router in the middle of each chip. Routing is not new. It is the basis of the Internet; all the information that flows when you watch a video on your computer that’s coming from an Internet source is flowing in packets across the Internet. But typically, the requirements there are to get very high data rates using very big packets and the requirement in SpiNNaker is to achieve relatively modest data rates but using tiny packets because each packet is basically carrying one ping.

 [00:13:52] So, there’s a difference in the objective it is implemented for, and there is a difference in detail. This little 4 square millimetres in the middle of each SpiNNaker chip is the equivalent of one of those rack Ethernet switches that you see in the corner of many offices today. But obviously, we had to scale everything down and make it very simple and lightweight so that we could go from a rack cabinet into a few square millimetres of silicon, and that’s the key to SpiNNaker really.
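The routing idea can be sketched in a few lines of Python. This is a toy illustration of source-routed multicast, not the real hardware mechanism: an actual SpiNNaker router matches packet keys against key/mask entries in a small associative memory, whereas the dictionary and helper names below are invented for clarity.

    # Toy illustration of source-routed multicast: a spike packet carries only
    # the key of the neuron that fired, and each chip's router copies it onto
    # whichever inter-chip links and local cores its table lists for that key.
    # The table format and function names here are invented for illustration.
    from collections import namedtuple

    Spike = namedtuple("Spike", "key timestamp")  # no payload: the key is the information

    ROUTING_TABLE = {
        # key -> (links to neighbouring chips, local processor cores)
        320: (["north", "east"], [3, 7]),   # "neuron number 320 just went ping"
        321: (["south"], []),
    }

    def forward_to_neighbour(link, spike):
        print(f"forward key {spike.key} via {link} link")

    def deliver_to_core(core, spike):
        print(f"deliver key {spike.key} to local core {core}")

    def route(spike, table):
        links, cores = table.get(spike.key, ([], []))
        for link in links:
            forward_to_neighbour(link, spike)   # copy towards other chips
        for core in cores:
            deliver_to_core(core, spike)        # copy to ARM cores modelling neurons

    route(Spike(key=320, timestamp=0.0), ROUTING_TABLE)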

SpiNNaker Board
Image Credit: Stephen Furber

Andrea Macdonald, founder ideaXme [00:14:27] So, having reached the landmark moment of a million processors, you are now working on SpiNNaker 2. Could you take us through what that will entail in terms of improving on SpiNNaker 1?

Plans for SpiNNaker 2

Stephen Furber, ICL Professor [00:14:44] Sure. I should say of course that we haven’t abandoned SpiNNaker 1, because we’re still doing a lot of work in using it and we have a whole raft of collaborators through the European Union Human Brain Project, neuroscientists and computational neuroscientists, who are keen to map their models onto SpiNNaker 1, so a lot of what we’re doing is supporting and working with users on SpiNNaker 1. But alongside that, SpiNNaker 1 is relatively old technology. We’ve had the silicon since 2011. So, we’re developing a second-generation chip on a much more up-to-date semiconductor technology. And we’re developing this in collaboration with chip designers at TU Dresden, in Germany. The goal is to hit something around 10x performance per chip. That will allow us to put something perhaps of the scale of a small insect brain into a single SpiNNaker 2 chip, which could then control a drone or a small robot, obviously only with the kind of capabilities you’d find in an insect. But still, that’s quite a lot of neurons to control some autonomous device.

 [00:16:04] We’ve also learnt a lot in the development and use of SpiNNaker 1 about what’s important and that’s allowed us to tune the design of SpiNNaker 2 even more to the problem of modelling biological neurons in real time than we were able to tune SpiNNaker 1 because at that time we didn’t know what we now know after 10 years. There has been more learning since then that we are building into SpiNNaker 2.

Merging other technologies with SpiNNaker

Andrea Macdonald, founder ideaXme [00:16:32] Looking far into the future and accurately predicting what will happen is obviously difficult to do with anything and particularly so with the computing industry. But can you see a cross-fertilization of ideas with biotech for example?

Stephen Furber, ICL Professor [00:16:55] I think there’s quite a lot of intersection between brain science and biotech. Obviously, I mentioned earlier the issue that the pharmaceutical industries have in developing drugs for diseases of the brain. They can’t really make progress and they largely stopped investing for that reason because the way they develop drugs these days is they understand the disease pathway. They build models of it and then they design drugs to interfere with the disease pathway. But with the brain we don’t have the models. And so, I hope that SpiNNaker will contribute to the effort that will result in those models becoming available. Now of course we’re not the only project in that space at all. There are many people working to try and fill that gap. But at the point where we can have even partially representative models that can be used to see the effects of brain diseases, and to model the influence that drugs would have on those, then I think we’re in a very strong position for interactions with the pharma industry.

Andrea Macdonald, founder ideaXme [00:18:07] What about the manufacture or creation of human tissue? Is that an area where there could be some crossover, way into the future?

Stephen Furber, ICL Professor [00:18:20] Well I think that’s probably beyond my field of experience to comment on. I understand the tissue that makes microchips, not the sort of tissue that makes biology…

Andrea Macdonald, founder ideaXme [00:18:36] (interjecting) That makes brains?

Stephen Furber, ICL Professor [00:18:36] I know the function of tissue that makes brains, but I am only interested in its function. I’m not really interested in the details of the biology which a lot of the neuroscientists are very interested in.

Andrea Macdonald, founder ideaXme [00:18:51] My question was directed to understand whether you could see exponential technologists working with biotech to create a human brain in the future?

Stephen Furber, ICL Professor [00:18:57] We already know how to create a human brain in biological material. But that’s not what you’re referring to. On current technology, if we knew enough to build a complete model of the human brain it would occupy an aircraft hangar and require a small power station of its own to run it. So, it would in no sense be any kind of real threat to the biological brains as a competitor, nor would it be anywhere near anything we could build into a mobile humanoid robot that would look and walk and talk like a human. That’s still a very long way off our current technology and of course we don’t have the knowledge to know how to build such a model anyway.

 [00:19:52] There already is some progress – and it is going to get more dramatic – in brain prostheses. So not displacing the human brain but adding to it. This is already happening of course. One clear example is a cochlear implant, which is a treatment for certain forms of deafness. This is a piece of electronics which basically interfaces directly with nerves that go into the brain, and it is very effective. In a similar vein, retinal implants are emerging. They are not as fully developed as cochlear implants, but they are certainly going in the right direction to enable us to restore sight to people with certain forms of blindness – it is not a cure for all forms of blindness, but for certain forms where the problem is in the biological retina. It’s becoming increasingly possible to think about replacing the biological retina in the physical eyeball with a silicon retina that connects into the optic nerve to restore, at the moment, a fairly low-resolution form of vision, but that will improve over time.

 [00:21:17] Those kinds of activities are using the technology to compensate for a disease or deficiency and are reasonably uncontroversial. Of course, there’s also the question as to whether you could build an interface that would give you a memory prosthetic. We have very effective and reliable computer memory chips. Could you augment your brain’s capabilities through this sort of technology? That’s not an area of interest to me. It’s clearly an area where the technology is not far away from being able to deliver something. I think there’s a lot for the ethicists to think about. A lot of thought needs to go into that, but the technology is not far off being able to contemplate that kind of brain-capability-augmenting prosthesis. But again, this isn’t an area we’re working in. Our work is very much focused on the science problem of understanding what the basic functionality in the brain looks like and also understanding the differences between biological intelligence and machine intelligence – which has exploded over the last decade in parallel with our work – seeing where the differences and similarities are and where the opportunities for crossover might be. Is AI, as implemented by leading players such as Google DeepMind, telling us something about natural intelligence, or is it completely different? And if it is telling us something, are we picking up those lessons? As we learn more about the brain, can the lessons we learn be transferred into that machine intelligence space?

Modelling the human brain without some form of embodied cognition is meaningless

Andrea Macdonald, founder ideaXme [00:23:35] What is your opinion about embodied cognition?

Stephen Furber, ICL Professor [00:23:38] Once you’re modelling the brain above a certain level, it becomes almost meaningless to do it without some form of embodiment. At the level at which we’re currently doing it, that’s not the case. Currently, we have a very detailed biological model of a square millimetre of cortex. And that seems to be quite accurate in the sense that it reproduces measured firing rates and so on, but it doesn’t actually do anything. It doesn’t need an embodiment to be useful scientifically. But as soon as you start looking at brain subsystems, these subsystems always have sensory inputs at one end and muscle or actuator outputs at the other end. And they’re much more likely to behave sensibly and give meaningful results if they are connected to suitable sensors and actuators. Now of course they needn’t necessarily be physical. These could be virtual: because the model is virtual, it could be connected to virtual sensors which see a scene in a virtual environment and control movement in that virtual environment. So, the embodiment could be virtual.

Stephen Furber, ICL Professor [00:25:01] Often a purely virtual environment is removing some quite important real-world phenomena. And so, if you really want to understand how this brain system works in the real world you have to embody it in a physical robot of some sort.

Andrea Macdonald, founder ideaXme [00:25:17] I must thank Michael Mannino, neuroscientist and neuroscience ideaXme ambassador for that question before we move on. So, thank you Michael and thank you for your answer Steve!

 [00:25:33] If we can go back to SpiNNaker and to the things that make it special within the context of computing and other similar systems. Can you talk about the flexibility?

The flexibility of SpiNNaker

Stephen Furber, ICL Professor [00:25:51] Yes. And this really is a metric that positions SpiNNaker relative to the many other neuromorphic systems that have been developed around the world. So, for example, within The Human Brain Project we collaborate with a team at Heidelberg who are developing the BrainScaleS system, which is a very different approach with some similar objectives of modelling brain systems. The difference is that we model the behaviour of a neuron by implementing the equations that describe that behaviour in software on very small processors. They implement those equations by finding analogous electronic circuits such that, if you describe those circuits in equations, you get the same equations as in the biology. They talk about their system as a physical model. Effectively, the functionality has been mapped into an electronic substrate, while ours is much more of a programmed model.

 [00:27:00] Those are two ends of the neuromorphic scale, and in between there are devices such as the Loihi chip that Intel has developed. That is not a product; they’re not pushing it into the market as a product. They’re putting it out as a research prototype and encouraging academic groups to explore its capabilities. I think until we can prove that there are commercially viable capabilities for these chips there’s little point in productizing them. So, Intel is exploring that space, and 10 years earlier of course IBM built the TrueNorth chip, which is in a similar sort of middle space between our software approach and Heidelberg’s analog electronic circuit approach. And there are many other groups – for example, from ETH in Zurich to Stanford in the USA – who are developing chips in this space, and we sit at the very soft end. Every aspect of SpiNNaker is software configurable, so we can construct the routing tables that make the neural network connections in software. We can build the neural models in software. So, if a neuroscientist has a new observation that requires an adjustment to the equations, we can implement that easily, whereas Heidelberg has to redesign their circuits and remanufacture their chips. So, there are merits in flexibility. There are also costs. The cost of implementing an algorithm in software is roughly an order of magnitude more power consumption than implementing it in dedicated digital hardware, and the digital hardware is in turn less efficient than the analog hardware. So, the physical model approach has major advantages in efficiency. And all these approaches contribute and cause people to think about the problems in different ways. There is no single right answer at the moment. It’s a space that’s still being explored. And we don’t know if, or when, the right answer will emerge that will say this is the right way and all the others are less efficient.
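Furber’s point about implementing the neuron equations in software can be illustrated with a minimal leaky integrate-and-fire update, the kind of small per-timestep calculation each ARM core repeats for the neurons it hosts. The parameters and the use of floating point are illustrative assumptions; production SpiNNaker models use fixed-point arithmetic and a range of richer neuron types.

    # Minimal sketch of the "equations in software" approach: a leaky
    # integrate-and-fire neuron advanced with forward Euler.  Parameters and
    # floating-point maths are illustrative; real SpiNNaker models use
    # fixed-point arithmetic and richer neuron models.

    def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=1.0):
        """Advance membrane potential v (mV) by dt (ms); return (v, spiked)."""
        dv = (-(v - v_rest) + r_m * i_syn) * (dt / tau_m)  # leaky integration of input
        v += dv
        if v >= v_thresh:        # threshold crossing: emit a spike and reset
            return v_reset, True
        return v, False

    v, spike_times = -65.0, []
    for t in range(200):         # 200 ms with a constant driving current
        v, fired = lif_step(v, i_syn=20.0)
        if fired:
            spike_times.append(t)
    print("spike times (ms):", spike_times)

Changing the model, for example to match a new neuroscience observation, is then just a software edit, which is the flexibility Furber describes; a physical analogue model would instead need its circuits redesigned.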

Andrea Macdonald, founder ideaXme [00:29:28] Could you tell the audience about the principles of Moore’s Law?

Steve Furber ICL Professor Computer Engineering at the University of Manchester
Professor Stephen Furber, Photo Credit: the University of Manchester

Stephen Furber, ICL Professor [00:29:34] Moore’s Law, a principle that underpins all chip technology, is named after Gordon Moore, who was one of the founders of Intel, and in 1965 he published a paper where he talked about his observation that the number of transistors that Intel would manufacture on a chip was doubling every, I think, 18 months or two years. And this was at the core of advances in chip technology, and he confidently predicted in 1965 that this would continue for 10 years. So, Moore’s Law was a prediction made in 1965 with an end date of 1975, and of course it didn’t end in ’75. It went on and on. It’s now very much approaching its limits. But doubling every 2 years is an exponential process. If you double every 2 years, then every 20 years you get a thousand times more capability. In 40 years, a million times more capability. So, it’s a ferocious rate of growth. From the late ’70s, it’s really been the major planning tool of the whole global semiconductor industry. All their plans have been embodied in a vast document called the International Technology Roadmap for Semiconductors, which is about a thousand pages and tells you what the next 10 or 20 years are going to look like, where the major problems are and what solutions are emerging. It is a huge planning tool that has underpinned hundreds of billions of dollars of investment in driving chip technology from where we were in 1965 to where we are today.
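The compounding Furber describes is easy to check: doubling every two years multiplies capability by 2^(t/2) after t years, which is the source of the thousandfold and millionfold figures above.

    # Doubling every 2 years compounds to roughly 1,000x per 20 years
    # and 1,000,000x per 40 years.
    for years in (20, 40):
        factor = 2 ** (years / 2)
        print(f"{years} years -> ~{factor:,.0f}x more transistors per chip")
    # 20 years -> ~1,024x; 40 years -> ~1,048,576x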

The future of computing

Andrea Macdonald, founder ideaXme [00:31:33] And you concentrate on one particular area. Where do you see computing going? Do you see it evolving along a number of different approaches – cold computing, atomic? Where do you see it going?

Stephen Furber, ICL Professor [00:31:55] So, computing is approaching a threshold. Moore’s Law has gone on for half a century, but it can’t go on much longer. The reason it can’t go on much longer is that the major mechanism that’s been used to deliver it has been making transistors ever smaller, and they’re now getting very close to atomic scale. Clearly, the size of an atom is a fundamental limit. You can’t change that. There have been physical limits at many points in the development of Moore’s Law. In the ’90s, it was generally viewed that we would never get past one micron feature size, because the chips are made using a photolithographic technique and the wavelength of blue light is 390 nanometres. You can’t possibly manufacture anything that’s less than 2 or 3 wavelengths in size. That was obvious in the ’90s. We now routinely manufacture things which are a tiny fraction of the wavelength of the light used to draw them, and that was a huge transformation in understanding what you could do with lithography. You had to stop thinking about it as a printing plate and start thinking about the lithographic mask as a diffraction grating, and you can use logical masks and overlay diffractions to get very small features. It is a formidable achievement and involves solving computational inverse problems and other things. But we are clearly now getting to the limit. The cost of designing a chip has gone through the roof. The cost of building a factory for today’s chips has gone through the roof. Everything is becoming expensive as we approach the physical limits.

 [00:33:39] Everybody is asking: what are the alternatives to just driving Moore’s Law forward? Now, one answer to that question that has emerged in the last decade is this parallel development – or, as I described earlier, explosion – in machine learning systems, which are largely based on neural networks, but they’re not biologically realistic neural networks. They use an early form of continuous net that doesn’t spike. But they’ve taken over machine learning in a very dramatic way and they’ve shifted the centre of gravity of computing from the standard paradigm of high precision deterministic computation, where if you run the same program twice you expect identical answers, into this world of neural operations where things are a bit random, a bit unpredictable, where the calculations don’t really need to be that accurate most of the time. That’s creating an opportunity for people to come up with some exciting new architectures to support machine learning, because although you can do the training and inference on conventional machines, it is very expensive, and so only the very big companies can afford to do this as part of their mainstream business, because they have the data centres full of expensive computers to run it on. So, there are many companies emerging that are trying to democratize machine learning by introducing chip technology that makes it much cheaper to do.

 [00:35:28] That’s a relatively modest change, but it is quite significant economically. Alongside this, people are talking about the potential role that even more brain-like technology, neuromorphic technology such as SpiNNaker, might play in the future of computing. They see that as having the potential – though it’s not yet proven – to take machine learning to another level, to solve some of the problems that it currently can’t solve. It’s clear that biological systems learn continuously, while machine learning is done by having a huge, expensive training process and then a cheaper inference process. But once you’re in the inference phase you stop learning; you’re just using what you’ve already learnt, whereas biological systems keep learning, and that is attractive. Also, the thing that I frequently quote is that the famous Google image classification network had to be shown 10 million pictures of cats, after which it was very good at recognising cats, whereas my 2-year-old grandson, when he’d seen one cat, could recognise cats reliably for the rest of his life. So, there’s something different, though it is not a fair comparison, because the Google network starts off with its brain completely scrambled, whereas the 2-year-old has had 2 years to develop quite a complex model of the world in his head, into which cats fit rather nicely. But it does say there’s something different going on, and maybe we can find systems that don’t need huge amounts of data.

 [00:37:08] One of the things that made deep learning effective is big data. But biological systems don’t seem to need big data. So, maybe we can get back to small data and build effective learning systems. But these are not solved problems. Even more radical, of course, is the development of quantum computing. Whether that’s going to be anything more than a sort of expensive accelerator for very special problems is unclear. There’s been a lot of work on what you could do if you could make a quantum computer – in some ways a disproportionate amount, compared with the work that’s been going into working out how to make one. They are beginning to emerge. There are quantum machines available, although they’re not particularly spectacular in terms of what they can offer. I am a bit of a quantum sceptic. That’s partly because I don’t see them delivering within my career horizon, which is rapidly approaching. In the longer run they may have something interesting to offer.

Funding of the super ‘human like’ computer

Andrea Macdonald, founder ideaXme [00:38:20] Your work has been funded by The Human Brain Project (HBP). What impact does, or could, Brexit have on all of that?

Stephen Furber, ICL Professor [00:38:35] To correct you, the original development of SpiNNaker that put us in a position to join the HBP was funded by the UK Engineering and Physical Sciences Research Council (EPSRC). I’d like to acknowledge that. Without that funding we wouldn’t be in HBP. HBP is now the major source of funding. The machines were funded by EPSRC. HBP has been running for 6 years now.

 [00:39:12] We had all the silicon design and chips made in 2011. We were building boards, so we’ve continued scaling up the machine, and only last year did we get to our original target of a million processors in the machine. We’d had half a million online for the previous two and a half years, and people were still working out what to do with that.

 [00:39:51] So for the last 6 years we’ve had our software team supported by the Human Brain Project and the development of SpiNNaker 2. The reassurance we have on the Brexit matter is that any EU science program contracts which are entered into by the year 2020 will be underwritten by the UK government whatever the outcome of the Brexit negotiations. So, I don’t think our HBP funding is under any threat from Brexit, although it creates a huge atmosphere of uncertainty and of course it creates doubts in the minds of our partners and the European Commission. But I think our ability to continue to participate is not threatened by the funding position.

Andrea Macdonald, founder ideaXme [00:40:49] We spoke earlier about what makes the SpiNNaker system special: connectedness, the way in which the individual components connect, which you’ve described as the special thing. If I can take you into a new area, which is your human story, and ask you whether there were any special, rich connections with individuals in your past that helped you to move forward with your life and move forward with your innovations?

Rich connectedness that moved Steve Furber and his breakthroughs forward

Stephen Furber, ICL Professor [00:41:43] Obviously, there have been a number of important people in my life. Passing over family, which is always important I think, and school teachers: I did my PhD under the supervision of Shon Ffowcs Williams at Cambridge, who was a very interesting PhD supervisor. Everything he did, he did with great enthusiasm, and it didn’t matter whether it was the technical work you were talking about in a PhD supervision, or whether it was playing bowls on the Emmanuel Fellows’ lawn at lunchtime, he somehow managed to find excessive enthusiasm in all of these contexts, which was quite infectious. The person who’s had the biggest influence after that was Hermann Hauser, the founder of Acorn, who had an approach to creative engineering that inspired everybody in Acorn in the early years and underpinned the development of the BBC Micro and the ARM processor, in that he was always very close to the technical team. He didn’t technically lead the technical team; he was a sort of managing director of the company. But he was always around when there were interesting challenges to be faced, and of course it was his fault that I moved from aerodynamics into computing. In aerodynamics I had experience and qualifications, and in computing I had none of the above! He recruited myself and Sophie Wilson from the Cambridge University Processor Group, which was just a society of people who built computers for fun, and formed the company Acorn around us. So, I think he was probably a big influence; my Acorn years certainly set the direction for the rest of my career.

Andrea Macdonald, founder ideaXme [00:44:04] You were obviously a shining star at University and Acorn from an academic perspective. We’re talking about neurons and spiking. How important was that emotional spark with those 2 individuals that you mention, that emotional spike? Or, was it just purely intellectual? They saw that you were possibly head and shoulders above everybody else at your time. Or was it a combination of the two things?

Stephen Furber, ICL Professor [00:44:58] I wouldn’t agree with the description of being head and shoulders above our colleagues. Hermann was very good at finding people. I think the team that we worked with across Acorn were all very capable and very committed to working together. In the case of myself and Sophie, there was a kind of creative tension. I mean, the relationship wasn’t always smooth. We often disagreed. I had the more practical and more human skills, and Sophie was extremely good – I mean, she has a sort of memory that remembers every detail of everything. It is quite terrifying to work with people like that. It wasn’t a smooth and gentle ride. There was a lot of creative tension. And several of the other people that we worked with went on to lead ARM and grow it to its global influence today, people like Tudor Brown and Mike Muller. Mike was also a great source of creative tension. He was Acorn’s antidote to groupthink. Whatever everybody else in the room was agreeing on, he would disagree with, and this is actually very useful. It’s dangerous to have a room of people who all agree. So, there were a lot of very good people. Sophie and I happened to be in at the start, and there was Chris Turner, who was also chief engineer early on and a major contributor to the BBC Micro success story. There were many others. One of my views on human heroes is that they are nearly always inappropriately badged as such, because if you take any individual out of the picture you find it doesn’t really change much. If Einstein hadn’t done general relativity, then I think some other guy was 6 months behind him. So, most human progress I think is cultural, rather than individual, and you need the right type of people in the right place and then the ideas emerge.

Andrea Macdonald, founder ideaXme [00:47:33] I’d like to ask you whether you are currently mentoring anybody, maybe someone from the younger generation who through having that connection with you is moving their story forward and in turn potentially the entire human story forward. Would you say that one to one mentoring and connecting in that way, whether it is in an official capacity or in an informal capacity is very powerful?

Stephen Furber, ICL Professor [00:48:27] I have a number of roles which are officially described as mentoring. I am a professor, so I have PhD students that I supervise which is a blend of mentoring and encouraging and cajoling and all those other things. I think that’s probably the most direct mentoring that I do but I am also a fellow of the Royal Academy of Engineering and I have mentored quite a number of Royal Academy of Engineering research fellows over the years. Some of whom have gone on to great things.

Andrea Macdonald, founder ideaXme [00:49:10] Are you allowed to mention any names?

Stephen Furber, ICL Professor [00:49:10] Maire O’Neill, at Queen’s University Belfast, was my first Royal Academy mentee, and she went on to become the youngest Professor of Electrical Engineering at Queen’s and has won a number of awards in engineering. Her subject area is quite different from mine. She implements hardware for cryptography and computer security. Whether her star has risen because of, or despite, my mentoring is very difficult to say. I very much doubt my mentoring was a major factor, as I think her success was going to happen anyway. I have mentored 4 or 5 people that way. As far as mentoring is concerned, PhD students are the most direct; an occasional meeting is not a very close mentoring role, whereas with PhD students you have a substantial involvement. I find that works best. I have a research group of about 20 people, so it is not the traditional single student with single supervisor relationship. Most of the time we’re building things, and students are getting just as much, if not more, support from the rest of the team as they receive directly from me. My role in this is primarily to bring in the funds so that I can pay the bills, and secondly to keep the team working reasonably smoothly and pointing in the right direction, which creates a context in which PhD students can flourish.

Andrea Macdonald, founder ideaXme [00:51:23] Your work and your team’s work have attracted an enormous amount of world attention, so you have the opportunity to meet the great and the good. Out of everybody that you could meet, who would you like to meet and what question would you like to ask them?

Who would the creator of SpiNNaker like to meet?

Stephen Furber, ICL Professor [00:51:47] I was given advance warning of this question. I’m still not entirely sure how to answer. You’ve now posed it as though the individual has to be work related.

Andrea Macdonald, founder ideaXme [00:51:57] No, not necessarily.

Stephen Furber, ICL Professor [00:52:00] There aren’t any individuals that I am particularly desperate to meet, but somebody who’s been a hero of mine since an early age is Justin Hayward, who’s the lead guitarist and singer with the Moody Blues and whose music has accompanied me through my life since my mid-teens. I particularly admire his sense of harmony and controlled motion. I’ve seen him perform several times, but I’ve never met him personally, and I think if I had a chance to talk with him, I’d like to understand where the ideas for his songs come from.

Andrea Macdonald, founder ideaXme [00:52:54] Steve Furber thank you very much for your time. It’s been an absolute pleasure speaking to you.

@FurberSteve

Andrea Macdonald, Founder of ideaXme
Andrea Macdonald, Founder of ideaXme

Credits: Andrea Macdonald interview video, text, and audio.

Follow ideaXme on Twitter: @ideaxm

On Instagram: @ideaxme

On YouTube: ideaxme

Find ideaXme across the internet including on iTunes, SoundCloud, Radio Public, TuneIn Radio, I Heart Radio, Google Podcasts, Spotify and more.

ideaXme is a global podcast, creator series and mentor programme. Our mission: Move the human story forward!™ ideaXme Ltd.
