One of the most difficult problems in creating intelligent machines – especially at the edge – is taking a behavior or functionality that has been designed or trained in one environment and making it work in another. Your robot controller, vision system, or neural network may work perfectly until the temperature, light level, or radiation exposure changes, and then rapidly degrade or fail. Back in the 1950s, researchers realized that the same process that made life so successful – evolution – could be used to optimize all kinds of engineered systems.
With increasing momentum behind building intelligent machines, there has been an uptick in evolutionary research applied to this area. What’s important here is that the focus is not on finding the most efficient solutions but on finding the most robust ones: robust against noise, variability, and failure within the hardware where they will be implemented. This property will be crucial to the success of many artificial intelligence (AI) technologies, especially those deployed in hostile environments such as space, and those using emerging analog technologies such as memristors.
In engineering and computation, the concept of evolution is much the same as it is in biology. Essentially, a set of initial configurations – potential solutions to the problem to be solved – is defined within a set of constraints (what components can be used, how they can connect to each other, and so on). These are made to perform a task, such as controlling a robot with a sensor so that it avoids obstacles. The success of each solution is measured using some fitness function, and the worst performers are then eliminated.
Each solution – good and bad – is represented by a genetic code that determines its form, wiring, structure, and anything else that is allowed to change with evolution. The more successful ones are either bred (their genetic codes are somehow combined), mutated (some of the code is randomly changed), or both. This is repeated many times, essentially searching the state space for increasingly successful configurations. This happens without the need for insight from a designer. One of the advantages of this approach is that, as in nature, the seemingly negligible benefits of poorly-performing solutions can be bootstrapped into major advantages later on.
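The loop described above – a random initial population, fitness-based culling, breeding, and mutation – can be sketched in a few lines of Python. The bit-string genome and the count-the-ones fitness function below are placeholder assumptions standing in for a real task score (say, how far a robot travels without hitting an obstacle); it is the structure of the loop that matters.

```python
import random

random.seed(0)  # for reproducibility

GENOME_LEN = 20
POP_SIZE = 30

def fitness(genome):
    # Toy fitness: maximize the number of 1s in the genome.
    return sum(genome)

def crossover(a, b):
    # "Breeding": single-point crossover of two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=50):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]  # eliminate the worst half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

No designer insight is needed: the search discovers good genomes purely through selection pressure, and a partially fit genome carries building blocks that crossover can combine into better ones later.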
This is not a new thing, even in robotics. One of the most compelling examples of evolving AI was produced back in 1994 by Karl Sims. At the time, Sims was working for Thinking Machines, which gave him access to one of the most powerful supercomputers of that time.
In a simulated environment, he evolved virtual creatures (including body morphology, sensors, and controllers) that learned – through survival of the fittest – to swim, walk, and competitively grab for an object (see Figure 1).
Check out a video of the evolved creatures below. Though this project was virtual, it showed the potential for using the approach to evolve not just algorithms but hardware.
This work was interesting in three ways. First, it represented the first fully evolved hardware robot controller. Second, it showed how evolution could exploit subtle elements of the structure to complete the target task as efficiently as possible – but such solutions were (inevitably) hardware dependent, meaning they wouldn’t work, or wouldn’t work well, when replicated on other, seemingly identical machines. Third, as the team demonstrated a few years later, evolution could be the solution to its own problem, as long as the variability of the hardware was built into the process.
More recently, researchers from the same group have started working in this area again. From digital FPGAs in the 1990s (albeit non-clocked ones, which gave them continuous dynamics), they moved to evolving controllers in a 16 × 16 fully analog field-programmable transistor array in 2020. By building sufficient noise and variability into the simulators in which the controllers evolved, they were able to give a low-spec robot with poor sensors sophisticated obstacle-avoidance behaviors.
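One way to fold that variability into the process is to score each candidate across many randomly perturbed device models and judge it by its worst-case score, so solutions that exploit one device’s quirks are penalized. The proportional-controller toy task below is an invented stand-in, not the group’s actual simulator; the noise and offset ranges are assumptions.

```python
import random

random.seed(1)  # for reproducibility

def run_trial(gain, sensor_noise, offset):
    # Toy task: a proportional controller should drive an error signal
    # toward zero despite a noisy, mis-calibrated sensor.
    error = 1.0
    for _ in range(50):
        reading = error + random.gauss(0, sensor_noise) + offset
        error -= gain * reading * 0.1
    return -abs(error)  # higher (closer to 0) is better

def robust_fitness(gain, n_variants=20):
    # Each variant models a different physical device: a random noise
    # level and calibration offset drawn from assumed ranges.
    scores = [run_trial(gain,
                        sensor_noise=random.uniform(0.0, 0.3),
                        offset=random.uniform(-0.2, 0.2))
              for _ in range(n_variants)]
    return min(scores)  # judge by the worst-case device
```

A controller gene (here just a gain) that only works on one idealized device gets dragged down by its worst variant, pushing evolution toward behaviors that transfer across hardware.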
Within the neuromorphic engineering community, Katie Schuman and her colleagues at Oak Ridge National Laboratory and the University of Tennessee have been working for years to evolve optimized neural networks. In 2020, they published a paper, “Evolutionary Optimization for Neuromorphic Systems”, showing how they could create systems that work within the normal constraints of hardware, such as limited weight resolution or delays in synapses and neurons. However, they pointed out that – with further development – the type of results presented could “… be used as part of a neuromorphic hardware co-design process in developing new hardware implementations.”
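A minimal version of that constraint-aware evaluation: quantize candidate weights to the assumed on-chip resolution before scoring them, so evolution can only reward solutions that survive the conversion. The 15-level weight range, the tiny step-activation network, and the XOR task below are all illustrative assumptions, not the paper’s actual setup.

```python
LEVELS = 15   # assumed resolution, roughly 4-bit signed weights
W_MAX = 2.0   # assumed programmable weight range

def quantize(w):
    # Clip to range, then snap to the nearest representable level.
    step = 2 * W_MAX / (LEVELS - 1)
    return round(max(-W_MAX, min(W_MAX, w)) / step) * step

def forward(weights, x):
    # Tiny 2-2-1 network with step activations; weights is a flat list:
    # [w00, w01, b0, w10, w11, b1, v0, v1, c]
    q = [quantize(w) for w in weights]
    h0 = 1.0 if q[0] * x[0] + q[1] * x[1] + q[2] > 0 else 0.0
    h1 = 1.0 if q[3] * x[0] + q[4] * x[1] + q[5] > 0 else 0.0
    return 1.0 if q[6] * h0 + q[7] * h1 + q[8] > 0 else 0.0

def fitness(weights):
    # Fraction of XOR patterns classified correctly *after* quantization.
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return sum(forward(weights, x) == y for x, y in cases) / 4
```

Because the fitness function only ever sees the quantized network, a candidate whose behavior depends on fine-grained weight values scores no better than chance, and evolution converges on solutions the constrained hardware can actually realize.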
Olga Krestinskaya and her colleagues have been working on exactly that, with a specific focus on analog chips. Their co-design process takes into account not just the known limitations of the specified technology, but also the inherent (but unknown) variability of the underlying devices. The team focuses particularly on the properties of memristors, an enabling technology that will never have the inherent device uniformity of digital memories (see Figure 2).
A few months ago, Žiga Rojec and his colleagues from the University of Ljubljana in Slovenia showed that you can take this further still by accounting not just for non-idealities or variability, but for outright failure. One of the stand-out applications of early neuromorphic systems, particularly analog ones, could be satellites: size, weight, and power are critical, but price is not. Such systems must be sufficiently tolerant of the radiation and vast temperature swings of space to work well. Rojec’s research shows that, through evolution, an analog chip can be designed to produce satisfactory results even in the presence of short-circuit failures.
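The idea can be caricatured in a few lines: score each candidate by its worst case over a set of injected faults. Here a failure is modelled crudely as one connection’s contribution being lost entirely, and a weighted sum stands in for an analog netlist – both are assumptions for illustration, not Rojec’s actual circuit models.

```python
def output(weights, inputs):
    # Toy "circuit": a weighted sum standing in for an analog netlist.
    return sum(w * x for w, x in zip(weights, inputs))

def fault_tolerant_fitness(weights, inputs, target):
    # Exhaustively fail each connection in turn (modelled crudely as
    # its contribution dropping to zero) and score by worst-case error.
    worst_err = 0.0
    for i in range(len(weights)):
        faulty = list(weights)
        faulty[i] = 0.0
        worst_err = max(worst_err, abs(output(faulty, inputs) - target))
    return -worst_err  # higher (closer to 0) is better

# A design that spreads the computation across redundant paths scores
# better than one funneling everything through a single connection:
redundant = [0.25, 0.25, 0.25, 0.25]
single_path = [1.0, 0.0, 0.0, 0.0]
```

With all inputs at 1 and a target of 1.0, the redundant design loses only a quarter of its output to its worst fault, while the single-path design loses everything – exactly the selection pressure that drives evolution toward fault-tolerant structures.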
Perhaps it’s inevitable that a bio-inspired technology should find its progress enabled by a bio-inspired optimization technique. Time will tell.