Endless regression: hardware goes virtual on the cloud

In the summer of 2018, professors John Hennessy and David Patterson declared a glorious future for custom hardware. The pair had picked up the Association for Computing Machinery's Turing Award for 2017 for their roles in the development of the reduced instruction set computer (RISC) architectural style in the 1980s.

Towards the end of their acceptance speech, Patterson pointed to the availability of hardware in the cloud as one reason why development of custom chips and the boards they would be soldered onto is getting more accessible. Cloud servers can be used to simulate designs on-demand and, if you have enough dollars to spend, you can simulate a lot of them in parallel to run different tests. If the simulation does not run quickly enough, you can move some or all of the design into field-programmable gate arrays (FPGAs). These programmable logic devices won't handle the same clock rates as a custom chip but they might only be five or ten times slower, particularly if the design you have in mind is some kind of sensor for the internet of things (IoT), where cost and energy are more important factors than breakneck performance.

"The great news that's happened over the last few years is that there's instances of FPGAs in the cloud," said Patterson. "You don't have to buy hardware to do FPGAs: you can just go to the cloud and use it. Somebody else sets it all up and maintains it."

A second aspect of this movement is being driven by projects such as OpenROAD, organised by the US defence agency DARPA. This aims to build a portfolio of open-source hardware-design tools that let smaller companies create chips for their own boards instead of relying on off-the-shelf silicon. In principle, that would make it easier to compete with bigger suppliers, who have traditionally been able to deploy customisation to improve per-unit costs.

For more than a decade, those bigger silicon suppliers have used simulation to deal with one of the main headaches in custom-chip creation. Getting the hardware to boot up and run correctly is one thing. Getting the software to run often winds up being the more expensive part of the overall project. As debugging software for a chip that doesn't exist yet is tricky, they turned to simulation to handle that. Even if the hardware is not fully defined, it is often possible to use abstractions to run early versions of the software, which is then gradually refined as the details become clearer. The old way of handling that was to use some hardware and FPGA combination that approximated the final design and have it running on a nearby bench. That is changing: it's no longer just hardware designers running simulations, it's increasingly the software team.

"When we started 12 or 13 years ago, everyone was doing simulation for hardware to get the SoC to work," says Simon Davidmann, president of Imperas, a company that creates software models of processor cores. "We founded Imperas to bring these EDA technologies into the world of the software developers. We learned with Codesign [Davidmann's previous company] that software development would become more like the hardware space."

A second trend is the pull of the cloud. The designs may run on models that trade accuracy for speed on a cloud server processor, on a model loaded into an FPGA, or on a mixture of both. As Imperas and others can tune their models for performance by closely matching the emulated instructions to those run by the physical processor, a typical mixture is to have a custom hardware accelerator and peripherals emulated in the FPGA and the microprocessors in fast software models.
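To give a flavour of what such a fast software model does, the toy sketch below emulates a handful of instructions for an invented accumulator machine in plain Python. It is purely illustrative, nothing like a production Imperas or Arm model, but instruction-accurate models are built around the same fetch-decode-execute loop, heavily optimised and matched to the real instruction set.

```python
# Toy instruction-set simulator: a minimal sketch of the fetch-decode-execute
# loop at the heart of a software processor model. The instruction set here
# is invented for illustration; it is not an Arm or Imperas model.

def run(program, max_steps=1000):
    """Execute a list of (opcode, operand) tuples on a toy accumulator machine."""
    acc = 0              # accumulator register
    pc = 0               # program counter
    memory = [0] * 256   # tiny flat data memory
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, arg = program[pc]   # fetch and decode
        pc += 1
        if op == "LOADI":       # load immediate into the accumulator
            acc = arg
        elif op == "ADDI":      # add immediate
            acc += arg
        elif op == "STORE":     # store accumulator to memory[arg]
            memory[arg] = acc
        elif op == "JNZ":       # jump to arg if accumulator is non-zero
            if acc != 0:
                pc = arg
        else:
            raise ValueError(f"unknown opcode {op}")
    return acc, memory

# Count down from 5: the loop at address 1 decrements until the accumulator hits zero.
program = [("LOADI", 5), ("ADDI", -1), ("JNZ", 1), ("STORE", 0)]
acc, mem = run(program)
print(acc, mem[0])   # -> 0 0
```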

Davidmann says the trend towards the use of more agile development approaches in the embedded space is driving greater use of simulation. Even hardware design, which does not seem a natural fit for a development practice that relies on progressive changes to requirements and implementations, has adopted these approaches. One of the main reasons this works is the extensive use of automated testing. Whenever code, whether it's a hardware description or software, gets checked in, the development environment runs a bunch of quick tests, with more scheduled for the night. If the new code triggers new bugs, it gets sent back. If not, the developer can continue.
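As a rough sketch of that check-in gate, the Python script below runs a handful of quick simulation smoke tests and rejects the change if any of them fail. The `./run_sim` command and the test names are placeholders, not any real tool's interface.

```python
# Minimal check-in gate: run quick simulation smoke tests, fail the build on a regression.
# "./run_sim" and the test names are placeholders for whatever simulator and
# test suite a project actually uses.
import subprocess
import sys

QUICK_TESTS = ["boot_rom", "uart_loopback", "timer_irq"]   # hypothetical smoke tests

def run_test(name: str) -> bool:
    """Run one simulation test; a non-zero exit code counts as a failure."""
    result = subprocess.run(["./run_sim", "--test", name, "--timeout", "60"])
    return result.returncode == 0

def main() -> int:
    failures = [name for name in QUICK_TESTS if not run_test(name)]
    if failures:
        print(f"Check-in rejected, failing tests: {', '.join(failures)}")
        return 1
    print("Quick tests passed; longer regressions are left for the nightly run.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```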

This continuous integration and test relies on servers being available and ready to run the emulations and simulations whenever needed. That, in turn, points to the cloud, as it is easy to spin up processors for a battery of tests on demand. Even once the target hardware has finally come back from the fab, simulation still gets used. Though one way to test in bulk on finished hardware is to run device farms, basically shelves stacked with the target boards and systems, they present maintenance issues. "They are always breaking and often have the wrong version of the firmware," Davidmann says. "Moving to continuous integration doesn't work that well with hardware prototypes."

You can quickly push new versions to simulations in the cloud and turn them off and on again virtually. And, funds allowing, you can run many of them in parallel, which can be vital if a team has to meet a shipping deadline with shipment-ready firmware.
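The sketch below illustrates that fan-out with a local process pool, reusing the placeholder simulator command from the earlier sketch; in a real cloud set-up each job would typically be dispatched to its own freshly provisioned instance or container.

```python
# Fan a regression suite out in parallel. Locally this uses a process pool;
# on a cloud service each job would land on its own instance or container.
# The simulator command and test names are placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor

REGRESSION_TESTS = [f"regression_{i:03d}" for i in range(200)]   # hypothetical suite

def run_test(name: str) -> tuple:
    result = subprocess.run(["./run_sim", "--test", name], capture_output=True)
    return name, result.returncode == 0

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=32) as pool:   # "funds allowing"
        results = list(pool.map(run_test, REGRESSION_TESTS))
    failed = [name for name, ok in results if not ok]
    print(f"{len(results) - len(failed)}/{len(results)} passed")
    if failed:
        print("Failures:", ", ".join(failed))
```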

Now, the use of simulation is moving even further into the lifecycle, as evidenced by Arm's launch of its Virtual Hardware initiative last week. The core technology underneath this is the same as that used to support conventional chip designs, including fast processor models similar to those provided by Imperas and others.

In its current form, Arm Virtual Hardware is limited in terms of the processors it supports. The off-the-shelf implementation that's in a free beta programme covers just one processor combination: the recently launched Cortex-M55 and its companion machine-learning accelerator. The presence of the accelerator provides much of the motivation for the virtual-hardware programme.

Stefano Cadario, director of software product development, said at Arm's developer summit last week that one of the driving forces behind the programme is the steep increase in the complexity of software, driven by several factors: managing security, over-the-air updates and machine learning.

With so much of an embedded device's interaction being with cloud servers that deliver software updates as well as authenticate transactions, it makes sense to be able to run and debug that software in the cloud. But machine learning presents a situation where updates will be far more frequent than they are today. The models will typically be trained off-device on cloud servers, as the target hardware does not have the performance or raw data to do the job itself. Potentially, devices could get updated models every night, though the frequency will most likely be a lot lower than that.

Development teams need to be sure that a new model won't upset other software when loaded, which points to regression testing being used extensively on simulated hardware in the cloud. That automated testing potentially makes it possible for the machine-learning models to be updated by specialist data scientists without the direct involvement of software writers, unless there is a big enough change to warrant it. The result is a situation where Arm expects customers to routinely maintain cloud simulations for years, through the entire lifecycle of the production hardware.
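One plausible shape for such a regression check, sketched below under assumed file names, shapes and thresholds, is to confirm that a retrained TensorFlow Lite model still presents the input/output interface the firmware was built against and still clears an agreed accuracy bar on a fixed validation set; on simulated hardware the same kind of test would run against the virtual device rather than the build host.

```python
# Pre-deployment regression check for a retrained model: confirm the candidate
# .tflite file still presents the input/output interface the firmware expects
# and still clears an accuracy bar on a fixed validation set. File names,
# shapes and the threshold are illustrative assumptions.
import numpy as np
import tensorflow as tf

EXPECTED_INPUT_SHAPE = [1, 96, 96, 1]   # what the application code was built against
EXPECTED_OUTPUT_SHAPE = [1, 10]
MIN_ACCURACY = 0.90                     # assumption: bar agreed with the data scientists

interpreter = tf.lite.Interpreter(model_path="candidate_model.tflite")
interpreter.allocate_tensors()
in_detail = interpreter.get_input_details()[0]
out_detail = interpreter.get_output_details()[0]

# 1. Interface check: a shape change would break the firmware that feeds the model.
assert list(in_detail["shape"]) == EXPECTED_INPUT_SHAPE, "input shape changed"
assert list(out_detail["shape"]) == EXPECTED_OUTPUT_SHAPE, "output shape changed"

# 2. Behaviour check: run the fixed validation set and measure accuracy.
samples = np.load("validation_inputs.npy")   # shape (N, 96, 96, 1)
labels = np.load("validation_labels.npy")    # shape (N,)
correct = 0
for sample, label in zip(samples, labels):
    interpreter.set_tensor(in_detail["index"], sample[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    prediction = np.argmax(interpreter.get_tensor(out_detail["index"])[0])
    correct += int(prediction == label)

accuracy = correct / len(labels)
print(f"accuracy {accuracy:.3f}")
assert accuracy >= MIN_ACCURACY, "candidate model fails the regression bar"
```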

As with existing virtual-processor models, the Arm implementation makes it possible to gauge performance before a chip has made it back from the fab. According to Cadario, Cambridge Consultants used an early-access version to test the software for a medical device, and Google's TensorFlow team optimised the machine-learning library for the accelerator earlier in the development cycle than they would normally.

Arm has not yet said which, if any, other processors would be added to the programme. However, it seems likely that it will not go outside the company's own portfolio. "Where we are different is that we support heterogeneous platforms," Davidmann says. "We've got some of the largest software developments using our stuff because it can support heterogeneous implementations."

There will still be a place for prototype hardware, not least because field trials of ideas will still have to take place before suppliers commit to hardware. But if there is a push towards the use of more custom hardware, it will be cloud simulation that helps drive it.
