October 14, 2020

Three open source Sonos projects: efficient embedded development in Rust

Mathieu Poumeyrol

Distinguished Software Engineer, Sonos Voice Experience

Thibaut Lorrain

Senior Software Engineer, Sonos Voice Experience


The three projects introduced in this post were created by Snips, a French startup specialised in embedded speech understanding, which, following a 2019 acquisition, now makes up Sonos’ voice experience team. The Sonos team continues to develop and maintain these open source projects.

In a young software ecosystem, libraries and tools are often missing once you venture outside the ecosystem's initial focus. We ran into that phenomenon a few times, and on a few occasions we tried to rise to the challenge and contribute back to the ecosystem.

As early adopters of the Rust language, we got frustrated with the practical difficulties of running tests and benches on mobile phones and other connected devices. This led to the development of dinghy. Running neural network models on devices, in a world where the big players prefer the cloud, pushed us to develop tract. Finally, targeting embedded computers like the Sonos devices requires a lot of interfacing with other native software. We developed ffi-convert because we wanted that to be easier, both for us and for other teams.

Dinghy, practical on-device tests and benches


Sonos is a software company: besides the hardware, what makes the Sonos system special is software. The software stack spans the cloud, mobile devices, and of course the speakers and components themselves. On-device software has to be cross-compiled: the small computers at the core of our players can run the software, but building it on the device itself is impractical at best.

The Sonos Voice Experience software is written in Rust. This language solves most of the cross-compiling issues. Actual cross-compilation is natively handled by rustc, cargo, and rustup. Rustc, the compiler, is built on top of LLVM, so it can generate code for a wide collection of architectures. Cargo, the build and dependency manager, is aware of cross-compiling and can drive the compiler accordingly. Rustup makes it easy to set up and maintain an environment able to cross-compile to many architectures. But we wanted to push it a bit further: we wanted to run tests and benches on actual devices as easily as on a PC.
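For instance, cross-compiling a crate for a 64-bit ARM Linux device takes just two commands (the target triple here is one example among many rustc supports):

```shell
# Install the standard library for the target, then build for it.
rustup target add aarch64-unknown-linux-gnu
cargo build --target aarch64-unknown-linux-gnu --release
```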

Dinghy is a cargo extension that rewires the familiar “run”, “test”, and “bench” commands. It handles cross compilation, deploys the compiled code to the device, then remotely runs tests or benches on the target device. It reports the results as if the test were running locally. As the Rust ecosystem matured, we patched third-party libraries so they were cross-compiled and worked on ARM processors. One ambition of dinghy was to make it trivial for any Rust library developer to run tests on their own phone, even for libraries that did not explicitly target phones. We wanted any Rust developer to be able to cross-compile and test on a phone without having to be an expert on Android or iOS.
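As a sketch of the workflow, the device selector and subcommands below follow dinghy's documented usage, but check the project README for the exact flags on your platform:

```shell
# Run the crate's test suite on a connected Android phone.
cargo dinghy -d android test

# Run the benches on the same device.
cargo dinghy -d android bench
```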

Running code on Android phones from arbitrary tools is relatively easy. iOS phones are more of a challenge, as iOS devices will only accept signed code, even for tests. Xcode handles most of the complexity of code signing and certificate management for an iOS application developer, but replicating the process outside of Xcode, for an external tool like Cargo, is not a trivial task.

Additionally, dinghy can target remote devices over ssh. This makes it very easy to use a Raspberry Pi or any single-board computer as a test and bench device, while keeping the heavy compilation of Rust code on a powerful Intel workstation, server, or laptop.
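A minimal sketch of such a setup, assuming a Raspberry Pi reachable on the local network (the hostname and device name are illustrative, and the exact schema is documented in dinghy's README):

```toml
# .dinghy.toml — declare an ssh-reachable test device
[ssh_devices]
raspi = { hostname = "raspberrypi.local", username = "pi" }
```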

Today, dinghy is at the core of the SVE build system, since SVE code is mostly Rust. In-house developers also use it in interactive tests and benches, targeting unlocked Sonos devices or standard single board computers as proxies. We also maintain mobile platform support in good working order, even though it is not a primary target for SVE.

On GitHub:

The post that introduced Dinghy

Tract, a neural network inference toolkit


Tract is a neural network inference library. The most visible neural network libraries, like TensorFlow or PyTorch, are training frameworks; they are used by machine learning teams to train neural networks. Training is usually done in the cloud, with access to vast amounts of computing resources and training data. Once the network is trained, it has to be deployed in order to run and perform the task it was designed and trained for.

Training frameworks are also perfectly capable of handling this task, called “inference”. However, they tend to favour the training aspects above all other imperatives, to the detriment of the inference use case. They are also huge pieces of software, which makes them an expensive solution for embedded systems where resources are scarce. It is not uncommon for embedded teams to develop ad-hoc neural network runners that hardcode a specific neural network design.

We chose another way, developing tract as a generic neural network inference library. We make sure it is competitive with other libraries in the standard use cases like image categorization challenges. Since inference is a considerably easier problem than training, quite a few libraries exist in this space independently of the big training frameworks.
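As a sketch, loading and running an ONNX model with tract looks roughly like this (crate and function names follow tract's published tract-onnx API; the model path and input shape are illustrative):

```rust
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load the model, optimize the graph, and make it runnable.
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")?
        .into_optimized()?
        .into_runnable()?;

    // Feed a dummy input tensor (a batch of one 224x224 RGB image).
    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let outputs = model.run(tvec!(input.into()))?;
    println!("output shape: {:?}", outputs[0].shape());
    Ok(())
}
```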

But voice, music, sound, and other time-oriented signals are not necessarily first-class citizens when it comes to neural networks. Running in real time, in a streaming fashion, also adds constraints on both model design and runtime engineering that may elude off-the-shelf solutions. Owning our own library gives us the opportunity to put energy into solving our specific constraints, whether they come from the hardware or the application. We are firm believers in the virtues of owning business-critical pieces of engineering.

While tract started as a developer's personal pet project, a lot of development time was invested to make it great at running real-time voice applications on embedded systems. Sonos is happy to keep tract's story going, investing developer time in it. While we exchanged some constraints for others, and pushed the library in new directions, we do our best to make sure that tract remains a good general-purpose neural network inference library. We were very happy to help with the development of tractjs, a third-party JavaScript binding that allows running neural network inference in Node.js, or even in a browser.

On GitHub:

A former blog post about tract

Ffi-convert, easier and safer interface between Rust, C, and other languages


As previously mentioned, we mostly use Rust for the SVE codebase. However, most of the rest of the Sonos ecosystem is written in C++, and we need a way for both codebases to communicate. The standard way to have two different languages communicate is to use the C ABI, a set of conventions defined around the C language that describe how functions should be called. It has the advantage of being properly defined and stable, so most languages are able to use these conventions to call into C code, or into any code that follows them. This process is usually called FFI, for Foreign Function Interface.
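On the Rust side, exposing a function through the C ABI is a matter of declaring the calling convention and keeping the symbol name unmangled; a minimal sketch (the function name is a made-up example):

```rust
// `extern "C"` selects the C calling convention; #[no_mangle] keeps
// the symbol name stable, so a C caller can link against it as:
//   float apply_gain(float sample, float gain);
#[no_mangle]
pub extern "C" fn apply_gain(sample: f32, gain: f32) -> f32 {
    sample * gain
}

fn main() {
    // The function remains callable from Rust as well.
    println!("{}", apply_gain(0.5, 2.0));
}
```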

In the case of Rust, a few adjustments to the layout of structs and the way functions are declared are necessary in order to get a C-compatible interface. We need to rewrite some high-level Rust constructs into something closer to lower-level C semantics: for instance, using raw pointers instead of Rust references, or forcing the rustc compiler to lay out the memory of a struct as a C compiler would. We end up with two structs representing the data going through our C interface: the C-like one, which is cumbersome to use in Rust (you need unsafe Rust blocks to access anything behind a raw pointer), and the pure Rust one, which can be used easily in the Rust code. With two structs effectively representing the same thing, we need a way to easily convert data from one representation to the other.
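Using only the standard library, the two-struct pattern can be sketched like this (the struct and method names are illustrative, not taken from our codebase):

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Idiomatic Rust representation, convenient to use internally.
pub struct Message {
    pub text: String,
    pub priority: u32,
}

// C-compatible mirror: #[repr(C)] forces a C-like memory layout,
// and the String becomes a raw, NUL-terminated C string pointer.
#[repr(C)]
pub struct CMessage {
    pub text: *mut c_char,
    pub priority: u32,
}

impl CMessage {
    // Build the C-side struct from the Rust one, handing ownership
    // of the allocated C string over to the C representation.
    pub fn c_repr_of(msg: Message) -> CMessage {
        CMessage {
            text: CString::new(msg.text).expect("no interior NUL").into_raw(),
            priority: msg.priority,
        }
    }

    // Convert back to the Rust struct. Unsafe: the caller must
    // guarantee `text` is a valid pointer produced by `c_repr_of`.
    pub unsafe fn as_rust(&self) -> Message {
        Message {
            text: CStr::from_ptr(self.text).to_string_lossy().into_owned(),
            priority: self.priority,
        }
    }
}

impl Drop for CMessage {
    // Reclaim the C string allocation when the C-side struct goes away.
    fn drop(&mut self) {
        if !self.text.is_null() {
            unsafe { drop(CString::from_raw(self.text)) };
        }
    }
}

fn main() {
    let c_msg = CMessage::c_repr_of(Message { text: "hello".into(), priority: 3 });
    let round_trip = unsafe { c_msg.as_rust() };
    assert_eq!(round_trip.text, "hello");
    assert_eq!(round_trip.priority, 3);
    println!("round-trip ok: {} (priority {})", round_trip.text, round_trip.priority);
}
```

Every field with a heap allocation needs this treatment, which is exactly the repetitive, unsafe boilerplate described below.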

This conversion code is fairly easy to write, but it is quite repetitive, and there are quite a few gotchas (unsafe Rust code is, well, unsafe). This is why we decided to create ffi-convert: a set of Rust traits standardising the conversion process, complemented with Rust proc macros that automatically derive the implementation of these traits. This means we no longer have to write the unsafe and error-prone conversion code, since it is automatically generated. It also ensures all the conversion code follows good practices, improving code quality, while making it easy to systematically change all the implementations if we ever find a problem in the way we handle a conversion.
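A sketch of the derive-based approach, based on ffi-convert's documented CReprOf, AsRust, and CDrop traits (the struct names are illustrative, and the exact attribute syntax is described in the crate's documentation):

```rust
use ffi_convert::{AsRust, CDrop, CReprOf};

// Plain Rust struct, convenient to use everywhere in the Rust code.
pub struct Greeting {
    pub text: String,
}

// C-compatible mirror. The derives generate the conversion code:
// CReprOf builds a CGreeting from a Greeting, AsRust goes the other
// way, and CDrop frees the memory owned by the C-side struct.
#[repr(C)]
#[derive(CReprOf, AsRust, CDrop)]
#[target_type(Greeting)]
pub struct CGreeting {
    pub text: *const libc::c_char,
}
```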

While it was written to support our use cases, ffi-convert does not contain anything specific to our projects. It can be helpful for any project dealing with a non-trivial FFI interface. It is available on GitHub as well as on crates.io, so you can easily use it to create a C ABI for your own Rust projects.

On GitHub:

These three projects are at different stages of their lifetime, ranging from very active development to maturity and stability. All of them play an important role in our day-to-day activities, so we naturally invest some time to develop them further, or to keep them in good shape. It's always a great pleasure to get feedback from the community, be it in the form of bug reports or feature requests, so feel free to give them a shot and reach out!


