Accelerated ML at the edge with mainline

As of today, it is impossible to make use of ML accelerators on embedded-grade hardware with a fully open stack based on mainline Linux.

This talk presents the status of an effort to write a fully open and upstream stack for inference on VeriSilicon NPUs, which are found in SoCs from Amlogic, Rockchip, NXP and others.
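One way such a stack can be exercised from userspace is through TensorFlow Lite's external delegate mechanism, which lets an application hand supported operations to an out-of-process or driver-backed backend. The sketch below is only illustrative and is not taken from the talk: the delegate library path (here libteflon.so) and the model file name are assumptions that will differ depending on how the stack is built and installed.

    # Minimal sketch: TFLite inference through an external delegate
    # that offloads supported ops to an NPU driver stack.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load the delegate shared library; the path is an assumption and
    # depends on where the userspace driver is installed.
    delegate = tflite.load_delegate("/usr/lib/libteflon.so")

    # Create an interpreter that routes supported ops to the delegate.
    interpreter = tflite.Interpreter(
        model_path="mobilenet_v1_1.0_224_quant.tflite",  # placeholder model
        experimental_delegates=[delegate],
    )
    interpreter.allocate_tensors()

    # Feed a dummy input and run one inference.
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]).shape)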

Additionally, a path for upstreaming drivers for other accelerator hardware will be proposed.

Tomeu Vizoso, Consultant

Video: YouTube

Slides:


Tomeu Vizoso is an independent consultant who has been working on FOSS since 2007, from kernel infrastructure to drivers and most parts of the desktop and consumer userspace stack.

Since 2023, he has been consulting directly for companies that wish to enable machine-learning workloads with mainline.