Increasing the scalability of graph convolution for FPGA-implemented event-based vision, by Piotr Wzorek and 3 other authors
Abstract: Event cameras are becoming increasingly popular as an alternative to traditional frame-based vision sensors, especially in mobile robotics. Taking full advantage of their high temporal resolution, high dynamic range, low power consumption and the sparsity of event data, which reflects only changes in the observed scene, requires both an efficient algorithm and a specialised hardware platform. A recent trend involves Graph Convolutional Neural Networks (GCNNs) implemented on a heterogeneous SoC FPGA. In this paper we focus on optimising hardware modules for graph convolution to allow flexible selection of the FPGA resources (BlockRAM, DSP and LUT) used for their implementation. We propose a "two-step convolution" approach that utilises additional BRAM buffers to reduce LUT usage for multiplications by up to 94%. This method significantly improves the scalability of GCNNs, enabling the deployment of models with more layers and larger graph sizes, and their application to more dynamic scenarios.
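The abstract's core idea can be sketched in software terms. The following is a hypothetical Python illustration, not the authors' hardware design: a graph convolution is split into an addition-only aggregation step (which on an FPGA could live in BRAM buffers, needing no multipliers) followed by one weight multiplication per node rather than per edge, which is where the multiplier savings would come from. The function name and data layout are assumptions for this sketch.

```python
def two_step_graph_conv(features, edges, weights):
    """Sketch of a 'two-step' graph convolution.

    features: list of n input feature vectors (length d_in each)
    edges:    list of (src, dst) index pairs
    weights:  d_in x d_out weight matrix (list of lists)
    """
    n = len(features)
    d_in = len(features[0])
    d_out = len(weights[0])

    # Step 1 -- aggregation into a buffer (BRAM in hardware):
    # only additions are needed here, no multipliers.
    buf = [[0.0] * d_in for _ in range(n)]
    for src, dst in edges:
        for k in range(d_in):
            buf[dst][k] += features[src][k]

    # Step 2 -- one weight multiplication per node on the buffered
    # sums, instead of one multiplication per edge.
    out = [[sum(buf[i][k] * weights[k][j] for k in range(d_in))
            for j in range(d_out)]
           for i in range(n)]
    return out
```

In a naive edge-wise formulation each edge would trigger a full feature-by-weight multiplication; buffering the sums first means the multiplier count scales with the number of nodes, not edges, which is consistent with the large LUT reduction the paper reports.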
Submission history
From: Tomasz Kryjak
[v1]
Wed, 6 Nov 2024 21:22:46 UTC (909 KB)
[v2]
Mon, 25 Nov 2024 08:44:32 UTC (909 KB)