Adaptive compressive sensing

 

Can we exploit distributed compressive sensing to adaptively select sample rates in a sensor network whilst guaranteeing the quality of data?

 

Partners

Imperial College London

 


As a proof of concept, a small-scale testbed was developed, consisting of four Intel Edison boards, an isolated Wi-Fi network, a synchronization server, and a Matlab server.

 


A high-frequency, low-power energy monitoring hardware module was developed to measure and disaggregate, in real time, the energy consumed by communication and by other operations (i.e., sensing and the idle state). Experimental results reveal that our hardware improves knowledge of energy consumption by approximately 20% compared with estimation techniques.

 


LoRa communication modules and new MAC layers were integrated into the testbed to enable the incorporation of the "Adaptive Low Powered Wide Area communications" project.

 


Seven CS and DCS algorithms were evaluated in terms of reconstruction error and execution time. With data transmission reduced by 80%, the reconstruction error varies between 10⁻⁴ and 10⁻⁵, depending on the noise level.

 


An algorithm that guarantees sustainability and ensures data reliability (QoS) according to application requirements has been developed for the in-vitro testbed. In addition, data from the ENO deployment in QEOP was used during the evaluation process.

 

For decades, the sampling process has been dominated by the classical Nyquist–Shannon sampling theorem. However, several studies have shown that many natural signals admit highly sparse representations in appropriate transform domains (e.g., wavelets and sinusoids). Compressive Sensing (CS) provides a powerful framework for simultaneous sensing and compression, enabling a significant reduction in the sampling and computation costs on a sensor node with limited memory and power resources. CS has been used widely in multiple domains, including image and video processing, communication and networking, and biological applications.
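To make the CS idea above concrete, the sketch below compresses a sparse signal with a random sensing matrix and recovers it from far fewer measurements than Nyquist sampling would require. This is a minimal illustration, not the project's implementation: the use of NumPy and of Orthogonal Matching Pursuit (OMP) as the recovery algorithm, and all dimensions, are assumptions for the example.

```python
# Minimal CS sketch (illustrative only): OMP recovery is an assumed choice,
# not the algorithm used in the project.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5                  # signal length, measurements (m << n), sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)       # k-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                  # compressed measurements

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most correlated
    with the residual, then least-squares fit over the selected support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Here the node transmits only 64 measurements instead of 256 samples, and in the noiseless case the signal is recovered with negligible error.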

Despite the benefits of CS, classic CS does not exploit the correlation among different data sources. Distributed Compressive Sensing (DCS) was introduced to exploit this correlation and improve reconstruction performance. DCS provides two main benefits over classic CS: it dramatically decreases the number of required measurements, achieving a low reconstruction error by exploiting the correlation between data from different nodes, and it transfers the reconstruction complexity from the edge to the base station, which is critical in resource-constrained wireless sensor networks.
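The following sketch illustrates how DCS exploits inter-node correlation. It assumes a common-support joint sparsity model (signals at different nodes share the same sparse support) and uses Simultaneous OMP (SOMP) for joint recovery at the base station; both the model and the algorithm are illustrative assumptions, not the project's specific choices.

```python
# Minimal DCS sketch (illustrative only): common-support joint sparsity
# with SOMP recovery; all dimensions and names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n, m, k, J = 128, 32, 4, 3            # length, measurements per node, sparsity, nodes
support = rng.choice(n, k, replace=False)
X = np.zeros((n, J))
X[support] = rng.normal(size=(k, J))  # shared support, distinct coefficients per node

Phis = [rng.normal(size=(m, n)) / np.sqrt(m) for _ in range(J)]  # per-node matrices
Ys = [Phis[j] @ X[:, j] for j in range(J)]                       # per-node measurements

def somp(Phis, Ys, k):
    """Simultaneous OMP: pick the atom with the largest total correlation
    across all nodes' residuals, exploiting the shared support."""
    J = len(Phis)
    residuals = [y.copy() for y in Ys]
    idx = []
    for _ in range(k):
        score = sum(np.abs(Phis[j].T @ residuals[j]) for j in range(J))
        idx.append(int(np.argmax(score)))
        coefs = []
        for j in range(J):
            c, *_ = np.linalg.lstsq(Phis[j][:, idx], Ys[j], rcond=None)
            coefs.append(c)
            residuals[j] = Ys[j] - Phis[j][:, idx] @ c
    X_hat = np.zeros((Phis[0].shape[1], J))
    for j in range(J):
        X_hat[idx, j] = coefs[j]
    return X_hat

X_hat = somp(Phis, Ys, k)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Because the support is estimated jointly from all nodes' measurements, each node can transmit fewer measurements than standalone CS would require, while the heavy recovery work runs at the base station.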

In both CS and DCS approaches, state-of-the-art research solutions select the sample rate statically, based only on the application's data properties (such as the sparsity level), to ensure an upper bound on the reconstruction error. However, this data-driven fixed selection of sample rates leads to (a) underutilization of edge-node resources, especially when only limited and dynamic harvested energy is available, and (b) an inability to adapt to changes in sparsity.

In this project, we overcome these problems by proposing a distributed system that adaptively selects sample rates in a sensor network according to the available resources. To compute the optimal sample-rate policy, we designed and developed a lightweight algorithm that minimizes the DCS reconstruction error (i.e., the Quality of Service, QoS) while ensuring sustainable operation of the sensor network under dynamic conditions, given the variable energy harvested at each sensor node.
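The project's actual policy algorithm is not detailed here, so the toy sketch below only illustrates the shape of the trade-off it resolves: each epoch, pick the lowest sample rate whose predicted reconstruction error meets the QoS target, subject to the energy harvested in that epoch. Every name, value, and the error model are hypothetical.

```python
# Toy adaptive sample-rate selection (illustrative only; not the project's
# algorithm). All parameters and the error model are hypothetical.

def select_rate(rates, err_model, energy_per_sample, budget, qos_err):
    """rates: candidate samples per epoch, ascending.
    err_model: maps a rate to a predicted reconstruction error."""
    feasible = [r for r in rates if r * energy_per_sample <= budget]
    if not feasible:
        return min(rates)            # degraded mode: cheapest rate available
    for r in feasible:               # lowest rate meeting the QoS bound
        if err_model(r) <= qos_err:
            return r
    return max(feasible)             # best effort within the energy budget

# Toy error model: error falls with the sampling rate.
rates = [16, 32, 64, 128]
err_model = lambda r: 1.0 / r**2
rate = select_rate(rates, err_model,
                   energy_per_sample=0.5, budget=40.0, qos_err=1e-3)
```

In this toy run the budget rules out the highest rate, and 32 samples per epoch is the cheapest rate whose predicted error satisfies the QoS bound; a real policy would additionally track sparsity changes and the harvesting forecast, as the project's algorithm does.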