
Neural NILM: Deep Neural Networks Applied to Energy Disaggregation (1507.06594v3)

Published 23 Jul 2015 in cs.NE

Abstract: Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called `long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models and that our neural net algorithms generalise well to an unseen house.

Citations (757)

Summary

  • The paper’s key contribution is adapting and evaluating LSTM, denoising autoencoders, and regression networks for effective energy disaggregation.
  • It demonstrates that dAE and regression models outperform CO and FHMM, while LSTMs excel on simple two-state appliances.
  • The study highlights using synthetic data for improved generalization and suggests unsupervised pre-training to enhance future NILM solutions.

Neural NILM: Deep Neural Networks Applied to Energy Disaggregation

Overview

The paper "Neural NILM: Deep Neural Networks Applied to Energy Disaggregation" presents an exploratory study of the application of deep neural networks (DNNs) to energy disaggregation, also known as non-intrusive load monitoring (NILM). NILM aims to estimate the electricity consumption of individual appliances from a single aggregate meter reading for an entire home. The key contribution of this paper is the adaptation and evaluation of three different neural network architectures for this task: long short-term memory (LSTM) networks, denoising autoencoders (dAEs), and regression networks that estimate the start time, end time, and average power demand of appliance activations.

Methodology

Data and Preprocessing

The paper uses data from the UK Domestic Appliance-Level Electricity (UK-DALE) dataset, which contains both whole-home aggregate readings and submetered consumption data for individual household appliances. The authors prepare the data by extracting individual appliance activations and generating both real and synthetic aggregate sequences. Training uses a mix of 50% synthetic and 50% real data, aiming to improve the networks' ability to generalize.
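The synthetic-data idea described above can be sketched as follows: place randomly chosen appliance activations at random offsets in an empty window, sum them into an aggregate trace, and keep per-appliance targets for supervision. This is a minimal illustrative sketch, not the paper's exact procedure; the appliance names, inclusion probability, and noise model are assumptions.

```python
import random

def make_synthetic_aggregate(activations, window_len, noise_level=5.0, rng=None):
    """Build one synthetic training window.

    `activations` maps an appliance name to a list of recorded power traces
    (lists of watts). Each appliance is included with probability 0.5 at a
    random offset; the aggregate is the sum of the placed traces plus a small
    Gaussian background term. All parameters here are illustrative.
    """
    rng = rng or random.Random(0)
    aggregate = [0.0] * window_len
    targets = {name: [0.0] * window_len for name in activations}
    for name, traces in activations.items():
        if rng.random() < 0.5:  # skip this appliance in this window
            continue
        trace = rng.choice(traces)
        start = rng.randrange(max(1, window_len - len(trace)))
        for i, power in enumerate(trace[: window_len - start]):
            aggregate[start + i] += power
            targets[name][start + i] = power
    # Add small background noise, clipped so power stays non-negative.
    aggregate = [max(0.0, v + rng.gauss(0.0, noise_level)) for v in aggregate]
    return aggregate, targets
```

A training batch would then draw half its windows from this generator and half from real aggregate data, as the paper describes.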

Neural Network Architectures

  1. Long Short-Term Memory (LSTM):
    • The LSTM network includes layers for time-series data processing, specifically focusing on capturing long-range dependencies.
    • A bidirectional LSTM (bi-LSTM) is used to enable the network to learn from the data in both forward and backward directions.
  2. Denoising Autoencoders (dAE):
    • The dAE architecture is employed to separate the "clean" appliance signal from the noisy aggregate data.
    • The network uses convolutional layers to extract local features from the input sequence, helping to identify individual appliance signatures effectively.
  3. Regression Networks ("Rectangles"):
    • This network architecture aims to predict the start time, end time, and average power of each appliance activation.
    • The output consists of three scalars representing these parameters, casting activation detection as a structured regression problem.
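The "rectangles" target can be made concrete with a small encoding function: given an appliance's power trace within a window, emit the start time, end time, and average power as three scalars. The normalization choices below (fractions of the window length, power scaled by an assumed maximum) are illustrative, not taken verbatim from the paper.

```python
def encode_rectangle(target, max_power):
    """Encode an appliance trace in a window as the three regression targets
    of the 'rectangles' network: (start, end, average power), each in [0, 1].
    `max_power` is an assumed per-appliance normalization constant."""
    n = len(target)
    on = [i for i, p in enumerate(target) if p > 0]
    if not on:  # appliance never switches on in this window
        return (0.0, 0.0, 0.0)
    start, end = on[0], on[-1] + 1
    avg_power = sum(target[start:end]) / (end - start)
    return (start / n, end / n, min(1.0, avg_power / max_power))
```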

Experimental Setup

Each network is trained and evaluated on data from the UK-DALE dataset, with performance metrics including F1 score, precision, accuracy, mean absolute error, and the proportion of total energy correctly assigned. A fixed network configuration is employed for simplicity, with comparison against combinatorial optimization (CO) and factorial hidden Markov models (FHMM).
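Several of the metrics listed above can be computed directly from a predicted and a ground-truth power trace. The sketch below uses an assumed on-power threshold and a common NILM formulation of "proportion of total energy correctly assigned" (1.0 meaning perfect); both are conventions, not details quoted from the paper.

```python
def nilm_metrics(pred, truth, on_threshold=10.0):
    """Compute F1, precision, recall, accuracy, MAE, and the proportion of
    total energy correctly assigned for one appliance's predicted trace.
    `on_threshold` (watts) is an assumed cutoff for the on/off state."""
    p_on = [p > on_threshold for p in pred]
    t_on = [t > on_threshold for t in truth]
    tp = sum(1 for p, t in zip(p_on, t_on) if p and t)
    fp = sum(1 for p, t in zip(p_on, t_on) if p and not t)
    fn = sum(1 for p, t in zip(p_on, t_on) if t and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = sum(1 for p, t in zip(p_on, t_on) if p == t) / len(truth)
    abs_err = sum(abs(p - t) for p, t in zip(pred, truth))
    mae = abs_err / len(truth)
    total_energy = sum(truth)
    # 1.0 when every watt-second is assigned to the right place.
    energy_correct = 1.0 - abs_err / (2 * total_energy) if total_energy else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "mae": mae, "energy_correct": energy_correct}
```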

Key Results

  • On an unseen house, both the dAE and "rectangles" networks outperform CO and FHMM across all metrics.
  • LSTMs demonstrate strong performance on simpler, two-state appliances (kettle, fridge, microwave) but underperform on more complex, multi-state appliances (dishwasher, washing machine).
  • For houses seen during training, the dAE shows a consistent lead over CO and FHMM, though the performance gap is narrower than on unseen data.

Implications and Future Work

Practical Implications

  • The successful application of DNN architectures to energy disaggregation holds promise for developing more robust and generalizable NILM algorithms.
  • Using synthetic data for regularization suggests a pathway for enhancing generalization capacity without requiring prohibitively large annotated datasets.

Theoretical Implications

  • The introduction of autoencoders and regression-based approaches to NILM opens new avenues for understanding the computational mechanics of energy disaggregation.
  • Further exploration of bidirectional LSTMs may yield insights into why their performance varies for different classes of appliances.

Speculations on Future Developments

  • Unsupervised Pre-training: Given the wealth of unlabelled data available relative to labelled data, incorporating unsupervised pre-training could significantly enhance the network’s effectiveness.
  • Enhanced Synthetic Data Generation: Refining the method to generate more realistic synthetic training data based on appliance usage patterns could further improve network performance.
  • Broader Appliance Spectrum: Extending the study to cover a wider array of appliances, particularly low-power devices, would test the scalability and applicability of these DNN architectures.
  • Hardware Considerations: Investigating the feasibility of deploying these models in an embedded environment or leveraging cloud-based solutions for real-time disaggregation.

Conclusion

The adaptation of deep neural networks to NILM demonstrates notable advancements, particularly the dAE and "rectangles" networks, which show significant promise in effectively disaggregating appliance-level consumption data from aggregate readings. Future research should focus on leveraging unsupervised pre-training, refining synthetic data generation, broadening appliance coverage, and assessing deployment feasibility to enhance the efficacy and practical implementation of NILM solutions.