Michael Tegtmeier

Transfer Learning: A Guide to Utilizing Pre-trained Models for Wind Turbine Data

Updated: Jan 30, 2023

Transfer learning is a machine learning technique that leverages the knowledge a model has gained on a similar task to improve performance on a new one. A pre-trained model is taken, fine-tuned, and applied to a new task with similar characteristics, which saves time and resources and boosts accuracy.

Advantages of Transfer Learning

  1. Saves time and resources: Pre-trained models are already trained on a large amount of data, reducing the need for an extensive amount of data for the new task.

  2. Improved accuracy: Pre-trained models have already learned features and patterns from the data they were trained on, which can be reused to increase the accuracy of the new task.

  3. Solving data scarcity problems: Transfer learning is useful when data for the new task is limited, and using pre-trained models can help overcome this issue.

Disadvantages of Transfer Learning

  1. Limitations to the new task: Pre-trained models may not be suitable for the new task if it is significantly different from the task they were trained on.

  2. Fine-tuning required: Fine-tuning pre-trained models to the new task can be time-consuming and requires domain-specific knowledge.

  3. Limitations of pre-trained models: The accuracy of pre-trained models is dependent on the quality of training data, and the model may not be optimal for the new task.

Using Transfer Learning with Wind Turbine Data

Transfer learning can be applied to wind turbine data to enhance the performance of predictive maintenance models. Predictive maintenance models predict potential failures in wind turbines, reducing maintenance costs and increasing efficiency. By using transfer learning, models can be fine-tuned on wind turbine data, boosting accuracy and reducing the amount of data required to train the model.


In cases where there is limited data available for a wind turbine, a pre-training/fine-tuning strategy based on transfer learning is used to achieve accurate predictions. With less than twelve months of data, transfer learning is essential for good model performance; with between one and two years of data, it is still recommended. Once more than two years of data are available, transfer learning is no longer necessary.
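As a rough illustration, this rule of thumb can be written down as a small Python helper. This is only a sketch; the function name and the month-based thresholds are our own shorthand for the guidance above, not part of any Turbit code.

def transfer_learning_advice(months_of_data: int) -> str:
    # Rule of thumb: below one year transfer learning is essential,
    # between one and two years it is recommended, beyond two years it is not necessary.
    if months_of_data < 12:
        return "essential"
    elif months_of_data < 24:
        return "recommended"
    else:
        return "not necessary"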


The transfer learning process involves two steps. First, a neural network is pre-trained using data from similar turbines, resulting in a generic model representing the average behavior of the group. Next, this generic turbine model is fine-tuned using the limited data available for an individual turbine, taking into account its peculiarities when generating predictions.
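To make these two steps more concrete, here is a minimal sketch in Python with Keras. The network architecture, feature set, placeholder data, and hyperparameters are illustrative assumptions for this post, not Turbit's actual model.

import numpy as np
import tensorflow as tf

def build_power_model(n_features: int = 3) -> tf.keras.Model:
    # Small MLP mapping (wind speed, ambient temperature, wind direction) to power.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted power in kW
    ])

# Step 1: pre-train a generic model on data pooled from similar turbines.
generic_model = build_power_model()
generic_model.compile(optimizer="adam", loss="mse")
# X_fleet / y_fleet: SCADA features and power from the group of similar turbines (random placeholders here).
X_fleet, y_fleet = np.random.rand(10000, 3), np.random.rand(10000, 1)
generic_model.fit(X_fleet, y_fleet, epochs=10, batch_size=256, verbose=0)

# Step 2: fine-tune a copy of the generic model on the limited data of one individual turbine.
turbine_model = tf.keras.models.clone_model(generic_model)
turbine_model.set_weights(generic_model.get_weights())
# Optionally freeze the early layers so only the last layer adapts to the turbine's peculiarities.
for layer in turbine_model.layers[:-1]:
    layer.trainable = False
turbine_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# X_turbine / y_turbine: e.g. one month of data from the individual turbine (random placeholders here).
X_turbine, y_turbine = np.random.rand(500, 3), np.random.rand(500, 1)
turbine_model.fit(X_turbine, y_turbine, epochs=20, batch_size=32, verbose=0)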


Although transfer learning is an effective strategy for limited data, it is still recommended to retrain the neural network when more data becomes available. A schedule of retraining every month during the first half-year, every two months during the second half-year, every six months during the second year, and once a year thereafter ensures that a wider range of turbine states is represented in the training data.
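The schedule above can be summarized in another small helper (again just a sketch; the function name and the idea of expressing the schedule in months are assumptions made for illustration).

def months_until_next_retraining(months_of_data: int) -> int:
    # Suggested interval (in months) until the next retraining, per the schedule above.
    if months_of_data < 6:      # first half-year: retrain every month
        return 1
    elif months_of_data < 12:   # second half-year: retrain every two months
        return 2
    elif months_of_data < 24:   # second year: retrain every six months
        return 6
    else:                       # after the second year: retrain once a year
        return 12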


In conclusion, transfer learning is a valuable tool for wind turbine data, providing a solution to the challenge of limited data while enhancing the performance of predictive maintenance models.


Analysis of Transfer Learning - Results from Turbit




To test the accuracy of transfer learning in the specific domain of Turbit, we trained several neural networks.


In the first picture (far left), you can see an instance trained to learn the power of a wind turbine from a full year of data (wind speed, ambient temperature, wind direction). This was done without pre-training, and the results are not optimal, with an average deviation of 140 kW.


In the second picture (mid-left), one can see an instance trained only with pre-training from similar turbines, without fine-tuning. Predictions are generally accurate, but in some cases they are far off.

In the third picture (mid-right), one can see the pre-trained instance fine-tuned with one month of data from January. Predictions are very accurate, and we no longer see the outlier predictions (no false positives at all).


In the fourth picture, one can see how training on only one month of data would have performed. The result is surprisingly good, but we can safely assume that this instance would not perform well in other years, especially during unexpectedly high temperatures.


Turbit has performed many more of these tests (cross-validation) and came to this conclusion:


"Transfer learning is always a good option and makes predictions more robust and accurate, however, it is not always necessary"




