
A comparative study of reinforcement learning techniques to repair models

Iovino, Ludovico
2020-01-01

Abstract

In model-driven software engineering, models are used in all phases of the development process. These models may become broken due to the various edits performed during the modeling process. To repair broken models, we have developed PARMOREL, an extensible framework that uses reinforcement learning techniques. So far, we have used our own version of the Markov Decision Process (MDP), adapted to the model repair problem, together with the Q-learning algorithm. In this paper, we revisit our MDP definition, address its weaknesses, and propose a new one. After comparing the results of both MDPs using Q-learning to repair a sample model, we proceed to compare the performance of Q-learning against other reinforcement learning algorithms using the new MDP. We compare Q-learning with four algorithms: Q(λ), Monte Carlo, SARSA, and SARSA(λ), and perform a comparative study by repairing a set of broken models. Our results indicate that the new MDP definition and the Q(λ) algorithm can repair models with faster performance.
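For illustration, the sketch below shows how tabular Q-learning could be applied to a model repair MDP of the kind the abstract describes: a state is the set of validation errors currently present in the model, and an action pairs an error with a candidate repair operation. The environment API used here (get_errors, repair_actions, apply_repair, reward) and the Python formulation are assumptions made for this sketch; they do not reflect the actual PARMOREL implementation.

```python
# Minimal, hypothetical sketch of tabular Q-learning for model repair.
# The environment callbacks and their names are illustrative assumptions,
# not the PARMOREL API. Errors and repair operations are assumed hashable.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 200  # illustrative hyperparameters

def q_learn(initial_model, get_errors, repair_actions, apply_repair, reward):
    """Learn a repair policy: a state is the frozen set of current errors,
    an action is an (error, repair_operation) pair."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

    def choose(state, actions):
        # epsilon-greedy action selection over the applicable repairs
        if not actions:
            return None
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    for _ in range(EPISODES):
        model = initial_model.copy()          # assumes the model can be copied
        errors = get_errors(model)
        while errors:
            state = frozenset(errors)
            actions = [(e, r) for e in errors for r in repair_actions(e)]
            action = choose(state, actions)
            if action is None:
                break                         # no applicable repair left
            model = apply_repair(model, *action)
            new_errors = get_errors(model)
            next_state = frozenset(new_errors)
            next_actions = [(e, r) for e in new_errors for r in repair_actions(e)]
            best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
            # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[(state, action)] += ALPHA * (
                reward(errors, new_errors, action) + GAMMA * best_next - Q[(state, action)]
            )
            errors = new_errors
    return Q
```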
Year: 2020
ISBN: 9781450381352
Keywords: model repair, Markov decision process, reinforcement learning
File attached to this record:
2020_23IntConfMODELS_C_Barriga.pdf (Adobe PDF, 879.86 kB), not available for download
Type: Other attached material
License: Not public

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12571/14221
Citations
  • Scopus 7