Abstract:
As in many other research areas, the use of Deep Learning (DL) techniques is growing in software engineering. However, these techniques are not yet widespread in the Model-Driven Engineering (MDE) field. In this paper, we explore the use of DL to extract useful text embeddings from software models. We propose a novel approach to embedding software models by means of transformer architectures trained on large datasets. Our approach combines intermediate representations and Language Models (LMs) to extract features from modelling artefacts in order to enable applications of interest, such as intelligent model assistance, classification, transformation, completion and correction, among others. We show that the approach is promising for MDE and may lead to useful results in the future.
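For illustration only, the sketch below shows one way a text embedding could be obtained from a flattened textual rendering of a model element using a generic pre-trained Transformer encoder. The encoder name ("bert-base-uncased") and the intermediate serialisation shown are assumptions made for this example; they are not the concrete pipeline or representation proposed in the paper.

```python
# Minimal sketch (assumptions only, not the paper's implementation):
# embed a textual serialisation of a model element with a pre-trained
# Transformer encoder and mean-pool the token vectors.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical intermediate representation: a class from a UML-like model
# flattened into a single line of text.
model_text = "class Library attrs: name address refs: books -> Book"

inputs = tokenizer(model_text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = encoder(**inputs)

# Mean-pool the last hidden states into one fixed-size vector for the element;
# such vectors could then feed classification, completion, or similarity tasks.
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)  # e.g. torch.Size([768])
```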