COMPUTER STANDARDS & INTERFACES, vol. 76, 2021 (SCI-Expanded)
Software development required for constructing multi-agent systems (MAS) usually becomes challenging and time-consuming due to the autonomy, distributedness, and openness of these systems, in addition to the complicated nature of internal agent behaviors and agent interactions. To facilitate MAS development, researchers propose various domain-specific modeling languages (DSMLs) by enriching MAS metamodels with a defined syntax and semantics. Although the descriptions of these languages are given in the related studies along with examples of their use, unfortunately many are not evaluated in terms of either their usability (being hard to learn, understand, and use) or the quality of the generated artifacts. Hence, in this paper, we introduce an evaluation framework, called AgentDSM-Eval, with its supporting tool, which can be used to evaluate MAS DSMLs systematically according to various quantitative and qualitative aspects of agent software development. The empirical evaluation provided by the AgentDSM-Eval framework was successfully applied to one of the well-known MAS DSMLs. The assessment showed that both the MAS domain coverage of DSMLs and the agent developers' adoption of modeling elements can be determined with this framework. Moreover, the tool's quantitative results can assess a MAS DSML's performance in terms of development time and throughput. AgentDSM-Eval also enables the qualitative assessment of MAS DSML features according to novel quality characteristics and measures, which it defines specifically for the MAS domain.