
Browsing by Author "Condori Fernández, Nelly"

Now showing 1 - 7 of 7
    A metrics-driven inspection framework for model transformations
(Curran Associates, 2019) Granda Juca, María Fernanda; Parra González, Luis Otto; Condori Fernández, Nelly
    Model transformations are key elements of Model-driven Engineering. They allow querying, synthesizing and transforming models into other models or code. [Problem] However, as with other software development artefacts, they are not free from anomalies and thus require specialist verification techniques. [Objective] The objective of this study is to define a semi-automated framework for inspecting the correctness (notions of type and correspondence) of model transformations, by means of detecting and locating anomalies in the transformation rules. [Method] In order to compare the correctness of source and target models, we assume that operational behaviour can be compared by metrics applied on projections from the source model to the target (with deliberate loss of information), which should be preserved by the transformation. [Results] We demonstrate the applicability of our framework for inspecting the correctness of a model-to-model transformation required in a model-driven testing approach. The main result of the study highlights the advantages of metrics for detecting any missing, incorrect or unnecessary transformation rules that have an impact on the correctness of the model transformations. From the research perspective, the feedback produced by the implemented tool will be useful for future research.
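The core idea of the abstract, comparing a metric on the source model with the same metric on the target model to flag faulty rules, can be sketched as follows. This is an illustrative toy, not the paper's actual framework: the model representation, metric, and function names are invented for this example.

```python
# Hypothetical sketch of metric-based transformation inspection: a metric that
# the transformation should preserve is computed on both models, and any
# mismatch points at a missing, incorrect or unnecessary rule.

def count_metric(model, kind):
    """Count elements of a given kind in a toy model (list of (kind, name) pairs)."""
    return sum(1 for k, _ in model if k == kind)

def inspect_transformation(source, target, preserved_kinds):
    """Return (kind, source_count, target_count) triples where counts diverge."""
    anomalies = []
    for kind in preserved_kinds:
        s, t = count_metric(source, kind), count_metric(target, kind)
        if s != t:
            anomalies.append((kind, s, t))
    return anomalies

source = [("class", "Order"), ("class", "Item"), ("attribute", "total")]
target = [("class", "Order"), ("attribute", "total")]  # "Item" was dropped
print(inspect_transformation(source, target, ["class", "attribute"]))
# → [('class', 2, 1)]
```

The mismatch on `class` localises the anomaly to whichever rule maps classes, which is the kind of feedback the abstract describes.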
    Effectiveness Assessment of an Early Testing Technique using Model-Level Mutants
(Association for Computing Machinery, 2017) Granda Juca, María Fernanda; Condori Fernández, Nelly; Vos, Tanja Ernestina; Pastor López, Oscar
While modern software development technologies enhance the capabilities of model-based/driven development, they introduce challenges for testers such as how to perform early testing at model level to ensure the quality of the model. In this context, we have developed an early testing technique supported by the CoSTest tool to validate requirements at model level. In this paper we describe an empirical evaluation of CoSTest with respect to its effectiveness in terms of its fault detection and test suite adequacy. This evaluation is carried out by model-level mutation testing using first order mutants (created by injection of a single fault) and high order mutants (containing more than one fault) with seven conceptual schemas (of different sizes) that represent the functionality of different software systems in different domains. Our findings show that the tests generated by CoSTest are effective at killing a large number of mutants. However, there are also some fault types (e.g. deleting the reference to a class attribute or an operation call in a constraint) that our test suites were not able to detect. CoSTest was more effective at detecting fault types using high order mutants than first order mutants. Thus, CoSTest's effectiveness is affected by the mutant type tested.
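The effectiveness measure behind such an evaluation is the mutation score: the fraction of mutants "killed", i.e. distinguished from the original artefact by at least one test. A minimal sketch, not CoSTest itself (mutants are modelled here as plain functions for illustration):

```python
# Illustrative mutation-score computation. A mutant is killed when at least
# one (input, expected) test reveals behaviour different from the original.

def mutation_score(mutants, test_suite):
    """mutants: list of callables; test_suite: list of (input, expected) pairs."""
    killed = sum(
        1 for mutant in mutants
        if any(mutant(x) != expected for x, expected in test_suite)
    )
    return killed / len(mutants)

# Original behaviour under test: f(x) = x + 1
first_order = [lambda x: x - 1]    # single injected fault
high_order = [lambda x: -(x - 1)]  # two combined faults
tests = [(1, 2), (0, 1)]
print(mutation_score(first_order + high_order, tests))  # → 1.0
```

A score below 1.0 would mean some mutants survive every test, which is exactly the signal the abstract reports for certain fault types.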
    How do negative emotions influence on the conceptual models verification?: a live study proposal
    (Association for Computing Machinery, Inc, 2020) Mayhua Quispe, Angela; Suni Lopez, Franci; Granda Juca, María Fernanda; Condori Fernández, Nelly
    The present live study is proposed with the objective of investigating the influence of negative emotions (i.e., stress) in the efficiency for verifying conceptual models. To conduct this study, we use a Model-driven Testing tool, named CoSTest, and our own version of stress detector within a competition setting. The experiment design, overview of the empirical procedure, instrumentation and potential threats are presented in the proposal.
    Mutation operators for UML class diagrams
(Springer, Cham, 2016) Granda Juca, María Fernanda; Condori Fernández, Nelly; Vos, Tanja Ernestina; Pastor López, Oscar
    Mutation Testing is a well-established technique for assessing the quality of test cases by checking how well they detect faults injected into a software artefact (mutant). Using this technique, the most critical activity is the adequate design of mutation operators so that they reflect typical defects of the artefact under test. This paper presents the design of a set of mutation operators for Conceptual Schemas (CS) based on UML Class Diagrams (CD). In this paper, the operators are defined in accordance with an existing defects classification for UML CS and relevant elements identified from the UML-CD meta-model. The operators are subsequently used to generate first order mutants for a CS under test. Finally, in order to analyse the usefulness of the mutation operators, we measure some basic characteristics of mutation operators with three different CSs under test.
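A mutation operator of the kind the abstract describes applies one small, typical defect to the schema, yielding one first-order mutant per application site. The sketch below invents a toy schema representation and a single attribute-deletion operator for illustration; the paper's operators are defined over the UML-CD meta-model, not this dictionary encoding.

```python
# Hypothetical first-order mutant generation for a toy class-diagram model:
# each mutant differs from the original schema by exactly one deleted attribute.

def delete_attribute_operator(schema):
    """schema: dict class_name -> list of attribute names.
    Yield one mutant schema per attribute, with that attribute removed."""
    for cls, attrs in schema.items():
        for attr in attrs:
            yield {c: [a for a in al if not (c == cls and a == attr)]
                   for c, al in schema.items()}

schema = {"Order": ["id", "total"], "Item": ["sku"]}
mutants = list(delete_attribute_operator(schema))
print(len(mutants))  # → 3, one first-order mutant per attribute
```

Running a test suite against each mutant then measures how well the suite detects that defect class, which is how the paper analyses operator usefulness.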
    Towards a functional requirements prioritization with early mutation testing
(IEEE Computer Society, 2018) Condori Fernández, Nelly; Granda Juca, María Fernanda
Researchers have proposed a number of prioritization techniques to help decision makers select an optimal combination of (non-)functional requirements to implement. However, most of them are defined on an ordinal or nominal scale, which is not reliable because it is limited to simple operations on ranked or ordered requirements. We argue that the importance of certain requirements could be determined by their criticality level, which can be assessed using a ratio scale. The main contribution of the paper is the new strategy proposed for prioritizing functional requirements, using early mutation testing and dependency analysis.
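The ordinal-vs-ratio distinction the abstract draws can be made concrete: a ratio-scale criticality score supports meaningful arithmetic (sums, weighting), unlike a bare rank. The scoring formula below is invented purely to illustrate that point; it is not the paper's strategy.

```python
# Toy ratio-scale prioritization: criticality combines the faults a
# requirement's tests expose (a mutation-testing signal) with how many other
# requirements depend on it (a dependency-analysis signal). Formula invented.

def prioritize(requirements):
    """requirements: dict name -> (faults_detected, dependents).
    Return requirement names ordered by decreasing criticality."""
    scores = {name: faults * (1 + deps)
              for name, (faults, deps) in requirements.items()}
    return sorted(scores, key=scores.get, reverse=True)

reqs = {"login": (4, 2), "report": (3, 0), "export": (1, 1)}
print(prioritize(reqs))  # → ['login', 'report', 'export']
```

Because the scores live on a ratio scale, one can also say "login is four times as critical as report", a statement an ordinal ranking cannot support.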
    Towards the automated generation of abstract test cases from requirements models
(IEEE, 2014) Granda Juca, María Fernanda; Condori Fernández, Nelly; Vos, Tanja Ernestina; Pastor, Oscar
In a testing process, the design, selection, creation and execution of test cases is a very time-consuming and error-prone task when done manually, since suitable and effective test cases must be obtained from the requirements. This paper presents a model-driven testing approach for conceptual schemas that automatically generates a set of abstract test cases from requirements models. In this way, tests and requirements are linked together to find defects as soon as possible, which can considerably reduce the risk of defects and project reworking. The authors propose a generation strategy which consists of: two metamodels; a set of transformation rules which are used to generate a Test Model, and the Abstract Test Cases, from an existing approach to communication-oriented Requirements Engineering; and an algorithm based on Breadth-First Search. A practical application of our approach is included.
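The breadth-first traversal step mentioned in the abstract can be sketched over a toy test model: each root-to-leaf path through the model becomes one abstract test case. The model encoding and node names below are invented for illustration, not the paper's metamodels.

```python
# Sketch of BFS-based abstract test case generation over a toy test model
# (dict node -> list of successor nodes). Each complete path is one case.
from collections import deque

def abstract_test_cases(model, start):
    """Return all root-to-leaf paths from start, in breadth-first order."""
    cases, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        successors = model.get(path[-1], [])
        if not successors:
            cases.append(path)  # leaf reached: one abstract test case
        for nxt in successors:
            queue.append(path + [nxt])
    return cases

model = {"createOrder": ["addItem", "cancel"], "addItem": ["checkout"]}
print(abstract_test_cases(model, "createOrder"))
# → [['createOrder', 'cancel'], ['createOrder', 'addItem', 'checkout']]
```

BFS visits shorter paths first, so shallow scenarios surface before deep ones, a useful property when prioritising which abstract cases to concretise first.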
    Using ALF within the CoSTest process for validation of UML-based conceptual schema
(CEUR-WS, 2017) Granda Juca, María Fernanda; Condori Fernández, Nelly; Vos, Tanja Ernestina
    The Unified Modelling Language (UML) is widely used for modelling software systems and its integration with executable languages, such as the Action Language for Foundational UML (ALF), provides a bridge between the graphical specification techniques used by mainstream software engineers and the precise analysis and validation techniques essential for the model-driven development of information systems. As far as we know, the idea of transforming Conceptual Schemas (CS) based on UML Class Diagrams into ALF to execute systematic ALF-based test cases against these CSs and to report defects by checking logs has not been explored to date. In this paper, we use ALF to create a testing environment to validate requirements and verify some system properties at the CS level. We also report on some of the implementation details and design decisions of our proof-of-concept tool, as well as its limitations and possible use scenarios.
