ExtremeDesignTool/OntologyTesting

Ontology Testing

A master's thesis project at Jönköping University aiming to extend XDTools with testing functionalities. Below are links to documentation and meeting minutes.


Skype meeting 2011-05-19

Participants: Azam, Enrico, Eva, Valentina

Agenda:

  1. Look at the proposed use case description and scenario to understand the overall idea - discuss any missing or strange things.
  2. What is a "test case"? How do we manage and store them? What do they contain? Options: a) sessions in the tool, stored in workspace, b) stored in (separate) ontology module, through annotations etc., c) something else?
  3. Testing is done against the requirements - how should requirements be brought into the tool? Options: a) As annotations of the ontology module? b) Imported from file? c) Entered by the user? d) Something else? How to express and test requirements other than CQs?
  4. Technical integration with XDTools: Where does this appear in the GUI? Buttons, menus etc? New tab, where? Reuse of the selection services for the last use case, but SPARQL is also needed - do we reuse the SPARQL plugin?
  5. Extent of thesis, i.e. first version of the plugin: what to prioritize?

Minutes:

  1. Scenario and use case - overall comments:
    • The scenario should not describe a necessary sequence of actions. Functionalities need to be accessible at any time, regardless of sequence. AP(Azam): Rewrite the scenario so that it is clear that although the user does things in sequence, this is not required.
    • Points of integration with XDTools and NTK need to be clear from the scenario, e.g. "...the testing plugin calls the get-functionality of XDTools..." etc. AP(Azam): Check that all cases where functionality is reused are explicit in the text.
    • Although there may be "complex" user actions and sequences of actions, the plugin also needs to accommodate "light-weight" testing, i.e. a user who just wants to "try out" something quickly. This means that there need to be quick ways to access functionality, enter data and run tests, without for instance going through a complete wizard or entering ALL information about the test or ontology.
    • Analogous to software testing, the principle "one requirement - one test" should be applied, meaning that a test case should test one thing at a time. On the one hand this is related to the previous bullet, i.e. that one should be able to enter just one requirement and test that (without at the moment caring about the rest); on the other hand it impacts the way test cases are stored and used. One test case has only one type, and tests only one requirement.
    • The plugin should (ideally, in the full implementation) support all types of unit tests. So far Azam has worked on the EXD as it was described some time ago, i.e. mostly considering CQs as requirements; since then we have also started to focus on reasoning requirements and contextual statements, so there can be other kinds of relevant tests apart from SPARQL queries. See for instance the testing exercise in the Bologna PhD course: http://ontologydesignpatterns.org/wiki/Training:PhD_Course_on_Computational_Ontologies_%40_University_of_Bologna_2011/Ontology_Testing:Method-based_testing AP(Azam): These should be taken into account in the requirement specification, and the implementation should at least be open to incorporating them in the future (even if they are maybe not implemented now).
    • Testing in general, e.g. in software, contains the following steps (a minimal sketch of these steps applied to an ontology is given after the minutes):
      • precondition (run the test only if this holds, for example, if a certain class exists)
      • fill sandbox (put some data in memory before executing the test)
      • execute the actual script
        • With reasoning?
        • SPARQL query?
      • assert (inspect the result and check the condition)
  2. What is a "test case"?
    • To start with, we could represent a test case simply as an ontology. This makes it easy to handle test cases in the tool, since they will appear like any other ontology. We can use a separate test case vocabulary, analogous to what cpannotationschema.owl does for the content ODPs, to annotate a test case ontology with all the needed information. When a new test case is created, an empty ontology is created, the ontology that it tests is either imported or linked through annotations, and all the needed information can then be filled in manually by the user (into the annotation properties, through some form). Test data can be added to the new test case as ontology elements. (A sketch of such a test case ontology is given after the minutes.)
    • For the future we should be open to an extended notion of "test case", which can also store things like the results of a test run (for future comparison and sharing with other developers), links between test cases (i.e. some kind of test suites), etc.
  3. How do we get and represent the requirements?
    • We should allow at least three types of requirements (+ "other requirements"):
      • CQs
      • Contextual statements
      • Reasoning requirements
      • Other requirements, so that the user can enter whatever they like.
    • There should be three main ways to get the requirements:
      • A form belonging to a test case where the user can write one requirement of one of the above types, e.g. a form with four fields where only one can be filled in, or a radio button to select the type and a single field to enter the requirement.
      • Import from the annotations of an ontology. If the ontology contains annotations, e.g. coversRequirements containing CQs, then a number of test cases can be generated automatically (one for each requirement), and the requirement field is filled in automatically based on the annotations of the ontology (see the import sketch after the minutes).
      • Import from a text file, i.e. the user can provide a file in a certain format containing all the requirements, and one test case is created for each one when they are imported.
  4. Technical considerations:
    • As a technical starting point, the testing plugin can be implemented as an extension of the analyzer in XDTools. It is not necessarily the case that it has to use the same tabs or buttons in the GUI. The fact that an ontology imports the "test annotation schema" can be used by XDTools to distinguish test cases, so that new functionalities (buttons, tabs...) are "opened up" if such an ontology is currently selected. The annotation dialogue can be reused for the first version of the form to enter information about a test case. AP(Azam): Make a mock-up of how this could look in the GUI.
    • A first user workflow that is easy to implement could be the following (proposed by Enrico; a sketch of running such a test case is given after the minutes):
      • Create an ontology for testing, i.e. a test case ontology
      • Annotate it with:
        • ontology to test (ID - URI?)
        • SPARQL query to run
        • run reasoner first (yes/no)
      • A Test button is enabled in the contextual menu
      • Analyzer shows results
    • We do not want XDTools to rely on other NTK plugins (this would prevent us from using it independently of NTK, as a general Eclipse plugin), so if we need SPARQL it is better to use Jena rather than the NTK SPARQL plugin.
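
Illustrative code sketches:

To make the generic testing steps from item 1 concrete, here is a minimal sketch of what a single unit test could look like if implemented on top of the Apache Jena API (as suggested under item 4). The ontology URI, the Person class used as precondition, the test individual and the ASK query are placeholder assumptions, not part of the minutes.

  import org.apache.jena.ontology.OntClass;
  import org.apache.jena.ontology.OntModel;
  import org.apache.jena.query.QueryExecution;
  import org.apache.jena.query.QueryExecutionFactory;
  import org.apache.jena.query.QueryFactory;
  import org.apache.jena.rdf.model.ModelFactory;

  public class CqUnitTestSketch {
      public static void main(String[] args) {
          // Load the ontology under test (placeholder URI).
          OntModel model = ModelFactory.createOntologyModel();
          model.read("http://example.org/ontology-under-test.owl");

          // 1. Precondition: only run the test if the expected class exists.
          OntClass person = model.getOntClass("http://example.org/onto#Person");
          if (person == null) {
              System.out.println("Precondition failed: class Person not found, test skipped.");
              return;
          }

          // 2. Fill sandbox: put some test data in memory before executing the test.
          model.createIndividual("http://example.org/test#alice", person);

          // 3. Execute: run a SPARQL query expressing the competency question.
          String cq = "ASK { ?x a <http://example.org/onto#Person> }";
          try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(cq), model)) {
              // 4. Assert: compare the result with the expected outcome.
              System.out.println(qe.execAsk() ? "Test passed" : "Test failed");
          }
      }
  }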
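
The next sketch shows how a test case could be represented as an ontology annotated with a separate test vocabulary, as discussed under item 2. The namespace http://example.org/testannotationschema.owl# and the property names ontologyUnderTest, sparqlQuery and runReasoner are hypothetical placeholders for a schema that would play the role cpannotationschema.owl plays for the content ODPs.

  import org.apache.jena.ontology.OntModel;
  import org.apache.jena.ontology.Ontology;
  import org.apache.jena.rdf.model.ModelFactory;
  import org.apache.jena.rdf.model.Property;

  public class TestCaseOntologySketch {
      // Hypothetical test annotation schema namespace.
      static final String TEST_NS = "http://example.org/testannotationschema.owl#";

      public static void main(String[] args) {
          OntModel testCase = ModelFactory.createOntologyModel();
          Ontology header = testCase.createOntology("http://example.org/testcases/cq1.owl");

          // Importing the schema is what would let XDTools recognize this ontology as a test case.
          header.addImport(testCase.createResource("http://example.org/testannotationschema.owl"));

          // Illustrative annotations: ontology under test, SPARQL query, reasoner flag.
          Property ontologyUnderTest = testCase.createAnnotationProperty(TEST_NS + "ontologyUnderTest");
          Property sparqlQuery = testCase.createAnnotationProperty(TEST_NS + "sparqlQuery");
          Property runReasoner = testCase.createAnnotationProperty(TEST_NS + "runReasoner");

          header.addProperty(ontologyUnderTest, testCase.createResource("http://example.org/ontology-under-test.owl"));
          header.addProperty(sparqlQuery, "ASK { ?x a <http://example.org/onto#Person> }");
          header.addLiteral(runReasoner, true);

          // Serialize the test case so it can be stored in the workspace like any other ontology.
          testCase.write(System.out, "TURTLE");
      }
  }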
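
For the import of requirements from ontology annotations mentioned under item 3, the generation step could look like the sketch below. It assumes the coversRequirements annotation property from cpannotationschema.owl; the exact URI and how a test case is materialized for each requirement are still open.

  import java.util.ArrayList;
  import java.util.List;
  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.rdf.model.ModelFactory;
  import org.apache.jena.rdf.model.Property;
  import org.apache.jena.rdf.model.RDFNode;
  import org.apache.jena.rdf.model.ResourceFactory;
  import org.apache.jena.rdf.model.StmtIterator;

  public class RequirementImportSketch {
      // Assumed URI of the coversRequirements annotation in cpannotationschema.owl.
      static final Property COVERS_REQUIREMENTS = ResourceFactory.createProperty(
              "http://www.ontologydesignpatterns.org/schemas/cpannotationschema.owl#coversRequirements");

      public static void main(String[] args) {
          Model ontology = ModelFactory.createDefaultModel();
          ontology.read("http://example.org/ontology-under-test.owl");

          // Collect one requirement string per coversRequirements annotation value.
          List<String> requirements = new ArrayList<>();
          StmtIterator it = ontology.listStatements(null, COVERS_REQUIREMENTS, (RDFNode) null);
          while (it.hasNext()) {
              requirements.add(it.nextStatement().getObject().toString());
          }

          // One test case would be generated per requirement; here we only list them.
          for (String req : requirements) {
              System.out.println("Would create a test case for requirement: " + req);
          }
      }
  }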
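
Finally, a sketch of how the workflow proposed under item 4 could be executed: read the annotations of the test case ontology, load the ontology to test, optionally run a reasoner, evaluate the stored SPARQL query and report the outcome (which the analyzer would display). It reuses the hypothetical property names from the test case sketch above and a default OWL reasoner; neither choice is fixed by the minutes.

  import org.apache.jena.query.QueryExecution;
  import org.apache.jena.query.QueryExecutionFactory;
  import org.apache.jena.query.QueryFactory;
  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.rdf.model.ModelFactory;
  import org.apache.jena.rdf.model.RDFNode;
  import org.apache.jena.rdf.model.Statement;
  import org.apache.jena.reasoner.ReasonerRegistry;

  public class TestCaseRunnerSketch {
      static final String TEST_NS = "http://example.org/testannotationschema.owl#";

      public static void main(String[] args) {
          // Load the test case ontology (placeholder URI).
          Model testCase = ModelFactory.createDefaultModel();
          testCase.read("http://example.org/testcases/cq1.owl");

          // Read the illustrative annotation values: ontology to test, query, reasoner flag.
          Statement onto = firstValue(testCase, "ontologyUnderTest");
          Statement query = firstValue(testCase, "sparqlQuery");
          Statement reason = firstValue(testCase, "runReasoner");

          Model target = ModelFactory.createDefaultModel();
          target.read(onto.getObject().toString());

          // Run the reasoner first if the test case asks for it.
          if (reason.getBoolean()) {
              target = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), target);
          }

          // Execute the stored SPARQL query; the analyzer would display this result.
          try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(query.getString()), target)) {
              System.out.println(qe.execAsk() ? "Test passed" : "Test failed");
          }
      }

      // Helper that returns the first value of the given (hypothetical) annotation property.
      static Statement firstValue(Model model, String localName) {
          return model.listStatements(null, model.createProperty(TEST_NS + localName), (RDFNode) null)
                      .nextStatement();
      }
  }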