Open Access

Searching contexts in paraconsistent rough description logic

  • Henrique Viana1,
  • João Alcântara1 and
  • Ana Teresa Martins1
Contributed equally
Journal of the Brazilian Computer Society 2015, 21:7

https://doi.org/10.1186/s13173-015-0031-2

Received: 1 May 2014

Accepted: 18 June 2015

Published: 7 July 2015

Abstract

Background

Query refinement is an interactive process of query modification used to increase or decrease the scope of search results in databases or ontologies.

Methods

We present a method to obtain optimized query refinements of assertion axioms in the paraconsistent rough description logic \(\mathcal {PR_{\textit {ALC}}}\), a four-valued paraconsistent version of the rough \(\mathcal {ALC}\), which is grounded on Belnap’s Logic. This method is based on the notion of the discernibility matrix commonly used in the process of attribute reduction in the rough set theory. It consists of finding sets of concepts which satisfy the rough set approximation operations in assertion axioms. Consequently, these sets of concepts can be used to restrict or relax queries in this logic.

Results

We propose two algorithms to settle this problem of query refinement in \(\mathcal {PR_{\textit {ALC}}}\) and show their complexity results.

Conclusions

The problem of query restrictions using contextual approximation is proved to have exponential time complexity, while the problem of query relaxations has polynomial space complexity.

Keywords

Description logics; Rough sets; Query refinement

Background

In large databases, such as hypertext document collections, specific queries often return too many results, and not all of these answers are relevant. In the opposite (and equally undesirable) situation, queries with rarely used keywords may return too few results. Approaches based on query refinement can then be employed to settle these problems. As we know, query refinement is an interactive process of query modification used to increase or decrease the scope of search results.

One of these approaches is the rough set theory introduced by Z. Pawlak [1] to represent and reason about uncertainty through two operations of set approximation: the lower and the upper approximations. For a set S, its lower approximation gives the set of elements that certainly belong to S, while its upper approximation gives the set of elements that possibly belong to S.
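In concrete terms, the two approximations can be computed over a finite universe partitioned into equivalence classes. The following Python sketch is only illustrative; the set-based encoding is an assumption, not part of the paper:

```python
# Rough set approximations over a finite universe (illustrative sketch).
# The indiscernibility relation is given as a partition into equivalence classes.

def lower_approximation(s, classes):
    """Elements whose whole equivalence class is contained in s: certainly in s."""
    return {x for c in classes for x in c if c <= s}

def upper_approximation(s, classes):
    """Elements whose equivalence class intersects s: possibly in s."""
    return {x for c in classes for x in c if c & s}

# Universe {1..6} partitioned into classes; S = {1, 2, 3}.
classes = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
s = {1, 2, 3}
print(lower_approximation(s, classes))  # {1, 2}: the class {1, 2} lies inside S
print(upper_approximation(s, classes))  # {1, 2, 3, 4}: {1, 2} and {3, 4} meet S
```

Note that the lower approximation is always contained in the set and the upper approximation always contains it, which is exactly what makes them usable as query restriction and relaxation.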

Because of such approximations, the rough set theory may be applied to produce two forms of query refinement: query restriction and query relaxation. A query can be restricted in order to obtain only the necessary results, or it can be relaxed, aiming at increasing the number of its results. In this paper, we propose a new method settled on the rough set theory and description logics (DLs) for automatically generating refinements or related terms to queries.

Some works in the literature address query refinement in DLs, as in [2, 3]. Most of them focus on syntactical manipulations of knowledge bases to increase or decrease the number of results of a query. Unfortunately, these syntactical manipulations may generate results with no connection to the initial query. In [4], by resorting to rough sets as a tool for query refinement in DLs, it is ensured that the obtained results always have some kind of relationship with the initial query, because of the role played by the approximation operator [5]. In addition, the representation of rough sets in DLs brings no growth in the complexity of the corresponding satisfiability problem.

Still, in [4], in order to generate query refinements of assertion axioms, the authors employed the notion of contexts in rough set approximations [6]. However, these contexts must be given in the query. This problem is solved in [7], where a method is proposed to obtain optimized query refinements of assertion axioms in rough \(\mathcal {ALC}\) [6]. We now extend this result to the paraconsistent rough description logic (\(\mathcal {PR_{\textit {ALC}}}\)) [4], a four-valued extension of rough \(\mathcal {ALC}\) whose semantics follows the well-known Belnap’s paraconsistent logic [8]. With that, we increase the expressivity of uncertainty representation, allowing the expression and approximation of unknown and contradictory knowledge bases. This is achieved by two algorithms that search for contexts to be applied in the approximation operations. These algorithms generate optimized solutions when they exist: depending on the considered refinement, sets of minimal or maximal cardinality which satisfy the approximations are chosen (minimal sets for query restrictions and maximal sets for query relaxations). Furthermore, we present complexity results for both algorithms.

The paper is structured as follows. In the “Background” section, we present the basic notions of rough \(\mathcal {ALC}\), an extension of \(\mathcal {ALC}\) with the approximations of the rough set theory, and we introduce its paraconsistent extension, called \(\mathcal {PR_{\textit {ALC}}}\). The “Methods” and “Results and discussion” sections contain the main contributions of this work: a method to obtain optimized refined queries in this logic and the corresponding algorithms (as well as their complexity analysis), respectively. Finally, in the “Conclusions” section, we conclude the paper.

Rough description logic \(\mathcal {ALC}\)

Rough description logics (RDLs) [6, 7, 9–12] introduced a mechanism to model uncertain reasoning by means of concept approximation. They extend DLs with two operations: the lower and upper approximations. Both approximations are conceived to capture uncertainty from an indiscernibility relation (an equivalence relation). We can define the upper approximation of a concept C in \(\mathcal {ALC}\) as the set of individuals that are indiscernible from at least one individual known to belong to C [6]. Similarly, we can define the lower approximation of a concept C as the set of individuals all of whose indiscernible individuals belong to C. In what follows, we introduce some basic characteristics of the rough \(\mathcal {ALC}\): the syntax, the semantics, and, lastly, alternative approaches to represent approximations, which will be used later in the query refinements. For a detailed explanation about DLs and rough sets, see [13] and [14], respectively.

Syntax

As mentioned above, the basic idea behind RDLs is straightforward: we can approximate an uncertain concept C through lower and upper bounds.

Definition 1.

(Concepts) Concepts in rough \(\mathcal {ALC}\) are defined by the following rules, where C and D are concepts, A is an atomic concept, and R is an atomic role:

$$C, D \longrightarrow A \mid \bot \mid \neg C \mid C \sqcap D \mid \exists R.C \mid \overline{C} \mid \underline{C}. $$

The rough \(\mathcal {ALC}\) is based on \(\mathcal {ALC}\) with the addition of the upper and lower approximations as unary concept constructors, i.e., if C is a concept, then \(\overline {C}\) (possibly C) and \(\underline {C}\) (necessarily C) are also concepts. Other concepts can be defined by the following equivalences: $\top \equiv \neg \bot$, $(C \sqcup D) \equiv \neg(\neg C \sqcap \neg D)$, and $\forall R.C \equiv \neg(\exists R.\neg C)$.

The notions of TBox and ABox, as well as the notion of knowledge base in rough \(\mathcal {ALC}\), extend the original notions of \(\mathcal {ALC}\).

Definition 2.

(TBox) A TBox \(\mathcal {T}\) is a finite set of terminological axioms of the form \(C \sqsubseteq D\) or $C \equiv D$.

The first axiom \(C \sqsubseteq D\) (inclusion axiom) means that each individual of C is also an instance of D, while the axiom $C \equiv D$ (equivalence axiom) means that each individual of C is also an instance of D and vice versa.

Definition 3.

(ABox) An ABox \(\mathcal {A}\) consists of the finite set of assertion axioms of the form C(a) or R(a,b).

The concept assertion C(a) denotes that the individual a belongs to the concept C, and the role assertion R(a,b) denotes that the individual a is related to individual b by the role R.

Definition 4.

(Knowledge base) A knowledge base $\mathcal{K} = \langle \mathcal{T}, \mathcal{A} \rangle$ in rough \(\mathcal {ALC}\) consists of a TBox $\mathcal{T}$ and an ABox $\mathcal{A}$.

Semantics

The semantics of rough \(\mathcal {ALC}\) is given by an interpretation $I = (\Delta^{I}, \cdot^{I}, R^{\sim})$, where $\Delta^{I}$ is a domain set, $\cdot^{I}$ is an interpretation function, and $R^{\sim}$ is an equivalence relation on $\Delta^{I}$, which will be used in concept approximations. The function $\cdot^{I}$ maps atomic concepts to subsets of $\Delta^{I}$ and role names to binary relations on the domain $\Delta^{I}$. The interpretation for complex concepts remains the same as in \(\mathcal {ALC}\):
  • For an individual a, $a^{I} \in \Delta^{I}$;

  • For atomic concepts A, $A^{I} \subseteq \Delta^{I}$;

  • For atomic roles R, $R^{I} \subseteq \Delta^{I} \times \Delta^{I}$;

  • $\top^{I} = \Delta^{I}$;

  • $\bot^{I} = \emptyset$;

  • $(\neg A)^{I} = \Delta^{I} \setminus A^{I}$;

  • $(C \sqcap D)^{I} = C^{I} \cap D^{I}$;

  • $(C \sqcup D)^{I} = C^{I} \cup D^{I}$;

  • $(\exists R.C)^{I} = \{ a \in \Delta^{I} \mid \exists b \in \Delta^{I}, (a,b) \in R^{I} \wedge b \in C^{I} \}$;

  • $(\forall R.C)^{I} = \{ a \in \Delta^{I} \mid \forall b \in \Delta^{I}, (a,b) \in R^{I} \rightarrow b \in C^{I} \}$.

For the approximation of concepts, we will have:
  • \((\overline {C})^{I} = \left \{ a \in \Delta ^{I} \mid \exists b \left ((a,b) \in R^{\sim } \wedge b \in C^{I}\right) \right \}\),

  • \((\underline {C})^{I} = \left \{ a \in \Delta ^{I} \mid \forall b \left ((a,b) \in R^{\sim } \rightarrow b \in C^{I}\right)\right \}\).

One of the advantages of this method of modeling uncertainty in concepts is that it does not increase the complexity of inference with respect to the original logic without upper and lower approximations. In fact, reasoning with RDLs can be reduced to reasoning with DLs by translating rough concepts into usual DL concepts with a new relation which is reflexive, symmetric, and transitive. On the other hand, this method does not increase the expressivity with respect to DLs, i.e., the approximations can be simulated by resorting only to DL constructs. A translation function of concepts $\cdot^{t}$ : RDL → DL can be defined to show this equivalence of expressivity (introducing a new role name $R^{\sim}$ for the equivalence relation):
  • $A^{t} = A$, for all atomic concepts A in RDL;

  • \((\overline {C})^{t} = \exists R^{\sim }.C^{t}\) and \((\underline {C})^{t} = \forall R^{\sim }.C^{t}\).

For all other complex concepts, the translation function is applied recursively to their subconcepts. The same definition is extended to inclusion and assertion axioms [11, 12].
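The translation ·t can be sketched over a small term representation. The tuple encoding of concepts and the role name R~ below are assumptions made for illustration:

```python
# Sketch of the translation from rough ALC concepts to plain ALC, replacing
# approximations with quantification over a fresh role "R~". Concepts are
# encoded as nested tuples, e.g. ("upper", ("and", "A", "B")) — an assumed
# encoding, not notation from the paper.

def translate(c):
    if isinstance(c, str):                       # atomic concept: A^t = A
        return c
    op, *args = c
    if op == "upper":                            # (C-bar)^t = ∃R~.C^t
        return ("exists", "R~", translate(args[0]))
    if op == "lower":                            # (C-underline)^t = ∀R~.C^t
        return ("forall", "R~", translate(args[0]))
    # every other constructor is translated recursively in its subconcepts
    return (op, *[translate(a) for a in args])

print(translate(("upper", ("and", "A", ("lower", "B")))))
# ('exists', 'R~', ('and', 'A', ('forall', 'R~', 'B')))
```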

A different way of representing equivalence between individuals was also proposed in [6], in which an alternative approach to approximation in DL was introduced. In this approach, an approximation depends on a specific set of concepts to determine the indiscernibility of the individuals, and no longer on an explicit indiscernibility relation. We will detail this idea in the sequel.

Contextual approximation

In [6], the notion of contextual indiscernibility relation in RDLs was introduced as a way to define an equivalence relation based on indiscernibility criteria. In particular, the notion of context allows specific equivalence relations to be applied in approximations. The great advantage of this approach is that reasoning with equivalence classes is optimized, because the equivalence relation is discovered during inference, unlike in traditional RDLs, where the equivalence relation must be explicitly defined.

We will show the definition of contexts as collections of concepts and then the definitions of lower and upper approximations through a context. First, we present the notion of a projection function in DL [6]:

Definition 5.

(Projection) Let \(\mathcal {K}\) be a knowledge base and A an atomic concept. The projection function \(\pi ^{\mathcal {K}}_{A} : N_{I} \rightarrow \{ 0,*,1 \}\) is defined as
$$\forall a \in N_{I}: \pi^{\mathcal{K}}_{A} (a) = \left\{ \begin{array}{cl} 1, & \ \mathcal{K} \models A(a); \\ 0, & \ \mathcal{K} \models \neg A(a); \\ \text{*}, & \ \text{otherwise}. \end{array} \right.$$
where N I is the set of individuals in the knowledge base \(\mathcal {K}\).
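A minimal sketch of the projection function, assuming (instead of a full DL reasoner) that entailment can be answered by lookup in a set of assertions already closed under inference:

```python
# Sketch of the projection function of Definition 5. Entailment is
# approximated here by membership in a set of assertions assumed to be
# closed under inference — an assumption for illustration only.

def projection(kb, concept, individual):
    """Return 1, 0 or '*' depending on what kb entails about concept(individual)."""
    pos = (concept, individual) in kb          # K |= A(a)
    neg = ("not " + concept, individual) in kb  # K |= ¬A(a)
    if pos:
        return 1
    if neg:
        return 0
    return "*"          # neither A(a) nor ¬A(a) is entailed

kb = {("Expensive", "x1"), ("not Expensive", "x2")}
print(projection(kb, "Expensive", "x1"))  # 1
print(projection(kb, "Expensive", "x2"))  # 0
print(projection(kb, "Expensive", "x3"))  # '*'
```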

A context can be defined as a finite set of relevant features in the form of DL concepts, which may encode a kind of context information for the similarity to be measured [6].

Definition 6.

(Context) A context is a nonempty finite set of atomic concepts $\Sigma = \{A_{1}, \dots, A_{n}\}$.

Two individuals a and b are indiscernible with respect to the context $\Sigma = \{A_{1}, \dots, A_{n}\}$ and a knowledge base \(\mathcal {K}\) if and only if \(\pi ^{\mathcal {K}}_{A_{i}} (a) = \pi ^{\mathcal {K}}_{A_{i}} (b)\) for all \(i \in \{ 1, \dots, n \}\). This naturally induces an equivalence relation:

Definition 7.

(Contextual indiscernibility relation) Let $\Sigma = \{A_{1}, \dots, A_{n}\}$ be a context and \(\mathcal {K}\) a knowledge base. The indiscernibility relation $\mathrm{R}_{\Sigma}$ induced by $\Sigma$ is defined as \(\mathrm {R}_{\Sigma } = \{ (a,b) \in N_{I} \times N_{I} \mid \pi ^{\mathcal {K}}_{A_{i}} (a) = \pi ^{\mathcal {K}}_{A_{i}} (b) ~\text {for all}~ i \in \{ 1, \dots, n \}\}\).

As DLs admit the representation of uncertain information, namely when \(\mathcal {K} \not \models A(a)\) and \(\mathcal {K} \not \models \neg A(a)\), a similarity relation (instead of an equivalence relation) may be more adequate to model relationships between individuals, because it permits grouping individuals that are close, but not necessarily indiscernible. Formally, a binary relation is a similarity relation if it is at least reflexive (an equivalence relation is reflexive, symmetric, and transitive). We introduce the following similarity relation based on [15], which loosens the original condition of indiscernibility:

Definition 8.

(Contextual similarity relation) Let $\Sigma = \{A_{1}, \dots, A_{n}\}$ be a context and \(\mathcal {K}\) a knowledge base. The similarity relation $\mathrm{S}_{\Sigma}$ induced by $\Sigma$ is defined as
$$\mathrm{S}_{\Sigma} = \{ (a,b) \in N_{I} \times N_{I} \mid \text{for all}~ i \in \{ 1, \dots, n \},\ \pi^{\mathcal{K}}_{A_{i}} (a) = \pi^{\mathcal{K}}_{A_{i}} (b) ~\text{or}~ \pi^{\mathcal{K}}_{A_{i}} (a) = * ~\text{or}~ \pi^{\mathcal{K}}_{A_{i}} (b) = * \}. $$
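Both relations can be sketched over precomputed projection values. The dictionary encoding of π below is an assumption for illustration:

```python
# Sketch of the contextual indiscernibility relation R_Σ (Definition 7) and
# the similarity relation S_Σ (Definition 8), built on projection values
# stored in a table {(concept, individual): value} — an assumed encoding.

from itertools import product

def indiscernible(pi, sigma, a, b):
    """R_Σ: a and b agree on the projection of every concept in the context."""
    return all(pi[(A, a)] == pi[(A, b)] for A in sigma)

def similar(pi, sigma, a, b):
    """S_Σ: agreement is loosened — '*' (unknown) matches anything."""
    return all(pi[(A, a)] == pi[(A, b)] or "*" in (pi[(A, a)], pi[(A, b)])
               for A in sigma)

individuals = ["a", "b", "c"]
pi = {("GL", "a"): 1, ("GL", "b"): 1, ("GL", "c"): 0,
      ("B", "a"): 1, ("B", "b"): "*", ("B", "c"): 1}
sigma = ["GL", "B"]
print([(x, y) for x, y in product(individuals, individuals)
       if similar(pi, sigma, x, y)])
# [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b'), ('c', 'c')]
```

Note that a and b are S-similar (the unknown value of B for b matches anything) but not R-indiscernible.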

The contextual approximations are defined below.

Definition 9.

(Contextual upper/lower approximation) Let Σ be a context, C a concept, I an interpretation, and $\text{Sim} \in \{\mathrm{R}, \mathrm{S}\}$. The contextual upper and lower approximations of C w.r.t. Σ are defined as
  • \(\left (\overline {C}^{\text {Sim}_{\Sigma }}\right)^{I} = \left \{ a \in \Delta ^{I} \mid \exists b ((a,b) \in \text {Sim}^{I}_{\Sigma } \wedge b \in C^{I}) \right \}\),

  • \(\left (\underline {C}_{\text {Sim}_{\Sigma }}\right)^{I} = \left \{ a \in \Delta ^{I} \mid \forall b ((a,b) \in \text {Sim}^{I}_{\Sigma } \rightarrow b \in C^{I}) \right \}\).
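A sketch of these operations, assuming the similarity relation is given extensionally as a set of pairs:

```python
# Sketch of contextual approximations (Definition 9): given a similarity
# relation as a set of pairs, approximate the extension of a concept.
# Names and encodings are illustrative.

def contextual_upper(ext, sim, universe):
    """Individuals similar to at least one member of the concept's extension."""
    return {a for a in universe
            if any((a, b) in sim and b in ext for b in universe)}

def contextual_lower(ext, sim, universe):
    """Individuals all of whose similar individuals are in the extension."""
    return {a for a in universe
            if all(b in ext for b in universe if (a, b) in sim)}

universe = {"a", "b", "c"}
sim = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "a")}
ext = {"a"}                                   # individuals asserted to be in C
print(contextual_upper(ext, sim, universe))   # {'a', 'b'}: b is similar to a
print(contextual_lower(ext, sim, universe))   # set(): a is also similar to b, not in C
```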

We can generalize the definitions of contextual approximations by using the notion of k-step relation [16].

Definition 10.

(k-step relation) Let $\Delta^{I}$ be the nonempty universe set, S a binary relation on $\Delta^{I}$, and k a natural number. The k-step relation of S is defined as:

  • $S^{1} = S$;

  • $S^{k+1} = S^{k} \cup \{(x,y) \in \Delta^{I} \times \Delta^{I} \mid \text{there exist}~ y_{1}, y_{2}, \dots, y_{k} \in \Delta^{I} ~\text{such that}~ x\,S\,y_{1},\ y_{1}\,S\,y_{2},\ \dots,\ y_{k}\,S\,y \}$, for $k \geq 1$.

The idea behind the k-step relation is that when we use a similarity relation, e.g., $\mathrm{S}_{\Sigma}$, it may happen that $\mathrm{S}_{\Sigma}^{1} \subset \mathrm{S}_{\Sigma}^{2} \subset \dots \subset \mathrm{S}_{\Sigma}^{n}$. As a consequence, successive applications of the approximation operations may yield different results.
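The k-step relation can be sketched by iterated relational composition; the encoding as a set of pairs is an assumption:

```python
# Sketch of the k-step relation of Definition 10, computed by iterated
# composition of a relation given as a set of pairs.

def compose(r, s):
    """Relational composition: (x, z) whenever x r y and y s z for some y."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def k_step(s, k):
    """S^k: all pairs connected by a chain of at most k steps of S."""
    result, power = set(s), set(s)
    for _ in range(k - 1):
        power = compose(power, s)
        result |= power
    return result

# A chain a-b-c-d under a (non-transitive) similarity relation:
s = {("a", "b"), ("b", "c"), ("c", "d")}
print(k_step(s, 1))  # the relation itself
print(k_step(s, 2))  # adds ('a', 'c') and ('b', 'd')
print(k_step(s, 3))  # adds ('a', 'd') as well
```

The strict growth of S^1 ⊂ S^2 ⊂ S^3 on this chain is exactly why repeated approximations can give different answers.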

Definition 11.

(Loose and tight approximations) The loose upper approximation and the tight lower approximation of a concept C w.r.t. the similarity relation Sim in n steps are denoted by \(\overline {C}^{(\text {Sim},n)}\) and \(\underline {C}_{(\text {Sim},n)}\), respectively, and defined as
  • \(\left (\overline {C}^{(\text {Sim}_{\Sigma },n)}\right)^{I} = \left \{ a \mid \exists b \left ((a,b) \in \left (\text {Sim}^{n}_{\Sigma }\right)^{I} \wedge b \in C^{I}\right) \right \}\),

  • \(\left (\underline {C}_{(\text {Sim}_{\Sigma },n)}\right)^{I} = \left \{ a \mid \forall b \left ((a,b) \in \left (\text {Sim}^{n}_{\Sigma }\right)^{I} \rightarrow b \in C^{I}\right) \right \}\).

The contextual approximations will play a central role in the process of query refinement. In this paper, refining a concept means applying a lower (restriction of a concept) or an upper (relaxation of a concept) approximation. Hence, one first needs to identify a context that can be used in the approximations. This process of finding contexts, along with other relevant questions, will be tackled and exemplified in the next sections.

Paraconsistent rough description logic \(\mathcal {PR_{\textit {ALC}}}\)

We begin this section by presenting a brief explanation about Belnap’s logic. Subsequently, we will show a paraconsistent extension of rough \(\mathcal {ALC}\), based on Belnap’s semantics, called \(\mathcal {PR_{\textit {ALC}}}\).

Belnap’s logic

Belnap’s logic [8] has four truth values instead of the two classical truth values true and false. These values are t (true), f (false), u (unknown), and i (inconsistent). The truth value i represents contradictory information, while u means neither true nor false, i.e., the absence of any information about truth or falsity.

Syntactically, Belnap’s logic is very similar to classical logic. However, it introduces different notions of implication; we will show three of them from the literature. The connectives used here are negation (¬), disjunction (∨), conjunction (∧), material implication (⊃), internal implication (→), and strong implication (⇒) [17].

The interpretations of formulas are mappings from the set of formulas to one of the four possible truth values, respecting the truth tables of the connectives detailed in Tables 1 and 2. The semantics of the implications are:
  • x ⊃ y is defined as ¬x ∨ y;

  • x → y is evaluated to y if x ∈ {t, i} and to t if x ∈ {f, u};

  • x ⇒ y is defined as (x → y) ∧ (¬y → ¬x).

    Table 1

    Truth tables for ∧, ∨, and ¬

    ∧ | f  u  i  t      ∨ | f  u  i  t      ¬ |
    --|-----------      --|-----------      --|--
    f | f  f  f  f      f | f  u  i  t      f | t
    u | f  u  f  u      u | u  u  t  t      u | u
    i | f  f  i  i      i | i  t  i  t      i | i
    t | f  u  i  t      t | t  t  t  t      t | f

    Table 2

    Truth tables for ⊃, →, and ⇒

    ⊃ | f  u  i  t      → | f  u  i  t      ⇒ | f  u  i  t
    --|-----------      --|-----------      --|-----------
    f | t  t  t  t      f | t  t  t  t      f | t  t  t  t
    u | u  u  t  t      u | t  t  t  t      u | u  t  u  t
    i | i  t  i  t      i | f  u  i  t      i | f  u  i  t
    t | f  u  i  t      t | f  u  i  t      t | f  u  f  t
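The tables above can be sketched computationally, using a standard encoding of the four values as sets of classical truth values (an assumption of this sketch, not notation from the paper):

```python
# Sketch of Belnap's connectives following the four-valued truth tables.
# Each value is encoded by the set of classical values it asserts:
# t = {True}, f = {False}, i = {True, False}, u = {} (an assumed encoding).

VALUES = {"t": frozenset({True}), "f": frozenset({False}),
          "i": frozenset({True, False}), "u": frozenset()}
NAMES = {v: k for k, v in VALUES.items()}

def neg(x):
    return NAMES[frozenset(not b for b in VALUES[x])]

def conj(x, y):
    s = set()
    if True in VALUES[x] and True in VALUES[y]:
        s.add(True)                      # told true only if both are told true
    if False in VALUES[x] or False in VALUES[y]:
        s.add(False)                     # told false if either is told false
    return NAMES[frozenset(s)]

def disj(x, y):
    return neg(conj(neg(x), neg(y)))     # De Morgan duality

def material(x, y):
    return disj(neg(x), y)               # x ⊃ y is ¬x ∨ y

def internal(x, y):
    return y if x in ("t", "i") else "t"

def strong(x, y):
    return conj(internal(x, y), internal(neg(y), neg(x)))

print(conj("u", "i"), disj("u", "i"), material("t", "u"), strong("t", "i"))
# f t u f — matching the u∧i, u∨i, t⊃u, and t⇒i table entries
```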

Models are defined as follows, in which {t,i} are the designated values (i.e., the truth values considered satisfiable with respect to the consequence relation defined below).

Definition 12.

(Model) Let I be a four-valued interpretation, Γ a set of formulas, and φ a formula in Belnap’s logic. We say that I is a four-valued model of φ if and only if φ I {t,i}. I is a four-valued model of Γ if and only if I is a four-valued model of each formula in Γ. We say Γ entails φ, written Γφ, if and only if every four-valued model of Γ is also a four-valued model of φ.

Defining \(\mathcal {PR_{\textit {ALC}}}\)

Now, we will describe the syntax and the semantics of the paraconsistent rough description logic \(\mathcal {ALC}\) (\(\mathcal {PR_{\textit {ALC}}}\)). Such a logic is an extension of the description logic \(\mathcal {ALC}_{4}\) [17], with the addition of the lower and upper approximation operators.

Syntactically, \(\mathcal {PR_{\textit {ALC}}}\) differs little from rough \(\mathcal {ALC}\). Complex concepts and assertion axioms are defined in the same way:
$$C, D \longrightarrow A \mid \bot \mid \neg C \mid C \sqcap D \mid \exists R.C \mid \overline{C} \mid \underline{C}. $$
For concept inclusion axioms, we have three types of inclusions, relative to the three implications showed before:
$$\begin{array}{*{20}l} &C \mapsto D (\text{material~inclusion~axiom}),\\ &\,\,C \sqsubset D (\text{internal~inclusion~axiom}),\\ &\,\,\,C \rightarrow D (\text{strong~inclusion~axiom}). \end{array} $$

As usual, semantically, interpretations map individuals to elements of the domain of interpretation. For concepts and atomic roles, however, some changes in the notion of interpretation are needed in order to reason with inconsistencies.

Intuitively, in a four-valued logic, we need to consider four situations that can occur in terms of membership of an individual in a concept: (1) we know that it belongs to the set, (2) we know that it does not belong to the set, (3) we have no knowledge of whether it belongs or not, and (4) we have contradictory information, stating that the individual both belongs and does not belong to the concept. There are many equivalent ways to formalize this notion; one of them is described in the sequel.

For a given domain $\Delta^{I}$ and a concept C, an interpretation on $\Delta^{I}$ maps C to a pair $\left< P,N \right>$ of (not necessarily disjoint) subsets of $\Delta^{I}$, in which P is the set of elements known to belong to C (positive information), while N is the set of elements known not to belong to C (negative information). Consider the functions $\text{proj}^{+}(\cdot)$ and $\text{proj}^{-}(\cdot)$, the projections of the positive and negative parts, respectively, defined as
$$\text{proj}^{+} \left< P,N \right> = P \quad \text{and} \quad \text{proj}^{-} \left< P,N \right> = N. $$
In more formal terms, we can define a four-valued interpretation as displayed below:

Definition 13.

(Four-valued interpretation) A four-valued interpretation is a triple $I = (\Delta^{I}, \cdot^{I}, R^{\sim})$ with $\Delta^{I}$ as domain, $R^{\sim}$ an equivalence relation on $\Delta^{I}$, and $\cdot^{I}$ a function mapping individuals to elements of $\Delta^{I}$, concepts to pairs of subsets of $\Delta^{I}$, and roles to pairs of subsets of $\Delta^{I} \times \Delta^{I}$, such that the conditions below are satisfied:
  • For atomic concepts A, $A^{I} = \left< P, N \right>$, such that $P, N \subseteq \Delta^{I}$;

  • For atomic roles R, $R^{I} = \left< P_{1} \times P_{2}, N_{1} \times N_{2} \right>$, such that $P_{1} \times P_{2}, N_{1} \times N_{2} \subseteq \Delta^{I} \times \Delta^{I}$;

  • $\top^{I} = \left< \Delta^{I}, \emptyset \right>$;

  • $\bot^{I} = \left< \emptyset, \Delta^{I} \right>$;

  • $(\neg C)^{I} = \left< N, P \right>$ if $C^{I} = \left< P, N \right>$;

  • $(C \sqcap D)^{I} = \left< P_{1} \cap P_{2}, N_{1} \cup N_{2} \right>$, if $C^{I} = \left< P_{1}, N_{1} \right>$ and $D^{I} = \left< P_{2}, N_{2} \right>$;

  • $(C \sqcup D)^{I} = \left< P_{1} \cup P_{2}, N_{1} \cap N_{2} \right>$, if $C^{I} = \left< P_{1}, N_{1} \right>$ and $D^{I} = \left< P_{2}, N_{2} \right>$;

  • $(\exists R.C)^{I} = \left< \{ x \mid \exists y, (x,y) \in \text{proj}^{+}(R^{I}) \wedge y \in \text{proj}^{+}(C^{I}) \}, \{ x \mid \forall y, (x,y) \in \text{proj}^{+}(R^{I}) \rightarrow y \in \text{proj}^{-}(C^{I}) \} \right>$;

  • $(\forall R.C)^{I} = \left< \{ x \mid \forall y, (x,y) \in \text{proj}^{+}(R^{I}) \rightarrow y \in \text{proj}^{+}(C^{I}) \}, \{ x \mid \exists y, (x,y) \in \text{proj}^{+}(R^{I}) \wedge y \in \text{proj}^{-}(C^{I}) \} \right>$;

  • $(\overline{C})^{I} = \left< \{ x \mid \exists y, (x,y) \in R^{\sim} \wedge y \in \text{proj}^{+}(C^{I}) \}, \{ x \mid \forall y, (x,y) \in R^{\sim} \rightarrow y \in \text{proj}^{-}(C^{I}) \} \right>$;

  • $(\underline{C})^{I} = \left< \{ x \mid \forall y, (x,y) \in R^{\sim} \rightarrow y \in \text{proj}^{+}(C^{I}) \}, \{ x \mid \exists y, (x,y) \in R^{\sim} \wedge y \in \text{proj}^{-}(C^{I}) \} \right>$.

Note that the conditions above for the role restrictions are described in a way that preserves the logical equivalences $\neg(\exists R.C) = \forall R.(\neg C)$ and $\neg(\forall R.C) = \exists R.(\neg C)$. This was the convention adopted in [17] to deal with role restrictions, and it allows a direct translation to \(\mathcal {ALC}\). Note that in this language, only the positive part of the interpretation of a role is required, because the language involves only atomic roles.

Obviously, under the restrictions that $P \cap N = \emptyset$ and $P \cup N = \Delta^{I}$, the four-valued interpretations collapse into the usual two-valued case. The correspondence between the truth values and the membership of concepts and roles is described in the following way: let $a, b \in \Delta^{I}$, C be a concept name, and R a role name. We have that:
  • $C^{I}(a) = \textbf{t}$ iff $a^{I} \in \text{proj}^{+}(C^{I})$ and $a^{I} \notin \text{proj}^{-}(C^{I})$;

  • $C^{I}(a) = \textbf{f}$ iff $a^{I} \notin \text{proj}^{+}(C^{I})$ and $a^{I} \in \text{proj}^{-}(C^{I})$;

  • $C^{I}(a) = \textbf{i}$ iff $a^{I} \in \text{proj}^{+}(C^{I})$ and $a^{I} \in \text{proj}^{-}(C^{I})$;

  • $C^{I}(a) = \textbf{u}$ iff $a^{I} \notin \text{proj}^{+}(C^{I})$ and $a^{I} \notin \text{proj}^{-}(C^{I})$;

  • $R^{I}(a,b) = \textbf{t}$ iff $(a^{I},b^{I}) \in \text{proj}^{+}(R^{I})$ and $(a^{I},b^{I}) \notin \text{proj}^{-}(R^{I})$;

  • $R^{I}(a,b) = \textbf{f}$ iff $(a^{I},b^{I}) \notin \text{proj}^{+}(R^{I})$ and $(a^{I},b^{I}) \in \text{proj}^{-}(R^{I})$;

  • $R^{I}(a,b) = \textbf{i}$ iff $(a^{I},b^{I}) \in \text{proj}^{+}(R^{I})$ and $(a^{I},b^{I}) \in \text{proj}^{-}(R^{I})$;

  • $R^{I}(a,b) = \textbf{u}$ iff $(a^{I},b^{I}) \notin \text{proj}^{+}(R^{I})$ and $(a^{I},b^{I}) \notin \text{proj}^{-}(R^{I})$.
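This correspondence can be sketched directly, assuming concepts are interpreted as pairs of extensions (names below are illustrative):

```python
# Sketch of the correspondence between pair-based interpretations and the
# four truth values: a concept is interpreted as a pair (P, N) of positive
# and negative extensions.

def truth_value(interp, a):
    """Map membership of a in the pair (P, N) to one of t, f, i, u."""
    p, n = interp
    if a in p and a not in n:
        return "t"
    if a not in p and a in n:
        return "f"
    if a in p and a in n:
        return "i"       # contradictory: asserted both ways
    return "u"           # no information either way

expensive = ({"x1", "x6"}, {"x2", "x6"})   # x6 is asserted both ways
print(truth_value(expensive, "x1"))  # t
print(truth_value(expensive, "x6"))  # i
print(truth_value(expensive, "x9"))  # u
```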

Lastly, the semantics of the different kinds of axioms follows.
  • $I \models C \mapsto D$ iff $\Delta^{I} \setminus \text{proj}^{-}(C^{I}) \subseteq \text{proj}^{+}(D^{I})$;

  • \(I \models C \sqsubset D\) iff $\text{proj}^{+}(C^{I}) \subseteq \text{proj}^{+}(D^{I})$;

  • $I \models C \rightarrow D$ iff $\text{proj}^{+}(C^{I}) \subseteq \text{proj}^{+}(D^{I})$ and $\text{proj}^{-}(D^{I}) \subseteq \text{proj}^{-}(C^{I})$;

  • $I \models C(a)$ iff $a^{I} \in \text{proj}^{+}(C^{I})$;

  • $I \models R(a,b)$ iff $(a^{I},b^{I}) \in \text{proj}^{+}(R^{I})$.
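The satisfaction conditions for the three inclusion axioms can be sketched over the same pair-based encoding (the concrete sets below are illustrative):

```python
# Sketch of the satisfaction conditions for the three inclusion axioms,
# over pair-based concept interpretations (proj+, proj-) and an explicit
# finite domain.

def sat_material(domain, c, d):   # C ↦ D: Δ \ proj-(C) ⊆ proj+(D)
    return (domain - c[1]) <= d[0]

def sat_internal(c, d):           # C ⊏ D: proj+(C) ⊆ proj+(D)
    return c[0] <= d[0]

def sat_strong(c, d):             # C → D: proj+(C) ⊆ proj+(D), proj-(D) ⊆ proj-(C)
    return c[0] <= d[0] and d[1] <= c[1]

domain = {"x1", "x2", "x3"}
c = ({"x1"}, {"x2"})              # (proj+, proj-) of C
d = ({"x1", "x3"}, {"x2"})        # (proj+, proj-) of D
print(sat_internal(c, d))          # True
print(sat_strong(c, d))            # True
print(sat_material(domain, c, d))  # True
```

Strong inclusion is the most demanding: it propagates positive information forward and negative information backward.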

We say that a four-valued interpretation I satisfies a knowledge base \(\mathcal {K}\) (i.e., I is a model of \(\mathcal {K}\)) if and only if it satisfies each inclusion and assertion axiom in \(\mathcal {K}\). A knowledge base \(\mathcal {K}\) is satisfiable (respectively, unsatisfiable) if and only if there exists (respectively, there does not exist) a model for \(\mathcal {K}\).

Considering the complexity of satisfiability of \(\mathcal {PR_{\textit {ALC}}}\), it was proved in [17] that the complexity of the satisfiability decision problem for the paraconsistent version of \(\mathcal {ALC}\) is equivalent to the complexity of the same problem for \(\mathcal {ALC}\). This result shows that paraconsistent reasoning is not more expressive than classical two-valued reasoning and can be simulated in a two-valued \(\mathcal {ALC}\) without an increase in complexity. To show this result, a polynomial translation from a paraconsistent knowledge base to a knowledge base of \(\mathcal {ALC}\) was described, which preserves all of its inference properties. Accordingly, we can easily show that the satisfiability decision problem for \(\mathcal {PR_{\textit {ALC}}}\) has the same complexity as this problem for \(\mathcal {ALC}\), through the same translations presented in [11, 12, 17].

As in rough \(\mathcal {ALC}\), we can also define the contextual approximation related to \(\mathcal {PR_{\textit {ALC}}}\).

Definition 14.

(Four-valued contextual approximations) Let Σ be a context, C be a concept, I be a four-valued interpretation, and Sim be a similarity relation. The contextual upper and lower approximations of C with respect to Σ are defined as:

  • \((\overline {C}^{\text {Sim}_{\Sigma }})^{I} = \left < \{ x \mid \exists y, (x,y) \in \text {Sim}^{I}_{\Sigma } \wedge y \in \text {proj}^{+}(C^{I}) \}, \right.\)

    \(\left. \{ x \mid \forall y, (x,y) \in \text {Sim}^{I}_{\Sigma } \rightarrow y \in \text {proj}^{-}(C^{I})\} \right >\),

  • \((\underline {C}_{\text {Sim}_{\Sigma }})^{I} = \left < \{ x \mid \forall y, (x,y) \in \text {Sim}^{I}_{\Sigma } \rightarrow y \in \text {proj}^{+}(C^{I}) \}, \right.\)

    \(\left. \{ x \mid \exists y, (x,y) \in \text {Sim}^{I}_{\Sigma } \wedge y \in \text {proj}^{-}(C^{I})\} \right >\).

Furthermore, due to the possibility of representing a different notion of uncertainty (contradictory information), we can also develop different similarity relations between individuals. In particular, we will work here with a specific similarity relation described in [15]. But first, we need to slightly change the definition of the projection function described before to adapt it to the four-valued interpretation.

Definition 15.

(Four-valued projection) Let \(\mathcal {K}\) be a knowledge base and A an atomic concept. The projection function \(\pi ^{\mathcal {K}}_{A} : N_{I} \rightarrow \{ \textbf {t},\textbf {f},\textbf {u},\textbf {i} \}\) is defined as
$$\begin{array}{*{20}l} \forall a \in N_{I}: \pi^{\mathcal{K}}_{A} (a) = \left\{ \begin{array}{cl} \textbf{t}, & \ \mathcal{K} \models A(a) \text{ and}~ \mathcal{K} \not\models \neg A(a); \\ \textbf{f}, & \ \mathcal{K} \models \neg A(a) \text{ and}~ \mathcal{K} \not\models A(a); \\ \textbf{u}, & \ \mathcal{K} \not\models A(a) \text{ and}~ \mathcal{K} \not\models \neg A(a); \\ \textbf{i}, & \ \mathcal{K} \models A(a) \text{ and}~ \mathcal{K} \models \neg A(a). \\ \end{array} \right. \end{array} $$

where N I is the set of individuals contained in the knowledge base \(\mathcal {K}\).
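A sketch of the four-valued projection, again approximating entailment by lookup in a closed set of assertions (an assumption of this sketch):

```python
# Sketch of the four-valued projection of Definition 15. Entailment is
# approximated by membership in a set of assertions assumed to be closed
# under inference — an assumption for illustration only.

def projection4(kb, concept, individual):
    pos = (concept, individual) in kb           # K |= A(a)
    neg = ("not " + concept, individual) in kb  # K |= ¬A(a)
    if pos and neg:
        return "i"       # contradictory information
    if pos:
        return "t"
    if neg:
        return "f"
    return "u"           # no information either way

kb = {("B", "x2"), ("not B", "x2"), ("B", "x1")}
print(projection4(kb, "B", "x2"))  # i
print(projection4(kb, "B", "x1"))  # t
print(projection4(kb, "B", "x5"))  # u
```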

Now the uncertainty of a concept can be modeled in two ways: as a contradiction or as unknown information. We can then define the similarity relation P Σ :

Definition 16.

(Similarity relation—unknown and inconsistent concepts) Let Σ={A 1,…,A n } be a context. The similarity relation P Σ induced by Σ is defined as follows:
$$\begin{aligned} \mathrm{P}_{\Sigma} = \{ (a,b) \in N_{I} \times N_{I} \mid~ &\text{for all}~ i \in \{ 1, \dots, n \}: \pi^{\mathcal{K}}_{A_{i}} (a) = \pi^{\mathcal{K}}_{A_{i}} (b) \\ &\text{or}~ \pi^{\mathcal{K}}_{A_{i}} (a) = \textbf{u} ~\text{or}~ \pi^{\mathcal{K}}_{A_{i}} (b) = \textbf{u} \\ &\text{or}~ (\pi^{\mathcal{K}}_{A_{i}} (a) = \textbf{t} ~\text{and}~ \pi^{\mathcal{K}}_{A_{i}} (b) = \textbf{i}) \\ &\text{or}~ (\pi^{\mathcal{K}}_{A_{i}} (a) = \textbf{f} ~\text{and}~ \pi^{\mathcal{K}}_{A_{i}} (b) = \textbf{i}) \}. \end{aligned}$$

In P Σ , it is assumed that information can be partially described because of our incomplete or contradictory knowledge [18]. From this point of view, an element a can be considered similar to an element b if the information contained in a is also contained in b. Thus, for a concept A such that \(\pi ^{\mathcal {K}}_{A} (a) = \textbf {t}\) and \(\pi ^{\mathcal {K}}_{A} (b) = \textbf {i}\), the individual a is similar to b because the truth value t is contained in i. Note that the converse does not hold: b is not similar to a according to P Σ , because not every piece of information in b is contained in a. We highlight that the similarity relations introduced in this work are two-valued, but nothing prevents one from creating four-valued similarity relations.
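This information-containment reading of P Σ can be sketched as follows; the encoding of projection values is assumed as before:

```python
# Sketch of the similarity relation P_Σ (Definition 16): a is similar to b
# on a concept when a's information is contained in b's. Note the relation
# is not symmetric. Projection values are precomputed in a table.

def contains(va, vb):
    """Disjuncts of Definition 16: equal values, an unknown on either side,
    or t/f contained in the contradictory value i."""
    return (va == vb or va == "u" or vb == "u"
            or (va == "t" and vb == "i") or (va == "f" and vb == "i"))

def p_similar(pi, sigma, a, b):
    return all(contains(pi[(A, a)], pi[(A, b)]) for A in sigma)

pi = {("B", "a"): "t", ("B", "b"): "i", ("B", "c"): "f"}
sigma = ["B"]
print(p_similar(pi, sigma, "a", "b"))  # True: t is contained in i
print(p_similar(pi, sigma, "b", "a"))  # False: not all of b's information is in a
print(p_similar(pi, sigma, "c", "b"))  # True: f is contained in i
```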

Example.

[4] (Query relaxation/restriction) Let $\{x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, x_{6}, x_{7}\}$ be a set of individuals representing houses; GL = GoodLocation, B = Basement, F = Fireplace, E = Expensive, C = Cheap, and M = Medium be concepts; Σ={GL,B,F} be a context; and \(\mathcal {A}\) be an ABox such that
$${} \begin{aligned} &GL(x_{1}); \neg GL(x_{2}); GL(x_{3}); \neg GL(x_{4}); GL(x_{6}); \neg GL(x_{7});\\ &B(x_{1}); B(x_{2}); \neg B(x_{2}); \neg B(x_{3}); B(x_{4}); B(x_{6}); \neg B(x_{6}); B(x_{7});\\ &\qquad\quad F(x_{1}); \neg F(x_{2}); \neg F(x_{4}); F(x_{5}); F(x_{6}); F(x_{7}); \\ &\neg M(x_{1}); \neg M(x_{2}); M(x_{3}); M(x_{4}); M(x_{5}); \neg M(x_{6}); M(x_{7});\\ &E(x_{1}); \neg E(x_{2}); \neg E(x_{3}); \neg E(x_{4}); \neg E(x_{5}); E(x_{6}); \neg E(x_{7});\\ &\neg C(x_{1}); C(x_{2}); \neg C(x_{3}); \neg C(x_{4}); \neg C(x_{5}); \neg C(x_{6}); \neg C(x_{7}). \end{aligned} $$
First, we will consider an example using query relaxation. Suppose that we want to know which houses are expensive. We have that
$$\begin{aligned} &\mathcal{A} \models \textit{E}(x_{1}), \mathcal{A} \not\models \textit{E}(x_{2}), \mathcal{A} \not\models \textit{E}(x_{3}), \mathcal{A} \not\models \textit{E}(x_{4}),\\ &\mathcal{A} \not\models \textit{E}(x_{5}) \mathrm{~and}~ \mathcal{A} \models \textit{E}(x_{6}). \end{aligned} $$
This means that x 1 and x 6 are the only expensive houses. But suppose that we want to know which houses are possibly expensive (houses that are not expensive but share some features of expensive houses) according to the context Σ. Relaxing this query (we will use the similarity relation S Σ in this example), we shall have
$$\mathcal{A} \models \overline{E}^{\mathrm{S}_{\Sigma}}(x_{1}), \mathcal{A} \models \overline{E}^{\mathrm{S}_{\Sigma}}(x_{5}) \mathrm{~and}~ \mathcal{A} \models \overline{E}^{\mathrm{S}_{\Sigma}}(x_{6}). $$
Thus, x 1, x 5, and x 6 are possibly expensive houses. Observe that x 5 is possibly expensive because it is similar to x 1, which is evaluated as expensive. By employing again the query relaxation in this ABox, we shall have
$$\begin{aligned} &\mathcal{A} \models \overline{\overline{E}^{\mathrm{S}_{\Sigma}}}^{\mathrm{S}_{\Sigma}}(x_{1}), \mathcal{A} \models \overline{\overline{E}^{\mathrm{S}_{\Sigma}}}^{\mathrm{S}_{\Sigma}} (x_{3}), \mathcal{A} \models \overline{\overline{E}^{\mathrm{S}_{\Sigma}}}^{\mathrm{S}_{\Sigma}} (x_{5}) \mathrm{~and} \\&\mathcal{A} \models \overline{\overline{E}^{\mathrm{S}_{\Sigma}}}^{\mathrm{S}_{\Sigma}} (x_{6}). \end{aligned} $$
We have now that x 3 is possibly a possibly expensive house, because x 3 is similar to x 5 according to the relation S Σ , i.e., x 3 has fewer features of expensive houses than x 5. In the sequel, we show another example related to query refinement, but now using query restriction: suppose that we want to know which houses in the ABox have medium value. We shall have then
$${} \begin{aligned} &\mathcal{A} \not\models \textit{M}(x_{1}), \mathcal{A} \not\models \textit{M}(x_{2}), \mathcal{A} \models \textit{M}(x_{3}), \mathcal{A} \models \textit{M}(x_{4}) \mathrm{~and}\\& \mathcal{A} \models \textit{M}(x_{5}). \end{aligned} $$
That is, x 3,x 4, and x 5 are houses of medium value. Using the query restriction with the context Σ, we will conclude that
$$\mathcal{A} \models \underline{M}_{\mathrm{S}_{\Sigma}}(x_{3}), \mathrm{~but~} \mathcal{A} \not\models \underline{M}_{\mathrm{S}_{\Sigma}}(x_{4}) \mathrm{~and~} \mathcal{A} \not\models \underline{M}_{\mathrm{S}_{\Sigma}}(x_{5}). $$
The individuals x 4 and x 5 do not have necessarily a medium value. If we apply the query restriction again, we will observe that
$$\mathcal{A} \not\models \underline{\underline{M}_{\mathrm{S}_{\Sigma}}}_{\mathrm{S}_{\Sigma}} (x_{3}). $$
Therefore, x 3 does not necessarily have a necessarily medium value, i.e., x 3 is similar to some house that necessarily does not have medium value. Focusing now on the similarity relations for unknown and inconsistent information, we can show that
$$\mathcal{A} \not\models \textit{C}(x_{4}) \mathrm{~and~} \mathcal{A} \not\models \overline{C}^{\mathrm{S}_{\Sigma}}(x_{4}), \mathrm{~but~} \mathcal{A} \models \overline{C}^{\mathrm{P}_{\Sigma}}(x_{4}). $$
This means that P Σ can be used to discover which individuals share connections with contradictory information in the context Σ. Knowing that \(\mathcal {A} \not \models \overline {\textit {C}}^{\mathrm {S}_{\Sigma }}(x_{4})\) and \(\mathcal {A} \models \overline {\textit {C}}^{\mathrm {P}_{\Sigma }}(x_{4})\), we can infer that, by accepting the presence of similarities with contradictions in Σ, x 4 can be viewed as a possibly cheap object. A similar intuition applies to the lower approximation when searching for those individuals that certainly have a specific property once relations with contradictions are taken into account. For example, with
$$\mathcal{A} \models \underline{M}_{\mathrm{S}_{\Sigma}}(x_{3}) \mathrm{~and~} \mathcal{A} \not\models \underline{M}_{\mathrm{P}_{\Sigma}}(x_{3}), $$
we can conclude that, if we permit similarity relations with contradictions, x 3 will no longer be considered a medium-value house. Regarding the individual x 7, the result will be
$$\mathcal{A} \models \underline{M}_{\mathrm{S}_{\Sigma}}(x_{7})\,\, \mathrm{~and~} \,\, \mathcal{A} \models \underline{M}_{\mathrm{P}_{\Sigma}}(x_{7}). $$
This shows that, whether or not we analyze the similarities with contradictions, the result will be the same. In other words, there are no similarities involving contradictory information for x 7.
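The two refinement examples above can be checked mechanically over the named individuals. The sketch below is ours, not part of the paper: it encodes each ABox assertion as one of Belnap's four truth values and reconstructs the similarity relations S and P from the worked results (S relates two values when they are equal or one of them is unknown; P relates a value v to a value w when the information in v is contained in w, as described for P Σ above). Since entailment in \(\mathcal {PR_{\textit {ALC}}}\) is model-theoretic, this named-individual check is only an illustration.

```python
# Sketch (not the authors' code). Belnap's four truth values:
# t = true, f = false, i = inconsistent, u = unknown.
T, F_, I, U = "t", "f", "i", "u"

# Four-valued valuation of each individual, read off from the ABox.
VAL = {
    "x1": dict(GL=T,  B=T,  F=T,  M=F_, E=T,  C=F_),
    "x2": dict(GL=F_, B=I,  F=F_, M=F_, E=F_, C=T),
    "x3": dict(GL=T,  B=F_, F=U,  M=T,  E=F_, C=F_),
    "x4": dict(GL=F_, B=T,  F=F_, M=T,  E=F_, C=F_),
    "x5": dict(GL=U,  B=U,  F=T,  M=T,  E=F_, C=F_),
    "x6": dict(GL=T,  B=I,  F=T,  M=F_, E=T,  C=F_),
    "x7": dict(GL=F_, B=T,  F=T,  M=T,  E=F_, C=F_),
}
SIGMA = {"GL", "B", "F"}                 # the context

def entails(x, concept):                 # A |= C(x): C(x) asserted (t or i)
    return VAL[x][concept] in (T, I)

def sim_S(v, w):                         # equal values, or one side unknown
    return v == w or U in (v, w)

def sim_P(v, w):                         # information in v contained in w
    return v == w or v == U or w == I

def similar(x, y, sim):                  # similarity over the context
    return all(sim(VAL[x][A], VAL[y][A]) for A in SIGMA)

def upper(concept, sim):                 # possibly-C: similar to a C-instance
    return {x for x in VAL
            if any(similar(x, y, sim) and entails(y, concept)
                   for y in VAL)}

print(sorted(upper("E", sim_S)))                         # ['x1', 'x5', 'x6']
print("x4" in upper("C", sim_P), "x4" in upper("C", sim_S))  # True False
```

Running it reproduces the upper approximations above: the possibly expensive houses under S Σ are x 1, x 5, and x 6, and x 4 is possibly cheap under P Σ but not under S Σ .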

Methods

Contextual approximations were designed to optimize the automation of approximate reasoning, since the relations between individuals are discovered during the reasoning process. However, if we think about an automated query refinement process, the number of possible contexts is \(2^{|N_{C}|} - 1\), where N C is the set of atomic concepts. Moreover, most of these contexts can be redundant or may not satisfy a query refinement. In order to avoid this problem, in this section, we present a method based on the notions of discernibility and indiscernibility matrices [19] to compute contexts for lower and upper approximations.

Using approximations

The main problem found in query refinements with rough sets is to determine a set of concepts which satisfies a restriction (lower approximation) or a relaxation (upper approximation) of a concept. The following results will help us to discover these appropriate sets.

Lemma 1.

(Generalized monotonicity) [20] Given two contexts Σ 1 and Σ 2 such that Σ 1⊆Σ 2, the following inclusions hold for every concept expression C, interpretation I, and similarity relation Sim∈{R,S,P}:
$${} \left(\underline{C}_{\mathrm{\text{Sim}}_{\Sigma_{1}}}\right)^{I} \subseteq \left(\underline{C}_{\mathrm{\text{Sim}}_{\Sigma_{2}}}\right)^{I} \,\,\text{and}\,\, \left(\overline{C}^{\mathrm{\text{Sim}}_{\Sigma_{2}}}\right)^{I} \subseteq \left(\overline{C}^{\mathrm{\text{Sim}}_{\Sigma_{1}}}\right)^{I}. $$

Intuitively, Lemma 1 states that by increasing the size of the context, the interpretation of a concept grows for the lower approximation and shrinks for the upper approximation. Therefore, in order to find a context satisfying the lower approximation of a concept C, only the minimal contexts satisfying C are needed, since all their supersets will also satisfy C. Analogously, in order to find contexts satisfying the upper approximation of C, only the maximal ones satisfying C will suffice. Finally, for loose and tight approximations, the following statements hold:

Proposition 1.

[20] Given a context Σ, a concept C, an interpretation I, and a natural number n, it holds that
$${} {\fontsize{9.2pt}{9.6pt}\selectfont{\begin{aligned} \left(\overline{C}^{(\mathrm{R}_{\Sigma},n)}\right)^{I} = \left(\overline{C}^{(\mathrm{R}_{\Sigma},n+1)}\right)^{I} \,\,\,\text{and} \,\, \left(\underline{C}_{(\mathrm{R}_{\Sigma},n)}\right)^{I} = \left(\underline{C}_{(\mathrm{R}_{\Sigma},n+1)}\right)^{I};\\ \left(\overline{C}^{(\mathrm{S}_{\Sigma},n)}\right)^{I} \subseteq \left(\overline{C}^{(\mathrm{S}_{\Sigma},n+1)}\right)^{I} \,\,\text{and} \,\, \left(\underline{C}_{(\mathrm{S}_{\Sigma},n+1)}\right)^{I} \subseteq \left(\underline{C}_{(\mathrm{S}_{\Sigma},n)}\right)^{I};\\ \left(\overline{C}^{(\mathrm{P}_{\Sigma},n)}\right)^{I} \subseteq \left(\overline{C}^{(\mathrm{P}_{\Sigma},n+1)}\right)^{I} \,\,\text{and} \,\, \left(\underline{C}_{(\mathrm{P}_{\Sigma},n+1)}\right)^{I} \subseteq \left(\underline{C}_{(\mathrm{P}_{\Sigma},n)}\right)^{I}. \end{aligned}}} $$

Loose upper approximations can be applied when there are no contexts which satisfy the upper approximation of a concept. In other words, a similarity relation of a higher step can be used to find a possible context. Similarly, a tight lower approximation can be applied to discover a set of concepts reinforcing the lower approximation, i.e., contexts which preserve the lower approximation in a similarity relation of a higher step. Note that the result for loose upper approximation does not change for the indiscernibility relation, since it is transitive and does not increase the size of the interpretation when it is applied successively (or does not decrease when the tight lower approximation is considered).
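Proposition 1 can be illustrated with a small relational sketch of ours. We assume here that the n-step relation is the n-fold composition of the 1-step relation (this is an assumption; the formal definition appears earlier in the paper): for a reflexive similarity relation the composition grows monotonically, while a transitive relation such as the indiscernibility relation R is stable under composition, matching the three cases above.

```python
def compose(r1, r2):
    # relational composition: (x, z) iff x ~ y in r1 and y ~ z in r2
    return {(x, z) for (x, y) in r1 for (y2, z) in r2 if y == y2}

def n_step(rel, n):
    # assumed reading of the n-step relation: n-fold composition
    out = rel
    for _ in range(n - 1):
        out = compose(out, rel)
    return out

def upper(rel, instances, universe):
    # upper approximation: x is possibly-C iff x relates to a C-instance
    return {x for x in universe
            if any((x, y) in rel and y in instances for y in universe)}

# A reflexive, symmetric, non-transitive similarity on {a, b, c}:
S1 = {("a", "a"), ("b", "b"), ("c", "c"),
      ("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}
U_ = {"a", "b", "c"}
C_inst = {"a"}

print(sorted(upper(n_step(S1, 1), C_inst, U_)))   # ['a', 'b']
print(sorted(upper(n_step(S1, 2), C_inst, U_)))   # ['a', 'b', 'c']
```

The loose upper approximation strictly grows from step 1 to step 2 for this non-transitive relation, exactly the behavior Proposition 1 allows for S and P but rules out for R.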

Contexts for lower approximations

First, we consider the problem of searching for an adequate context for query restriction. This problem can be formulated as follows:
  • Input: The set of concept names N C , an ABox \(\mathcal {A}\), a similarity relation Sim, a concept C, and an individual a.

  • Output: A nonempty context Σ⊆N C −atom(C) and a natural number n≥1 such that \(\mathcal {A} \models \underline {C}_{(\mathrm {\text {Sim}}_{\Sigma },n)}(a)\).

The function atom(C) returns the set of atomic concept names occurring in the concept C. We emphasize that in this paper the problem of query refinement is restricted to ABoxes and atomic concept assertions (i.e., we assume an empty TBox, and complex concepts are not allowed in the ABox). One of the problems related to applications of rough set methods is whether the whole set of attributes (concepts) is necessary and, if not, how to determine the simplified and still sufficient subset of attributes that preserves the discernibility information of the original one, called a reduct. For a knowledge base in \(\mathcal {ALC}\), the reducts are determined by the minimal sets of concepts that preserve the discernibility of all individuals from one another. A resulting reduct is, therefore, a minimal set of concepts inducing the same indiscernibility on the universe as the whole set of concepts does.

In the rough set theory, the computation of all types of reducts is based on discernibility matrices [19]. Such matrices are constructed from the discernibility relation. In this paper, we consider dissimilarity instead [21], because we are working with the notion of similarity. We highlight that a dissimilarity relation can be viewed as the complement of a similarity relation.

Definition 17.

(Dissimilarity matrix) Let N I be the set of individuals, x,y∈N I , Σ be a context, and Sim be a similarity relation. A dissimilarity matrix is defined as
$${} \begin{aligned} \text{DIS}(\Sigma,x,y,\text{Sim}) &= \{ A_{i} \in \Sigma \mid y \notin \text{Sim}_{\{ A_{i} \}}(x)\}, ~\text{such that}\\ \text{Sim}_{\{ A_{i}\}}(x) &= \{ y \in N_{I} \mid (x,y) \in \text{Sim}_{\{A_{i} \}} \}. \end{aligned} $$

Intuitively, DIS(Σ,x,y,Sim) describes the set of all concepts in Σ on which the individual x is not similar to y with respect to Sim. To evaluate this set of concepts, we will define a Boolean function f(Σ,C,x,Sim), called the dissimilarity function, that returns the set of reducts.

Definition 18.

(Dissimilarity function) The dissimilarity function of an individual x with respect to a context Σ, a concept C, and a similarity relation Sim is defined by
$${} {\fontsize{9.4pt}{9.6pt}\selectfont{\begin{aligned} f(\Sigma,C,x,\text{Sim}) = \displaystyle\bigwedge_{y \in N_{I}, \mathcal{A} \models C(x) \Leftrightarrow \mathcal{A} \not\models C(y)} \bigvee \text{DIS}(\Sigma,x,y,\text{Sim}). \end{aligned}}} $$

By Lemma 1, since we are interested only in minimal contexts, the conjunction of all dissimilarities of the individual x is taken. In order to find a context for an assertion axiom of the form A(x), the function f(N C −{A},A,x,Sim) can be used. From the point of view of rough sets, A is not considered in the context because it is treated as the decision attribute, and N C −{A} as the conditional attributes.

Example.

[4] Consider the ABox \(\mathcal {A}\) below representing houses in which {x 1,x 2,x 3,x 4,x 5,x 6,x 7} are individuals, GL = GoodLocation, B = Basement, F = FirePlace, and M = Medium.
$${} \begin{aligned} &\mathit{GL}(x_{1}); \neg \textit{GL}(x_{2}); \mathit{GL}(x_{3}); \neg \mathit{GL}(x_{4}); \mathit{GL}(x_{6}); \neg \mathit{GL}(x_{7});\\ &\mathit{B}(x_{1}); \mathit{B}(x_{2}); \neg \mathit{B}(x_{2}); \neg \mathit{B}(x_{3}); \mathit{B}(x_{4}); \mathit{B}(x_{6}); \neg \mathit{B}(x_{6}); \mathit{B}(x_{7});\\ &\qquad\quad\mathit{F}(x_{1}); \neg \mathit{F}(x_{2}); \neg \mathit{F}(x_{4}); \mathit{F}(x_{5}); \mathit{F}(x_{6}); \mathit{F}(x_{7});\\ &\neg \mathit{M}(x_{1}); \neg \mathit{M}(x_{2}); \mathit{M}(x_{3}); \mathit{M}(x_{4}); \mathit{M}(x_{5}); \neg \mathit{M}(x_{6}); \mathit{M}(x_{7}).\\ \end{aligned} $$
A result obtained here is that \(\mathcal {A} \models \textit {M}(x_{3}).\) In order to know if x 3 necessarily has the property of medium value, we can apply f(Σ,M,x 3,S), in which Σ={GL,B,F}:
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} f(\Sigma,M,x_{3},\mathrm{S}) & = \displaystyle\bigwedge_{y \in N_{I}, \mathcal{A} \models M(x_{3}) \Leftrightarrow \mathcal{A} \not\models M(y)} \left(\bigvee \text{DIS}(\Sigma,x_{3},y,\mathrm{S})\right). \\ & = \left(\bigvee \!\text{DIS}(\Sigma,x_{3},x_{1},\mathrm{S})\right)\! \wedge\! \left(\bigvee \text{DIS}(\Sigma,x_{3},x_{2},\mathrm{S})\right) \!\wedge \\ & \quad\left(\bigvee \text{DIS}(\Sigma,x_{3},x_{6},\mathrm{S})\right).\\ & = \textit{B} \wedge (\textit{GL} \vee \textit{B}) \wedge \textit{B}. \\ & = \textit{B}. \end{aligned}}} $$
It follows that \(\mathcal {A} \models \underline {\textit {M}}_{(\mathrm {S}_{\Sigma _{1}},1)}(x_{3})\), where Σ 1={B}. This result shows that the context {B} is already enough to be used in the lower approximation of Medium. In fact, \(\mathcal {A} \models \underline {\textit {M}}_{(\mathrm {S}_{\Sigma },1)}(x_{3})\) is also satisfied since, as mentioned in Lemma 1, Σ 1⊆Σ implies \(\underline {\textit {M}}_{\mathrm {S}_{\Sigma _{1}}}(x_{3}) \subseteq \underline {\textit {M}}_{\mathrm {S}_{\Sigma }}(x_{3})\). If we consult the dissimilarity function with respect to R, we will obtain
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} f(\Sigma,M,x_{3},\mathrm{R}) & = \displaystyle\bigwedge_{y \in N_{I}, \mathcal{A} \models M(x_{3}) \Leftrightarrow \mathcal{A} \not\models M(y)} \left(\bigvee \text{DIS}(\Sigma,x_{3},y,\mathrm{R})\right). \\ & = \left(\bigvee \!\text{DIS}(\Sigma,x_{3},x_{1},\mathrm{R})\!\right) \!\wedge\! \left(\bigvee\! \text{DIS}(\Sigma,x_{3},x_{2},\mathrm{R})\!\right) \!\wedge \\ &\quad\, \left(\bigvee \text{DIS}(\Sigma,x_{3},x_{6},\mathrm{R})\right).\\ & = (\textit{B} \vee \textit{F}) \wedge (\textit{GL} \vee \textit{B} \vee \textit{F}) \wedge (\textit{B} \vee \textit{F}). \\ & = (\textit{B} \vee \textit{F}). \end{aligned}}} $$
The context Σ 2={B,F} satisfies \(\mathcal {A} \models \underline {\textit {M}}_{(\mathrm {R}_{\Sigma _{2}},1)}(x_{3})\). Comparing with the similarity relation S, we can note that the context Σ 2 exhibits similarities with unknown information, since the concept F does not appear in f(Σ,M,x 3,S), but it is found in f(Σ,M,x 3,R). By applying the dissimilarity function with the relation P, we will obtain
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} f(\Sigma,M,x_{3},\mathrm{P}) & = \displaystyle\bigwedge_{y \in N_{I}, \mathcal{A} \models M(x_{3}) \Leftrightarrow \mathcal{A} \not\models M(y)} \left(\bigvee \text{DIS}(\Sigma,x_{3},y,\mathrm{P})\right). \\ & = \left(\bigvee \!\text{DIS}(\Sigma,x_{3},x_{1},\mathrm{P})\right) \!\wedge\! \left(\bigvee \text{DIS}(\Sigma,x_{3},x_{2},\mathrm{P})\right) \!\wedge \\ & \quad \left(\bigvee \text{DIS}(\Sigma,x_{3},x_{6},\mathrm{P})\right).\\ & = \textit{B} \wedge \textit{GL} \wedge \emptyset \\ & = \emptyset. \end{aligned}}} $$

This empty result shows that no context satisfying the lower approximation of Medium for x 3 under the similarity relation P exists. Intuitively, when we permit similarities between the values of x 3 and contradictory information, there will be an individual similar to x 3 that does not satisfy the concept Medium; in this case, the individual x 6.
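The whole dissimilarity-function computation above can be reproduced with a short sketch of ours (not the authors' code). The similarity relations are reconstructed from the worked results: R demands identical four-valued values, S additionally tolerates unknown values, and P follows information containment; the CNF distribution followed by minimal-clause filtering plays the role of the simplification step.

```python
from itertools import product

T, F_, I, U = "t", "f", "i", "u"   # Belnap's four truth values
VAL = {                            # the house ABox (GL, B, F, M)
    "x1": dict(GL=T,  B=T,  F=T,  M=F_),
    "x2": dict(GL=F_, B=I,  F=F_, M=F_),
    "x3": dict(GL=T,  B=F_, F=U,  M=T),
    "x4": dict(GL=F_, B=T,  F=F_, M=T),
    "x5": dict(GL=U,  B=U,  F=T,  M=T),
    "x6": dict(GL=T,  B=I,  F=T,  M=F_),
    "x7": dict(GL=F_, B=T,  F=T,  M=T),
}
SIGMA = ["GL", "B", "F"]

def entails(x, c): return VAL[x][c] in (T, I)
def sim_R(v, w): return v == w                      # indiscernibility
def sim_S(v, w): return v == w or U in (v, w)       # tolerates unknown
def sim_P(v, w): return v == w or v == U or w == I  # info containment

def DIS(x, y, sim):                # concepts on which x is not similar to y
    return {A for A in SIGMA if not sim(VAL[x][A], VAL[y][A])}

def f(concept, x, sim):
    # one CNF clause per individual whose entailment status differs from x
    clauses = [DIS(x, y, sim)
               for y in VAL if entails(x, concept) != entails(y, concept)]
    if any(not c for c in clauses):
        return set()               # an empty clause: no context exists
    # distribute the CNF into DNF, then keep only minimal contexts (reducts)
    dnf = {frozenset(choice) for choice in product(*clauses)}
    return {c for c in dnf if not any(d < c for d in dnf)}

print(f("M", "x3", sim_S))   # the single reduct {B}
print(f("M", "x3", sim_P))   # set(): no context exists
```

The three worked cases come out as above: S yields the reduct {B}, R yields the two reducts {B} and {F}, and P yields the empty answer.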

Contexts for upper approximations

The second problem can be formulated as follows:
  • Input: The set of concept names N C , an ABox \(\mathcal {A}\), a similarity relation Sim, a concept C, and an individual a.

  • Output: A nonempty context Σ⊆N C −atom(C) and a natural number n≥1 such that \(\mathcal {A} \models \overline {C}^{(\mathrm {\text {Sim}}_{\Sigma },n)}(a)\).

Unlike the lower approximation, we will now consider the notion of a similarity matrix instead of a dissimilarity matrix. The motivation behind the search for contexts for an upper approximation comes from the following idea: when an assertion axiom is not satisfied by a knowledge base, we need to find individuals satisfying the assertion of the consulted concept that share some similarities with the individual of the original query. These similarities will characterize the context. Thus, we can calculate the sets of maximal concepts preserving similarity.

Definition 19.

(Similarity matrix) Let N I be the set of individuals, x,y∈N I , Σ a context, and Sim a similarity relation. A similarity matrix is defined as
$$\begin{array}{*{20}l} {}&\text{SIM}(\Sigma,x,y,\text{Sim}) = \{ A_{i} \in \Sigma \mid y \in \text{Sim}_{\{ A_{i} \}}(x)\},\,\,\text{such that}\,\,\\ {}&\qquad\quad\text{Sim}_{\{ A_{i}\}}(x) = \{ y \in N_{I} \mid (x,y) \in \text{Sim}_{\{A_{i} \}} \}. \end{array} $$

The matrix SIM(Σ,x,y,Sim) describes the set of all concepts in Σ on which an individual x is similar to y with respect to Sim. To evaluate this set of concepts, we define the Boolean function g(Σ,C,x,Sim), called the similarity function.

Definition 20.

(Similarity function) The similarity function of an individual x with respect to a context Σ, a concept C, and a similarity relation Sim is defined by
$${} {\fontsize{9.4pt}{9.6pt}\selectfont{\begin{aligned} g(\Sigma,C,x,\text{Sim}) = \displaystyle\bigvee_{y \in N_{I}, \mathcal{A} \not\models C(x) \Leftrightarrow \mathcal{A} \models C(y)} \bigwedge \text{SIM}(\Sigma,x,y,\text{Sim}). \end{aligned}}} $$

Now, we are not interested in finding the minimal sets of concepts (reducts). By Lemma 1, we are searching for the maximal sets of concepts, so the disjunction of all similarities of the individual x is taken.

Example.

[4] Consider now the ABox \(\mathcal {A}'\) below, where {x 1,x 2,x 3,x 4,x 5,x 6,x 7} are individuals, GL = GoodLocation, B = Basement, F = FirePlace, and E = Expensive.
$${} \begin{aligned} &\mathit{GL}(x_{1}); \neg \mathit{GL}(x_{2}); \mathit{GL}(x_{3}); \neg \mathit{GL}(x_{4}); \mathit{GL}(x_{6}); \neg \mathit{GL}(x_{7});\\ &\mathit{B}(x_{1}); \mathit{B}(x_{2}); \neg \mathit{B}(x_{2}); \neg \mathit{B}(x_{3}); \mathit{B}(x_{4}); \mathit{B}(x_{6}); \neg \mathit{B}(x_{6}); \mathit{B}(x_{7});\\ &\qquad\quad\mathit{F}(x_{1}); \neg \mathit{F}(x_{2}); \neg \mathit{F}(x_{4}); \mathit{F}(x_{5}); \mathit{F}(x_{6}); \mathit{F}(x_{7});\\ &\mathit{E}(x_{1}); \neg \mathit{E}(x_{2}); \neg \mathit{E}(x_{3}); \neg \mathit{E}(x_{4}); \neg \mathit{E}(x_{5}); \mathit{E}(x_{6}); \neg \mathit{E}(x_{7}). \end{aligned} $$
Consequently, \(\mathcal {A}' \not \models \textit {E}(x_{7})\). We can apply the function g(Σ,E,x 7,S), in which Σ=N C −{E}, to discover if there exists a context satisfying the upper approximation of Expensive according to x 7:
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} g(\Sigma,\textit{E},x_{7},\mathrm{S}) & = \displaystyle\bigvee_{y \in N_{I}, \mathcal{A}' \not\models E(x_{7}) \Leftrightarrow \mathcal{A}' \models E(y)} \left(\bigwedge \text{SIM}(\Sigma,x_{7},y,\mathrm{S})\right). \\ & = \left(\bigwedge \text{SIM}(\Sigma,x_{7},x_{1},\mathrm{S})\right) \!\vee\! \left(\bigwedge \text{SIM}(\Sigma,x_{7},x_{6},\mathrm{S})\right).\\ & = (\textit{B} \wedge \textit{F}) \vee \textit{F}.\\ & = (\textit{B} \wedge \textit{F}). \end{aligned}}} $$
Therefore, we have \(\mathcal {A}' \models \overline {\textit {E}}^{(\mathrm {S}_{\Sigma _{1}},1)}(x_{7})\), where Σ 1={B,F}. We have chosen (B∧F) as the simplification of ((B∧F)∨F) because, as stated in Lemma 1, all nonempty subsets of {B,F} also satisfy the upper approximation of E(x 7). We consider Σ 1 an optimized context, as it covers a greater number of concepts for the upper approximation. Regarding the similarity relation R, we have the following outcome:
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} g(\Sigma,\textit{E},x_{7},\mathrm{R}) & = \displaystyle\bigvee_{y \in N_{I}, \mathcal{A}' \not\models E(x_{7}) \Leftrightarrow \mathcal{A}' \models E(y)} \left(\bigwedge \text{SIM}(\Sigma,x_{7},y,\mathrm{R})\right). \\ & = \left(\bigwedge \!\text{SIM}(\Sigma,x_{7},x_{1},\mathrm{R})\right) \!\vee\! \left(\bigwedge\! \text{SIM}(\Sigma,x_{7},x_{6},\mathrm{R})\right).\\ & = (\textit{B} \wedge \textit{F}) \vee \textit{F}.\\ & = (\textit{B} \wedge \textit{F}). \end{aligned}}} $$
In this example, we obtained the same results for the relations R and S. In this case, we can conclude that there is no unknown information related to the individual x 7. For the similarity function with the relation P, we will have
$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} g(\Sigma,\textit{E},x_{7},\mathrm{P}) & = \displaystyle\bigvee_{y \in N_{I}, \mathcal{A}' \not\models E(x_{7}) \Leftrightarrow \mathcal{A}' \models E(y)} \left(\bigwedge \text{SIM}(\Sigma,x_{7},y,\mathrm{P})\right). \\ & = \left(\bigwedge \text{SIM}(\Sigma,x_{7},x_{1},\mathrm{P})\right) \!\vee\! \left(\bigwedge \text{SIM}(\Sigma,x_{7},x_{6},\mathrm{P})\right)\!.\\ & = (\textit{B} \wedge \textit{F}) \vee (\textit{B} \wedge \textit{F}).\\ & = (\textit{B} \wedge \textit{F}). \end{aligned}}} $$

The outcome is the same as that obtained with the relations R and S. However, unlike them, \((\bigwedge \text {SIM}(\Sigma,x_{7},x_{6},\mathrm {P})) = (\textit {B} \wedge \textit {F})\). This implies that there is some evidence of similarities with contradictions in P; in this case, however, this evidence is redundant, because it does not change the final result.
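The similarity-function computation can be sketched in the same way (our code, not the authors'; E(x 1) is taken to be asserted, as the worked computation above assumes, and the relations S and P are reconstructed from the worked results as before):

```python
T, F_, I, U = "t", "f", "i", "u"   # Belnap's four truth values
VAL = {                            # the ABox A' (GL, B, F, E)
    "x1": dict(GL=T,  B=T,  F=T,  E=T),
    "x2": dict(GL=F_, B=I,  F=F_, E=F_),
    "x3": dict(GL=T,  B=F_, F=U,  E=F_),
    "x4": dict(GL=F_, B=T,  F=F_, E=F_),
    "x5": dict(GL=U,  B=U,  F=T,  E=F_),
    "x6": dict(GL=T,  B=I,  F=T,  E=T),
    "x7": dict(GL=F_, B=T,  F=T,  E=F_),
}
SIGMA = ["GL", "B", "F"]

def entails(x, c): return VAL[x][c] in (T, I)
def sim_S(v, w): return v == w or U in (v, w)       # tolerates unknown
def sim_P(v, w): return v == w or v == U or w == I  # info containment

def SIM(x, y, sim):                # concepts on which x is similar to y
    return frozenset(A for A in SIGMA if sim(VAL[x][A], VAL[y][A]))

def g(concept, x, sim):
    # one DNF clause per individual whose entailment status differs from x
    clauses = {SIM(x, y, sim)
               for y in VAL if entails(x, concept) != entails(y, concept)}
    clauses.discard(frozenset())
    # keep only the maximal contexts
    return {c for c in clauses if not any(c < d for d in clauses)}

print(g("E", "x7", sim_S))   # the maximal context {B, F}
print(g("E", "x7", sim_P))   # the same context {B, F}
```

Both relations produce the single maximal context {B,F}, matching the simplifications above.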

Results and discussion

Now, we present the algorithms to find contexts for query refinements. They consist of searching for minimal sets of concepts (for query restrictions) or maximal sets of concepts (for query relaxations). If no result is found, the process is repeated taking into account a similarity relation of a higher step. The algorithms finish either when some context is found (which will be the answer to the problem) or when all possible k-step relations have been searched and no result was returned (in this case, the answer is the empty set). In the sequel, we explain the algorithms and show their complexities.

We assume in the problem that a formula is represented in disjunctive normal form (DNF) if it is the input of Algorithms 1–2, or in conjunctive normal form (CNF) if it is the input of Algorithm 3. For instance, S={{A 1,A 2},{A 2,A 3}} can be treated as S=(A 1∧A 2)∨(A 2∧A 3) (if it is the input of Algorithms 1–2) or S=(A 1∨A 2)∧(A 2∨A 3) (if it is the input of Algorithm 3). Algorithm 1 simplifies a formula in DNF by removing redundant clauses. This is done by employing the absorption law, i.e., (a∧b)∨a≡a, which is performed in the function Extract (lines 6 and 10). This law alone suffices for the simplification, since the input of the algorithm contains only atomic concepts. The function Extract consists simply of removing a clause from a set of clauses.

The function SimplifyDNF2(S,n) (Algorithm 2) follows the idea of Algorithm 1, but instead of the absorption law, the rule (a∧b)∨a≡(a∧b), which keeps maximal contexts, is applied. Then, SimplifyDNF2(S,n) is obtained by exchanging lines 5 and 9 of Algorithm 1.
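A declarative sketch of ours captures the effect of both procedures: simplifying a DNF formula amounts to keeping only the minimal clauses (absorption) or, dually, only the maximal ones.

```python
def simplify_dnf(clauses):
    # Algorithm 1's effect: absorption (a AND b) OR a == a,
    # i.e., keep only the minimal clauses of the DNF formula
    cs = {frozenset(c) for c in clauses}
    return {c for c in cs if not any(d < c for d in cs)}

def simplify_dnf2(clauses):
    # Algorithm 2's effect: the dual rule keeps only the maximal
    # clauses, which represent maximal contexts
    cs = {frozenset(c) for c in clauses}
    return {c for c in cs if not any(c < d for d in cs)}

print(simplify_dnf([{"B"}, {"GL", "B"}, {"B"}]))   # the minimal clause {B}
print(simplify_dnf2([{"B", "F"}, {"F"}]))          # the maximal clause {B, F}
```

The two usage lines reproduce the simplifications of the worked examples: B∨(GL∧B)∨B reduces to B, and (B∧F)∨F reduces to (B∧F).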

Theorem 1.

The time complexity of SimplifyDNF(S,n) and SimplifyDNF2(S,n) is \(O(|N_{C}| \cdot n^{2})\), where n is the number of clauses of the disjunctive normal form formula received as input and N C is the set of atomic concepts.

Proof.

The algorithms have two nested while loops, each bounded by n (thus, \(O(n^{2})\)), where n is the number of clauses in the DNF formula of the input. The complexity of Extract has a linear upper bound in the size of the formula, i.e., O(|N C |). Therefore, the complexity of the algorithms SimplifyDNF(S,n) and SimplifyDNF2(S,n) is \(O(|N_{C}| \cdot n^{2})\).

Theorem 2.

The time complexity of SimplifyCNF(S,n) is \(O(2^{|N_{C}|})\phantom {\dot {i}\!}\).

Proof.

Algorithm 3 simplifies a formula S in CNF by removing redundant clauses, similarly to Algorithm 1. This is achieved by translating the CNF formula into a DNF formula via distribution rules (Distribute) and then applying the function SimplifyDNF. The complexity of the algorithm which translates a CNF formula to a DNF formula is O(2 n ) [22], where n is the number of distinct variables of the CNF formula. The complexity of line 1 is \(O(2^{|N_{C}|})\), in which |N C | is the number of atomic concepts and a bound on the number of variables that can occur in the CNF formula. The complexity of line 2 is \(O(|N_{C}| \cdot n^{2})\), as shown above. Therefore, the complexity of Algorithm 3 is \(O(2^{|N_{C}|})\).
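A corresponding sketch of ours for the CNF case: the distribution step is the source of the exponential bound, after which the DNF simplification is polynomial.

```python
from itertools import product

def simplify_cnf(clauses):
    # distribute the CNF into DNF: pick one atom per clause; this
    # step is exponential in the number of distinct atoms
    dnf = {frozenset(choice) for choice in product(*clauses)}
    # then absorption keeps only the minimal DNF clauses
    return {c for c in dnf if not any(d < c for d in dnf)}

# the CNF B AND (GL OR B) AND B from the worked example simplifies to B
print(simplify_cnf([{"B"}, {"GL", "B"}, {"B"}]))   # the single reduct {B}
```

This is exactly the shape of the dissimilarity-function outputs: the CNF of DIS clauses becomes a set of minimal contexts (reducts).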

Algorithm 4 searches sets of contexts for lower approximations. In other words, it implements the dissimilarity function f(Σ,C,x,Sim). First, the dissimilarity matrix of a specific individual with respect to the similarity relation Sim is computed (lines 8–18). Then, the dissimilarity function is evaluated through SimplifyCNF (line 19).

If the result is nonempty, then it is the solution of the problem. Otherwise, the procedure is restarted with a similarity relation of a higher step. The solution is found once some k-step relation returns a nonempty set. If all k-step relations return empty sets, then the solution is empty (i.e., there is no context solving the problem).

For instance, taking an example of the previous subsection, f({GL, B, F},M,x 3,S) will find the clauses B, (GL∨B), and B in the first iteration through the dissimilarity matrix. Then, the context B is obtained after simplification. As the result is nonempty, it is the answer to the problem.

Theorem 3.

Determining whether there exist contexts for lower approximations of an ABox assertion in paraconsistent rough \(\mathcal {ALC}\) is in EXPTIME.

Proof.

By Algorithm 4, the complexity of the for all loop (lines 8–18) takes into account the complexity of logical consequence of an assertion axiom in \(\mathcal {PR_{\textit {ALC}}}\) and the computation of the dissimilarity matrix DIS. By definition, DIS also depends on the problem of logical consequence in \(\mathcal {PR_{\textit {ALC}}}\), i.e., its complexity is PSPACE. Thus, lines 8–18 take polynomial space. As pointed out in Proposition 1, tight and loose approximations are monotonic, and since we are dealing with finite sets of individuals, there is a finite number of k-step relations, bounded by |N I |. Therefore, the maximum number of tests performed through the while loop (line 5) is O(|N I |). Lastly, SimplifyCNF(S,n) has a time complexity of \(O(2^{|N_{C}|})\), which dominates the overall running time; hence, the algorithm runs in exponential time.

Algorithm 5 follows the rationale of Algorithm 4, but it constructs the similarity matrix SIM. After that, the algorithm computes the similarity function through SimplifyDNF2. Considering the example g({GL, B, F},E,x 7,R) from the previous subsection, the algorithm will discover the clauses (B∧F) and F in the first iteration. Then, the simplification will result in (B∧F), which will be the answer of the query relaxation.

Theorem 4.

Determining whether there exist contexts for upper approximations of an ABox assertion in paraconsistent rough \(\mathcal {ALC}\) is in PSPACE.

Proof.

As in Algorithm 4, the construction of the similarity matrix SIM takes into account the logical consequence problem in \(\mathcal {PR_{\textit {ALC}}}\) (lines 8–14) and has polynomial space complexity. In the same way, the number of k-step relations is bounded by O(|N I |) (line 5). The time complexity of SimplifyDNF2(S,n) is polynomial, so the overall complexity of the algorithm is dominated by the PSPACE logical consequence tests in \(\mathcal {PR_{\textit {ALC}}}\).

Conclusions

In this paper, we defined techniques for handling query refinements of assertion axioms in \(\mathcal {PR_{\textit {ALC}}}\), by employing the notion of contextual approximation and similarity relations, which exploited the presence of unknown and inconsistent information. We also showed a method to compute optimized contexts for these queries based on the notion of reducts presented in the rough set theory. The problem of query restrictions using contextual approximation was proved to have exponential time complexity, while the problem of query relaxations has polynomial space complexity.

As future work, we will investigate ways of choosing the most representative contexts, such as approaches involving measures of inconsistency or information, since the method presented in this paper considers only minimal or maximal contexts. Some approaches to deal with this problem can be found in [21, 23]. Another possible line of research is to extend the application of query refinement to complex assertion axioms (C(a) for an arbitrary concept C) as well as to terminological axioms, i.e., axioms of the form \(C \sqsubseteq D\).

Declarations

Acknowledgements

This research is partially supported by CNPq (305980/2013-0,301607/2010-9, 474821/2012-9, 482481/2010-2), CAPES (PROCAD/NF789/2010), CNPq/CAPES(552578/2011-8).

Authors’ Affiliations

(1)
Departamento de Computação, Universidade Federal do Ceará

References

  1. Pawlak Z (1982) Rough sets. Int J Inf Comput Sci 11: 341–356.
  2. Schaerf M, Cadoli M (1995) Tractable reasoning via approximation. Artif Intell 74: 249–310.
  3. Stuckenschmidt H (2007) Partial matchmaking using approximate subsumption. In: AAAI’07, 1459–1464. AAAI Press, Vancouver, British Columbia, Canada.
  4. Viana H, Alcântara J, Martins AT (2011) Paraconsistent rough description logic. CEUR-WS.org, Barcelona, Spain.
  5. Cornelis C, De Cock M, Radzikowska AM (2008) Fuzzy rough sets: from theory into practice. In: Pedrycz W, Skowron A, Kreinovich V (eds) Handbook of granular computing, 533–552. John Wiley & Sons, Inc., New York, NY, USA.
  6. Fanizzi N, d’Amato C, Esposito F, Lukasiewicz T (2008) Representing uncertain concepts in rough description logics via contextual indiscernibility relations. In: URSW, Karlsruhe, Germany. Springer, Berlin Heidelberg.
  7. Viana H, Alcantara J, Martins AT (2013) Searching contexts in rough description logics. In: 2013 Brazilian Conference on Intelligent Systems (BRACIS), 163–168.
  8. Belnap ND (1977) A useful four-valued logic. In: Dunn JM, Epstein G (eds) Modern uses of multiple-valued logic, 8–37. D. Reidel, Springer Netherlands.
  9. Peñaloza R, Zou T (2013) Roughening the \(\mathcal {EL}\) envelope. In: Frontiers of combining systems (FroCoS 2013), 71–86. Springer, Berlin Heidelberg.
  10. Jiang Y, Wang J, Tang S, Xiao B (2009) Reasoning with rough description logics: an approximate concepts approach. Inf Sci 179(5): 600–612.
  11. Keet CM (2010) On the feasibility of description logic knowledge bases with rough concepts and vague instances. In: Description logics, Waterloo, Canada.
  12. Schlobach S, Klein M (2007) Description logics with approximate definitions: precise modeling of vague concepts. In: IJCAI 07, Hyderabad, India, 557–562.
  13. Baader F (2003) The description logic handbook: theory, implementation, and applications. Cambridge University Press, New York, NY, USA.
  14. Komorowski J, Pawlak Z, Polkowski L, Skowron A (1998) Rough sets: a tutorial. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.2477.
  15. Grzymala-Busse JW (2006) A rough set approach to data with missing attribute values. In: RSKT, Chongqing, China, 58–67. Springer, Berlin Heidelberg.
  16. Wu W, Zhang W (2002) Neighborhood operator systems and approximations. Inf Sci 144(1-4): 201–217.
  17. Ma Y, Hitzler P, Lin Z (2006) Paraconsistent reasoning with OWL — algorithms and the ParOWL reasoner. Technical report, AIFB, University of Karlsruhe, Germany.
  18. Grzymala-Busse JW (2006) Rough set strategies to data with missing attribute values. In: Foundations and novel approaches in data mining, 197–212. Springer, Berlin Heidelberg.
  19. Skowron A, Rauszer C (1992) The discernibility matrices and functions in information systems. In: Słowiński R (ed) Handbook of applications and advances of the rough sets theory. Kluwer, Dordrecht.
  20. Viana H (2012) Refinamento de Consultas em Lógicas de Descrição Utilizando a Teoria dos Rough Sets. http://mdcc.ufc.br/teses/doc_download/183-161-henrique-viana-oliveira.
  21. Stepaniuk J (1998) Approximation spaces, reducts and representatives. In: Polkowski L, Skowron A (eds) Rough sets in knowledge discovery, Part I and II, 109–126. Physica-Verlag, Heidelberg.
  22. Miltersen PB, Radhakrishnan J, Wegener I (2005) On converting CNF to DNF. Theor Comput Sci 347(1-2): 325–335.
  23. Nguyen HS (2006) Approximate Boolean reasoning: foundations and applications in data mining. Trans Rough Sets V 4100: 334–506.

Copyright

© Viana et al. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.