Searching contexts in paraconsistent rough description logic
Henrique Viana^{1}, João Alcântara^{1} and Ana Teresa Martins^{1}
https://doi.org/10.1186/s13173-015-0031-2
© Viana et al. 2015
Received: 1 May 2014
Accepted: 18 June 2015
Published: 7 July 2015
Abstract
Background
Query refinement is an interactive process of query modification used to increase or decrease the scope of search results in databases or ontologies.
Methods
We present a method to obtain optimized query refinements of assertion axioms in the paraconsistent rough description logic \(\mathcal {PR_{\textit {ALC}}}\), a four-valued paraconsistent version of the rough \(\mathcal {ALC}\), which is grounded on Belnap’s logic. This method is based on the notion of the discernibility matrix commonly used in the process of attribute reduction in rough set theory. It consists of finding sets of concepts which satisfy the rough set approximation operations in assertion axioms. Consequently, these sets of concepts can be used to restrict or relax queries in this logic.
Results
We propose two algorithms to settle this problem of query refinement in \(\mathcal {PR_{\textit {ALC}}}\) and show their complexity results.
Conclusions
The problem of query restrictions using contextual approximation is proved to have exponential time complexity, while the problem of query relaxations has polynomial space complexity.
Keywords
Description logics, Rough sets, Query refinement
Background
In large databases such as hypertext document collections, it is often possible to find too many results for a specific query, and not all of these answers are relevant. In the opposite (but equally undesirable) situation, too few results may be available for queries with rarely used keywords. Approaches based on query refinement can then be employed to settle these problems. As we know, query refinement is an interactive process of query modification used to increase or decrease the scope of search results.
One of these approaches is the rough set theory introduced by Z. Pawlak [1] to represent and reason about uncertainty through two operations of set approximation: the lower and the upper approximations. For a set S, its lower approximation gives the set of elements that certainly belong to S, while its upper approximation gives the set of elements that possibly belong to S.
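As a minimal illustration of these two operations (a Python sketch with invented data, not part of the paper), both approximations can be computed directly from an indiscernibility predicate over a finite universe:

```python
def approximations(universe, indisc, s):
    """Pawlak's lower and upper approximations of a set s, given an
    indiscernibility predicate over the universe."""
    lower = {x for x in universe
             if all(y in s for y in universe if indisc(x, y))}
    upper = {x for x in universe
             if any(y in s for y in universe if indisc(x, y))}
    return lower, upper

# Toy universe: numbers are indiscernible when they have the same parity.
U = {0, 1, 2, 3, 4, 5}
S = {0, 2, 3, 4}                         # the uncertain set to approximate
low, up = approximations(U, lambda x, y: x % 2 == y % 2, S)
# The even class {0, 2, 4} lies entirely inside S, so it forms the lower
# approximation; 3 drags its whole class {1, 3, 5} into the upper one.
assert low == {0, 2, 4}
assert up == U
```

Restricting a query to the lower approximation keeps only the certain answers, while relaxing it to the upper approximation adds every element indiscernible from some answer.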
Because of such approximations, the rough set theory may be applied to produce two forms of query refinement: query restriction and query relaxation. A query can be restricted in order to obtain only the necessary results, or it can be relaxed, aiming at increasing the number of its results. In this paper, we propose a new method settled on the rough set theory and description logics (DLs) for automatically generating refinements or related terms to queries.
Some works can be found in the literature with respect to query refinement in DLs as in [2, 3]. Most of them focus on syntactical manipulations of knowledge bases to increase or decrease the number of results of a query. Unfortunately, these syntactical manipulations may generate results without connections with the initial query. In [4], by resorting to rough sets as a tool of query refinement in DLs, it is ensured that the obtained results always have some kind of relationship with the initial query, because of the role played by the approximation operator [5]. In addition, the representation of rough sets in DLs brings no growth in the complexity of the corresponding satisfiability problem.
Still, in [4], in order to generate query refinements of assertion axioms, the authors employed the notion of contexts in rough set approximations [6]. However, these contexts must be given in the query. This problem is solved in [7], where a method is proposed to obtain optimized query refinements of assertion axioms in rough \(\mathcal {ALC}\) [6]. We will now extend this result to work with the paraconsistent rough description logic (\(\mathcal {PR_{\textit {ALC}}}\)) [4]. It is a four-valued extension of the rough \(\mathcal {ALC}\), and its semantics follows the well-known Belnap’s paraconsistent logic [8]. With that, we increase the expressivity of uncertainty representation, allowing the expression and approximation of unknown and contradictory knowledge bases. This is achieved by resorting to two algorithms that search for contexts to be applied in the approximation operations. These algorithms generate optimized solutions when they exist. Depending on the considered refinement, minimal or maximal cardinality sets which satisfy the approximations are chosen (minimal sets for query restrictions and maximal sets for query relaxations). Furthermore, we present complexity results for both algorithms.
The paper is structured as follows. In the “Background” section, we present the basic notions of rough \(\mathcal {ALC}\), an extension of \(\mathcal {ALC}\) with the approximations of rough set theory, and we introduce its paraconsistent extension, called \(\mathcal {PR_{\textit {ALC}}}\). The “Methods” and “Results and discussion” sections contain the main contributions of this work: there we define a method to obtain optimized refined queries in this logic and present the corresponding algorithms, together with their complexity analysis. Finally, in the “Conclusions” section, we conclude the paper.
Rough description logic \(\mathcal {ALC}\)
Rough description logics (RDLs) [6, 7, 9–12] introduced a mechanism to model uncertain reasoning by means of concept approximation. They extend DLs with two operations: the lower and upper approximations. Both approximations are conceived to capture uncertainty from an indiscernibility relation (an equivalence relation). We can define the upper approximation of a concept C in \(\mathcal {ALC}\) as the set of individuals that are indiscernible from at least one individual known to belong to C [6]. Similarly, we can define the lower approximation of a concept C as the set of individuals all of whose indiscernible individuals belong to C. In the sequel, we introduce the basic characteristics of rough \(\mathcal {ALC}\): first the syntax, then the semantics, and, lastly, alternative approaches to represent approximations, which will be used later in the query refinements. For a detailed explanation about DLs and rough sets, see [13] and [14], respectively.
Syntax
As mentioned above, the basic idea behind RDLs is straightforward: we can approximate an uncertain concept C through lower and upper bounds.
Definition 1.
(Concepts) Concepts in rough \(\mathcal {ALC}\) are defined by the following rules, where C and D are concepts, A is an atomic concept, and R is an atomic role: \(C, D \rightarrow A \mid \bot \mid \neg C \mid C \sqcap D \mid \exists R.C \mid \overline{C} \mid \underline{C}\).
Rough \(\mathcal {ALC}\) is thus based on \(\mathcal {ALC}\) with the addition of the upper and lower approximations as unary concept constructors, i.e., if C is a concept, then \(\overline {C}\) (possibly C) and \(\underline {C}\) (necessarily C) are also concepts. Other concepts can be defined by the following equivalences: ⊤≡¬⊥, (C⊔D)≡¬(¬C⊓¬D), ∀R.C≡¬(∃R.¬C).
The notions of TBox and ABox, as well as that of a knowledge base, in rough \(\mathcal {ALC}\) extend the original notions of \(\mathcal {ALC}\).
Definition 2.
(TBox) A TBox \(\mathcal {T}\) is a finite set of terminological axioms of the form \(C \sqsubseteq D\) or C≡D.
The first axiom \(C \sqsubseteq D\) (inclusion axiom) means that each individual of C is also an instance of D, while the axiom C≡D (equivalence axiom) means that each individual of C is also an instance of D and each individual of D is also an instance of C.
Definition 3.
(ABox) An ABox \(\mathcal {A}\) consists of the finite set of assertion axioms of the form C(a) or R(a,b).
The concept assertion C(a) denotes that the individual a belongs to the concept C, and the role assertion R(a,b) denotes that the individual a is related to individual b by the role R.
Definition 4.
(Knowledge base) A knowledge base \(\mathcal {K} = \left < \mathcal {T}, \mathcal {A} \right >\) in rough \(\mathcal {ALC}\) consists of a TBox \(\mathcal {T}\) and an ABox \(\mathcal {A}\).
Semantics
An interpretation \(I = (\Delta ^{I}, \cdot ^{I}, R^{\sim })\) consists of a nonempty domain Δ ^{ I }, an indiscernibility (equivalence) relation R ^{∼}⊆Δ ^{ I }×Δ ^{ I }, and an interpretation function ·^{ I } such that:

For an individual a, a ^{ I }∈Δ ^{ I };

For atomic concepts A, A ^{ I }⊆Δ ^{ I };

For atomic roles R, R ^{ I }⊆Δ ^{ I }×Δ ^{ I };

⊤^{ I }=Δ ^{ I };

⊥^{ I }=∅;

(¬A)^{ I }=Δ ^{ I }∖A ^{ I };

(C⊓D)^{ I }=C ^{ I }∩D ^{ I };

(C⊔D)^{ I }=C ^{ I }∪D ^{ I };

(∃R.C)^{ I }={a∈Δ ^{ I }∣∃b∈Δ ^{ I },(a,b)∈R ^{ I }∧b∈C ^{ I }};

(∀R.C)^{ I }={a∈Δ ^{ I }∣∀b∈Δ ^{ I },(a,b)∈R ^{ I }→b∈C ^{ I }}.

\((\overline {C})^{I} = \left \{ a \in \Delta ^{I} \mid \exists b \left ((a,b) \in R^{\sim } \wedge b \in C^{I}\right) \right \}\),

\((\underline {C})^{I} = \left \{ a \in \Delta ^{I} \mid \forall b \left ((a,b) \in R^{\sim } \rightarrow b \in C^{I}\right)\right \}\).

Rough \(\mathcal {ALC}\) can be translated into plain \(\mathcal {ALC}\) by representing the indiscernibility relation as an ordinary role R ^{∼} and applying a translation function ·^{ t } to concepts:

A ^{ t }=A, for all atomic concepts A in RDL,

\((\overline {C})^{t} = \exists R^{\sim }.C^{t}\) and \((\underline {C})^{t} = \forall R^{\sim }.C^{t}\).
For all other complex concepts, the translation function is applied recursively to their subconcepts. The same definition is extended to inclusion and assertion axioms [11, 12].
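As an illustrative sketch (the tuple-based concept encoding and the role name R~ are our own assumptions, not the paper's notation), the translation ·^t is a simple recursion over concepts:

```python
# Hypothetical concept AST: atomic concepts are strings; complex concepts are
# tuples tagged "not", "and", "or", "exists", "forall", "upper", "lower".
# "R~" is our name for the fresh role standing for the indiscernibility relation.
def t(concept):
    """Translation .^t from rough ALC into plain ALC: approximations become
    role restrictions over R~; everything else is translated recursively."""
    if isinstance(concept, str):            # A^t = A for atomic concepts
        return concept
    op, *args = concept
    if op == "upper":                       # upper approximation -> exists R~.C^t
        return ("exists", "R~", t(args[0]))
    if op == "lower":                       # lower approximation -> forall R~.C^t
        return ("forall", "R~", t(args[0]))
    if op in ("exists", "forall"):          # keep the role, translate the filler
        return (op, args[0], t(args[1]))
    return (op, *[t(a) for a in args])      # negation, conjunction, disjunction

# The lower approximation of A ⊓ B becomes ∀R~.(A ⊓ B):
assert t(("lower", ("and", "A", "B"))) == ("forall", "R~", ("and", "A", "B"))
```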
A different way of representing equivalence between individuals was also proposed in [6], in which an alternative approach to approximation in DL was introduced. In this work, an approximation depends on a specific set of concepts to determine the indiscernibility of the individuals (and not anymore an explicit indiscernibility relation). We will detail this idea in the sequel.
Contextual approximation
In [6], the notion of contextual indiscernibility relation in RDLs was introduced as a way to define an equivalence relation based on indiscernibility criteria. In particular, the notion of context allows the definition of specific equivalence relations to be applied in approximations. The great advantage of this approach is that reasoning with equivalence classes is optimized, because the equivalence relation is discovered in the process of inference, unlike in traditional RDLs, where the equivalence relation must be explicitly defined.
We first show the definition of a context as a collection of concepts, and then the definitions of lower and upper approximations through a context. We begin with the notion of a projection function in DL [6]:
Definition 5.
A context can be defined as a finite set of relevant features in the form of DL concepts, which may encode a kind of context information for the similarity to be measured [6].
Definition 6.
(Context) A context is a nonempty finite set of atomic concepts Σ={A _{1},…,A _{ n }}.
Two individuals a and b are indiscernible with respect to the context Σ={A _{1},…,A _{ n }} and a knowledge base \(\mathcal {K}\) if and only if, for every \(i \in \{ 1, \dots, n \}, \pi ^{\mathcal {K}}_{A_{i}} (a) = \pi ^{\mathcal {K}}_{A_{i}} (b)\). This induces an equivalence relation:
Definition 7.
(Contextual indiscernibility relation) Let Σ={A _{1},…,A _{ n }} be a context and \(\mathcal {K}\) a knowledge base. The indiscernibility relation R_{ Σ } induced by Σ is defined as: \(\mathrm {R}_{\Sigma } = \ \{ (a,b) \in N_{I} \times N_{I} \mid ~\text {for all}~ A_{i}~ \text {in which}~ i \in \{ 1, \dots, n \}, \pi ^{\mathcal {K}}_{A_{i}} (a) = \pi ^{\mathcal {K}}_{A_{i}} (b)\}\).
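Definition 7 can be sketched as follows, assuming the projection values \(\pi ^{\mathcal {K}}_{A}\) have already been computed into a lookup table (the table and individual names below are invented for illustration):

```python
def indiscernible(pi, context, a, b):
    """(a, b) is in R_Sigma iff a and b receive the same projection value for
    every concept in the context Sigma (Definition 7). `pi` is assumed to map
    (individual, concept) pairs to projection values computed from the
    knowledge base."""
    return all(pi[(a, A)] == pi[(b, A)] for A in context)

# Hypothetical projection table for two concepts and three individuals.
pi = {("a", "A1"): "t", ("a", "A2"): "f",
      ("b", "A1"): "t", ("b", "A2"): "f",
      ("c", "A1"): "f", ("c", "A2"): "f"}
assert indiscernible(pi, {"A1", "A2"}, "a", "b")      # agree on all of Sigma
assert not indiscernible(pi, {"A1", "A2"}, "a", "c")  # disagree on A1
assert indiscernible(pi, {"A2"}, "a", "c")            # smaller context, coarser relation
```

The last assertion illustrates the monotonicity exploited later: shrinking the context can only merge equivalence classes, never split them.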
As DLs allow the representation of uncertain information whenever \(\mathcal {K} \not \models A(a)\) and \(\mathcal {K} \not \models \neg A(a)\), a similarity relation (instead of an equivalence relation) may be more adequate to model relationships between individuals, because it permits grouping individuals that are close but not necessarily indiscernible. Formally, a binary relation is a similarity relation if it is at least reflexive (an equivalence relation is reflexive, symmetric, and transitive). We introduce the following similarity relation based on [15], which loosens the original condition of indiscernibility:
Definition 8.
The contextual approximations are defined below.
Definition 9.
(Contextual approximations) Let Σ be a context, C be a concept, I be an interpretation, and Sim_{ Σ } a similarity relation induced by Σ. The contextual upper and lower approximations of C with respect to Σ are defined as:

\(\left (\overline {C}^{\text {Sim}_{\Sigma }}\right)^{I} = \left \{ a \in \Delta ^{I} \mid \exists b ((a,b) \in \text {Sim}^{I}_{\Sigma } \wedge b \in C^{I}) \right \}\),

\(\left (\underline {C}_{\text {Sim}_{\Sigma }}\right)^{I} = \left \{ a \in \Delta ^{I} \mid \forall b ((a,b) \in \text {Sim}^{I}_{\Sigma } \rightarrow b \in C^{I}) \right \}\).
We can generalize the definitions of contextual approximations by using the notion of k-step relation [16].
Definition 10.
(k-step relation) Let Δ ^{ I } be the nonempty universe set, S a binary relation in Δ ^{ I }, and k a natural number. The k-step relation of S is defined as:

S _{1}=S;

S _{ k+1}=S _{ k }∪{(x,y)∈Δ ^{ I }×Δ ^{ I }∣ there exist y _{1},y _{2},…,y _{ k }∈Δ ^{ I } such that x S y _{1}, y _{1} S y _{2}, …, y _{ k } S y}, for k≥1.
The idea behind the k-step relation is that, when we are using a similarity relation such as Sim_{ Σ }, it may happen that Sim_{ Σ,1}⊆Sim_{ Σ,2}⊆⋯⊆Sim_{ Σ,n }. As a consequence, successive applications of the approximation operations may yield different results.
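Definition 10 can be sketched operationally over an explicit set of pairs (the relation and the value of k below are toy data, not from the paper):

```python
def k_step(pairs, k):
    """k-step relation S_k (Definition 10): S_1 = S, and S_{k+1} additionally
    contains every pair (x, y) joined by a chain of at most k+1 S-steps."""
    pairs = set(pairs)
    reach = set(pairs)                     # pairs joined by chains so far
    for _ in range(k - 1):
        # extend every known chain by one more S-step
        reach |= {(x, w) for (x, y) in reach for (z, w) in pairs if y == z}
    return reach

S = {(1, 2), (2, 3), (3, 4)}
assert k_step(S, 1) == S
assert (1, 3) in k_step(S, 2) and (1, 4) not in k_step(S, 2)
assert (1, 4) in k_step(S, 3)              # chain of length 3: 1 S 2, 2 S 3, 3 S 4
```

The assertions make the inclusion chain Sim_{Σ,1} ⊆ Sim_{Σ,2} ⊆ ⋯ concrete: each step can only add pairs.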
Definition 11.
(k-step contextual approximations) Let Σ be a context, C be a concept, I be an interpretation, and n≥1 a natural number. The loose upper and tight lower approximations of C with respect to the n-step similarity relation Sim_{ Σ,n } are defined as:

\(\left (\overline {C}^{(\text {Sim}_{\Sigma,n})}\right)^{I} = \left \{ a \mid \exists b \left ((a,b) \in {\text {Sim}^{I}_{\Sigma }}_{n} \wedge b \in C^{I}\right) \right \}\),

\(\left (\underline {C}_{(\text {Sim}_{\Sigma,n})}\right)^{I} = \left \{ a \mid \forall b \left ((a,b) \in {\text {Sim}^{I}_{\Sigma }}_{n} \rightarrow b \in C^{I}\right) \right \}\).
The contextual approximations will play a central role in the process of query refinement. In this paper, refining a concept means applying a lower approximation (restriction of a concept) or an upper approximation (relaxation of a concept). Hence, we first need to identify a context that can be used in the approximations. This process of finding contexts, along with other relevant questions, will be tackled and exemplified in the next sections.
Paraconsistent rough description logic \(\mathcal {PR_{\textit {ALC}}}\)
We begin this section by presenting a brief explanation about Belnap’s logic. Subsequently, we will show a paraconsistent extension of rough \(\mathcal {ALC}\), based on Belnap’s semantics, called \(\mathcal {PR_{\textit {ALC}}}\).
Belnap’s logic
Belnap’s logic [8] has four truth values instead of the two classical truth values true and false. These values are t (true), f (false), u (unknown), and i (inconsistent). The truth value i represents contradictory information, while u means neither true nor false, i.e., the absence of any information about veracity or falsity.
Syntactically, Belnap’s logic is very similar to classical logic. However, it introduces different notions of implication; we will show three of these notions brought from the literature. The connectives used here are negation (¬), disjunction (∨), conjunction (∧), material implication (↦), internal implication (⊃), and strong implication (→) [17].

x↦y is defined as ¬x∨y;

x⊃y is evaluated to y if x∈{t,i} and to t if x∈{f,u};

x→y is defined as (x⊃y)∧(¬y⊃¬x).

Table 1 Truth tables for ∧, ∨, and ¬

| ∧ | f | u | i | t |
|---|---|---|---|---|
| f | f | f | f | f |
| u | f | u | f | u |
| i | f | f | i | i |
| t | f | u | i | t |

| ∨ | f | u | i | t |
|---|---|---|---|---|
| f | f | u | i | t |
| u | u | u | t | t |
| i | i | t | i | t |
| t | t | t | t | t |

| x | ¬x |
|---|----|
| f | t |
| u | u |
| i | i |
| t | f |

Table 2 Truth tables for ↦, ⊃, and →

| ↦ | f | u | i | t |
|---|---|---|---|---|
| f | t | t | t | t |
| u | u | u | t | t |
| i | i | t | i | t |
| t | f | u | i | t |

| ⊃ | f | u | i | t |
|---|---|---|---|---|
| f | t | t | t | t |
| u | t | t | t | t |
| i | f | u | i | t |
| t | f | u | i | t |

| → | f | u | i | t |
|---|---|---|---|---|
| f | t | t | t | t |
| u | u | t | u | t |
| i | f | u | i | t |
| t | f | u | f | t |
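These connectives can be checked mechanically. In the sketch below, each truth value is encoded as a pair of bits (at least true, at least false); this encoding is our own bookkeeping choice, not the paper's notation, but under it ∧, ∨, and ¬ reduce to bit operations and the implications follow the definitions above:

```python
# t, f, u, i encoded as (at least true, at least false) bit pairs.
t, f, u, i = (1, 0), (0, 1), (0, 0), (1, 1)

def neg(x):
    return (x[1], x[0])                   # negation swaps the two components

def conj(x, y):
    return (x[0] & y[0], x[1] | y[1])     # conjunction: meet in the truth order

def disj(x, y):
    return (x[0] | y[0], x[1] & y[1])     # disjunction: join in the truth order

def material(x, y):
    return disj(neg(x), y)                # x |-> y  =  not x or y

def internal(x, y):
    return y if x[0] else t               # x > y: y if x in {t, i}, else t

def strong(x, y):
    return conj(internal(x, y), internal(neg(y), neg(x)))  # strong implication

# Spot checks against the truth tables:
assert conj(u, i) == f and disj(u, i) == t
assert material(u, f) == u and internal(u, f) == t
assert strong(t, i) == f and strong(u, f) == u
```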
Models are defined as follows, in which {t,i} are the designated values (i.e., the truth values considered satisfiable with respect to the consequence relation defined below).
Definition 12.
(Model) Let I be a four-valued interpretation, Γ a set of formulas, and φ a formula in Belnap’s logic. We say that I is a four-valued model of φ if and only if φ ^{ I }∈{t,i}. I is a four-valued model of Γ if and only if I is a four-valued model of each formula in Γ. We say Γ entails φ, written Γ⊧φ, if and only if every four-valued model of Γ is also a four-valued model of φ.
Defining \(\mathcal {PR_{\textit {ALC}}}\)
Now, we will describe the syntax and the semantics of the paraconsistent rough description logic \(\mathcal {ALC}\) (\(\mathcal {PR_{\textit {ALC}}}\)). Such a logic is an extension of the description logic \(\mathcal {ALC}_{4}\) [17], with the addition of the lower and upper approximation operators.
As usual, semantically, interpretations map individuals to elements of the domain of interpretation. For concepts and atomic roles, however, some changes in the notion of interpretation need to be made in order to reason with inconsistencies.
Intuitively, in a four-valued logic, we need to consider four situations that can occur regarding the membership of an individual in a concept: (1) we know that it belongs to the set, (2) we know that it does not belong to the set, (3) we do not know whether it belongs or not, and (4) we have contradictory information, stating that the individual both belongs and does not belong to the concept. There are many equivalent ways to formalize this notion; one of them will be described in the sequel.
Definition 13.
(Four-valued interpretation) A four-valued interpretation \(I = (\Delta ^{I}, \cdot ^{I})\), together with an indiscernibility relation R ^{∼}⊆Δ ^{ I }×Δ ^{ I }, consists of a nonempty domain Δ ^{ I } and an interpretation function ·^{ I } mapping every concept to a pair <P,N> of subsets of Δ ^{ I } (its positive and negative extensions) such that:

For atomic concepts A, A ^{ I }=<P,N>, such that P,N⊆Δ ^{ I };

For atomic roles R, R ^{ I }=<P _{1}×P _{2},N _{1}×N _{2}>, such that P _{1}×P _{2},N _{1}×N _{2}⊆Δ ^{ I }×Δ ^{ I };

⊤^{ I }=<Δ ^{ I },∅>;

⊥^{ I }=<∅,Δ ^{ I }>;

(¬C)^{ I }=<N,P> if C ^{ I }=<P,N>;

(C⊓D)^{ I }=<P _{1}∩P _{2},N _{1}∪N _{2}>, if C ^{ I }=<P _{1},N _{1}> and D ^{ I }=<P _{2},N _{2}>;

(C⊔D)^{ I }=<P _{1}∪P _{2},N _{1}∩N _{2}>, if C ^{ I }=<P _{1},N _{1}> and D ^{ I }=<P _{2},N _{2}>;

(∃R.C)^{ I }=<{x∣∃y,(x,y)∈proj^{+}(R ^{ I })∧y∈proj^{+}(C ^{ I })},
{x∣∀y,(x,y)∈proj^{+}(R ^{ I })→y∈proj^{−}(C ^{ I })}>;

(∀R.C)^{ I }=<{x∣∀y,(x,y)∈proj^{+}(R ^{ I })→y∈proj^{+}(C ^{ I })},
{x∣∃y,(x,y)∈proj^{+}(R ^{ I })∧y∈proj^{−}(C ^{ I })}>;

\((\overline {C})^{I} = \left < \{ x \mid \exists y, (x,y) \in R^{\sim } \wedge y \in \text {proj}^{+}(C^{I}) \}, \right.\)
{x∣∀y,(x,y)∈R ^{∼}→y∈proj^{−}(C ^{ I })}>;

\((\underline {C})^{I} = \left < \{ x \mid \forall y, (x,y) \in R^{\sim } \rightarrow y \in \text {proj}^{+}(C^{I}) \}, \right.\)
{x∣∃y,(x,y)∈R ^{∼}∧y∈proj^{−}(C ^{ I })}>.
Note that the conditions above for the role restrictions are described in a way that preserves the logical equivalences ¬(∀R.C)=∃R.(¬C) and ¬(∃R.C)=∀R.(¬C). This is the approach adopted in [17] to deal with role restrictions, and it allows a direct translation to \(\mathcal {ALC}\). Note that in this language, only the positive part of the interpretation of a role is required, because the language involves only atomic roles.

The truth value of assertion axioms under a four-valued interpretation I is determined as follows:

C ^{ I }(a)=t, iff a ^{ I }∈proj^{+}(C ^{ I }) and a ^{ I }∉proj^{−}(C ^{ I });

C ^{ I }(a)=f, iff a ^{ I }∉proj^{+}(C ^{ I }) and a ^{ I }∈proj^{−}(C ^{ I });

C ^{ I }(a)=i, iff a ^{ I }∈proj^{+}(C ^{ I }) and a ^{ I }∈proj^{−}(C ^{ I });

C ^{ I }(a)=u, iff a ^{ I }∉proj^{+}(C ^{ I }) and a ^{ I }∉proj^{−}(C ^{ I });

R ^{ I }(a,b)=t, iff (a ^{ I },b ^{ I })∈proj^{+}(R ^{ I }) and (a ^{ I },b ^{ I })∉proj^{−}(R ^{ I });

R ^{ I }(a,b)=f, iff (a ^{ I },b ^{ I })∉proj^{+}(R ^{ I }) and (a ^{ I },b ^{ I })∈proj^{−}(R ^{ I });

R ^{ I }(a,b)=i, iff (a ^{ I },b ^{ I })∈proj^{+}(R ^{ I }) and (a ^{ I },b ^{ I })∈proj^{−}(R ^{ I });

R ^{ I }(a,b)=u, iff (a ^{ I },b ^{ I })∉proj^{+}(R ^{ I }) and (a ^{ I },b ^{ I })∉proj^{−}(R ^{ I }).
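These eight conditions amount to reading membership in the positive and negative extensions independently; a minimal sketch with invented individuals:

```python
def truth_value(P, N, a):
    """Truth value of C(a) when C^I = <P, N>: membership in the positive
    extension P and the negative extension N is checked independently."""
    if a in P and a not in N:
        return "t"
    if a not in P and a in N:
        return "f"
    if a in P and a in N:
        return "i"
    return "u"

# Invented extensions: bob is asserted both to belong and not to belong.
P, N = {"ann", "bob"}, {"bob", "carl"}
assert truth_value(P, N, "ann") == "t"
assert truth_value(P, N, "bob") == "i"    # contradictory information
assert truth_value(P, N, "carl") == "f"
assert truth_value(P, N, "dan") == "u"    # no information either way
```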

Satisfaction of inclusion and assertion axioms by a four-valued interpretation I is defined as follows:

I⊧C↦D iff Δ ^{ I }∖proj^{−}(C ^{ I })⊆proj^{+}(D ^{ I });

\(I \models C \sqsubset D\) iff proj^{+}(C ^{ I })⊆proj^{+}(D ^{ I });

I⊧C→D iff proj^{+}(C ^{ I })⊆proj^{+}(D ^{ I }) and proj^{−}(D ^{ I })⊆proj^{−}(C ^{ I });

I⊧C(a) iff a ^{ I }∈proj^{+}(C ^{ I });

I⊧R(a,b) iff (a ^{ I },b ^{ I })∈proj^{+}(R ^{ I }).
We say that a four-valued interpretation I satisfies a knowledge base \(\mathcal {K}\) (i.e., I is a model of \(\mathcal {K}\)) if and only if it satisfies each inclusion and assertion axiom in \(\mathcal {K}\). A knowledge base \(\mathcal {K}\) is satisfiable (respectively, unsatisfiable) if and only if there exists (respectively, there does not exist) a model for \(\mathcal {K}\).
Considering the complexity of satisfiability of \(\mathcal {PR_{\textit {ALC}}}\), it was proved in [17] that the complexity of the satisfiability decision problem for the paraconsistent version of \(\mathcal {ALC}\) is equivalent to the complexity of the same problem for \(\mathcal {ALC}\). This result shows that paraconsistent reasoning is not more expressive than classical two-valued reasoning and that it can be simulated in two-valued \(\mathcal {ALC}\) without an increase in complexity. To show such a result, a polynomial translation of a paraconsistent knowledge base into a knowledge base of \(\mathcal {ALC}\) was described, which preserves all of its inference properties. According to this result, we can easily show that the satisfiability decision problem for \(\mathcal {PR_{\textit {ALC}}}\) has the same complexity as this problem for \(\mathcal {ALC}\), through the same translations presented in [11, 12, 17].
As in rough \(\mathcal {ALC}\), we can also define the contextual approximation related to \(\mathcal {PR_{\textit {ALC}}}\).
Definition 14.
(Four-valued contextual approximations) Let Σ be a context, C be a concept, I be a four-valued interpretation, and Sim be a similarity relation. The contextual upper and lower approximations of C with respect to Σ are defined as:

\((\overline {C}^{\text {Sim}_{\Sigma }})^{I} = \left < \{ x \mid \exists y, (x,y) \in \text {Sim}^{I}_{\Sigma } \wedge y \in \text {proj}^{+}(C^{I}) \}, \right.\)
\(\left. \{ x \mid \forall y, (x,y) \in \text {Sim}^{I}_{\Sigma } \rightarrow y \in \text {proj}^{-}(C^{I})\} \right >\),

\((\underline {C}_{\text {Sim}_{\Sigma }})^{I} = \left < \{ x \mid \forall y, (x,y) \in \text {Sim}^{I}_{\Sigma } \rightarrow y \in \text {proj}^{+}(C^{I}) \}, \right.\)
\(\left. \{ x \mid \exists y, (x,y) \in \text {Sim}^{I}_{\Sigma } \wedge y \in \text {proj}^{-}(C^{I})\} \right >\).
Furthermore, due to the possibility of representing a different notion of uncertainty (contradictory information), we can also develop different similarity relations between individuals. In particular, we will work here with a specific similarity relation described in [15]. But first, we need to slightly modify the definition of the projection function described before to adapt it to four-valued interpretations.
Definition 15.
where N _{ I } is the set of individuals contained in the knowledge base \(\mathcal {K}\).
Now the uncertainty of a concept can be modeled in two ways: as a contradiction or as unknown information. We can then define the similarity relation P_{ Σ }:
Definition 16.
In P_{ Σ }, it is assumed that information can be partially described because of our incomplete or contradictory knowledge [18]. From this point of view, an element a can be considered similar to an element b if the information contained in a is also contained in b. Thus, for a concept A such that \(\pi ^{\mathcal {K}}_{A} (a) = \textbf {t}\) and \(\pi ^{\mathcal {K}}_{A} (b) = \textbf {i}\), the individual a is similar to b because the truth value t is contained in i. Note that the reverse is not true: b is not similar to a according to P_{ Σ }, because not all information in b is contained in a. We highlight that the similarity relations introduced in this work are two-valued, but nothing prevents one from creating four-valued similarity relations.
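Since the formal statement of P_{ Σ } is elided above, the sketch below implements only the containment reading just described (u carries no information, i carries all of it); the projection table and the names are illustrative:

```python
def contained(v, w):
    """Belnap's knowledge ordering: the information in v is contained in w
    when v = w, v = u (no information), or w = i (all information)."""
    return v == w or v == "u" or w == "i"

def similar_P(pi, context, a, b):
    """a is P_Sigma-similar to b when, concept by concept over the context,
    the information about a is contained in the information about b.
    The relation is reflexive but, unlike indiscernibility, not symmetric."""
    return all(contained(pi[(a, A)], pi[(b, A)]) for A in context)

# Invented projection table for a single concept A:
pi = {("a", "A"): "t", ("b", "A"): "i", ("c", "A"): "u"}
assert similar_P(pi, {"A"}, "a", "b")       # t is contained in i
assert not similar_P(pi, {"A"}, "b", "a")   # the converse fails
assert similar_P(pi, {"A"}, "c", "a")       # u is contained in every value
```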
Example.
Methods
Contextual approximations were designed to optimize the automation of approximate reasoning, since the relations between individuals are discovered during the reasoning process. However, if we think about an automated query refinement process, the number of possible contexts is \(2^{|N_{C}|} - 1\), where N _{ C } is the set of atomic concepts. Moreover, most of these contexts can be redundant or may not satisfy a query refinement. In order to avoid this problem, in this section, we present a method based on the notions of discernibility and indiscernibility matrices [19] to compute contexts for lower and upper approximations.
Using approximations
The main problem found in query refinements with rough sets is to determine a set of concepts which satisfies a restriction (lower approximation) or a relaxation (upper approximation) of a concept. The following results will help us discover these appropriate sets.
Lemma 1.
Intuitively, Lemma 1 states that, by increasing the size of the context, the size of the interpretation of a concept increases for the lower approximation and decreases for the upper approximation. Therefore, in order to find a context satisfying the lower approximation of a concept C, only the minimal contexts satisfying C are needed, since all their supersets will also satisfy C. Analogously, in order to find contexts satisfying the upper approximation of C, only the maximal ones satisfying C will suffice. Finally, for loose and tight approximations, the following statements hold:
Proposition 1.
Loose upper approximations can be applied when there are no contexts which satisfy the upper approximation of a concept. In other words, a similarity relation of a higher step can be used to find a possible context. Similarly, a tight lower approximation can be applied to discover a set of concepts reinforcing the lower approximation, i.e., contexts which preserve the lower approximation in a similarity relation of a higher step. Note that the result for loose upper approximation does not change for the indiscernibility relation, since it is transitive and does not increase the size of the interpretation when it is applied successively (or does not decrease when the tight lower approximation is considered).
Contexts for lower approximations

Input: The set of concept names N _{ C }, an ABox \(\mathcal {A}\), a similarity relation Sim, a concept C, and an individual a.

Output: A nonempty context Σ⊆N _{ C }−atom(C) and a natural number n≥1 such that \(\mathcal {A} \models \underline {C}_{(\mathrm {\text {Sim}}_{\Sigma },n)}(a)\).
The function atom(C) returns the set of atomic concept names contained in the concept C. We emphasize that, in this paper, the problem of query refinement is restricted to ABoxes and atomic concept assertions (i.e., we assume an empty TBox and do not allow complex concepts in the ABox). One of the problems related to applications of rough set methods is whether the whole set of attributes (concepts) is necessary and, if not, how to determine the simplified and still sufficient subset of attributes that preserves the distinguishability information of the original one, called a reduct. For a knowledge base in \(\mathcal {ALC}\), the reducts are determined by the minimal sets of concepts that preserve the discernibility of all individuals from one another. A resulting reduct is, therefore, a minimal set of concepts inducing the same indiscernibility on the universe as the whole set of concepts does.
In the rough set theory, the computation of all types of reducts is based on discernibility matrices [19]. Such matrices are constructed from the discernibility relation. In this paper, we consider dissimilarity instead [21], because we are working with the notion of similarity. We highlight that a dissimilarity relation can be viewed as the complement of a similarity relation.
Definition 17.
Intuitively, DIS(Σ,x,y,Sim) describes the set of all concepts in Σ for which the individual x is not similar to y with respect to Sim. To evaluate this set of concepts, we will define a Boolean function f(Σ,x,y,Sim), called the dissimilarity function, that returns the set of reducts.
Definition 18.
By Lemma 1, the intersection of all dissimilarities of the individual x is calculated, since we are interested only in minimal contexts. In order to find a context for an assertion axiom of the form A(x), the function f(N _{ C }−{A},A,x,Sim) can be used. From the point of view of rough sets, A is not considered in the context because it is treated as the decision attribute, while N _{ C }−{A} are the conditional attributes.
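Under this discernibility-matrix reading, and with the DIS sets supplied as plain data (their formal definition is elided above), the minimal contexts are exactly the minimal sets of concepts that intersect every nonempty DIS set; a brute-force sketch for illustration only:

```python
from itertools import chain, combinations

def contexts_for_lower(dis_sets):
    """Minimal contexts for a lower approximation: each nonempty DIS set is a
    disjunctive clause of the dissimilarity function f, so a context must pick
    at least one concept from every DIS set; only minimal such sets (the
    reducts) are kept. Enumerates subsets smallest first."""
    atoms = sorted(set(chain.from_iterable(dis_sets)))
    minimal = []
    for r in range(1, len(atoms) + 1):
        for ctx in map(set, combinations(atoms, r)):
            if any(m <= ctx for m in minimal):
                continue                      # a smaller context already works
            if all(d & ctx for d in dis_sets):
                minimal.append(ctx)
    return minimal

# DIS sets of one individual against all discernible ones (invented data,
# loosely following the GL/B/F concept names used in the paper's examples):
dis = [{"GL", "B"}, {"B"}, {"B", "F"}]
assert contexts_for_lower(dis) == [{"B"}]     # B alone hits every DIS set
```

The exponential enumeration matches the complexity discussion later: computing all reducts is intrinsically expensive.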
Example.
This empty result shows that no context satisfying the lower approximation of Medium for x _{3} under the similarity relation P exists. Intuitively, we can say that when we permit similarities of the values of x _{3} with contradictory information, there will be an individual similar to x _{3} that does not satisfy the concept Medium: in this case, the individual x _{6}.
Contexts for upper approximations

Input: The set of concept names N _{ C }, an ABox \(\mathcal {A}\), a similarity relation Sim, a concept C, and an individual a.

Output: A nonempty context Σ⊆N _{ C }−atom(C) and a natural number n≥1 such that \(\mathcal {A} \models \overline {C}^{(\mathrm {\text {Sim}}_{\Sigma },n)}(a)\).
Unlike the case of the lower approximation, we now consider the notion of a similarity matrix instead of a dissimilarity matrix. The motivation behind the search for contexts for an upper approximation comes from the following idea: when an assertion axiom is not satisfied by a knowledge base, we need to find individuals satisfying the assertion of the consulted concept that share some similarities with the individual of the original query. These similarities will characterize the context. Thus, we calculate the maximal sets of concepts preserving similarity.
Definition 19.
The matrix SIM(Σ,x,y,Sim) describes the set of all concepts in Σ for which an individual x is similar to y with respect to Sim. To evaluate this set of concepts, we define the Boolean function g(Σ,x,y,Sim), called the similarity function.
Definition 20.
Now, we are not interested in finding the minimal sets of concepts (reducts). By Lemma 1, we are searching for the maximal sets of concepts, so the disjunction of all similarities of the individual x is performed.
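This maximal-set reading can be sketched in the same style, with the SIM sets supplied as plain data (their formal definition is elided above):

```python
def contexts_for_upper(sim_sets):
    """Maximal contexts for an upper approximation: each SIM set is a
    conjunctive clause of the similarity function g, the clauses are combined
    disjunctively, and by Lemma 1 only the maximal sets are kept."""
    sets = []
    for s in map(set, sim_sets):
        if s and s not in sets:               # drop empty and duplicate sets
            sets.append(s)
    # keep only sets not strictly contained in another kept set
    return [s for s in sets if not any(s < other for other in sets)]

# SIM sets of the queried individual against individuals satisfying the
# consulted concept (invented data): {B} is absorbed by the larger {B, F}.
sim = [{"B"}, {"B", "F"}]
assert contexts_for_upper(sim) == [{"B", "F"}]
```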
Example.
The outcome is the same as that obtained with the relations R and S. However, differently from them, \((\bigwedge \text {SIM}(\Sigma,x_{7},x_{6},\mathrm {P})) = (\textit {B} \wedge \textit {F})\). This implies that there is some evidence of similarity with contradictions in P, but in this case, this evidence is redundant, because it does not change the final result.
Results and discussion
Now, we present the algorithms to find contexts for query refinements. They consist of searching for minimal sets of concepts (for query restrictions) or maximal sets of concepts (for query relaxations). If no result is found, then the process is repeated taking into account a similarity relation of a higher step. The algorithms finish when a context is found (which will be the answer to the problem) or when they have searched all possible k-step relations and no result is returned (in this case, an empty set will be the answer to the problem). In the sequel, we explain the algorithms and show their complexities.
We assume in the problem that a formula is represented in disjunctive normal form (DNF) if it is the input of Algorithms 1–2, or in conjunctive normal form (CNF) if it is the input of Algorithm 3. For instance, S={{A _{1},A _{2}},{A _{2},A _{3}}} can be treated as S=(A _{1}∧A _{2})∨(A _{2}∧A _{3}) (if it is the input of Algorithms 1–2) or S=(A _{1}∨A _{2})∧(A _{2}∨A _{3}) (if it is the input of Algorithm 3). Algorithm 1 simplifies a formula in DNF by removing redundant clauses. This is done by employing the absorption law, i.e., (a∧b)∨a≡a, which is performed in the function Extract (lines 6 and 10). Only this law is needed for the simplification, since the input of the algorithm contains only atomic concepts. The function Extract simply removes a clause from a set of clauses.
The function SimplifyDNF2(S,n) (Algorithm 2) follows the idea of Algorithm 1 but, instead of the absorption law, applies the rule (a∧b)∨a≡(a∧b) in order to represent maximal contexts. Thus, SimplifyDNF2(S,n) is obtained by exchanging lines 5 and 9 of Algorithm 1.
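Both simplifications can be sketched as a single routine over clause sets, parameterized by which direction of absorption is applied (a set-based sketch of the idea, not the paper's pseudocode):

```python
def simplify_dnf(clauses, keep="minimal"):
    """Absorption-style simplification of a DNF given as a list of clause
    sets. keep="minimal" applies (a and b) or a == a and keeps only minimal
    clauses (as in Algorithm 1); keep="maximal" keeps only maximal clauses,
    mirroring SimplifyDNF2."""
    absorbs = (lambda c, d: d <= c) if keep == "minimal" else (lambda c, d: c <= d)
    result = []
    for c in map(set, clauses):
        if any(absorbs(c, d) for d in result):
            continue                           # c is absorbed by a kept clause
        result = [d for d in result if not absorbs(d, c)]  # c absorbs these
        result.append(c)
    return result

S = [{"A1", "A2"}, {"A2"}, {"A2", "A3"}]
assert simplify_dnf(S, "minimal") == [{"A2"}]
assert simplify_dnf(S, "maximal") == [{"A1", "A2"}, {"A2", "A3"}]
```

Each clause is compared against the kept ones, matching the quadratic loop structure behind Theorem 1.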
Theorem 1.
The time complexities of SimplifyDNF(S,n) and SimplifyDNF2(S,n) are O(N _{ C }·n ^{2}), where n is the number of clauses of the disjunctive normal form formula received as input and N _{ C } is the number of atomic concepts.
Proof.
Both algorithms have two nested while loops, each bounded by n, the number of clauses in the input DNF formula (thus, O(n ^{2})). The complexity of Extract has a linear upper bound in the size of the formula, i.e., O(N _{ C }). Therefore, the complexity of both SimplifyDNF(S,n) and SimplifyDNF2(S,n) is O(N _{ C }.n ^{2}).
Theorem 2.
The time complexity of SimplifyCNF(S,n) is \(O(2^{N_{C}})\).
Proof.
Algorithm 3 simplifies a formula S in CNF by removing redundant clauses, similarly to Algorithm 1. This is achieved by translating the CNF formula into a DNF formula via distribution rules (Distribute) and then applying the function SimplifyDNF. The complexity of translating a CNF formula into DNF is O(2^{ n }) [22], where n is the number of distinct variables of the CNF formula. The complexity of line 1 is therefore \(O(2^{N_{C}})\), where N _{ C } is the number of atomic concepts and an upper bound on the number of variables that can occur in the CNF formula. The complexity of line 2 is O(N _{ C }.n ^{2}), as shown in Theorem 1. Therefore, the complexity of Algorithm 3 is \(O(2^{N_{C}})\).
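A naive distribution step can be sketched as follows. This is a hypothetical helper of ours, not the paper's Distribute; it makes the exponential blow-up of Theorem 2 concrete, since every DNF clause picks one atom from each CNF clause:

```python
from itertools import product


def cnf_to_dnf(cnf):
    """Distribute a CNF formula (list of sets of atoms) into DNF.

    Each DNF clause picks one atom from every CNF clause, so up to
    exponentially many clauses can be produced.
    """
    dnf = []
    for choice in product(*[sorted(c) for c in cnf]):
        clause = set(choice)
        if clause not in dnf:
            dnf.append(clause)
    return dnf


def simplify_cnf(cnf):
    """Distribute, then remove absorbed clauses: (a ∧ b) ∨ a ≡ a."""
    dnf = cnf_to_dnf(cnf)
    return [c for c in dnf if not any(o < c for o in dnf)]
```

For S=(A _{1}∨A _{2})∧(A _{2}∨A _{3}), distribution yields four DNF clauses, and absorption leaves (A _{1}∧A _{3})∨(A _{2}).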
Algorithm 4 searches for sets of contexts for lower approximations. In other words, it implements the dissimilarity function f(Σ,C,x,Sim). First, the dissimilarity matrix of a specific individual with respect to the similarity relation Sim is computed (lines 8–18). Then, the dissimilarity function is calculated through SimplifyCNF (line 19).
If the result is nonempty, then it will be the solution of the problem. Otherwise, the procedure is restarted with a similarity relation of a higher step. The answer to the problem is given by the first k-step relation returning a nonempty set. If all k-step relations return empty sets, then the solution will be empty (i.e., there are no contexts that solve the problem).
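This outer search over k-step relations can be sketched as a simple driver loop. In this hypothetical skeleton, contexts_at_step(k) stands in for building the dissimilarity matrix for the k-step relation and simplifying it, and max_step for the bound on the number of k-step relations (N _{ I }):

```python
def find_contexts(max_step, contexts_at_step):
    """Try k-step relations in increasing order of k and return the
    first nonempty set of contexts; return [] if every step fails."""
    for k in range(1, max_step + 1):
        contexts = contexts_at_step(k)
        if contexts:
            return contexts  # first nonempty result is the answer
    return []  # no k-step relation yields a context
```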
For instance, taking an example from the previous subsection, f({GL, B, F},M,x _{3},S) will find the contexts B, (GL∨B), and B in the first iteration through the dissimilarity matrix. After simplification, the context B is obtained. Since this result is nonempty, it will be the answer to the problem.
Theorem 3.
Determining if there exist contexts for lower approximations of an ABox assertion in paraconsistent rough \(\mathcal {ALC}\) is in EXPTIME.
Proof.
By Algorithm 4, the complexity of the for all loop (lines 8–18) takes into account the complexity of logical consequence of an assertion axiom in \(\mathcal {PR_{\textit {ALC}}}\) and the computation of the dissimilarity matrix DIS. By definition, DIS also depends on the problem of logical consequence in \(\mathcal {PR_{\textit {ALC}}}\), i.e., its complexity is PSPACE. Thus, lines 8–18 take polynomial space. As pointed out in Proposition 1, tight and loose approximations are monotonic, and since we are dealing with finite sets of individuals, there is a finite number of k-step relations, bounded by N _{ I }. Therefore, the while loop (line 5) performs at most O(N _{ I }) iterations. Lastly, SimplifyCNF(S,n) has a time complexity of \(O(2^{N_{C}})\), which dominates the other steps and yields an exponential time upper bound.
Algorithm 5 follows the rationale of Algorithm 4, but it constructs the similarity matrix SIM instead. After that, the algorithm computes the similarity function through SimplifyDNF2. Considering the example g({GL, B, F},E,x _{7},R) from the previous subsection, the algorithm will discover the contexts (B∧F) and F in the first iteration. The simplification then results in (B∧F), which will be the answer of the query relaxation.
Theorem 4.
Determining if there exist contexts for upper approximations of an ABox assertion in paraconsistent rough \(\mathcal {ALC}\) is in PSPACE.
Proof.
As in Algorithm 4, the construction of the similarity matrix SIM takes into account the logical consequence problem in \(\mathcal {PR_{\textit {ALC}}}\) (lines 8–14) and has polynomial space complexity. Likewise, the number of k-step relations is bounded by O(N _{ I }) (line 5). The time complexity of SimplifyDNF2(S,n) is polynomial, so the overall complexity of the algorithm is dominated by the PSPACE logical consequence problem in \(\mathcal {PR_{\textit {ALC}}}\).
Conclusions
In this paper, we defined techniques for handling query refinements of assertion axioms in \(\mathcal {PR_{\textit {ALC}}}\) by employing the notions of contextual approximation and similarity relations, which exploit the presence of unknown and inconsistent information. We also showed a method to compute optimized contexts for these queries based on the notion of reducts from rough set theory. The problem of query restrictions using contextual approximation was proved to have exponential time complexity, while the problem of query relaxations has polynomial space complexity.
As future work, we will investigate ways of choosing the most representative contexts, such as approaches involving measures of inconsistency or information, since the method presented in this paper concerns only minimal or maximal contexts. Some approaches to deal with this problem can be found in [21, 23]. Another possible line of research is to extend the application of query refinement to complex assertion axioms (C(a) for an arbitrary concept C) as well as to terminological axioms, i.e., axioms of the form \(C \sqsubseteq D\).
Declarations
Acknowledgements
This research is partially supported by CNPq (305980/2013-0, 301607/2010-9, 474821/2012-9, 482481/2010-2), CAPES (PROCAD/NF 789/2010), and CNPq/CAPES (552578/2011-8).
References
1. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11: 341–356.
2. Schaerf M, Cadoli M (1995) Tractable reasoning via approximation. Artif Intell 74: 249–310.
3. Stuckenschmidt H (2007) Partial matchmaking using approximate subsumption. In: AAAI’07, 1459–1464. AAAI Press, Vancouver, British Columbia, Canada.
4. Viana H, Alcântara J, Martins AT (2011) Paraconsistent rough description logic. CEUR-WS.org, Barcelona, Spain.
5. Cornelis C, De Cock M, Radzikowska AM (2008) Fuzzy rough sets: from theory into practice. In: Pedrycz W, Skowron A, Kreinovich V (eds) Handbook of Granular Computing, 533–552. John Wiley & Sons, Inc., New York, NY, USA.
6. Fanizzi N, d’Amato C, Esposito F, Lukasiewicz T (2008) Representing uncertain concepts in rough description logics via contextual indiscernibility relations. In: URSW, Karlsruhe, Germany. Springer, Berlin Heidelberg.
7. Viana H, Alcântara J, Martins AT (2013) Searching contexts in rough description logics. In: 2013 Brazilian Conference on Intelligent Systems (BRACIS), 163–168.
8. Belnap ND (1977) A useful four-valued logic. In: Dunn JM, Epstein G (eds) Modern Uses of Multiple-Valued Logic, 8–37. D. Reidel, Springer Netherlands.
9. Peñaloza R, Zou T (2013) Roughening the \(\mathcal {EL}\) envelope. In: Frontiers of Combining Systems (FroCoS 2013), 71–86. Springer, Berlin Heidelberg.
10. Jiang Y, Wang J, Tang S, Xiao B (2009) Reasoning with rough description logics: an approximate concepts approach. Inf Sci 179(5): 600–612.
11. Keet CM (2010) On the feasibility of description logic knowledge bases with rough concepts and vague instances. In: Description Logics, Waterloo, Canada.
12. Schlobach S, Klein M (2007) Description logics with approximate definitions: precise modeling of vague concepts. In: IJCAI’07, Hyderabad, India, 557–562.
13. Baader F (2003) The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, New York, NY, USA.
14. Komorowski J, Pawlak Z, Polkowski L, Skowron A (1998) Rough sets: a tutorial. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.2477.
15. Grzymala-Busse JW (2006) A rough set approach to data with missing attribute values. In: RSKT, Chongqing, China, 58–67. Springer, Berlin Heidelberg.
16. Wu W, Zhang W (2002) Neighborhood operator systems and approximations. Inf Sci 144(1–4): 201–217.
17. Ma Y, Hitzler P, Lin Z (2006) Paraconsistent reasoning with OWL — algorithms and the ParOWL reasoner. Technical report, AIFB, University of Karlsruhe, Germany.
18. Grzymala-Busse JW (2006) Rough set strategies to data with missing attribute values. In: Foundations and Novel Approaches in Data Mining, 197–212. Springer, Berlin Heidelberg.
19. Skowron A, Rauszer C (1992) The discernibility matrices and functions in information systems. In: Słowiński R (ed) Handbook of Applications and Advances of the Rough Sets Theory. Kluwer, Dordrecht.
20. Viana H (2012) Refinamento de Consultas em Lógicas de Descrição Utilizando a Teoria dos Rough Sets. http://mdcc.ufc.br/teses/doc_download/183161henriquevianaoliveira.
21. Stepaniuk J (1998) Approximation spaces, reducts and representatives. In: Polkowski L, Skowron A (eds) Rough Sets in Knowledge Discovery, Part I and II, 109–126. Physica-Verlag, Heidelberg.
22. Miltersen PB, Radhakrishnan J, Wegener I (2005) On converting CNF to DNF. Theor Comput Sci 347(1–2): 325–335.
23. Nguyen HS (2006) Approximate Boolean reasoning: foundations and applications in data mining. Trans Rough Sets V 4100: 334–506.
Copyright
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.