What is Textual Entailment?

Textual entailment has recently been proposed as a common solution for modelling language variability across different NLP tasks [Dagan, 2004]. Textual Entailment is formally defined as a relationship between a coherent text T and a language expression, the hypothesis H. T is said to entail H (T → H) if the meaning of H can be inferred from the meaning of T. An entailment function e(T,H) thus maps an entailment pair T-H to a truth value (i.e., true if the relationship holds, false otherwise). Alternatively, e(T,H) can also be interpreted as a probabilistic function mapping the pair T-H to a real value between 0 and 1, expressing the confidence with which a human judge or an automatic system estimates that the relationship holds.
For example, "Yahoo acquired Overture" entails "Yahoo owns Overture".
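The probabilistic view of e(T,H) can be sketched in code. The scoring logic below is a hypothetical word-overlap baseline invented for illustration, not a real entailment system: it returns the fraction of hypothesis words that also appear in the text as a confidence in [0, 1], and applies a threshold to obtain a truth value.

```python
# A minimal sketch of the entailment function e(T, H) described above.
# The word-overlap score is an illustrative stand-in for a real model.

def e(text: str, hypothesis: str, threshold: float = 0.5):
    """Return (holds, confidence) for the pair T-H."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    # Confidence = fraction of hypothesis words covered by the text.
    confidence = len(h_words & t_words) / len(h_words) if h_words else 0.0
    return confidence >= threshold, confidence

holds, conf = e("Yahoo acquired Overture", "Yahoo owns Overture")
```

On the example pair, two of the three hypothesis words (Yahoo, Overture) occur in the text, so the sketch reports a confidence of about 0.67 and judges the relation to hold.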

Since the task involves natural language expressions, textual entailment is harder than logical entailment, as it hides two different problems: paraphrasing and what can be called strict entailment. Generally, the task is tackled under the simplifying assumption that the analysed text fragments represent facts (ft for those in the text and fh for those in the hypothesis) in an assertive or negative way.

Paraphrasing
The hypothesis H carries a fact fh that is also present in the target text T but is expressed with different words. For example, "Yahoo acquired Overture" is a paraphrase of "Yahoo bought Overture".

Strict Entailment
The target sentences carry different facts, but one can be inferred from the other. For example, strict entailment holds between "Yahoo acquired Overture" and "Yahoo owns Overture". In fact, the relation does not depend on a possible paraphrasing between the two expressions but on an entailment between the two facts governed by acquire and own.

In textual entailment, the only restriction on T and H is that they be meaningful and coherent linguistic expressions: simple text fragments, such as a noun phrase or single words, or complete sentences. In the first case, entailment can be verified simply by looking at synonymy or subsumption relations among words. For example, the entailment cat → animal holds, since the meaning of the hypothesis (an animal exists) can be inferred from the meaning of the text (a cat exists). In the latter case, deeper linguistic analyses are required, as sentential expressions express complex facts about the world: here is where Textual Entailment gets really interesting and complicated.
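The word-level case (cat → animal) can be checked by walking a subsumption hierarchy. The tiny hypernym table below is a hand-made stand-in for a lexical resource such as WordNet; a real system would query that resource instead of a hard-coded dictionary.

```python
# Toy is-a hierarchy standing in for a resource like WordNet.
HYPERNYMS = {
    "cat": "feline",
    "feline": "mammal",
    "mammal": "animal",
}

def word_entails(word: str, hypothesis: str) -> bool:
    """True if `hypothesis` equals `word` or one of its transitive hypernyms."""
    current = word
    while current is not None:
        if current == hypothesis:
            return True
        current = HYPERNYMS.get(current)  # climb one level up the hierarchy
    return False
```

Note that the relation is directional: `word_entails("cat", "animal")` holds, while `word_entails("animal", "cat")` does not, since a cat existing implies an animal exists but not vice versa.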

Whatever form textual entailment takes, the real research challenge consists in finding a substantial number of textual entailment prototype relations such as:

X acquired Y entails X owns Y
X acquired Y entails X bought Y

Such patterns can then be used to recognise entailment relations in texts.
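Applying such prototype relations can be sketched with simple template matching. The pattern pairs come from the examples above; the regex-based matching code is an illustrative implementation, not how production entailment systems actually work.

```python
import re

# Prototype entailment relations, encoded as (pattern, template) pairs:
# "X acquired Y entails X owns Y", "X acquired Y entails X bought Y".
PATTERNS = [
    (r"(?P<x>\w+) acquired (?P<y>\w+)", r"\g<x> owns \g<y>"),
    (r"(?P<x>\w+) acquired (?P<y>\w+)", r"\g<x> bought \g<y>"),
]

def entailed_forms(sentence: str):
    """Generate the hypotheses entailed by `sentence` via the pattern list."""
    results = []
    for lhs, rhs in PATTERNS:
        match = re.fullmatch(lhs, sentence)
        if match:
            results.append(match.expand(rhs))
    return results
```

For instance, `entailed_forms("Yahoo acquired Overture")` yields both "Yahoo owns Overture" and "Yahoo bought Overture".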

What is Textual Entailment good for?

Several applications, such as Question Answering (QA) and Information Extraction (IE), strongly rely on identifying text fragments that answer specific user information needs. For example, given the question:

"Who bought Overture?"

a QA system should be able to extract and return to the user forms like "Yahoo bought Overture", "Yahoo owns Overture", "Overture acquisition by Yahoo", all of which convey equivalent or inferable meanings. A huge amount of linguistic and semantic knowledge is needed in order to find equivalences and similarities at both the lexical and syntactic levels. Both the study of syntactic alternation and normalization phenomena, and the use of semantic and lexical resources such as WordNet, could be useful to disentangle the problem from a linguistic perspective.
In contrast, most applications adopt statistical approaches, looking at collocation and co-occurrence evidence while avoiding deeper and more complex linguistic analysis.
Whatever the approach, what the NLP community still lacks is a common framework in which to analyse, compare and evaluate these different techniques. Indeed, there is an emerging need to gather together research efforts and methodologies that share the underlying common goal of recognising equivalence/similarity among surface forms.
In this direction, Textual Entailment has been set up as a new framework whose aim is to capture the common core shared by most NLP applications.

What kind of knowledge do we need?

In order to disentangle the problem of Textual Entailment, different types of knowledge are needed. In fact, the entailment relation can be linguistically expressed at different levels: surface, lexical, syntactic, semantic or even pragmatic.
In this view, any type of linguistic, ontological or common-sense resource could be useful, such as WordNet, thesauri, domain ontologies, lexical-semantic databases, common-sense repositories, etc.

In our view, from an operational perspective, three types of entailment can then be defined:
  • semantic subsumption. T and H express the same fact, but the situation described in T is more specific than the situation in H. The specificity of T is expressed through one or more semantic operations. For example, in the sentential pair H:"the cat eats the mouse", T:"the cat devours the mouse", T is more specific than H, as eat is a semantic generalization of devour. Here, semantic resources could be best suited to detect entailment.

  • syntactic subsumption. T and H express the same fact, but the situation described in T is more specific than the situation in H. The specificity of T is expressed through one or more syntactic operations. For example, in the pair H:"the cat eats the mouse", T:"the cat eats the mouse in the garden", T contains a specializing prepositional phrase. Here, syntactic analysis and parsing should be of great help.

  • direct implication. H expresses a fact that is implied by a fact in T. For example, H:"The cat killed the mouse" is implied by T:"the cat devours the mouse", since killing can be assumed to be a precondition for devouring. This type of entailment requires deeper semantic and discourse analysis.
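The syntactic-subsumption case above lends itself to a rough bag-of-words heuristic: if every word of H also occurs in T and T contains extra material (such as the specializing prepositional phrase "in the garden"), then T describes a more specific situation and may entail H. This is only an illustrative sketch; real systems would compare parse trees rather than word sets.

```python
# Rough heuristic for syntactic subsumption: H's words are contained
# in T's words, and T carries strictly more material than H.

def syntactic_subsumes(text: str, hypothesis: str) -> bool:
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    return h_words <= t_words and h_words != t_words
```

On the example pair, T = "the cat eats the mouse in the garden" subsumes H = "the cat eats the mouse", while the reverse direction correctly fails.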