Multiple different natural language processing tasks in a single deep model

What can our model do?

Our model first analyzes the grammatical roles (parts of speech) of the words in the sentence. A part-of-speech tag is assigned to each word; for example, the tag NN means that the corresponding word is a noun. This is one of the most basic and fundamental NLP tasks.
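For readers who want to see what part-of-speech tags look like in practice, here is a minimal sketch using the off-the-shelf spaCy library (not our model); the example sentence and the en_core_web_sm model name are assumptions for illustration only.

```python
# Minimal POS-tagging sketch using spaCy (an off-the-shelf library,
# not the joint model described in this article).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("The ball is being dunked by a man with a jersey at a basketball game.")

for token in doc:
    # token.tag_ holds a Penn Treebank-style tag, e.g. "NN" for a singular noun
    print(token.text, token.tag_)
```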
Next, the syntactic phrases (chunks) in the sentence are identified; for example, the tag NP means that the corresponding span is a noun phrase. Going beyond the word level, this task handles syntactically motivated phrases.
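As a rough analogue of chunking, spaCy exposes noun-phrase spans directly; the sketch below covers only NP chunks (full chunking also labels verb phrases, prepositional phrases, and so on) and, again, is not our model.

```python
# Noun-phrase chunking sketch with spaCy; only NP chunks are shown here.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The ball is being dunked by a man with a jersey at a basketball game.")

for chunk in doc.noun_chunks:
    print(chunk.text, "-> NP")
```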
Subsequently, the syntactic relationships (dependencies) between the words are identified by using the previously analyzed results; for example, the arrow from the symbol "ROOT" to the word "dunking" means that "dunking" plays the central role in the sentence, with the word "man" as the subject (nsubj) of the action and the word "ball" as its object (dobj). This task is fundamental for analyzing sentence structures and for detecting relationships between words.
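A dependency parse of this kind can also be inspected with spaCy; the sketch below uses an off-the-shelf parser on an illustrative sentence, so its labels may differ slightly from the analysis described above.

```python
# Dependency-parsing sketch with spaCy: each token points to its head word
# through a labeled relation such as nsubj (subject) or dobj (direct object).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The ball is being dunked by a man with a jersey at a basketball game.")

for token in doc:
    # The root of the sentence carries the label "ROOT" and is its own head.
    print(f"{token.dep_}({token.head.text}, {token.text})")
```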
Now that our model has finished syntactically analyzing the single sentence, it is time to go beyond single-sentence tasks. In the next phases, our model tries to understand semantic relationships between two sentences. Let us denote the example sentence above as sentence A and take another sentence B:
"The ball is being dunked by a man with a jersey at a basketball game".
In the semantic tasks, our model predicts how semantically related the two sentences are, using a real-valued score ranging from 1 to 5; the higher the score, the more semantically related the sentence pair is. The score for sentences A and B is 4.9 because the two sentences describe the same situation. In addition, our model can recognize whether sentence A entails sentence B or not. Here, our model says that sentence A does entail sentence B.
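To give a concrete feel for sentence-pair relatedness, here is a rough sketch using off-the-shelf sentence embeddings and cosine similarity. This is not our model, cosine similarity is not on the 1-to-5 scale, and the first sentence below is only a hypothetical stand-in for sentence A.

```python
# Rough relatedness sketch with sentence-transformers (not the joint model):
# cosine similarity of sentence embeddings, which only loosely mirrors the
# 1-to-5 relatedness scale discussed above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed pre-trained model name

sentence_a = "A man with a jersey is dunking the ball at a basketball game."  # hypothetical stand-in for sentence A
sentence_b = "The ball is being dunked by a man with a jersey at a basketball game."

emb_a, emb_b = model.encode([sentence_a, sentence_b])
print(f"cosine similarity: {util.cos_sim(emb_a, emb_b).item():.2f}")  # close to 1.0 for near-paraphrases
```

Entailment, by contrast, is usually framed as a classification over the sentence pair rather than a similarity score, so a similarity number like the one above cannot by itself tell you whether A entails B.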