Abstract: Universal properties of language have long been a central concern of traditional linguistics. In recent years, studies have increasingly integrated multiple disciplines and methods, e.g. cognitive science, network science, big data analysis and quantitative techniques. So far, surveys of large-scale cross-linguistic material have indicated that human languages tend toward dependency distance minimization. This tendency suggests that, although human languages differ in pronunciation, vocabulary, grammar, etc., their syntax may be bound by universal mechanisms, and their evolution may also follow a universal model.
Dependency distance, defined as the linear distance between two syntactically related words, can reflect the difficulty of comprehending a syntactic structure. Therefore, dependency distance minimization is considered to result from cognitive mechanisms and from the effect of "the principle of least effort" on syntactic structure. It also suggests that humans prefer to avoid long-distance dependencies in order to reduce cognitive cost. As a result, the distribution of dependency distance may exhibit a certain pattern. Revealing this pattern will help us understand how human cognitive mechanisms shape syntactic structure.
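To make the definition concrete, here is a minimal sketch, not taken from the paper, of how dependency distances could be computed from a dependency-annotated sentence; the example sentence, its head indices, and the function name are all hypothetical, and the root relation is excluded by convention.

```python
# A minimal sketch (not from the paper) of computing dependency distances.
# heads[i] is the 1-indexed head position of token i+1; 0 marks the root,
# whose relation is conventionally excluded from the distance counts.

def dependency_distances(heads):
    return [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]

# "The boy who lives nearby smiled" -- hypothetical head indices.
heads = [2, 6, 4, 2, 4, 0]
d = dependency_distances(heads)
print(d)                # [1, 4, 1, 2, 1]
print(sum(d) / len(d))  # mean dependency distance: 1.8
```

The long-distance dependency between "boy" and "smiled" (distance 4) is exactly the kind of link that dependency distance minimization predicts speakers will tend to avoid.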
But the question remains: which probability distribution fits the pattern of dependency distance distribution better, the power law distribution or the exponential distribution? To find out, this paper uses the following methods and materials to analyze dependency distance distributions (a sketch of steps 1 and 2 follows below): 1) the Complementary Cumulative Distribution Function (CCDF) is used to smooth the data, avoid statistical fluctuation, and lower fitting error; 2) maximum likelihood estimation and likelihood ratio tests are used to fit and compare five kinds of "heavy-tailed" distributions, including the exponential and the power law; 3) the HamleDT 2.0 dependency treebank collection is adopted, especially the language material annotated in the Prague Dependency style, because this annotation scheme is closest to traditional dependency grammar and more helpful for uncovering the regularities of language structure.
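The paper does not name its fitting software; as one possible illustration, the Python powerlaw package (Alstott et al.), which implements maximum likelihood fitting and likelihood ratio comparison over exactly these candidate distributions, could carry out steps 1 and 2 roughly as follows (toy data and variable names are hypothetical).

```python
# A sketch assuming the Python "powerlaw" package, which implements the
# Clauset-Shalizi-Newman fitting procedure; the paper does not name its tools.
import powerlaw

# `distances` stands in for the pooled dependency distances of one treebank.
distances = [1, 1, 2, 1, 3, 2, 1, 5, 2, 8, 1, 2, 13, 3, 1]

# Maximum likelihood fit over discrete data; xmin=1 keeps all distances.
fit = powerlaw.Fit(distances, discrete=True, xmin=1)

# Likelihood ratio tests: R > 0 favors the first candidate; p gives significance.
for candidate in ["exponential", "stretched_exponential", "truncated_power_law"]:
    R, p = fit.distribution_compare("power_law", candidate)
    print(f"power_law vs {candidate}: R = {R:.3f}, p = {p:.3f}")

# The empirical CCDF used for smoothing can be plotted directly.
fit.plot_ccdf()
```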
With these methods and materials, this research analyzes dependency treebanks of 30 languages and summarizes the following findings: 1) the Complementary Cumulative Distribution Function indicates that the distribution of dependency distance in human languages shows certain regularities; 2) for the majority of the 30 languages, the distribution of dependency distance conforms to particular models, namely the Stretched Exponential Distribution (SED) for "short sentences" and the Truncated Power Law Distribution (TPLD) for "long sentences" (standard forms of both are given below); 3) although dependency distance distribution patterns differ among languages, they all fit, in essence, a mixture of exponential and power law distributions; 4) the debate over exponential versus power law distributions may mainly be caused by differences in fitting methods, languages, sentence lengths, text sizes, etc. These findings will help us better understand the nature of dependency distance minimization. They may also reveal that dependency distance in human languages abides by a certain universal distribution pattern. At the same time, the findings may contribute significantly to constructing the syntactic synergetic subsystem within the framework of dependency grammar.
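For reference, standard textbook forms of the two named distributions are sketched below; these parameterizations follow common usage (e.g. in the powerlaw package) and are not quoted from the paper itself.

```latex
% Standard parameterizations (assumed, not taken from the paper):
% the SED is a stretched (Weibull-type) exponential, and the TPLD is a
% power law with an exponential cutoff, for dependency distance x.
\[
  p_{\mathrm{SED}}(x) \propto x^{\beta-1}\, e^{-\lambda x^{\beta}}, \qquad
  p_{\mathrm{TPLD}}(x) \propto x^{-\alpha}\, e^{-\lambda x}.
\]
```

Both densities interpolate between exponential-like and power-law-like decay, which is consistent with the paper's conclusion that dependency distance follows, in essence, a mixed exponential and power law pattern.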