Links between information construction and information gain. Entropy and bibliometric distributions.

 

 

Thierry Lafouge

Christine Michel[1]

Laboratoire Recodoc

Université Claude Bernard Lyon 1, Bat 721

43 Bd du 11 novembre 1918

69622 Villeurbanne Cedex, France.

lafouge@enssib.fr

Christine.Michel@montaigne.u-bordeaux.fr

 

 

 

Keywords: bibliometric distribution / entropy / least effort law / information theory

 

The study of the statistical regularities observed in the production and use of information has confirmed the existence of important similarities. These regularities and measurable ratios make prediction possible and allow laws to be formulated. In the fifties, C. Shannon (Shannon and Weaver 1975) modeled the theory of information circulation. The entropy hypothesis of this theory is that the more ordered a system is, the less information it produces. Theoretical studies have tried to formalize the connection between bibliometric distributions and entropy. In this paper we try to extend previous results linking the "least effort principle" with the analytical shape of a bibliometric distribution. In the first and second parts we recall some basics about entropy and bibliometric distributions, and we then describe different links between them.

1.                       Recall about entropy

Let a source of information produce n random events of respective probabilities $p_1, p_2, \ldots, p_n$ where $\sum_{i=1}^{n} p_i = 1$. We call entropy of such a source the following function H (Caumel 1988):

$H(p_1, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i$

As in the systems studied in the physical sciences, the higher the entropy of the source, the less organized (and hence the less predictable) the system is. Shannon assumes that the more ordered a system is, the less information it produces. Thus the entropy H is at a maximum when all the events are equiprobable, that is to say when $p_i = 1/n$ for every i, and in this case $H = \log_2(n)$.

The logarithm to base two is often used because it is consistent with the binary coding of electronic information (the bit being defined as the maximum entropy of a binary random source).
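As a minimal numerical illustration (a sketch of ours, with arbitrary values; it is not part of the original formulation), the following Python fragment computes H for a discrete source and checks that the uniform, equiprobable distribution yields the largest value, $\log_2(n)$:

```python
import math
import random

def entropy(probs, base=2):
    """Shannon entropy H = -sum(p_i * log(p_i)) of a discrete source."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n                      # equiprobable events
print(entropy(uniform))                    # == log2(8) = 3 bits

# any other distribution over the same n events has a lower entropy
weights = [random.random() for _ in range(n)]
other = [w / sum(weights) for w in weights]
print(entropy(other) <= math.log2(n))      # True
```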

The definition of entropy can be extended and generalized to a continuous probability law.

In this case we no longer speak of a distribution but of the density function (denoted v) of a random phenomenon. Its entropy is defined by the function:

 

$H(v) = -\int v(t)\,\log v(t)\,dt$

 

In bibliometrics, the events usually studied are the production of papers or keywords, the borrowing of books, the citing of authors, and so on. The sources taken into account are then the authors, the bibliographic references and the books. These events are noteworthy because they exhibit statistical regularities. It is therefore interesting to observe, through the computation of entropy, how the quantity of information produced by these various sources varies according to the random processes that govern them.

2.                       Recall on bibliometric distributions

The classical bibliometric distributions are Lotka's law, related to the number of publications produced, Zipf's law, related to how often words appear in a text, and Bradford's law, related to the dispersion of articles across journals. Depending on the phenomenon studied, these laws are stated either in a frequency form or in a rank form.

2.1.                  Frequency (Lotka):

The frequency approach is the oldest one (Lotka 1926). The probability that an event appears is calculated from its frequency of occurrence. An example of a frequency distribution law is the law describing the output of scientific articles by researchers. Lotka suggests writing the distribution of the number of scientists who have written i publications as:

$f(i) = \dfrac{K}{i^{2}}, \quad i = 1, \ldots, i_{max}$, where $i_{max}$ is the maximum productivity of a scientist. This law is generalized by the formula $f(i) = \dfrac{K}{i^{a}}$, where K and a are constants depending on the field studied.
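As an illustrative sketch (the truncation at i_max and the choice of K by normalisation are assumptions of ours), the generalized Lotka form can be tabulated as follows:

```python
def lotka(a, i_max):
    """Proportions f(i) = K / i**a, with K fixed so that the proportions sum to one."""
    weights = [1 / i**a for i in range(1, i_max + 1)]
    K = 1 / sum(weights)
    return [K * w for w in weights]

f = lotka(a=2, i_max=50)
print(f[0])          # proportion of scientists with exactly 1 publication (~0.62)
print(sum(f))        # 1.0 (up to rounding)
```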

2.2.                  Rank (Zipf):

Stating a law in terms of rank implies that the information source has first been ranked according to its output. These rank distributions are used when ranking the source's production is necessary to bring out a regularity. The most characteristic example of a rank distribution is Zipf's law, which concerns how often words appear in English texts. By ranking word frequencies in decreasing order, Zipf observed an inversely proportional relationship between the rank of a word and its frequency of occurrence. He expressed this regularity with the following equation:

$g(r) = \dfrac{K}{r}$, where g(r) represents the frequency of the word of rank r.

Numerous works have shown equivalences between rank distributions and frequency distributions (Egghe 1988). The choice between the two presentations depends on the study one wants to carry out. In the study of keyword distributions, the rank distribution is chosen because it is the most meaningful (Quoniam 1992). In a recent publication (Lhen et al. 1995), in collaboration with the CRRM[2], we showed the relevance of generalized entropy theory (Rényi 1961) for handling such distributions.

2.3.                  The continuous case (Pareto):

Pareto's distribution plays in the continuous case the same role as Lotka's distribution in the discrete case. It is written as follows:

$v(t) = \dfrac{K}{t^{a}}$

with t varying from 1 to $+\infty$ instead of i varying from 1 to $i_{max}$.

 

Haitun (Haitun 1982) defines a Zipfian distribution with the following hyperbolic density function: $v(t) = \dfrac{C}{t^{1+a}}$, where t belongs to the interval $[1, +\infty[$ and where a and C are positive constants. If $a = 1$, we are in the well-known case of Lotka's law. All the mathematical properties of such distributions have been widely studied. S.D. Haitun contrasts this type of distribution with Gaussian ones.
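A quick numerical check (our own sketch; taking C = a is simply the value that normalises the density on $[1, +\infty[$) confirms that this hyperbolic density integrates to one and, when a > 1, has a finite mean:

```python
from scipy.integrate import quad

a = 2.0
C = a                                    # normalising constant: integral of C / t**(1+a) over [1, inf) is C / a
v = lambda t: C / t ** (1 + a)           # hyperbolic (Zipfian) density

print(quad(v, 1, float("inf"))[0])                      # ~1.0
print(quad(lambda t: t * v(t), 1, float("inf"))[0])     # mean a/(a-1) = 2.0, finite since a > 1
```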

2.4.                  The geometrical case

The geometric distribution is very often used to quantify regularities observed in bibliometrics, especially in the field of document use in libraries (Lafouge et al. 1997).

A geometric distribution is written as follows:

$f(i) = p\,q^{\,i-1}, \qquad i = 1, 2, \ldots \qquad (p = 1 - q)$

 

If the continuous equivalent of this distribution is written, the following exponential density is obtained: $v(t) = -\log(q)\,q^{\,t-1}$, $t \in [1, +\infty[$.
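As a small check (a sketch of ours; the constant $-\log(q)$ is our reading of the normalisation required on $[1, +\infty[$), the discrete geometric law and its continuous exponential analogue both sum, respectively integrate, to one:

```python
import math
from scipy.integrate import quad

p, q = 0.3, 0.7                                   # geometric parameters, p = 1 - q

geom_pmf = lambda i: p * q**(i - 1)               # discrete geometric law, i = 1, 2, ...
expo_pdf = lambda t: -math.log(q) * q**(t - 1)    # continuous (exponential) analogue on [1, +inf)

print(sum(geom_pmf(i) for i in range(1, 200)))    # ~1.0
print(quad(expo_pdf, 1, float("inf"))[0])         # ~1.0
```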

2.5.                  The negative binomial law

Another distribution, called the negative binomial distribution, is often used (Lafouge and Guinet 1999) to model use distributions.

It is written as follows:

$f(i) = \dbinom{i-1}{j-1}\,p^{\,j}\,q^{\,i-j}, \qquad i = j, j+1, \ldots$

where $p = 1 - q$ and j is a positive integer parameter.

If $j = 1$ we recover the equation of a geometric distribution. As far as we know, there is no law strictly equivalent to the negative binomial one in the continuous case.
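The reduction to the geometric law at j = 1 can be verified numerically; the sketch below uses the "number of trials" parametrization written above (chosen by us to match the geometric form, since the original formula is not reproduced here):

```python
from math import comb

def nbinom_pmf(i, j, q):
    """Negative binomial ('number of trials' form): P(X = i) for i = j, j+1, ..."""
    p = 1 - q
    return comb(i - 1, j - 1) * p**j * q**(i - j)

q = 0.6
# with j = 1 the law collapses to the geometric law p * q**(i - 1)
print(all(abs(nbinom_pmf(i, 1, q) - (1 - q) * q**(i - 1)) < 1e-12 for i in range(1, 60)))  # True
print(sum(nbinom_pmf(i, 3, q) for i in range(3, 400)))                                     # ~1.0
```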

3.                       Entropy and distribution

3.1.                  Problem

Let (F, I) be a bibliometric distribution where:

F represents the set of all the patterns referring to an identified bibliographic source, for example authors or publications. A pattern may also be a word, defined as a sequence of characters surrounded by separators, or several words.

I represents the set of all the items: each item is a positive number indicating the pattern's occurrence (its number of appearances).

All these values form the source output. We have previously seen that several representations of a distribution are possible. We suppose that this source is governed by a stable random process that can be observed, and that it is characterized by a function expressing the effort needed to produce the different items. The question studied is then the following: for a given quantity of effort, what is the connection between the random distribution of the source and the effort function when the quantity of information (in Shannon's sense) produced by the source is maximized? The technique used is called the MEP (Maximum Entropy Principle) and has already been used in other studies.

Kantor (Kantor 1998) presents an application where the MEP is used to improve information retrieval. Given the K index terms image of a query, Kantor considers the $2^K$ possible Boolean equations constructed from combinations of image. He treats each equation as a query and calls atom A(i) (i = 1, ..., $2^K$) the corresponding answering subset of documents. A(i) is used to represent both the logical combination of terms and the set of documents indexed by that logical combination.

We make the assumption that every document is either relevant or not. In an atom, the relevant documents take the value 1, the others the value 0. An atom A(i) is relevant with probability image and non-relevant with probability image. Knowing image (k = 1, ..., K), "the probability of relevance for documents indexed by term Tk", and image, "the probability of relevance for all documents", Kantor seeks to calculate the distribution of image which maximizes the entropy, and then presents to the user the first atoms, those corresponding to the highest image. The mathematical formalization of this problem is:

 

Find a positive (condition 1) distribution image respecting (condition 2) image,

and which maximizes the entropy image

subject to (condition 3') image, where image for each k,

and (condition 3'') image.

The results of the evaluation study show that the MEP method is useful in IR for small collections but not for large ones. The final discussion argues that this method may be improved by taking into account finer criteria than "presence or absence of terms in a document" to estimate document relevance.
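For readers who want to experiment, a MEP problem with linear constraints can be solved numerically. The sketch below is a generic toy version with made-up constraint values of our own; it is not Kantor's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Generic Maximum Entropy Principle sketch: find the distribution x over N "atoms"
# that maximizes H(x) = -sum(x * log x) subject to linear constraints A @ x = b.
N = 4
A = np.array([[1, 1, 1, 1],        # normalisation: probabilities sum to 1
              [1, 1, 0, 0],        # illustrative constraint, e.g. mass of atoms indexed by term T1
              [1, 0, 1, 0]])       # illustrative constraint, e.g. mass of atoms indexed by term T2
b = np.array([1.0, 0.5, 0.4])

neg_entropy = lambda x: np.sum(x * np.log(np.clip(x, 1e-12, None)))

result = minimize(neg_entropy,
                  x0=np.full(N, 1 / N),
                  bounds=[(0, 1)] * N,
                  constraints={"type": "eq", "fun": lambda x: A @ x - b})
print(result.x)       # maximum-entropy distribution satisfying the constraints: [0.2, 0.3, 0.2, 0.3]
```

The constrained solution is the flattest distribution compatible with the marginal information supplied, which is exactly the spirit of the MEP.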

In the case of continuous distributions, Yablonsky (Yablonsky 1980) used the MEP to find the distribution associated with the "least effort principle" in the case of scientific article production. The effort necessary to produce an article is modeled by the function E, defined by image, where k is a positive constant.

The effort made to produce the first article is image; it is called the "minimum state of the scientist". Producing the second article, image, requires less additional effort from the scientist, and so on.

 

The aim is to find a positive (condition 1) density function v(t) on the interval [a, +∞[, image, respecting (condition 2) image (v is a density function),

and which maximizes the entropy: image

subject to (condition 3) image (effort constraint).

 

This model is known as the "least effort law": the density function which results from maximizing the entropy under an effort constraint is the Zipfian function.

Indeed, the entropy maximization for the effort function image is obtained for a density function whose analytical shape is: image image, where image image.

The calculation of the entropy as a function of a gives: image

Since a is positive by definition, it is easy to show that the entropy is a decreasing function of a. We thus recover the classical interpretation of Lotka's law: the higher a is, the bigger the gap between the number of scientists who produce a lot and the number who produce a little (knowing that there are few scientists who produce a lot compared to the number who produce a little).
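This behaviour can be checked numerically. The sketch below takes the Zipfian density in the normalised form $v(t) = a\,t^{-(a+1)}$ on $[1, +\infty[$ (our normalisation, with natural logarithms) and shows that its differential entropy decreases as a grows:

```python
import numpy as np
from scipy.integrate import quad

def zipfian_entropy(a):
    """Differential entropy of v(t) = a * t**-(a+1) on [1, +inf), natural log."""
    v = lambda t: a * t ** -(a + 1)
    h, _ = quad(lambda t: -v(t) * np.log(v(t)), 1, np.inf)
    return h

for a in (0.5, 1.0, 2.0, 3.0):
    print(a, round(zipfian_entropy(a), 4))   # entropy decreases as a grows
```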

 

Our aim is to extend these results to other bibliometric distributions: the geometric distribution and the negative binomial distribution.

We will consider the continuous case, and so we keep Yablonsky's notation:

 

§ Let v(t) be the density function of the bibliometric distribution, t being defined on [a, +∞[, such that v(t) ≥ 0 on [a, +∞[, image (condition 1).

§ Let E(t) be the effort production function (E is a positive constant corresponding to the average effort),

§ And let H be the entropy.

We have found couples (E(t), v(t)) where:

            (condition 2)    image

            (condition 3)    image          (effort constraint)

            (condition 4)    image  is maximized

3.2   The geometrical case

Let us recall the density function of a geometric distribution:

$v(t) = -\log(q)\,q^{\,t-1}, \qquad t \in [1, +\infty[ \qquad (1 - q = p)$

 

We have chosen to take the linear effort function  image.

As seen previously, the case t = 1 corresponds to the minimal state of the scientist, who has produced one publication.

If, as before, we place ourselves in the case of scientific output, with a linear effort function image, we want to show that when the MEP is applied under an effort constraint, the resulting function is a geometric distribution with a density function of the form:

 

image

 

Remark: in the case t = 1, we have image, the average of v. Condition 3 thus amounts to fixing the expectation.
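The fact that, for a fixed mean on $[1, +\infty[$, the (shifted) exponential density is the entropy maximizer can be illustrated numerically. In the sketch below the target mean and the competing densities (shifted gamma laws with the same mean) are our own choices for comparison:

```python
from scipy import stats

mean_target = 3.0                       # fixed average "effort" E[t] on [1, +inf)
lam = 1 / (mean_target - 1)             # shifted exponential with this mean

expo = stats.expon(loc=1, scale=1 / lam)
print(expo.mean(), expo.entropy())      # mean 3.0, entropy of the MEP solution

# any other density on [1, +inf) with the same mean has a lower entropy,
# e.g. shifted gamma densities:
for shape in (0.5, 2.0, 5.0):
    g = stats.gamma(a=shape, loc=1, scale=(mean_target - 1) / shape)
    print(shape, g.mean(), g.entropy())  # same mean, smaller entropy
```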

 

 

Demonstration

 

These results are shown using variational calculation techniques.

Let us demonstrate that the function:

               image           where   image

checks the conditions

                        v ≥ 0 on [1, +∞[                                    (1)

                        image                                  (2)

                        image                        (3)

and maximizes the function: image     

 

We can easily show that the function w satisfies conditions (1), (2) and (3). Let us show that w maximizes the entropy, i.e. that H reaches its maximum for the function w.

 

Let F be the following function:

image where λ is a constant whose value is: image

 

We have: image

We can easily show that this derivative vanishes for w.

So for t fixed, we have: image and image

F being convex and image vanishing for w for any fixed value of t, we can write:

image,image

Soimage, image

And so image, image

 

Let ν be any function satisfying the normalization (condition 2) and (condition 3):

image ≥ image, hence the result is proved.

 

image.

 

We obtain the same result as before for the variation of H as a function of image. Moreover, we can notice that the entropy calculated with a linear effort law is always lower than the entropy calculated with a logarithmic effort law, and that the difference varies in an inversely proportional way. The dispersion, and hence the entropy, is stronger in the Zipfian case. This result justifies the choice of the entropy of order 1 to characterize the diversity of a Zipfian distribution (Lhen et al. 1995).

 

 

 

Note

We can show Yablonsky's previous result using the same technique with the function denoted F:

image

3.3.                  The negative binomial case

We have said that we do not know the density function of the negative binomial law in the continuous case. We will therefore use convolution techniques to build a new distribution that we will call here the "pseudo negative binomial" distribution.

The convolution technique is defined as follows:

If X1 and X2 are two independent continuous random variables having respectively the density functions F1 and F2 defined on the interval $[0, +\infty[$, then the random variable X1 + X2 has the density function F1*F2, called the convolution product of F1 and F2 and defined by:

$(F_1 * F_2)(t) = \int_0^{t} F_1(u)\,F_2(t-u)\,du$

This definition is generalized to a convolution product of order j.

Indeed, if (Xj) is a finite sequence of independent, identically distributed random variables of density F, we can show that the random variable image has a density function Fj defined by the following convolution:

 

 

F1 = F

image

image

 

The convolution techniques have been used to give a new interpretation of Lotka's law (Egghe 1994).

We know (Calot 1984) that in the discrete case the sum of j independent variables of a geometric law G(q) is a negative binomial law image

The exponential distribution is the density function corresponding to the geometric distribution: image. If we build a density function vj from the convolution of j exponential distributions, we may consider it as the continuous version of a negative binomial law; we will call it the "pseudo negative binomial" law.

 

A simple calculation shows that the convolution product of order j of the v function is:

image
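It is a standard result that the j-fold convolution of exponential densities is an Erlang (gamma) density. The Monte Carlo sketch below (with an arbitrary rate of our choosing) checks this numerically, without relying on the exact parametrization used in the text:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
lam, j = 1.5, 3                                     # rate and number of convolved exponentials

samples = rng.exponential(scale=1 / lam, size=(200_000, j)).sum(axis=1)

def erlang_pdf(t):
    """Density of the j-fold convolution of exponentials (Erlang / gamma law)."""
    return lam**j * t**(j - 1) * np.exp(-lam * t) / factorial(j - 1)

hist, edges = np.histogram(samples, bins=50, density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - erlang_pdf(centers))))   # small: Monte Carlo agreement with the formula
```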

 

Egghe (Egghe 1994) has shown stability properties of the geometric distribution using this convolution product. If the original distribution is an exponential one, we can interpret the distribution vj in different ways:

- In the context of article output, image is the proportion of authors who have written i articles, each article having exactly j authors.

- In the context of keyword distributions in bibliographic references, image is the proportion of words used i times, each reference having exactly j keywords.

 

The question we want to solve is then the following: if we fix the effort quantity (denoted Ej), what is the nature (linear, logarithmic, ...) of the effort function (denoted EF) linked to a random process of pseudo negative binomial type when the quantity of information is maximal? Mathematically, this question is written:

 

Let us consider the following distribution: image

What is the nature of the effort function EF that satisfies the following conditions:

                image                    (effort constraint)

and maximizes the entropy H: -image, over all functions ν satisfying the conditions:

               v(t) ≥ 0 on [0, +∞[

               image

 

 

We will check whether the effort function EF, image, solves the problem.

 

 

 

 

Demonstration:

We can easily check by induction that image.

It is more difficult to determine the value of the effort image.

Let us put image

image

image 

However, we have image and we have seen that image

So image

So we have shown that: image

 

Let us now calculate the value of image

image

image

With image

Let's calculate image 

We will show by induction that image

image and image

Now let us suppose that image, image.

It is easy to show that image

So image

And so image

 

In Annex 1 we have the result :

image

 

In Annex 2 we have the result :

image

 

So image

 

We have image

So image

image

By using the complete Gamma function, it is possible to write:

image (γ is Euler's constant: 0.5772...)

So image

With the initial notations: image

So image

So image

 

 

Now let us show that EF maximizes the entropy.

We will show that the entropy H reaches its maximum for the function image.

 

Let F be the following function:

 

image where λ is the constant image

 

We have: image

Let us put : image

image

image

If we replace λ by its value, this derivative vanishes for image.

For t fixed we have: image

F being convex and image vanishing at image for any fixed value of t, we can write:

image

 

that is to say:

image

image

 

Let ν be any function satisfying the normalization (condition 2) and (condition 3):

     image 

So the entropy is maximum for vj(t).

 

Calculation of the entropy  image

image

image

image. We have: image, image, and

image

So

image

 

We can remark that for j = 1 we obtain the same results as for the geometric law, i.e. image

We can also remark that image
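The entropy of vj can also be computed directly: since vj is the j-fold convolution of exponential densities, it is a gamma (Erlang) law, whose differential entropy scipy evaluates in closed form. The rate value below is an arbitrary choice of ours; the sketch shows that j = 1 recovers the exponential (geometric-analogue) value and how Hj grows with j:

```python
from scipy import stats

rate = 1.5          # rate of the underlying exponential density (arbitrary illustrative value)
for j in range(1, 6):
    Hj = stats.gamma(a=j, scale=1 / rate).entropy()   # v_j is a gamma (Erlang) law of shape j
    print(j, float(Hj))   # j = 1 gives 1 - ln(rate); H_j then increases with j
```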

 

We now want to analyze the characteristics of the two functions image and image.

For all values of a, image and image are increasing functions of j, the number of authors. This means that publishing with many authors requires more effort but also induces, in all cases, a gain of information. We now analyze the difference image - image.

image, j = 1, 2, ...

image depends on j and a. The three following curves give the results for a = 1 (figure 1), a = 3 (figure 2) and a = 5 (figure 3).

 

image

Figure 1 : case of a =1

 

imageimage

Figure 2 : case of a =3                                    Figure 3 : case of a = 5

 

As shown in figure 1, in the case a = 1 we always have image, which means that the gain of information is always greater than the effort needed to produce it. In the other cases, if a > 1 (figures 2 and 3), the curve of image is first negative, crosses the x-axis at a particular point ma and remains positive afterwards. The value ma defines the minimum number of authors required for the gain of information associated with a paper to exceed the effort needed to produce it.

 

4.                       Conclusion

We have observed that in the case of the Zipfian distribution the corresponding effort function is logarithmic, while in the case of the geometric law it is linear. In each case the entropy is decreasing with a. In the pseudo negative binomial case, the effort function is composed of two parts: the first is linear (constant effort) and the second logarithmic (the least effort law). For fixed j, the corresponding entropy is decreasing with a.

Let us now vary j. Recall a possible interpretation of the pseudo negative binomial law: in the context of article output, image is the proportion of authors who have written i articles, each article having exactly j authors. There is a link between the effort and the variation of entropy. We have shown that for each value of a there is a number ma representing the minimum number of authors required for a paper to carry more information than the effort needed to produce it. In the case a = 1, we have m1 = 1, i.e. whatever the number of authors, each new article produces more information than the effort expended to write it. This remains to be verified empirically.


References

 

EGGHE L. (1988): On the classification of the classical bibliometric laws. Journal of Documentation, Vol 44(1), 1988, p. 53-62.

 

EGGHE L. (1994): Special features of the author publication relationship and a new explanation of Lotka's law based on convolution theory. Journal of the American Society for Information Science, Vol 45(6),1994, p. 422-427.

 

CALOT G. (1984): Cours de calcul de probabilité. Chapitre 12. Dunod décision 1984, 476 pages.

 

CAUMEL Y. (1988): Probabilités, théorie et applications. Chapitre 8. Eyrolles, 1988, 290 pages.

 

HAITUN S.D. (1982): Stationary scientometric distributions. Scientometrics, N°4, 1982, Part I p. 5-25, Part II p. 89-104, Part III p. 181-194.

 

KANTOR P.B., LEE J.J. (1998): Testing the Maximum Entropy Principle for Information Retrieval. Journal of the American Society for Information Science, Vol 49(6), 1998, p. 557-566.

 

LAFOUGE T., LAINÉ-CRUZEL S. (1997): A new explanation of the geometrical law in the case of library circulation data. Information Processing and Management, Vol 33(4), 1997, p. 523-527.

 

LAFOUGE T., GUINET E. (1999): A new explanation of the negative binomial law and the Poisson law with regard to library circulation data. Journal of Information Science, Vol 25(1), 1999, p. 89-93.

 

LHEN J., LAFOUGE T., ELSKENS Y., QUONIAM L., DOU H. (1995): La statistique des lois de Zipf. Revue Française de Bibliométrie, N°14, 1995, p. 135-146.

 

LOTKA A. (1926): The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences, 16, 1926, p. 317-323.

 

QUONIAM L. (1992): Bibliométrie sur les références bibliographiques: méthodologie, p. 244-261. In: La Veille technologique: l'information scientifique, technique et industrielle. Dunod, 1992, 436 pages.

 

RÉNYI A. (1961): On measures of entropy and information. In: NEYMAN J. (ed.), Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1961.

 

SHANNON C., WEAVER W. (1975): Théorie mathématique de la communication. Bibliothèque du CEPL, 1975, 188 pages.

 

YABLONSKY A.I. (1980): On fundamental regularities of the distribution of scientific productivity. Scientometrics, Vol 2(1), 1980, p. 3-34.

 

Annex 1

image

image

image

image

We know that image and image

So image

 

Moreover image

image

image

 

So image

 

Annex 2

image

image

image

 

And image

So image

 

image

 

image

image

 

image

 

 



[1] Postal address: CEM-GRESIC, Maison des Sciences de l'Homme d'Aquitaine, Esplanade des Antilles, DU, 33607 Pessac Cedex, France

[2] CRRM: Centre de Recherche Rétrospective de Marseille.