
Evolution of the World Wide Web

Part 2: Web Evolution Theory and The Next Stage


Yihong Ding
ding@cs.byu.edu

Li Xu
lxu@email.arizona.edu

First draft: April 27, 2007
Last update: May 2, 2007
(blog discussion)

Index of Part 2

    Laws of web evolution
         The web is in evolution
         Postulate 1: transformation of quantity into quality
         Postulate 2: stepwise cloning of human consciousness
         Corollary 1: interpretation of web evolution stages
         Corollary 2: identification of web evolution stages
         Corollary 3: definition of evolutionary characteristic variables
         Corollary 4: trigger of stage transition (primary contradiction in web evolution)
         Corollary 5: identification of stage transition
         Corollary 6: essence of web evolution
         Corollary 7: ending mark of stage transition
         Use case study of the web evolution theory: web industry and the rise of Google

    Next-Stage World Wide Web: beyond Web 2.0
         Symptom One: Multiple Characters
         Symptom Two: Being Educated
         Symptom Three: Homework
         Symptom Four: Exam
         Symptom Five: Conditional Friendship

    Summary

In Part 1 we discussed the past and present of the World Wide Web through an analogical lens. In Part 2, we summarize our discussion in two postulates and seven corollaries, which constitute a theory of web evolution. We then apply this theory to predict the next stage of the World Wide Web.

Laws of Web Evolution

The web is in evolution

The progress of the World Wide Web is evolutionary rather than supervised. Web evolution has its intrinsic laws, and by discovering these laws we can predict the progress of the World Wide Web.

Our discussion of the past and present of the WWW in Part 1 supports the existence of web evolution laws. The transition from Web 1.0 to Web 2.0 was not supervised. W3C never launched a special group to plan Web 2.0; neither did Tim O'Reilly, though he was one of the most insightful observers---the one who caught and named this transition---and one of the most eager advocates of Web 2.0. By comparison, W3C did launch a special group on the Semantic Web, which engaged hundreds of brilliant web researchers all over the world. The progress of the WWW over the past several years, however, shows that the effort lacking supervision (Web 2.0) advanced faster than the one with plenty of supervision (the Semantic Web). This phenomenon suggests the existence of web evolution laws that are independent of individual will.

The WWW progresses by aggregating contributions from hundreds of millions of web users. As a free and open world, the WWW allows anybody to contribute anything. If there were no intrinsic laws guiding its progress, the WWW would have advanced slowly rather than rapidly, and nondirectionally rather than directionally. The highly diverse chaos produced by hundreds of millions of contributors would be very difficult to overcome without a constant, directional, and powerful driving force. In fact, however, the web has grown rapidly and directionally over the past years. The hype around Web 2.0 shows that the web as a whole progresses not only explosively but also directionally. This is further evidence for the existence of web evolution laws.

Evolution or intelligent design: this is a grand question. (We have a separate post focusing on this argument.) For many years, web researchers have over-emphasized the importance of intelligent design to the progress of the World Wide Web. Certainly the web began with intelligent design: the WWW was invented by a great scientist, Tim Berners-Lee. The WWW was dominated by intelligent design in its early days, and W3C is evidence of the effort to organize such design better. None of these facts, however, leads to the conclusion that the progress of the WWW must always be supervised. The WWW was once small, with far fewer members than at present. During that infant stage, its progress could be greatly affected by unexpected smart designs from pioneer members. Kevin Kelly, a famous futurologist and technology historian, pointed out in his article "We Are the Web" that the WWW might well have been directed by its pioneers onto a very different route. But that was possible only because the web was once small, and the scenario no longer holds. The web now engages hundreds of millions (possibly billions) of members, so any influential design must be adopted by most of them. This requirement makes it harder and harder for random smart designs to be influential. Only the intelligent designs that satisfy the common interest of most web users at the right moment can be promoted; the others have to wait, despite their creativity and insight. There is a timing at which a given technology becomes publicly appealing and can therefore be adopted at global scale. No single person or organization can violate this timing, because nobody can resist the pressure of hundreds of millions of people. Intelligent designs must satisfy these timing constraints to be adopted. The study of these evolutionary timing issues is thus the focus of web evolution research; their generalization constitutes the web evolution laws.

The WWW is in evolution, but the self-evolution of the WWW does not prohibit intelligent design. We want to emphasize this point before going into the details of the web evolution laws. The progress of the WWW needs (lots of) intelligent designs; there is no question about it. The one thing we must be careful about is the relation between intelligent designs and web evolution laws. Previously, most of us believed that the direction of web progress was supervised by intelligent designs. This viewpoint is no longer true. From the present onward (and even before now), intelligent design has become the secondary factor, one that must first obey the objective laws of web evolution. Designs that follow the laws well are promoted; the others are, at the least, delayed. This reversed relation between intelligent designs and web evolution laws shows the importance and urgency of web evolution research.

Fundamental Postulates

The World Wide Web evolves in stages. This is not just because of the buzzword "Web 2.0." In fact, our discussion of the past and present of the WWW in Part 1 shows that the web can be recognized distinctively by the characteristics of a few features such as data, services, and web links. For example, on Web 1.0 we had only reactive services, whereas on Web 2.0 many web services have become active. Some web researchers have even argued for renaming Web 2.0 the "active web." Though we do not favor this renaming, we agree with the observation behind it. This observation leads to the first postulate of web evolution.

Postulate 1: web evolution is a stagewise, directional process.

Stagewise and directional are the two fundamental standpoints. By stagewise, we mean that the web evolves in distinct, successive stages: web evolution is not a gradual process; its progress is punctuated by sudden leaps and catastrophes. By directional, we mean that there are deterministic relations between any two successive stages: the web does not jump randomly or nondirectionally from one stage to another.

This postulate is a web-oriented version of a general law of dialectics---the Law of Transformation of Quantity into Quality. Basically, this law says that prompt qualitative changes are caused by gradual quantitative alterations. For example, whether a book has 100 pages or 50 pages is a quantitative difference; but if we reduce its length to a single page, it is no longer a book---a qualitative change. As another example, as the temperature of ice rises, it remains ice until it reaches 0 degrees Celsius; this is quantitative change. When the temperature rises beyond 0 degrees Celsius, the ice thaws and turns into water; this is qualitative change.

In occidental philosophy, this general law was well presented by Georg Wilhelm Friedrich Hegel (1770-1831) and Karl Marx (1818-1883), two extraordinary German philosophers. In oriental philosophy, we can find interpretations of the same law in different forms. In the theory of Yin-Yang, Yin and Yang are the two contradictory fundamental elements of the world; yet the ultimate accumulation of Yin gives birth to Yang, and the ultimate accumulation of Yang gives birth to Yin.

Applying this general law to the WWW, the web also evolves through quality upgrades on the basis of quantity accumulation. Every stage of web evolution represents a unique quality. The transition from a lower stage to a higher stage is a quality upgrade; within any single stage, the process is one of quantity accumulation.

Postulate 1 describes the general scene of web evolution. But it neither explains the driving forces of web evolution (i.e., the cause of quantity accumulation and quality upgrade), nor provides a way of defining the quantity and quality in question. To answer these questions, we have Postulate 2 and several corollaries derived from the two postulates.

Postulate 2: the evolution of the World Wide Web is a process of stepwise cloning of human society.

The World Wide Web evolves due to nonstop contributions from human users. A fundamental question about web evolution is thus why people are willing to contribute to the web. The answer to this question is the fundamental driving force of web evolution.

People publish on the web because they want their publications to be seen by others. Construed in a more illuminating way, web publishers are materializing their consciousness so that it becomes explicitly watchable or hearable on the web. Ultimately, this process of materializing one's consciousness is a process of cloning the human mind---stepwise, however.

Cloning is a route to an ultimate desire of mankind---immortality. Many religious thinkers and philosophers have pointed out that the desire for immortality is intrinsic to humans. Typically, humans think of tomorrow, the day after tomorrow, and the days after that; the limit of this chain is eternity, which is immortality. Human religions are the most important products of this desire. On the other hand, even atheists believe in mental immortality, which can be approached by preserving one's work in history. Such work---books, music, great constructions---is highly concentrated consciousness of its authors. By reverse-engineering this materialized consciousness, the authors can be virtually resurrected, as if they still talk to us after passing away. This is a pragmatic path to immortality, practiced and desired by mankind for centuries.

In history, however, this route to immortality was a luxury available to very few. Due to numerous wars, natural disasters, and, most importantly, the decay over time of the methods for preserving human work, only a very few truly extraordinary works have been kept in history.

The invention of the WWW changes this. The WWW provides everyone a cheap, convenient, and reliable way of recording their consciousness and keeping it in history. As a result, anybody's mind can be kept in history (as long as the web exists), even if the thought is silly. Suddenly, immortality of mind becomes affordable to everybody. This is the fundamental driving force behind the accumulation of web knowledge, i.e., the advancement of the World Wide Web.

The e-commerce bubble and the rise of Web 2.0 support this observation from another angle. Coincidentally, the rise of Web 2.0 began when the e-commerce bubble was about to burst, and it kept rising after the bubble burst. The e-commerce bubble represented the commercial force driving the advance of the WWW, while Web 2.0 was initiated with the ideal of providing better ways of materializing human consciousness on the web. During the Web 2.0 hype, many observers were astonished that millions of ordinary web users were willing to blog and tag without weighing the dramatic cost in time and money. It was, and is, an unbalanced bargain for ordinary users. Economists cannot explain this phenomenon, but we, as researchers of web evolution, can: in the end, the intrinsic desire for immortality cannot be priced in money. This is why the advance of the WWW did not wane with the burst of the e-commerce bubble.

This desire for immortality is more than an individual perspective; it is also a common community expectation. The loss of a member is usually a loss to the common interest of a community. A major portion of the resources of human society has to be spent educating and retraining new generations to cover the constant loss of members by death. This is a waste, since essentially it only covers losses rather than producing new contributions. But what happens if members become immortal, so that all their consciousness can be preserved? It would mean a great saving for the entire society. This match between individual interest and community interest in the immortality of the human mind composes the most solid foundation of the driving force of web evolution.

The invention of the WWW allows web users to clone their consciousness. But this does not mean the WWW immediately allows 100-percent cloning of human consciousness; the cloning is a stepwise process. Human consciousness comes in varying levels of complexity: the consciousness of a baby, for example, is definitely less complex than the consciousness of an adult. Stepwise, the WWW clones these different levels of complexity during its evolution. Web 1.0 materializes consciousness only on a very shallow level, corresponding to the level of human babies: we could publish our thoughts on the web literally but not meaningfully. All Web-1.0 publications require readers to interpret them, and such interpretations may deviate significantly from the original meanings of the publishers. On Web 2.0, however, we start to be able not only to write down our thoughts, but also to tag them and to produce active services based on them. This new way of materializing human consciousness allows us to clone our minds on the web as if we were pre-school kids, who can explain themselves, but only in shallow, limited, and ambiguous ways. In the future, the web will evolve to clone our minds at more and more mature levels. As a result, the entire web will come closer and closer to our human society. This process of stepwise cloning of individuals, as well as of the links among individuals (the human society), is the main thread of web evolution.

Corollaries

Corollary 1: the stages of web evolution can be respectively mapped to the stages of human growth.

Corollary 1 can be directly derived from the two postulates: (1) the web evolves in stages; (2) humans grow up in stages, mastering consciousness at different levels; and (3) the web evolves by stepwise cloning of human consciousness. Thus we can deduce a respective mapping between web evolution stages and human growth stages. The discussions in Part 1 support this derivation.

Corollary 2: the evolutionary stage of any macroscopic web existence can be identified by the quality of the web resources it contains.

In this corollary, a macroscopic web existence can be anything from a single web page to the entire World Wide Web. Web resources are self-contained pieces of productive information on the web. Self-contained means that a web resource can be transmitted on the web alone without information loss. Productive means that a web resource can be used to produce something. For example, a web document is a web resource, and so is a web service or a web link. But a single word such as "Ford" is not a web resource, because its meaning is undecidable without a particular context; i.e., it cannot be transmitted by itself without information loss. A formally annotated word, in contrast, may be a web resource: the word "Ford" with the formal annotation "make of car" can be delivered without information loss. Likewise, a random collection of words is generally not a web resource, because it is not productive. Informally, a web resource must be intentionally produced information that can be unambiguously reused and further manufactured.
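
To make the self-containedness criterion concrete, here is a minimal sketch (the AnnotatedTerm type and its field names are our own illustrative invention, not any standard):

    // A bare string: its meaning depends on surrounding context,
    // so transmitting it alone loses information.
    const bare: string = "Ford"; // a car make? a president? a river crossing?

    // An annotated term carries its disambiguating context with it,
    // so it can travel alone without information loss.
    interface AnnotatedTerm {
      term: string;
      sense: string; // formal annotation, ideally from a shared vocabulary
    }

    const resource: AnnotatedTerm = { term: "Ford", sense: "make of car" };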

Web resources come in three basic types: the descriptive type (data resources), the functional type (service resources), and the interconnective type (link resources). These three types are mutually independent: a data resource can exist without being used by any service or connected by any link; a service resource can exist without being given any data or connected by any link; a link resource can exist without explicitly linking to any other resource. For the different types of web resources, we need different measurements of quality and quantity.

This corollary is derived from Postulate 1. By Postulate 1, the quality of web resources is kept at a constant level within a particular evolutionary stage. These quality levels can therefore be used to identify the stages.

Corollary 3: the quality measurements of the characteristic variables of web evolution can be defined analogously to the quality measurements of the respective characteristic variables of human growth.

This corollary is based on Corollaries 1 and 2. Since every stage of web evolution can be mapped to a stage of human growth, the two evolutionary processes must share their criteria for judging stages. Therefore, we can define the web evolution criteria by analogy with the criteria for evaluating human growth. This is the essential methodology for studying web evolution under this theory.

We need an analogical methodology to study web evolution because we do not know the future. Unlike natural evolution or human evolution, both of which have run for millions of years, web evolution has barely started, and we gain few clues just by watching its short history. We need something similar to compare it with. Postulate 2 tells us that this similar thing is the growth of humans. Corollary 3 thus claims that although we still lack the evidence to define the quality measurements of web evolution directly, we can define them indirectly by analogy with the respective quality measurements of human growth. Unlike web evolution, the process of human growth can be studied repeatedly, and we have plenty of direct samples. This corollary resolves the most difficult dilemma of web evolution research: that we must define evolutionary criteria for things that have not yet happened.

Based on this corollary, we define the quality measurements of web evolution. Corresponding to the three basic types of web resources (descriptive, functional, and interconnective), there are three characteristic variables of human growth---personality, capability, and interpersonal relationship. To simplify our presentation, we replace "interpersonal relationship" with "friendship" in this article. In particular, human personalities correspond to data resources on the web, human capabilities to service resources, and human friendships to link resources. We will justify these claims shortly. Based on these mappings, we can analogically define the quality measurements of web evolution from the quality measurements of human growth.

There are two basic views for defining the quality measurements of human growth---the individual point of view and the community point of view. From the individual point of view, a person simply grows up. From the community point of view, a person's growth is the incremental taking over (and producing) of varied resources from (and for) a community. Within a particular stage of growth, a person grows gradually by possessing more and more community resources in quantity, but usually at a constant quality. Within a transition period (e.g., from newborn to pre-school), the person grows promptly by possessing resources of higher quality. This is also precisely the model of web evolution.

From the individual point of view, personality is the complex of all the attributes that characterize a unique individual. For example, a person's personality is a complex of his emotions, knowledge, customs, etc. From the community point of view, however, every personal attribute is a descriptive resource of a community. For example, one's emotions and knowledge are community resources that can be consumed by community members (certainly including oneself). This is why we can map personalities to web data resources. In this community view, the personality of a person is a unique, personalized subset of the descriptive resources of the community. In terms of human growth, this definition yields a measurement of the quantity and quality of personalities. In particular, the quantity of a personality can be measured by the amount of descriptive community resources the personality possesses. For example, John, as an individual, learns more and more knowledge; this is equivalent to saying that John possesses a greater quantity of shared knowledge resources from the community. John thinks of a new theory that has never been presented before; this is equivalent to saying that John has produced a new knowledge resource for the community (and since he produced it, he automatically possesses it). The quality of a personality can be measured by the highest quality of the descriptive resources it possesses, where the quality of a resource is the degree of productiveness it exhibits when used: more productive descriptive resources have higher quality. For example, the emotion of impatience is a lower-quality personality resource because it is rarely productive of valuable resources for a community. In contrast, the emotion of patience is a higher-quality personality resource because it can often be applied to produce valuable products (such as the service of teaching) for a community.

From the individual point of view, capability is the ability to execute a specified course of action. For example, Alice can knit; knitting is a capability of Alice. From the community point of view, every personal ability is a functional resource of a community. A functional resource is a resource that may consume other community resources and produce outputs. For example, Alice possesses the functional resource of knitting, which consumes community resources such as Alice's labor and patience, and produces new descriptive community resources such as sweaters. Hence the mapping from capabilities to web service resources is sound. In this community view, the capability of a person is a unique, personalized subset of the functional resources of the community. In terms of human growth, this definition yields a measurement of the quantity and quality of capabilities. The quantity of a capability can be measured by the amount of functional community resources it possesses. For example, Alice, as an individual, learns more and more skills; this is equivalent to saying that Alice possesses a greater quantity of functional resources from the community. The quality of a capability can be measured by the highest quality of the functional resources it possesses, where the quality of a functional resource is how much initiative it shows in consuming community resources (of any type): functional resources with greater initiative have higher quality. For example, Mary can clean a room when asked to; this house-cleaning capability has lower quality because it is passive. In contrast, Alice cleans her room automatically every day; this house-cleaning capability has higher quality because it is active.

From the individual point of view, friendship is the set of connections of a person to other persons in a society. From the community point of view, every connection among persons is an interpersonal resource. Hence it is reasonable to map friendships to link resources on the web. In this community view, the friendship of a person is a unique, personalized subset of the interpersonal resources of the community. In terms of human growth, this definition yields a measurement of the quantity and quality of friendships. The quantity of a friendship can be measured by the number of interpersonal community resources it possesses. For example, Peter, as an individual, makes more and more friends; this is equivalent to saying that Peter possesses more and more interpersonal resources. The quality of a friendship can be measured by the highest quality of the interpersonal resources it possesses, where the quality of a connection is how invulnerable it is: less vulnerable connections have higher quality. For example, friendships between pre-school kids generally have lower quality than friendships between college students. The former are built upon loose foundations, such as living in the same neighborhood or attending the same school, while the latter are often built upon much stronger foundations, such as common interests and beliefs (e.g., a shared love of classical music or a shared pursuit of democracy). Hence, comparatively, friendships of the former type are more vulnerable to changes in the external environment than those of the latter type.

By carefully defining the quality measurements of the characteristic variables of human growth, Corollary 3 lets us analogically define the quality measurements of the three types of web resources. (1) The quality of a data resource can be measured by how productive the resource is when used by the public. Web-1.0-quality data resources are unlabeled syntactic strings, whereas Web-2.0-quality data resources are generally tagged. Web-2.0 data resources thus have higher quality, because their tags make them more productive. (2) The quality of a service resource can be measured by how much initiative it shows in consuming web resources (including itself). Web-1.0-quality service resources are basically passive (or reactive) and non-portable web functions, whereas Web-2.0-quality service resources are active and portable web services such as web widgets. Web-2.0 service resources thus have higher quality, because they show more initiative in doing their work. (3) The quality of a link resource can be measured by how invulnerable it is. Web-1.0-quality link resources are hardcoded links from one page to another, whereas Web-2.0-quality link resources are labeled links that can simultaneously connect many pages through a common label. Web-2.0 link resources thus have higher quality, because they reflect common agreements (labels) that cannot be subjectively altered by individual users. Combining Corollary 2 and Corollary 3, we can now quantitatively assign a particular stage to a web page, a web site, or even the entire World Wide Web.
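
The three measurements can be read as a simple decision procedure. Below is a minimal sketch of such a stage detector under our theory (all type names, fields, and the majority-vote rule are our own illustrative assumptions):

    type Stage = 1 | 2; // Web-1.0 or Web-2.0 quality

    interface DataResource    { kind: "data"; tags: string[] }
    interface ServiceResource { kind: "service"; active: boolean; portable: boolean }
    interface LinkResource    { kind: "link"; label?: string }
    type WebResource = DataResource | ServiceResource | LinkResource;

    // Quality per resource, following measurements (1)-(3) above: tagged data,
    // active and portable services, and labeled links are Web-2.0 quality.
    function quality(r: WebResource): Stage {
      switch (r.kind) {
        case "data":    return r.tags.length > 0 ? 2 : 1;
        case "service": return r.active && r.portable ? 2 : 1;
        case "link":    return r.label !== undefined ? 2 : 1;
      }
    }

    // Corollary 2: identify a macroscopic web existence (page, site, or web)
    // by the quality of its contained resources; here, by simple majority.
    function stageOf(resources: WebResource[]): Stage {
      const web2 = resources.filter((r) => quality(r) === 2).length;
      return web2 > resources.length / 2 ? 2 : 1;
    }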

So far we have addressed the recognition of the static stages of web evolution. But more crucial questions remain. How does the web evolve from a lower stage to a higher one? What causes stage transitions? What is the general pattern of a stage transition? Corollaries 4 to 7 answer these questions.

Corollary 4: a stage transition in web evolution is caused by the unbounded quantitative accumulation of web resources with the quality of the old stage.

By Postulate 1, the Law of Transformation of Quantity into Quality tells us that a quality transition is always caused by nonstop quantitative accumulation. Corollary 2 then tells us that in web evolution, this quantitative accumulation is the accumulation of web resources.

The evolutionary transition from Web 1.0 to Web 2.0 is evidence for this corollary. That transition was initiated by the rapid expansion of Web-1.0-quality resources. On Web 1.0, web publishers continuously produced more valuable resources, so the average quantity of web resources per page increased gradually. As readers came to enjoy richer and richer content on an average page, they started to be annoyed by refreshing these pages. Traditionally, refreshing reloaded the entire content of a web page, so the refresh time was roughly proportional to the quantity of resources on the page. Most of the time, an individual reader was interested in only part of a page's abundant resources. The frequent reloading of uninteresting material caused more and more trouble on the readers' side, especially as the quantity of resources kept increasing without bound. Information update became a major bottleneck for the further quantitative accumulation of web resources.

In fact, the general pattern of the problem we described in the transition from Web 1.0 to Web 2.0 reflects the primary contradiction of web evolution: the contradiction between the unbounded quantitative accumulation of web resources and the limited resource-operating mechanism of the time. (We give a detailed discussion and definition of resource-operating mechanisms in the following corollaries; for now, it can simply be understood as a mechanism that operates web resources.) A stage transition in web evolution resolves this primary contradiction when the conflict between the two sides becomes too severe to support effective web operation. In the last transition, the solution was AJAX, the new resource-operating mechanism on which Web 2.0 is based. This invention temporarily resolved the primary contradiction, but only at its Web-1.0 level; the contradiction then restarts at the Web-2.0 level. The resolving and recurrence of this primary contradiction at higher and higher levels is the fundamental cause of stage transitions in web evolution.

Corollary 5: the initiation of a stage transition in web evolution can be identified by a fundamental upgrade of the web-resource-operating mechanism.

Corollary 4 expresses the driving force of stage transitions. Corollary 5 presents a way to recognize the beginning of one.

The web-resource-operating mechanism (or simply the resource-operating mechanism) is the methodology for declaring, displaying, and transmitting particular collections of web resources. Though not the same thing, a resource-operating mechanism is to the web what an operating system is to a personal computer. We call it a mechanism instead of a system because the two concepts differ fundamentally. An operating system manages all the hardware and software resources in a computer, controlling everything from basic I/O to memory allocation; it is a coherent program that requires precision, hence a "system." A resource-operating mechanism on the web, by contrast, presents only general rather than precise rules for how web resources should be declared, displayed, and transmitted. It neither allocates web resources into local memory nor controls local I/O processes; hence it is only a "mechanism."

The basis of the resource-operating mechanism of Web 1.0 is HTML encoding. All Web-1.0 resources can be encoded and decoded uniformly in HTML; this is the standard way users declare and display web resources. Beyond HTML there are auxiliary resource-operating methods such as PHP and JavaScript, but they are not the basis.

On Web 2.0, HTML is still an essential part of the resource-operating mechanism, but a new basis component has been added: AJAX. Unlike many other inventions for operating web resources (such as PHP and JavaScript), AJAX is a fundamental upgrade because it supports the effective use of new-quality web resources (and the others do not).

To understand why the invention of AJAX is a fundamental upgrade while several others are not, consider a brief comparison between (traditional) PHP and AJAX. PHP is a well-designed language that supports dynamic web pages. The philosophy of PHP is to help webmasters dynamically operate web resources when there are too many of them on servers. PHP was very successful; much deep-web data became conveniently accessible because of it. But PHP, like many other Web-1.0 resource-operating innovations, was only intended to accelerate the prevalence of old-quality web resources. As a result, it aggravated the primary contradiction of web evolution by bringing more and more Web-1.0-quality resources online. It thus helped the growth of the WWW by accelerating quantitative accumulation, which drove the web toward its next transition.

The invention of AJAX was different, because it directly addressed the primary contradiction of web evolution at its Web-1.0 level. As we discussed earlier, the direct consequence of the primary contradiction at that level was the page-refreshing problem. When the problem became severe, it decreased users' willingness to produce more resources online, since performance got worse and worse. AJAX solved the problem by allowing web resources to be transmitted piece by piece in response to user requests: when one resource is updated, the rest of the resources in the same page remain untouched. Consequently, users can enjoy the great abundance and diversity of web resources without worrying about the slow reloading of heavily loaded pages.
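
The contrast can be sketched in a few lines of browser code. The XMLHttpRequest object is the API the term AJAX refers to; the endpoint /quote and the element id below are hypothetical:

    // Web-1.0 style: any update reloads the whole page, so refresh time
    // grows with the total quantity of resources on the page.
    function fullReload(): void {
      window.location.reload();
    }

    // AJAX style: fetch only the requested resource and patch it in;
    // every other resource on the page stays untouched.
    function updateQuote(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/quote"); // hypothetical endpoint
      xhr.onload = () => {
        document.getElementById("quote")!.textContent = xhr.responseText;
      };
      xhr.send(); // asynchronous: the page stays interactive meanwhile
    }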

Since this solution addressed the primary contradiction of web evolution, it became a fundamental upgrade, enabling the prevalence of new-quality web resources. The spread of web widgets is a typical example. Web widgets are portable chunks of code that an end user can install and execute within any separate HTML-based web page without additional compilation. In theory, web widgets do not depend on AJAX; we could certainly implement and deploy widgets without it. Without AJAX, however, an entire web page would have to be reloaded synchronously whenever any of its embedded widgets updated. If a page contained dozens of widgets, the reloading experience would be a nightmare, since a single information update from any widget would reload the entire page. Hence web widgets, though theoretically independent of AJAX, did not become popular until AJAX became widespread. The reason is that widgets are Web-2.0 quality, not Web-1.0 quality.
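
A toy widget in the above sense might look as follows (a minimal sketch; the element id is hypothetical): one self-contained script that an end user can drop into any HTML page, and that updates itself in place rather than forcing the host page to reload:

    // A portable widget: it creates its own container in the host page and
    // keeps itself fresh in place, never reloading the page on its behalf.
    (function installClockWidget(): void {
      const box = document.createElement("div");
      box.id = "clock-widget"; // hypothetical id
      document.body.appendChild(box);

      const refresh = (): void => {
        box.textContent = new Date().toLocaleTimeString();
      };
      refresh();
      setInterval(refresh, 1000); // in-place updates only
    })();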

Corollary 6: the evolution of the World Wide Web is fundamentally the evolution of resource-operating mechanisms.

The upgrade of the resource-operating mechanism plays a crucial role in web evolution. When a mechanism is this crucial, it must be a characteristic variable of web evolution. Corollary 3 then tells us that there must be a proper analog of the web-resource-operating mechanism in human growth; and indeed there is one. The resource-operating mechanism corresponds to self-consciousness in human growth.

Formally, self-consciousness is "a personal understanding of the very core of one's own identity." Informally, self-consciousness is a person's knowledge of which resources belong to oneself. "This is mine!"---this common sentence typifies self-consciousness. A person is not a random collection of personalities, capabilities, and friendships; such a collection is not a person at all unless it is operated by a particular self-consciousness. "These are MY personalities; these are MY capabilities; these are MY friendships. As a whole, it is ME!" Self-consciousness is the deepest concern of the self, and thus the most fundamental identity of an individual. Similarly, the web-resource-operating mechanism helps resource occupiers, such as web pages, claim ownership over particular resources. A web page consists of varied web resources, but it is not a random set of resources: a set of web resources becomes a web page only when it is properly operated by a resource-operating mechanism. This is the most intrinsic explanation of the resource-operating mechanism.

The origin of human self-consciousness is still a mystery. Fortunately, that is not what interests us in web evolution research. What we do care about is that the maturing of self-consciousness is an essential (probably the most essential) phenomenon of human growth, as several psychologists and anthropologists have suggested. The maturing of self-consciousness allows humans to master community resources of higher and higher quality. For example, children may handle only simple emotions such as like and hate, simple behaviors such as do and not-do, or simple interpersonal relations such as friend and enemy. Adults, by contrast, can handle complicated (higher-quality) emotions such as love and irksomeness, complicated (higher-quality) behaviors such as devotion and suppression, or complicated (higher-quality) interpersonal relations such as ally and rival. For this reason, the growth of humans is really the growth of human self-consciousness; and likewise, the evolution of the WWW is really the evolution of resource-operating mechanisms.

Corollary 7: the end of a stage transition in web evolution is marked by the emergence of a new representation of web spaces.

This last corollary completes our theory of web evolution. It tells how we can detect the completion of a stage transition.

A web space is a personating composition of web resources. A web space is the analog on the web of a person in our society; it can be viewed as a virtual person in the virtual society of the World Wide Web. When a human user materializes a piece of his consciousness on the web, he produces a new web resource in his web space: his personalities become pieces of data resources, his capabilities become pieces of service resources, and his friendships become pieces of link resources. The typical representation of a web space on Web 1.0 is the homepage; on Web 2.0, web spaces are represented as personal accounts.

Ordinary people usually tell a child's growth by external shape rather than by examining either the quality of the child's intrinsic character or the level of his self-consciousness. By the time growth becomes observable this way, the child has already passed his crucial transition and entered his new stage of life. Similarly, we do not expect ordinary web users to observe a web stage transition by examining the quality of web resources or upgrades of the resource-operating mechanism. There is, however, an explicit and unmissable sign by which the public can recognize the coming of a new stage: the emergence of a new representation of web spaces.

Internal upgrades of web resources and of the resource-operating mechanism ultimately cause external changes in how these resources are displayed. A stage transition should therefore always lead to a brand-new representation of web spaces. On Web 1.0, resources are raw data, hardcoded links, and passive, non-portable services, and they are displayed anonymously. The primary goal of a Web-1.0 space is to properly show these resources rather than to interact with human users. Hence the ordinary web page is the suitable and convenient representation of Web-1.0 spaces: Web-1.0 spaces are typically homepages.

On Web 2.0, the resources in web spaces become labeled data, labeled links, and active, portable services. Because of the tags, they can no longer be presented anonymously. Explicit declarations of ownership over Web-2.0 resources are critical, because different people may tag the same resources differently. As a result, Web-2.0 spaces have migrated to individual accounts that protect individual specifications.

The emergence of a new representation of web spaces is not only a sign of the end of a stage transition; it also leverages the creation of new-quality resources. We need new bottles to contain new wine; this is a commonplace of evolution. Labeled resources are handled poorly under the old representation: Web-1.0-style homepages provide no handy way to protect the private information in tags, so users sticking to such homepages would be unwilling to produce Web-2.0-quality resources. Web-2.0 personal accounts, on the contrary, effectively protect the ownership of individually specified tags. This new style of web-space representation thus encourages users to produce more resources of the new quality.

Based on Corollary 7, we can be sure that the stage transition from Web 1.0 to Web 2.0 is already complete. We are now in the middle of the Web 2.0 stage. The main task at present is to accelerate the quantitative accumulation of Web-2.0-quality resources, which is indeed what most current Web 2.0 companies are doing. In this period, the most timely innovations are the ones that help people produce more Web-2.0 resources, and produce them faster. By Corollary 5, the next transition has not started yet: most critically, the quantity of Web-2.0-quality resources has not accumulated enough to cry out for a new fundamental upgrade of the resource-operating mechanism. Without the pressure of need, research on this next fundamental upgrade cannot be promoted at this moment. Our theory can predict what this next fundamental upgrade will be, and that is what we do in the following sections. But the theory does not foretell the exact time when the new transition will happen; that depends on the speed at which we are now accumulating Web-2.0 resources.

In summary, we have assembled a new theory of web evolution in this section. This theory is not about inventing a future, but about predicting it. The future of the World Wide Web is not decided by random intelligent designs; on the contrary, successful intelligent designs must satisfy the web evolution laws in order to survive "natural selection." The future of the WWW is predictable.

Sample Use Case

This web evolution theory can be applied to explain many web phenomena. As an example, we apply it to several general phenomena of the web industry. After some general discussion, we apply the web evolution laws to a specific case, the rise of Google, and show how this particular phenomenon can be explained by web evolution. Readers not interested in this topic may skip to the next section, which begins the discussion of the next-stage web.

From the web evolution point of view, web companies are factories that produce varied web resources. Though many companies produce all three basic types of web resources, their primary products usually focus on one type. For example, the primary products of Amazon are descriptive resources (data); the primary products of eBay are functional resources (services); and the primary products of Yahoo are interconnective resources (links). Around their primary products, these companies also produce the other types as byproducts to boost their revenue.

In the real world, different factories consume various feedstocks and produce varied products. These factories also partition their workload and cooperate with each other. Some are pretreating factories that take crude materials and produce refined materials or parts; others are retreating factories that take the refined materials and parts and produce final products for human users. For example, iron puddling works take ironstone and produce refined steel, and automobile manufacturers take the refined steel to produce cars. Since the WWW as a whole simulates our real world, we might expect to see a similar division of labor in the web industry: some web companies taking low-quality web resources and producing high-quality web resources, others taking the high-quality resources and producing user-oriented end products. In the current web, however, this type of partitioning and cooperation is still not a general phenomenon. By the web evolution theory, there is a reason.

Unlike other industries, the web industry has a unique restriction: the progressive status of the WWW itself. Our theory tells us that the quality of effectively usable web resources is limited by the resource-operating mechanism of the time. While lower-quality resources are not desired, higher-quality resources are also not valuable, because the lower-level resource-operating mechanism cannot make them perform well. As we have discussed, web widgets could not spread effectively before the prevalence of AJAX. This restriction greatly limits the choices of current web companies. As a matter of fact, the web is only at its second stage of evolution---Web 2.0---and there is not yet much difference in the quality of web resources. Therefore we cannot expect deep partition and cooperation among web companies at the current stage. Many current web companies work more like digging crude gold out of ore than like refining pure gold from pretreated gold products. The flourishing age of resource refining has not yet come. Looking at the history of the first and second industrial revolutions, we likewise see that mining flourished before manufacturing did. Every evolution has its timing, and web evolution is no exception. With the further evolution of the WWW, we foresee deeper and broader partition and cooperation in the web industry, as has already happened in other industries.

Another puzzle in the web industry is who the customers are. At first glimpse this seems a dumb question: certainly humans are the customers. This is true but not a complete answer. There is another large group of customers who are overlooked---the web spaces (virtual people). For a large portion of web products, from HTML design toolkits to avatar icons, the target consumers are not humans but web spaces. Humans are the guardians of their web spaces, and they buy these products to outfit them. The scenario is similar to buying products for our babies: though babies are the target consumers, they do not express their wants themselves (they do not yet have the ability), and though parents do everything for their babies, parents are not the real consumers of baby products such as toys. In our web evolution theory, Web-1.0 spaces are babies; so it looks as if only humans consume web products, because we have to do everything for our babies. Starting from Web 2.0, however, things begin to change. As pre-school kids, web spaces start to consume some things by themselves. The spread of web feeds is an important signal. Unlike traditional products, which first go to the hands of humans who then decide how to put them into web spaces, web feeds are first sent to web spaces, which can then decide what to pass on to humans. This is a small but very important change in the history of the web industry. Producing better products to serve these virtual persons will become a big task and a huge opportunity for web companies. Indeed, some very sensitive thinkers in the web industry have noticed this change. For example, Susan Wu has pointed out in her blog that the avatar and widget industry has a great marketing future. Implicitly, she pointed out that the consumers in this great new market are not directly the real humans, but the second-life clones of humans. Extending her vision: the consumers are going to be virtual persons on the web, which are typically web spaces.

Our web evolution theory also explains the relations between web companies. Big (broad-domain) companies are competitors only if they produce the same type of primary web resources---for example, Google vs. Yahoo. Yahoo and Amazon, on the other hand, are not necessarily competitors and can be cooperators, because they focus on producing different types of primary web resources. Small (narrow-domain) companies may co-exist well even while producing the same type of primary web resource, as long as they serve different domains.

The competition among web companies is essentially competition on the quantity and quality of their products. When two companies produce the same type of product, the one producing greater quantity beats the one producing less, and the one producing higher quality beats the one producing lower quality. This is the general rule of web competition, though it has exceptions. We explain both the competition and the exceptions through a classic example of web evolution---the rise of Google.

The rise of Google can be viewed in two stages: the pre-duel stage (before the duel with Yahoo) and the duel stage. Many people know that Google was built on the famous PageRank algorithm. Our web evolution theory, however, tells us that this algorithm only made Google a winner of its pre-duel stage; other, more important decisions won Google the duel with Yahoo.

From the web evolution point of view, the PageRank algorithm is a good example of producing web resources of the right quality. In PageRank, Google's co-founders sharply captured the crucial ranking criterion for Web-1.0-quality resources---their link popularity within the network. At the time, most of Google's competitors focused on ranking web resources by the closeness of the meaning of their content to the user-specified keyword. They therefore built complicated keyword-matching algorithms (often depending on sophisticated statistical analysis over the keywords) to rank the link resources they produced. The difference between Google's approach and the others' comes down to one question: which type of resource manipulation could be performed effectively and efficiently by the Web-1.0 resource-operating mechanism? That mechanism did not itself support effective computation of keyword similarity, so optimizing page ranking from that direction meant extra load (i.e., inefficiency) in the Web 1.0 environment, which slowed down real-time page ranking. Web 1.0 did, by contrast, support effective computation of the popularity of web links. Statistically, when semantic distance was hard to compute, the pages with higher connection popularity had better chances of being the ones being looked for. Therefore, rather than employing complicated keyword matching, PageRank needed only simple, high-performance keyword matching compatible with the computation of link popularity. As a result, PageRank could produce a greater quantity of relevant link resources per unit of time (and might even rank them better statistically) than the algorithms its competitors used at the time. This outperformance made Google the winner of its pre-duel stage, and it grew into a Yahoo-scale company.
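
For readers who have not seen it, here is a minimal power-iteration sketch of the PageRank idea on a toy three-page web (0.85 is the commonly cited damping factor). Note that the scores come purely from the link structure; no keyword statistics are involved:

    // links[i] lists the pages that page i links to (a toy three-page web).
    const links: number[][] = [[1, 2], [2], [0]];
    const n = links.length;
    const d = 0.85; // damping factor, the commonly cited value

    let rank: number[] = new Array(n).fill(1 / n);
    for (let iter = 0; iter < 50; iter++) {
      const next: number[] = new Array(n).fill((1 - d) / n);
      for (let i = 0; i < n; i++) {
        for (const j of links[i]) {
          next[j] += (d * rank[i]) / links[i].length; // i shares its rank among its out-links
        }
      }
      rank = next;
    }
    console.log(rank); // popularity scores computed purely from links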

Though some "smart" people laugh at Yahoo's use of manual labor, we should and can never underestimate the value of Yahoo's hierarchical directory. That directory represented the highest quality of link resources we had ever reached (probably even until now). Yahoo had an ambitious plan: to index the entire web into one huge taxonomy, and the effort partially succeeded. Even though the "partial" percentage was small in the eyes of some critics, it was enough to make Yahoo the biggest search giant of the Web-1.0 stage. Through Yahoo's hierarchical directory, searchers got not only links relevant to their query, but also nearby entries, in at least one hierarchical tree, that they might also find interesting. This was, and still is, very valuable information to web users. But as we know, Yahoo could not sustain this precious hierarchical directory, and that failure began Yahoo's sunset. Why could this happen, if the directory really represented a higher (not lower) quality of link resources? We need to return to our theory of web evolution.

We have repeated an important point of web evolution several times: the quality of consumable web resources is always restricted by the resource-operating mechanism of the time. Web users generally cannot effectively consume resources whose quality is higher than the processing power of the present resource-operating mechanism. This was Yahoo's problem. The quality of the link resources in Yahoo's hierarchical directory was far ahead of its time; as a result, these resources were very difficult to produce and update. (Hence Yahoo employed so much manual labor.) When the quantity of web resources increased explosively, a threshold was eventually reached beyond which even the giant Yahoo could no longer afford the manual maintenance of this huge semantic index. The decay of Yahoo thus began.

So the duel between Google and Yahoo was, on its face, a battle of greater quantity versus higher quality. Since Yahoo could not in practice maintain its high quality, Google's ultimate victory was predictable from the beginning; it was only a matter of time. The duel would not, however, be a classic of web evolution if the story ended there. Suddenly an unexpected factor appeared---the emergence of Web 2.0. This factor could have changed the fate of either side, and very likely neither side consciously realized its importance to their battle. But one side was always more sensitive to new innovations and quicker to adopt them. This time, it was Google.

Google made an important decision: to embrace AJAX, and thus the emergence of Web 2.0. This decision made Google a web-resource producer at a completely new level---the first generic provider of Web-2.0-quality resources. Through it, Google produced many remarkable Web-2.0 products such as Gmail and Google Earth. These new-quality resources immediately made "Google" a synonym for "fashionable," while Yahoo was unfortunately marked as a synonym for "old-fashioned." At that moment, the battle lost its meaning.

Certainly this end was not Yahoo's inevitable destiny. Had Yahoo known about web evolution a little earlier, it could have avoided this ending by keeping Google out of the history books. Web 2.0 did not choose Google; it was Google that decided to follow Web 2.0. Yahoo did finally realize something and acquired Flickr, a leading Web 2.0 company, in 2005. But not only might the purchase itself have already come too late; Yahoo then spent several more months debating whether Flickr's technologies should be integrated into its mainstream products.

The rise of Google is a great phenomenon. Google has replaced not only Yahoo but probably also Microsoft as the most influential company in the IT industry. At present, Google seems unbeatable. From the web evolution point of view, however, Google is certainly defeatable. It has its own unavoidable problems, just as Yahoo once had.

Google has its weakness. Ironically, Google's greatest weakness is its PageRank algorithm, just as Yahoo's greatest weakness was once its hierarchical directory. Strength turning into weakness is a phenomenon repeated many times in history. Yahoo's problem was that its hierarchical directory was ahead of its time and eventually failed to scale with the quantitative expansion of web resources. Google's PageRank algorithm errs on the other side: it was the best description of the Web-1.0 network, but the web has evolved into Web 2.0, and the evolution continues. As a result, the algorithm will slowly (but at an accelerating pace) lose its dominance as it goes out of date. The Web-2.0 network needs a newer model of ranking criteria because of the spread of manual labels. Link popularity may still be an important factor, but no longer as important as before, because we now have more information with richer semantics. This change may not be achievable by trivially modifying the existing PageRank algorithm, because the fundamental focus has moved. Having invested so much in scaling PageRank, Google is likely the last one willing to adopt a change (as Yahoo once was). This is Google's Achilles' heel.

We need a new Larry Page and Sergey Brin. Just as these two brilliant pioneers discovered the most intrinsic rules of ranking on Web 1.0, we now need a new Larry Page and Sergey Brin to discover the most intrinsic rules of ranking on Web 2.0, and even for the later stages of the WWW. We may need fundamental changes in web search strategy rather than minor modifications of existing ranking algorithms. In this article we present one brand-new search strategy based on our study of web evolution. We do not claim to be the new Larry Page and Sergey Brin (and certainly we are not). But we hope our discoveries will be an important resource for them and help them lead web search to a completely new level.

Beyond Web 2.0

Based on our web evolution theory, we now discuss the next stage beyond Web 2.0. For convenience we call it "Web 3.0," since it follows Web 2.0. But this Web 3.0 is not John Markoff's Web 3.0 from his New York Times article. By Corollary 1, this Web 3.0 is a world of elementary-school children.

Corollary 3 allows us to link Web 3.0 to a society of elementary-school children. We therefore identify five closely related symptoms of children's growth from the pre-school stage to the elementary-school stage, and we show how these five typical symptoms will occur during the transition from Web 2.0 to Web 3.0. Here are the five symptoms.

  1. Multiple characters,
  2. Formal education,
  3. Homework,
  4. Exam,
  5. Conditional friendship.

Symptom One: Multiple Characters

Scenario

Alice becomes an elementary-school student. Her life changes quite a bit. Most essentially, she needs to play various characters in different contexts. For example, in school she is a student, and in the chorus she is a singer. In school, singing skills are not the focus; in fact, most of her classmates are unaware of her singing at all. In the chorus, however, almost none of her peers cares what she learns every day in her math classes. For her, it is like speaking two different languages to two different groups of people. Self-consciousness splits into multiple characters. This is a grand challenge not only for Alice, but also for web researchers. It is the trigger of Web 3.0.

Challenge

By Corollary 4, a stage transition is always initiated by the unbounded quantitative accumulation of web resources of the old quality. The rise of Web 3.0 will be no exception. So first we need to examine the consequences of an unbounded increase in the quantity of Web-2.0-quality resources.

A common characteristic of Web-2.0-quality resources is that they are tagged. Similarly tagged resources are likely to be grouped together; this is the trigger for the emergence of special Web 2.0 communities. This prediction has already been anticipated by other Web 2.0 researchers when they characterize Web 2.0 as a social web. A social web is composed of web communities, which are sets of people (or, in a more abstract sense, agents) with shared elements. From the web-evolution point of view, a web community is a group of web spaces that share a common interest in a particular domain. We emphasize that the members of a web community are web spaces rather than the owners of those web spaces. In fact, we want to automate our web spaces to do as much of the tedious daily work as they can, leaving humans only the creative part of social communication. Web spaces (virtual people on the web) are the direct members of web communities; their human owners are the masters of these community members. At the very moment this article is being written, the formation of Web 2.0 communities has already started. For instance, the friend-making community is the first large Web 2.0 community, and Web 2.0 sites such as MySpace, Facebook, and LinkedIn are its typical community moderators.

As Web 2.0 evolution progresses, there will be more and more special communities. For readers who doubt this claim, here is our explanation. A web community rests on a common interest among a group of people. For example, many people are interested in making friends on the web, so this community was picked by the Web 2.0 pioneers to be the pioneer community. But friend-making is certainly not the only popular domain. Coupon search, for instance, is another community that already engaged millions of members on Web 1.0, through Web 1.0 sites such as FatWallet and dealsea. There is no barrier except willingness (commercial motivation) to upgrading these sites to Web 2.0. We predict this type of upgrade will happen sooner or later, because if these sites do not upgrade themselves, new Web 2.0 sites in the same domain will emerge and eventually replace them by providing far more user-friendly facilities. If both friend-making and coupon search are attractive and profitable because of their numerous members, what about the other domains that also engage numerous real-world members, such as clinic search, cooking, movie reviews, immigration consulting, amateur astronomy, bird watching, treasure hunting, coin collecting, and many other topics you can name? This upgrade from Web 1.0 to Web 2.0 will be on a full scale. At present it is only the beginning.

The booming of Web 2.0 communities causes a problem that is minor at the beginning but will gradually become severe: the Web 2.0 resource-operating mechanism is not designed to handle community-sensitive resources. This problem is certainly not severe while there are only a few communities (as at present). But its severity will grow rapidly with the number of new communities. For example, if there are thousands of different communities, (1) how do users find and join the communities of interest? (2) Do they need to log in to every community site individually and remember all the passwords? (3) Do they need to repeatedly copy and paste their generic information into every subscribed community to update it? (4) If some information is sensitive to only some, but not all, of the communities they have subscribed to, how can users update it efficiently? These problems are generally unsolvable within the frame of Web 2.0. An effective solution requires a fundamental upgrade of the resource-operating mechanism. By Corollary 5, this means the start of a new stage transition in web evolution---the transition to Web 3.0.

Solution

Web 2.0 pioneers have started thinking about some of the questions we have mentioned. The OpenID project is an initiative to address the demand for universal login. The philosophy underneath OpenID is to create a universal identity for every web user. With such an identity, a web user can log in to any Web 2.0 site (as long as it supports OpenID) without needing to remember many login-password pairs.

[Figure: web spaces on communities]

OpenID is practical and convenient. But it is not an ultimate solution to all of our questions. Using OpenID, web users still need to log in to every site individually to update their personal information. This experience could be dreadful for someone who has subscribed to, say, dozens of communities. This general information-updating problem is in fact the Web 2.0 version of the primary contradiction of web evolution: the contradiction between the unbounded increase of web communities and the requirement of updating information across these communities.
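
To make this limitation concrete, here is a toy sketch in Python. It is not the real OpenID protocol; all class and field names are our own assumptions. It shows one identity logging in everywhere while profile updates must still be repeated site by site.

    # Toy illustration (not the real OpenID protocol): one universal identity
    # solves login, but each community still stores its own copy of the profile.

    class Community:
        def __init__(self, name):
            self.name = name
            self.profiles = {}            # per-site profile storage

        def login(self, openid):          # universal login: no site-specific password
            self.profiles.setdefault(openid, {})
            return f"{openid} logged in to {self.name}"

        def update_profile(self, openid, field, value):
            self.profiles[openid][field] = value

    sites = [Community("friend-making"), Community("coupon-search"), Community("bird-watch")]
    alice = "https://alice.example.org"   # an OpenID-style identifier

    for site in sites:
        site.login(alice)                 # one identity works everywhere...

    for site in sites:                    # ...but updating an address still means
        site.update_profile(alice, "city", "Provo")   # repeating the edit N times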

This primary contradiction is caused by the requirement of an individual account on every Web 2.0 site. Different web sites have their own designs of information representation, and it is non-trivial to automatically copy the data from one site to another while preserving its meaning. Note, however, that this problem is not severe on Web 1.0, because homepages usually do not require logins; we can simply upload and download homepages from one site to another. This observation gives us a hint toward a solution.

We name our solution Automatic Character-Switch (ACtS). A web user, such as Bob, can set up a local web space that stores his web resources. When he subscribes to a new web community, Bob uploads his local web space to the site and allows the site to customize its resources according to the community's specifications. This scenario mirrors exactly the puzzle of little Alice mentioned at the beginning: as she grows up, she is required to play varied characters properly in varied contexts. Similarly, our web spaces are now growing old enough to ACtS, switching characters across varied communities.

The development of ACtS relies on two advances. First, we need a uniform representation of web spaces similar to what exists on Web 1.0. This requires progress in HTML encoding; in particular, we need independent HTML encoding of individual web resources, so that a web page can be flexibly laid out with CSS from its dynamic web-resource units. Given current dynamic-web-page technology, this is achievable in the near future. Second, we need a character recognition and casting technology, a combination of information retrieval and semantic annotation methods. This is the core of ACtS.

[Figure: ACtS]

As the figure shows, ACtS begins with a user subscribing a web space to a community. The community server then performs a community-sensitive resource-identification procedure to categorize (information retrieval) and annotate (semantic annotation) the public resources stored in the web space. As a result, the local web space gains a community-specific view over its resources, which composes a community-sensitive sub-space. Finally, the community server can apply its community-specific resource-operation methods to this identified sub-space and lay its content out according to the community's conventions.
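
The following minimal sketch suggests how such an identification procedure might look. Everything in it (the keyword matcher standing in for real information retrieval, the taxonomy format, the data shapes) is our own illustrative assumption, not a specification.

    # A minimal sketch of the ACtS identification step (illustrative only).
    # A community taxonomy is modeled as {category: [keywords]}; a web space
    # is a list of resources; the result is a community-sensitive sub-space.

    def acts_identify(web_space, taxonomy):
        sub_space = []
        for resource in web_space:
            text = resource["content"].lower()
            # Categorize (a stand-in for real information retrieval)...
            labels = [cat for cat, words in taxonomy.items()
                      if any(w in text for w in words)]
            if labels:
                # ...and annotate (a stand-in for real semantic annotation).
                sub_space.append({**resource, "community_labels": labels})
        return sub_space

    bird_watch_taxonomy = {"sighting": ["saw", "spotted"], "gear": ["binoculars", "scope"]}
    bobs_space = [
        {"id": 1, "content": "Spotted a heron at Utah Lake today."},
        {"id": 2, "content": "My favorite guitar needs new strings."},
    ]
    # Only the heron note enters the bird-watch sub-space; the guitar note is ignored.
    print(acts_identify(bobs_space, bird_watch_taxonomy))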

ACtS is a strategy and a philosophy rather than a standard. Community moderators can implement their ACtS methods independently. Functionally similar to AJAX, ACtS is invoked only when necessary, i.e., when a web space is connected to a community. Performing ACtS creates a community-sensitive view of a web space with respect to a particular community. Hence ACtS is an upgrade of the fundamental resource-operating mechanism.

[Figure: three properties]

By Corollary 5, a fundamental upgrade of the generic resource-operating mechanism must lead to the prevalence of web resources of a new quality; ACtS will be such an upgrade. The invention of ACtS will lead to the prevalence of community-sensitive resources, whose quality is beyond that of Web 2.0 resources. Put simply: Web 1.0 allowed resources to be "online," Web 2.0 allows resources to be "tagged," and Web 3.0 will allow resources to be "community-specifically labeled."

The core of ACtS is community-specific annotation instead of mapping. We discuss annotation (and thus how ACtS works) in the next section, but we emphasize here that ACtS does not enforce mapping. Generic knowledge mapping is a long-standing hard problem that is unlikely to be solved effectively in the foreseeable future. ACtS only allows different communities to recognize whatever they can identify in a web space; it does not care whether a resource is also identified by another community. So it is a pure resource classification and annotation problem. On the other hand, we can expect many resources to be identified simultaneously by more than one community. These overlapping identifications automatically result in mappings, which will lead to a brand-new type of web resource---primitive mapping resources, a new generation of link resources. The abundance of this type of web resource will become a phenomenon on Web 3.0, and its unbounded quantitative increase will be a major factor triggering the web to evolve to its 4.0 stage.

This web mapping problem also has its analogical meaning. In elementary school, children learn knowledge on varied topics. Though some very smart kids may know how to apply what they learn across domains, this is not a general goal of elementary education. At the elementary level, the main goal is to teach children domain knowledge rather than to train them in flexibly applying knowledge across domains. The latter goal would generally fail, because elementary-school kids are not mature enough (their self-consciousness is not developed enough) to grasp resources in varied domains and creatively link them together. To repeat: this limitation is not due to education methods but to the reality of human growth. There is no way to force this process. "To every thing there is a season, and a time to every purpose under the heaven." (The Holy Bible, KJV, Ecclesiastes 3:1)

Symptom Two: Being Educated

Scenario

Going to school becomes a major component of Alice's life. Previously, Alice learned almost everything from her parents; now she is also taught by teachers. Previously, Alice learned primarily from experience; now she also learns from textbooks.

Challenge

Acting (ACtS) is an art that requires learning. An old proverb tells us that "everyone is an actor," but few people are born good actors. Most of us learn to act better throughout our lives. We are not born good mathematicians, good singers, or good politicians; all of these require learning, and never-ending learning. The most important and effective way to learn is to be educated. As one of the most important and widespread phenomena in our human world, formal education must have its place in the clone of our world---the World Wide Web. The realization of education on the WWW will eventually produce a long-expected consequence---the semantic web.

Solution

Long expected, the semantic web finally surfaces. But this Web 3.0 is still not the ideal Semantic Web that Tim Berners-Lee foresaw in his 2001 Scientific American article; Web 3.0 will not yet be mature enough for that shape. Instead, Web 3.0 can be viewed as an elementary Semantic Web, since it is a society of elementary-school children. The goal of education on Web 3.0 is to teach our web spaces (instead of us) to participate properly in various community lives.

Successful education involves three essential factors: the right textbooks, the right education methods, and the right teachers. On Web 3.0, the right textbooks are the right community taxonomies. Elementary-school students cannot learn from college textbooks, which are too complicated for them. Similarly, the right Web 3.0 community taxonomies are not huge, comprehensive ontologies such as Cyc or Mikrokosmos; they are small, focused ones such as the FOAF vocabulary for the friend-making community.

Small-scale taxonomies are the basis of Web 2.0 communities. Whether presented explicitly or implicitly, taxonomies are the community conventions by which community resources are marked. For example, when subscribing to a friend-making community such as LinkedIn, we are asked to upload our contacts, and the community servers can mark them according to a LinkedIn-style FOAF specification. These community-specified resources can then be operated on effectively by the respective community-specific resource-operating mechanisms. For instance, the LinkedIn community servers can provide distinct services to different groups of people (such as classmates or working partners) based on their understanding of community-sensitive data resources.
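
As a deliberately tiny illustration, a friend-making community might mark uploaded contacts as sketched below. The marking rules and field names are our own assumptions, loosely inspired by FOAF-style person descriptions; they are not any real site's specification.

    # A deliberately tiny friend-making taxonomy sketch. The rules below are
    # our own illustrative assumptions, not any site's real specification.

    FRIEND_TAXONOMY = {
        "classmate":       lambda c: c.get("school") is not None,
        "working_partner": lambda c: c.get("employer") is not None,
    }

    def mark_contacts(contacts, taxonomy):
        """Attach community labels to each uploaded contact."""
        for contact in contacts:
            contact["labels"] = [label for label, rule in taxonomy.items() if rule(contact)]
        return contacts

    uploaded = [
        {"name": "John",  "school": "BYU"},
        {"name": "Nancy", "employer": "FatWallet"},
    ]
    # John is marked a classmate, Nancy a working partner; each group can then
    # receive distinct community services.
    print(mark_contacts(uploaded, FRIEND_TAXONOMY))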

Creating these community taxonomies is a community effort rather than an individual effort. Although nothing forbids an individual from building a taxonomy for any community, community taxonomies become valuable only after they are adopted by large numbers of community members. A community site that uses a poorly designed (i.e., biased and subjectively created) taxonomy will lose its members to competitors that adopt well-designed taxonomies. This natural selection among community taxonomies is analogous to the adoption of textbooks in the real world. Anyone can write a textbook in any domain, but whether a textbook is widely adopted depends on the public vote of textbook users. Only the best survive; this is natural selection.

The second factor of successful education is the right education methods. The Web 2.0 education method is pre-school style: teachers define rules and ask parents to help the pre-school kids follow them exactly. After users subscribe to a Web 2.0 community (such as a friend-making site), they set up a web space by filling in strictly defined forms (rigid hand-in-hand education). This rigid learning lets web spaces later act precisely as instructed. The model works at the pre-school level because the teaching scenarios are simple and enumerable. It no longer works effectively at the elementary-school level, however, where the amount of knowledge becomes non-enumerable. When the objects of learning become non-enumerable, the children need some proactiveness of their own. While human children gain this capability for reasons we still do not understand well, machines can only be educated externally. Since fully automatic methods are not reliable, we need cooperation between teachers (the server side) and parents (the client-side web users). The question thus becomes: what is the most effective strategy for keeping web users engaged at their computers? The answer is: computer games!

[Figure: Princess Maker 2]

Princess Maker is a classic child-raising computer game created by Gainax, a Japanese anime studio. To the best of our knowledge, Princess Maker is one of the earliest (probably the earliest) computer games in this category. The figure shows the second game in the series, Princess Maker 2, which is also the only game in the series that has been translated into English. The story of the game is straightforward. At the beginning, the player is given a virtual girl, who has just reached her 10-year-old birthday. The player's duty is to help her grow by designing a good education schedule for her, redoing this scheduling every month (in game time) until her 18-year-old birthday. On that birthday, the player watches how the girl eventually turns out, which is the end of the game. There are more than a dozen different endings, depending on how the player played. The ultimate goal is to make the girl a princess who marries a prince. But depending on the player's choices, the girl may grow into various characters, such as a teacher, a warrior, or a waitress. This game is a perfect model for our Web 3.0 ACtS education scenario: we educate our web spaces and let them grow to be what we expect them to be.

Web 3.0 spaces are best realized as game-style software terminals. Web users are encouraged to create personalized avatar icons for their web spaces, through which they may better relate to their web spaces as their own children. Besides these visual components, a Web 3.0 game terminal should support three basic functions. First, it is the front-end display of a web space: a user can retrieve arbitrary web resources stored in the web space and view them on screen. Second, it supports ACtS: users can connect their terminals to any community service provider, and ACtS then creates a community-specific sub-space over the original web space with respect to the linked community, allowing the web space to play varied characters across its linked communities. Third, it supports education, performed as a short game. This education game essentially helps community annotators annotate local resources by interacting with human users. Given the high diversity of knowledge representations and of unusual user preferences, machines will not be able to annotate everything perfectly on their own, even when the annotation domain is tightly restricted. We need individuals to participate in this education process (just as we need parents to participate in educating their children, because every child is different). This is the whole purpose of the education game.

Before we walk through a particular education process, we address the last important factor of successful education: the right teachers, who are the community annotators. Community annotators include both humans (domain-specific annotation engineers) and machines (community-specific annotation programs). Machine annotators are front-end teachers in direct contact with the students (web spaces) and their parents (the owners of web spaces). Human annotators are the supervisors of these machine annotators. They can continuously upgrade the education content (community taxonomies), the education methods (game content), and the front-end teachers (annotation programs) based on feedback from community members.

A particular teaching procedure may run as follows. When Bob, a parent (a web user), enrolls his child (a web space) in a course (a domain, i.e., a community site), the school (the community server) assigns a teacher (a machine annotator) to the new student. The teacher scans all the knowledge the student already has (the web resources in the web space) and teaches the student which items relate to this course and how (categorizing and annotating domain-specific resources). During this process, Bob cooperates with the teacher and explains some family-specific conventions, e.g., in Bob's family "the sweetheart" means a favorite guitar. The teacher helps the student remember these special family conventions with respect to the community, so that the child later knows how to convert them correctly into standard community conventions. When the education procedure is done, the teacher says goodbye to the student and its parent, and reports all user-specified conversions (with the users' permission) to its supervisors (the human annotators). The human supervisors regularly perform statistical analysis of user feedback to decide whether some user specifications are so common that they should be folded into the community taxonomies or the community annotation programs.
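
A minimal sketch of one such session follows. The dialogue shape and the data structures are our own illustrative assumptions: a machine annotator applies the parent's stated conventions, labels the resources, and collects the conversions for the human supervisors.

    # A minimal sketch of one education session (illustrative assumptions only):
    # a machine annotator applies the parent's family conventions, labels the
    # resources, and records the conversions for the human supervisors.

    def education_session(web_space, taxonomy, family_conventions):
        """Annotate resources, consulting the parent's conventions first."""
        learned = dict(family_conventions)    # e.g. {"the sweetheart": "guitar"}
        report = []                           # conversions reported to supervisors
        for resource in web_space:
            text = resource["content"].lower()
            for phrase, meaning in learned.items():
                if phrase in text:            # apply a family convention
                    text = text.replace(phrase, meaning)
                    report.append((phrase, meaning))
            resource["labels"] = [cat for cat, words in taxonomy.items()
                                  if any(w in text for w in words)]
        return web_space, report              # report goes to human supervisors

    music_taxonomy = {"instrument": ["guitar", "piano"]}
    space = [{"id": 1, "content": "Polished the sweetheart this morning."}]
    annotated, feedback = education_session(space, music_taxonomy,
                                            {"the sweetheart": "guitar"})
    print(annotated)   # the resource is now labeled "instrument"
    print(feedback)    # [("the sweetheart", "guitar")] for statistical analysis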

In principle, the purpose of this education process is to produce a local community-specific view for a particular web space. Hence it needs to be done only once for any web-space and community pair. Later, whenever this web space logs in to the same community, ACtS automatically customizes the resources in the web space according to the community-specific view. Any resources added since the last login can be annotated automatically based on the specified local view.

Our real-world child-raising experience tells us that children's growth is closely related to their parents' passion for educating them. The more effort parents spend on their children, the better those children can turn out. Web education follows the same philosophy. Web spaces can serve their masters better only if they have been well educated, and this demands the passion of the parents (the web users). What web scientists can do is design better education methods (more attractive computer games) that help produce and maintain that passion. In the end, the duty still lies with the user. This philosophy is fair: if someone wants machines to serve his personal interests well, he must personalize the machine's knowledge himself, since no one but he knows his little secrets. In return, a well-educated Web 3.0 space can bring him more benefits than he expects. In the following sections, we discuss some of these benefits.

Web-space Education Industry

Keen readers may have noticed the similarity between our education game scenario and a currently popular game, Second Life. They are indeed similar, but the differences are significant. Second Life builds a virtual society open to all topics, while each of our education games covers only a narrow topic. On the other hand, each of our education games could be viewed as a special game scene within one larger game, which could be something like Second Life. Subscribing to a new community would then be equivalent to downloading a new scene for an already installed game. The difference is that each scene would be designed by a different company rather than by a single one such as Linden Lab.

This virtual education business may grow very big in the future. Note especially that this is only elementary education; as the World Wide Web evolves we will have high-school education, college education, special education of every type, and so on. For comparison, consider how big the education business is in the real world today. The company that produces the education platform (the base game terminal) may rise to be a giant like Microsoft or Google. Personally, we favor Linden Lab at present. If Linden Lab were to open its platform and actively participate in developing ACtS technology and web-space education methods on top of Second Life, it would become a leading company of Web 3.0; in other words, Linden Lab could be the next Google. Whether to remain a successful game provider or to become a new leader of the World Wide Web is a decision now in the hands of Linden Lab's managers. And if other leading game companies (such as Blizzard) would like to join this competition, they certainly could.

Symptom Three: Homework

Scenario

Life does not always shine as one grows up. One unpleasant consequence of being a formal student is homework. Every day, Alice must spend at least several hours (not playing but) doing homework. Doing homework is both a review of what she has learned and practice in solving common questions with what she has learned.

Challenge

Homework consists of regular assignments. In our analogy, homework means regular assignments for web spaces. On Web 2.0 we have already invented such regular assignments: typically, web feeds. Web 2.0 spaces often support standard web feed formats such as RSS and Atom to enable web syndication. These are pre-school-level assignments, fixed and simple.

As we know, pre-school assignments are hardly real homework. Their main purpose is to build up the capability of doing homework rather than to solve real problems, and every pre-school assignment is rigidly fixed by the teachers. Similarly, Web 2.0 feeds are restricted by the feed providers; they demonstrate a new capability for doing homework more than they carry out personal assignments. Elementary-school homework, by contrast, becomes more complicated and requires understanding of domain-specific material; its purpose is to practice learned domain knowledge. The corresponding Web-3.0-quality service is user-requested web feeds.

Solution

Web 3.0 feeds will allow bi-directional web syndication rather than the current one-way communication (in which receivers cannot actively request what they expect). This technology is based on ACtS, and so it is of Web 3.0 quality. Web 3.0 users can issue active feed requests to the respective community servers. ACtS treats these requests as new web resources for the target communities, so they can be properly annotated by the respective community annotators. The community servers can then dynamically produce web feeds specific to the annotated user requests. This process of generating user-requested feeds is real homework-doing, since it applies learned knowledge to solve regular assignments.
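
A minimal sketch of the idea follows. The request format, the taxonomy, and the matching rule are all our own assumptions: the standing request is annotated once and then matched against new community items whenever a feed is generated.

    # A minimal sketch of a user-requested feed (all formats here are assumed).
    # The user's standing request is annotated once, then matched against new
    # community items each time a feed is generated.

    def annotate_request(request_text, taxonomy):
        """Treat the request as a new resource and label it via the taxonomy."""
        text = request_text.lower()
        return [cat for cat, words in taxonomy.items() if any(w in text for w in words)]

    def generate_feed(request_labels, community_items):
        """Return only the items whose labels overlap the request's labels."""
        return [item for item in community_items
                if set(item["labels"]) & set(request_labels)]

    bird_taxonomy = {"sighting": ["sighting", "spotted"], "gear": ["binoculars"]}
    labels = annotate_request("Send me new heron sightings near Utah Lake", bird_taxonomy)

    items = [
        {"title": "Heron spotted at Utah Lake", "labels": ["sighting"]},
        {"title": "Binocular buying guide",     "labels": ["gear"]},
    ]
    print(generate_feed(labels, items))   # only the sighting item is fed back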

Web 3.0 feeds still have their limitations. In general, users cannot request web feeds across multiple domains, i.e., feeds that automatically integrate information from varied sites. As discussed earlier, cross-community mappings are not enforced on Web 3.0. This type of cross-domain web feed may be realized at later stages of web evolution.

Symptom Four: Exam

Scenario

Doing homework is unpleasant; taking exams is a nightmare. That is how Alice feels. In fact, doing homework and taking exams require two different types of skill. To do homework well, Alice needs to be patient with the questions, because her homework serves her long-term interest and she usually has plenty of time to tune the results. (Similarly, web feeds are about general interest; users seldom expect particular answers from web feeds immediately.) To take exams well, by contrast, Alice needs to be proactive in preparing answers to potential questions, because she rarely has much time to answer questions during an exam. Proactiveness means preparing answers before the questions are asked.

In real life, taking exams has a broader sense than doing tests in school. Any accidental event in our lives is an exam: a car suddenly breaks down, a child suddenly gets sick, a book is accidentally lost. The capability of taking exams is the capability of dealing with unexpected accidents. Consider the following scenario. Neither Alice nor Nancy is good at repairs. The difference between the two girls is that Alice is proactive (she has made friends such as John who are good mechanics) while Nancy is not (she has simply prepared nothing). One day, both girls' bicycles break down. It takes Nancy several hours to fix the problem, because she is not good at this type of work. But Alice calls John, and he comes to help; in just a few minutes, John fixes the bike for her. In this story, Alice has no greater ability than Nancy. But Alice takes the exam better because she prepared an answer for this accident before it actually happened. This is the attitude of proactiveness.

Challenge

If we project the orbit of web evolution onto the dimension of service resources, we obtain a trace of service evolution. Typically, web services evolve from reactive services on Web 1.0, to active services on Web 2.0, and then to community-specific proactive services on Web 3.0. Reactive services work only under certain pre-defined conditions; they are conditioned reflexes. Web-1.0-quality services perform well when their programmed conditions are satisfied, but they need an external stimulus (such as a button click) to be invoked. Active services work regularly without being deliberately invoked each time; web feeds are typical examples, and many web widgets are also active services. "Active" describes the quality of Web 2.0 services. Proactive services prepare answers before the requests are issued. Executing proactive services needs a certain level of machine understanding, which is why they do not appear until Web 3.0. And even on Web 3.0 their use is usually limited to particular communities, since Web 3.0 does not support mappings; hence Web 3.0 proactive services must be community-specific.
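
To make the three qualities concrete, here is a toy contrast; the class and method names are ours, not any existing framework's.

    # A toy contrast of the three service qualities (class names are ours).

    class ReactiveService:            # Web 1.0: runs only when explicitly invoked
        def on_click(self, query):
            return f"searching for {query!r} now"

    class ActiveService:              # Web 2.0: runs on a schedule, like a feed
        def on_schedule(self):
            return "here are today's new items"

    class ProactiveService:           # Web 3.0: prepares answers ahead of requests
        def __init__(self):
            self.prepared = {}
        def on_idle(self, perspective, answer):   # latent work, before any request
            self.prepared[perspective] = answer
        def on_request(self, perspective):        # the "exam": the answer is ready
            return self.prepared.get(perspective, "no answer prepared")

    eps = ProactiveService()
    eps.on_idle("bike repair", "call John, the mechanic friend")
    print(eps.on_request("bike repair"))   # answered instantly from preparation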

To be proactive, we must hold some long-term perspectives. Proactive services are not for daily requests (which should be handled by web feeds), nor can they handle accidents that are entirely beyond imagination (nobody can be proactive about the unforeseeable). Proactive services address low-frequency requests that are nevertheless likely to be issued in the future. Low frequency does not mean unimportance; on the contrary, many low-frequency requests are very important to our lives. For example, all of us will get sick some day, yet sickness is in general a low-frequency event, and no one can deny its importance. We had better prepare a solution before the problem actually happens, because afterwards it might be too late. So a long-term perspective on potential exams is necessary for proactiveness. We therefore name these proactive web services Exam Preparing Services (EPS).

Solution

A typical EPS solution has three phases: an initial phase, a latent phase, and an exam phase. During the initial phase, the user specifies a long-term perspective to an EPS handler. The EPS handler regards this long-term perspective as a new web resource and connects it, through ACtS, to the respective community annotator. The perspective is then annotated based on the community taxonomies and the user's personal specifications. That ends the initial phase.

The latent phase starts as soon as the initial phase is done and keeps running until the long-term perspective is terminated. The latent phase may be suspended temporarily when an exam phase is demanded, and the suspension ends as soon as the exam phase terminates. During the latent phase, an EPS runs at lower priority than any other processes running at the same time.

During the latent phase, an EPS continuously contacts the other EPSes in the same community and checks whether they hold the same (or similar, depending on community conventions) user perspectives. Whenever it finds one, the two record each other as exam peers. Later, when any exam peer enters its exam phase, it broadcasts its exam results (discussed shortly) to its peers. The peers then record these exam results in their own web spaces as references for their own future exams.

An exam phase starts whenever a user encounters an accident that requires invoking a pre-assigned perspective. There are two possibilities. First, the perspective may be so rare that no one else has examined it before. In this situation, the request simply falls back to a regular community-specific web search, and the user has to solve it alone, since there are no previous references. The web space, however, can record the user's final decisions as future references, so that if this perspective is invoked again, the web space can at least offer the user's own search history and previous decisions as references.

In the other case, if the perspective is common in a community and the question has already been invoked by community peers, the EPS can organize its collected exam results in advance (together with up-to-date online data) and hand them to the user as exam answers. The user can thus consult these peer-suggested answers before undertaking a tedious web search of their own. This methodology applies collaborative search to personal perspectives.

As in exams, users are encouraged to score the results (possibly through an automatic voting service). Scores indicate which answers are better than others, for example that Dr. Judd is better than Dr. Jodd as a maternity doctor. Through these scores, individual web spaces learn the preferences of their human masters and can search better in the future. Finally, web spaces broadcast their exam results (with scores) to their peers, if any.
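
Putting the three phases together, here is a minimal sketch of one EPS life cycle. All structures and names are our own illustrative assumptions.

    # A minimal sketch of one EPS life cycle (all structures are assumptions).

    class EPS:
        def __init__(self, perspective):
            self.perspective = perspective   # initial phase: the annotated request
            self.peers = []                  # exam peers found during latency
            self.references = []             # scored answers collected so far

        def latent_step(self, others):
            """Latent phase: find peers holding the same perspective."""
            for other in others:
                if other.perspective == self.perspective and other not in self.peers:
                    self.peers.append(other)
                    other.peers.append(self)

        def exam(self):
            """Exam phase: return prepared answers, best-scored first."""
            if not self.references:
                return "no references yet; fall back to regular web search"
            return sorted(self.references, key=lambda r: -r["score"])

        def record_and_broadcast(self, answer, score):
            result = {"answer": answer, "score": score}
            self.references.append(result)   # remember for our own future exams
            for peer in self.peers:
                peer.references.append(result)   # share with exam peers

    alice, nancy = EPS("bike repair"), EPS("bike repair")
    alice.latent_step([nancy])               # they become exam peers
    alice.record_and_broadcast("call John, the mechanic friend", score=5)
    print(nancy.exam())                      # Nancy benefits from Alice's answer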

There are many detailed issues around EPS (we could write a whole other paper about them). Here we address only two common concerns about this technology. First, people may worry about spam. The risk certainly exists, but it is manageable. For example, users can limit the peers of their EPS servants to a group of trustworthy friends. Though this restriction may reduce the effectiveness of EPS, it is still much better than having no collaborative search at all. Moreover, companies have already invested heavily in detecting and stopping spam; by the time Web 3.0 arrives, this problem may no longer be a serious issue, especially compared to the benefits we can gain by applying EPS. Second, people may worry about bias in peers' suggestions. In fact, this type of bias is a common phenomenon in our real lives: any advice from friends is biased by their experience, yet we are still happy to take it because such suggestions often work, and work well. Most of the time we are not looking for the best answers, only for working answers, so the quality of answers is promising. Besides, "the rich get richer" is a common phenomenon in our world. Famous people attract attention more easily than others, though this is indeed one of the most severe biases in our world, since fame evaluates only the past and not the future. According to Albert-Laszlo Barabasi, a famous mathematician and network scientist, this bias is a common rule of any network, including the World Wide Web. So we should not complain about the bias caused by EPS either; it only reflects what is real in our lives.

Proactiveness: the basis of collaborative search

The value of proactive services goes far beyond what we have discussed. Proactive services are the basis of collaborative search, a new web search model that is going to change the whole web greatly.

The philosophy of collaborative web search is this: as long as web users have tried their best to dig out answers to their own questions, we can find a satisfactory answer to any question we ask, because there must be someone who is expert at answering it, and this someone is likely an acquaintance rather than a stranger. This hypothesis rests on two widely held philosophical assertions. The first is from the book of Ecclesiastes in the Holy Bible. In verses 9 and 10 of chapter 1 it is written, "The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun. Is there any thing whereof it may be said, See, this is new? it hath been already of old time, which was before us." (King James Version) This statement addresses an important fact not only about web search but about our lives in general: we often repeat searches for things that have already been searched for before (possibly by someone else). It also claims that, in a general sense, there is nothing that has not been searched for before. The second philosophical support comes from the teaching of Confucius (551-479 BC): "Walking together in three persons, I have a teacher among them." This statement claims that we often do not need to find answers from strangers; rather, our answers lie with our acquaintances, if only we know them well enough.

This collaborative search is practical because it only asks individuals to find answers in their own interest (not for the interest of others). Given the unfortunately self-interested nature of humans, this assumption is pragmatic, since it serves the self-centered. In fact, this search model is the most common (and thus demonstrably effective) search strategy in human history. Do we remember how we searched for answers before the prevalence of the WWW? Certainly: we asked questions of our friends, and of friends of our friends. That is collaborative search.

Collaborative search is very different from the currently dominant web search strategy---oracle-based web search. We have built web-search oracles (such as Yahoo and Google) that are assumed to know everything (though every one of us knows they do not). This oracle-based strategy originated in the early days of the World Wide Web, when the quantity of web resources was comparatively small and we did not search by meaning. Under those two conditions, oracle-based web search was a pragmatic and affordable strategy. But the web changes fast. First, the explosion in the amount of web resources has already made oracle-based web search more and more difficult. Google has deployed thousands of web servers all over the world and hired leading parallel-computing experts to optimize its search algorithms; even so, Google admits that it can still effectively search only a small chunk of the entire web. This quantity problem, however, is not even the most severe one when compared to the demand for searching by meaning. Fundamentally, oracle-based web search is far less effective in the realm of semantics. Some may dislike this comparison, but building a semantic oracle (i.e., a semantic Google) amounts to creating a real God (who knows everything) for humankind. If someone truly believes we can create a God by ourselves, they are welcome to launch a semantic Google. Others should give up the plan and think of alternative resolutions. (We discuss this further here.)

Again, we can explain this debate between oracle-based search and collaborative search through our analogy, on the basis of our web evolution theory. As we know, pre-school teachers can answer almost all of their students' questions, but college professors cannot. Does this mean pre-school teachers are more knowledgeable than college professors? As we know, the answer is usually the exact opposite. The difference lies on the student side, not the tutor side. Pre-school teachers can handle all the questions because the questions are simple ones; college professors cannot, because the questions they face often demand high-level knowledge that even leading scientists may not answer easily. Hence pre-school teachers can play oracle to their students, but college professors, though far more knowledgeable, cannot.

Let us explain this evolutionary phenomenon in web search from another angle. In many respects, the evolution of the WWW repeats the history of mankind. Long ago, many of our ancestors worshiped various gods in search of oracles. In that early time of human history, priests (regardless of their specific religions) were often more educated than all the ordinary people combined. This was why they could "produce" oracles (and many of these oracles were as incorrect and ambiguous as the answers we currently get from Google). As education spread, ordinary people learned more and more about the facts of the world. As a result, many old religions waned and eventually disappeared from human history. Humans stopped looking for oracles and began instead to consult among peers and seek better education; this was the spirit of the great Renaissance, which led us out of the dark of the Middle Ages. Oracle producers would not like such a movement, just as Google would never prefer the semantic web. EDUCATION is the killer of oracles; it was once, and it continues to be. Through prevalent education of web spaces, we will achieve the semantic web and collaborative search simultaneously.

As its ultimate goal, the combination of the semantic web and the collaborative search strategy will turn the web into a web of specialists, like our real human world. In the real world, everybody is a specialist in something, even if he can only answer who he is (and indeed no one can answer that question better than oneself). In the real world, to look for an answer is to look for a specialist who is professional at answering that question; our entire education system exists to produce the various specialists who can answer all kinds of questions. Similarly, the evolution of the WWW will make every single web space not only a specialist in some professional realm but also a mini search engine for that realm. The realm could be anything, from great philosophical thought to methods of cleaning a table. Certainly there will be much overlapping of specialties, which is why we can often find answers among acquaintances.

At the end of this section, we compare this collaborative search strategy to the current answer-based web search strategy represented by sites such as Answers.com and Yahoo! Answers. Though they look similar, they are fundamentally different in their philosophical foundations. Most crucially, the current answer-based search strategy is motivated by questions, while our collaborative search strategy is motivated by answers. In the current answer-based model, those who answer questions may not (and often do not) have any passion for them. Most of the time, they merely happen across a question and subjectively believe they know the answer, which is frequently incorrect. Moreover, they bear no responsibility for their answers, which makes the entire search space even worse. So the answer-based search strategy does not work well. (Google, for example, has shut down its Google Answers project, which we believe was a smart decision.)

Our collaborative search strategy, by contrast, is built on people who look for answers for themselves. Since these answers matter to them, they have the passion to look for better ones, and this passion makes them specialists in their search realms. Followers can therefore trust these previous answers, because they come from someone who cares and knows about the questions; more precisely, the answers come from specialists who have done their own (passionate) search on the same questions. This is why the collaborative search strategy provides a trustworthiness that is generally lacking in the current answer-based search strategy.

Symptom Five: Conditional Friendship

Scenario

In early childhood, when two children are friends, they are friends unconditionally. This, however, is not loyalty but naivete. Young children do not grasp the meaning of real friendship, which is always founded on common beliefs, interests, and promises. That understanding requires education and experience; it is a long process and cannot be realized suddenly. As children grow up, one of the first lessons they learn is that friendships come with conditions. For example, Alice and John are friends in playing football, while Nancy does not like football; Alice and Nancy, by contrast, are friends in drawing pictures. When children learn to play multiple characters, they also learn that friends in one character may not be friends in another. Both John and Nancy are friends of Alice, but the friendships have different meanings. Friendship becomes associated with context, and this is an important qualitative transition as we grow up.

Community-sensitive Link Resources

Community-sensitive link resources will be a phenomenon of Web 3.0. On one side, this is a sure consequence of the prevalence of community-specific data resources and service resources: when both endpoints of a link hold domain-specific meanings, the link itself automatically acquires a community-specific meaning. On the other side, the booming of community-sensitive link resources will greatly facilitate the operation of community resources: as individual web spaces present richer and richer community-specific information, it is these community-sensitive link resources that weave the resources in independent web spaces into an interconnected network.
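
As a small illustration (the structure below is entirely our own assumption), a community-sensitive link simply records the community context along with its two endpoints, which is enough for a vertical search to follow only the links that carry its community's meaning.

    # A small illustration (structure is our own assumption): a community-sensitive
    # link records the community context together with its two endpoints.

    from dataclasses import dataclass

    @dataclass
    class CommunityLink:
        source: str          # a web space (e.g. Alice's)
        target: str          # another web space (e.g. John's)
        community: str       # the context in which this friendship holds

    links = [
        CommunityLink("alice.example", "john.example",  "football"),
        CommunityLink("alice.example", "nancy.example", "drawing"),
    ]

    # A vertical (community-specific) search can follow only the links that
    # carry its own community's meaning, ignoring the rest of the graph.
    football_network = [l for l in links if l.community == "football"]
    print(football_network)   # Alice-John only; Alice-Nancy belongs to "drawing"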

The prevalence of community-sensitive link resources may have many impacts on web evolution. First, it reverses the relation between horizontal and vertical search engines. Currently, vertical search engines are built on top of generic horizontal search engines such as Google, because vertical communities are too immature to provide community-specific search by themselves. In other words, these special communities do not yet explicitly exist, for lack of community-sensitive link resources. The prevalence of community-sensitive link resources will completely overturn this scenario. When individual communities become well woven by community-sensitive links, community-specific search (vertical search) can be greatly facilitated by these new link resources rather than relying on information provided by horizontal search engines. On the contrary, it is the horizontal search engines that will have to rely on the more precise search results provided by individual vertical search engines to keep their performance competitive. Does this mean the sunset of Google and Yahoo? Not necessarily, because we may still need horizontal search engines. But if they do not prepare for this change, it may indeed be their sunset. The force of web evolution is unstoppable.

Another effect of the prevalence of community-sensitive link resources is the emergence of the primitive mapping resources mentioned earlier. Horizontal-search-engine developers will compete on their ability to support cross-community search. Mapping cross-domain resources will become a critical issue, and these primitive mapping resources will be an important new type of link resource. We can expect the entire structure of the web industry to become more and more sophisticated and ever closer to the structure of real-world industry.

Summary

In this Part 2 we have discussed the laws of web evolution. Based on these laws, we have predicted the fascinating Web 3.0. Our web evolution theory says that the web is cloning our real world both in detail and at large scale. Individually, we clone ourselves to the web stage by stage, in line with our own growth stages. Collectively, the entire web becomes more and more like our real human society. The residents of the WWW, a virtual world, are our virtual clones at varied stages. This discovery is, in fact, not surprising, since from the beginning the goal of Computer Science has been to simulate humans. As the most exciting and influential product of Computer Science, the World Wide Web is the best realization of that goal: it is how we have simulated ourselves, both individually and societally.

We are cloning ourselves by materializing our consciousness on the web, whether by intentional plan or by mindless activity. This whole perspective rests in everybody's subconscious. We are looking for immortality; we want to be remembered; we are not dust in the universe. Nobody can abandon these thoughts if given even a tiny hope, and the invention of the WWW gives everybody a big hope. This is why the WWW is a legend, and this is the most fundamental driving force of web evolution. On the basis of this driving force, idealists can build their religions and materialists can make their profits. Everybody can realize their own dreams on the web, because the web directly touches the deepest self of everybody.

Some day, progress in biological cloning will meet progress in mental cloning, and a truly new figure of immortality may appear. But we need to ask ourselves now whether this future is what we want. Even if a clone fully resurrects both our physical body and our mental consciousness, one thing is certainly non-cloneable: self-consciousness, our self-identity. Self-consciousness is the one essential part of our lives that ineluctably passes away with us. Clones may be fully like us in both external shape and internal thought, but in the end they will never be us, because our "self" is lost forever. Do we really want somebody else who claims everything that originally belonged to us, yet is simply not us?

Here we end this Part. In Part 3 we will discuss some technical details of implementing our predictions, showing that they are not only theoretically sound but also practically realistic. Based on our web evolution theory, however, this realization will not happen until the accumulation of Web 2.0 resources reaches a certain quantitative threshold, whose value we do not know. That is something we can never force, even with a theory in hand.