
    An Examination and Confirmation of a Macro Theory of Conversations
    A Realization of the Protologic Lp by Microscopic Simulation

    A thesis submitted for the degree of Doctor of Philosophy
    Paul A Pangaro
    Department of Cybernetics, Brunel University
    May 1987



    Conversation Theory is a theory of interaction. From interaction (the theory asserts) arise all individuals and all concepts. Interaction, if it is to allow for evolution, must perforce contain conflict, and, if concepts and individuals are to endure, resolution of conflict.

    Conversation Theory as developed by Pask led to the protologic called Lp which describes the interaction of conceptual entities. Lp contains injunctions as to how entities can and may interact, including how they may conflict and how their conflict may be resolved. Unlike existing software implementations based on Conversation Theory, Lp in its pure form is a logic of process as well as coherence and distinction.

    The hypothesis is that a low-level simulation of Lp, one at an internal and microscopic level in which topics are influenced by "forces" exerted by the topology of the conceptual space, would, in its activation as a dynamic process of appropriate dimension, produce as a result (and hence be a confirmation of) the macroscopically-observed behavior of the system, manifest as conflict and the resolution of conflict. Without this confirmation, the relationships between Conversation Theory and Lp remain only proposed; with it, their mutual consistency, and their validity as a model of cognition, are affirmed.

    The background of Conversation Theory and Lp necessary to support the thesis is presented, along with a comparison of other software approaches to related problems. A description of THOUGHTSTICKER, a current embodiment of Lp at the macro level, provides a detailed sense of the Lp operations. Then a computer program (developed to provide a proof by demonstration of the thesis) is described, in which a microscopic simulation of Lp processes confirms the macroscopic behavior predicted by Conversation Theory. Owing to this confirmation, Conversation Theory gains support for its use as a valid observer's language for every-day experience, and for its protologic as a basis for the psychological phenomena in the interaction of the conceptual entities of mind.


    There are many individuals who must be thanked for their help in the research and production of this dissertation. First and foremost is my thesis advisor, mentor and friend Dr Gordon Pask, whose intellectual and spiritual life has been the greatest influence on my career.

    To the many individuals who made up System Research Ltd over its long existence my thanks to them must be anonymous. My appreciation is especially strong for those who suffered the pressures of its research programme and research conditions and who may or may not be individually identified for their very tangible contributions to Conversation Theory. Elizabeth Pask provided emotional support and personal expression of a kind that is rare in the world and without which I could not have persisted at System Research.

    Mr Colin Sheppard of the UK Admiralty Research Establishment (ARE) provided contract support for the construction of THOUGHTSTICKER at a time when its subtlety and power could be seen only as a concept. He must be acknowledged and thanked for continuing the type of crucial and discriminating support championed by Dr Joseph Zeidner, who during his tenure as Technical Director of the US Army Research Institute supported the work of Pask for its own sake. Mr Dik Gregory also of ARE provided intellectual support and contributions to the construction of THOUGHTSTICKER during its development.

    Dr Jeffrey Nicoll, while Director of Research at PANGARO Incorporated, constructed the complex innards of THOUGHTSTICKER and hence conquered both the Symbolics environment and the work of Pask on the subject of Conversation Theory. He has also contributed to its formal and theoretical side. He provided undaunted moral support for my efforts on the dissertation and continues to be an important collaborator and close associate.

    Mr Peter Paine as my dual in PANGARO Limited has provided strong support and has allowed me to take the time and resources to complete this dissertation, often to his own disadvantage when timescales and responsibilities of contracts were very great.

    Others who provided moral support without which I could not have completed are Herbert Brun, Patricia Clough, Graham Copeland, Karen Rose Elder, Michael Granat and Symbolics Education Services, Christina Gibbs, Heather Harney, Kevin Kreitman, Shelby Miller, Abe Raher, Vivian Scott, Louis Slesin, Ricardo Uribe, Eric Wolf, and especially Heinz von Foerster, for his foundations for Conversation Theory and the untiring vitality he has expressed to me.

    I. Preface

    I.1 Structure of the Dissertation

    In the writing of this dissertation, while covering the necessary points on the main issues of the thesis itself, I was encouraged by my colleagues to insure that I had provided the following elements:
    1. Background on Conversation Theory itself.
    2. A personal history of my involvement with Conversation Theory, including why I had adopted it as an approach to the problems that interested me.
    3. An indication of my own contributions to the field.
    Upon review of the drafts up to a certain point, I thought that Point 3 had not been adequately expressed. I believe this was so in part from a desire to keep a dispassionate tone in a scientific work. Also, I specifically did not want to overstep a basic humility by giving attribution to myself as a single individual where the ideas were so much the combination of past efforts and more recent expressions of individuals other than myself. It is particularly awkward to make such differentiations in the context of a cognitive theory which emphasizes the ever-shifting definition of "individual" based on beliefs rather than biological identification. The theory also discourages attribution by providing a detailed model of how new ideas can arise only from the seeds provided by others, in a dimensionality of time that is neither linear nor fully ordered. However I can state that the core of the thesis is entirely my own: namely, that the addition of the process component to software manifestations of Conversation Theory provides a confirmation of important, predicted and otherwise unconfirmed cognitive features. The software written to prove this by demonstration is solely mine.

    Point 2, concerning a personal history of my relationship to the Theory and the context in which I adopted it, is fulfilled from a personal perspective in the next chapter, and from a software development perspective in Appendix C. It has been rewarding to reconstruct the personal side and to express, albeit post hoc, how my career has proceeded from the ideas rather than vice versa.

    Point 1, concerning background, is less direct because the story to be told cannot emerge as a simple narrative. The subtlety and scope of the meanings require a hermeneutic circle. This cycle of interpretation is expressed in the body of the dissertation: it starts with my personal interest in the Theory and related techniques (Chapter II), moves to the foundations of the thesis in Conversation Theory (Chapter IV), and uses the software of the Theory to explain its elements and procedures in detail (Chapter V). Then (Chapter VI) the limitations of past software are revealed and the true operations emerge by the addition of the process component of the Theory. The summary (Chapter VII) is a recapitulation of the main points and possible extensions. This is followed by the Appendices, with the technical details of the implementations, the Bibliography, Glossary and Figures. As an Annex to this dissertation the THOUGHTSTICKER User Manual (Pangaro et al 1985 in the Bibliography) is attached for completeness.

    The explication of Conversation Theory within the text is thus achieved only upon completion of the cycle whereby the symmetries and aesthetics noted in the beginning are achieved by an innovative approach to implementation which fully explores the central tenets of the Theory.

    I believe that all of the requirements are therefore fulfilled and I hope the result is an effective examination of both the original Theory, and its confirmation and extension represented by this dissertation.

    I.2 A Context for AI and Cybernetics Terminologies

    With the surge of interest in the field of Artificial Intelligence (AI) primarily due to technological advances since 1980, certain concepts have gained acceptance and comprehension within a wide audience of researchers and software development projects in academia, industry and government. Because of this, terms such as "knowledge elicitation", "knowledge representation" and "machine reasoning" now have common meanings and provide a background in which discussions in those communities may take place (Barr & Feigenbaum 1981).

    Each of these ideas had been given full treatment with detailed meaning and context for interpretation within Conversation Theory (CT) as developed by Pask and others (Pask 1976a, Pask 1980a), well in advance of their recent uses within AI. Unfortunately, up to the 1980s, CT received wide exposure only within the fields of cybernetics and computer-aided instruction. Within those spheres and as illuminated by CT, many core concepts of epistemology and human discourse were given tangible meanings that reflect both a common-sense usage and a precise and (within a cybernetic interpretation of the term) scientific meaning. The terms "individual", "conversation", "agreement" and "understanding" are prime examples of this (Pask 1975a).

    AI, engaging many more researchers and hence research publications, is perforce divided in opinion and much more fragmented in technique than CT. (It is an editorial comment to note that the fragmentation is evidence of the lack of coherence and direction in the field.) Thus a perverse situation has arisen in which consistent and agreed meanings within Conversation Theory cannot be explained using terms from AI without both distortion and ambiguity. And common every-day terms cannot be used unqualified to describe Conversation Theory without losing a fresh and yet scientifically strict meaning. Therefore, although I will often draw on the metaphors of Artificial Intelligence, I will endeavor never to do so without immediately providing the significant differences from the realm of Conversation Theory.

    In general and to avoid constant qualification, references to AI do not indicate the entire field of Artificial Intelligence, but rather those areas within AI that relate to the subject of this thesis, namely, knowledge representation and machine intelligence. None but the most obstreperous proponent of AI will object to this usage.

    II. Background

    II.1 A Context for Adopting Conversation Theory

    In 1976 I was engaged in software research projects centered on the use of highly interactive computer graphics systems, which in the present day are taken for granted in any home video game; in those days such equipment was extremely rare, as it was only just being developed. The Architecture Machine Group, a research facility at the Massachusetts Institute of Technology, was producing innovative hardware systems for the creation of new types of media environments: large screen displays, many simultaneous auxiliary displays, touch panels and tablets for input of commands, graphics and even gestures. The work of this laboratory has influenced a generation of workers in the field of interactive computing. Its name should not be taken to imply so narrow a field as mechanical architecture; rather, it was concerned with the influence of mechanical, electronic and digital artifacts on all aspects of the "built environment."

    II.1.1 The Needs of Man-Machine Interaction

    My background and interests at that time were centered on the issues of man-machine interface (MMI) specifically for the creation of computer graphics. These visual results might be static or dynamic, but always for the purpose of expressing ideas, whether to oneself (as an aid to the process of design) or to others for the purpose of communication. At MIT I had already had the privilege of access to the newest and most powerful computer graphics systems anywhere; what I felt was lacking was a powerful framework in which to express the problems of MMI.

    It was my conviction that to make a machine produce images representative of abstract ideas:
    • There should be a close connection between the formulation that the user conceives on the one hand, whether in diagrams, pictures, movement, etc., and the gestures made to the machine, whether in typing text, programming, drawing, whatever, on the other.
    • All of the power of "programming" should be available to the designer/user, in the sense that procedures and conditional branching could be used to great advantage, for general modelling as well as the conveniences of repetition and variation.
    The combination of these two ideas, and exposure to Pask's protologic, led to my design for a visual programming language for simulation-based graphics of great expressive power (implemented by a research team and described in Pangaro, McCann, Davis & Steinberg 1977, and Pangaro 1980).

    One common paradigm of the era was that the human's task was to tell the machine what was required. I however felt that this was not a complete image; that it was also the requirement of the machine to tell the human what could be done. These requirements were not fixed or ordered in time. They would vary depending on the background and needs of the human as well as the machine. The system's capabilities evolved (although not in the same time frame) in parallel to the human's, as new versions of system code or new capabilities were made available. Hence requirements would emerge over time, rather than be done "all at once" at the start of the interaction. It seemed essential to me that insofar as needs and knowledge evolve so must the interaction.

    Therefore it was clear to me that a kind of teaching/learning communication was necessary, and one which was symmetric: both the human user and the mechanical machine had to both teach and learn.

    II.2 Available Models before Conversation Theory

    Obviously the interaction between human and machine was much more limited than that between human and human; but I reasoned that since one limiting element (in some important aspects, perhaps the main one) was the human, the human-to-human model might be a useful place to start. Surely there was an enormous history, cultural and scientific, technical and artistic, on that subject.

    II.2.1 Shannon's "Information Theory"

    From the scientific community, Shannon's communication theory (Shannon & Weaver 1964) seemed to be the only direct foray into this area, especially in that it was named for this very problem. The conception here is that communication involves a channel between entities playing roles (perhaps alternately) as "sender" and "receiver." The concern of the approach is to control the uncertainty with which a "message" is transmitted across the channel. Transmission is defined as the correct receipt of a sequence of encoded data which makes up the message. Variation in the noise of the channel determines a statistical measure of "goodness" of the channel. Much can be said by communication theory about the redundancy required to insure a given and desired level of certainty about the datums [sic] getting through unaltered.

    Given the robustness of the formulation and the major concern for insuring the accurate (indeed "perfect") data required for computers to operate (especially in the era of the 1950s when the limits of performance of vacuum tubes were being reached) this approach was a landmark for many of the problems in communications and computers.
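    The statistical character of this formulation can be made concrete in a few lines of code. The sketch below (illustrative only, and no part of Shannon's or Pask's own apparatus) computes the entropy of a symbol distribution and the redundancy of its alphabet, the quantities on which the channel measures rest:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2 p) of a symbol distribution, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs):
    """Redundancy of the source: 1 - H/H_max, where H_max = log2(alphabet size)."""
    h_max = math.log2(len(probs))
    return 1 - entropy(probs) / h_max

uniform = [0.25] * 4           # maximally uncertain 4-symbol alphabet
skewed = [0.7, 0.1, 0.1, 0.1]  # a biased source: less information per symbol

print(entropy(uniform))    # 2.0 bits/symbol, the maximum for 4 symbols
print(redundancy(uniform)) # 0.0: no redundancy in a uniform source
```

A biased source such as `skewed` carries less than the maximum information per symbol, and the difference is exactly the redundancy available for protecting the message against channel noise.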

    Application of this model to human conversation however is fraught with compromise and difficulty:
    • The data are exactly that, data: objective encodings or symbols that stand alone, require no context, and are either one symbol or another, unambiguously.
    • The class or alphabet of symbols is a fixed set and cannot be expanded; the ability to recognize one from another is predicated on the need for both the sender and receiver to have agreed on the fixed set beforehand.
    • The redundancy described exists within the encoding scheme as applied to symbols; it has no bearing on the redundancy realized in the interpretation of the message as a whole.
    These objections, individually and together, remove the utility of the approach for application to human discourse. This work remains a foundation in branches of "communication technology", true, but at the practical level it serves little more than to express some technical issues associated with bit transfer in hardware channels. Weaver (in Shannon & Weaver 1964) admits multiple layers of interpretation of the problem of "communication theory":
    • Level A. How accurately can the symbols of communication be transmitted? (The technical problem.)
    • Level B. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem.)
    • Level C. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem.)
    Weaver then says, "...[communication theory] admittedly applies ... to problem A, namely, the technical problem of accuracy of transference of various types of signals from sender to receiver."

    CT was conceived specifically to handle Level B. Unfortunately, Weaver's characterization of Level C does not account for "second-order cybernetics" where the recursion over interaction produces coherent systems of belief/behavior/language (Maturana 1978) and hence it is not comparable to the goals of CT. At the end of the day, CT would encompass the issues referred to by Weaver in Level C, but only in a larger context of society and culture.

    It is these further interpretations of communication beyond Level A, concerned with the "semantics" (their term) of communication, that for me were the issue. Linguistics of the period was centered on Chomsky, who held the language capability to be absolute and pre-existing (Chomsky 1968); this approach did not seem to hold hope for aiding an interaction that I saw as incremental, evolving and flexible beyond what might be programmable in genetics. Semiotics and related work was not specific enough to provide any hints about how to write code. Neither was psychology, so concerned with the "objective" and "scientific" as to avoid admitting into the study of cognition the very qualities that we call human.

    II.2.2 Artificial Intelligence

    Much had been said by this time about "intelligent machines"; the field of AI had already been through a number of cycles from promise to difficulty to redefinition of promise (Feigenbaum and Feldman 1963; Minsky 1968; Nilsson 1980). Despite the difficulties the focus of the field remained (and does to this day in the latter 1980s) on the "intelligence in the machine" [sic]; little or nothing is said about communication with such an intelligent machine, or between man and machine. And this was for me the precise focus of need as I formulated it then: solutions to the problems of communication must be part of any solution for machines of intelligence.

    AI has always seemed to me based on an over-confidence in Turing computability (Minsky 1967). This has been supported in arguments by Jerry Lettvin in which he specifically ties the AI community to work by McCulloch and Pitts on the equivalence of simplified threshold networks to Turing computability (Lettvin 1985).

    The coupling of these two mathematical results unfortunately allowed the AI community to avoid questioning its foundations, based on the presumption that the power of Turing mathematics is supreme (a mistake Turing himself did not make, as I learned by examining his unpublished works at King's College, Cambridge). This over-confidence prevailed until recently, when AI, physics and cybernetics were united in new work to extend the definition of computability (Deutsch 1985). These extensions were presaged by Pask (e.g. Pask 1958).
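    The McCulloch and Pitts result concerns networks of simple threshold units. A minimal sketch (the names here are illustrative, not taken from their paper) shows the unit itself: a device that fires when the weighted sum of its binary inputs reaches a threshold, from which elementary logic, and hence the equivalence argument, is built:

```python
def mp_neuron(weights, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) iff the weighted sum
    of its binary inputs meets or exceeds the threshold."""
    def fire(*inputs):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)
    return fire

# Single units suffice for the elementary logical functions:
AND = mp_neuron([1, 1], 2)   # fires only when both inputs fire
OR  = mp_neuron([1, 1], 1)   # fires when either input fires
NOT = mp_neuron([-1], 0)     # inhibitory weight inverts its input

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

Since networks of such units can realize any finite logical function, their composition yields the computational power that, with unbounded storage, matches Turing's machines; it is this equivalence that the AI community took as licence for its confidence.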

    Born and raised on a mathematical formalism and whatever technological capabilities followed, AI could not see that it could not see its limitations (to paraphrase von Foerster). Cybernetics was simultaneously proceeding from an epistemological basis of what can be known and, especially in CT, moving to theoretically sound and practical formalisms on the nature of knowing. More detail on how CT accomplishes this is given later in this chapter.

    II.3 Reasons for Adopting Conversation Theory

    I was introduced to Conversation Theory first in the form of Pask himself, who was consulting for the Architecture Machine Group. Pask had influence there by critiquing research programmes, inventing metaphors and providing a rich interconnection with other workers in many fields, which he brought to a Group concerned with increasing the bandwidth (my term) of interaction between human and machine-based systems. One tangible result was collaboration of the entire group (and fortunately myself included) in the production of a major work called Graphical Conversation Theory (Negroponte 1977). This was a research proposal submitted to the US National Science Foundation, which would interpret CT in light of the newest and most powerful computer technology. (Alas it was not funded.)

    These interactions led me to the study of Pask's papers, frequent visits to his research laboratory, and eventually to collaboration on research projects. It was this collaboration under contract to US and UK establishments that funded the implementation of THOUGHTSTICKER described in this thesis.

    It is a subtle task to separate out a set of personal, individual reasons for my becoming interested in CT, or for using it as the basis for endeavors in computing, or for using it as the foundation of a research dissertation in cybernetics. With the understanding that any such delineation is for descriptive purposes only, here follows an attempt to linearize what must be, as its origins in CT would tell, a set of reasons that are ultimately holistic and hermeneutic.

    II.3.1 Symmetry across Individuals

    CT restores a symmetry to the modelling of all interaction. No hierarchy exists between, for example, teacher and learner; both must "teach" and "learn" from the other in order to achieve communication or, as is preferred within CT, conversation (see the Glossary for a definition). These interactions are considered to be "I/you" referenced, because one individual treats the other as of equal rank, in that the language is one of command and question; the other individual has options and may or may not respond, cooperate, etc. This provides an aesthetic as well as an ethical formalism (Pask 1980b).

    II.3.2 Symmetry within Individuals

    CT models discourse within individuals by levels which are symmetric to each other and to those in other individuals. The model stratifies any language of discourse into distinct levels (at least from the perspective of the observer) and creates dependencies between these levels. Thus, a "higher" level determines the actions at the level "lower" to it. These interactions are considered to be "it" referenced because no choice or response is allowed by the "lower" level (Pask 1975b).

    One consequence is that given a level considered to be one of "method", the lower level is where that method is carried out. Thus, the lower level is an "environment" for the "higher" level. Consider that the environment may be an external world of actions, or merely further levels of cognitive activity. This symmetry provides aesthetic satisfaction as well as an implication that computation can encompass actions in a world of physical objects as well as mental constructs. This interpretation served as the basis for my design of the Expertise Tutor, a software prototype developed under contract to the UK Admiralty which contains precisely this multiple level of discourse and access for the user. It is the first system of which I am aware which makes this distinction of levels both explicit and accessible to the expert and user alike (see further description in Appendix Section C.7).

    II.3.3 Subjectivity and Objectivity in the Same Framework

    The above two points provide a brief description of the "conversational framework", a structure in which scientific observation can be made and descriptions of interaction may be given. The framework provides for both "objective" interaction, where no interpretation is made (relative, as always, to an observer) and "subjective" interaction, where any result can be seen only from a context within the interaction of the two (or more) individuals.

    Scientific discourse has always insisted on "objective" enquiry. It is this very insistence which has kept psychology out of the realm of mental activity (by its own admission). CT provides, I think uniquely, a framework in which objective, "hard-valued" measurement can be performed in the domain of mental activity. For example, a hard-valued, objective and scientific meaning for "agreement over an understanding" is obtainable within CT (Pask 1975c). Because of this, the requirements of MMI for the transfer of information about a system's capabilities and a user's desires can be specified.

    II.3.4 The Language of Conversation

    The interactions described must occur in a language, and here is the crux of any framework. If the language is a "natural" one, such as English, immediately any machine interaction is disqualified. Although it may appear that our present-day interaction with computers is "in English", in fact English words and phrases are used merely as tokens to indicate constant and pre-determined meanings. No interpretation is involved and hence the use is not of "natural" language.

    It may therefore seem that, similarly, CT is inadequate for any advances in MMI. This is not the case, for three significant reasons:
    1. CT as a framework is adequate for any language so long as it is one of question and command (von Wright 1963). It may be gesture or dance, visual or aural, images or imagery.
    2. The use of language tokens can be kept at a mechanical level within the software, with the user providing a "semantic" value to it.
    3. Interpretation is brought in when a user relates topics together to formulate the "meaning" of the relationship (as in the CT construct of a "coherence", detailed in Chapter V). The activity is basically hermeneutic and the meaning arises in the circular interpretation and use of tokens by the user.

    II.3.5 Generality of "Individuals"

    Interactions occur across an interface, among individuals. The distinction among individuals is made by the observer, who asserts the existence of the interface. The individuals so distinguished and the observer can be considered as duals of each [sic] other. The emergence of a distinction among individuals comes at the moment of distinction between self and other, who are the same type of individual.

    In CT, individuals are "P-Individuals" or psychological individuals, rather than a simple reduction of physical individuals. Thus a single human may be modeled as consisting of many P-Individuals, different at different times, for varying purposes, evolving in the course of experience. This would encompass the requirement for a design of user interface software which is adaptive to the changing needs of the user, in a variety of guises and situations.

    Similarly, the same framework can be used to model an interaction between a human and machine. The specifics of processes that are available within each are clearly different; however they can be specified and the resulting needs for mutually-understood interaction are achieved.

    II.3.6 Cognitive Bases of CT

    CT was developed not out of whole cloth but from a history of empirical research on the nature of interaction, conversation and understanding (Pask 1975a, Pask 1975c). The theory which resulted therefore incorporates its origins into its terminology and structures. The terminology has a great deal of "common sense" appeal and the theory provides explanatory support for many everyday events (including forgetting, remembering, mnemonics, confusion, ambiguity, uncertainty, and conflicting desires). This is particularly evident when contrasted with competing theories (cf. Minsky 1986).

    II.3.7 Mediation of Language by a "Knowledgebase"

    Knowledgebase is a term which is used with abandon in the field of AI to mean a structure internal to a computer which "contains knowledge" and which can be manipulated, perhaps to make inferences or deductions, in software. CT maintains primary interest in a "knower", while the "knowledge" cannot be held as independent from such an individual.

    CT defines related structures in its dual called Lp (pronounced "L-sub-P" and explained in detail in the course of the text). Completely consistent with CT and all of the points described above and below in this text, Lp is a class of well-specified processes that operate on a class of well-specified structures that can be adequately computed in present-day, serial digital computers. ("Adequately" is a point taken up later and the distinction between various levels of simulation of Lp is a central point of the entire thesis.)

    II.4 The Emergence of this Thesis

    Given all of the above, it seemed extraordinary that there should exist a theory of aesthetic elegance, simple formal symmetries, based in cognitive behavior, and with a detailed calculus of knowing that could be programmed.

    Preceding sections describe the conditions under which I was introduced to CT, especially in the context of MMI. My interests, however, had always extended to the nature of cognition and communication, and to how computers may enhance or otherwise influence these daily human activities. I have often heard others working within CT and cybernetics say that they had some affinity or intuition for the ethos beforehand; upon introduction, the expressiveness of the framework was immediately apparent. The simple elegance of CT in describing mental events, and its coherence with the arts and humanities (Pask 1968, Pask 1976b) as well as the sciences (Pask 1979), were a constant source of interest for me. My initial entry from the perspective of MMI grew into a general interest in its tenets. The desire to influence an entire field with the sweeping power of CT by producing computer-based implementations of its ideas became (and remains) very strong.

    All of the above reasons drew me into the world of Conversation Theory and each successive revelation within it confirmed to me its power and utility for my interests, both theoretical and practical.

    Any such software based on CT, to be useful in commercial applications (namely, every-day use) and to be of sufficient power to influence the world's view of MMI according to the ethics of cybernetics and CT, would require considerable investment and a relatively single-minded course of activity.

    Since my first exposure to CT, I have devoted considerable time to designing and coding software systems based on its tenets; details of my own and others' contributions can be found in Appendix C on the history of THOUGHTSTICKER. (That it also required the creation of a company framework is a detail of management and of politics.) It was in the course of development of this software that two issues converged: the need for maintaining the process component in the simulation of the calculus of Lp; and the evolving display of the Lp structures for the sake of the user. This discussion is taken up in detail in Chapter VI.

    This thesis returns emphasis to the importance of the process component of Lp in any research and development centered on CT. In a very real sense, without process the theory is lost, as one of its trilogy of features is missing (the others being distinction and coherence). This is due to the central role that process plays: for example, CT states that memory is not the recall of a static configuration but a dynamic recalculation or reproduction. All of the formal expressions of relationships within CT contain production arrows which are not merely transformations from state to state, but continuous processes whose continued execution and persistence is the given cognitive element (topic, memory, concept). Hence the process component is key, and one contribution of this thesis is the reinstatement of that component to software manifestations of CT.

    III. Review

    III.1 Knowledge-Based AI Systems

    Once I made the shift from principles of MMI to general theories of cognition, it was necessary to ask whether, among all the techniques developed in the field called AI, any of its ideas or software results might be appropriate, useful, or better than those of CT.

    III.1.1 Semantic Nets

    Semantic nets (Quillian 1968) appear on first review to be closely related to Lp structures; an early question often posed in discussions about CT with AI researchers is, How are Lp structures different? Because of this apparent (but not actual) close association, a brief description follows. (Details of Lp structures can be found in Chapters V and VI.)

    Semantic nets consist of nodes and links. Nodes refer to objects or attributes, linked by arrows (or, in programming terms, pointers) which themselves carry values. For example, [FRED IS-A BIRD] relates the nodes FRED and BIRD by the link IS-A. There must therefore be many types of links, covering ideas such as ELEMENT-OF, HAS-PART, GENERALIZATION-OF, EXAMPLE-OF, and so forth. If in doubt, one simply creates a new link willy-nilly.
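    The flavour of such a scheme can be sketched, in modern terms and purely as an illustration (the class and method names here are my own invention), as a store of typed triples:

```python
# A minimal sketch of a semantic net as a store of (node, link, node) triples.
# Link types such as IS-A or HAS-PART are just labels; nothing in the scheme
# constrains what computation they are supposed to stand for.

class SemanticNet:
    def __init__(self):
        self.triples = []  # list of (node, link, node)

    def add(self, node, link, other):
        self.triples.append((node, link, other))

    def query(self, node=None, link=None, other=None):
        # Return all triples matching the given (possibly partial) pattern.
        return [t for t in self.triples
                if (node is None or t[0] == node)
                and (link is None or t[1] == link)
                and (other is None or t[2] == other)]

net = SemanticNet()
net.add("FRED", "IS-A", "BIRD")
net.add("BIRD", "HAS-PART", "WINGS")
# "If in doubt, create a new link": nothing stops an arbitrary new type.
net.add("FRED", "EXAMPLE-OF", "PET")

print(net.query(node="FRED"))
```

    Note that the query machinery treats IS-A and EXAMPLE-OF identically; whatever inference the links are meant to carry lives entirely outside the net, which is precisely the objection raised below.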

    The ability to create new links seems to provide for a general scheme without boundary. However this very generality is its downfall. The class of link types becomes very large and it rapidly becomes apparent that any subtlety or power of the scheme is simply shifted one level into the operators that the links represent. The nature of the computation contained in the links (such as generalization, inference, and so forth) is not well-specified.

    It is beyond the scope of this text to explore this question in great detail; however, there are clear differences which can be briefly listed and which provide justification for choosing Lp above semantic nets in my research efforts that followed the initial enquiries:
    1. Semantic Nets emerge primarily from programming constructs; they have some common sense appeal but no basis in cognition or empirical research.
    2. There is no theory of knowing which shows that semantic nets are minimal, necessary, sufficient, or even useful representations of human knowing.
    3. Further refinements (Brachman 1979) to the approach have added considerable complexity but without achieving major advances or overcoming the objections put forth even from its proponents (Brachman 1985).
    Lp, as shown in this thesis, has none of these disadvantages, and considerable advantages.

    III.1.2 Frames

    One major development (in a sense "on top of" semantic nets) is Minsky's Frames (Minsky 1975). This approach accumulates semantic relations (in the sense shown just above) into frames of knowledge that are related to contexts of interpretation; for example, while in a restaurant, while going to a play, etc. This refinement handles cases of "default" knowledge, where the scheme can attempt to fill in details about items not explicitly specified (a simple form of generalization). Unfortunately the old problems remain; each of the above objections to semantic nets could be paraphrased to apply to frames. Even more, one has the feeling that these are structures that are conveniently computed by programming languages such as LISP, and hence their popularity within AI.
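    The default-filling idea can be sketched as follows (an illustrative reconstruction, not Minsky's formulation; the frame and slot names are invented):

```python
# Sketch of a frame with default slots: a RESTAURANT frame supplies
# "default" knowledge that a specific instance inherits unless overridden.

RESTAURANT = {"serves-food": True, "has-menu": True, "payment": "bill at end"}

def make_instance(frame, **overrides):
    # An instance starts from the frame's defaults and fills in specifics.
    instance = dict(frame)
    instance.update(overrides)
    return instance

cafe = make_instance(RESTAURANT, payment="pay at counter")
print(cafe["has-menu"])   # inherited default
print(cafe["payment"])    # overridden slot
```

    The mechanism is no more than dictionary inheritance, which illustrates the point made above: such structures are conveniently computed, but the convenience says nothing about their adequacy as a model of cognition.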

    Minsky has most recently revived his "Society of Mind" theory of cognition (Minsky 1986). (This idea and others contained in a paper called "Consciousness" were circulated privately in the MIT AI community in the 1970s.) Basically, Society of Mind puts forth the idea that a mental organization consists of many, possibly conflicting sub-units. These smaller units each require resources to be computed and compete for the limited resources available. The approach is intended to address (what I will call) "post-Freudian" problems. These are Freudian because they deal at the psychological level identified by his followers as the concern of Freud. They are also "post-" because they are the interpretation of Freud rather than Freud himself.

    Minsky offers an engaging argument but neither theory nor confirmation of his ideas. In fact, considered as metaphor and ignoring the Freudian overtones, Society of Mind is quite consistent with CT's modelling of the "P-Individual" as the unit of perspective within a mental organization, conversing (and competing and conflicting) with other P-Individuals in the same organization. Pask however provides details of:
    • How the P-Individual is composed, namely, processes that can be modeled by Eigen functions.
    • The means by which they converse, namely a language capable of question and command.
    • How the interaction can be modeled, namely the "conversational paradigm" (Pask 1975b).
    • A detailed model of the structures that make up the transactions, i.e. the Lp calculus.
    • How conflict and its resolution can be modeled, via Lp and the operations described in the main body of this thesis.
    • How a continuing process of "saturation" occurs, forcing the interaction of otherwise independent cognitive structures which in turn creates new structures or reveals conflict, ambiguity, confusion.
    Given the specificity of CT and the delightful but vague and unfulfilled images of Society of Mind, the decision to use CT to attack problems of my interest was a simple one. In fact, it was just such a formulation that led me to decline a place in a doctoral programme at MIT to which I had been accepted, in favour of the line of research described in this dissertation.

    III.1.3 Expert Systems and Rules

    Expert systems have received major attention most recently. These utilize "production rules" in the form of "If...Then..." statements. For example, "If the temperature is above 50 Celsius and the smoke detector has been set off, conclude there is a fire in the room." Such statements are said to represent the knowledge of experts, and to provide the means to model how experts actually make decisions. Statements are processed together to create new conditions that "fire" other rules, which fire yet further rules, etc:

    "If there is a fire in the room set off the fire alarm and the sprinklers." Given that some rules represent desired conclusions that are distinguished from others, the system is said to "decide." Alternatively, the expert system can work backward from conclusions to necessary pre-conditions and thereby diagnose initial causes.
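    The firing of rules upon rules can be sketched as a simple forward-chaining loop (an illustration only, using the fire-alarm rules above):

```python
# Minimal forward chaining: rules are (conditions, conclusion) pairs;
# the loop fires any rule whose conditions are all established facts,
# adding its conclusion, until nothing new can be concluded.

rules = [
    ({"temperature above 50C", "smoke detector set off"}, "fire in the room"),
    ({"fire in the room"}, "fire alarm on"),
    ({"fire in the room"}, "sprinklers on"),
]

facts = {"temperature above 50C", "smoke detector set off"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # the rule "fires"
            changed = True

print(sorted(facts))
```

    Working backward from a desired conclusion to its pre-conditions (diagnosis) is the same rule set traversed in the opposite direction.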

    PROLOG is a programming language designed to process these descriptions of "knowledge", and the general approach has its origins in first-order predicate logic. The comments about semantic nets still apply: the approach is not based on cognitive theory or empirical studies of human knowledge; the scheme is not known to be either sufficient or necessary to explain human cognition; and extensions do not solve the fundamental problems with the approach. Expert systems have recently become a popular means of approaching the problems of training, wherein tutorial strategies are encoded as "If...Then..." rules: "If the student has failed test A and test B then conclude topic X not understood." At some point there should be general recognition that this fashion is no more than an intricate but equally ineffective form of programmed instruction (in the same way that programmed instruction is now widely recognized to be an impoverished technique of computer-aided instruction; see Section III.3).

    Other AI approaches are more tangential to the requirements of a cognitive approach to software and MMI design. Work in natural language parsing is still focused largely on translation and on getting knowledge "into" a knowledgebase (Barr & Feigenbaum 1981). Schank's work on Scripts (Schank & Abelson 1975) is of some interest for communicating with users, but although he takes an increasingly iconoclastic view of other approaches within AI (Schank 1980) his alternatives remain within AI's limitations. Winograd is the closest to a cybernetic view but his publications do not provide a sufficiently tangible alternative to begin coding (Winograd & Flores 1986).

    III.2 Related Work in Cybernetics

    Despite its popular associations with robots, cybernetics does not of itself refer to computers. A surprisingly small amount of work in cybernetics overlaps with, or has produced approaches to, MMI or conversational software.

    III.2.1 Foundations

    At the theoretical level, related work in cybernetics has generally been a precursor to CT and/or provided a foundation upon which CT could provide the specific results that it does (for example, von Foerster 1960). The emergence of second-order (retold in von Foerster 1985) and reflexive interpretations of science (Bateson 1960) provides the beginnings within cybernetics of an approach to systems that is both scientific and subjective. However, these foundations require interpretation, in both empirical studies and detailed formulations, before they can be translated into tangible prescriptions for action; this came only with Pask.

    III.2.2 Laing

    The utility of a reflexive view of interaction (again on the theme of the problems of MMI as discussed above) is most effectively presented in Laing 1966. The interpretation in the context of conventional MMI would be something like "I [the user] know what functions the system knows. The system knows nothing about me." A more advantageous approach which I desired would be something like "I [the user] know that the system knows what I know about the system." This could perhaps be extended to incorporate goals, as in "I [the user] know that the system knows my goal is to ..." Thus the user could proceed with greater confidence and efficiency. Laing thus provides a metaphor of desire, but nothing detailed on which to base a software approach.

    III.2.3 Personal Construct Theory

    In terms of software, the work of Kelly in the extraction of grids of constructs for purposes of explicating knowledge otherwise internal to a knower (Kelly 1966, Bannister & Mair 1968) is closely related to the interests of CT. This has powerful implications as seen in practical and modern software implementations (Personal Construct Theory and the software Pegasus, in Shaw 1980; and MAUD software, Humphreys 1975).

    In these latter two cases, the software is used as a means of extracting constructs internal to the knower, and in a form which is self-consistent. This exactly parallels one of the intentions behind Lp, where the names of topics and the relations that they are contained in are delineated and named by the user. The software, again as in Lp, is used to reflect back to the user on the implications of the constructs and their structures, as for example in cases of ambiguity and contradiction (Humphreys 1980).

    These other approaches both preserve the subjective quality of the "extracted knowledge" and emphasize the self-consistency of the result. However, Lp additionally provides a framework that is based on the epistemology of observation, empirical confirmation of the utility of its references to individual learning style (independently confirmed by Marante & Laurillard 1981, and Bogner 1986), and an extended set of operations which encompass many more events that are recognized as cognitive (Pask 1983).

    III.3 THOUGHTSTICKER and Computer-aided Instruction

    Computer-aided instruction (CAI) has been widely available on computers since the advent of minis and micros in the 1970s. Though largely accepted as a useful tool for training by computer, it has attracted criticisms over the years (see Kearsley 1977 for a view from inside the field, and also Pask 1972). The following section presents a self-contained explication of how THOUGHTSTICKER can be applied to the problems of a user learning from computers, in direct comparison to existing software training approaches. THOUGHTSTICKER represents a complete revision of all existing techniques.

    III.3.1 "Intelligent" Training

    THOUGHTSTICKER is an intelligent software system for training and information management. The system is "intelligent" in the sense that it mediates between an expert knowledgebase and a user to provide some of the features of human conversation: a shared vocabulary, history and context of the dialogue. It is the most effective system of its kind available on any hardware.

    THOUGHTSTICKER was developed as an enhancement to conventional computer-based training (CBT) and provides substantial improvements to CBT in:
    • Ease of use, for both courseware creation and delivery of training
    • Management of the courseware creation process
    • Sensitivity to individual learning style
    • Training efficiency and effectiveness, especially in complex tasks
    • Flexibility to encompass job-aiding and advising, as well as training.
    The software consists of two independent parts: the means for creating the knowledgebase (the Authoring Module); and for giving access to the knowledgebase (the Tutoring Module or Browser). Both are conceived and implemented as generic solutions that can be tailored to the specific requirements of the application, its users, the target hardware and interactive media (including videodisc, CD-ROM, graphics and sound). THOUGHTSTICKER is attached easily to existing application software and simulations for a complete training solution.

    III.3.2 Background of the Term "THOUGHTSTICKER"

    The term THOUGHTSTICKER refers to software based on a cybernetic approach to the problem of measuring understanding in human conversations. In the 1970s, THOUGHTSTICKER was developed at Pask's laboratory as an extension of Pask's studies of the 1950s and 1960s in human learning and individual conceptual style. These studies culminated in a comprehensive approach to educational technology (the CASTE system) that has been widely influential in educational theory and computer-aided instruction.

    The term THOUGHTSTICKER was coined by Pask to mark the maturation of a general approach to knowledge representation whose elements reflected cognitive structures. The name itself emphasizes that in order to converse we must externalize our thoughts into a tangible form for ourselves and for others. Using a computer as the medium for this conversation means that thoughts must temporarily take a static form in the computer, before becoming dynamic again as they are interpreted by a user. THOUGHTSTICKER models mental structures with a few simple but powerful constructs that:
    • Capture the author's or expert's precise approach to the subject matter, but still
    • Allow the user to learn the subject matter according to his or her conceptual style.
    THOUGHTSTICKER software is the medium for the conversation, not a participant.

    The power of THOUGHTSTICKER derives from:
    • A theoretical basis in cybernetics and learning theory. The advantages of Conversation Theory as a model for learning have been supported by experiments in cognitive style. THOUGHTSTICKER is derived directly from these ideas.
    • Evolutionary development in application to complex training problems, including those with training in the performance of a task. Extensions for job aiding and expert advising have also been demonstrated.

    III.3.3 Existing Applications

    Prototype knowledgebases have been constructed by me and my colleagues at PANGARO Incorporated in the subjects of AI and cybernetics, naval strategy, introduction to computer usage, and word processing.

    For the Behavioural Science Division of the UK Admiralty, THOUGHTSTICKER has been integrated into an Expertise Tutor, consisting of a naval simulation and expert knowledgebase (described in Appendix C). The Tutor provides tactical training as well as basic rules and operations of the game. This system is effective because it provides the user with equal access to descriptive knowledge (elements, relations, goals), prescriptive knowledge (methods, tactics), and the environment (the simulation itself).

    For the US Army Research Institute, a videodisc interface controlled by THOUGHTSTICKER has been developed to demonstrate training of a vehicle identification task.

    Most recently a prototype training course has been developed for Symbolics Education Services. (Symbolics, Inc. is the manufacturer of advanced software engineering and Artificial Intelligence workstations; the most advanced implementation of THOUGHTSTICKER runs on this hardware.) Derived from an introductory, paper-based workbook written by Education Services, this course presents the basic components of the Symbolics computer, concepts of symbolic processing, and how to use certain features of the machine such as the editor and command processor. The learner can immediately practice what is to be learned via the "hands-on" capability: in the course of learning about the editor (for example) the editor window is automatically displayed and commands may be tried step-by-step by the learner concurrently with their presentation in the training material.

    III.3.4 The User Experience

    THOUGHTSTICKER facilitates the user in any training and information management activities by:
    • Allowing a mixed-initiative dialogue so that the user may either give the system control, or direct the conversation based on immediate needs (e.g. uncertainty or current goal).
    • Producing distinctly different actions and responses for different individuals, based on the background, purposes, context and cognitive style of the user.
    Thus the user is provided with more focused and efficient interaction than conventional computer-aided instruction and information management systems.

    These results can be achieved because THOUGHTSTICKER "models the user" throughout the interaction, creating a history with each individual that is maintained even across sessions. Because this user model is the basis of all actions by THOUGHTSTICKER, the interaction has more of the qualities of human conversation: context, focus, and shared vocabulary.

    III.3.5 Comparison to Computer-Aided Instruction

    The following is a brief, "side-by-side" comparison of conventional computer-aided instruction techniques and THOUGHTSTICKER.
    Conventional Computer-Aided Instruction: Based on concepts of "programmed instruction" developed in the 1950s and substantially unchanged since then.
    THOUGHTSTICKER: Based on a cognitive theory of human conversation developed over the period of 1955 to the present, and affirmed in empirical studies.

    Conventional Computer-Aided Instruction: The subject matter is given a pre-ordained sequence in which it is to be learned; there is no other structure to the material.
    THOUGHTSTICKER: Uses a robust knowledge representation scheme to provide a true knowledgebase; all conceptual dependencies are represented in a network structure with no fixed paths.

    Conventional Computer-Aided Instruction: All users are treated identically, and thereby are presumed to have the same cognitive learning style.
    THOUGHTSTICKER: Sensitive to an individual's cognitive style, modifying responses accordingly.

    Conventional Computer-Aided Instruction: The author of the subject matter makes assumptions of prior knowledge of the user; very little variation of material is possible despite differing backgrounds in the user population.
    THOUGHTSTICKER: Sensitive to individual variation in the user's prior knowledge and can be tuned by a variety of user profiles (for example, naive computer users; experienced computer users but not of this particular type; users of another vendor's hardware).

    Conventional Computer-Aided Instruction: Additional questioning by the user is limited or not allowed. Remedial material is offered to the user upon supposition of reasons for the user's failure, usually from a static model based on averages or likelihood.
    THOUGHTSTICKER: The user is free to ask questions and explore throughout the knowledgebase at any time. The user helps direct the remedial dialogue, which is derived from a combination of the user's focus, the structure of the knowledgebase, and the history of the interaction.

    The comments about computer-aided instruction can stand as generalizations across a number of commercial products because they characterize a field which is substantially homogeneous. Although specific features of training packages vary, the instructional model and the organization of the subject matter do not.

    The driving force of the interaction is the user's interests and uncertainties. THOUGHTSTICKER has specific features that help the user discover these interests and uncertainties, and then explore or resolve them. By allowing such strong initiative on the part of the user, THOUGHTSTICKER provides an effective, efficient and supportive training experience.

    III.3.6 THOUGHTSTICKER's Training "Knowledgebase"

    THOUGHTSTICKER is constructed as a generalized information management system. Its internal database, called a knowledge representation or knowledgebase in modern parlance, supplies a flexible, "relational" format that is suitable for any subject matter.

    To describe the format briefly (to be detailed later in Section V): topics are defined and associated in relations by the author or expert. These objects together define a network or mesh of "knowledge" and thus determine the structure of the knowledgebase. There are no pre-defined types of relation; the author is free to create relations as desired. THOUGHTSTICKER contains training heuristics, many concerned with the user's purpose and conceptual style, for moving over this structure. The conditions which determine the action of these heuristics are:
    • The User Profile: A preset stereotype of the background of the trainee. The author pre-determines what classes of users are expected to interact with THOUGHTSTICKER. For example, these classes may represent a particular range: novices at a particular task, individuals with some exposure to comparable tasks, and experts. The User Profile can be styled by the author as a single default state, or chosen from a descriptive list by the user, or determined with highest accuracy and detail from a pre-test. Given such a User Profile for a particular user, the choices THOUGHTSTICKER makes are more directed to that individual's level. However, the Profile is only a starting basis and the two mechanisms described next provide further refinement of THOUGHTSTICKER's actions.
    • The User History: A tracking of all actions and results since the user started, whether at the present session on the machine or in the user's history with THOUGHTSTICKER over time. The history consists of, among other details, a record of terms used by the user and the system, topics and explanations shown, and the current context of conversation. This shared history is used by THOUGHTSTICKER at each moment to choose an explanation or a new focus of attention. The result is more directed for the user and hence more efficient and satisfying. The disk requirement for storing this User History is modest.
    • The User Model: A representation of the user's conceptual learning style. As in the User History, the User Model influences THOUGHTSTICKER's choices at each moment, but by applying criteria associated with the user's preferred modes of learning. For example, these may include a preference for examples before general descriptions; or preference for thoroughly completing current areas of learning before touching on new areas; or preference for graphics over text. The User Model can be configured by the author, the user, or by the results of a pre-test. It can even be modified on the fly, provided the user is imposed upon to give feedback on the effectiveness of explanations. In addition, the User Model may include the broader components of the user's purpose. Thus THOUGHTSTICKER can respond differently if the user wishes to learn the entire subject, or the performance of a specific task, or a single precise command name.
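    How these three conditions might jointly direct a choice of explanation can be sketched schematically (the scoring scheme and all field names here are my own invention for illustration, not THOUGHTSTICKER's actual heuristics):

```python
# Schematic sketch: candidate explanations are scored against the
# User Profile (stereotyped level), User History (topics already shown),
# and User Model (preference for examples vs. general descriptions).

def choose_explanation(candidates, profile, history, model):
    def score(c):
        s = 0
        if c["level"] == profile["level"]:
            s += 2                       # matches the user's stereotype
        if c["topic"] not in history["shown"]:
            s += 1                       # prefer what has not been seen
        if c["kind"] == model["prefers"]:
            s += 1                       # matches conceptual style
        return s
    return max(candidates, key=score)

candidates = [
    {"topic": "editor", "level": "novice", "kind": "example"},
    {"topic": "editor", "level": "expert", "kind": "description"},
]
profile = {"level": "novice"}
history = {"shown": set()}
model = {"prefers": "example"}

print(choose_explanation(candidates, profile, history, model))
```

    The point of the sketch is only the layering: the Profile sets a starting bias, while History and Model refine each individual choice as the session proceeds.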

    III.3.7 Aids to Authoring

    It is widely reported that the major expense in using computer-aided instruction is the cost of "authoring" the material, that is, creating the subject matter that the learner is to see.

    Conventional computer-aided instruction provides basic utilities for the creation of text and graphics to be assembled into frames for the user to view. In addition, features for managing the user's records, keeping statistics across groups, etc., are generally available.

    Like conventional CBT, THOUGHTSTICKER can provide any "management" functions relevant to a particular site; for example, tracking a student population, creating output reports, or collecting feedback on the effectiveness of any aspects of the course. These requirements are best defined for the specific needs of a CBT application, and tailored accordingly.

    Unlike CBT, THOUGHTSTICKER is exceptionally strong in providing tools for creation and maintenance of the knowledgebase. The power of its environment for providing such features, utilizing the bit-map display, menus, the mouse, etc., is unrivaled. The author uses a full-feature editor to create text material to be integrated into the knowledgebase. Graphics functions or particular devices (such as videodisc) can also be provided for specific training areas. A variety of tools provide views of the resulting structure and show the implications for the learner. In addition, semi-automatic tools are used to convert pre-existing, machine-readable text of the subject matter into THOUGHTSTICKER data files.

    Conventional computer-aided instruction systems provide authoring tools that are basically passive so far as the content of the presentation to the learner is concerned. THOUGHTSTICKER provides a number of active tools that facilitate the authoring process:
    • THOUGHTSTICKER suggests key topics by which to represent the explanation in the knowledgebase; it searches the text as provided by the author, looking for variations and similar terms in the current author's, as well as other authors', knowledgebase.
    • THOUGHTSTICKER checks the existing knowledgebase of all authors and reports how its contents relate to the new statement. It suggests how the statements might be related (identical, containing, contained, etc.).
    • In certain cases THOUGHTSTICKER can detect a possible conflict between statements (technically speaking, it does this not by the semantics of the text but by the structures of the knowledgebase the text expresses; THOUGHTSTICKER does not yet contain natural language processing). The system offers a series of methods to resolve the conflict depending on the structures: statements may be declared "not accepted", they may be merged with others, distinctions may be added, etc.
    • In all cases the author's input is tagged to that author and other key parameters such as time of entry. Some THOUGHTSTICKER user interfaces provide the identity of the author at all times; others display it when the distinction is required. Any authors' denials of a statement are also so tagged, and hence many-valued disagreement and consensus may be stored. (A denial is the modification of a statement relative to a user, as to whether that user accepts the statement as valid or not. This applies to the user's own statements as well as to the statements of other users.) In this way, local extension or modification of the contents of the knowledgebase is easily achieved while still preserving the original.
    • To stimulate the author to add further structure and material to the knowledgebase, THOUGHTSTICKER will propose new structures which do not yet exist and which, if instated by the author, will not conflict with existing structures. This process can be focused by having the author indicate areas to extend or areas to avoid. Alternatively, THOUGHTSTICKER can suggest areas that are "thin" compared to others; in this way the author is encouraged to achieve a uniform level of detail.
    This combination of features, unique to THOUGHTSTICKER, makes the process of creating the subject matter much more efficient. In addition, multiple authors, possibly at different sites, can contribute to the same knowledgebase without interfering with each other. The original knowledgebase can be augmented and tailored to differing needs at different locations.
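    The structural (rather than semantic) comparison of a new statement against an existing one can be sketched roughly as follows; the classification by topic-set overlap is my own simplification for illustration, not the Lp operations themselves, which are detailed in Chapters V and VI:

```python
# Rough sketch: each statement is reduced to the set of topics it relates;
# a new statement is classified against an existing one purely by comparing
# topic sets, with no reference to the semantics of the text.

def relate(new_topics, old_topics):
    new_topics, old_topics = set(new_topics), set(old_topics)
    if new_topics == old_topics:
        return "identical"
    if new_topics > old_topics:
        return "containing"
    if new_topics < old_topics:
        return "contained"
    if new_topics & old_topics:
        return "overlapping"   # candidate for merging or added distinctions
    return "unrelated"

print(relate({"fire", "alarm"}, {"fire", "alarm"}))
print(relate({"fire", "alarm", "sprinkler"}, {"fire", "alarm"}))
print(relate({"fire"}, {"smoke"}))
```

    Even this crude version shows how "identical, containing, contained" relationships can be reported to an author without any natural language processing.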

    III.4 Related Software Systems

    THOUGHTSTICKER, Lp and dynamic graphics displays of knowledge representations have implications for the domain of software systems as they are now presented in both commercial systems and research programmes. The work of this dissertation presents innovations in these areas, briefly discussed in the following sections.

    III.4.1 Database Management Systems

    The concept of a database is a simple one: to store and index data in a form for swift and convenient retrieval and update.

    Approaches to database management come in various forms, the most flexible of which is the most complex to implement but also the most general and most useful. These "relational database" concepts find mature implementations in modern, commercial database programs available on computers from large to small. Their power derives from a complete flexibility in how the data is indexed: truly relational systems can be indexed on any entry in the database. This corresponds to THOUGHTSTICKER's capability for every topic object to be accessible directly, and for any relations that topics exist in to be used as a means to move from relation to relation. Details of implementation aside, THOUGHTSTICKER is functionally a complete relational database.

    Consider that database connections are arbitrary and unconstrained; THOUGHTSTICKER, by contrast, provides structures that model cognitive relationships. Hence, the result may be considered a "knowledge representation" or "knowledgebase" rather than a mere database. Of course in both cases the data contained must be interpreted by a human to make it alive with meaning and become true "information"; however in the case of THOUGHTSTICKER, the structure reflects contextual relationships that are valid in the construct of the creator or author of the structure.

    For the user, the nature of the two systems (relational databases and THOUGHTSTICKER) is completely different. Databases require that the statement by the user be a "well-formed expression" which can be syntactically parsed and interpreted logically. For example,

    (AND (OR (subject = cybernetics) (subject = protologics)) (type = thesis))

    would retrieve all records on the subject cybernetics or protologics which are a thesis. Modern systems often employ pseudo natural language input schemes, whereby the same search could be performed by typing, literally,

    Show all records of the subject cybernetics or protologics, that are a thesis.
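    As an aside, this stateless, one-shot style of retrieval can be sketched in a few lines of Python; the record fields `subject` and `type` are illustrative assumptions, not taken from any actual database system:

```python
# Hypothetical records; field names are assumptions for illustration.
records = [
    {"subject": "cybernetics", "type": "thesis"},
    {"subject": "protologics", "type": "article"},
    {"subject": "biology",     "type": "thesis"},
]

# The well-formed expression from the text, as a one-shot filter:
# (subject = cybernetics OR subject = protologics) AND (type = thesis)
hits = [r for r in records
        if (r["subject"] == "cybernetics" or r["subject"] == "protologics")
        and r["type"] == "thesis"]

print(hits)   # only the cybernetics thesis qualifies
```

    The point of the sketch is that the query carries no memory: run twice, or interleaved with any other queries, it always yields the same result.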

    THOUGHTSTICKER in its present forms is not tailored to perform precisely this type of search. However, it can be used in such a mode by two of its mechanisms:
    • It can use features of the entries, such as type of entry and contents of the relation to prefer or exclude some entries over others.
    • The course of the conversation includes a context of previous requests which are used by THOUGHTSTICKER to determine what data is retrieved.
    Note that the relational database search is independent of all previous and future searches; it is without context. THOUGHTSTICKER, by contrast, builds a history of interaction by tracking all requests and modifying subsequent responses. For example, a first request for cybernetics as a topic would recall all such available entries (i.e. relations and their models; see Section V for full explanations of these terms). A second request for the topic protologics would first provide those relations which overlapped or were close to cybernetics, and thereafter entries that were individually related to one or the other. Finally, a request for entries on the topic thesis would first retrieve entries that are in cybernetics or protologics.

    Thus the retrieval is not one-shot and context-free; rather, an emerging purpose is created by the history of the user's requests. The result is less immediate. (Of course the conventional search patterns could be added as a capability to THOUGHTSTICKER, subsuming those of relational database systems.) However, for application to research, where a fixed answer is not sought but rather a picture is to emerge over a series of refined retrievals (which is what database retrieval usually becomes in practice), THOUGHTSTICKER holds great promise as a revision to the nature of database management. The free-form manner in which statements are added to the database and the lack of restriction on "keys" are substantial improvements.
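    A minimal sketch of such context-sensitive retrieval follows, under the assumption (an illustration only, not the actual THOUGHTSTICKER mechanism) that relevance is ranked by simple overlap with previously requested topics:

```python
# Sketch of context-sensitive retrieval: the store remembers the
# conversation so far and ranks later answers accordingly.
class ContextualStore:
    def __init__(self):
        self.relations = []   # each relation: a frozenset of topic names
        self.history = []     # topics requested so far, in order

    def add(self, *topics):
        self.relations.append(frozenset(topics))

    def request(self, topic):
        """Return relations mentioning `topic`, ranked by overlap
        with the topics of earlier requests in this conversation."""
        def overlap(rel):
            return sum(1 for t in self.history if t in rel)
        hits = [r for r in self.relations if topic in r]
        hits.sort(key=overlap, reverse=True)   # contextual hits first
        self.history.append(topic)
        return hits

store = ContextualStore()
store.add("cybernetics", "conversation")
store.add("cybernetics", "protologics", "Lp")
store.add("protologics", "thesis")

first = store.request("cybernetics")    # all cybernetics relations
second = store.request("protologics")   # overlapping relation ranked first
```

    Each request reorders subsequent results, so the same query issued at different points in the conversation retrieves entries in a different order.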

    III.4.2 "Thought Processors"

    There was a brief flurry of interest in the commercial personal computer market for programs that were erroneously dubbed "thought processors." In fact, each of these was merely a word processor with a fixed format for creating outlines. With the appropriate command, a given line in the outline could be expanded to contain sub-lines; the process is fully recursive. The resulting outline would make an excellent basis for writing a full document; hence the claim of "thought processing," which here means only helping to plan the writing process.

    THOUGHTSTICKER is rather more like a true thought processor because of its power in cognitive modelling. The structures that result and the process of conflict resolution are strong partners in the thinking process. In fact, THOUGHTSTICKER is to modelling thought processes as word processing is to writing. No other commercial or research software can make such a claim.

    IV. Foundations of Conversation Theory

    This chapter provides background and the bases of the argument of the thesis. A very brief synopsis of this chapter was the content of the Abstract.

    IV.1 Interaction and Conflict

    Conversation Theory is a theory of interaction. The minimum psychological observable is that of an interaction between two distinguishable entities, the distinction of which is made by an observer (Pask 1975b, Pask 1975c). The role of the observer and the interaction are so inextricably linked that they are duals; one does not exist without the other (Pask 1976c, Pask 1980c). Hence, from interaction arises all individuals, all distinctions and therefore all "conceptions." These make up (or "inhabit") the organization of systems as a whole.

    The existence of a distinct entity is an observer phenomenon that is consistent with other distinction logics (Varela 1975). The persistence of an entity is the result of a convergent process rather than, for example, the physical existence of a mass (von Foerster 1977).

    A range of possible types of interaction arise within and among systems. Trivial interaction is that which is consistent with and meshes smoothly with the existing organization and therefore merely reinforces that organization. Information introduced into a system which is not "novel" is an example of this (Pask 1980b). The crucial case is when the information introduced is novel so far as the system under scrutiny is concerned. Hence, if interaction is to allow for change and evolution of organization, it must perforce consist of occasions where processes (by definition, programs that are executed in one or more processors, Pask 1980b) are not mutually consistent and do not smoothly mesh, and where the organization is in danger of change. This is conflict. Without conflict, the organization cannot undergo change.

    One outcome of conflict can be destruction of the organization. Alternatively, if concepts and individuals are to endure under the influence of conflict, it is necessary that conflict be resolved, with the accompanying persistence of organization albeit a modified one.

    IV.2 The Requirements of Representation

    Conversation Theory arose in the context of learning environments where the subject matter to be learned required a representation outside of the human subject matter expert. The independence of knowledge (or more precisely, "knowables") from a knower is an absurdity which is often mooted for the purposes of practical implementation in current digital machines, and for the sake of discourse. Hence it is an easy error to lose sight of this point. Current AI research, of course, is predicated on the possibility of knowables without a knower, and the nature of this contradiction is not always acknowledged (as it is in Dreyfuss & Dreyfuss 1986, and Winograd & Flores 1986). CT at all points re-affirms the role of the knower.

    In addition, other aspects of the epistemological stance of CT impose certain requirements on the needed knowledge representation, requirements from which AI has generally not benefitted. Communicability, stability (memory), and ambiguity and its resolution are all central to cognition, and a knowledge representation based on CT must encompass these issues.

    IV.3 The Rise of Lp: Coherence, Distinction and Process

    The needs of a knowledge representation as constrained by CT led Pask to invent the protologic called Lp. The term "Lp" arose in a context where CT had already been concerned with descriptions in a language called "L". Pask's work had previously involved a formalism containing "L", a symbol standing for any true language (natural, spoken languages as well as the language of dance, gesture, or signs). The requirement was that L have the capacity for questions and commands as well as statements and possibilities. (Classical mathematics and predicate logic do not; see von Wright, 1963 and more recent echoes in Winograd & Flores 1986.) Because the representation underlying cognition was more primitive than that language (in the sense that all languages could be modeled with a common structure and kinetics), Pask added the "p" subscript, meaning "proto" (meaning "primitive" or "original", as in a substrate). Lp is therefore a substrate or structuralism [sic] on which would rest a logic or language to carry the richness of human discourse.

    Lp describes the interaction of conceptual entities by providing rules that constrain the interaction of these entities and hence model their evolving organization. These interactions are described at the level of concepts, that is, as the interaction of topics in relations that form conceptions. Lp contains injunctions as to how topics can and may interact, including how they may conflict and how their conflict may be resolved.

    In its pure form, Lp is a logic of process as well as coherence and distinction:
    • Process, in that all entities are the result of interactions, where an entity is that which is stable and recognizable.
    • Distinction, in that there arise, in the course of the interaction of processes, entities that did not exist before and that are distinguishable from one another by further processes.
    • And coherence, in that conceptual entities "cohere" together: their dynamics are such that their process interaction creates stabilities, themselves conceptual extensions of the original.
    Due to their characteristics (such as their kinetics, leading to their stability) Lp entities imply models for memory, uncertainty and innovation.

    IV.4 The Distinction of Micro and Macro

    The attribution of a term such as micro or macro is made by an observer relative to some purpose. In the context of software simulation, it refers to the "grain" at which elements are chosen as primitive, and the relations between elements are simulated by procedures which relate them.

    To choose a level of description and to name it "macro" is equivalent to stating that there will exist some elements of the database which will be considered indivisible atoms and processes below a certain level will be asserted rather than acted upon. In the present case of THOUGHTSTICKER software as described below, the topics of Lp will be considered as atoms, and their relationship will be asserted to be dynamic but represented as a static structure. This is not a condemnation; it is nothing more than a proper declaration of the status of the database elements, and it serves to clarify the observer's intentions in the nomenclature of declaring it to be "macro."

    For some purposes, such as tutorial representations of subject matter as detailed below, such a "macro" description of topics is sufficient, because the relationship among topics is to be activated by the user, and the mere existence of their relation is sufficient from the perspective of the software.

    For other purposes, this grain of simulation may not be sufficient. In particular, it cannot represent the true implications of CT as a model of the dynamics of mentation; that requires a process interpretation of the structures of Lp. This can be achieved by a series of increasingly detailed simulations. The first of the series retains topics as atoms but provides a process relationship between them (this being the main topic of the remainder of the thesis). A second would break the topics down into sub-components, thereby exposing their "internal" structure to scrutiny. This is unnecessary for demonstration of the thesis and is not explored further. However, it is appropriate to comment that such an extension of the simulation would be necessary to provide additional evidence in support of the more subtle implications of Lp, in its power to model generalizations of concepts and their creation in abduction.

    IV.5 Static, Macro Representations of Lp

    All previous software programs based in some way upon Lp operations have used a description of Lp that encompasses coherence and distinction only. The level of description of these programs has been that of the topics (considered as indivisible atoms); and a level "higher" than the topics themselves, namely, a level of topic relations.

    The topics are represented as static elements in a database; they exist not by nature of a process which is executed but rather because of a configuration of 0/1, binary data in a static software structure. Similarly relations are static aggregates of tokens associating (either by means of pointer structures or common names) the topics they relate. It may be tempting to consider that in order for these entities to be used by the digital machine, a "process" in the form of a program is executed by the digital machine to access them, and that this is sufficient to achieve "process interaction." However, this misses the crucial point that the topics and their relations in a true Lp processor are embodied because they are executed as processes, and their attributes and interactions arise from execution; not because they are being accessed as a static token. The qualities of the entities are the result of execution and not simple reference to a list of static attributes. Process interaction can only be simulated in a serial digital machine by pairwise checking; in a true Lp processor, the medium in which the processes are executed (here unspecified) also affords the means for their interaction.

    In existing software implementations of Lp, conflict is detected by a simple counting and comparison scheme. The software makes reference to the static data structures, and conditions for conflict (described in Section V.4 on THOUGHTSTICKER) are calculated.
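    The macro-level scheme can be caricatured as follows; the representation of relations as tuples of topic names, and the test for conflict as two relations over the same topic set, are illustrative stand-ins for the actual conditions of Section V.4:

```python
# Caricature of the macro level: topics and relations as inert data,
# conflict found by pairwise counting and comparison from "outside".
from itertools import combinations

relations = [
    ("cybernetics", "protologics", "Lp"),
    ("Lp", "protologics", "cybernetics"),   # same topics, second relation
    ("protologics", "thesis"),
]

def conflicts(relations):
    """Pairwise checking over static structures: the only way a serial
    machine can simulate process interaction. The stand-in condition
    here is two distinct relations over an identical topic set."""
    found = []
    for a, b in combinations(relations, 2):
        if set(a) == set(b):              # counting/comparison scheme
            found.append((a, b))
    return found

print(conflicts(relations))   # the first two relations collide
```

    Note that `conflicts` inspects every relation from outside the structure: a privileged, global view of the whole organization, rather than anything computed by the organization itself.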

    It is important to realize that this calculation is performed by a program that has available to it all necessary information of the organization of the system as a whole. It is a privileged position which is akin to a global or "god-like" view. It is therefore a position taken by a process that is independent of the system itself (in the sense that an observer is outside the system). Because it is independent, the calculation in this form cannot be performed in this way by the system itself.

    Because of this privileged view, the level at which the dynamic interaction of the topics is simulated is here called "macro." For restricted applications of Lp, such as in a knowledge representation scheme for training, this may be sufficient.

    IV.6 Deficiencies of the Macro

    However, the macro position has two deficiencies: the fundamental tenet of Conversation Theory, that of true process, is missed; and conflict does not arise internal to the system, but rather is computed externally to the system; that is, macroscopically. It may be seductive to say that the existence of conflict can be denigrated and trivialized to a mere artifact of the level of description and its historical origins in subject matter representation. However, the rise of conflict within systems must be recognized for its power to model the initiation of distinctions, and hence as a powerful engine for innovation arising within the organization of a system.

    Without a process component, there is no "available energy" for the system, and further mechanisms would need to be hypothesized. With process, the entire theory holds together in a consistent manner.

    IV.7 Hypothesis: Theory Confirmation in Micro Simulation

    All theories consist of descriptions in a language. A description may imply or produce a further description which in science is often called a "result" of the theory. More precisely, such further descriptions are hypotheses or hypothetical statements that are deduced from the body of the theory. A hypothesis or "theoretical result" is usually compared to observations of some environment and when correlations exist the theory is, in some part, "confirmed."

    The hypothesis put forth in this dissertation is that a low-level description of Lp, that of an internal and microscopic level in which topics are influenced by "forces" that are exerted by the topology of the conceptual space, would, in its activation as a dynamic process of appropriate dimension, produce as a result (and hence provide a confirmation of) the macroscopically-observed behavior of the system manifest as conflict and resolution of conflict. The resulting implementation would reify a system modeled in Lp by producing a system whose topological space was constrained by the interaction of its entities.

    This has some similarities to the work in new "quantum computability" (Deutsch 1985), which is another revision of "classical" computational theory (i.e. that attributed to Turing). There, as here, the desire is to achieve certain classes of computation which otherwise would not be possible; in particular, computation which would not be possible in any "Turing architecture" consisting of a finite state machine and a tape (memory). This position is in sharp contrast to the historical view of Turing computability as sufficient for any class of finite computation, including that of brain (for an excellent discussion of the interactions between these views see Lettvin 1985). Since the digital computer is based on the Turing model, it was considered just a matter of engineering before computers would be as intelligent as humans. The revision requires new hardware architectures.

    IV.8 Dynamic, Micro Representation of Lp

    To return to the original intentions of Lp as founded on process, a new software model must be put forward to reify Lp structures. Unfortunately there are fundamental limitations presented by present-day serial, digital machines and a true Lp embodiment must await new architectures (which are beginning to appear, see Section VII.3.4). However the basis for a new approach can be set out now, and simulated in current hardware. The software written for this dissertation, although a simulation, restores the process component to Lp embodiments and provides a direction for future work in fully concurrent machines.

    In brief (as the details will be presented in Chapter VI), the individual entities that exist within an Lp structure exist due to the execution of a process rather than their existence in a static database. This can be simulated within serial machines if the interactions (relations) between entities (topics) are expressed as a continuous computation of relationships within a topological space. These relationships are represented to the observer as relative positions on a display screen.

    The topics themselves are atomic units and not processes whose execution results in stable (but dynamic) entities; for the purposes of practical implementation, some level of "atom" must be chosen to begin the simulation. But while this is the case, their relations as manifest in a graphical display exist due to the interactions of processes. These processes individually are the action and interaction of each entity within the organization of the system under execution.

    Interpreted graphically in this way, topics compute their positions relative to their neighbors in relations that they share. The computation is performed in accordance with Lp rules. The resulting positions, which may or may not be stable, represent the conceptual relationships of the topics relative to each other.
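    A minimal sketch of such a micro simulation follows, assuming (as an illustration only, not the Lp rules themselves) a spring-like attraction between topics that share a relation and a weak repulsion between all topics:

```python
# Micro-level sketch: each topic continuously recomputes its position
# from "forces" exerted by the topology of the conceptual space.
# Constants and the update rule are illustrative assumptions.
import math
import random

random.seed(0)  # reproducible layout

topics = ["A", "B", "C", "D"]
relations = [("A", "B", "C"), ("C", "D")]
pos = {t: (random.random(), random.random()) for t in topics}

def neighbors(t):
    """Topics sharing at least one relation with t."""
    return {u for rel in relations if t in rel for u in rel} - {t}

def step(k_attract=0.1, k_repel=0.02):
    """One synchronous update: springs pull related topics together;
    a weak inverse-distance repulsion keeps all topics apart."""
    new = {}
    for t in topics:
        fx = fy = 0.0
        x, y = pos[t]
        for u in topics:
            if u == t:
                continue
            ux, uy = pos[u]
            dx, dy = ux - x, uy - y
            d = math.hypot(dx, dy) or 1e-9
            if u in neighbors(t):
                fx += k_attract * dx          # spring toward neighbor
                fy += k_attract * dy
            fx -= k_repel * dx / (d * d)      # repulsion from everyone
            fy -= k_repel * dy / (d * d)
        new[t] = (x + fx, y + fy)
    pos.update(new)

for _ in range(300):
    step()
# after settling, topics sharing a relation should sit closer
# together than topics that do not
```

    The resulting positions are exactly the "relative positions on a display screen" of the text: stable configurations represent coherent conceptual relationships, while configurations that fail to settle signal tension in the structure.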

    Within the simulation of the micro interactions of Lp, macro features of CT should emerge. For example, for certain initial configurations ambiguity or contradiction should be detected. This prediction is confirmed, as will be shown in Section VI.5.

    The next major Section presents a detailed view of THOUGHTSTICKER software at the macro level, which is a necessary precursor to discussion of the micro of Lp and results.

    V. Conversation Theory Software

    V.1 The Birth of "THOUGHTSTICKER"

    As noted in Section III.3.2, THOUGHTSTICKER was invented by Pask and collaborators at System Research Ltd in the late 1970s (Pask 1976a). Its development was more or less coincident with the development of Lp in that the creation of THOUGHTSTICKER (as a software manifestation) both fed on and was fed by development of Lp (its formal and notational basis). However, and as already noted, to be a full-blown logic for CT, Lp required a bifurcation principle (described below). Pask has stated that this was available only after the enquiries of Vittorio Midoro about the notation of analogy and distributive coherence (an overlap of a single topic in more than one relation) co-existing in the same diagram. Pask had already realized that some principle was needed to explain how new structures arise from computation performed from within a system. This, along with the principles of conservation, duality and complementarity already formulated, would make CT a complete, "scientific" theory. Midoro's enquiry led to the simple and elegant bifurcation principle that shows how distinctions and hence new structures arise from within an organization. Midoro was at the University of Genoa at the time, and hence Pask has called the bifurcation rule the "Rule of Genoa."

    It will be demonstrated that THOUGHTSTICKER in all of its software forms represents the embodiment of Lp at a macro level, an argument that is one foundation of this thesis. To further clarify what is meant by this and to provide necessary background a detailed description of THOUGHTSTICKER follows.

    V.1.1 Raison d'Etre of THOUGHTSTICKER

    The background to the THOUGHTSTICKER system may be seen as two concurrent threads:
    1. The development of Conversation Theory as a scientific and psychological model for knowledge and beliefs; and
    2. The attempt to "compute" knowledge structures inside presently-available digital machines, a goal analogous to that of AI.
    This chapter focuses on the former, although some comments on the latter are necessary.

    V.1.2 Represent What?

    Both AI and cybernetics have encountered the same dilemma, albeit from quite different paths:
    • What is knowledge that it may be represented in a concrete structure; and
    • What is a representation that it may reflect the process of knowing?
    Without question the issues here are very deep and are properly treated in other places, the literature of philosophy being one (in a monograph particularly focusing on CT and Lp, see also Nicoll 1985).

    In the context of the use of computers in human decision making situations, it can be shown why the issue arises at all by the aid of the following parable: To help humans to perform calculations such as check-book balancing, word processing and orbits of satellites, the computer must manipulate with facility the elements of these domains, such as numbers, representations of text, calculations under specified equations, and so on. Without this capability, the computer would be useless for these tasks.

    Similarly, for the computer system to provide any help, support, advice, what have you, in the "thinking process" (alias the "decision making" process) it must manipulate with facility the elements of the domain: the knowledge of the user. This presumes, not unreasonably, that the computer is to calculate a domain beyond mere numbers and measures. The domain becomes the non-quantitative, non-specific and often inchoate world of beliefs, conceptions and impressions. (See comments in Section III.4.2 concerning so-called "thought processors.")

    V.1.3 Attempts at Knowledge Representation: "Expert Systems"

    The goals of knowledge representation are easily said, but not so easily done. The 25-year history of AI has attempted to deal with these issues from the "bottom up": starting from the computer technology and a reductionist view of mental processes. Some consider that the process has borne fruit (Michie 1982; Feigenbaum & McCorduck 1983), while even the most impressive of the so-called "expert systems" are limited in the extreme (Duda & Shortliffe 1983).

    The "expert system" paradigm is one which considers that the "knowledge" of experts may be captured by a manual process, and converted into a form computable by present-day computers. This manual conversion is performed by a "knowledge engineer" who codes the "expertise" into rules which are easily calculated over by the digital engine. The tribulations and disadvantages of such a presumptive approach have been discussed elsewhere, in Pangaro & Nicoll 1983.

    For our purposes here, it is sufficient to point out that it may be possible to divide the global problem of the reification of knowledge into two stages:
    1. Capturing a representation which is useful to the user and to others but which cannot be computed over by the digital engine, that is, the computer does not know; and
    2. Elaborating the structure of representation so that the digital engine may itself perform (e.g. decide) in a manner which reflects somewhat the form as well as the content of the original human thinker. Humberto Maturana has made the point (Maturana 1986) that going the route of representation separate from ontology is a fundamental misunderstanding of the nature of knowing, and any attempts in that direction are doomed. His position is beyond the compass of this discussion, as the center of this thesis is the issue of demonstration and confirmation; hence representation is desired.
    Expert systems, and AI in general, attempt the second, and more difficult, stage first. The goal of pragmatic research programs (Sheppard 1981; Pask 1981) is the former, with a clear plan of extension into the latter as techniques and technology catch up to the more demanding requirements of "machine intelligence."

    This claim of capability (for it is only a claim until demonstrated in working systems) is based on the substantial theoretical and experimental work which Conversation Theory represents as embodied, within its limitations, in the software system called THOUGHTSTICKER.

    V.2 The origins of THOUGHTSTICKER

    In some sense it is impossible to pinpoint a "first" implementation of the concepts behind THOUGHTSTICKER as it is now discussed. Pask and his associates produced many machines from the middle 1950s to the middle 1970s, each of them contributing important ideas to Conversation Theory and its fruition in THOUGHTSTICKER. In the late 1960s and early 1970s, a few machines were made that were clear predecessors to THOUGHTSTICKER. These were the CASTE and EXTEND systems (Pask 1975c). Each had an electro-mechanical interface manipulated by the human subject, connected to software programs for purposes of recording data and performing some calculations most conveniently done in software.

    V.2.1 The Demands of Course Assembly

    The very need for a system to represent the structure of knowables grew out of the problem of representing subject matter for environments for learning. The ultimate structure of the representations grew out of the epistemological foundation of cybernetics, in the form of Conversation Theory itself.

    It is interesting to note that the need for a representation of subject matter for teaching came before the theory of conversations or its strict calculus of knowledge representation. Once CASTE was mature as a "Course Assembly and Tutorial Environment", THOUGHTSTICKER was conceived as a software aid to assembling the structures which would "hold" the subject matter for tutorial purposes.

    The distinctions between CASTE and THOUGHTSTICKER are a source of confusion since recent usage of these terms has tended to imply different implementations in different hardware but both based on Conversation Theory and Lp. For the record, recent uses of the name THOUGHTSTICKER emphasize the knowledge representation functions and the name CASTE emphasizes the tutorial heuristics. However, any THOUGHTSTICKER demonstration usually includes some CASTE functions for purposes of practical use and demonstration, while CASTE operates on the structures produced by THOUGHTSTICKER. Hence either term implies the other and neither can be independent.

    EXTEND was a related software program which allowed for the user (whether in the role of "teacher" or "learner") to extend the subject matter representation.

    Thus it was subsequent to CASTE, EXTEND, and even THOUGHTSTICKER that, with the invention of a bifurcation principle, CT produced what Pask would consider a complete, scientific theory capable of encompassing, minimally, the domain of epistemology.

    V.2.2 THOUGHTSTICKER Defined

    A precise distinction between Lp, Lp software and THOUGHTSTICKER was originated by C Sheppard and R (Dik) Gregory of Admiralty Research Establishment (ARE), Teddington, UK. "THOUGHTSTICKER" indicates a user interface written in software and connected to a software embodiment of Lp structures and processes ("Lp software"), within the constraints of present digital technology, which constraints are very great compared to the intention behind the formal protolanguage itself ("Lp"). Multi-process, concurrent, conflict-ridden as well as conflict-free computation are a few of the gross omissions inherent in any present-day THOUGHTSTICKER. Even the proposals of modern AI for non-von Neumann, many-processor digital hardware are not capable of the proper processing that is required for Lp. Some of these points will be encountered more fully in the ensuing argument, below.

    Even given these real restrictions, the potential benefits of a THOUGHTSTICKER are very great as applied in a direct way to database construction and retrieval, computer-aided instruction, and a variety of tasks that involve multiple-authors and/or multiple locations. Its practical implications for decision support and machine intelligence are only implied and as yet unexplored.

    V.2.3 THOUGHTSTICKER in its current forms

    The specific history of THOUGHTSTICKER implementations is offered in Appendix C, as it is tangential to the main thesis. For our purposes here it is sufficient to describe current implementations.

    At present there are two major embodiments of THOUGHTSTICKER in software available for examination.

    Microcomputer BASIC Versions

    There are two versions running on the Apple II microcomputer with additional hardware boards. One, called Apple CASTE, was developed largely for the Admiralty Research Establishment, UK, with some modules and features added for the US Army Research Institute (ARI), by PANGARO Incorporated. Another is called C/CASTE, based on the original code of Apple CASTE and developed for the US Army Research Institute at Concordia University under the direction of Pask. The strengths of these systems are that they are self-contained in available and inexpensive hardware, and are well debugged and documented. Their limitations are the size of the database they may comfortably contain and the restricted set of Lp operations they perform.

    Both contain the basic Lp operations (up to but not including condense/expand and generalization). Apple CASTE emphasizes the authoring and presentation of text models for Lp entities, although a simple Apple graphics module can be used. In contrast, C/CASTE emphasizes the multi-display presentation of tutorial material including computer-controlled slides of the subject matter and maps of the knowledge representations.

    Both use CASTE in their names, relating them to the Course Assembly System and Tutorial Environment, the system that preceded THOUGHTSTICKER and Lp, because the focus of their use is the tutorial application of CT.

    Symbolics LISP Versions

    This is an extended version of the Lp operations, including simple generalization, bifurcation, and extended conflict resolution, running on the Symbolics LISP Machine. The THOUGHTSTICKER code is manifest in a number of forms on the Symbolics, including a series of user interaction frames for studying the evolution of the knowledge representation; a "naive" interface for users without knowledge of CT; and the Expertise Tutor, used to teach a naval command and control task. The power of the system is very great, owing to the Symbolics environment: its speed, size and efficiency of experimentation and implementation. The implementation surpasses all previous versions in raw functionality and capability.

    Further details of the history of software development of THOUGHTSTICKER, including its successful integration into a complete CT system of discourse (the Expertise Tutor), are found in Appendix C.

    The explanations below will use the Symbolics version for its examples, although the particular details of screens and menu functions are minimized as they are specific to this implementation; the concepts are however general in the context of CT.

    V.2.4 Lp at the Macro Level

    As defined above in Section V.2.2, and in distinction to Lp software and Lp itself, THOUGHTSTICKER is a program which provides access to a set of Lp functions in software. The formal description of its functions is that of Lp, which in turn is the dual of CT, the macro theory of conversations from which it arises. Although it cannot be a complete implementation of Lp (for both technical and theoretical reasons), THOUGHTSTICKER is used below to explain the operations of Lp. Details of the full operation of the software can be found in Pangaro et al 1985, attached.

    V.2.5 Uses of THOUGHTSTICKER

    To elicit knowledge from users, software such as THOUGHTSTICKER may operate in one of two modes:
    1. In the background, accepting input from a domain (as from a simulation called HUNKS for the ARE, or from the Team Decision System (TDS) as developed for ARI); or
    2. Directly, with the user and the elicitation software engaged in the one-on-one interaction.
    The second is the model used for the following description, as it is the more general case. The discussion focuses on THOUGHTSTICKER as an engine for receiving and representing knowledge, without direct concern for the ultimate structure and its kinetics, the central issue of this thesis. However, it is essential to present in detail the meaning and interpretation of the structures at the macro level (to the "user" and his or her "psychology") and from that presentation make the case for the correctness of interpretation of the theory at the micro level (to the "topics" and their atomic interactions).

    The frames used as examples below are from the "research" version of THOUGHTSTICKER, that is, a set of interaction windows constructed in 1983 and 1984 whose purpose is to allow the user to explore easily the knowledge representation scheme behind THOUGHTSTICKER. A simpler "Naive THOUGHTSTICKER" is also available for users not wishing to be exposed to the internal issues of the scheme.

    V.3 Making Statements

    In a one-on-one interaction with THOUGHTSTICKER, it is the user's responsibility to take initiative in making assertions which THOUGHTSTICKER endeavors to represent in its internal structures. It is THOUGHTSTICKER's responsibility to conform the user's actions to its internal requirements and to engage the user in a dialogue when there is conflict or disagreement between the user and THOUGHTSTICKER. There are additional features of THOUGHTSTICKER that provide stimulation to the user in hopes of initiating novel assertions.

    The primary way in which the user adds to THOUGHTSTICKER's internal knowledge representation is by making text statements. Further transactions are required to characterize the meaning within these statements. All this is accomplished by display screens, menus of functions, and the ability to point to sections of the text to indicate words and phrases within the user text.

    Figure 1 shows a Write Watcher "window", as a software screen of the Symbolics is called. It consists of "panes", each of which is surrounded by a border and contains elements such as text and symbols. The user in the role of "author" begins by typing statements at the keyboard, which are entered into the middle pane as if into a word processor contained within THOUGHTSTICKER. The usual English-language conventions of grammar and punctuation are followed at the discretion of the user.

    The first important distinction to make about interaction with THOUGHTSTICKER is that the system performs no semantic processing. This means that details of sentence structure and grammar are ignored; the text in its entirety is recorded by THOUGHTSTICKER for later retrieval. This is not to say that a natural language interface would not be an advantage at the THOUGHTSTICKER interface; it is an elaboration hoped for in future development.

    Let us presume that the user has made a statement which reflects his or her belief about a particular subject. In this case, the statement is "To represent knowledge is the goal of artificial intelligence programming." The result of typing the statement is seen in the lower pane. Certain words or phrases appear in that pane in bold type. This indicates that users have previously distinguished these words and phrases to THOUGHTSTICKER as significant and to be noted whenever they appear. These are the topics of CT.

    V.3.1 Models, Topics and Relations

    It is important to differentiate the various elements of the authoring process in terms of the strict definitions of CT:
    1. The texts of sentences typed by the author are models. They are not modified by the system itself but are "executed" (that is, printed onto the screen) as a manifestation of the object they model, whether a topic or entailment (relation).
    2. Models are associated with entailments. An entailment is a grouping of a particular type, for example, a coherence or an analogy.
    3. The elements of entailments are topics. These are the exteriorized elements of concepts, which are themselves clusters of processes. Topics are the public elements of concepts (whether shared among different individuals or merely exteriorized into the THOUGHTSTICKER interface). In the present THOUGHTSTICKER, topics are represented by words or phrases.
    The implications of entailment will be discussed below but to keep this exposition reasonably brief, exceptions and qualifications to statements will be minimized. Some implications worth noting are: models may very well (and in certain cases, should) be graphics or sound; topics are not the same as words or phrases but are efficiently represented so; models need not contain identical words/phrases as the topics of the entailment they model.
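    The three elements just distinguished can be sketched as a small data model. This is purely illustrative; the class and field names below are assumptions made for exposition and are not drawn from the thesis's LISP implementation.

```python
from dataclasses import dataclass, field

# Illustrative data model of the CT elements: topics are public names for
# concepts; entailments group topics; models are attached manifestations.
@dataclass(frozen=True)
class Topic:
    name: str                                    # public word or phrase

@dataclass
class Entailment:
    kind: str                                    # e.g. "coherence" or "analogy"
    topics: frozenset                            # the Topics it relates
    models: list = field(default_factory=list)   # attached texts (or, in
                                                 # principle, graphics or sound)
```

    Note that a model is attached to an entailment, not substituted for it: the same entailment could carry several models, in keeping with the thesis's remark that models need not contain the identical words and phrases of the topics they model.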

    THOUGHTSTICKER can extract from the text of the model the words and phrases of topics which it has already noted, as Figure 1 shows. Not all topics will necessarily have been asserted to THOUGHTSTICKER, and new topics can be added manually by the author. This is accomplished by pointing at the words or phrases in the lower pane. THOUGHTSTICKER checks to see whether the text is close to previous topics (for example, is the new topic "goal" similar to previous topics, such as "goals" or "goal structures"). The user may indicate that these new topics are intended to be the same as earlier ones, or to be kept distinct.

    The resulting topics are displayed in a separate pane (second from the top) labeled "Topics."
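    Since the system performs no semantic processing, the extraction just described reduces to surface matching over the recorded topic names. A minimal sketch follows; the function names extract_topics and similar_topics are hypothetical, chosen only to illustrate the two transactions (recognizing known topics, and flagging near-matches such as "goal" against "goals").

```python
# Hedged sketch of purely structural topic extraction; no semantics involved.
def extract_topics(model_text, known_topics):
    """Return the known topic phrases that occur in the model text."""
    text = model_text.lower()
    return [t for t in known_topics if t.lower() in text]

def similar_topics(candidate, known_topics):
    """Crude closeness test, e.g. 'goal' against 'goals' or 'goal structures'."""
    c = candidate.lower()
    return [t for t in known_topics
            if t.lower().startswith(c) or c.startswith(t.lower())]
```

    The user, not the system, decides whether a near-match names the same topic or a distinct one; the sketch only surfaces candidates for that decision.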

    V.3.2 Instating Entailments

    Up to this point, the author has made a statement into THOUGHTSTICKER in the form of a few sentences of text. This is to serve as a model of an entailment involving a set of topics, which the author has also indicated to the system. The next step is to integrate this new knowledge into the pre-existing models and entailments which THOUGHTSTICKER holds.

    V.3.3 Coherence

    As noted, commands to THOUGHTSTICKER are made by choosing items on a menu. The label on the menu choice in Figure 1 is "Instate"; clicking here will provide further choices, to instate the utterance into the database as a coherence, analogy, or topic model. Referring to the term from CT, a coherence is an entailment between topics which requires, first, that each topic in the entailment "make sense" with its neighbors in the entailment. Thus, the meaning of knowledge must be derivable from Artificial Intelligence and goal; recall that the purpose of the model is to represent this meaning.

    However, within a coherence every topic must be supported and "producible" from all of its neighbors in the relation; hence, artificial intelligence may be explained as something which has as its goal the representation of knowledge. Equisignificantly, a goal is seen to be made from the entailment of knowledge and artificial intelligence; the topic goal is their entailment.
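    The requirement of mutual producibility can be stated compactly: a coherence among a set of topics carries one production per topic, each from all of its neighbours. A sketch (the function name productions_for_coherence is an assumption for illustration):

```python
# Sketch of mutual producibility within a coherence: every topic must be
# supported and "producible" from all of its neighbours in the relation.
def productions_for_coherence(topics):
    """Map each topic to the set of neighbours that must produce it."""
    ts = set(topics)
    return {t: ts - {t} for t in ts}
```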

    This requirement for coherence between topics distinguishes THOUGHTSTICKER from all other knowledge representation software available in the field of AI.

    There is the issue of choosing how to model a given utterance, namely, how it should be instated. The requirement for mutually-producible topics, just described, is the minimum test for coherence. Lesser conditions can qualify as analogy. Topic models are used if they require no further breakdown of detail, that is, if the topic itself is an "atom" of the subject matter. (The "Naive" modes of THOUGHTSTICKER provide a semi-automated approach to this, where questions about the utterance are used by the system to guide the user through considering how to best model the utterance in the knowledgebase.)

    V.3.4 Subjectivity of Statements

    A few observations are in order at this stage. One may argue as to whether one of the topics should really be "artificial intelligence programming", rather than simply programming. This, as everything about the process which the author undertakes, is a matter of opinion. Nothing about the representation is true, immutable, or correct. It is merely the belief of the author, in the context in which he or she is making statements. Thus, THOUGHTSTICKER is a system for reflecting subjective assertions, namely, beliefs.

    One may also argue about whether "goal" is really produced from the remaining topics. Again, this is a matter of the opinion of the author, but unless the sense of production can be comprehended by users other than the author, the results will not be so useful. THOUGHTSTICKER contains specific features, described below, which endeavor to make the knowledge structure as comprehensible as possible (but always, of course, within the limits of the communication skills of the author). Without the concept of coherence, however, THOUGHTSTICKER is merely a keyword database retrieval system. With coherence, it is a unique system for testing agreement between individuals.

    As will be seen in later chapters, coherence may be seen as a set of forces imposed by and acting upon topics. In THOUGHTSTICKER, these forces are computed by external fiat (see next section); however CT requires that they be dynamic forces acting within the relations themselves.

    V.4 Contradiction Checking

    THOUGHTSTICKER must ensure that new assertions are "consistent" with old ones, in order to maintain the coherence of the knowledge representation as a whole. THOUGHTSTICKER uses the requirement for coherent entailment as a basis for evaluating the internal consistency of the knowledge representation, as follows.

    Upon the user's injunction of "Instate", the system compares the proposed entailment against all previous entailments. Because THOUGHTSTICKER does no semantic processing, all evaluation is done purely structurally, by examining the topics in their entailments across the entire knowledge representation. In essence, THOUGHTSTICKER searches for overlaps of the proposed entailment with past entailments. If THOUGHTSTICKER detects a potential contradiction, the user is warned as shown in Figure 2. A new menu has popped up on the screen, stating that the proposed entailment has conflicts in the mesh and offering among other menu choices "Try to resolve conflicts." The other menu choices refer to alternative interpretations of the same text model. These are available if the user wishes to back away from asserting a complete coherence. However, by far the most interesting case is that of conflict resolution, described in the following sub-sections.

    V.4.1 Cases of Contradiction

    The situations that THOUGHTSTICKER can detect are from the following cases:
    1. There is no overlap of topics: the new entailment is completely unrelated to existing ones, and indeed contains topics which appear nowhere else. The proposed entailment cannot be contradictory, but is unrelated. For example, in the present case: "Cybernetics is the epistemology of science." (Intended topics are bold.)
    2. There is overlap on one topic only: in CT, this is called distributive entailment, because the meaning of the overlapping topic is distributed across more than one entailment. There is no structural contradiction and hence the proposed entailment may be accepted. For example, "Artificial Intelligence as a field was established in the 1950s by McCarthy, Minsky and others."
    3. There is overlap on all but one topic; for example, there are some identical topics present in both the proposed entailment and a previous entailment; and two further, but different, topics, one in the proposed entailment and one in the previous entailment. For example, "The goal of Artificial Intelligence is to make computers smart like people." Within CT this is the classic case of contradiction and requires some explanation, below.
    4. There is a common subset of topic names between the proposed entailment and a previous one. Is the entailment with fewer topics a "conceptual subset" as well, in that it is entirely contained in the larger one? "Artificial Intelligence involves the embodiment of knowledge into computers for the purpose of making computers smart like people..." would be an example of this case.
    5. There is complete overlap: all topics in the proposed entailment are contained identically in a previous entailment. Since THOUGHTSTICKER (as noted) performs no semantic processing, the question arises: Are the entailments truly identical? Or, was a new, different entailment intended by the author? For example, "The goal of Artificial Intelligence is to program machines to behave as if they contained the knowledge of human beings."
    Of course, any of the cases of contradiction may exist with more than one previous entailment.
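    Because the evaluation is purely structural, the five cases above amount to a set computation over topic names. The following sketch makes this concrete; the function name classify_overlap and its returned labels are assumptions for illustration, not part of THOUGHTSTICKER.

```python
# Hedged sketch of the purely structural comparison of a proposed entailment
# against one previous entailment (each given as a set of topic names).
def classify_overlap(proposed, previous):
    shared = proposed & previous
    if not shared:
        return "unrelated"                  # case 1: no overlap of topics
    if proposed == previous:
        return "identical"                  # case 5: complete overlap
    if proposed <= previous or previous <= proposed:
        return "subset"                     # case 4: conceptual subset?
    if len(shared) == 1:
        return "distributive"               # case 2: one topic shared
    if len(proposed - shared) == 1 and len(previous - shared) == 1:
        return "classic-contradiction"      # case 3: all but one topic overlap
    return "partial-overlap"
```

    In the full procedure the proposed entailment would be compared against every previous entailment in turn, since any of the cases may hold against more than one of them.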

    In all of the cases cited, the procedure is one of comparison with past entailments and (if necessary or desired) a resolution of the conflict based on the author's intention. It is conceivable that any keyword retrieval system could point out the condition of keywords in common, although without CT as an underpinning, there would be no reason to draw any implications from the particular structure in each case. THOUGHTSTICKER can provide specific aid in interpreting and resolving the contradiction, as exemplified below by the particular and perhaps most interesting case of classic contradiction, where all but one topic in both entailments overlap.

    V.4.2 Resolution of Conflict

    Returning to the same statement example, THOUGHTSTICKER had detected a condition of possible conflict between an existing entailment and the new statement just made by the author. It is now up to the user and THOUGHTSTICKER to modify the structure if resolution of conflict is desired.

    THOUGHTSTICKER responds to the user injunction "Try to resolve conflicts", shown in Figure 2, by offering a new window called the Resolver, shown in Figure 3. This window shows the proposed entailment on the middle left, with the previous entailment that it conflicts with on the middle right. The "Shared topics", "Artificial Intelligence" and "knowledge", are shown in a pane in the upper middle of the window; they appear in both entailments. "Goals" is present only in the proposed entailment (left side) and "data structure" is present only in the previous entailment (right side). The symmetric menu choices on each side represent various procedures for the author to follow to resolve the conflict; for example, to "Deny" one model or the other. Other functions are aids to the user, for example "Undo" returns to a previous state, and "Describe" gives relevant details of the structure of the nearby database. As before, details of function are available in Pangaro et al (1985).

    As noted, the significance of the detection of conflict comes from the implication of coherence: each topic is producible from the remainder of topics in the same entailment. The present situation implies that the same topics (the ones which overlap in both the proposed and previous entailment) may produce either of two topics, thus:
    • Artificial Intelligence and knowledge produce goals; and
    • Artificial Intelligence and knowledge produce data structure.
    Which is it? Either/Both? Each, but with qualification? Let us examine the possible resolutions in detail:
    1. The non-overlapping topics are really the same topic: in this case, goals and data structure were perhaps originally intended by the author to have the same meaning. Here, that is not the case, although one can easily imagine an author inadvertently using two different names for topics (for example, goals and purposes, or data structure and internal representation) while meaning the same thing.
    2. The two entailments should really be merged into one, relating all the topics of both entailments. In this case, the result might be a model such as: "The goal of Artificial Intelligence is to capture knowledge in software data structures." Let us suppose for our purposes here that this is not sufficient for the author, as the previous entailment was making a slightly different point.
    3. One or more of the overlapping topics are not really a single topic but are two (or more) as related by analogy. For example, Artificial Intelligence is, in the proposed entailment, a field of programming; whereas in the previous entailment, the statement is about the proponents of Artificial Intelligence, the individuals themselves. A fine distinction, to be sure, but one which must be accommodated within any knowledge representation scheme. THOUGHTSTICKER would accommodate this by splitting Artificial Intelligence into Artificial Intelligence programming and Artificial Intelligence proponents, as joined by an analogy entailment, Artificial Intelligence. The overlap of topics would now merely be a distributive entailment and the structure would be coherent.
    4. Or, the author's intention is yet more subtle than any of the above, and it was only through the author's process of semantic comparison that the real intention is clear. In our example, this requires editing of both of the existing models and a modification of the topics in the corresponding entailments. Choosing "Modify" results in the appearance, as in Figure 4, of a new pane which is used to modify the proposed entailment. Figure 5 shows the same for the previous entailment. These smaller panes are analogous to the original Write Watcher window and allow text editing, choosing of topics, as well as the ability to examine previous uses of topics in other entailments.
    The outcome, where the text models and the topics in the entailment have been changed, is one where there is no longer conflict within the entailments we have been dealing with. This is indicated in Figure 6 by the new menu choice "Local Resolution" in the top middle of the window, which when chosen accepts the two entailments in their modified form. However, the resolution is only local; this means that other difficulties may exist between the new entailments and previous ones. The procedure is thus recursive.
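    Two of the resolutions above, merging and the analogy split, have simple structural effects which can be sketched as set operations. The function names are assumptions for exposition only; in particular, the sketch shows only the effect on topic membership, not the editing of models or the instating of the joining analogy.

```python
# Resolution 2: relate all the topics of both entailments in a single one.
def merge_entailments(proposed, previous):
    return proposed | previous

# Resolution 3: split a shared topic into per-entailment variants (joined,
# in THOUGHTSTICKER, by an analogy entailment). After the split, the two
# entailments overlap only distributively, so the structure is coherent.
def split_by_analogy(entailments, shared_topic, variants):
    return [(e - {shared_topic}) | {v}
            for e, v in zip(entailments, variants)]
```

    Applied to the running example, splitting "Artificial Intelligence" into "Artificial Intelligence programming" and "Artificial Intelligence proponents" leaves only "knowledge" in common between the two entailments, which is the distributive (non-contradictory) case.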

    Contradiction checking by THOUGHTSTICKER does not in itself result in a definite judgement that resolution is mandatory. The judgement is entirely the user's, and the user may decide to perform a resolution or not, depending on the purpose of the resulting knowledge representation.

    V.4.3 To Resolve

    In art, contradiction and its dual, ambiguity, are often used for conscious effect. For psychological modelling, also, the existence of contradictory structure may be appropriate. The important idea is that THOUGHTSTICKER can represent what the author wishes, and the flexibility to provide for any belief is one of its strengths. For tutorial purposes (and for distributed planning and decision making), surely a self-consistent structure will be appropriate.

    V.5 Analogy

    Like coherence, analogy is a type of relation within CT. A complete discussion of analogy would require space far beyond what is practical here, as it is the foundation of coherent relationships between topics and the basis for condense/expand operations (where the evolution of analogies leads to the creation of independent organizations of structures and possibly to innovation). Rather than omit the topic, a synopsis is provided below, though it does not exhaust the implications of analogy for Lp.

    V.5.1 The Form of Analogy

    At a level of modeled structure, analogy consists of a group of topics, a similarity term and one or more difference terms. The similarity term indicates how the topics are similar, while the difference term indicates how they are distinguishable, i.e. distinct.

    At all places at the interface, THOUGHTSTICKER allows for the user's choice of analogy or coherence in instating a relation. Since current Lp software does not dynamically interrelate coherence to analogies, the contradiction checking is not modified by the existence of analogies; in future this should be the case. Likely forms of resolution could be determined by the software itself using analogical structures, and proposed to the user as candidates. In an automatic Lp processor, each proposal could be executed concurrently with the "richest" paths instated as new structures.

    V.5.2 The Relation of Analogy and Coherence

    Analogy is the most primitive form of relation between topics. Consider that an analogy (at least) relates the (say) two topics that produce a third. If, in addition to that production, one of the two producing topics and the produced topic can also produce the second producing topic, then a further and distinct analogy exists between those topics. The addition of the final production (the other producing topic plus the produced topic produces the first producing topic) yields a condition that is recognized as the requirement for coherence. Put another way, the existence of the necessary set of analogies is coherence.

    The rise of analogies is the necessary pre-condition to the existence of coherence.

    V.5.3 Analogy and Distributivity

    The distributive case refers to a single topic overlapping in two or more coherences. This is a very common event as a large structure of relations could not easily exist without such a means to overlap relations. This status of distributivity is important for a variety of reasons within Lp.

    Consider the topic on which there is an overlap. From one perspective, the topic is an atomic unit that applies to the (let us say in this example) two coherences that it is present in. Put another way, the two relations overlap on the similarity of the topic in both relations. However, since the topic is produced in different ways in the two relations, some differences must exist that could be extracted out of the two means of producing the topic in the individual cases of the two relations.

    Clearly the distributive case contains within it an analogical relation centered on the topic in common to the relations.

    Some comments are made in Section VI.4 concerning the role of analogy in the microscopic simulation of Lp.

    V.5.6 Adding Coherent Relations: Saturation

    The philosophy of THOUGHTSTICKER is to provide the user with feedback which is provocative to the authoring process. Contradiction checking is one such feedback process, which indicates the way in which new statements relate to previous ones. There is a converse situation, in which the system could suggest entailments which are not yet present in the knowledge representation, but which nonetheless would be permissible according to the rules of checking contradiction.

    Saturation is an operation whereby THOUGHTSTICKER suggests new entailments to be made. This is analogous to a conversational partner asking for more information about the relation of existing topics in the conversation, but in new combinations. The challenge is to propose new combinations for which the author is likely to want to, and be able to, provide models. The term "saturation" implies that the entailments among topics are being filled in or saturated, making a richly interconnected network of relationships.

    In a large domain, the number of combinations of topics is very large and most combinations would not be sensible to use as the basis for new models. Arbitrary combinations chosen by the system would be absolute nonsense nearly every time. The exceptions might be the seeds for innovation, as when very different ideas are juxtaposed, making a new and unforeseen entailment (which is the entire concept behind DeBono's "lateral thinking"). At issue is the efficiency of the entire process, and the likelihood of useful suggestions.

    THOUGHTSTICKER must contain additional mechanisms for "focusing" the saturation process to minimize absurd suggestions and to stimulate the author in an efficient manner. Experience has shown that a combination of these techniques results in an effective authoring process.

    The first means of focusing the suggestion process is to use contradiction checking to avoid new entailments that would conflict with existing ones. THOUGHTSTICKER produces a possible combination of topics and checks the possible entailment against existing ones. If a conflict exists, the suggestion is discarded and a new possibility is generated combinatorially.

    A second means to focus the saturation process is for the author to specify what range of topics to choose from in the composition of new entailments. The author indicates one or a few topics to start from, and requests THOUGHTSTICKER to gather all topics which touch upon those topics in existing entailments. The process may be repeated, reaching further out from the initial topic(s) as far as the author wishes (with the limiting case of the inclusion of every topic of the mesh in new suggestions). This is equivalent to asking the system to make suggestions within a certain area of the knowledge representation, for example, the part dealing with "Artificial Intelligence" and all topics which connect to it.

    A third means of focusing the saturation process is to require THOUGHTSTICKER to include certain topic(s) in any new proposal, or, conversely, to avoid using certain combinations of topics. The former is equivalent to specifying a theme around which new entailments are to revolve, for example, all new suggestions are to contain "Artificial Intelligence." The latter is equivalent to specifying that "Artificial Intelligence" and "windows" can be eliminated as a possible combination because it is incongruous, or simply because the author has nothing to say about them together.
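    The three means of focusing can be combined into a single generate-and-filter sketch. All function and parameter names below are illustrative assumptions; in particular, the filter for conflicts with existing entailments is reduced here to a simple membership test, whereas THOUGHTSTICKER would apply the full contradiction checking of Section V.4.

```python
from itertools import combinations

# Hedged sketch of saturation: generate combinations of known topics and
# keep only those that pass the focusing filters described above.
def saturate(topics, existing, size=3, must_include=(), forbidden=()):
    suggestions = []
    for combo in combinations(sorted(topics), size):
        c = set(combo)
        if c in existing:
            continue                        # already instated (stand-in for
                                            # full contradiction checking)
        if any(t not in c for t in must_include):
            continue                        # thematic focusing: required topics
        if any(set(f) <= c for f in forbidden):
            continue                        # incongruous combinations excluded
        suggestions.append(c)
    return suggestions
```

    The second means of focusing, restricting the candidate pool to topics reachable from a starting topic through existing entailments, corresponds here to the caller's choice of the `topics` argument.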

    Saturation derives its power from coherence and contradiction checking. It is conjectured within CT that the saturation process is constantly at work in the processes of intelligence, connecting and re-connecting concepts as they are generated or integrated from outside information. It is this process which is the foundation of agreement. Without the interrelation of pre-formed concepts with the influence of new concepts, knowledge would be trapped within its own capsules, and perforce could not evolve or even come to exist. The saturation operation of THOUGHTSTICKER mimics this process of mind in a crude fashion to provide a limited but provocative partner in the process of knowledge elicitation.

    V.5.7 Tutorial Aids

    Presuming that an author (or team of authors) has built up sufficient models, entailments, and topics to constitute subject matter worth learning, THOUGHTSTICKER provides a series of user transactions to make such learning efficient and adapted to the user. These transactions are normally described under CASTE (Course Assembly System and Tutorial Environment), detailed in Pangaro & Harney, 1983.

    V.5.8 Implications of THOUGHTSTICKER

    At one level, THOUGHTSTICKER is a system for absorbing the utterances of human users and forming structures which reflect the kinetic knowledge of the original user. It is important to stress, however, that this is possible only through the process of agreement.

    CT has much to say about the process of agreement and strictly defines it. Informally, it may be considered to be the matching of descriptions and procedures associated with a particular concept, across participants. THOUGHTSTICKER, as a software embodiment of CT, represents concepts as topics and their entailments. The entailments themselves have models attached, which may be text descriptions (as in the present version), or pictures, or sound.

    It is the users' responsibility to perform the matching process across topics. This is done by examining each topic's entailments and giving the entailments meaning via their models. In the authoring process, this is manifest by the author checking that the use of a topic name in a new entailment is consistent with previous uses. THOUGHTSTICKER allows for this, as for example in Figure 7, by displaying at the user's request all known names for a topic as well as all known entailments. A full tutorial may also be requested, placing the author in the role of student for the purposes of exploring what the knowledge representation already contains.

    The process of agreement is, at present, performed in the mind of the user, but is facilitated by the features of THOUGHTSTICKER. Consider that the ability to form agreement is the heart of human conversation and that THOUGHTSTICKER is one of the first systems for facilitating the process.

    V.5.9 Many Authors Conversing

    The examples thus far have implied a single author. Forming agreement within a single author's knowledge representation is clearly important. The issue becomes much more pressing when there are many authors, perhaps distributed across many individual systems that are geographically separated. In this situation there is no opportunity to share meanings outside the system itself, and much more responsibility is placed on the interface itself to facilitate agreement.

    The situation of the previous section concerns a match between two particular representations for topics: the words or phrases are shared between the two cases, or perhaps they differ only by a difference of singular/plural, or grammatical tense. For example, THOUGHTSTICKER detects the similarity between "coherence" and "coherent." Another case is "language" and "programming language", where there may be clear differences of meaning --- unless of course all uses of "language" in that subject matter mean programming language. This is a transparent example of how uses of terms that are personal to one user or to one subject area may be handled by THOUGHTSTICKER. At any point, the system offers the opportunity to explore how the existing topics are used in their various entailments by picking an option on a choice menu.
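    Detection of this kind of surface similarity requires no semantics, only crude string comparison. A sketch follows; the function name related_names and the particular thresholds are assumptions, chosen only to reproduce the flavour of the examples above ("coherence"/"coherent", "language"/"programming language").

```python
# Hedged sketch of surface-level name similarity, without any semantics.
def related_names(a, b):
    a, b = a.lower(), b.lower()
    if a in b or b in a:
        return True                  # e.g. "language" / "programming language"
    # Otherwise test for a long shared stem, e.g. "coheren-" in
    # "coherence" / "coherent". The thresholds are arbitrary illustrations.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i >= 5 and i >= max(len(a), len(b)) - 3
```

    As the text notes, such a test can only surface candidates; whether "language" and "programming language" name the same topic in a given subject matter remains a judgement for the users concerned.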

    A considerably more subtle side of this general problem of agreement over use of terms occurs when different words or phrases are used by different authors to represent the same topics. Here, THOUGHTSTICKER is not capable of evaluating whether such is the case, at least not without a natural language processor. Two extremes may be considered:
    1. Where the authors have entirely separate vocabularies, and no two topics are represented by words or phrases that are at all related. This is equivalent to the case of speaking different languages entirely, say, English and Japanese, a situation in which no conversation may occur. In such human situations, some commonality of need or context is maintained as the basis for exchange, and the role of additional modes, such as gesture, facial expression, etc., is paramount. No system or mechanism is capable of making connections across this gulf.
    2. There is some overlap of terminology, but it is by no means complete. THOUGHTSTICKER responds in this situation with the existing mechanism of contradiction checking, and displays the overlaps of related topic words and phrases. This allows the user to interpret the models of the entailments (the text explanations which were authored) and evaluate whether other authors' terms are the same, or at least connected by analogy, to his or her own. In this sense, the "contradiction checking" mechanism at the heart of THOUGHTSTICKER could be better called "agreement checking."
    Of course, it is the latter case which always occurs in reality, whether in discourse that is face-to-face or mediated by THOUGHTSTICKER. No two individuals have identical vocabulary and concepts, or they would be the same individual. The current THOUGHTSTICKER has some restriction in mode of expression, namely, restriction to the text which comprises the models and the topics. Nonetheless, mechanisms within THOUGHTSTICKER aid the user in reaching agreement with others' (as well as one's own) elicited knowledge structures.

    It is important to note that THOUGHTSTICKER could provide a much richer environment for agreement checking if it contained other types of models; for example, graphics and animation, or sound. The author would construct such models and THOUGHTSTICKER would attach them to the entailments. This would allow a greater range of interaction for users and conceivably achieve a confidence of agreement not possible through the single mode of text.

    V.5.10 Personalized Vocabularies

    Once these difficulties of agreement are handled for the case of different user vocabularies for the same or similar topics, the encouragement to maintain a common vocabulary may be relaxed. Users may diverge in opinion, but if they "agree to disagree" in the CT sense, at least they may converse.

    THOUGHTSTICKER allows users to maintain their own vocabularies. Any topic may have a series of names consisting of words or phrases, each of which is recognized to be associated with the particular topic. Recall that topics are represented by words and phrases; they are not the words or phrases themselves. Topics are the stable and agreed-upon (and therefore public) elements of concepts. Each user may assert a primary name to be used in the displays containing topics and entailments. Furthermore, THOUGHTSTICKER allows users to maintain a series of "contextures", each of which allows different names.

    For example, as noted in Figure 7, the contexture called "PL" calls a particular topic knowledge, while the contexture "Holist" calls the same topic knowables. This particular use of the capability may seem pedantic, but the general capability is important for keeping vocabularies distinct, whether within one individual (within the contexture) or across individuals (among different contextures).
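The naming scheme just described can be sketched as a small data model. This is an illustrative assumption only (the class, the field names, and the fallback rule are mine, not the Symbolics implementation), showing how a single public topic can carry distinct names per contexture, with a primary display name in each:

```python
# Hypothetical sketch of topics with per-contexture names; not the
# original THOUGHTSTICKER code.

class Topic:
    def __init__(self, uid):
        self.uid = uid            # stable, public identity of the topic
        self.names = {}           # contexture -> list of words/phrases

    def add_name(self, contexture, name, primary=False):
        names = self.names.setdefault(contexture, [])
        if primary:
            names.insert(0, name)  # the first name is the display name
        else:
            names.append(name)

    def display_name(self, contexture):
        # Fall back to any known name if this contexture has none of its own.
        names = self.names.get(contexture) or next(iter(self.names.values()))
        return names[0]

topic = Topic("topic-17")
topic.add_name("PL", "knowledge", primary=True)
topic.add_name("Holist", "knowables", primary=True)
```

Under this sketch, asking for the display name in the "PL" contexture yields "knowledge", while the "Holist" contexture yields "knowables", as in the Figure 7 example; the underlying topic remains one and the same.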

    VI. The Essence of Process: Micro Simulation of Lp

    VI.1 Knowledge Representation Display

    Against the backdrop of the detailed description of THOUGHTSTICKER above comes the central issue of this thesis. This section presents the genesis of the thesis as a display "problem", the solution of which immediately and inexorably led to its extension into a micro confirmation of the existing macro theory of conversations.

    VI.2 Displays in THOUGHTSTICKER

    The authoring process described in the previous chapter results in structures which reside inside of THOUGHTSTICKER. These are intricate networks of topics joined together by their entailments. It is possible and indeed useful for the author to represent these structures graphically.

    VI.2.1 Experimental Software Facility

    An experimental facility was constructed in software to allow for a wide range of experiments. All of the power of windowed and menu-driven user interfaces was brought to bear, resulting in a kind of laboratory in which many experiments could be performed and reproduced. The capabilities of this software are implied in Figures 8a through 8c, which contain the primary choice menus that were used to produce the results for the thesis.

    Figure 8a shows the "top-level" menu; note especially how additional experiments are performed immediately, using the same parameter choices, by the "Next Experiment" menu selection. The remainder of the window is covered ("tiled" in the modern parlance) with a series of snapshots of the dynamic display. This feature was used to produce all of the output for the various Figures, by performing a screen dump to the laser graphics printer.

    Figure 8b shows the result of choosing "Change which relations to display" from the previous Figure. This shows a set of available relations (whether from a test suite, or from an actual entailment mesh available from THOUGHTSTICKER); those in inverse video (black background) are those to be dynamically displayed. Variations may thus be tried in rapid succession.

    Figure 8c shows the result of choosing "Major Overhaul" from the top-level menu in Figure 8a. This menu allows detailed modification of all simulation details, such as the nature of the force laws, relative strength of the forces, some display enhancements, etc.

    The discussion below relies on reference to the successive figures as produced by the experimental software environment just described.

    VI.2.2 Discussion of the Programming

    The Symbolics environment is an exemplary one in which to develop software of an experimental nature, where the results and implications are not known beforehand. Issues such as efficiency of calculation or size of database did not require any consideration for this thesis. Advantage was taken of the "object-oriented" programming features of the environment, to improve the software development cycle and make for efficient modifications and extension of features. Effort was made to provide a clear display with smooth refresh to give an exceptionally good feel for the dynamics of the interaction of the topic elements.

    These display features were embedded into the Naive THOUGHTSTICKER interface for access by users, whether authors viewing the evolution of their structures, or learners seeing the structure of the subject matter during learning. A capability for hardcopy output, used in the creation of the Figures as direct screen printing to a laser graphics printer, was also incorporated.

    Certain features of the simulation required careful consideration in the course of construction. Primarily, of course, the interpretation of Lp dynamics required caution, to insure that CT was not being compromised or "fudged" to achieve some pre-ordained result. In fact, a number of schemes alternative to the one presented in detail above were tried, first to insure the robustness of the general approach by evaluating close alternatives, and second to confirm a proper mapping to Lp dynamics.

    As a simple example, the generation of repulsive forces across the entire structure eliminates any possibility for ambiguity, and corresponds to a post hoc and global knowledge of the integrity (or not) of the structure. More subtle were alternative interpretations which did not preserve analogy as the basis of coherence; for example, favoring some topics in the relation above others, or not providing a symmetric view of all topics within the coherence. These interpretations were not consonant with CT: they did not provide consistent results when applied over trials with various configurations, nor did they exhibit the properties of CT as predicted at the macro level.

    VI.2.3 Coherence Displayed

    Consider that topics in an entailment cohere, that is, they make sense together; they are in the same topological neighborhood. Concurrently, these topics (if they are in a stable entailment which does not contradict other entailments) are distinct; they are not blurred together or confused with one another. It is a great advantage to the user to display the relationships contained in the knowledge representation, as a means of understanding the existing structures as well as the implications of new ones.

    Figure 9 simultaneously displays two coherent entailments which are distributive on the topic "ARI." The models for the two entailments inside of THOUGHTSTICKER are:
    • "ARI is examining the use of CASTE for Training" and
    • "ARI is an acronym for Army Research Institute."

    VI.2.4 Animated Interpretations of Topic Relations

    THOUGHTSTICKER displays the structure of the knowledge representation as an animated sequence. Each topic is represented on the screen by its word or phrase, and lines connect topics contained in the same entailments. The topics are originally displayed at random positions, and thereafter they move smoothly around the screen. Figure 9a through Figure 9c show the result of such a dynamic interaction between the two entailments described above. The topics are animated according to the following rules:
    • Topics in the same entailment are attracted to each other, and hence move toward one another over time; but also
    • Topics in the same entailment are distinct, and so if they come in close proximity to other topics from the same entailment, they repel each other.
    These two force processes are shown diagrammatically in Figure 10.

    These two rules are parallels of the notions of coherence (attraction, same neighborhood) and distinction (repulsion, distinct entities). The addition of the dynamic element during simulation provides the third component of Lp, namely, process.

    Figure 11 shows the effect on a larger set of relations.

    VI.2.5 Pruning Displayed

    The interpretation of the Lp operation of Pruning is shown in Figures 12a through 12e, where the sequence gives some flavor of the dynamics. In addition to the above rules, a further rule is imposed, which places a force on the topic "CASTE", drawing it up above the others in the entailment. Again the topics are at first positioned randomly on the screen; the forces between the topics are computed and their positions changed accordingly. Again, the dynamic simulation gradually becomes stable. The resulting hierarchy displays in graphical terms the dependencies that were inherent in the original network, or heterarchy.

    VI.2.6 Contradiction Displayed

    Given just these rules, the question is asked whether these simple dynamics would display the behavior of, say, contradiction checking.

    Figure 13a shows an initial and random positioning of the topics of two entailments as modeled by:
    • "ARI is examining the use of CASTE for Training" and
    • "ARI uses PLATO for Training."
    There is a contradiction contained in the entailments using the topics ["ARI" "CASTE" "Training"] and ["ARI" "PLATO" "Training"]. Figures 13b and 13c show intermediate still pictures during the dynamic simulation. Finally, in Figure 13d, the resulting display shows that there is, in fact, no distinction between "CASTE" and "PLATO" as embodied in the entailments as they stand.

    Note that the software has not examined the structure "globally", as it were, in the way that the contradiction checking does. Relationships are computed only within each entailment. The software simulation has merely imposed the simple rules described above, acting simultaneously on each topic in an analog to the meaning of the relationships as specified in Lp. The result is consistent with CT in that there is not enough distinction within the present entailments to maintain a distinction between the two topics, and so they occupy the same position on the screen. The addition of further distinction would eliminate the contradictory situation; for example, ["ARI" "CASTE" "Pask" "Training"] and ["ARI" "PLATO" "CDC" "Training"].

    This example is shown in Figure 14a through 14c.

    VI.3 Conflict Terminology: Ambiguity and Contradiction

    The two terms, ambiguity and contradiction, refer to the same cognitive situation; the term used is an observer's label.

    "Ambiguity" emphasizes that there is a lack of available distinction between two (or more) topics; hence it is ambiguous which topic is indicated, or indeed whether there are two distinguishable topics instead of one. In the display of Figure 13d, the topics become ambiguous because they are indistinguishable from each other.

    "Contradiction" emphasizes that when a production is begun with particular topics, two (or more) entailments are activated and these processes conflict. It would be contradictory to produce two distinct topics from the same production of topics. Figure 13 can be interpreted as displaying the production from the same topics resulting in a conflicting or unknown result.

    Since it is the observer who names the cognitive event as ambiguity or contradiction, it is an error to be concerned with choosing between the two when conflict is detected by the Rule of Genoa, as specified by Lp and embodied in, for example, THOUGHTSTICKER.

    VI.4 The Activation of Analogy versus Coherence

    As noted in Section V.5.3, analogy is a more fundamental form of Lp relation in that it is a pre-condition to the existence of coherence.

    It is a basic issue to decide how to simulate an Lp structure. Clearly, since analogy is more basic, it should be the basis for process activation. This is the case in the micro simulation presented, although it is clearer to describe the software in terms of coherences, and hence the later sections take this perspective. In actuality, the simulation behaves like processes of analogy because it relates a given topic in the coherence to all of its neighbors at once, to compute its new relationship to them. This is an analogical relationship. It then proceeds to each of the other topics in the coherence in turn, representing in the end the complete set of analogies that must exist to form the stable structure and inter-relationship that is a coherence. It is because all of the necessary analogies exist that the complete coherence (a) produces relatively stable positions for the topics in the simulation and (b) shows the conflict points within those structures that contain them. These two points will be brought out in detail below.

    VI.5 "Forces" Model

    VI.5.1 Movement toward Micro Modelling

    The concept for computing Lp structures in the manner described above arose in two ways.

    First, it arose from a desire to capture and compute the essence of Lp as a kinetics (Pask 1980d), restoring its status as a process model. This is a crucial distinction, of the kind which gives meaning to "simulate" versus "reify", and one which separates CT from AI. To continue research on Lp based on the static representations of THOUGHTSTICKER (which are adequate and practical for applications such as knowledge representation in training) would not lead to the substantial advantages that a kinetic model would; for example, there would be no potential for innovations arising within the computed structures themselves. It therefore seemed essential to me that this avenue be pursued.

    Second, the process model for computing Lp structures was attractive for its parallels with the software models of "actor" semantics (Hewitt 1972; Hewitt & Baker 1977) as well as physics (Deutsch 1985). The "actor" models became popular with the increased research activity in parallel computing, especially once the limitations of single-processor, "von Neumann" architectures became more apparent to workers in the field. These models appealed to me because of my interest in simulation-based computation, whether for graphics and animation or for the extension of conventional models of computation. Most recently, the development of quantum-mechanical models (Deutsch 1985) has emphasized the need for alternative models of computation.

    Here is provided an interpretation of stable entities of Lp as individual "actors" (in the sense that they are individual and separable) influencing each other according to the relational organization between them. Thus there are "forces" acting between the entities, or topics, which influence their "motions" around each other. (Alternatively one may take the Einsteinian view that the "shape of space" is determined by the relations.)

    Another characterization of the search space represented by the forces model is that of a minimum energy state which is sought by the interaction of the elements of the organization. The minimum energy state represents a configuration of the relations in the organization which is maximally stable, in that minimum energy is required to maintain it. Perturbations to the energy of the system in that state, without changes in organization, result in a convergent process back to the minimum energy state.

    These acting forces determine the actors' kinetics, namely, their behaviors relative to each other as determined by the relations among them. As these actors represent topics and the organization represents cognitive relations, their resulting behavior is interpretable as a cognitive "result" as determined by Lp. The execution of the processes within this interpretation results in confirmation of the macro prediction within CT of such phenomena as pruning (see Section VI.5.4), conflict, conflict detection and resolution.

    In the translation of the forces model into software there must come quantification of precisely how the entities are to interact, at what relative rates, etc. Unlike Newtonian mechanics where experiments may be performed to show the rates of gravitational acceleration, or in high-energy physics where the relative mass of tiny particles can be derived, Lp does not thus far provide a quantification of the forces involved. ("Arc-distance" is a measure of conceptual distance within a structure, and is reflected in the display results of the simulation; however this is a different quantity from what is being discussed here.) It is not inconceivable that experiments could be done to derive some of these (see Section VII.3.3 for speculation along these lines). A series of experimental runs were performed to explore a range of possible interpretations of "forces", and these are given below.

    VI.5.2 Basic Force Calculations

    The equations for these calculations are contained in Appendix A.

    The equations used require position information in 2 dimensions, as usual called x and y. These positions are updated repeatedly, as fast as the simulation can run. A "time slice" parameter determines the interval of time considered to have passed between the previous iteration and the current one, and this "delta-time" determines the overall rate of the simulation. The actual elapsed time between iterations need not be tracked, because the simulation has no need to map onto clock time or any other "real time" considerations.

    It is interesting to consider that the value for delta-time is the basic "thermodynamics" of the system, a background energy relative to which all interaction takes place. Lower values require longer to come to stable configurations; higher values come to stability sooner but only after passing through more-highly energetic, and hence less stable, states.

    The x and y positions are updated from velocity values, also computed in x and y. These in turn are modified each iteration by acceleration values for x and y. It is the acceleration values that are actually modified by the interactions of the entities in the simulation.
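The update just described can be sketched as a semi-implicit Euler step; the function and field names here are illustrative assumptions, not the thesis software:

```python
# Sketch of one iteration: accelerations (set by the force computations)
# modify velocities, which modify positions, all scaled by delta-time.

def step(topics, dt):
    """topics: list of dicts with keys x, y, vx, vy, ax, ay."""
    for t in topics:
        t["vx"] += t["ax"] * dt     # acceleration updates velocity...
        t["vy"] += t["ay"] * dt
        t["x"] += t["vx"] * dt      # ...and velocity updates position
        t["y"] += t["vy"] * dt
```

A larger delta-time injects more motion per iteration, which is the "background thermodynamics" of the system referred to above.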

    There are two forces at work in the simulation. The attractive force operates on those entities (topics) that exist in the same coherence, and its effect is proportional to the distance between them: little effect when topics are already close, increasing effect as they separate. Each topic determines its distance to each other topic in the same coherence, and is accelerated toward each such topic in proportion to that distance.

    The repulsive force also varies with distance, with closer distances creating greater repulsion; again the result is applied, for each relevant topic, to the acceleration.

    In both cases the calculations are performed in both x and y dimensions. After all such interactions are computed, the velocity and then the position of the topic are updated. The new position is used to plot the topic on the screen.

    The precise effect that distance has is determined by an exponential parameter. For a value of 2, the simulation is a standard Newtonian inverse square law. For a value of 1, the simulation is a linear law. Both these values and some intermediates were used in a series of trials, as described in the next section.
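The two force laws can be sketched as follows, under the assumption (common in force-directed layouts) that attraction grows with separation while repulsion falls off as 1/d**p, with p the exponential parameter. The exact functional forms and names are illustrative only; the actual equations are those of Appendix A:

```python
import math

# Illustrative force laws; an assumed sketch, not the Appendix A equations.

def attract(a, b, strength=1.0):
    """Acceleration on a toward b, growing with their separation."""
    dx, dy = b["x"] - a["x"], b["y"] - a["y"]
    return strength * dx, strength * dy

def repel(a, b, p=2.0, strength=1.0):
    """Acceleration on a away from b; p = 2.0 is an inverse-square law."""
    dx, dy = b["x"] - a["x"], b["y"] - a["y"]
    d = math.hypot(dx, dy) or 1e-9       # guard against exact overlap
    mag = strength / d ** p
    return -mag * dx / d, -mag * dy / d
```

Setting p to 1.0 in this sketch corresponds to the linear law, and intermediate values to the intermediate trials described in the next section.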

    Other parameters control further relative interactions, such as the relative strengths of the 2 major forces. After some experimentation it was seen that these could be kept equal, and hence their absolute value is irrelevant, since they are normalized throughout the equations.

    A parameter was used to insure that long topic names would still appear left-to-right on the display and not interfere with other topic names, but this was used only for purposes of display clarity; all results were obtained without this additional calculation being performed.

    It was found that a generalized "center-of-mass" offset was useful in insuring that the entire structure did not drift off screen. This computes where the average of all topics on the complete display would be placed. The offset from this center of mass to the center of the screen is derived. Then, all of the objects are moved by that offset before display. Again, the primary results were computed without this adjustment, which was seen to be unnecessary in most cases anyway and was added later as a cosmetic enhancement when the micro simulation was added to THOUGHTSTICKER to display the knowledge structure as a user aid.
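The centring adjustment can be sketched directly; the names here are illustrative:

```python
# Cosmetic recentring: shift all topics so their average position
# (the "center of mass") sits at the center of the screen.

def recenter(topics, cx, cy):
    n = len(topics)
    mx = sum(t["x"] for t in topics) / n
    my = sum(t["y"] for t in topics) / n
    for t in topics:
        t["x"] += cx - mx
        t["y"] += cy - my
```

Because every topic is shifted by the same offset, all inter-topic distances, and hence the simulation results, are unaffected; only the placement on screen changes.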

    VI.5.3 Linear and Squares Result

    Coherent structures of varying complexity are easily displayed. Cases of contradiction were demonstrated for a variety of values for the exponential parameter, with a range of values between 1.0 and 2.0, representing the linear and square law result respectively.

    It was seen that the end result was not significantly different for any value in this range; time to settle and slight overall variation in final distances were the only tangible differences. For the value of 1.0, the structure might spread outside the scope of the display screen; this could be prevented by adjusting the relative strengths of the 2 forces. However, the configurations were consistent with those for values of the exponential greater than 1.0.

    For values above 1.0 up to 2.0, the end configurations were consistent, with the exception of the absolute distances which resulted once the structure stabilized. This of course reflects the variation in the balance of the forces. Beyond this difference, which does not affect the final result, only the time to stabilize and the range of motion of the topics during the stabilization time varied. As might be expected, the higher values for the exponential parameter led to higher energetics and longer stabilization times.

    The following table presents a series of trials with significant cases of contradiction as detected by the Rule of Genoa. The results from the micro simulation are given with explanation.

    Adicity refers to the number of topics in a coherence; hence 3-adicity indicates 3 topics, etc. The equal sign, "=", does not indicate equivalence but rather the lack of distinction between the topics related by the sign; for example, q = r indicates lack of distinction between the topics q and r. The phrase "close to" means that although the topics do not fully overlap, they are quite close to each other, and closer than any other pair in the structure.

    Figure 15 in its various roman-numbered sub-figures contains each case in order.

    Table 1: Contradiction Cases Results
    Case as determined in macro CT theory    Results computed from micro simulation   
    from Rule of Genoa                       of Lp:                                   
    I. Full Genoa (1 non-overlap)            Full overlap of ambiguous topics         
                                             detected in all trials:                  
    3-adicity in 2 coherences:               d = e                                    
    (t p d) and (t p e)                                                               
    II. Full Genoa (1 non-overlap)           Full overlap of ambiguous topics         
                                             detected in all trials:                  
    4-adicity in 2 coherences:               d = e                                    
    (t p q d) and (t p q e)                                                           
    III. Full Genoa (1 non-overlap)          Full overlap of ambiguous topics         
                                             detected in all trials:                  
    5-adicity in 2 coherences:               d = e                                    
    (t p q r d) and (t p q r e)                                                       
    IV. Subset                               No overlap                               
    4 adicity and 3 adicity:                                                          
    (t p r q) and (t p r)                                                             
    V. Partial Genoa                         Partial overlap in 2 results:            
    4-adicity and 3-adicity:                 r close to d (shown) or                  
    (t p q d) and (t p r)                    r close to q or                          
                                             no overlap                               
    VI. Partial Genoa                        Partial overlap in 6 results:            
    5-adicity and 4-adicity:                 q close to f and d close to r (shown)    
    (t p r e f) and (t p q d)                d close to f and q close to r or         
                                             d close to f and q close to e or         
                                             d close to e and q close to r or         
                                             d close to e and q close to f or         
                                             d close to r and q close to e            
    VII. Partial Genoa                       Full Overlap in 2 results:               
    4-adicity in 2 coherences:               d = e and q = r (shown) or               
    (t p q d) and (t p r e)                  d = r and q = e                          
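The macro-level cases in the left column of Table 1 amount to a set comparison between the two coherences. A minimal sketch of that classification follows (the function is illustrative, not the THOUGHTSTICKER contradiction checker):

```python
# Classify two coherences by their overlap, following Table 1's labels:
# "Full Genoa" = exactly one unshared topic on each side; "Subset" = one
# coherence contained in the other; otherwise "Partial Genoa".

def classify(c1, c2):
    c1, c2 = set(c1), set(c2)
    if not (c1 & c2):
        return "No overlap of topics"
    only1, only2 = c1 - c2, c2 - c1
    if not only1 or not only2:
        return "Subset"
    if len(only1) == 1 and len(only2) == 1:
        return "Full Genoa"
    return "Partial Genoa"

classify(("t", "p", "d"), ("t", "p", "e"))    # Case I
```

In this sketch, Cases I through III classify as Full Genoa, Case IV as Subset, and Cases V through VII as Partial Genoa, matching the macro determination in the table.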

    VI.5.4 Prune Case

    The Lp operation of pruning has been achieved by the addition of a further force representing the hierarchical relationship between a head node (i.e. the focus of the pruning operation) and the remainder of the structure. This force acts as a further acceleration on the head node(s) only, in the vertical direction relative to the orientation of the display. The head node(s) drift upwards, attracting the topics to which they are related by coherence, pulling up others connected to those, and so forth. Because of the "center-of-mass" correction described above, the result did not drift up but rather arranged itself relative to the vertical axis.
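The additional pruning force can be sketched in the same style; the sign convention (screen y increasing downward) and all names are assumptions for illustration:

```python
# Apply a constant vertical acceleration to the head node(s) only, so
# they drift upward and draw their coherent neighbors after them.

def apply_prune_force(topics, heads, lift=0.5):
    for t in topics:
        if t["name"] in heads:
            t["ay"] -= lift     # upward, with screen y increasing downward
```

The coherence and distinction forces are left untouched; the extra lift on the head node(s) alone is what turns the settled heterarchy into a vertically arranged pruning.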

    An alternative interpretation of the Pruning operation might have been the sequential "activation" of each topic in sweeping arc distances down the structure. This could have been simulated in display by changing the representation of each topic, for example by moving from bold to non-bold characters, or tracking down the line connections between topics. However, each of these was seen to be computationally expensive and none would serve to clarify the display for the user. The interpretation of Pruning as an additional force is valid and consistent with the interpretation of the Lp relations of coherence and distinction as forces of attraction and repulsion.

    The result is a display of pruning much like those constructed by hand from an entailment mesh. Of course topics may overlay each other if many coherences are processed at once. In this case an additional (but artificial) calculation may be made, forcing all nodes to be distinct. The result is a pleasing display for the user. One might attempt to justify the use of such an additional calculation by asserting that in a coherent mesh all topics are in fact distinct from one another; however this would compromise the very essence of the microscopic simulation in which such a result comes from individual computations distributed locally throughout the structure. As emphasized earlier in Section IV.5, such a statement as to the distinction across the entire mesh is a global and macroscopic observation that can be made only from outside the structure.

    An additional use of the pruning calculation is for the purpose of asserting a focus of attention for the user during the computation of coherence and distinction. For example, choosing the 2 topics around which an ambiguity exists in a case of Genoa conflict causes these topics to rise higher in the display than their neighbors, thereby giving prominence to the important parts of the situation at hand. Of course, knowing which 2 topics have this condition is a similar type of "global" knowledge. Note that choosing any 2 topics in this particular case (and any set of topics in any other case) does not alter the detection of ambiguity by the microscopic computation; rather, it affords an alternative focus of attention in the figure. All trials presented below were confirmed both with a head-node pruning force applied and without any such additional force.

    VI.6 Discussion of Results

    Figure 15 shows the detailed results of each of the following cases, numbered accordingly; for example, Case I, "Basic Contradiction Detection: Full Genoa" is Figure 15 I.

    VI.6.1 Basic Contradiction Detection: Full Genoa

    Full Genoa is the case where only 1 topic in each coherent relation does not overlap with the other relation. It is the simplest case of contradiction.

    As shown in Cases I, II and III, the micro simulation succeeded in displaying contradiction in all cases of equal adicity of relation; results are given above up to adicity 5 but consideration of higher adicity and trials up to adicity 7 confirm this. This is as expected when considering the equality of forces in each relation and the symmetry of the computation across the relations.

    VI.6.2 Subset

    The subset in Case IV might be surprising in that it showed no discernible overlap or even close proximity. However the result is consistent, and confirms one class of result predicted in Lp for this case (Pask 1978). Although the overlapping topics are present in both relations and indicated by the same name in the diagram, they cannot in fact be the same topic, because they contribute to relations of different adicities; they must therefore be in part different topics (the issue has been discussed most extensively in Clark 1980). Although the issue is not fully resolved so far as the theory is concerned, this micro simulation result would support the point of view that topics take some of their characteristics from the relation they are present in (in this case, the relation's adicity).

    VI.6.3 Ambiguous contradiction: Partial Genoa

    Partial Genoa exists in the macro theory whenever the extent of the ambiguity cannot be completely characterized; this has been called "ambiguously ambiguous" to distinguish from Full Genoa which is "unambiguously ambiguous" (Gregory 1982).

    Case V correctly showed the closeness of the topics r and q, or r and d, in the structure. However there was a third result in which no overlap occurred. This showed a limitation of the projection scheme used to display the n-dimensional structure in the 2 dimensions of the display. Depending on initial conditions, the topics could settle into states in which the overlap did not occur, the forces keeping the ambiguous topics widely separated. This is due to a limitation of the simulation model, in which the mapping to 2 dimensions occurs in the computation (in contrast to a computation in n dimensions that is then projected under user control; see Section VII.3.1). Topics, as they settle into position, interfere with others "getting around" their neighbors to the true minimum configuration.

    The probability of this occurring was lowered by increasing the delta-time parameter, which represents the absolute thermodynamic energy of the system. This increased the energetics of the individual topics and encouraged motion away from states of local minima.
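The micro-simulation step discussed here can be sketched as follows. This is an illustrative reconstruction in Python, not the thesis's PET BASIC or ZetaLISP code; the function name, the spring-like force rule, and the data layout are assumptions. The `dt` argument plays the role of the delta-time parameter above, and `exponent` ranges the force law from linear (1.0) to square-law (2.0) as in Section VII.3.2.

```python
import math

def simulation_step(positions, links, dt, exponent=1.0):
    """One micro-simulation step: each topic moves under pairwise
    "forces" exerted by the topics it shares a coherence with.

    positions: dict topic -> [x, y] (mutated in place)
    links: list of (topic_a, topic_b, rest_length) tuples
    dt: delta-time; larger values add energy, helping topics
        escape local minima
    exponent: 1.0 for a linear force law, 2.0 for square-law
    """
    forces = {t: [0.0, 0.0] for t in positions}
    for a, b, rest in links:
        ax, ay = positions[a]
        bx, by = positions[b]
        dx, dy = bx - ax, by - ay
        dist = math.hypot(dx, dy) or 1e-9
        # Spring-like: attract when stretched beyond rest length,
        # repel when compressed; magnitude follows |stretch|^exponent.
        stretch = dist - rest
        magnitude = math.copysign(abs(stretch) ** exponent, stretch)
        fx, fy = magnitude * dx / dist, magnitude * dy / dist
        forces[a][0] += fx; forces[a][1] += fy
        forces[b][0] -= fx; forces[b][1] -= fy
    for t in positions:
        positions[t][0] += dt * forces[t][0]
        positions[t][1] += dt * forces[t][1]
    return positions
```

Repeated calls settle linked topics toward their rest separation; raising `dt`, as described above, makes each displacement larger and so lowers the chance of the system freezing in a non-minimal configuration.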

    Although this condition was present in some configurations, the basis of the entire simulation approach is not compromised: within the dimensionality that is completely handled by the present calculations, all results were consistent with CT. Only cases beyond the dimensionality of the present software produced, in some trials, a partial result. The full, n-dimensional computation would avoid this problem and allow for much more complex computation than could be shown by the present forces model.

    Case VI did not contain such non-minimal cases and provided consistent results.

    Case VII, where the ambiguity is increasing, provided a consistent result that showed the mapping of which topics were ambiguous with which.

    VII. Conclusions & Summary

    VII.1 Lp Software at the Macro Level

    THOUGHTSTICKER is a software manifestation of the calculus Lp, itself the dual of Conversation Theory. It has been argued in this thesis that THOUGHTSTICKER exists at a "macro" level, relative to any true embodiment of Lp involving the process component in addition to the coherence and distinction components that THOUGHTSTICKER already possesses. Even at this macro level, it is a substantial enhancement to previously existing software systems. This derives from two classes of enhancement: those taken from Conversation Theory, and those invented or developed in the course of writing Conversation Theory into THOUGHTSTICKER code.

    The strengths of THOUGHTSTICKER derived from CT come from the cognitive basis of CT. Modelling knowing [sic] is substantially improved over other AI techniques because of this. The resulting software provides advantages over "thought processors" and computer-aided instruction systems because of the stimulation provided to the author during the knowledgebase creation process, including indication of existing topics, detection of potential conflict, and saturation. The resulting knowledgebase has properties that make it particularly suitable for exploration in a manner consistent with a variety of conceptual styles. These advantages of THOUGHTSTICKER derive from its origins.

    THOUGHTSTICKER strengths derived from its implementation are many and various.

    Some have to do with the advantages of modern menu-driven MMI: a mouse pointing device; multiple windows and panes on the same screen; bit-mapped, high-resolution displays; and so forth. The advantages of object-oriented programming, for rapid prototyping and swift experimental change of features, have also aided the implementation substantially. Conceptual features, however, make up the bulk of the advantages of THOUGHTSTICKER. Unique to this implementation, these are:
    • A full set of tools for the detailed manipulation of the Lp structures.
    • An evolving set of high-level, semi-automatic procedures for converting conventional courses to THOUGHTSTICKER structures, examining the implications of various tutorial strategies on any course, searching for lack of uniformities in the structures, etc.
    • Complex heuristics for providing a many-dimensional classification and delivery of training based on conceptual styles.
    • A true personalization of the user interaction based on a complete history of interaction between the user and the system.
    • Facilities to manage the problems of multiple authors.
    These advantages of THOUGHTSTICKER derive from the ingenuity of the implementation as constructed by Jeffrey Nicoll and myself. A more detailed breakdown of the responsibilities, for the purpose of documentation and with the understanding that all such histories can only be coarse in nature, is contained in Appendix C.

    VII.2 Macro theory and Micro confirmation

    A microscopic simulation of the forces within a relational structure representing the relationship of entities within Lp is seen to exhibit certain behaviors. These behaviors were already under consideration in the macro theory of CT. There, ambiguity and conflict were detected by the calculation of certain structural relationships, a calculation performed from "outside" the system by a process (perspective, individual, observer) that had access to the entire structure. The microscopic simulation, which does not perform globally and has only local information, results in configurations interpretable as ambiguity and/or conflict in the same cases as indicated in the macroscopic theory.

    Therefore, the macro behaviors which had previously had the status of observable events in the cognitive domain as described within CT now can be computed directly from the microscopic processes dictated by the protologic Lp. Although Lp was drawn from experience of CT, before these experiments it had no independent and empirical confirmation. In addition, evidence is provided for resolving the open question within CT on the interpretation of "sameness" of topics as dependent on the adicity of their relations.

    Conflict is the name given to a condition observed from outside the system. It arises perforce in situations of contradiction/ambiguity where the existing structure is unstable and where some changes to the structure result in stability while others do not.

    The changes at such a locus within the structure can be made by a macro process from outside the system or by a micro process from inside. Theories, of which CT is one, can provide the means for the macro calculation, and there is little magic in this: all of the information is known globally. Such theories are useful for post hoc explanations of how a system behaved; they are, however, useless in creating such change within a system.

    Systems contain distinctions in two senses. Primarily, they contain the distinctions attributed by observers (Pask 1963). However, systems that innovate must be capable of creating distinctions from within; otherwise nothing new can arise. Therefore any cognitive theory must provide for a mechanism whereby distinctions arise within the system rather than outside as imposed by an omnipotent observer. In other words, the system must, within its own structure, be capable of computing sufficient similarities and distinctions to create a separable observer; this is tantamount to saying that it is capable of computing the new distinction.

    It is therefore crucial for any theoretical framework to show how distinctions can arise, as well as to explain them once they have. Conversation Theory and its dual, Lp, form one such framework.

    VII.3 Extensions to the Work

    VII.3.1 Dimensional Control

    For cases involving more than a few coherences, the display of the result as a projection onto 2 dimensions can be cluttered and the bifurcation point obscured. Because the dimension of the structure is greater than 2, the problem cannot be avoided without the ability to project onto more than 2 dimensions; this is impractical with present technology, and even if soluble for low dimensions, it is certainly not practical to display for higher dimensions. An alternative would be to perform the calculation in dimensions relative to each structural relation (rather than simply in x and y) and then to control which dimensions are projected onto the 2 dimensions of the display. This control could be given to the user, which may be useful in scanning for interesting structures and points of potential future bifurcation. It would also be possible, and desirable, to provide automatic software control, determining the projection dynamically as a result of the condition of the simulation. The condition(s) of interest would be in part controlled by the user. In some conditions, the user's interest will be in maximizing the distance between certain topics, to determine their entailment. In others, the intentional overlay of relations would allow for the extraction of similarities and differences as represented by the topics entailed in the relations. Such a "driving through knowledge" would be a powerful tool for the user, and would later point to the means for automating certain operations (such as saturation) on the user's behalf.
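The proposed scheme, in which the calculation is kept in n dimensions and only the display mapping is 2-dimensional, can be sketched as below. The function name and the choice of a simple rotated-plane projection are illustrative assumptions, not drawn from the thesis.

```python
import math

def project(points, dims=(0, 1), angle=0.0):
    """Map n-dimensional topic positions onto the 2 dimensions of
    the display.

    points: dict mapping topic name -> tuple of n coordinates
    dims: which pair of structural dimensions becomes (x, y),
          chosen by the user or by automatic software control
    angle: rotation within that plane, so the view can be steered
           continuously
    """
    i, j = dims
    c, s = math.cos(angle), math.sin(angle)
    return {t: (c * p[i] - s * p[j], s * p[i] + c * p[j])
            for t, p in points.items()}
```

Driving `dims` and `angle` continuously from user input would give the "driving through knowledge" view described above, without the simulation itself ever being confined to 2 dimensions.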

    VII.3.2 Cognitive Force Values

    It is conceivable that the relative forces, simulated above for values from linear to square-law, could be more precisely quantified, with specific values determined experimentally. The experimental results would need to be correlated across a number of individual trials and subjects, but might provide a higher degree of veracity to the simulation. For example, the role of the number of topics in a relation (the adicity) in the rapidity of detection of conflict might allow the derivation of the exponent parameter. The relative balance of the adicity in the relations involved in conflict might have the effect of speeding or slowing the process as well.

    The influence of analogies and generalizations that provide additional connections, and hence influence on the structure, might also be involved in quantifying the parameters. Such quantification would probably be necessary in the further computation of condense/expand by micro simulation.

    VII.3.3 Pruning and Resolution

    The dimensional extensions discussed above would allow greater information to be drawn from pruning cases. For example, suppose that multiple head nodes were chosen and accelerated in opposing directions in a dimension additional to those already allocated to the existing relations. Eventually in the computation, certain topics would be pulled in that dimension to the point where the strain could be detected by calculation. This could be indicated graphically to the user, who could choose to ensure that the topic remains a single entity, or could allow for bifurcation. If bifurcation is allowed, the implications would then spread through the structure, and further places for possible bifurcation would be indicated by the same computation. The result would be a highly efficient and visually exciting means for extending the structure through conflict detection and resolution.

    VII.3.4 New Hardware

    The rise of new hardware architectures that allow for massive parallel computation provides the means for exploring the implications of the microscopic computational model offered in this thesis. It would become practical to perform THOUGHTSTICKER operations on massive entailment structures by microscopic simulation rather than by the current, compromised macro calculations. This would allow for continuous management of the knowledge, especially as its creation and evolution become distributed throughout large and geographically disparate electronic networks.

    By far the most exciting prospect, however, is the relaxation of the restriction from parallel computation to concurrent computation. This is becoming more feasible in the newest hardware architectures (Hillis 1985). The potential now is to create a true Lp engine, capable of evolution and innovation within itself. The micro-simulation proposed here, extended to include condense/expand operations including generalization, would be the basis for such an engine.

    Appendix A. Equations

    [To be added to this Web version, available sooner on request.]

    Appendix B. Software Program Listings

    The software constructed for the micro, dynamic Lp simulation experiments of this dissertation was begun on a Commodore "PET" microcomputer, written in "PET" BASIC, in 1981 and 1982. Simulations were performed dynamically and displayed as time slices in a graphical arrangement on the 52-column display. Hardcopy was available, along with a variety of timing and running modes. Linear and square-law calculations were prototyped and experiments run.

    The simulation was re-coded on an early vintage LISP Machine introduced by Symbolics, Inc. in 1982, the Model 3600, Serial #129 (i.e., number 29, as they start at 100). The source code program listing, following the basic calculations of the PET version, was written in ZetaLISP (technically, "Old-ZetaLISP") in 1983. It uses the graphics handling of that environment and the Flavors system of object-oriented programming. The code was written during the very early days of experience with the machine and hence is not an exemplary use of the modern LISP dialects.

    Appendix C. Short History of THOUGHTSTICKER

    C.1 The first "THOUGHTSTICKER" Software

    THOUGHTSTICKER as a software system first existed at the laboratory of System Research Ltd in Richmond-upon-Thames, Surrey, under the direction of Dr Gordon Pask, Research Director. The hardware configuration consisted of a variety of peripheral devices:
    1. Text retrieval, in the form of "pigeon hole" slots with printed paper (computer storage was at a premium and was entirely taken up with program);
    2. Hi-resolution line graphics displays, for the display of the evolving knowledge representation, especially the verification of valid constructions;
    3. Manual, paste-up board for user construction of complete knowledge "maps", with computer-controlled status indicators;
    4. A digital computer, the heart of the system, running the software of the THOUGHTSTICKER system.
    The software for this original system was written in a LISP variant. Both THOUGHTSTICKER and the LISP-like language it was coded in, called LISPN, were written by Robert Newton of the staff of System Research Ltd. LISPN was an interpreter written in assembly code for the Computer Automation LSI machine, one of the first mini-computers available in England. LISPN contained many of the primitives of the LISP 1.5 of McCarthy, but specifically not the lambda operator for function definitions. This would seem a fundamental contradiction to LISP itself, and it was, as insisted on by Pask. One justification was to avoid any observer's confusion over the intention of the implementation; it was also a snub of the lambda calculus itself, which Pask felt was an unnecessary trivialisation of the combinatory logic of Curry and Feys.

    This THOUGHTSTICKER was limited in complexity of data structure and in performance. All system code was swapped from 8 inch floppy disk, rather inefficiently. The system as a whole, with its peripherals, was a tour de force, considering the era of computing and the circumstances of funding. One of its major contributions was its originality: it was one of the first, and perhaps the first, truly domain-free software programs for knowledge representation. It could represent the knowledge and beliefs of any individual, in any domain, using the same software. It was also the first program to attempt to display, in a useful and psychologically meaningful way, the topological structure of the knowledge.

    C.2 The first micro-based implementation: MTHSTR

    The limitations of the original THOUGHTSTICKER were clear to anyone viewing it, and the primary one was perhaps its imprisonment in the completely non-standard, baroque and incompatible environment of the research laboratory of System Research (literally, and as often referred to by those who visited, the research basement). The obvious path for future exposure and ultimately extension was to re-implement the system in some compatible and reasonably available micro-processor environment. For the moment, the limitations of memory size implicit in micro computers would be set aside for the other benefits.

    Robin McKinnon-Wood, long-time collaborator of Pask and contributor to the Cambridge University Language Research Unit, conceived of and coded a database scheme for the BASIC language. The hardware was produced by Research Machines Limited (RML), and ran a Z80 processor under a custom operating system. This was in 1978-79, and CP/M was not yet widely available, at least on machines distributed in the UK. The BASIC was a fairly good implementation of what would become the Microsoft version of the language. The scheme was clever in that it used the string capabilities to emulate what would be simple list-processing primitives in LISP. In some 6 pages of transferable code, MTHSTR, as it was dubbed, contained the basic operations of Lp: specifically, instatement of coherences, Genoa contradiction checking (for certain cases only), Prune and Selective Prune (for most cases, but none of great depth), and other utilities such as listing topics, coherences and pruning structures. McKinnon-Wood, an algorithm man from way back, understood the meaning of Turing-computable and universal machines. The cleverness lay in performing complex, multi-dimensional parsing using a lexical scheme and one-dimensional string arrays. The means for representing the Lp structure of a coherence was arrived at in collaboration with Pangaro. This involved eliminating the redundant storage of each pruning of each coherence, and instead storing a single cluster from which the prunings could be unfolded. Some further experiments were done in APL, and a "structural" (i.e. macro-level) condense/expand was written in MacLISP by Pangaro, but none of these efforts produced a complete THOUGHTSTICKER.
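The string-based emulation of list primitives can be illustrated as follows. This is a reconstruction of the idea in Python, not MTHSTR's BASIC code; the separator character and the function names are assumptions.

```python
SEP = '|'  # assumed delimiter; MTHSTR's actual lexical scheme differed

def cons(item, lst):
    """Prepend an element to a string-encoded list, as BASIC would
    with string concatenation."""
    return item + SEP + lst if lst else item

def car(lst):
    """First element of a string-encoded list."""
    return lst.split(SEP, 1)[0]

def cdr(lst):
    """Remainder of the list ('' once one element is left)."""
    parts = lst.split(SEP, 1)
    return parts[1] if len(parts) > 1 else ''
```

So `cons('p', cons('q', cons('r', '')))` encodes the list as the single string `'p|q|r'`, over which `car` and `cdr` behave as their LISP counterparts would; everything is carried in one-dimensional string values, as in the BASIC scheme described above.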

    C.3 TSTIK, the PASCAL version

    The Applied Psychology Unit (APU) of AMTE Teddington (now named the Behavioural Science Division of ARE Teddington) became interested in Pask's work and funded an effort to place THOUGHTSTICKER onto the HP1000 systems which they had on-site at APU. Pangaro, initially with the aid of McKinnon-Wood and the MTHSTR code, provided written specifications for the database routines and Lp operations, including a crude notational approximation of condense/expand, as extracted from long arguments with Pask himself as to their nature and significance. These specifications were given to a programmer working for an established Ministry of Defence contractor. The system was later taken over by Peter Clark, a computer scientist and student of Pask, who endeavored to finish it. The entire project was fraught with difficulties, including but not limited to contractors unfamiliar with CT, obsolete operating systems, poor management of computing facilities, and collapsing research organizations. Despite this, TSTIK, as it was called, contained a menu of some twenty commands and some features only recently duplicated in significantly improved circumstances. Included in TSTIK were instatement of coherences, full Genoa contradiction checking, pruning and selective pruning, saturation, condensation/expansion (crude), bifurcation of topics, and merging of universes. It was used for a brief time in an exploratory way but was lost when the computing environment in which it ran was superseded.

    C.4 Apple CASTE, a version of THOUGHTSTICKER

    MTHSTR itself as a historical artifact first demonstrated THOUGHTSTICKER functions in a micro environment. More importantly, its database scheme became the basis for a system with many cosmetic and functional extensions, running in the Apple II microcomputer. For the record, it should be noted that the Apple configuration required to run this version of THOUGHTSTICKER is not a pure Apple system: it requires a Z80 processor card, 80-column text display card, the CP/M operating system, and, to run efficiently on reasonably-sized demonstration environments, 256K RAM-Disk extender cards.

    These extensions, and ultimately a complete re-write and major expansion of MTHSTR, were made by Pangaro under contract to APU. (Later, Scott Henderson of the staff of PANGARO Incorporated made some further changes.) The system was initially called CASTE, to emphasize its authoring and tutorial capabilities. Perforce it contains THOUGHTSTICKER elements, especially instatement of coherences, full Genoa contradiction checking, pruning and selective pruning, and saturation. The primary contribution of this version was its ease of use, the existence of its usable documentation, and the fact that it proved the viability of the approach in micro computers.

    One innovation of the system, conceived in collaboration by Dik Gregory of APU and Pangaro, was the use of sentences of English language text to contain concepts in the system. The PASCAL TSTIK also contained this feature at one point, but the full development was done in the BASIC version. These sentences would be provided by the author and be the primary information seen by the student while being tutored.

    A "Rules Tutor" for the naval simulation game called HUNKS was constructed by Pangaro in the Apple. This game involves opposing commanders of fleets of vessels and the ability to command the simulation to move the vessels, fire missiles, and call for information on graphics displays. Gregory used the system to construct both text and simple graphics frames to provide tutorials for learners of rules of the game. The configuration required a second Apple II, running the HUNKS code and driven by the CASTE software in the other machine. The remote machine was thus a graphics engine for the representation of tutorial material and game situations.

    A variety of test domains have been constructed for this implementation, including a word processing tutor (which contains basic concepts of word processing as well as command conventions and user "help features") and a database for corporate documentation. This Apple system written in Microsoft BASIC then became the basis of a multi-Apple version developed under the direction of Pask while under contract to the Army Research Institute (ARI), Virginia. Pask and his associates at the Centre for System Research and Knowledge Engineering, Concordia University, Montreal, Canada, added additional controllers for slide projections and multiple-screen tutorial modes, and made certain functions (notably prune) more robust.

    C.5 The Symbolics research implementation

    The purpose of developing the Apple CASTE version of THOUGHTSTICKER was to bridge the time between the demise of the PASCAL version and the planned version to be written in LISP on a hardware configuration of sufficient size and performance to thoroughly test the capabilities of THOUGHTSTICKER in a serious research implementation. APU placed a contract in early 1982 with PANGARO Incorporated to obtain the necessary hardware and expertise in Washington DC, while themselves beginning the procurement process for their own, identical hardware.

    The hardware chosen by Colin Sheppard of APU was the Symbolics 3600, a special-purpose mini-computer which is a hardware engine for running ZetaLISP, a superset of MacLISP, itself derived from McCarthy's LISP 1.5. The Symbolics system is unsurpassed in performance, features and software support. It arrived at PANGARO Incorporated in Washington DC in April of 1983.

    Over the course of 18 months, a THOUGHTSTICKER was developed using the object-oriented programming techniques of the Symbolics called Flavors. A set of user interface windows were developed, along with the underlying code, by Jeffrey Nicoll of PANGARO Incorporated. This THOUGHTSTICKER has the functions of Lp up to but not including "condense/expand" although the primitives required for it are present. The emphasis is on displaying the details of the evolving knowledge structure and maximizing the choices allowed to the user at any time. This was an explicit choice, in order to study the implications of all of the Lp operations under the pressure of practical use. It is necessary therefore to have some familiarity with CT and Lp in order to use these user interfaces to this version of THOUGHTSTICKER, called the "research implementation." Information basic to its use is provided in the THOUGHTSTICKER User Manual, available as an Annex to this dissertation.

    Unlike any other implementations, or any considerations given by the underlying theory, this version of THOUGHTSTICKER as conceived by Pangaro and Nicoll and written by Nicoll is centrally concerned with the differentiation between "P-Individuals", who might be different individual users or the same user under differing circumstances of goals, needs, or occasions. Conflict resolution therefore includes tools for declaring existing assertions accepted or not accepted relative to the current "contexture", alias "persona" (as a P-Individual is called in the system).

    C.6 Naive THOUGHTSTICKER

    A further interface was started in 1984, originally at the request of ARI, which would attempt to allow THOUGHTSTICKER to be used by individuals not at all familiar with CT or Lp operations. This version, dubbed "Naive THOUGHTSTICKER", was coded by Pangaro under the specific design constraints that choices not be offered as "pop up" menus, and that a linear set of questions be asked of the user in order to perform all functions. The result is an excellent tutorial environment. Authoring is less successful, in that the implications of choices made by the naive author cannot be seen until some familiarity is gained, and the variety of options available for conflict resolution makes it difficult to be patient with the linear resolution questions when the desired change is known beforehand. The experience gained in writing both the research implementation and the Naive interface is being used in a complete re-writing of the system as the basis of commercial versions of intelligent training software on the Symbolics.

    C.7 The Expertise Tutor

    THOUGHTSTICKER has also been integrated into a complete (prototype) system for training and job-aiding called the Expertise Tutor (the XT), as an interpretation by Pangaro of Colin Sheppard's original concept of Intelligent Support Systems. The XT provides not only the "descriptive" elements of how to play the HUNKS naval simulation game (such as the mission, number and types of vessels, playing rules, etc.) but also the "prescriptive" elements such as how to formulate a correct command, what strategies might be employed and what conditions of the game should be attended to at any given time.

    THOUGHTSTICKER provides the descriptive elements and the Naive interface is integrated into the XT user window. The "prescriptive" elements are provided by a system called the SEEKER, conceived and written by Scott Henderson of PANGARO Incorporated and based on a prescriptive interpretation of entailment meshes as implied originally by Pask and interpreted by Pangaro. The SEEKER was a means for referring to HUNKS objects and the status of the HUNKS game, while linking these references into sentence-like tactics that would produce commands for the HUNKS game. Thus a SEEKER-based computer player was constructed by Henderson, as an added bonus to the SEEKER concept. The original purpose was only to provide a means for presenting prescriptive knowledge to a learner and allowing the definition of new tactics on-the-fly in the course of running the simulation.

    Thus knowledge of the game is acquired by the user both descriptively and prescriptively, and the user could at any point choose which mode to operate in. The underlying databases were linked (although they could have been, and perhaps should have been, the same database; this was impractical given the development strategy of the project, but the effect for the user was the same, without noticeable loss of efficiency). With the addition of the HUNKS simulation itself, the complete user interface consisted of the explicit differentiation between the levels of interaction between learner and teacher as set out in Conversation Theory. As designed by Pangaro, this is the first embodiment of the complete conversational structure in software.

    C.8 Do-What-Do

    A few years before the XT was constructed, Pangaro had proposed an interface concept called "Do-What-Do" in which the distinction between descriptive and prescriptive intention on the part of the user was both made explicit and made available to the user in software. All user interface interaction was seen as either question ("What is this object in the interface...") or command ("Perform this action..." or "This is the object I mean..."). Thus an interface would be completely accessible to any user by means of these two basic commands, and all of the user modelling features of THOUGHTSTICKER (such as the history of shared vocabulary and common context) would continue to tune the choices made by the software.

    When the XT described in the previous section was built, the "Do-What-Do" concept was implemented in a basic form. The "Do" and "What" distinction was implemented by two distinct buttons on the mouse. Thus "Click-Left" meant "Do" and "Click-Right" meant "What." Results were favorable though the research programme did not allow for any explicit exploration or further development of this feature.

    C.9 Interactive Videodisc Interface

    In 1986, the US ARI contracted with PANGARO Incorporated to connect an interactive videodisc interface to THOUGHTSTICKER. Although the direct purpose was to allow for experiments to be performed comparing THOUGHTSTICKER with other forms of training (including platform, alias classroom, instruction, and conventional computer-aided instruction) the facility provided was a generalized one. Thus when a user is provided with a model of a topic or relation, a videodisc still or sequence with optional audio channel(s) is shown. These are chosen by the expert in the authoring mode with a basic facility for attaching videodisc models to the THOUGHTSTICKER database.

    Appendix D. Bibliography

    Bannister, D. & Mair, J. M. M. (1968): The Evaluation of Personal Constructs. Academic Press, New York.

    Barr, Avron & Feigenbaum, Edward A. (1981): The Handbook of Artificial Intelligence, Volume I, William Kaufmann, Inc., Los Altos, California.

    Bateson, Gregory (1960): Steps To An Ecology of Mind. Ballantine Books, New York.

    Bogner, Marilyn Sue (1986): "Course Assembly System and Tutorial Environment: An Evaluation." Proceedings of the National Educational Computing Conference, San Diego, California, June 1986.

    Brachman, Ronald J. (1979): "On the Epistemological Status of Semantic Networks." In N. V. Findler (Ed.), Associative Networks: Representation and Use of Knowledge by Computers, New York.

    Brachman, Ronald J. (1985): " 'I Lied about the Trees' Or Defaults and Definitions in Knowledge Representation." In The AI Magazine, Fall 1985 Issue, Menlo Park, California.

    Clark, Peter (1980): "Saturation, Bifurcation and Analogy in Lp: A Position Paper." Presented at the Third Richmond Conference in Decision Making, Richmond, Surrey, 1980.

    Chomsky, N. (1968): "Formal Properties of Grammars." In Handbook of Mathematical Psychology, Volume II, Chapter 12, John Wiley, New York.

    Deutsch, David (1985): "Quantum computability." Royal Society, London.

    Dreyfus, Hubert & Dreyfus, Stuart (1986): "Why Computers May Never Think Like People." In Technology Review, January 1986, MIT, Cambridge, Massachusetts.

    Duda, Richard O. & Shortliffe, Edward H. (1983): "Expert Systems Research." In Science, Volume 220, Number 4594, 15 April 1983.

    Feigenbaum, E. A. & Feldman, J. (1963): Computers and Thought. McGraw Hill, New York.

    Feigenbaum, E. A. & McCorduck, P. (1983): Fifth Generation. Addison-Wesley, Menlo Park, California.

    Gregory, Richard (Dik) (1982): "Genoa Rules...OK?" Technical Memorandum, Admiralty Marine Technology Establishment, E1/P4.3/215/82, Teddington, Middlesex.

    Hewitt, Carl (1972): "Procedural Semantics: Models of Procedures and Teaching of Procedures." In Randall Rustin (Ed.), Natural Language Processing, Algorithmics Press, 1972.

    Hewitt, Carl & Baker, H. (1977): "Laws for Communicating Parallel Processes." In Proceedings of IFIP Congress 77 Toronto, August 1977.

    Hillis, W. Daniel (1985): The Connection Machine. The MIT Press, Cambridge, Massachusetts.

    Humphreys, Patrick & Wisudha, A. (1975): "MAUD: An interactive computer program for the structuring, decomposition and recomposition of preferences between multiattributed alternatives." Technical Report 79-2, Decision Analysis Unit, Brunel University, Uxbridge, Middlesex.

    Humphreys, Patrick & McFadden, Wendy (1980): "Experiences with MAUD: Aiding the Decision Structuring versus Bootstrapping the Decision Maker." In Acta Psychologica 45, North Holland Publishing Company, Holland.

    Kearsley, Greg P. (1977): "Some Conceptual Issues in Computer-Assisted Instruction." In Journal of Computer-Based Instruction, Volume 4 Number 1.

    Kelly, G. A. (1966): "A Brief Introduction to Personal Construct Theory." In D. Bannister (Ed.), Perspectives in Personal Construct Theory, Academic Press, London and New York.

    Laing, R. D., Phillipson, H., & Lee, A. R. (1966): Interpersonal Perception, Tavistock Publications, London.

    Lettvin, J. Y. (1985): "Warren McCulloch and the Origins of AI." Reprinted in Trachtman, P. (Ed.), Cybernetic, publication of the American Society for Cybernetics, Volume 1, Number 1, Summer-Fall 1985.

    Marante, G. J. & Laurillard, D. M. (1981): "A View of Computer Assisted Learning in the Light of Conversation Theory." Institute for Educational Technology, University of Surrey.

    Maturana, Humberto (1978): "Biology of Language: The Epistemology of Reality." In Miller, George A. & Lenneberg, Elizabeth (Eds.), Psychology and Biology of Language and Thought.

    Maturana, Humberto (1982): "What is it to see?" Reprinted in Trachtman, P. (Ed.), Cybernetic, publication of the American Society for Cybernetics.

    Maturana, Humberto (1986): In Pedretti, A. et al (Eds.), Conversations in Cybernetics, Princelet Editions, London.

    Michie, Donald (Ed.) (1982): Introductory Readings in Expert Systems. Gordon and Breach, Science Publishers, Inc., New York.

    Minsky, M. (1967): Computation: Finite and Infinite Machines. Prentice Hall, Englewood Cliffs, New Jersey.

    Minsky, M. (Ed.) (1968): Semantic Information Processing. MIT Press, Cambridge, Massachusetts.

    Minsky, M. (1975): "A Framework for Representing Knowledge." In P. H. Winston (Ed.), The Psychology of Computer Vision, McGraw Hill, New York.

    Minsky, M. (1986): The Society of Mind. Simon and Schuster, New York.

    Negroponte, N. (Ed.) (1977): Graphical Conversation Theory, Proposal of the Architecture Machine Group, MIT, to the National Science Foundation, 1977.

    Nicoll, Jeffrey F. (1985): "To be or not to be." Technical Memorandum, PANGARO Incorporated, June 1985.

    Nilsson, Nils J. (1980): Principles of Artificial Intelligence. Tioga Publishing Company, Palo Alto, California.

    Pangaro, P. (1980): "Programming and Animating on the Same Screen at the Same Time." Creative Computing, Volume 6, Number 11, November 1980.

    Pangaro, P. (1982): "Beyond Menus: The Ratzastratz or the Bahdeens." Proceedings of the Harvard Graphics Conference, 1982.

    Pangaro, P., Steinberg, S., Davis, J. & McCann, B. (1977): "EOM: A Graphically-Scripted, Simulation-Based Animation System." Memorandum, Architecture Machine Group, MIT, August 1977.

    Pangaro, P. & Harney, H. (1983): "CASTE: A System for Authoring and Tutoring." Data Training.

    Pangaro, P. & Nicoll, J. (1983): "Deleting the Knowledge Engineer: A practical implementation of Pask's protologic, Lp." Based on a paper presented at the Seminar on Machine Intelligence in Defence, Ministry of Defence, Weymouth, Dorset, UK.

    Pangaro, P. (Ed.) (1985): "THOUGHTSTICKER User Manual." PANGARO Incorporated. Attached to this dissertation as an Annex.

    Pask, G. (1958): "Physical Analogues to the Growth of a Concept." Proceedings of Symposium on Thought Processes, National Physical Laboratory, Teddington, Middlesex, November 1958.

    Pask, G. (1963): "The meaning of cybernetics in the behavioural sciences." In Cybernetics of Cybernetics, published by the Biological Computing Laboratory, University of Illinois, Urbana, Illinois.

    Pask, G. (1972): "Anti-Hodmanship: A Report on the State and Prospects of CAI." In Programmed Learning and Educational Technology, Volume 9 Number 5, September 1972.

    Pask, G. (1975a): Conversation, Cognition and Learning. Elsevier Scientific Publishing Company, Amsterdam.

    Pask, G. (1975b): "Aspects of Machine Intelligence." In Negroponte, N. (Ed.) Soft Architecture Machines. MIT Press, Cambridge, Massachusetts.

    Pask, G. (1975c): The Cybernetics of Human Learning & Performance. Hutchinson Educational, London.

    Pask, G. (1976a): Conversation Theory: Applications in Education and Epistemology. Elsevier/North Holland Inc., New York.

    Pask, G. (1976b): "An Outline Theory of Media for Education and Entertainment." Proceedings of Third European Meeting on Cybernetics and System Research, Vienna 1976.

    Pask, G. (1976c): "Revisions in the Foundation of Cybernetics and General Systems Theory as a Result of Research in Education, Epistemology and Innovation (Mostly in Man-Machine Systems)." In Proceedings of 8th International Congress on Cybernetics, Namur, Belgium.

    Pask, G. (1978): "The protologic Lp." Technical Memorandum of System Research Ltd, Richmond, Surrey.

    Pask, G. (1979): "Against Conferences or the Poverty of Reduction in SOP-Science and POP-Systems." In Proceedings of Society for General Systems Research, London, 1979.

    Pask, G. (1980a): "Developments in Conversation Theory." In International Journal of Man-Machine Studies, Volume 13.

    Pask, G. (1980b): "The Limits of Togetherness." In Proceedings of IFIPS 80, 1980.

    Pask, G. (1980c): "Organizational Closure of Potentially Conscious Systems." In Zeleny, M., Autopoiesis, Elsevier/North Holland Inc., New York.

    Pask, G. (1980d): "An Essay on the Kinetics of Language." In Ars Semiotica, Volume III:1.

    Pask, G. (1983): "THOUGHTSTICKER Developments: A Concept Paper." Centre for System Research and Knowledge Engineering, Concordia University.

    Quillian, M. R. (1968): "Semantic Memory." In Minsky, M. (Ed.) (1968), Semantic Information Processing. MIT Press, Cambridge, Massachusetts.

    Shannon, Claude E. & Weaver, Warren (1964): The Mathematical Theory of Communication. University of Illinois, Urbana, Illinois.

    Schank, R. C. (1980): "Language and Memory." In Cognitive Science, Volume 4.

    Schank, R. C. & Abelson, R. P. (1975): "Scripts, plans and knowledge." Proceedings of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi.

    Shaw, Mildred (1980): On Becoming a Personal Scientist. Academic Press, London.

    Sheppard, Colin (1981): "Man-Computer Studies Section: Rationale and Work Programme." Applied Psychology Unit, Admiralty Marine Technology Establishment, Teddington, England.

    Varela, Francisco (1975): "A Calculus for Self-Reference." In International Journal of General Systems, Gordon and Breach Science Publishers, Ltd., Volume 2.

    von Foerster, Heinz (1960): "On Self-Organizing Systems and their Environments." In Yovits, M. C. & Cameron, S. (Eds.), Self-Organizing Systems, Pergamon Press, London.

    von Foerster, Heinz (1977): "Objects: Tokens for (Eigen-) Behaviors." In Inhelder, B., Garcia, R., & J. Voneche (Eds.), Hommage à Jean Piaget: Epistemologie genetique et equilibration, Delachaux et Niestlé, Neuchâtel.

    von Foerster, Heinz (1980): "Cibernetica ed Epistemologia." In Bocchi, Gianluca & Ceruti, Mauro (Eds.), La sfida della complessita, Feltrinelli, Milano.

    von Wright, G. H. (1963): Norm and Action, Kegan Paul, New York.

    Winograd, T. & Flores, F. (1986): Understanding Computers and Cognition: A New Foundation for Design, Ablex Publishing, Norwood, New Jersey.

    Appendix E. Glossary of Terms

    This Glossary defines terms from a perspective consistent with the basis of this dissertation, rather than common usage.

    Adicity: The number of elements in a relation. For example, a coherence with three topics has an adicity of three. The term is a form of "-adic" as for example, a triadic (three element) relation.

    Agreement: Declaration made by an observer about transactions witnessed across a distinction defining P-Individuals, based on the assertion by each of the individuals that the others' derivation and use of a concept is consistent with their own. See Understanding.

    Analogy: A relation in Lp associating topics by declaring their similarities and differences.

    Author: A term from computer-based instruction: the author is the creator of the subject matter, usually a teacher or subject-matter expert. In THOUGHTSTICKER, however, there are situations where the knowledgebase needs to be extended locally by users, in order to customize it to local practices or needs, or to expand its contents. In such situations the user, possibly even the learner, may take the role of author; the system maintains the identity of each author and hence can control access to the material as appropriate.

    Authoring or To Author: The act of creating the subject matter.

    Authoring Module: That part of the THOUGHTSTICKER system which allows a user to create or extend the knowledgebase. See Author, above.

    Bifurcation: The splitting of a topic into two or more topics, creating distinctions which keep the topics differentiated.

    Branching: This is the primary format for conventional computer-aided instruction. The subject matter is arranged in a "tree" of fixed routes which the learner follows. The results of tests of the learner determine "branching" through the structure. THOUGHTSTICKER removes the restrictions of branching by utilizing a knowledge representation for the subject matter, freeing the learner to explore the subject based on individual experience, needs, and conceptual style. THOUGHTSTICKER can also be used in a sequential mode for convenient incorporation of existing computer-based instructional material, but still with liberal opportunities for the learner to be driven by uncertainty and curiosity.

    Browser: That part of THOUGHTSTICKER that allows inspecting the knowledgebase. This term reflects the application of THOUGHTSTICKER to information management, where the user is accessing, rather than extending, the knowledgebase.

    CASTE: The Course Assembly System and Tutorial Environment was an electro-mechanical system built to facilitate the construction of courseware, in an era when hardware was too expensive for the delivery of training by a related system even to be conceived. The basis by which it structured the subject matter was a forerunner to THOUGHTSTICKER.

    CBT/CAI: Computer-Based Training and Computer-Aided Instruction are terms from the training industry. The former is concerned with all aspects of training whether pedagogical or not, including managing the records of a student population and tracking the progress of individual students. The latter emphasizes the pedagogical component, especially in how the delivery of training via computer can be made different from other forms of training (classroom, self-study from books). THOUGHTSTICKER goes far beyond existing systems in the effectiveness of its pedagogical techniques, and it maintains a complete knowledgebase of the learner's progress which is available for use at all times during the interaction.

    Coherence: An Lp relation among topics, in which any topic (isolated for the purpose) may be reproduced or reconstructed from the remainder. This affords a high degree of redundancy, the basic unit of memory, and the basis from which conflict detection and resolution emerges.

    Communication: Transactions between individuals that involve transfer of previously-agreed symbols; no new symbols may be transferred. Communication implies transfer of data about transmitted state(s) as related to a known class of possible states. See Conversation.

    Conflict: The interaction of processes whereby simultaneous execution cannot persist without either one or more processes being destroyed or being resolved (via structural changes in the processes).

    Conflict Detection: A software simulation of concurrent processes whereby conflicts among relations are computed.

    Conflict Resolution: A procedure as guided by macro software whereby conflicting relations are modified to eliminate conflict.

    Conversation: A term from CT for generalized interaction between P-Individuals which consists of transactions in an agreed language designated "L" and taking place on multiple "levels" as defined by an observer. The transactions may be "I-you" referenced, insofar as they are held across the distinction between P-Individuals at the same level; or "it" referenced insofar as one individual treats the other as its environment and does not allow, but rather insists on, response.

    Conversation Theory or CT: The name given to a theory of individuals and cognition.

    Courseware: The subject matter as created by the author, represented in the training software, and delivered to the student. It may consist of any combination of text, graphics, simulation, and videodisc media.

    Cybernetics: The study of systems from their information and relativistic (observer-bound) basis. See Second-order Cybernetics.

    Do-What-Do: A conceptual approach to MMI, and an implementation in the system called the Expertise Tutor. Given the requirements for any transaction language to be capable of question and command, the most direct explanation to a user of a software interface is to consider the basic interactions as either a question ("What is this [element of the screen]?") or a command ("Execute this command!"). This is the What and Do portion of the term. The additional Do at the end, completing Do-What-Do, emphasizes the iterative nature of the entire process and also how merely asking is not sufficient; some execution is necessary for comprehension of the environment. This approach was implemented by dedicating 2 of the 3 mouse buttons of the Symbolics machine to Do and What. The third, middle button was then used for a global orientation, but was not fully integrated into the scheme (as, for example, a "Why" button might be).

    Expertise Tutor: See Do-What-Do.

    Frames (from AI): Data structures intended as models of human thought, where slots and their values stand for memories. "Default values" supply information when specific experience has not provided any.

    Frames (from Computer-Aided Instruction): The fundamental unit of experience for the learner in computer-aided instruction. Usually text or some combination of text and graphics, frames are linked in a fixed structure within which the learner "branches." Frames are a gross accumulation of many features of the knowledge to be learned which are neither broken down by the author nor distinguishable by the system. Hence a CAI system cannot learn very much about the individual learner, nor individualize its interaction very much, because it can distinguish very little of the learner's attention or uncertainty.

    Individual: See P-Individual.

    Knowledge Representation or Knowledgebase: Term from the field of artificial intelligence referring to a software structure that is intended to represent knowledge, often of an expert. A number of approaches exist and none are considered to solve the problem for the general case. THOUGHTSTICKER uses a knowledge representation that was developed from a cognitive theory especially for the purpose of representing knowledge to be learned. Hence the heuristics that operate on the knowledge structure are well-suited to variations in conceptual style and provide highly individualized interaction for each user.

    Knowledge Elicitation: The process of extracting information from a human, usually for the purpose of then placing it into a knowledge representation.

    Narrative: The term for a THOUGHTSTICKER sequence that tells a story and has some of its import conveyed by the particular order in which it is seen; hence its name. The THOUGHTSTICKER features concerned with providing conventional computer-based instruction use the Narrative facilities for implementing a frame sequence. The learner is still free to explore away from the Narrative and to return easily to it as desired.

    Language or L: A transaction medium capable of question and command as well as description and predication.

    Lp: One example of a protologic or protolanguage. Proto, meaning "below", is used to indicate that Lp is not itself a language or logic; it is a substrate on which one might be built. See the glossary entry Language.

    Learner: In computer-based instruction, this term takes on its usual meaning. However, the learner may be focused on the subject matter for a variety of purposes: to cover the entire subject to acquire general skills; to perform a specific task requiring a subset of skills; or to answer a specific question or perform a single operation. THOUGHTSTICKER reacts differently for each of these purposes (see Persona).

    Macro Software: Simulation at a level above the atoms of the knowledgebase, such that the elements taken as atoms (which according to the theory have further structure of sub-functions) are treated as static. Thus in the case of CT, topics are considered as static nodes in the knowledgebase. See Micro Software.

    Man-Machine Interface or MMI: The user experience at the computer console. In context, MMI may also refer to the software required to implement a particular user experience at the interface to the software.

    Micro Software: Simulation at a level where atoms from the theory are the basic units of the simulation, rather than some "higher" level. Thus in the case of CT, topics are interpreted as dynamic processes each with individual interactions acting upon it.

    P-Individual: For "psychological individual", a conceptual entity that is distinguished by an observer based on criteria of differences as defined by the observer, between or across conceptual points of view rather than physical boundaries. Hence, a single individual "person" consists of many (and often conflicting) perspectives; alternatively, many persons can make up a group, as for example in religion or politics, and be the "same individual" so far as a particular set of their beliefs is concerned.

    Persona: A term unique to THOUGHTSTICKER, the persona refers to (1) the individual currently using the system, and (2) the current goal of that use (see Learner, above). The persona can be used by THOUGHTSTICKER to guide its heuristics and present information in an adaptive way for this individual with a current goal. The user, whether author or learner, is identified to the system, and all history is attached to the persona.

    Relations: Structural associations between elements, usually topics, in the knowledgebase. See Coherence, and Analogy.

    Rule of Genoa: The name given by Pask to CT's bifurcation principle, named after the city of Genoa, where Vittorio Midoro resided; his question about representing coherence and analogy in the same diagram provided the hint that led to the creation of the principle.

    Saturation: As an Lp process: existing topics are joined to others in coherences. As a THOUGHTSTICKER function: new coherences, which do not conflict with existing relations, are proposed to the author for possible instatement.

    Second-order Cybernetics: The later development of cybernetics which emphasizes certain systems' capability to refer to themselves, i.e. to model their own behavior or the behavior of others. The full implications of the subjective nature of experience appear in second-order cybernetics.

    THOUGHTSTICKER: Macro software based on CT and its knowledge calculus, Lp.

    Topic: The minimal, atomic unit of a THOUGHTSTICKER knowledgebase. Although simulated in macro software as static nodes, the topics of CT are dynamic repertoires of processes which converge in value, as do Eigen values. This is achieved in micro software by an approach where each topic is a process.

    Tutoring Module: That part of the THOUGHTSTICKER system which allows a user to explore the knowledgebase. This term reflects the application of THOUGHTSTICKER to training, where the user is in the role of student.

    Understanding: P-Individuals hold agreements over understandings. See Agreement.

    Appendix F. Figures

    [Omitted here due to technical hassles. To follow in future; available on request.]

    - end -

    © Copyright Paul Pangaro 1994 - 2000. All Rights Reserved.