This section elaborates our definition of understanding—the process of acquiring a working knowledge of an object65. Such a definition relies on two main aspects: a formal, abstract understanding, and a more subjective, empirical one. We will see how the former had some traction in computer science circles, while the latter gained traction in programming circles. To ground those two approaches, we first trace back the genealogy of understanding in theoretical computer science, before showing how complementary approaches centered on experience and situatedness outline an alternative tradition.
Between formal and informal
A distinction can be made between the object of understanding and the means of understanding (
Elgin, 2017)
True Enough by Catherine Z. Elgin, 2017.
. Here, we concern ourselves with the means of understanding, particularly as they are related to the development of computer science. As the science of information processing, the field is closely involved in the representation of knowledge, a representation that programmers then have to make their own.
Theoretical foundations of formal understanding
The theoretical roots of modern computation can be traced back to the early 20th century in Cambridge, where they were laid by philosophers of logic and mathematicians such as Bertrand Russell, Ludwig Wittgenstein and Alan Turing, as they worked on the formalization of thinking. In their work, we will see that the formalization of knowledge operations is rooted in an operational representation of knowledge.
Wittgenstein, in particular, bases his argumentation in his Tractatus Logico-philosophicus on the observation that many of the problems in philosophy are rather problems of understanding between philosophers—if one were to express oneself clearly, and to articulate one's thoughts through clear, unambiguous language, a common conclusion could be reached without much effort66. The stakes presented are thus those of understanding what language really is, and how to use it effectively to, in turn, make oneself understood.
The demonstration that Wittgenstein undertakes is that language and logic are closely connected. Articulated in separate points and sub-points, his work conjugates aphorisms with logical propositions depending on one another, developing from broader statements into more specific precisions, going down levels of abstraction through increasingly nested lists. Through the stylistic organization of his work, Wittgenstein hints at the possibility of considering language, itself a pre-requisite for understanding, as a form of logic. This complements the older approach of considering logic as a form of language. In this sense, he stands in the lineage of Gottfried Leibniz's Ars Combinatoria, since Leibniz considers that one can formalize a certain language (not necessarily a natural language such as German or Latin) in order to design a perfectly explicit linguistic system. A universal, and universally understandable, language, called a characteristica universalis, could resolve any issues of misunderstanding. Quoted by Russell, Leibniz notes that:
If we had it [a characteristica universalis], we should be able to reason in metaphysics and morals in much the same way as in geometry and analysis. If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants [...] Let us calculate. (
Russell, 1950)
Logical positivism by Bertrand Russell, 1950. [link]
Centuries after Leibniz's declaration, Wittgenstein presents a coherent, articulated theory of meaning through the use of mathematical philosophy and logic. His work also fits with that of Bertrand Russell and Alfred North Whitehead who, in their Principia Mathematica, attempt to lay out a precise and convenient notation in order to express mathematical propositions; similarly, Gottlob Frege's work attempted to constitute a language in which all scientific statements could be evaluated, by paying particular attention to clarifying the semantic uncertainties between a specific sentence and how it refers to a concept (
Korte, 2010)
Frege’s Begriffsschrift as a lingua characteristica by Tapio Korte, 2010. [link]
.
Even though these approaches differ from, and sometimes argue with67, one another, we consider them to be part of a broad endeavour to find a linguistic basis to express formal propositions through which one could establish truth-values.
Such works on formal languages as a means of knowledge processing had a direct influence on the work of mathematician Alan Turing—who studied at Cambridge and attended some of Wittgenstein's lectures—as he developed his own formal system for solving complex, abstract mathematical problems, manifested as a symbolic machine (
Turing, 1936)
On Computable Numbers, with an Application to the Entscheidungsproblem by Alan Turing, 1936.
. Meaning formally expressed was to be mechanically processed.
The design of this symbol-processing machine, subsequently known as the Turing machine, is a further step in engaging with the question of knowledge processing in the mathematical sense, as well as in the practical sense—a formal answer to the Entscheidungsproblem, reached through a mechanical procedure. Indeed, it is a response to the questions of translation (of a problem) and of implementation (of a solution), hitherto considered a basis for understanding, since solving a mathematical problem presupposed, at the time, being able to understand it.
This formal approach to instructing machines to operate on logic statements then prompted Turing to investigate the question of intelligence and comprehension in Computing Machinery and Intelligence . In it, he translates the hazy term of "thinking" machines into that of "conversing" machines, conversation being a practical human activity which involves listening, understanding and answering (i.e. input, process and output; or attention, comprehension, diction) (
Turing, 2009)
Computing Machinery and Intelligence by Alan M. Turing, 2009. [link]
. This conversational test, which has become a benchmark for machine intelligence, would naively imply the need for a machine to understand what is being said.
Throughout the article, Turing does not yet address the purely formal question of whether or not a problem can be translated into atomistic symbols, as we can imagine Leibniz would have had it, which would then be provided as input to a digital computer. Such a process of translation would rely on a formal approach, similar to that laid out in the Tractatus Logico-philosophicus, or on Frege's formal language described in the Begriffsschrift. Following a Cartesian approach, the idea in both authors is to break down a concept, or a proposition, into sub-propositions, in order to recursively establish the truth of each of these sub-propositions, which are then re-assembled to deduce the truth-value of the original proposition.
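To give a schematic sense of this recursive procedure, it can be sketched in contemporary code: a compound proposition is represented as atomic symbols joined by connectives, and its truth-value is computed from the smallest parts upwards (a minimal, hypothetical sketch in Python; the proposition and its encoding are purely illustrative, and not drawn from Frege or Wittgenstein).

def evaluate(proposition, facts):
    # An atomic symbol: its truth-value is looked up directly.
    if isinstance(proposition, str):
        return facts[proposition]
    # A compound proposition: evaluate each sub-proposition recursively,
    # then re-assemble the results according to the connective.
    connective, *operands = proposition
    values = [evaluate(p, facts) for p in operands]
    if connective == "and":
        return all(values)
    if connective == "or":
        return any(values)
    if connective == "not":
        return not values[0]
    raise ValueError("unknown connective: " + connective)

facts = {"it_rains": True, "it_is_cold": False, "it_is_windy": True}
proposition = ("and", "it_rains", ("or", "it_is_cold", "it_is_windy"))
print(evaluate(proposition, facts))  # True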
Logical calculus, as the integration of the symbol into relationships between many symbols, formally takes place through two stylistic mechanisms: the symbol and the list. Each of the works by Frege, Russell and Wittgenstein quoted above is structured in terms of lists and sub-lists, representing the stylistic pendant to the epistemological approach of related, atomistic propositions and sub-propositions. A list, far from being an innate way of organizing information in humans, is a particular approach to language: extracting elements from their original, situated existence, and reconnecting them in rigorous, strictly-defined ways68.
As inventories, early textbooks, or administrative documents serving as public mnemotechnics, the list is a way of taking symbols (pictorial language elements) and re-assembling them to reconstitute the world from blocks, following the assumption that the world can always be decomposed into smaller, discrete and conceptually coherent units (i.e. symbols). One can then decompose a thought into a list, and expect a counterpart to recompose this thought by perusing it. As a symbol system, lists establish clear-cut boundaries, are simple, abstract and discontinuous; incidentally, this makes them very suited to a discrete symbol-processing machine such as the computer (
Depaz, 2023)
Stylistique de la recherche linguistique en IA: De LISP à GPT-3 by Pierre Depaz, 2023. [link]
.
Alongside these sophisticated syntactic systems developed a certain approach to cognition, as Turing clearly establishes the possibility for a digital computer to achieve the intellectual capacities of a human brain.
But while Turing focuses on the philosophical and moral arguments about the possibility for machines to think, he does address the issue of formalism in developing machine intelligence. Particularly, he acknowledges the need for intuition in, and self-development of, the machine in order to reach a level at which it can be said that the machine is intelligent. The question is then whether one is able to represent such concepts of intuition and development in formal systems. We now turn to the form of these systems, looking at how their form addresses the problem of clearly understanding and operating on mathematical and logical statements.
Being based on singular, symbolic entities, the representation of logical calculus as lists and symbols within a computing environment becomes the next step in exploring these tools for thinking, in the form of programming languages. Considering understanding through a formal lens can then be confronted with the real world: when programmed using those formal languages, how much can a computer understand?
Practical attempts at implementing formal understanding
This putting into practice relies on a continued assumption of human cognition as an abstract, logical phenomenon. Practically, programming languages could logically express operations to be performed by the machine.
The first of these languages is IPL, the Information Processing Language, created by Allen Newell, Cliff Shaw and Herbert A. Simon. The idea was to make programs understand and solve problems, through "the simulation of cognitive processes" (
Newell, 1964)
Information Processing Language-V Manual by Allen Newell, F. M. Tonge, Edward A. Feigenbaum, Bert F. Green Jr., George H. Mealy, 1964.
. IPL achieves this with the symbol as its fundamental construct, which at the time was still largely mapped to physical addresses and cells in the computer's memory, and not yet decoupled from hardware.
IPL was originally designed to demonstrate theorems from Russell's Principia Mathematica, and powered a couple of early AI programs, such as the Logic Theorist and the General Problem Solver. As such, it proves to be a link between the ideas exposed in the writings of the mathematical logicians and the actual design and construction of electrical machines activating these ideas. More a proof of concept than a versatile language, IPL was then quickly replaced by LISP as the linguistic means to express intelligence in digital computers (see Computation as an end
).
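To illustrate the general idea of symbols held in addressable memory cells (a toy sketch of the principle, rather than IPL's actual memory layout), one can represent a list as a chain of numbered cells, each holding a symbol and the address of the next cell:

# A toy sketch of symbol-and-cell list processing: memory is a table of
# numbered cells, each holding a symbol and the address of the next cell.
memory = {
    100: ("L1", 101),      # head of a list stored at address 100
    101: ("APPLE", 102),
    102: ("PEAR", None),   # None marks the end of the list
}

def read_list(address):
    # Follow the chain of addresses and collect the stored symbols.
    symbols = []
    while address is not None:
        symbol, next_address = memory[address]
        symbols.append(symbol)
        address = next_address
    return symbols

print(read_list(100))  # ['L1', 'APPLE', 'PEAR']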
The list-based structure of LISP is quite similar to the approach suggested by Noam Chomsky in his Syntactic Structures, where he posits the tree structure of language, as a decomposition of sentences into the smallest conceptually coherent parts (e.g. Phrase -> Noun-Phrase + Verb-Phrase -> Article + Substantive + Verb-Phrase). The style is similar, insofar as it proposes a general ruleset (or at least the existence of one) in order to construct complex structures from simple parts.
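The structural kinship can be made concrete by writing such a phrase decomposition as nested lists, the shape a LISP s-expression also takes (a minimal sketch in Python; the sentence and category names are illustrative):

# A sentence decomposed along a phrase structure, written as nested lists.
# Each list pairs a category with its constituents, down to terminal words.
sentence = ["Phrase",
    ["Noun-Phrase", ["Article", "the"], ["Substantive", "machine"]],
    ["Verb-Phrase", ["Verb", "computes"]]]

def leaves(node):
    # Recursively collect the terminal words of the tree.
    if isinstance(node, str):
        return [node]
    return [word for child in node[1:] for word in leaves(child)]

print(" ".join(leaves(sentence)))  # prints: the machine computes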
Through its direct manipulation of conceptual units upon which logic operations can be executed, LISP became the language of AI, an intelligence conceived first and foremost as logical understanding. The use of LISP as a research tool culminated in the SHRDLU program, a natural language understanding program built in 1968-1970 by Terry Winograd which aimed at tackling the issue of situatedness—AI can understand things abstractly through logical mathematics, but can it apply these rules within a given context? The program had the particularity of functioning with a "blocks world", a highly simplified version of a physical environment—bringing the primary qualities of abstraction into solid grasp. The computer system was expected to keep track of this world and to interact about it in natural language with a human (Where is the red cube?, Pick up the blue ball, etc.). While incredibly impressive at the time, SHRDLU's success was nonetheless relative. It could only succeed at giving barely acceptable results within highly symbolic environments, devoid of any noise. In 2004, Terry Winograd writes:
There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains. I don't think that current research has made much progress in crossing that gulf, and the relevant science may take decades or more to get to the point where the initial ambitions become realistic. (
Nilsson, 2009)
The Quest for Artificial Intelligence by Nils J. Nilsson, 2009. [link]
This attempt, ongoing since the beginning of the century, to enable thinking, clarify understanding and implement it in machines, had hit a first obstacle. The world, also known as the problem domain, exhibits a certain complexity which did not seem to be easily translated into singular, atomistic symbols.
A critique of formalism as the only way to model understanding was already developed in 1976 by Joseph Weizenbaum. Particularly, he argues that the machine cannot make a judgment, as judgments cannot be reduced to calculation (
Weizenbaum, 1976)
. While the illusion of cognition might be easy to achieve, something he did in his development of early conversational agents, of which the most famous is ELIZA, the necessary inclusion of morals and emotion in the process of judging intrinsically limits what machines can do69. Formal representation might provide a certain appearance of understanding, but lacks its depth.
Around the same time, however, another approach to formalizing the intricacies of cognition was being developed. Warren McCulloch's seminal paper, A logical calculus of the ideas immanent in nervous activity, co-written with Walter Pitts, offers an alternative to abstract knowledge based on the embodiment of cognition. They present a connection between the systematic, input-output procedures dear to cybernetics and the predicate logic writing style of Russell and others (
McCulloch, 1990)
A logical calculus of the ideas immanent in nervous activity by Warren S. McCulloch, Walter Pitts, 1990. [link]
. This attachment to input and output, to their existence in complex, inter-related ways rather than as self-contained propositions, is, interestingly, rooted in McCulloch's activity as a literary critic70.
Going further into the processes of the brain, McCulloch indeed finds out, in another paper with Lettvin and Pitts (
Lettvin, 1959)
What the Frog’s Eye Tells the Frog’s Brain by J. Y. Lettvin, H. R. Maturana, W. S. McCulloch, W. H. Pitts, 1959.
, that the organs through which the world excites the brain are themselves agents of process, activating a series of probabilistic techniques, such as noise reduction and softmax, to provide a signal to the brain which is not the untouched, unary, symbolical version of the signal input by the external stimuli, nor does it seem to turn it into such.
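A minimal sketch of such a transformation (with made-up intensity values, and written in Python rather than in the terms of the original paper) shows how a stimulus would reach the brain as a graded distribution of activations rather than as a discrete symbol:

# A raw stimulus (hypothetical intensity values) is not passed on as a
# discrete symbol: it is gated and renormalized into a distribution of
# relative activations, a graded signal rather than an atomic one.
import math

def softmax(signal):
    # Turn arbitrary intensities into relative weights summing to one.
    exps = [math.exp(x) for x in signal]
    total = sum(exps)
    return [e / total for e in exps]

stimulus = [0.2, 1.7, 0.9, 0.1]                         # made-up receptor intensities
denoised = [x if x > 0.15 else 0.0 for x in stimulus]   # crude noise gate
print(softmax(denoised))   # approximately [0.12, 0.54, 0.24, 0.10]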
We see here the development of a theory for a situated, embodied and sensual stance towards cognition, which would ultimately resurface through the rise of machine learning via convolutional neural networks in the 2000s (
Nilsson, 2009)
The Quest for Artificial Intelligence by Nils J. Nilsson, 2009. [link]
. In it, the senses are as essential as the brain for an understanding—that is, for the acquisition, through translation, of a conceptual model which then enables deliberate and successful action. It seems, then, that there are other ways to know things than to rely on description through formal propositions.
A couple of decades later, Abelson and Sussman still note, in their introductory textbook to computer science, the difficulty of conveying meaning mechanically:
Understanding internal definitions well enough to be sure a program means what we intend it to mean requires a more elaborate model of the evaluation process than we have presented in this chapter. (
Abelson, 1996)
So, while formal notation is able to enable digital computation, it proved to be limited when it came to accurately and expressively conveying meaning. This limitation, of being able to express formally what we understand intuitively (e.g. what is a chair? 71), appeared as computer applications left the domain of logic and arithmetic, and were applied to more complex problem domains.
After having seen the possibilities and limitations of making machines understand through the use of formal languages, and the shift offered by taking into account sensory perception as a possible locus of cognitive processes and means of understanding, we now turn to these ways of knowing that exist in humans in a more embodied capacity.
Knowing-what and knowing-how
With the publication of Wittgenstein's Philosophical Investigations came a radical change of posture from one of the logicians whose work underpinned AI research. In this second work, he disowns his previous approach to language as seen in the Tractatus Logico-philosophicus, and favors a more contextual, use-centered frame of what language is. Rather than what knowledge is, he looks at how knowledge is acquired and used; while (formal) language was previously defined as the exclusive means to translate concepts into clearly understandable terms, he broadens his perspective in the Investigations by stating that language is "the totality of language and the activities with which it is intertwined" and that "the meaning of a word is its use within language" (
Wittgenstein, 2004)
Recherches philosophiques by Ludwig Wittgenstein, 2004.
, noting context and situatedness as important factors in the understanding process.
At first, then, it seemed possible to make machines understand through the use of formal languages. The end of the first wave of AI development, a branch of computation specifically focused on cognition, has shown some limits to this approach. Departing from formal languages, we now investigate how an embodied and situated agent can develop a certain sense of understanding.
Knowledge and situation
As hinted at by the studies of McCulloch and Lettvin, the process of understanding does not rely exclusively on abstract logical processes, but also on the processes involved in grasping a given object, such as, in their case, what is being seen. It is not just what things are, but how they are, and how they are perceived, which matters. Different means of inscription and description do tend to have an impact on the ideas communicated and understood.
In his book Making Sense: Cognition, Computing, Art and Embodiment, Simon Penny refutes the so-called universality of formulating cognition as a formal problem, and develops an alternative history of cognition, akin to Michel Foucault's archeology of knowledge. Drawing on the works of authors such as William James, Jakob von Uexküll and Gilbert Ryle, he refutes the Cartesian dualism thesis which acts as the foundation of AI research (
Penny, 2019)
Making Sense: Cognition, Computing, Art and Embodiment by Simon Penny, 2019. [link]
. A particular example of the fallacy of dualism is the use of the phrase implementation details, which he recurrently finds in the AI literature, such as Herbert Simon's The Sciences of the Artificial (
Simon, 1996)
The Sciences of the Artificial by Herbert Simon, 1996. [link]
. In programming, to implement an algorithm means to manifest it in concrete instructions, such that they can be understood by the machine. The phrase thus refers to the gap existing between the statement of an idea, an algorithm or a procedure, and its concrete, effective and functional manifestation. This concept of implementation will show how context tends to complicate abstract understanding.
For instance, pseudo-code is a way to sketch out an algorithmic procedure, which might be considered agnostic when it comes to implementation details. At this point, the pseudo-code is halfway between a general idea and the specificity of the particular idiom in which it is inscribed. One can consider the pseudo-code in
nielsen_chalktalk
, which describes a procedure to recognize a free-hand drawing and transform it into a known, formalized glyph. Disregarding the implementation details means disregarding any reality of an actual system: the operating system (e.g. UNIX or MSDOS), the input mechanism (e.g. mouse, joystick, touch or stylus), the rendering procedure (e.g. raster or vector), the programming language (e.g. JavaScript or Python), or any details about the human user drawing the circle.
nielsen_chalktalk
recognition = false
do until recognition:
    wait until mousedown
    if no bounding box, initialize bounding box
    do until mouseup:
        update image
        update bounding box
    rescale the material that's been added inside
    if we recognize the material:
        delete image from canvas
        add the appropriate iconic representation
        recognition = true
- Example of pseudo-code attempting to reverse-engineer a software system, ignoring any of the actual implementation details, taken from (
Nielsen, 2017)
Working Notes on Chalktalk by Michael Nielsen, 2017. [link]
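By contrast, even a small fragment of an actual implementation immediately commits to such details. The following hypothetical sketch, which assumes Python, its tkinter windowing toolkit and a mouse as input device, only covers the drawing and bounding-box part of the procedure above:

# A hypothetical fragment of one possible implementation: it already
# commits to a windowing toolkit (tkinter), an input mechanism (a mouse),
# a rendering model (a raster canvas) and a programming language (Python).
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()

bounding_box = None  # [x_min, y_min, x_max, y_max]

def on_drag(event):
    # Update the drawing and its bounding box while the mouse button is held down.
    global bounding_box
    canvas.create_oval(event.x - 1, event.y - 1, event.x + 1, event.y + 1, fill="black")
    if bounding_box is None:
        bounding_box = [event.x, event.y, event.x, event.y]
    else:
        bounding_box[0] = min(bounding_box[0], event.x)
        bounding_box[1] = min(bounding_box[1], event.y)
        bounding_box[2] = max(bounding_box[2], event.x)
        bounding_box[3] = max(bounding_box[3], event.y)

canvas.bind("<B1-Motion>", on_drag)  # mouse moved with the left button pressed
root.mainloop()

Swapping any of these choices (a stylus instead of a mouse, a vector renderer instead of a raster canvas) would change this code, while leaving the pseudo-code untouched.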
Refuting the idea that pseudo-code, as abstracted representation, is all that is necessary to communicate and act upon a concept, Penny argues on the contrary that information is relativistic and relational; relative to other pieces of information (intra-relation) and related to contents and forms of presenting this relation (extra-relation). Pseudo-code will only ever make full sense in a particular implementation context, which then affects the product.
He then follows Philip Agre's statement that a theory of cognition based on formal reason works only with objects of cognition whose attributes and relationships can be completely characterized in formal terms; and yet a formalist approach to cognition does not prove that such objects exist or, if they exist, that they can be useful. The use of formal systems in artificial intelligence specifically, and in cognitive matters in general, is yet another instance of the map and the territory problem—programming languages only go so far in describing a problem domain without reducing that domain in a certain way.
Beyond the syntax of formal logic, there are different ways to transmit cognition in actionable form, depending on the form, the audience and the purpose. In particular, a symbol system does not need to be formal in order to act as a cognitive device. Logical notation exists alongside music, painting, poetry and prose. In terms of form, a symbol system of formal logic is only one of many possible systems of forms. In his Languages of Art, Nelson Goodman elaborates a theory of symbol systems, which he defines as languages composed of syntactic and semantic rules (
Goodman, 1968)
, further explored in Aesthetics and cognition
. What follows, argues Goodman, is that all these languages involve an act of reference. Through different means (exemplification, denotation, resemblance, representation), linguistic systems act as sets of symbols which can denote, exemplify, or refer in more complex and indirect ways, yet always between a sender and a receiver.
Despite the work of Shannon (
Shannon, 2001)
A mathematical theory of communication by C. E. Shannon, 2001. [link]
and its influence on the development of computer systems, communication, as the transfer of meaning from one individual to one or more other individuals, does not exclusively rely on the use of mathematical notation or of formal languages.
From Goodman to Goody, the format of representation also affords differences in what can be thought and imagined. Something that was always implicit in the arts—that representation is a complex and ever-fleeting topic—is shown more recently in Marchand-Zañartu and Lauxerois's work on pictorial representations made by philosophers, visual artists and novelists (such as Claude Simon's sketches for the structure of his novel La Route des Flandres, shown in
routedesflandres
) (
Marchand-Zañartu, 2022)
32 grammes de pensée, essai sur l’imagination graphique by Nicole Marchand-Zañartu, Jean Lauxerois, 2022. [link]
. How specific domains, including visual arts and construction, engage in the relation between form and cognition is further addressed in chapter Beauty and understanding
.
routedesflandres
- Claude Simon's sketch for the structure of La Route des Flandres, reproduced from (Marchand-Zañartu, 2022).
Going beyond formal understanding through logical notation, we have seen that there are other conceptions of knowledge which take into account the physical, social and linguistic context of the understanding agent, as well as of the object being understood. Keeping in mind the recurring concept of craft discussed in Crafting software
, we complete this overview of understanding by paying attention to the role of practice.
Constructing knowledge
There are multiple ways to express an idea: one can use formal notation or draft a rough sketch with different colors. These all highlight different degrees of expression, but one particular way can be considered problematic in its ambition. Formal languages rely on the assumption that all that can be known can ultimately be expressed in unambiguous terms. As first shown by Wittgenstein across the two main eras of his work, we now focus on the ways of knowing which cannot be made explicit.
First of all, there is a separation between knowing-how and knowing-that ; the latter, propositional knowledge, does not cover the former, practical knowledge (
Ryle, 1951)
. Perhaps one of the most obvious examples of this duality is the failure of Leibniz to construct a calculating machine, as told by Matthew L. Jones in his book Reckoning with Matter. In it, he traces the history of philosophers' attempts to solve the problem of constructing a calculating machine, a problem which would ultimately be solved by Charles Babbage, with the consequences that we know (
Jones, 2016)
.
Jones depicts Leibniz in his written correspondence with the watchmaker Ollivier, in their fruitless attempt to construct Leibniz's design; the implementation details seem to elude the German philosopher as he refers to the "confused" knowledge of the nonetheless highly-skilled Parisian watchmaker. The (theoretical) plans of Leibniz do not match the (concrete) plans of Ollivier.
These are two complementary approaches to the knowledge of something: knowing what constructing a calculating machine entails, and knowing how to construct such a machine. In the fact that Ollivier could not communicate clearly to Leibniz what his technical difficulties were, we can see an instance of something which would be theorized centuries later by Michael Polanyi as tacit knowledge, knowledge which cannot be entirely made explicit.
Polanyi, as a scientist himself, starts from another assumption: we know more than we can tell. In his eponymous work, he argues against a positivist approach to knowledge, in which empirical and factual deductions are sufficient to achieve satisfying epistemological work. What he proposes, derived from gestalt psychology, is to consider some knowledge of an object as the knowledge of an integrated set of particulars, of which we already know some features by virtue of the object existing in the external world. This integrated set, in turn, displays more properties than the sum of its parts. While formal notation suggests that the combination of formal symbols does not result in additional knowledge, Polanyi rather argues, against Descartes, that relations and perceptions do result in additional knowledge.
The knowledge of a problem is, therefore, like the knowing of unspecifiables, a knowing of more than you can tell. (
Polanyi, 1969)
Knowing and being; essays by Michael Polanyi, Marjorie Grene, 1969. [link]
Rooted in psychology, and therefore in the assumption of the embodiment of the human mind, Polanyi posits that all thought is incarnate, that it lives by the body and by the favour of society, hence giving it a physio-social dimension. This confrontation with the real world, rather than being a strict hurdle that has to be avoided or overcome, as in the case of SHRDLU above, becomes one of the two poles of cognitive action. Knowledge finds its roots and evaluation in concrete situations, as much as in abstract thinking. In the words of Charles Wright Mills, writing about his research practice as a social scientist,
Thinking is a continuous struggle between conceptual order and empirical comprehensiveness. (
Mills, 2000)
The sociological imagination by Charles Wright Mills, 2000.
Polanyi's presentation of a form of knowledge following the movement of a pendulum, between dismemberment and integration of concepts, finds an echo in the sociological work of Mills: a knowledge of some objects in the world happens not exclusively through formal descriptions in logical symbol systems, but involves imagination and phenomenological experience—wondering and seeing. This reliance on vision—starting by recognizing shapes, as Polanyi states—directly implies the notion of aesthetic assessment, such as a judgement of typical or non-typical shapes. He does not, however, immediately elucidate how aesthetics support the formation of the mental models at the basis of understanding, only that this morphology is at the basis of higher orders of representation.
Seeing, though, is not passive, not simply noticing. It is an active engagement with what is being seen. Mills's quote above also contains this other aspect of Polanyi's investigation of knowledge, already present in Ollivier's relation with Leibniz: knowing through doing.
This approach has been touched upon from a practical programmer's perspective in section Crafting software
, through a historical lens, but it also possesses theoretical grounding. Specifically, Harry Collins offers a deconstruction of Polanyi's notion by breaking it down into relational, somatic and collective tacit knowledge (
Collins, 2010)
. While he lays out a strong approach to the tacitness of knowledge (i.e. it cannot be communicated at all), his distinction between relational and somatic is useful here72. It is possible to think about knowledge as a social construct, acquired through social relations: learning the lingo of a particular technical domain, exchanging with peers at conferences, imitating an expert or explaining to a novice. Collective, unspoken agreements, implicit statements of folk wisdom, or implicit demonstrations of expert action are all means of communication through which knowledge gets replicated across subjects.
Concurrently, somatic tacit knowledge tackles the physiological perspective already pointed out by Polanyi. Rather than knowledge that exists in one's interactions with others, somatic tacit knowledge exists within one's physical perceptions and actions. For instance, one might base the typing of one's password strictly on muscle memory, without thinking about the actual letters being typed, through repetition of the task. Or one might spot a cache bug which simply requires a machine reboot, thanks to experience with machine lifecycles, package updates and networking behaviour. Not completely distinct from its relational pendant, somatic knowledge is acquired through experience, repetition and mimeomorphism—replicating actions and behaviours, or instructions, often under the guidance of someone more experienced.
-
We started our discussion of understanding by defining it as the acquisition of the knowledge of an object—be it a concept, a situation, an individual or an artefact—which is accurate enough that it allows us to predict the behaviour of, and to interact with, such an object.
Theories of how individuals acquire understanding (how they come to know things, and develop operational and conceptual representations of things) have been approached from a formal perspective and from a contextual one. The rationalist, logical philosophical tradition from which computer science originally stems starts from the assumption that meaning can be rendered unambiguous through the use of specific notation. Explicit understanding, as the theoretical lineage of computation, then became realized in concrete situations via programming languages.
However, the explicit specification of meaning fell short of handling everyday tasks which humans would consider to be menial. This has led us to consider a different approach to understanding, in which it is acquired through contextual and embodied means. Particularly, we have identified this tacit knowledge as relying on a social component, as well as on a somatic component.
Source code, as a formal system with a high dependence on context, intent and implementation, mobilizes both approaches to understanding. Due to programming's ad hoc and bottom-up nature, attempts to formalize it have relied on the assumption that expert programmers have a certain kind of tacit knowledge (
Soloway, 1982; Soloway, 1984)
Tapping into tacit programming knowledge by Elliot Soloway, Kate Ehrlich, Jeffrey Bonar, 1982.
Empirical Studies of Programming Knowledge by Elliot Soloway, Kate Ehrlich, 1984.
. The way in which this knowledge, which they are not able to verbalize, has been acquired and is being deployed, has long been an object of study in the field of software psychology.
Before our overview of what the psychology of programmers can contribute to the cognitive processes at play in understanding source code, we must first make explicit in which ways software as a whole is a cognitively complex object.