Navyanyāya, “The New Philosophy”, represents the dominant framework through which Indian intellectuals practiced the art of philosophical analysis over the past millennium. Independent of the particular traditions that they belonged to or the specifics of their philosophical and religious commitments, Navyanyāya provided the bedrock of a common pedagogy, language, and analytical toolset necessary for scholars to engage in a cooperative project of inquiry. And, it continues in this role to the present day, at least among traditionally trained paṇḍits, despite the corrosive effect of the colonial experience on native forms of learning.
In this essay, I want to work through my own understanding of this analytical framework with the guidance of the Navyanyāyabhāṣāpradīpaḥ [Primer on the Language of the New Philosophy] by Maheśa Candra Nyāyaratna.
To start off, I want to give something of a quasi-historical genealogy of Sanskrit theorizing on language, as the principal window through which to understand the Navyanyāya project.
Perhaps the earliest systematic theorizing about language in the Sanskrit tradition can be found in the Prātiśākhyas, ancient texts whose dates and origins are hard to pin down. This body of work is an elaboration of two basic insights. First is the atomicity of language: the continuous and infinitely varied flow of sound that is the manifest form of language can be deconstructed into a finite set of discrete, recurring elements. Second is the phenomenon of linguistic stratification: when these discrete units of language combine to reconstitute connected speech, they undergo patterned changes in their form, conditioned by features of their linguistic environments, which obscure the underlying structure of language at the level of its sonic manifestation. The end result of these sound changes is that the form of language is stratified between a manifest appearance and a hidden substructure.
These two together—the discovery of segmentation and stratification in language—represent a formative moment in Indian intellectual history. Its role in linguistics is evident, as we’ll see. But, I also want to point out how the basic pattern of analysis underpinning this realization—take any phenomenon, segment it into its atomic elements, then describe how they combine and transform—is almost infinitely generative. It recurs, again and again, throughout the Indian intellectual tradition in analyzing everything from language, to poetry, to music, to astronomy, psychology, metaphysics, and on and on.
In language, we see this analysis leveraged first to produce a segmentation of the sentences of some target language into discrete words, as in the Prātiśākhyas. The same analysis is then used to segment words into sub-lexical units, dividing a form into base [प्रकृतिः; prakr̥tiḥ] and affix [प्रत्ययः; pratyayaḥ]. This process is then applied recursively onto each successive base until you bottom out into a set of verbal [धातुः; dhātuḥ] and nominal [प्रातिपदिकम्; prātipadikam] radicals. These radicals, together with the affixes they combine with, make up the elemental forms [औत्पत्तिकरूपाणि; autpattikarūpāṇi] of the language being analyzed.
The reconstitution of the elements into gradually more complex molecules, and ultimately into the macroscopic forms observed in everyday language, follows a two-step process. First, the elements are strung together into tree-like structures, obeying a set of constraints [शब्दमर्यादाः; śabdamaryādāḥ] that limit the set of possible combinations to just those observed in actual use. These tree-like structures are then transformed, either by substituting whole elements or by transforming their phonetic representation, to produce a linear sequence of sounds constituting a grammatically correct sentence in the target language.
But we’re not done yet. Once we identify the constraints and the transformations that these elements undergo, we come to realize that it’s not just linguistic forms that can be segmented. The constraints and transformations themselves submit to a segmental analysis. And so, we begin again. This time, analyzing these constraints and transformations into elemental constraints and elemental transformations, and articulating the principles by which these atomic bits of computation assemble into molecules and finally into a single complex process by which an abstract linguistic structure is constructed and transformed into a grammatically correct sentence.
The analytical process I just outlined—first applied to linguistic form and then to linguistic process—occupied generations of Indian scholars in the ancient period, now largely lost to history. This immense intellectual labor culminated in the production of a monumental work—in some sense the monumental work—of Sanskrit scholarship: the Aṣṭādhyāyī [Eight Chapters].
The Aṣṭādhyāyī corpus consists of four organized lists, tabulating, respectively, the elemental (1) verbal roots, (2) nominal stems, (3) phonemic segments, and (4) transformations/constraints.
This corpus is supplied with a metagrammar [परिभाषा; paribhāṣā] describing how elemental transformations [विधयः; vidhayaḥ] are put together to form complex computational processes [प्रक्रियाः; prakriyāḥ] which take speaker intentions [विवक्षाः; vivakṣāḥ] as input and generate Sanskrit expressions [शब्दाः; śabdāḥ] as output.
The Aṣṭādhyāyī is the paradigmatic work of theory in the Sanskrit tradition and the analytical principles it encapsulates have had a profound impact on all of Indian intellectual culture. The three central figures responsible for the construction of the canonical text and associated theory—Pāṇini (6th–4th c. BCE) “The Formulator” [सूत्रकारः; sūtrakāraḥ], Vararuci (~3rd c. BCE) “The Annotator” [वाक्यकारः; vākyakāraḥ], and Patañjali (4th–2nd c. BCE) “The Commentator” [भाष्यकारः; bhāṣyakāraḥ]—are practically deified as the Three Sages of Linguistics [व्याकरणमुनित्रयः; vyākaraṇamunitrayaḥ] and their work is seen by the tradition as the product of not just human labor but divine inspiration.
So, for instance, the benedictory hymns with which one traditionally begins the study of the Sanskrit language go:
and
The question of motivation is important, and something Patañjali discusses at length in the opening chapter to his The Great Commentary [महाभाष्यम्; mahābhāṣyam]. Of the many, many reasons to study linguistic theory bandied about by Patañjali, one is particularly relevant. For a certain segment of the linguistics community of India, the study of the Sanskrit language is motivated in part by its relationship to Vedic ritual. The Sanskrit invocations [मन्त्राणि; mantrāṇi] used in ritual performance have to be transformed based on context to fit the appropriate gender, number, etc. Technically, a proficient speaker of a language should be able to perform such transformations intuitively; but, when a certain level of rigor is required—as in the case of ritual invocations—it’s necessary to be able to check your answer against a fixed, external standard. Tabulating all possible such transformations is an impossible task, as Patañjali explains in a fun story about how the King of Heaven fails to learn Sanskrit “even in a hundred celestial years”, because he tries to memorize each of the infinite possible Sanskrit constructions, one at a time.
Instead, what the Aṣṭādhyāyī does is provide a way of representing a Sanskrit sentence, not as a continuous linear sequence of sounds, but as a network of interconnected discrete bits. Then, properties of the whole sentence can be derived from the properties of the components and transformations of the surface sentence can be derived from transformations performed on its elemental parts. The advantage to this is that, whereas the surface sentences can vary in highly complex and potentially infinitely many ways, there are only a finite number of elemental components and a finite number of elemental ways in which they can relate or transform. These components, relations, and transformations (being finite) can be tabulated and rigorously described such that you can effectively operationalize—make transparent, predictable, and replicable—the often vague and inconsistent intuitions of language speakers. This is what the Aṣṭādhyāyī offers.
So much for the early story of linguistics in India. The next big breakthrough came with Bhartr̥hari (~5th c. CE), who is supposed to have written a historically important but now lost sub-commentary on Patañjali’s Mahābhāṣyam, dealing with traditional issues in linguistic theory. But, his claim to fame is an independent work, called On Sentences and Words [वाक्यपदीयम्; vākyapadīyam], whose signature contribution was introducing a cognitive turn into the tradition.
One of its key insights is the idea that, in addition to “public” language [वैखरिणी भाषा; vaikhariṇī bhāṣā], which is the object of traditional linguistic analysis, there is an inner cognitive dimension to language which is present already—prior even to our thoughts and internal monologuing—at the stage of perceptual experience. Bhartr̥hari argued that our experience, even at this primitive pre-cognitive level, must have some sort of conceptual framework underpinning it, in order for us to be able to report on it in public language. Otherwise, our words and even our thoughts would have nothing to “hook” on to, and elements of our experience could not act as normative conditions [प्रवृत्तिनिमित्तानि; pravr̥ttinimittāni] on the form of our speech. Nonetheless, Bhartr̥hari continues, this perceptual language [पश्यन्ती भाषा; paśyantī bhāṣā] must be radically different from public language. Whereas public language occurs in sequence and is, therefore, segmentable; the linguistic dimension of our perceptual experience occurs in a single moment of consciousness and, so, isn’t amenable to the sort of segmental analysis characteristic of classical linguistics.
So, there’s a disconnect between perceptual and public language. What we need is some sort of bridge to connect our inner experience to people on “the outside”. This bridge, which Bhartr̥hari calls intermediate language [मध्यमा भाषा; madhyamā bhāṣā], shares with perceptual language the property of being a cognitive phenomenon and with public language the property of being sequentially structured. In other words, it acts as a sort of “language of thought” which translates the unstructured language of perception into a form that can be expressed in public language. The corollary to this is that this intermediate “language of thought” is, like public language, amenable to segmental analysis and linguistic theorizing.
While this challenge was taken up to an extent in the period following Bhartr̥hari, a decisive turning point occurs with the Navya movement. Under the general disciplinary heading of “cognitive linguistics” [शाब्दबोधशास्त्रम्; śābdabodhaśāstram], the Navyas worked out a segmental analysis of cognition as consisting of a network of relations between atomic bits of mental content obeying certain constraints on how they are structured and transformed. The result is a system that operationalizes logical intuitions in the same way that a grammar like the Aṣṭādhyāyī operationalizes linguistic intuitions.
With this historical picture in place, we can characterize the key intervention of Navyanyāya into the linguistic tradition as offering something like a grammar of our mental language. And, just as the starting point of grammatical analysis is the sentence, the starting point of logical analysis is the “sentence-shaped” cognition.
Let’s take a simple example (one of the standard examples in the literature). If you’re familiar with the epic The Rāmāyaṇam, then you probably know that the eponymous hero, Rāma, is the son of a certain king named Daśaratha.
In English, we might express this fact in a simple sentence:
Example (1.1a)
To start with, let’s make this even simpler by dropping out whose son Rāma is:
Example (1.1b)
This sentence has four elements: the two nouns “Rāma” and “son”, along with the words “a” and “is”. It’s natural to think of the word “is” as expressing an equality. So, ignoring the “a” for now, we might symbolize the above English sentence as:
Example (1.2)
But there’s an obvious problem with taking this to express a real mathematical equality, e.g. 1+1=2.
For instance, since Daśaratha is also someone’s son, we could write:
Example (1.3)
And, if these are both equalities in the usual sense; then, we should be able to derive:
Example (1.4)
by substitution. But, this is obviously false. So, what gives?
We can get partway to an answer by putting the word ‘a’ back into the equation (no pun intended). Rāma is equal, not to “son”, but to “a son”. The indefinite article ‘a’ indicates that the word it modifies refers ambiguously. So, we need a way to account for this ambiguity in our theory.
One way to do this is to treat the word “son” as a predicate; more like the word “is” than the word “Rāma”. Let’s represent this predicate as “x son y” which is true if x is the son of y. With this, we can symbolize the original English sentence, “Rāma is Daśaratha’s son” as:
Example (1.5a)
What if we want to keep it ambiguous whose son Rāma is? We can just replace daśaratha with a variable:
Example (1.5b)
This is pretty close to a working solution. One subtle issue, though, is that we haven’t specified the scope of the variable x.
Take, for instance, the sentence “a man is a son.” We could interpret this in at least two ways: (1) some specific man is someone’s son or (2) every man is someone’s son. We can fix this by specifying which interpretation we mean:
Example (1.6a-b)
This is more or less the solution offered by the early analytic philosophers, e.g. Frege and Russell, and is a straight shot to modern forms of quantificational logic:
Example (1.7a-b)
In contemporary (Western) jargon, we say the interpretation in (1.7a) is existentially quantified over x and (1.7b) is universally quantified over x. The symbol ∃ means “for some” (or “there exists”) and is called the existential quantifier. The symbol ∀ means “for all” and is the universal quantifier. The symbol ∈ means “in” and expresses a restriction on the domain of quantification; i.e. the set of possible values the variable can take.
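To make these symbols concrete: using the “x son y” predicate from earlier, the two readings of “a man is a son” come out in this notation as something like:

∃x ∈ Man: ∃y (x son y), i.e. “some specific man is someone’s son”

∀x ∈ Man: ∃y (x son y), i.e. “every man is someone’s son”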
Now, this approach—using predicates and quantifiers—is one way of making sense of ambiguous identity statements like these. But, it’s not how Naiyyāyikas (practitioners of Nyāya) approach the issue at all. I bring this up because of how absolutely pervasive quantificational logic is today. Due to its role in formalizing mathematics and its central place in pretty much all approaches to logic in the westernized world, it’s very easy to read predicates and quantifiers, etc., into Navyanyāya. It’s best, then, to bring it up now, so we can be intentional about whatever baggage we bring to bear in understanding Navya approaches to similar problems.
To see how Naiyyāyikas think about the problem we outlined so far, let’s go back to the initial sentence:
Example (2.1)
Another way of diagnosing the source of the ambiguity is to notice that unlike the word “Rāma”, which is a proper name [पारिभाषिकशब्दः; pāribhāṣikaśabdaḥ] and, therefore, refers to exactly one thing; the word “son” is a common noun [जातिशब्दः; jātiśabdaḥ] and can refer to many different objects under the same umbrella. In the terminology of Navyanyāya, we say that the reference [शक्यता; śakyatā] of the word “son” extends over all objects by which the type [धर्मः; dharmaḥ] ‘sonhood’ is realized [वर्तते; vartate]. This “type”, which defines the reference of a kind term, is called a reference partitioner [शक्यतावच्छेदकः; śakyatāvacchedakaḥ].
According to the influential Maithilī philosopher, Vardhamāna Upādhyāya (~14th ce), the reference partitioner of a word serves three functions:
Together, these functions represent the ontic, epistemic, and cognitive dimensions of meaning. In the language of Navyanyāya, we can refer to the reference partitioner of a noun stem by applying an abstractional operator [भावप्रत्ययः; bhāvapratyayaḥ]; e.g. suffixes like ‘-tva’ or ‘-tal’. So, taking the Sanskrit word for ‘son’, पुत्रः [putraḥ], we can refer to its reference partitioner by affixing the -tva operator: पुत्रत्वम् [putratvam].
One thing to note, though, is that when Naiyyāyikas append these operators to noun stems, they apply the typical Pāṇinīya morphological and phonotactic rules to the resulting base-affix complex and derive a new noun stem, without explicit markers differentiating the original base from the logical operator. This has the advantage of allowing Navya expressions to fit neatly into ordinary Sanskrit sentences. But, in addition to obscuring the underlying structure of the expression and introducing certain surface-level ambiguities into the logical formula due to the way the Sanskrit rules interact, it also means you can’t read Navya texts without a knowledge of Pāṇinīya techniques. This problem even leaks into translation, where translators often struggle to represent Navya formulae by “translating” the Sanskrit grammatical devices into supposed English equivalents, as if they were elements of natural Sanskrit rather than formal devices of the Navya language. A similar issue faces attempted translations of the Aṣṭādhyāyī, which also uses a specialized language to express its grammatical formulae.
Because Pāṇinian morphophonemics is definitely outside the scope of this essay, I’ll make use of two different sets of devices to clarify the structure of Navya formulae—one for the roman script versions and one for the Sanskritic versions.
In the case of the Sanskrit, I’ll cite all Navya expressions with either a ‘।’ or parentheses separating a Navya operator from its base. I will also undo any sandhi occurring across this punctuation. Lastly, I will use parentheses and spaces in a peculiar way (which I’ll discuss when we get to them) to clarify the internal structure of compounds where they occur not as natural Sanskrit but as formal devices within the Navya language.
In the roman script representations of Navya formulae, I’ll use Greek letters to represent Navya operators. I’ll also use some punctuation and parentheses to represent other Navya devices, which I’ll also introduce when we get there. For the abstractional operator, I’ll use the Greek letter ‘λ’.
So, the expression पुत्रत्वम्, representing the reference partitioner of the word son / पुत्र, will be rewritten variously as:
Example (2.2)
Ok, but what exactly is a “reference partitioner” anyway? To understand this, we’ll have to back up and talk a bit about the objects that Navyanyāya operations are actually operating over.
The central concept in the Navyanyāya framework is something called a type [धर्मः; dharmaḥ]. A type, broadly conceived, is some discrete bit of information that is characterized by repeatability [अनुगमः; anugamaḥ]. This just means we can observe and identify the same bit of information—the same type—across different contexts.
In any particular observation, a given type is connected with a basis [अधिकरणम्; adhikaraṇam] through a relation called realization [वृत्तिः; vr̥ttiḥ]. On this view, a sentence or a sentence-shaped cognition simply represents the realization of a type at some basis. We can think about the concept of ‘realization’ as representing the re-contextualization of information that has been generalized and abstracted out of a variety of different contexts. Let’s go back to the sentence we started with:
Example (4.1)
Here, the predicate ‘is a’ expresses the realization of a type. The identity of the type is provided by the word ‘son’ and the identity of its basis is provided by the word ‘Rāma’. But, notice that there is an asymmetry between how the two are identified. The type being realized is identified by providing a word for which it is the reference partitioner, whereas the basis is identified by providing a word of which it is the referent.
To eliminate this asymmetry, we could restate the sentence (4.1) as:
Example (4.2)
Now, both type and basis are identified by words that refer to them.
Sentences of this form are called assertions [प्रतिज्ञाः; pratijñāḥ]. All assertions contain exactly two parts: a word referring to a basis and a word referring to a type that is being asserted to be realized by that basis. In the traditional Sanskritic notation, the word referring to the basis is marked with the locative [सप्तमी; saptamī] or genitive [षष्ठी; ṣaṣṭhī] case and the word referring to the type is marked with the nominative [प्रथमा; prathamā]. For our roman script representation we will use the syntax:
So, we can represent (4.2) using this notation:
Example (4.3)
In Example (4.3), the basis happens to be a specific individual. But, we can also write something like:
Example (4.4a)
Notice that the basis in (4.4a) is referred to by a common noun rather than a name. In this case, the subject is identified ambiguously, as just “some man”, without specifying who precisely we’re referring to. All that’s being asserted is that the two types manλ and sonλ are co-realized [समानाधिकरणौ; samānādhikaraṇau]. In other words, there exists some object that is the basis of both manλ and sonλ.
But, this isn’t anything exceptional. The same is true of (4.3), which just says that the types rāmaλ and sonλ are co-realized. It just so happens that rāmaλ uniquely identifies a single object.
So, in the jargon of quantificational logic, the Navya assertion expresses an existentially quantified proposition. But, this happens not by quantifying over a variable, as in Western approaches to the problem, but by specifying how the type involved in the assertion interacts with the reference partitioner of the basis over which it is asserted. This difference is important to keep in mind; because, while it’s easy to think of Navya nouns as if they’re just variables with implicit quantifiers and domain restrictions, thinking of them this way makes it hard to understand what happens when you combine nouns together into larger phrases. The lesson is that the algebra of Navya expressions is just different from that of propositions in contemporary Western forms of logic.
Putting it all together: the Navya assertion provides a formal representation of the minimal “sentence-shaped” cognition that we began this section talking about. Importantly, each Navya assertion is itself independent and self-contained. There are no assertion-level operators. Sentential connectives like ‘and’, ‘or’, ‘if’, etc. are understood as operating at the level of types and never at the level of whole assertions. This is another big difference from contemporary quantificational logic with its concept of a ‘proposition’.
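Since we’ll keep leaning on this idea of co-realization, here’s a deliberately crude sketch of it in Python (my own illustration, not anything traditional): model each type extensionally, by the set of bases that realize it, and read an assertion as the overlap of two such sets. This flattens away the intensional side of types, but it makes the quantifier-free character of the assertion easy to see.

```python
# Crude extensional model (illustration only): a type is just the set
# of bases that realize it.
rama = {"rama"}                      # rāmaλ happens to pick out one basis
man = {"rama", "dasharatha"}         # manλ extends over many bases
son = {"rama", "dasharatha"}         # sonλ likewise

def co_realized(T, U):
    """True if some single basis realizes both types."""
    return bool(T & U)

print(co_realized(rama, son))   # True: "Rāma is a son"   (cf. 4.3)
print(co_realized(man, son))    # True: "a man is a son"  (cf. 4.4a)
```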
The assertion is the fundamental kind of “sentence” in the Navya language and, at some level, all sentences can be boiled down to assertions. Still, because the basic purpose of the Navya system is to analyze concepts, the notion of a “definition” deserves some special attention.
The term definition [लक्षणवाक्यम्; lakṣaṇavākyam] in the context of Navyanyāya should be distinguished from a defining characteristic [लक्षणम्; lakṣaṇam]. A defining characteristic (or just characteristic, for short) is the real device by which Navyas articulate the analysis of a concept. A definition simply expresses the equivalence of the definitional subject [लक्ष्यम्; lakṣyam], i.e. the thing being defined, with a defining characteristic. Both of these two, the subject and its characteristic, are objects—not words or phrases. Specifically, they’re types. So, in other words, a definition expresses a relationship, specifically a relationship of “sameness”, between types.
But, types are finicky creatures and being clear about what exactly it means for two types to be “the same” is tricky. So, what’s being asserted is not exactly a sameness of the identity of the types but, rather, a sameness of their bases. In other words, two types are equivalent, in this sense, if every basis that realizes one type also realizes the other. In the jargon of Navyanyāya, we say the two types partition [अवच्छिन्दाते; avacchindāte] one another.
Notice that this is a very weak notion of definition. Under this conception, a definition is not something that tells you what a word “means”, nor does it explain the “nature” of a concept. It just tells you under what conditions you can apply a given concept and what things you can infer from it.
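In the crude set-of-bases sketch from above, this weak notion of definition is just mutual containment of bases; a minimal illustration:

```python
def partitions(P, T):
    """P partitions T: P's bases never extend beyond T's."""
    return P <= T

def equivalent(T, P):
    """A 'definition' holds iff subject and characteristic partition each other."""
    return partitions(P, T) and partitions(T, P)

# Two types with exactly the same bases count as equivalent, even if
# they are presented under different names:
T = {"a", "b"}
P = {"a", "b"}
print(equivalent(T, P))   # True
```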
The standard way in which Navyanaiyyāyikas write definitions is by placing the subject and its characteristic together in a single sentence with both terms in the nominative case (prathamā vibhaktiḥ). Since this format is a bit ambiguous (for one, it’s not always clear which one is the subject and which one is the characteristic), and because the same form is used for another type of Navya construction as well (which we’ll see in just a bit), I’m going to use the following format:
Example (5.1)
So, what about an example? Here’s a fairly contrived one, to make the point.
Example (5.2)
Notice that the terms ‘descendant’ and ‘progeny’ are synonymous. So, we can define the first in terms of the second.
The problem with this sort of example is that it seems like you can only write a definition if you have a 1:1 correspondence between two types. But, really, we want to be able to analyze a concept in terms of the relationships between multiple other concepts, not just replace one concept with another. What we need, then, is a way to construct new types by putting existing types together.
Let’s consider the following definition: “a daughter is a female descendant”. Here, the subject is दुहिता / daughter. The definition features two types: स्त्री।त्वम् / femaleλ and अपत्य।ता / descendantλ. But, notice that daughterλ is not equivalent to either element of the definition on its own. Instead, we could say that daughterλ is a subtype of descendantλ; specifically, that subtype which is also a subtype of femaleλ. In Nyāya jargon, we express the idea of ‘a subtype’ using the concept of partitioning [अवच्छेदनम्; avacchedanam].
Partitioning is one of those infamous Navya concepts that’s endlessly confusing but central to how the whole machinery works. We’ve already come across this concept once before, in the phrase ‘reference partitioner’. As we’ve already indicated, the simplest way to think about this is as a device for subtyping generic types. To be very precise, a partitioner [अवच्छेदकः; avacchedakaḥ] of some type T is a type whose basis never extends beyond that of T [अनतिरिक्तवृत्तिः; anatiriktavr̥ttiḥ]. Given some partitioner P, we can define a subtype of T by constraining the scope of its realization to that of the bases of P. We call this subtype a ‘P-partitioned T’ [प।अवच्छिन्न टः; P।avacchinna T].
But, what if we want to restrict T by a type Q that overlaps with, but isn’t contained by, T? In this case, it would be more accurate to say that we partition T by a subtype of Q (i.e. a ‘partitioned’ Q). But, as we’ll see in a later section on another infamously confusing Navya concept called “clarification”, the ambiguity inherent in the expression ‘some subtype’ is already built into how Navyanyāya operators work within the overarching framework. So, it works to just say ‘Q-partitioned T’.
The most explicit way to partition a type is using the partitioning operators. Navyanyāya provides two: अवच्छेदक [-avacchedaka-; partitioner] and अवच्छिन्न [-avacchinna-; partitioned], which can be suffixed to the base type, just like with the abstraction operators. And, again, we’ll represent these with Greek letters: Σ for the partitioner and Ξ for the partitioned type.
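In the toy set-of-bases model, the partitioned type is just an intersection; a quick sketch (again, my illustration, not the tradition’s own formalism):

```python
def partitioned(P, T):
    """PΞ T: the subtype of T restricted to the bases that also realize P."""
    return P & T

# Illustrative bases only:
female = {"x1", "x2"}
descendant = {"x2", "x3"}
print(partitioned(female, descendant))   # {'x2'}: the 'female descendant'
```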
With this, we can define daughterλ in terms of a pair of assertions:
Example (5.3a-b)
Note the double abstraction. The phrase Σλ / अवच्छेदक।ता is a common idiom for asserting that one type is the partitioner of another.
While the above approach does work, it has some issues. For one, it’s clunky and unnecessarily verbose. But, worse than this, it doesn’t actually state a definition of daughterλ, since it leaves open the possibility that the definition of daughterλ must still involve additional properties not stated above. For this reason, the Navya framework provides a way to explicitly construct a type using a syntax that invokes partitioning under the hood. This is accomplished via a syntactic relation called coordination [सामानाधिकरण्यम्; sāmānādhikaraṇyam].
Two words are in coordination if they are either (1) in apposition [सहप्रयोगः; sahaprayogaḥ] or (2) coreferential with a pronoun. The word ‘apposition’ is just the name of a particular kind of syntactic relationship between nouns or noun phrases that, in the context of Sanskrit, is indicated either by giving both constituents the same case endings or by putting them together into a compound type called karmadhārayaḥ [कर्मधारयः]. When we cite Navya expressions with appositional syntax, we’ll express apposition between two noun stems W and V by putting them next to each other, like in a compound, but with a space separating them. We’ll do this for both the roman script and Sanskritic versions.
Example (6.1)
The second way to put words in coordination is using pronouns; but we’ll talk more about this a bit later.
The bottom line is, coordinative relations impact the scope of the words in coordination. Namely, they place a constraint on the reference of the words in coordination, in addition to the one placed on them by their reference partitioner. This constraint has two parts, one directly involving the referents of the words and the other involving their reference partitioners. For the constraint on the referents directly: whatever any one word in coordination refers to, the other must refer to the same thing. Coordinated words are co-referential.
For the constraint involving their reference partitioners: recall that, on its own, a word W has a scope defined by some type Wλ that acts as its reference partitioner. Let’s call this the intrinsic reference partitioner of the word. When that same word is placed in coordination with some other word V, to get a coordinate phrase (V W), then the reference of each word, W and V, is additionally partitioned by the intrinsic reference partitioner of the other word. For example: the reference of W in (V W) is partitioned by both Wλ and Vλ.
We can use the λ operator to extract the reference partitioner of the in-context word by attaching it to the outside of the coordinate phrase: (V W)λ.
Because syntactic relations in Navya metagrammatical theory are always defined between words (and never between phrases), we need a rule to determine how syntactic relations are resolved in sentences containing embedded phrases. Every Navya phrase has a ‘head’ [प्रधानम्; pradhānam] element. For a coordinate phrase, the head is the element on the right:
Example (6.2)
The rule is that syntactic relations involving embedded phrases are taken to occur between the heads of the respective phrases. So, in the phrase (V W)λ, the abstractional operator λ takes the head element W as its argument. However, this head element is considered in context; i.e. its reference partitioner is first modified by the coordination relation and only then extracted by the abstractional operator. Moreover, stacked operators are always left-associative. So, Wλλ means (Wλ)λ and not W(λλ).
With this in place, we can write a better definition for ‘daughter’ in terms of ‘descendant’ and ‘female’:
Example (6.3a)
One note: in later Navya style, the preference is to state things in terms of the types, rather than their bases. So, the idiomatic way to write the definition is:
Example (6.3b)
Appositional syntax also provides a different way of writing a simple assertion. Recall the basic format for an assertion, involving some type T and some basis b:
Example (6.4)
Here, we assert that T is realized in b. But, we can also drop the basis term to assert just that T is realized, without specifying where:
Example (6.5)
On its own, this might not be so useful. But, when we have constructed types, then this sort of assertion makes a substantive statement, namely that a given type ‘occurs’. Or, in the jargon of contemporary type theory, that a type is ‘inhabited’.
For example, to assert that the type daughterλ is inhabited, we could assert:
Example (6.6)
Now, notice that this is equivalent to asserting:
Example (6.7)
This gives us a basic equivalence between coordination of two words and the realization of a type at a basis. But, can we do this in reverse? E.g., can we express something like:
Example (6.8)
as a coordinate phrase?
To do this, we need something like a reverse abstraction operator—a realization operator. Luckily, the Navyas give us two:

- The suffix मतुप् [-matup] or the word अधिकरण [-adhikaraṇa-] can be suffixed to a word referring to a type to generate a word referring to its basis; e.g. वर्ण।वत् [varṇa।vat] or वर्ण।अधिकरणम् [varṇa।adhikaraṇam] refers to the basis of color. For the roman script representation, let’s use the Greek letter ι; e.g. colorι.

- The words निष्ठ [niṣṭha] or वृत्त [vr̥tta] can be suffixed to a word referring to a basis to generate a word referring to a type realized at that basis; e.g. घट।निष्ठम् [ghaṭa।niṣṭham] or घट।वृत्तम् [ghaṭa।vr̥ttam] refers to a type realized in a pot. For the roman script representation, let’s use τ; e.g. potτ.
With this, we can rewrite (6.8) as either of:
Example (6.9a-b)
We can summarize these relationships in terms of two rules, which together provide something like an internal definition of abstraction and realization.
First, for any two nouns p and b, the following assertions are equivalent; i.e. any one can be deduced from the other:
Rule (T.1) essentially defines the concept of a reference partitioner: a word refers to an object if and only if that object realizes its reference partitioner.
Second, the following identity holds for all b:
Rule (T.2) is sometimes called the “tattvavān tad eva” [तत्त्ववान् तदेव] rule and connects the concept of type abstraction with realization. Recall that when we stack operators, the order of operations is left-associative; i.e. ‘xλι’ is interpreted as ‘(xλ)ι’.
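As a sanity check, here is how (T.2) plays out in the toy model, reading the abstraction bλ as the singleton type realized by exactly b (my reading; the rule itself is stated abstractly):

```python
def abstract(b):
    """bλ: the type realized by exactly the basis b."""
    return frozenset({b})

def realize(T):
    """Tι: the set of bases realizing the type T."""
    return set(T)

# (T.2) "tattvavān tad eva": the basis of a thing's abstraction is that
# thing itself, i.e. bλι gets you back to b (here, the singleton {b}).
assert realize(abstract("pot")) == {"pot"}
print("bλι = b holds in the toy model")
```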
From (T.1) and (T.2), we can derive another core equivalence. Given any two symbols p and b, the following assertions are equivalent:
The partitioning relation is symmetric:
And transitive:
If,
Rule (P.2a)
Then:
In addition, the concept of partitioning is connected in an intimate way to the concept of abstraction.
Given any two types T and P, if the type PΞ T / प।अवच्छिन्न टः exists, then the following assertions are always true:
Rule (P.3)
(P.3) captures the idea that partitioning is a form of subtyping. So, the partition of a type still falls under the same ‘abstraction’ and can be referred to by the same name as the base type. This rule also allows us to draw an analogy between Navya type abstraction and a concept in modern set theory: if we think of a type as analogous to a set, then abstraction is analogous to the powerset function.
Coordination connects abstraction and partitioning in another basic way:
The symmetry and transitivity of partitioning imply that coordination is associative and transitive:
Let’s go back to the very first sentence we tried to analyze:
Example (7.1a)
So far, we can represent only a part of the content of this sentence in the Navya framework:
Example (7.1b)
But, obviously there is some information that is present in (7.1a) which is missing in (7.1b).
One way to get at this is to take a look at this related sentence:
Example (7.2a)
Where (for those who don’t know) ‘Pāṇḍu’ refers to the father of the protagonists of the other great Sanskrit epic, The Mahābhāratam.
Now, unlike with (7.1a), we can’t simply drop out the reference to ‘Pāṇḍu’ and still get a true assertion:
Example (7.2b)
How do we explain the difference between (7.1) and (7.2)? Recall that in a quantificational approach to logic, we would say the difference is due to the scope of the negation operator. So, we can express (7.1a-b) in quantificational logic as:
Example (7.3a-b)
In the same vein, (7.2a-b) could be expressed as:
Example (7.4a-b)
Recall that ‘¬’ is the negation operator (representing ‘not’) and ‘∃’ is the existential quantifier (representing ‘some’).
Or, using the symbol S for ‘son’, r for ‘Rāma’, and d for ‘Daśaratha’, and representing the expression a son b in prefix notation as Sab, we can rewrite this as:
Example (7.4a-b)
Note that (7.3b) does not contradict (7.4b) because the negation operator is inside the scope of the existential quantifier ∃. In this case, we say that the sentence is de re, meaning that we first bind the variable to some individual and then apply the negation: “There is some x such that Rāma is not the son of x.” Naiyyāyikas call this type of negation selective negation [विशेषाभावः; viśeṣābhāvaḥ].
Compare this to:
Example (7.4c)
Here, the negation is outside the quantifier. We call this sentence de dicto, meaning that the negation is applied before binding the variable x to any particular individual: “There is no x such that Rāma is the son of x.” Naiyyāyikas call this generalized negation [सामान्याभावः; sāmānyābhāvaḥ].
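Lining the two up in the notation we’ll use again later, the contrast is just a matter of where the ¬ sits relative to the ∃:

∃x ¬(rāma son x), selective negation (de re): there is some particular x whose son Rāma is not;

¬∃x (rāma son x), generalized negation (de dicto): there is no x at all whose son Rāma is.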
To sum it up: On a quantificational understanding of this problem, the distinction between selective (de re) and generalized (de dicto) negation arises because there is an implicit existential quantifier in the unnegated sentence, “Rāma is a son”. So, depending on where the negation is placed with respect to the quantifier, we get a different reading of the sentence. But, the logic of Navyanyāya is not quantificational; so, we need a different way to account for the ambiguity between selective and generalized negations.
Navyanaiyyāyikas call types like sonλ, which have “implicit quantifiers”, unclarified types [अनिरूपितधर्माः; anirūpitadharmāḥ].
If a type T is unclarified, then there exists a clarifying type C [निरूपकधर्मः; nirūpakadharmaḥ] such that the realization of T at some basis a necessitates the realization of C at some basis b, and vice versa. In this case, we say that the realization of T in a is clarified [निरूपितम्; nirūpitam] by the realization of C in b. Note that the clarified-clarifying relation between types is symmetric, just like with partitioning. If C clarifies T, then T clarifies C.
An example will probably help. Take the unclarified type sonλ. It has a clarifying type parentλ. For every individual a such that a::sonλ, there exists an individual b such that b::parentλ. In other words, you can’t have a descendant without some corresponding ancestor, and vice versa.
We express type clarification in the Navyanyāya framework via—you guessed it—a set of operators. In this case, we have:

- निरूपक [-nirūpaka-, ‘clarifier’], which is suffixed to a word referring to an unclarified type and refers to its clarifying type. Let’s use the Greek symbol Δ for this.

- निरूपित [-nirūpita-, ‘clarified’], which is suffixed to a word referring to the clarifying type and refers to the clarified type. Let’s use the symbol Θ for this.
There’s an important relationship between clarification and partitioning that’s worth discussing before we go further, because it’s essential for understanding how the system works as whole.
Within the universe of Navya types, we can distinguish broadly between “analyzable” [सखण्डः; sakhaṇḍaḥ] and “unanalyzable” [अखण्डः; akhaṇḍaḥ] types. Among unanalyzable types, we can further distinguish between “positive” [भावः; bhāvaḥ] and “negative” [अभावः; abhāvaḥ] types. Analyzable types can contain both positive and negative types in their definition, so they don’t always fall neatly into one category or the other. Regardless, the difference between positive and negative types can be expressed in terms of a relationship between clarification and partitioning.
Given any positive type T and any type C, if CΞ TΔ exists, then:
In other words, partitioning the clarifier of a positive type gives you the clarifier of a partitioner of that type. Intuitively, subtyping the clarifier gives you a subtype of the unclarified type.
If, however, the unclarified type T is negative:
In other words, the subtyping relation is reversed when partitioning the clarifier of a negative type; i.e. subtyping the clarifier gives you a supertype of the unclarified type.
To see the point of this: when we apply a positively signed operator to an unclarified type, clarification of its argument strengthens the assertion. But, if the operator is negatively signed, then clarifying its argument results in a weaker assertion. Unless otherwise specified, we can assume that an unanalyzable type is positively signed. But, the default negation operator (as we’ll see in a bit) has negative valence. This is why removing information (i.e. dropping the clarifier) from its argument results in a stronger (more restrictive) assertion.
With the notion of clarification in hand, we can rewrite the original sentence “Rāma is Daśaratha’s son” as:
Example (7.5a)
Note how the expression raghuτ implicitly partitions the clarifier fatherλ. The expression Tτ D / ट।निष्ठ ड is equivalent to (TτλΞ Dλ)ι / (ट।निष्ठ।ता।अवच्छिन्न ड।त्व)वत्; or simply TλΞ D / ट।त्व।अवच्छिन्न ड.
Because these expressions are getting long, let’s also use single-character symbols for our Navya expressions: r for ‘Rāma’, d for ‘Daśaratha’, F for ‘father’, and S for ‘son’:
Example (7.5a)
Still, writing it this way is a bit annoying, since you need to explicitly name the clarifying type. But, using the clarifier operator, we can write:
Example (7.5b)
Ok, but this is still super clunky. It would be nice to be able to just drop the redundant reference to Sλ / पुत्र।त्वम् in the embedded phrase. Navyanaiyyāyikas agree; so, they provide the operators:
- क [-aka-], which is suffixed to a word referring to an unclarified type and refers to the basis of the clarifying type. Let’s use the Greek symbol κ for this.

- ईय [-īya-; छ acc. Pāṇini], which is suffixed to a word referring to the basis of a clarifying type and refers to the clarified type. Let’s use the symbol Π for this.
As an aside for those of you following along with the Sanskrit: Naiyyāyikas often just use the words निरूपक and निरूपित in the same way as क [-aka-] and ईय [-īya-]; i.e. they will write दशरथ।निरूपित instead of (दशरथ।निष्ठ पुत्र।त्व।निरूपक)निरूपित. This is obviously a bit confusing, since it makes the terms निरूपक and निरूपित ambiguous between what are effectively two different operators. Still, you can usually tell from context what they mean, as long as you’re aware that they’re used both ways.
These two sets of operators can be interchanged via the following rules:
In any case, we can simplify (7.5b) using this notation:
Example (7.5c)
Type clarification is so fundamental to the Navyanyāya system that, like with partitioning, the language provides dedicated syntax to express this without having to use explicit operators. This syntax is called subordination [उपसर्जनम्; upasarjanam] and is implemented in a way roughly analogous to apposition (for partitioning).
In order to express such a syntactic relation, with b as subordinate [उपपदम्; upapadam] and p as principal [प्रधानम्; pradhānam], Naiyyāyikas mark b with the genitive case (ṣaṣṭhī vibhaktiḥ), with p as its argument. Alternatively, they put the two together into a ṣaṣṭhītatpuruṣaḥ [षष्ठीतत्पुरुषः] compound, with b as the first element.
In the roman script representation, let’s express subordination in a form that mimics the compounding syntax. But, for clarity, we’ll separate the two component word stems by the ‘:’ symbol. So, the format is:
Example (8.1a)
For brevity, when the first member of the compound is in parentheses, the ‘:’ can be dropped; e.g. a:x or (a)x:
Example (8.1b)
And, when all nouns in a formula are single characters in length, we’ll just drop the ‘:’ (so that two characters next to each other without a space are taken to be distinct nouns in a subordination relationship); e.g.
Example (8.1c)
Let’s go back to our example sentence:
Example (8.2a)
Here, we already noted that sonλ is an unclarified type with daśaratha as its clarifier. So, we can use subordination to write this as:
Example (8.2b)
Or using abstraction and the standard ‘::’ syntax for assertions:
Example (8.2b)
Note that subordination is also left-associating: daśaratha:sonλ = (daśaratha:son)λ. Additionally, we can write (8.2b) using the single-character nouns from earlier:
Example (8.2c)
One interesting point about subordination: you may have noticed that the syntax for subordination looks a lot like applying an operator. This isn’t a coincidence. Put simply, Navyanyāya operators are no more or less than unclarified types. They can even be used as standalone words; e.g. τ / निष्ठः, which refers to the type of all realizable entities (i.e. the type of all types). This means that Navyanyāya operator application just is type clarification, and the syntax used to represent operator application just is subordination. The bottom line here is that the syntax of the Navyanyāya language is extremely simple; everything is achieved with just coordination, subordination, and assertions. Logical operators aren’t special syntactic devices, nor are typical logical connectives like conjunction, disjunction, negation, etc., treated in any special way by the syntax; they just arise naturally as forms of syntactic subordination. That’s pretty cool!
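To see just how small this syntax is, here’s a toy encoding of it in Python (names and structure are mine, purely illustrative): two expression-forming devices plus the assertion, and nothing else.

```python
from dataclasses import dataclass
from typing import Union

Expr = Union["Noun", "Coord", "Subord"]

@dataclass(frozen=True)
class Noun:
    stem: str                      # operators (λ, τ, ⊥, ...) are nouns too

@dataclass(frozen=True)
class Coord:                       # coordination / apposition: (V W)
    left: "Expr"
    right: "Expr"

@dataclass(frozen=True)
class Subord:                      # subordination: b:p (also operator application)
    subordinate: "Expr"
    principal: "Expr"

@dataclass(frozen=True)
class Assertion:                   # b :: T, the only kind of "sentence"
    basis: "Expr"
    type_: "Expr"

# daśaratha:sonλ parses left-associatively as (daśaratha:son)λ:
dasarathas_son = Subord(Subord(Noun("daśaratha"), Noun("son")), Noun("λ"))
sentence = Assertion(Noun("rāma"), dasarathas_son)
print(sentence)
```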
Now for a few rules about how to manipulate subordinate phrases.
By convention, subordination is left-associative. So:
In this case, A is the subordinate of B, which is the subordinate of C.
Also, subordination distributes over coordination on the right-hand side, if the unclarified type has positive valence:
Note that the reverse direction does not hold:
Instead, for factoring out the principal, we get the weaker rule:
Also, stacked subordinates commute:
Note that the subordinate and principal do not commute:
Lastly, coordination relates to partitioning via the identity:
Remember how we first introduced the concept of type clarification in the context of talking about negation? Let’s turn back to that issue now.
Recall that, in quantificational logic, we represent a negative statement using a negation operator, e.g. ¬Srp. This is a propositional operator; i.e. it operates on a whole sentence, not a single term. But, as we mentioned at the beginning, there are no assertion-level operations in the language of Navyanyāya. Instead, in order to express negative facts, we have to use a negation type.
Near the end of the first section talking about clarification, we introduced the idea of positive and negative types based on a difference in how clarification and partitioning interact between the two. Specifically, whereas clarified positive types are partitioners of their unclarified counterparts, the directionality of the partitioning is transposed for negative types: the unclarified type is the partitioner of its clarified counterpart. Intuitively, this means that an unqualified negative assertion, e.g. “Rāma is not a son”, is a special case of a qualified negative assertion, e.g. “Rāma is not Pāṇḍu’s son”.
By default, unanalyzable (i.e. primitive) Navya types have positive valence. In order to define a negative type, we build it out of a small set of primitive negative types. These objects are used to represent a variety of logical structures, including intuitively ‘negative’ concepts like falsity and contradiction, but also less intuitively negative things like tense-modal operators (e.g. ‘always’, ‘sometimes’) and logical connectives like disjunction (i.e. ‘or’) and entailment.
Here, we’ll just focus on the two most fundamental types: the pure negative [अत्यन्ताभावः; atyantābhāvaḥ] and the co-negative [अन्योन्याभावः; anyonyābhāvaḥ].
Negative types are a special case of a broader category called two-place [उभयनिष्ठः; ubhayaniṣṭhaḥ] types. Whereas any single instance of a normal ‘one-place’ type has just one basis, a two-place type involves two bases simultaneously.
An example of this sort of type is a two-place relation. Let’s define the word P for ‘production’ [उत्पत्तिः; utpattiḥ], which refers to a relation of cause and effect between two objects. Note that although P involves both a cause and an effect (i.e., it’s a two-place type), there is an asymmetry between how it relates to each one. We can have the production of smoke by fire; but, typically, not vice versa. In other words, although a two-place type has two bases, it isn’t realized the same way in each one. We’ll refer to its ‘dominant’ basis as its substrate [अनुयोगि; anuyogi] and the other as its correlate [प्रतियोगि; pratiyogi]. So, s (smoke) is the substrate of a P whose correlate is f (fire).
Or, in Navyanyāya speak, using the operators अनुयोगि [anuyogi] and प्रतियोगि [pratiyogi], which we’ll represent with the Greek letters α and π, respectively:
Example (9.1.1)
Because expressions like (9.1.1) are somewhat clunky, Navyanaiyyāyikas obey the following conventions to simplify the syntax (i.e. a bit of syntactic sugar). Given a word referring to a two-place type, e.g. production, when we say the basis of production (e.g. उत्पत्ति।मान्; productionι), we just refer to its substrate, unless ιλ is explicitly partitioned by πλ (as in Example 9.1.1). In addition, given a word referring to a two-place type, Naiyyāyikas will use the instrumental [तृतीया; tr̥tīyā] case (sometimes, genitive [षष्ठी; ṣaṣṭhī]) to specify its correlate. However, they can also use subordination syntax with the correlate as the subordinate member.
So, (9.1.1) can also be expressed as:
Example (9.1.2)
Briefly on multi-place types: they’re just extensions of two-place types with multiple correlates. In this case, to specify each correlate separately, Navyanaiyyāyikas use a syntax called collation [द्वन्द्वः; dvandvaḥ], typically represented by a dvandva compound, with the whole thing either taking the appropriate case ending or being embedded in another compound. We’ll represent this similarly, but with ‘,’ separating each element of the collation.
The simplest example of a multi-place type is 2 / द्वय [-dvaya-], which is realized in a given basis if and only if each of its two correlates is also realized there.
One common use of a pair is to express the conjunction of two types, i.e. the ‘and’ operator:
Example (9.1.3)
means that p realizes both T and D.
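In the toy set-of-bases model from earlier, the pair behaves like an intersection; a quick sketch:

```python
def pair(T, D):
    """2(T, D): realized in a basis iff that basis realizes both T and D."""
    return T & D

T = {"p", "q"}
D = {"p", "r"}
print("p" in pair(T, D))   # True: p realizes both T and D, cf. (9.1.3)
```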
In simple terms, a pure negative represents falsity. In other words, it’s used to assert that some bit of information is false or that some concept doesn’t apply in a given context. Like any two-place type, it has a correlate and substrate—sometimes called an absential correlate [अभावीयप्रतियोगि; abhāvīyapratiyogi] and an absential substrate [अभावीयानुयोगि; abhāvīyānuyogi]. An absential correlate represents the information that is being asserted as false and an absential substrate represents the thing about which the information is false. Navyanaiyyāyikas use the operator अत्यन्ताभाव [-atyantābhāva-] to refer to a pure negation with a given correlate. Often, though, they just write अभाव [-abhāva-] for brevity. We’ll use the symbol ⊥. For example, if we want to assert that the assertion ‘Rāma is Pāṇḍu’s son’ is false, we could express this in terms of a pure negative:
Example (9.2.1)
The pure negative is often (in English language literature) called an ‘absence’. So, we could read (9.2.1) as “Rāma realizes a Pāṇḍu-clarified sonλ absence”.
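Following the conventions built up so far, one plausible roman-script rendering of this (my own, for illustration) is rāma :: pāṇḍu:sonλ⊥, i.e. an assertion whose type is a pure negative whose correlate is the Pāṇḍu-clarified type sonλ.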
The co-negative expresses inconsistency. Naiyyāyikas represent this with the operator अन्योन्याभावः [anyonyābhāvaḥ]; or भेदः [bhedaḥ], for short. We’ll represent this with the symbol ˉ.
This idea of ‘inconsistency’ is cashed out in terms of a relationship between inconsistency and falsity. Namely, if two facts are inconsistent with one another, then each one is false where the other is true. We can formalize this idea as follows.
Given two entities A and B, if you assume that:
Rule (C.1)
then, you can infer:
Rule (N.1)
However, the reverse does not hold. Falsity does not imply inconsistency. If some bit of information is inconsistent with some other bit of information, then the one is always false where the other is true. But, because Navyanyāya objects can be specified in an ambiguous way, via types, it is possible for some bit of information to be both true and false of the same thing. The classic case involves objects that are temporally or spatially extended, so that some fact about that object can be true at some time or of some part, but false at some other time or of some other part.
For example, the assertion ‘Rāma is in the City of Ayodhyā’ can be both true and false, because it is ambiguous:
Example (9.3.2a-b)
By specifying the temporal context, we can disambiguate the two cases. This specification can be expressed through partitioning the basis:
Example (9.3.3a-b)
This phenomenon is called incomplete realization [अव्याप्यवृत्तिः; avyāpyavr̥ttiḥ]. A property is incompletely realized in a basis if both it and its pure negation are realized at that basis. Most types are capable of being incompletely realized. Importantly, this includes the type ⊥. On the flip side, the type ˉ is always completely realized in its basis.
The possibility of incomplete realization puts complicated wrinkles in the theory of inference; and, I would argue, a big part of the historical success of Navyanyāya—the reason it came to dominate logical thinking in India since its inception in around the 10th ce—lies in its recognition of these problems and the innovations it introduced to try and grapple with them. Many of the most interesting Navya ideas—the taxonomy and algebra of negative types, the concept of type partitions, etc.—can be traced back to these debates. For example, Udayanācārya (~10th ce), one of the key transitional figures in the shift from the Old Nyāya [प्राचीनन्यायः; prācīnanyāyaḥ] to Navyanyāya, tries to come up with a set of formal criteria sufficient for guaranteeing that a type always occurs completely in its bases. Core elements of these criteria include the lack of self-realizing or cyclically realizing types (i.e. type A is realized in B, which is realized in C, which is realized in A) and the lack of infinite chains of realization. Similar criteria occur in axiomatic set theory as the axiom of foundation (regularity), motivated by concerns about inconsistencies in naive set theory. We’ll take a closer look at how the Navya framework deals with one of the famous cases of such set-theoretic paradoxes—the Russell–Zermelo paradox—in the penultimate section of this article.
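Just to make the well-foundedness criteria concrete, here is a loose sketch (mine, not Udayana’s own formulation) that checks a finite realization relation for self-realizing or cyclically realizing types by walking the “is realized in” edges; infinite chains, of course, can’t be detected over a finite table like this.

```python
def well_founded(realized_in):
    """realized_in maps each type to the set of bases it is realized in."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False                # a realization cycle: ruled out
        visiting.add(node)
        ok = all(visit(b) for b in realized_in.get(node, ()))
        visiting.discard(node)
        done.add(node)
        return ok

    return all(visit(t) for t in realized_in)

# A realized in B, B in C, C back in A: the cyclic case described above.
print(well_founded({"A": {"B"}, "B": {"C"}, "C": {"A"}}))   # False
print(well_founded({"A": {"B"}, "B": {"C"}}))               # True
```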
For now though, let’s look at a few more properties and rules regarding the two negative types:
Given any word T, the types Tλ and Tˉ are contradictory; i.e. they are never realized in the same basis. This rule also allows for defining abstraction in terms of contradiction:
Rule (N.2)
From this, we can also derive the following double elimination law for co-negatives:
Rule (N.3)
So, with all this in place, we can finally express the negative fact:
Example (9.4.1a)
With the Navyanyāya assertion:
Example (9.4.2)
We can see the significance of swapping the directionality of partitioning between positive and negative types when we think about what happens in each case when we drop the clarifier from (9.4.2), versus the positive case.
First, the ordinary language version. Compare:
Example (9.4.3a-b)
with
Example (9.4.4a-b)
In the first instance, the sentences in (9.4.3a-b) are stating more or less the same fact, but just at different levels of specificity. On the other hand, (9.4.4a-b) are saying totally different things; in fact, (9.4.4a) is true, whereas (9.4.4b) is false.
We see the same asymmetry in the Navyanyāya formulae:
Example (9.4.5a-b)
To see why we have this asymmetry between the positive and negative case, we go back to rule (7.5b): descendantλ⊥ is a special case of kuru:descendantλ⊥; i.e. ‘not being a descendant (at all)’ is a special case of ‘not being a descendant of Kuru’. As a result, the generalized negative assertion rāma :: descendantλ⊥ / रामे अपत्य।ता।अभावः is equivalent to the outer negation in the formula of quantificational logic: ¬(∃x rāma descendant x).
So, how do we express the inner negation ∃x ¬(rāma descendant x)? There are a few different ways to do this.
One way is by directly subtyping the parent type ⊥λ (which has positive valence):
Example (9.4.6)
Since this is a bit ugly, Naiyyāyikas define the operator विशिष्टाभावः [viśiṣṭābhāvaḥ; “particularized negation”] as a positive-valence negation type. Let’s represent this with the symbol !. In effect, this allows us to directly express selective negations / de re negations.
So, we can rewrite (9.4.6) as the much simpler:
Example (9.4.7)
If you remember from a few sections ago, we offered the following definition:
Example (10.1)
Except, this sort of definition doesn’t really work for words like ‘daughter’ and ‘son’, if we want to think of them as expressing an unclarified type. For instance, the assertion rāma :: raghu(male descendant) is true. But, rāma :: raghu:son is not! Because, although Rāma is a male descendant of Raghu, there are multiple intervening descendants.
A better definition of daughter might go something like: “a daughter of X is a female descendent of X that is not a descendent of a descendent of X”. In other words, there are no intervening descendents between X and their daughter.
How would we go about saying this? First, let’s define G as ‘daughterλ’, F as ‘femaleλ’, and D as ‘descendentλ’. We already know how to say ‘not a descendent’: D⊥. And ‘not a descendent of a descendent’: (DΘ D)⊥.
With this, we can get most of the way there:
Example (10.2)
But this says that a daughter is a female descendent who is not the descendent of a descendent of anyone at all, not just of X. The constraint is too strong!
We need to specify that the clarifier of (DΘ D)⊥ is the same as that of D in the last term. In other words, we need a way to express coordination relations between nodes in the syntax tree that are not in apposition. In Navyanyāya, we express such coordination at a distance using pronouns [सर्वनामानि; sarvanāmāni].
Unlike many ordinary languages, the language of Navyanyāya doesn’t really have a notion of different ‘parts-of-speech’. There’s no syntactic difference between the different words in the language; even operators (as we saw earlier) are just ordinary nouns in subordination relations. So, although grammarians of standard Sanskrit recognize different categories of words including nouns [नामानि; nāmāni], verbs [आख्यातानि; ākhyātāni], adjectives [विशेषणानि; viśeṣaṇāni], adverbs [क्रियाविशेषणानि; kriyāviśeṣaṇāni], adpositions [उपपदानि; upapadāni], particles [निपातानि; nipātāni], and so on, the language of Navyanyāya has only one category, which Naiyyāyikas tend to treat as just nouns.
But, while there is no syntactic difference between different words within Nyāya, we can sub-categorize them based on how they refer to things. In other words, there is a semantic and logical difference (because of how reference partitioners operate as logical devices) but no syntactic difference between categories of words.
We already saw that we can distinguish between common nouns and proper names based on the scope of their reference. In addition, we can add a third category of words called pronouns [सर्वनामानि; sarvanāmāni] to this list. Pronouns can be subdivided into broadly three categories: anaphora, demonstratives, and pure indexicals. The underlying principle behind all these pronominal categories is that whereas, e.g., common nouns relate objects to their properties or actions, pronouns relate objects to elements of the sentences or speech contexts in which they are embedded. They are, in a certain sense, metalinguistic devices, bringing elements of the speech context into statements about objects.
Of these, pure indexicals are the least important for understanding the Navyanyāya system. These are basically words whose meaning depends on things like the time, location, and identity of the speakers and addressees of a sentence. Words in this category include ‘now’, ‘here’, ‘past’, ‘future’, ‘I’, ‘you’, etc. Although they are philosophically interesting and have generated a lot of discussion among Naiyyāyikas, they hold no special place in their logic. As such, I won’t be delving much more into them.
The other two categories are more important, though, and deserve some dedicated discussion.
In the case of anaphora, represented in Sanskrit by the word ‘tat’ [तत्] and its derivatives, we can just think of these as devices for expressing coordinative relations between words not in apposition.
Simply put, an anaphoric pronoun is in coordination with some other word or words regardless of where it actually occurs in an expression. One point of potential confusion with this is that, like with natural language, the word(s) that the pronoun is in coordination with may be unmarked or even implicit.
Take example (10.2) above. The Sanskrit version of this formula looks like:
Example (10.2.1a)
Adding in a pronoun to specify the clarifier of the negative descendent type, we would have:
Example (10.2.2a)
Let’s represent an anaphoric pronoun with a circumflex-accented letter, e.g. ‘t̂’. Then, a near-literal translation of the original Sanskrit into the roman-script notation would look like:
Example (10.2.2b)
Here, the t̂ is in coordination with an implicit word acting as the clarifier of the positive descendent type in the second term. To make this implicit word explicit, let’s introduce a dummy word p referring to the clarifier of the type G being defined:
Example (10.2.3)
Note that sometimes Naiyyāyikas will put in the implicit word themselves, but leaving it implicit is idiomatic. Still, the ambiguity isn’t settled, since it’s not clear which word the pronoun t̂ is supposed to be in coordination with. We can make this explicit by introducing a second t̂ in apposition to p:
Example (10.2.4)
The rule here is that within the same ‘lexical scope’ (typically, the same phrase), pronouns of the same form are always in coordination. What if we need two different anaphoric pronouns? We can just use different letters to represent different pronouns; but since the Sanskritic notation uses actual Sanskrit pronouns, this can be an issue. In general, Naiyyāyikas don’t necessarily bother to disambiguate; sometimes they’ll use a different pronoun (e.g. ‘idam’ [इदम्]) instead, but sometimes you just have to go by context or commentary.
Lastly, given that p is now redundant, we could just drop it:
Example (10.2.5)
There’s still some pronoun ambiguity left. Specifically, if we want to understand G as an informable type, it’s not clear which of the points in the definition the clarifier would correspond to. For example, the expression dG, where d stands for ‘Daśaratha’, could mean any one of:
Example (10.2.6a-c)
How do we specify which one we mean? Again, you have to infer this from context in the Sanskrit notation, but for our purposes we’ll use the following syntax:
Given some expression of the form
Example (10.2.7)
The clarifier of this type would be in coordination with the pronoun t̂. In addition, in this sort of expression, t̂ has scope only within the parentheses.
For brevity (and to try to match the Sanskrit as closely as possible), if there is only one variable within a phrase, it is assumed to be the clarifier of the whole phrase and the preceding [] can be dropped. Also, if there are no pronouns in the construction, then the type being clarified is the referent of the head element (the right-hand element for both coordinate and subordinate phrases).
So, we can define daughter as:
Example (10.2.8)
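As a cross-check on (10.2.8), here’s a sketch over an invented family tree, with a shared Python variable standing in for the coordinating pronoun t̂; the unanchored version from (10.2) is included to show how it over-rejects. All names and data here are hypothetical.

```python
# Sketch of the corrected daughter definition over an invented family.
# parent holds (parent, child) pairs; all data is made up.

parent = {("aja", "dasaratha"), ("dasaratha", "rama"),
          ("dasaratha", "shanta"), ("rama", "lava")}
female = {"shanta"}
PEOPLE = {p for pair in parent for p in pair}

def descendent(x, y):
    """x is a descendent of y: transitive closure of child-of."""
    frontier = {c for p, c in parent if p == y}
    seen = set()
    while frontier:
        if x in frontier:
            return True
        seen |= frontier
        frontier = {c for p, c in parent if p in seen} - seen
    return False

def daughter(x, X):
    """(10.2.8): a female descendent of X with no intervening link;
    the shared X does the work of the anaphoric pronoun."""
    return (x in female and descendent(x, X)
            and not any(descendent(z, X) and descendent(x, z)
                        for z in PEOPLE))

def daughter_unanchored(x, X):
    """The too-strong (10.2): the inner clarifier ranges over everyone."""
    return (x in female and descendent(x, X)
            and not any(descendent(z, w) and descendent(x, z)
                        for z in PEOPLE for w in PEOPLE))

print(daughter("shanta", "dasaratha"))             # True
print(daughter_unanchored("shanta", "dasaratha"))  # False: aja interferes
```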
One last bit on syntax related to anaphoric pronouns.
Roughly speaking, relations of apposition and subordination allow us to express the syntax of a phrase as a tree (called a dependency tree) with the words occurring at the nodes of the tree. For any given node in this tree, its children are in subordination to it and its siblings are in apposition. We can treat the root of the tree as an implicit ‘zero’ word, so that top-level words in apposition can still be treated as siblings.
Within this setup, we can think of anaphoric pronouns as short-circuiting this tree structure by allowing coordination relations between arbitrary nodes in the tree, basically turning it into a dependency graph. But they can do more than this: anaphoric pronouns can cut across phrase boundaries, essentially connecting nodes that belong to entirely different syntax trees.
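A minimal data-structure sketch of that picture (the labels, node layout, and `coordination_edges` helper are invented): subordination edges give a tree, and same-form pronouns within one lexical scope contribute the extra coordination edges that turn it into a graph.

```python
# Schematic sketch: subordination edges form a dependency tree;
# pronouns of the same form within one scope get pairwise
# coordination edges, turning the tree into a graph.

from collections import defaultdict
from itertools import combinations

def coordination_edges(forms):
    """Pair up same-form pronoun nodes (marked here with a '^')."""
    by_form = defaultdict(list)
    for node_id, form in forms.items():
        if form.endswith("^"):          # our ad hoc pronoun marker
            by_form[form].append(node_id)
    return [edge for ids in by_form.values()
            for edge in combinations(sorted(ids), 2)]

# Node 0 is the implicit 'zero' root; 1 and 2 are top-level siblings.
subordination = {1: 0, 2: 0, 3: 1, 4: 2}     # child -> head (the tree)
forms = {0: "ZERO", 1: "absence", 2: "descendent-hood",
         3: "t^", 4: "t^"}                   # two same-form pronouns

print(coordination_edges(forms))             # [(3, 4)]: the cross edge
```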
There are a variety of ways this can happen, but the only one that’s important to us is the case of embedded relative clauses. In this case, the embedded phrase does not expose any principal member as coordinate (i.e. appositional) or subordinate to any word in the embedding phrase. If that were all there was, then such an island phrase would not actually contribute any information to the assertion it is nominally a part of. But, because it can share pronouns with the embedding phrase, it can engage in coordinative relations with elements of the main clause, and impact the content of the assertion that way.
Now, I’m just going to assert without proof (sorry) that phrases involving embedded relative clauses can be transformed using the τ and κ operators (and by taking one of the shared pronouns as a pivot) such that they can be placed directly in apposition with at least one of the instances of that pronoun in the embedding clause. In other words, syntax for embedded relative clauses isn’t strictly necessary for the language. Nonetheless, using relative clauses can make certain constructions (e.g. when using λ to construct anonymous type definitions) simpler and clearer than the alternative. So, Naiyyāyikas sometimes use such expressions, marking the embedded phrase with a relative pronoun [यत्पदम्; yatpadam]. We’ll represent this syntax using a semicolon in the following format:
Example (10.2.9)
A simple example of this is the following definition of τ in terms of ι:
Example (10.2.10)
Ok, one last note, just for the Sanskrit expressions: a pattern that pops up occasionally is when you are constructing an anonymous type definition and you want to refer to the type being defined within the definition. For example, to define the type of all types that occur in themselves (i.e. of all reflexive types); you could do something like this:
Example (10.2.12)
But, the idiomatic way to express this is using the reflexive ‘sva’ [स्व] or ‘ātman’ [आत्मन्]; e.g. आत्म।निष्ठ।त्वम् [ātma:niṣṭha:tvam].
A demonstrative pronoun is the verbal equivalent of pointing at something. In English, we use words like ‘that’ and ‘this’. Such pronouns are like anonymous names: they pick out a particular object and always refer to the same object, so long as you’re in the same lexical scope. Arguably, you could just use names instead, so there doesn’t seem to be much point in having demonstrative pronouns in your system. But, the advantage with demonstratives is that they offer a convenient syntax for ‘particularizing’ generic properties.
For an example, let’s go back to the earlier issue about the two kinds of negation in quantificational logic. When we have a sentence like “Rāma is not the son of someone”, it’s ambiguous whether the ‘someone’ identifies a particular person whom Rāma is not the son of, or whether it means that the assertion “Rāma is the son of someone” is false.
Recall that we could express this difference using generic or particular absences:
Example (10.3.1a-b)
Where r refers to ‘Rāma’ and S refers to the reference partitioner of ‘son’.
Demonstratives (which we’ll represent using a breve, e.g. ‘t̆’, instead of a circumflex) provide another, perhaps more intuitive, way of expressing the same idea by ‘particularizing’ the generic negative type:
Example (10.3.2)
Effectively, this forces a de re reading of the pronoun.
Now that we’ve more-or-less gotten a handle on the language of Nyāya, I’m thinking it might be fun to see how Naiyyāyikas would represent some of the logical structures (like negation, quantification, logical connectives, etc.) that feature in contemporary Western logics. We’ve already touched on some of this, but this is a chance to talk about these topics more directly. Much of what I’ll be discussing is material you can actually find in Nyāya texts. But I’ll try to point out when I bring up an idea or approach that is new, i.e. that can’t be directly traced to some historical source.
We’ve already looked at negation fairly exhaustively, so I won’t belabor the point much. If you have a type T and a basis a, then you can assert that a::T is false by asserting a::T⊥.
The one thing I will re-emphasize is that negation operates on types, always and only types. The closest thing to negating a whole sentence, like a::T, is something like this: (aτ T)⊥.
Assertions are not propositions. You can’t operate on assertions or combine them together to get new assertions. You can, however, operate on or combine types. When Naiyyāyikas want to express complex propositions, they treat these propositions as types and combine them using higher-order types like ⊥ and Σ, etc.
Conjunction [उभयः; ubhayaḥ] is the ‘and’ connective between propositions. We can express this in terms of the three-place type ‘pair’: (T,U)pair is a type that occurs anywhere both T and U occur, and vice versa. We can generalize this to an arbitrary number of constituents, resulting in a multi-place type called a group [कूटः; kūṭaḥ].
An interesting type that sometimes shows up in Navyanyāya texts is the so-called negative group [कूटाभावः; kūṭābhāvaḥ]. The negative group is defined as you’d expect, कूट।अभावः / group⊥, and is present wherever at least one of its constituents is absent.
The type double [द्वयः; dvayaḥ] provides convenient syntax for pairs whose correlates are of the same type. With this, we can represent a conjunction of two negations as ⊥double / अभाव।द्वयः.
In the Vācaspatyam, a dictionary of śāstric terms and their derivations by the great 19th ce Pt. Tārānātha Tarkavācaspati, we find the following definition of disjunction [अन्यतरत्वम्; anyataratvam; the “or” operator]:
Example (10.4.1)
In other words, disjunction is a co-negative clarified by a double co-negative.
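Extensionally, that definition cashes out as a double-negation construction: the disjunction is present at a locus exactly when the double of the two absences fails to be present there. A toy sketch under the same flattened assumptions as before:

```python
# Toy sketch of anyataratva ("or") as the negation of the double of
# the two absences. Flattened extensional model; data invented.

UNIVERSE = frozenset({"sky", "pot", "tree", "lake"})

def neg(t):
    return UNIVERSE - t

blue = frozenset({"sky", "lake"})
vast = frozenset({"sky", "tree"})

# Present exactly where the double of co-negatives is not present:
either = neg(neg(blue) & neg(vast))
assert either == blue | vast      # i.e. ordinary inclusive disjunction
print(sorted(either))             # ['lake', 'sky', 'tree']
```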
Quantification is how we started talking about all of this. So, we’ve finally come full circle.
We’ve already seen how to express an existentially quantified proposition; it’s pretty easy. Let’s take the dummy example: “birds are animals”. To say “some birds are animals”, we assert:
Example (10.5.1)
Where b means ‘bird’ and a means ‘animal’.
What about universal quantification? There are two ways to go about this. First is via negation and the second is via partitioning.
One thing I want to rule out right off the bat. There’s a well-known relationship between the universal and existential quantifiers that you’ve almost certainly come across if you’ve ever taken a course on logic. Namely, the sentence “all birds are animals” is equivalent to “it is not the case that some birds are not animals”. Or, alternatively, the sentences “all birds are animals” and “some birds are not animals” are negations of each other.
Based on this, it would seem like we could express the universal statement “all birds are animals” as a Navya assertion by negating the existential:
Example (10.5.2)
Now, (10.5.2) does translate to “some birds are not animals”. But there’s no way to negate this assertion from the outside; i.e., just stacking on another pure negative gives us:
Example (10.5.2)
translating to “some birds are not not animals”. This just reduces back to b::aλ. A no-go.
The trick is to represent the assertion as a type. We can do this by recalling that what a Navya assertion actually asserts is the co-realization relation between two types. The co-realization relation (represented as ∃) can be defined as follows:
Example (10.5.3)
With this in place, we can rewrite (10.5.2) as a relationship between types:
Example (10.5.4)
We can construct the negation of the co-realization relation in two different ways, via pure and co-negatives:
Example (10.5.5a-b)
The difference, if you recall from the section on negative types above, is that ⊥ can be realized incompletely in its basis (and therefore does not preclude ∃ from also occurring there), whereas ˉ does. We want the second one. Adding in the inner negation, we can define the universal quantifier as:
Example (10.3.5a-b)
With this in place, we can construct the universal statement about birdness and animalness:
Example (10.5.6)
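Here’s the quantificational content of these constructions in a toy extensional model (complete realization assumed, which is exactly the assumption the Navyas go on to complicate): co-realization is non-empty intersection, and the regularity behind the universal reading is subset-hood. Data invented.

```python
# Toy extensional sketch; complete realization assumed throughout.

bird   = frozenset({"eagle", "crow", "penguin"})
animal = frozenset({"eagle", "crow", "penguin", "cow"})

def co_realized(t, u):   # ∃: some basis realizes both types
    return bool(t & u)

def regular(t, u):       # avyabhicāra: no t-basis lacks u
    return t <= u

print(co_realized(bird, animal))   # True: "some birds are animals"
print(regular(bird, animal))       # True: "all birds are animals"
print(regular(animal, bird))       # False: the cow gets in the way
```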
As a quick side note, the type t̂⊥ιτλˉ / तद्।अभाव।वद्।वृत्त।त्व।भेदः has a special name: ‘अव्यभिचारः’ [avyabhicāraḥ], which translates as “regularity”.
This concept of regularity between types gives us a first-pass definition of a very special relation called ‘pervasion’ [व्याप्तिः; vyāptiḥ]. The pervasion relation is the relationship between types that allows for type-inference, i.e. the ability to infer the realization of a type at a basis from prior knowledge that the basis realizes some other type. For example, inferring from “eagles are birds” to “eagles are animals” based on the knowledge that being a bird entails being an animal.
Defining pervasion as regularity goes back a long way, arguably all the way back to Diṅnāga (~6th ce). But a major insight of Navyanyāya was that if we permit incomplete realization of types, then this definition falls apart. Basically, getting the concept of entailment to work in a paraconsistent setting is tricky business, which is why Gaṅgeśa Mahopādhyāya fills up almost a whole volume of the Tattvacintāmaṇiḥ (the “bible” of Navyanyāya) on just this problem.
Lastly, a quick note on expressing generalizations via partitioning, instead of negation:
Example (10.5.2)
That seems way easier; why didn’t we just lead with it? Mainly because Naiyyāyikas think of negation as the more fundamental operation and understand partitioning in terms of it. So, I wanted to specifically bring out the role of negation in expressing generalizations within this framework.
The sort of technique discussed above works okay with a single variable being quantified over, but can it scale up?
Let’s go back to the original example that got us talking about quantifiers.
Example (10.6.1a-b)
Where P refers to the set of people and Dxy means ‘x is a descendent of y’.
(10.6.1a) is easy; we already know how to do this:
Example (10.6.2)
where p means ‘person’ and d means ‘descendent’.
Existential quantification comes for free due to the way assertion works.
What about universal quantification, as in (10.6.1b)? First, notice that there’s an ambiguity in the quasi-English statement of the sentence that isn’t there in the first-order formalism: in the sentence (every person) descendent (some person), it’s unclear if y should be read de dicto or de re. The difference is between whether we’re saying that (a) for each person there is someone who they’re the descendent of, or (b) there is some person that everyone is the descendent of. Clearly, these are different sentences.
The first-order formalism can disambiguate these based on the order of the quantifiers: (a) ∀x∈P ∃y∈P Dxy vs (b) ∃y∈P ∀x∈P Dxy.
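A quick finite-domain check (data invented so the two readings diverge) that the two quantifier orders really do come apart:

```python
# Finite-domain check that quantifier order matters.
# Dxy: x is a descendent of y; the relation is invented.

P = {"a", "b", "c"}
D = {("a", "b"), ("b", "c"), ("c", "a")}

forall_exists = all(any((x, y) in D for y in P) for x in P)   # ∀x ∃y
exists_forall = any(all((x, y) in D for x in P) for y in P)   # ∃y ∀x

print(forall_exists)   # True: everyone here descends from someone
print(exists_forall)   # False: there is no one common ancestor
```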
How does Navyanyāya go about dealing with this? At its heart, the idea is the same in both the Navya and the quantificational approach. We express complex ideas by building up a formula in layers, starting inside and working outward. The difference is, whereas the structural unit in quantificational logic expresses a proposition (when its open variables are bound); for Navyanyāya formulas, the structural unit expresses a type.
So, just as with, e.g., a first-order formula, we build a Navya formula layer by layer, inside out, with the inner formula subordinated to the outer formula. The trick is that, at each layer, we have to be careful about the orientation of the type each formula expresses: what is its basis and what is its clarifier.
For the case of binding y first and then x, the orientation of the type D is already appropriate. So, we can just translate this as:
Example (10.6.3a)
What about binding x first? We just have to swap the direction of the assertion: first, we swap the direction of the type to DΔ [अपत्य।त्व।निरूपक]; then we specify that its clarifier (x) is being universally quantified over. Putting this together, we get the type:
Example (10.6.3b)
Note how we get the existential quantification over the basis (y) for free, since it’s implied by simply asserting that the type in (10.6.3b) is realized.
What if we want to do the same thing, but also universally quantify over y? This should be pretty straightforward by now:
Example (10.6.4)
Ok, but can this scale to three or more variables? For simplicity, we’ll just consider the case of a single three-place relation with a bunch of quantifiers stacked in front, rather than some complicated formula with a bunch of nested propositions. Since all classical first-order formulas, at least, can be rewritten in prenex form, this still covers a large set of cases for translating formulas from quantificational logic to Navya assertions.
Now, given some triadic relation R, there are 48 (3! × 2³) different ways of quantifying over its three arguments; we’re just going to pick one at random to give the gist of things.
Let’s say our formula is:
Example (10.7.1)
Now, how would we translate this into a Navya-style formula? First, notice that most of the analytical tools deployed by Naiyyāyikas are designed for two-place relations. To leverage these tools, we need a way to represent a three-place relation like R in terms of two-place relations.
Naiyyāyikas typically do this by breaking a single three-place relation into a conjunction of two-place relations.
So, e.g., the relation Rxyz could be written in terms of three separate relations Sxy, Txz, and Uzy. We could then represent the relation R, asserted between three items x, y, and z, with the Navya formula:
Example (10.7.2)
Now, in order to introduce the quantifiers, we start with the inner formula:
Example (10.7.3a)
We can translate this into the Navyanyāya language as:
Example (10.7.3b)
Now, we just need to introduce the universal quantification over x:
Example (10.7.4)
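Since (10.7.1)’s concrete form lives in the placeholder above, here’s a sketch with the stand-in formula ∃z ∀x ∃y R(x, y, z), evaluated layer by layer over an invented finite relation, mirroring the inside-out construction; the pairwise projections corresponding to the S, T, U decomposition are shown at the end.

```python
# Stand-in for (10.7.1): ∃z ∀x ∃y R(x, y, z), evaluated inside out
# over an invented finite relation. Each layer plays the role of a
# type over the remaining free variables.

DOM = {1, 2, 3}
R = {(x, y, z) for x in DOM for y in DOM for z in DOM
     if (x + y) % 3 == z % 3}

def inner(x, z):              # ∃y R(x, y, z)
    return any((x, y, z) in R for y in DOM)

def middle(z):                # ∀x ∃y R(x, y, z)
    return all(inner(x, z) for x in DOM)

print(any(middle(z) for z in DOM))   # ∃z ∀x ∃y R -> True here

# Pairwise projections used in the conjunction strategy; note that in
# general R is only approximated by recombining these.
S = {(x, y) for (x, y, z) in R}      # Sxy
T = {(x, z) for (x, y, z) in R}      # Txz
U = {(z, y) for (x, y, z) in R}      # Uzy
```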
I want to talk about modality very briefly, because it’s a big topic and this is getting too long, anyway.
There is a difference between saying, e.g., that “the sky is blue” and saying “the sky is always blue”, or “the sky must be blue”, or “the sky should be blue”.
Words like always/must/should and their counterparts sometimes/could/may are called modal operators. As with the words ‘all’ and ‘some’, quantificational approaches to modality dominate logical theory in the West. And Naiyyāyikas, again, express these sorts of concepts through unclarified negative types.
The bulk of Navya interest in modality is focused on tense-modality, i.e. ‘sometimes’ and ‘always’. Working with negative counterparts, they define two types: प्रागभावः [prāgabhāvaḥ] and पराभावः [parābhāvaḥ], roughly corresponding to the concepts “not yet” and “no longer”. These Sanskrit terms are often translated as “prior absence” and “posterior absence”.
As we have been doing, let’s represent them with symbols: ^ (posterior absence) and ~ (prior absence). We can combine these to get a type expressing the concept of “never”: (^,~)pair. This is sometimes called नित्याभावः [nityābhāvaḥ], which translates to “constant absence”. Let’s represent this with the symbol -.
Now, we can assert that the sky is always blue via the constant absence of an absence of blue:
Example (10.7.1)
We can assert that the sky is sometimes blue via the absence of a constant absence of blue:
Example (10.7.2)
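A toy temporal sketch of these two assertions (the times and data are invented): a property is modeled as the set of times at which it is present, prior and posterior absence are computed relative to its occurrences, and constant absence is their pair.

```python
# Toy temporal model: a property is the set of times it is present.
# Times 0..9 and the data are invented.

TIMES = frozenset(range(10))

def absence(t):           # the property's absence, time by time
    return TIMES - t

def prior_absence(t):     # prāgabhāva: "not yet", before every occurrence
    return frozenset(i for i in TIMES if all(i < j for j in t))

def posterior_absence(t): # parābhāva: "no longer", after every occurrence
    return frozenset(i for i in TIMES if all(i > j for j in t))

def constant_absence(t):  # nityābhāva: the pair of the two -- never present
    return prior_absence(t) & posterior_absence(t)

blue_always  = TIMES               # blue at every time
blue_briefly = frozenset({3, 4})   # blue only at times 3 and 4

# "Always blue": constant absence of an absence of blue, at all times.
print(constant_absence(absence(blue_always)) == TIMES)    # True
# "Sometimes blue": absence of a constant absence of blue, at all times.
print(absence(constant_absence(blue_briefly)) == TIMES)   # True
print(constant_absence(absence(blue_briefly)) == TIMES)   # False
```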
Below is a cheat sheet of all the Navyanyāya operators with their Sanskrit names and the symbols I’m using to represent them. The top block consists of the three fundamental Navyanyāya operators: realization and the two negation types. The second block consists of some of the core operators that can be defined in terms of the first block. The last block contains the operators used to manipulate multi-place types.
Sanskrit Name | Symbol | Description |
---|---|---|
atyantābhāva | ⊥ | Pure negative |
anyonyābhāva | ˉ | Co-negative |
adhikaraṇa/-mat | ι | Realization |
Sanskrit Name | Symbol | Definition | Description |
---|---|---|---|
-tva | λ | ˉιˉ | Abstraction |
nirūpaka/-ka | κ | ûv̂; v̂û | Clarifier |
nirūpita | Π | t̂κλκ | Clarified by |
nirūpaka | Δ | t̂Πλκ | Clarifying type of |
nirūpita | Θ | t̂Δκ | Clarified by type |
niṣṭha/vr̥tta | τ | t̂ιλκ | Type of |
avacchedaka | Σ | t̂⊥ιτλˉ | Partitioner |
avacchinna | Ξ | t̂Σλκ | Partition |
sāmānādhikaraṇyam | ∃ | t̂ιτλ | Co-realization |
avyabhicāra | ∀ | t̂⊥ιτλˉ | Regularity |
Sanskrit Name | Symbol | Definition | Description |
---|---|---|---|
pratiyogin | π | | two-place correlate |
anuyogin | α | πλκ | two-place substrate |
[1] Maheśa Chandra Nyāyaratna. (1891). Brief notes on the Modern Nyāya system of philosophy and its technical terms. Hare Press.
[2] Ganeri, J. (1999). Semantic powers: Meaning and the means of knowing in classical Indian philosophy. Oxford University Press.
[3] Ganeri, J. (2008). Towards a formal regimentation of the Navya-Nyaya technical language I. Logic, Navya-Nyaya and Applications: Homage to Bimal Krishna Matilal, 109-124.
[4] Guhe, E. (2022). The Logic of Late Nyāya: A Property-Theoretic Framework for a Formal Reconstruction. In Handbook of Logical Thought in India (pp. 955-977). New Delhi: Springer India.
[5] Ingalls, D. H. (1988). Materials for the study of Navya-Nyaya logic (No. 40). Motilal Banarsidass.
[6] Lowe, J. J. (2024). Modern Linguistics in Ancient India. Cambridge University Press.
[7] Phillips, S. H., & Ramanuja Tatacharya, N. S. (2004). Epistemology of perception: Gaṅgeśa’s Tattvacintāmaṇi; jewel of reflection on the truth (about epistemology). American Institute of Buddhist Studies.