**July 31 to August 2, 2019**

Argentine Society of Philosophical Analysis (SADAF)

Buenos Aires, Argentina

**Wednesday, July 31**

14:00 to 15:10 – Shawn Standefer (University of Melbourne) “On the quasi-proof method”

15:20 to 16:30 – Bruno Da Ré (UBA and IIF-SADAF-CONICET) “Structural rules and paradoxes”

17:00 to 18:10 – Manuela Busaniche (IMAL-UNL-CONICET) “Algebraic semantics for substructural logics”

18:20 to 19:30 – Lorenzo Rossi (University of Salzburg) “Core grounding”

**Thursday, August 1**

14:00 to 15:10 – Roy Cook (University of Minnesota) “Building a Better Conditional” (with Z. Zhen)

15:20 to 16:30 – Lucas Rosenblatt (UBA and IIF-SADAF-CONICET) “Varieties of classical recapture”

17:00 to 18:10 – Damián Szmuc (UBA and IIF-SADAF-CONICET) “Strict-Tolerant consequence as truth-preservation”

18:20 to 19:30 – Paul Egré (CNRS-EHESS-ENS) “De Finettian Logics of Indicative Conditionals” (with L. Rossi and J. Sprenger)

**Friday, August 2**

14:00 to 15:10 – Dave Ripley (Monash University) “Core type theory”

15:20 to 16:30 – Paula Teijeiro (UBA and IIF-SADAF-CONICET) “Vague connectives”

17:00 to 18:10 – Diego Tajer (UBA and IIF-SADAF-CONICET) “Revision and revolution”

18:20 to 19:30 – Gil Sagi (University of Haifa) “Logic and Natural Language: Commitments and Constraints”

**Shawn Standefer: “On the quasi-proof method”**

The quasi-proof method is a way of showing that the theorems of a Jaśkowski-Fitch natural deduction proof system can be obtained in some axiomatic proof system. I will present an overview of the quasi-proof method, focused on classical propositional and first-order logics. I will then show how to generalize it in order to get a better sense of its scope and limitations, in particular with respect to non-classical and modal logics.

**Manuela Busaniche: “Algebraic semantics for substructural logics.”**

Substructural logics encompass many of the interesting non-classical logics, including intuitionistic logic, fuzzy logic, relevance logic and linear logic, and they include classical logic as a limit case. They are logics that, when formulated as Gentzen-style systems, lack some of the three basic structural rules: contraction, weakening and exchange. Residuated lattices are the algebraic semantics of substructural logics, which is why their investigation is one of the main tools for understanding and studying those logical systems uniformly. But the multitude of different structures makes the study fairly complicated, so the investigation of interesting subvarieties of residuated lattices is an appealing problem to address. Examples of subvarieties of residuated lattices include Boolean algebras, Heyting algebras, MV-algebras, BL-algebras and lattice-ordered groups.

The study of substructural logics from the semantic point of view, as systems whose algebraic models are residuated structures, opens up a new perspective in which mathematics becomes the main tool of investigation. In this talk we will present some mathematical constructions of residuated lattices from simpler or better-known structures. We will define subvarieties of residuated lattices whose members are built using these constructions, and we will explain the logical applications of our study.
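As background for readers less familiar with these algebras, the standard textbook definition (not part of the abstract itself) can be stated as follows:

```latex
% Standard definition of a residuated lattice (background, not part of the abstract):
% an algebra (L, \wedge, \vee, \cdot, \backslash, /, 1) such that
\[
(L, \wedge, \vee) \text{ is a lattice}, \qquad (L, \cdot, 1) \text{ is a monoid},
\]
% and multiplication is residuated on both sides:
\[
x \cdot y \le z \iff x \le z / y \iff y \le x \backslash z
\qquad \text{for all } x, y, z \in L.
\]
```

In the commutative case the two residuals coincide; this is the setting of several of the subvarieties mentioned above, such as Heyting algebras and MV-algebras.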

**Lorenzo Rossi: “Core grounding”**

Recent years have seen significant growth in the investigation of concepts of grounding, which have been employed in a wide range of areas, from metaphysics to formal semantics to theories of truth. More specifically, several investigations have aimed at discovering the logics that govern various notions of grounding, where grounding is conceived as a sentential operator. Yet it is unclear whether modeling the notion of grounding via a sentential operator is appropriate to model ground-theoretic talk. For one thing, several interesting ground-theoretic claims are quantified statements (e.g. ‘every arithmetical truth is grounded in arithmetical atomic truths’), and they cannot be naturally accounted for if grounding is treated as a sentential operator. For another, ground-theoretic talk seems to require self-application, for certain ground-theoretic claims arguably ground other ground-theoretic claims (e.g. the claim that ‘0=0’ grounds ‘“0=0” is true’ arguably grounds the claim that ‘“0=0” is true’ grounds ‘““0=0” is true” is true’). For these reasons, we argue, ground-theoretic statements should be modeled via a self-applicable predicate. This suggests deep connections between groundedness in general and the formal theories of self-applicable (grounded) truth that have been explored since the seminal work of Kripke (1975) (such connections have also been suggested, e.g., in Fine, 2011). In this paper, we propose a model-theoretic semantics (as well as infinitary calculi) for a language containing a self-applicable grounding predicate, which enables us to validate desirable ground-theoretic claims, including quantified statements, while avoiding paradoxes and inconsistencies.

**Roy Cook: “Building a Better Conditional” (joint work with Z. Zhen)**

One of the challenges for a many-valued-logic approach to the semantic paradoxes is to provide a conditional that is well-behaved enough to allow for a treatment of ordinary “if… then…” reasoning. Recently, both Stephen Yablo and Hartry Field have suggested intensional conditionals, that is, conditionals whose semantic value at a fixed point depends, in some sense, on the semantic values assigned to the antecedent and consequent at all fixed points that extend the current one. Here we present a general framework for investigating such conditionals (and intensional operators more generally) and identify a handful of conditionals that improve upon the Yablo/Field approach.

**Lucas Rosenblatt: “Varieties of classical recapture”**

In the literature on truth-theoretic paradoxes, non-classical logicians rejecting the validity of some logical principle P typically claim that P should not be rejected across the board. They reject P for problematic cases, but accept all other instances of P. Moreover, to show that their favored logic does indeed recover every classically valid principle for some restricted domains, they sometimes prove ‘classical recapture’ results. Roughly, these results show that if the principle P holds for a certain domain, then we can reason classically in that domain.

In this talk I will suggest that classical recapture results come in two varieties. On the one hand, there is the idea that we should try to retain as many instances of as many classical principles as we can. On the other hand, there is the more modest thought that we should try to retain classical logic for some fragment of the language. Let’s use the label ‘Radical Classical Recapture’ for the first claim and the label ‘Moderate Classical Recapture’ for the second. My aim is to argue for two claims. First, that it is not possible to endorse Radical Classical Recapture, because there are pairs (or triples) of instances of P (whatever P is) that are not trivial on their own but are trivial when taken together. Second, that there are coherent versions of classical recapture that go beyond Moderate Classical Recapture and that are available to the non-classical logician.

**Damián Szmuc: “Strict-Tolerant consequence as truth-preservation”**

Strict-Tolerant consequence embodies a way of presenting Classical consequence by means of three-valued models. Given that the Strict-Tolerant framework allows for non-transitive extensions, it is sometimes claimed that its characteristic notion of consequence cannot be understood, in either a standard or a non-standard way, as truth-preservation. I present an admittedly non-standard way in which this can be carried out.
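As background, the usual definition of Strict-Tolerant consequence (standard in the literature, not part of the abstract) runs over strong Kleene valuations into $\{0, \tfrac{1}{2}, 1\}$: premises must be strictly true, while the conclusion need only be tolerantly true:

```latex
% Strict-Tolerant consequence over strong Kleene valuations v into {0, 1/2, 1}:
% strict premises (value 1), tolerant conclusion (value >= 1/2).
\[
\Gamma \vDash_{\mathrm{st}} \varphi
\quad\iff\quad
\text{for every valuation } v:\
\big( \forall \gamma \in \Gamma,\ v(\gamma) = 1 \big)
\ \Rightarrow\ v(\varphi) \ge \tfrac{1}{2}.
\]
```

On the pure sentential fragment this relation coincides with classical consequence; failures of transitivity arise only in extensions of the framework, such as those with a transparent truth predicate.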

**Paul Egré: “De Finettian Logics of Indicative Conditionals” (joint work with L. Rossi and J. Sprenger)**

This paper explores trivalent truth conditions for indicative conditionals, examining the “defective” table put forward by de Finetti (1936) and first sketched by Reichenbach (1935). On this approach, a conditional takes the value of its consequent whenever its antecedent is True, and the value Indeterminate otherwise. Here we deal with the problem of choosing an adequate notion of validity for this conditional. We show that all standard trivalent schemes are problematic, and highlight two ways out of the predicament: one pairs de Finetti’s conditional with validity as the preservation of non-False values (TT-validity), but at the expense of Modus Ponens; the other considers a modification proposed independently by Cooper (1968) and Cantwell (2008), which preserves Modus Ponens but fails to preserve intersubstitutivity under negation. Both resulting logics are connexive and yield simple logics of indicative conditionals, but for both some limitations are worth discussing.
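The “defective” table described in the abstract can be written out explicitly (T = True, I = Indeterminate, F = False; rows give the value of the antecedent $A$, columns the value of the consequent $B$):

```latex
% de Finetti's trivalent table for "if A then B":
% the conditional takes the value of B when A is True, and I otherwise.
\begin{tabular}{c|ccc}
$A \rightarrow B$ & $B=\mathrm{T}$ & $B=\mathrm{I}$ & $B=\mathrm{F}$ \\ \hline
$A=\mathrm{T}$ & T & I & F \\
$A=\mathrm{I}$ & I & I & I \\
$A=\mathrm{F}$ & I & I & I \\
\end{tabular}
```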

**Dave Ripley: “Core type theory”**

Jean-Yves Girard famously said that a “sequent calculus without cut elimination is like a car without [an] engine”. This is what he meant: when a logic is interpreted as a typed programming language via the Curry-Howard correspondence, cut is what allows us to create programs that can be executed, and cut elimination corresponds precisely to that execution. For this reason, many have thought that a logic not closed under cut would be a non-starter for a Curry-Howard-style interpretation.

Core logic (formerly known as “intuitionistic relevant logic”) is one of the best-known systems of logic that is not closed under the rule of cut. But, as I will argue in this talk, core logic is well suited for its own form of the Curry-Howard correspondence. While the resulting type theory violates many standard properties (at least confluence and subject reduction), it nonetheless includes an interesting and plausible notion of computation. Cut elimination is not needed.

**Paula Teijeiro: “Vague Connectives”**

Vague predicates are those with gray areas of application, which give rise to sorites paradoxes. The vast majority of the literature on vagueness deals with the phenomenon as it manifests in predicates, while a markedly smaller proportion deals with names. The possibility that there are vague logical connectives is not only universally rejected in the few places where it is considered, but rejected without even a superficial analysis of the matter. By contrast, despite being almost completely ignored in the philosophical literature, vague quantifiers are a linguistically studied phenomenon, perfectly analogous to that of traditional predicates. I will argue that the possibility of vague connectives can be understood by analogy with such quantifiers.

**Diego Tajer: “Revision and revolution”**

In 1921, Wittgenstein claimed that “in logic, there are no surprises”. Many decades later, Quine offered an alternative view: logic is revisable, and it is continuous with science. Quine’s view has been explicitly adopted by most philosophers. And logicians, before and after Quine, have developed and defended different non-classical logical systems. The ironic situation is that even after this proliferation of logical systems, classical logic retains a very privileged position, as a common language and as a general theory of reasoning that every scientist can use. In this paper, I will claim that logic has not been revised, and that logical revision is mostly a façon de parler. It is not something we do, and in most cases it is not something we want to do. However, and contradicting Williamson’s skepticism, I will explain why the work on semantic paradoxes and non-classical logics has a legitimate role to play in contemporary logical practice.

**Gil Sagi: “Logic and Natural Language: Commitments and Constraints”**

Most of the contemporary research in logic is carried out with respect to formal languages. Logic, however, is said to be concerned with correct reasoning, and it is natural language that we usually reason in. Thus, in order to assess the validity of arguments in natural language, it is useful to formalize them: to provide matching arguments in a formal language where logical properties become perspicuous. It has been recognized in the literature that formalization is far from a trivial process. One must discern the logical from the nonlogical in the sentence, a process that requires theorizing that goes beyond the mere understanding of the sentence formalized (Brun 2014). Moreover, according to some, formalization is a form of explication, and it “involves creative and normative aspects of constructing logical forms” (ibid.).

In previous work, I proposed a model-theoretic framework of “semantic constraints”, where there is no strict distinction between logical and nonlogical vocabulary. The form of sentences in a formal language is determined rather by a set of constraints on models. In the present paper, I show how this framework can also be used in the process of formalization, where the semantic constraints are conceived of as commitments made with respect to the language.

The series of workshops organized by BA LOGIC aims to analyze different topics in Philosophical Logic, mainly connected with semantic paradoxes, theories of truth and non-classical logics.

July 29 @ SADAF: Workshop on Substructural Logics

We are thankful for the support provided by CONICET.