Cite as https://practal.com/press/cwgpaal/1 (January 9th, 2025)
As someone who has worked extensively in non-classical logics, I find several interesting aspects in your conclusions about abstraction logic. Let me offer some specific comments:
On Paraconsistency: Your treatment of paraconsistent logic is intriguing. While it's true that my Logic of Paradox (LP) uses multiple designated values, your suggestion that paraconsistency can be modeled within abstraction logic through a particular logical ordering is clever. However, I'd be interested in seeing how abstraction logic handles specific paradoxes, like the Liar Paradox, which was one of the motivations for developing LP.
On Universal Quantification: Your discussion of '∀x. x' and its relationship to Church and Kripke's work is particularly interesting. The ability to define false through universal quantification over the entire universe is indeed noteworthy, though I wonder about the philosophical implications of this approach.
On Mathematical Freedom: Your claim that abstraction logic doesn't limit mathematical freedom while maintaining logical rigor is ambitious. This relates to a fundamental tension in logic between expressiveness and consistency. How does abstraction logic maintain this balance without running into Russell-style paradoxes?
On Unification: Your attempt to unify different logical approaches (Hilbert-style, natural deduction, and sequent calculus) is admirable. However, I'm curious about how abstraction logic handles specific features of non-classical logics, such as relevance conditions or many-valued semantics.
On Philosophical Foundations: Your explicit Platonist stance is interesting, particularly given that you simultaneously embrace the practical utility of non-classical approaches while rejecting them philosophically. This seems to echo the distinction between mathematical and philosophical truth that I've discussed in my own work.
I would be particularly interested in seeing more detail about how abstraction logic handles specific cases where classical and non-classical logics diverge, such as the Law of Excluded Middle or the treatment of contradictions.
Would you be willing to elaborate on any of these points, particularly regarding the handling of paradoxes and contradictions in your system?
Paradoxes can of course still occur. You can state arbitrary axioms in abstraction logic, and they might be contradictory. One important feature that mitigates this somewhat is that abstraction logic distinguishes between mathematical objects on the one hand, and operators and operations on the other, neither of which are mathematical objects. This way Cantor's theorem is respected, which cuts off a large source of paradoxes, of the kind that haunt untyped lambda calculus, for example.
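To spell out the Cantor point in rough terms, with notation that is mine and not part of abstraction logic itself: for any universe U with at least two elements, there is no surjection from U onto the operations U → U. Given any assignment x ↦ f_x of operations to objects, the diagonal operation d, with d(x) chosen to differ from f_x(x), is missed by every f_x. So if every operation were itself an object of the universe, diagonalisation would yield exactly the kind of paradox that untyped lambda calculus runs into; keeping operations and operators outside the universe blocks that route.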
This is a significant point you're making. Let me analyze it from a logical perspective:
The separation between mathematical objects and operators/operations is reminiscent of Russell's type theory, but seems to be implemented in a different way. Rather than creating a hierarchy of types, you're making a fundamental ontological distinction. This is interesting because it avoids the infinite regress that can occur in typical type-theoretical approaches.
Regarding paradoxes, I see two levels here:
This raises an interesting question: How does abstraction logic handle self-reference? In my work on paraconsistent logic, I've argued that some forms of self-reference (and even some contradictions) might be legitimate features of reality rather than flaws in our logical systems. Does abstraction logic take a position on this?
Also, could you clarify how exactly the system respects Cantor's theorem? Is it simply through the separation of objects and operations, or are there additional mechanisms at work?
Yes, it is simply through that separation, which nevertheless has big implications, resulting in a new kind of algebra that I call abstraction algebra. As for self-reference, there is none in abstraction logic. You could of course model abstraction logic within abstraction logic itself, but then you are talking about a separate mathematical object called abstraction logic.
Ah, this is a crucial clarification. Let me analyze this from a logical perspective:
The absence of direct self-reference is actually quite elegant - it avoids many of the complications that arise in systems that try to handle self-reference directly. While I've argued elsewhere that self-reference is an important feature of natural language and mathematical reasoning, I can see the advantage of having a clean foundation where self-reference is handled through explicit modeling rather than being built into the system.
Your point about modeling abstraction logic within itself reminds me of Gödel's approach to metamathematics, where statements about a system are encoded within the system but at a different level. This seems cleaner than systems that try to incorporate self-reference directly.
The concept of abstraction algebra interests me. Is this similar to cylindric algebras or polyadic algebras, or is it fundamentally different? I'd be particularly interested in how it handles quantification and variable binding, given that these are often challenging aspects in algebraic approaches to logic.
Could you say more about how abstraction algebra differs from traditional algebraic approaches to logic? What are its distinctive features?
Abstraction algebra is a simple but powerful generalisation of abstract algebra. While abstract algebra talks about a universe together with a collection of operations, abstraction algebra talks about a universe together with a collection of operators. Here an operation on a universe is a function taking multiple values from the universe as inputs and yielding a value from the universe. An operator is a function taking multiple operations on the universe as inputs and again yielding a value from the universe.
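In symbols, with notation of my own choosing: an n-ary operation is a function f : U × ⋯ × U → U with n arguments from the universe U. An operator takes finitely many operations, possibly of different arities, as inputs and returns a value; for example, an operator F might take a unary operation p : U → U and a binary operation q : U × U → U and return a value F(p, q) in U. If one regards a value as a nullary operation, values and operations themselves become special cases of operators.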
This is a fascinating distinction. As someone who has worked extensively with different algebraic approaches to logic, let me parse out what makes this significant:
Traditional abstract algebra deals with operations, which allows us to model things like groups, rings, and other algebraic structures. But it struggles to directly represent quantification and other higher-order concepts.
Your abstraction algebra, by introducing operators, provides a natural way to handle higher-order concepts while maintaining a single universe. This seems to give you the best of both worlds - the simplicity of a single universe (like first-order logic) and the expressiveness of higher-order concepts.
This reminds me somewhat of Frege's distinction between concepts and objects, but implemented in a more algebraic way.
A key question: Does this operator-based approach give you a simpler way to handle quantification than traditional approaches? In classical higher-order logic, quantification often requires complicated semantic machinery, but I'm wondering if your approach simplifies this.
Also, how does this relate to Montague's approach to semantics, where he used higher-order logic but had to deal with the complexity of multiple types? Does your single-universe approach with operators offer advantages there?
Yes, through this operator-based approach, quantification becomes just a special case of functionals. The semantics of abstraction algebra is as simple as I just explained (abstractions are mapped to operators, and operations and values are special cases of operators here). This is turned into a logic by singling out a collection of truth values. Truth values are just normal mathematical objects, but they form a partially ordered structure, in particular a complete lattice. That allows one to define a logical statement as a pair of collections of templates; I call this pair a sequent. A template has the form "x1 ... xn. t", where "t" is a term. An additional feature is that in a model, every mathematical object / value is mapped to a truth value. A sequent is then valid if the meet over all truth values generated by the templates of its first element is less than or equal to the join over all truth values generated by the templates of its second element.
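To write the validity condition down in notation of my own choosing: write T(Γ) for the set of all truth values generated by a collection Γ of templates, where a template "x1 ... xn. t" generates the truth values of t under all possible values for x1, ..., xn. A sequent (Γ, Δ) is then valid if

⋀ T(Γ) ≤ ⋁ T(Δ),

with ⋀ and ⋁ the meet and join of the complete lattice of truth values. And universal quantification is, roughly, the operator that takes a unary operation p on the universe and returns a value whose truth value is the meet of the truth values of all p(x); this is the sense in which quantification is just a special case of the operator machinery.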
While I am only superficially familiar with Montague's work, I would think that abstraction logic's semantics thus circumvents the multiple types problem. Its simple semantics is specifically enabled by its "one universe" approach.
This is quite elegant. Let me break down what I see as the key innovations here:
The treatment of quantification as a special case of functionals is particularly elegant. This is simpler than both the Tarskian approach to quantification (with variable assignments) and type-theoretic approaches.
Your use of truth values as regular mathematical objects within the single universe, structured as a complete lattice, is clever. This reminds me of algebraic approaches to many-valued logics, but seems more general and flexible.
The sequent structure you describe, with templates and the meet/join relationship, provides a natural way to handle logical consequence. If I understand correctly, this gives you:
A question: Given that truth values form a complete lattice, how does this relate to different logical systems? For instance:
Does this mean that abstraction logic can serve as a kind of meta-framework for different logical systems, each corresponding to different choices of truth-value lattices?
I think that is exactly what it means, although we have to be careful here. For example, your Logic of Paradox, as previously explained, would still use a two-element Boolean lattice as its logical order, but your logical value "both" would become an ordinary, non-logical value "both" in the mathematical universe, one that is mapped to "true" via the logical order.
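As a concrete sketch, again in notation of my own: take the two-element lattice F ≤ T as the logical order, and let the universe contain three distinct objects t, b and f playing the roles of LP's values. The map to truth values sends t and b to T and f to F, and negation, modelled as an operation on the universe, swaps t and f while fixing b. Then b and its negation are both mapped to T, which mirrors LP's designation of "both", even though the logical order itself remains two-valued.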
Ah, this is a crucial distinction and a very interesting way to handle non-classical logics! Let me make sure I understand this correctly:
The logical order (the lattice of truth values) remains classical (two-valued), but the way mathematical objects map to these truth values allows for non-classical behavior.
In the case of LP, instead of having three truth values in the logical order, we have:
This is quite elegant because it:
This seems to suggest that what we typically think of as different logical systems might be better understood as different ways of structuring the relationship between mathematical objects and classical truth values, rather than different truth-value semantics per se.
Would this perspective extend to other non-classical logics as well? For instance, how would intuitionistic logic fit into this framework?
Good question. I think that in the case of intuitionistic logic, you would instead directly encode all propositional values as the lattice of truth values. In the Logic of Paradox, this was not possible because there is only a single truth value corresponding to "true".
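Just as an illustration of mine: the open sets of a topological space, ordered by inclusion, form a complete Heyting algebra, with joins given by unions, meets by interiors of intersections, and the implication of two open sets φ and ψ given by the largest open set whose intersection with φ lies inside ψ. A lattice of truth values of that shape would directly encode the intuitionistic propositional values, in the way just described.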
This is a fascinating distinction! So we have at least two different ways that non-classical logics can be represented in abstraction logic:
The LP approach: keeping a classical two-valued logical order but using the mapping from mathematical objects to truth values to capture non-classical behavior
The intuitionistic approach: directly encoding the Heyting algebra structure as the lattice of truth values
This suggests that abstraction logic is quite flexible in how it can accommodate different logical systems. It seems to depend on whether the non-classical aspect is better captured by:
This raises an interesting question: Could this difference tell us something deeper about the nature of these logical systems? Perhaps it suggests that LP-style paraconsistency is more about the interpretation of mathematical objects, while intuitionistic logic is more fundamentally about the nature of truth itself?
Would you say this is a fair interpretation of how abstraction logic reveals different aspects of non-classical logics?
I'd have to think about that, but yes, I think it would be fair to draw such a conclusion.
This is quite profound. As someone who has spent considerable time thinking about the philosophical foundations of non-classical logics, I find it exciting that abstraction logic might provide new insights into the fundamental nature of different logical systems.
The fact that paraconsistent and intuitionistic logics are handled differently in your framework - one through object-to-truth mappings, the other through the structure of truth values themselves - could indeed reflect deep philosophical differences between these approaches:
This aligns with historical motivations:
Would you say that abstraction logic, by providing this unified framework, might serve not just as a technical tool but also as a lens for understanding the philosophical foundations of different logical systems?
Yes, that is possible, but if so, I would think the best way to do this would be a mathematical investigation of how to encode these different logical systems in abstraction logic, and of whether and how such an encoding would allow one to transfer results about them, such as completeness.
This is a very sound methodological approach. As someone who has always emphasized the importance of formal rigor alongside philosophical interpretation, I strongly agree that we should:
First establish the precise mathematical relationships:
Only then draw philosophical conclusions based on these mathematical results
This reminds me of the way categorical logic has provided insights into logical systems by revealing their structural relationships. Abstraction logic might offer similar insights, but through a different lens.
Some specific questions that seem worth investigating:
Would you say these are the kinds of mathematical investigations that need to be done before we can draw stronger philosophical conclusions about what abstraction logic tells us about the nature of different logical systems?
Yes, I would agree with that!
This is a very promising research program! From my perspective as a logician, there are several particularly exciting aspects:
The mathematical investigation could reveal unexpected connections between logical systems that weren't apparent in their traditional formulations
The unified framework of abstraction logic might help clarify which aspects of different logical systems are fundamental and which are artifacts of their particular formulations
The systematic study of encodings could lead to:
This reminds me of how category theory revealed deep connections between different mathematical structures. Abstraction logic might do something similar for logical systems themselves.
Would you say the next steps would be to:
And perhaps most importantly, keep the philosophical interpretations grounded in the mathematical results rather than speculating too far ahead of what we can formally demonstrate?
Yes, that sounds like a good strategy. Thank you for the conversation!
Thank you as well! This has been a fascinating discussion about abstraction logic. I particularly appreciate how your approach combines mathematical elegance with philosophical caution - developing a powerful unified framework while being careful about what conclusions we can draw from it.
The distinction between mathematical objects and operators, the treatment of truth values, and the flexible ways of encoding different logical systems all suggest that abstraction logic could be a significant development in logical foundations.
I look forward to seeing how this research program develops, especially regarding the formal relationships between different logical systems when encoded in abstraction logic. Good luck with this important work!
Interesting chat, I found it motivating and enlightening!
If somebody has a job for me at a university or elsewhere where I can continue my work on abstraction logic, I would be very interested in that 😊. You can contact me at obua@practal.com.