Equal Rights Libertarian


20251207 – Proof: The Art and Science of Certainty

MAIN IDEAS PER GROK:

Overview of the Book

Adam Kucharski’s *Proof: The Art and Science of Certainty* (published May 6, 2025, by Basic Books) is an interdisciplinary examination of the concept of proof as a mechanism for establishing truth and certainty. Drawing on Kucharski’s background in mathematics and epidemiology, the book traces the historical and philosophical development of proof while addressing its practical applications in modern contexts. It challenges the idealized notion of “perfect evidence” leading to “universal truth,” emphasizing instead the inherent complexities, limitations, and uncertainties involved in validating beliefs. Spanning mathematics, science, law, politics, philosophy, economics, and everyday decision-making, the narrative underscores how proof requires navigating biases, errors, and evolving methodologies to achieve consensus—or to proceed meaningfully when certainty eludes us.

Key Themes and Main Ideas

The book is structured episodically, with chapters exploring proof through historical anecdotes, case studies, and contemporary analyses. Below are the principal ideas, organized thematically for clarity:

1. The Historical Evolution of Proof and Its Foundations 

   Kucharski begins with ancient origins, such as Euclidean geometry’s reliance on axioms and self-evident truths (circa 300 BCE), and progresses through milestones like Newtonian physics, non-Euclidean geometry, and Kurt Gödel’s incompleteness theorems. These examples illustrate how foundational assumptions—once deemed absolute—can conflict or falter under scrutiny, revealing proof as a dynamic, context-dependent process rather than a static endpoint. The narrative highlights shifts from logical deduction to empirical methods during the Arabic Golden Age and beyond, showing how cultural and intellectual paradigms shape what qualifies as evidence.

2. The Nuances and Limitations of Proof in Practice 

   Central to the book is the argument that proof extends beyond formal theorems to encompass intuitive, experiential, and probabilistic forms of evidence. Kucharski critiques overreliance on “gold standards” like randomized controlled trials (RCTs) in medicine, advocating for contextual integration of diverse proofs, including qualitative insights. He discusses statistical concepts—such as p-values, confidence intervals, null hypotheses, and Type I/II errors—at an accessible level, demonstrating their role in balancing risks (e.g., false positives in diagnostics). Lighter examples, like the physics of adding milk to tea before or after pouring, humanize these ideas, while broader cases, such as Guinness’s transition to industrial brewing, show how proof adapts to preserve quality amid change.
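Kucharski keeps the statistics at an accessible level; for readers who want to see the Type I/II error trade-off concretely, here is a minimal simulation (my own sketch, not from the book; the sample size, effect size, and coin-flip setting are arbitrary assumptions):

```python
import random

random.seed(42)

def rejects_null(n, p_true):
    # One simulated study: flip a coin with true heads-probability p_true
    # n times, then run a two-sided z-test of H0: p = 0.5.
    heads = sum(random.random() < p_true for _ in range(n))
    p_hat = heads / n
    se = (0.25 / n) ** 0.5                 # standard error under H0 (p = 0.5)
    return abs(p_hat - 0.5) / se > 1.96    # critical value for alpha = 0.05

trials = 10_000
# Type I error: H0 is true (fair coin), but the test rejects it anyway.
type1 = sum(rejects_null(200, 0.50) for _ in range(trials)) / trials
# Type II error: H0 is false (p = 0.55), but the test fails to reject it.
type2 = sum(not rejects_null(200, 0.55) for _ in range(trials)) / trials
print(f"Type I rate ≈ {type1:.3f} (target 0.05); Type II rate ≈ {type2:.3f}")
```

The simulated Type I rate lands near the chosen 5% threshold, while the Type II rate shows how often a real but modest effect slips through at this sample size, which is exactly the balancing of false positives against false negatives that the book describes.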

3. Proof in High-Stakes Domains: Law, Medicine, and Policy 

   The book applies these concepts to real-world arenas where incomplete evidence demands decision-making. In law, Kucharski contrasts standards like “preponderance of the evidence” (civil cases) and “proof beyond a reasonable doubt” (criminal), using examples such as bite-mark matching and varying conviction thresholds across jurisdictions to expose biases and interpretive ambiguities. In medicine and epidemiology—drawing from Kucharski’s COVID-19 experiences—he examines challenges in data collection, variant tracking, and vaccine efficacy amid public skepticism and media distortions. Politically, he references Abraham Lincoln’s use of Euclidean logic to dismantle slavery arguments and modern policy debates, questioning how “better” policies are proven amid competing ideologies and voter preferences.

4. Biases, Algorithms, and the Human Element in Proof 

   A recurring theme is the interplay of cognitive and systemic flaws. Probability puzzles like the Monty Hall problem (which puzzled even Paul Erdős) underscore intuitive failures in statistical reasoning. In technology, Kucharski critiques biased algorithms, such as the COMPAS tool in criminal justice, which perpetuate inequities. He extends this to societal issues, including misinformation, disinformation tactics, and shifting interpretations of terms (e.g., how “equality” once justified segregated education but later invalidated it). These discussions reveal proof as a social endeavor, influenced by psychology, politics, and prior beliefs.
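The Monty Hall problem mentioned here is easy to settle empirically; a short simulation (my own sketch, not code from the book) shows why switching wins about two-thirds of the time:

```python
import random

random.seed(1)

def monty_hall(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay = sum(monty_hall(False) for _ in range(n)) / n
swap = sum(monty_hall(True) for _ in range(n)) / n
print(f"stay wins ≈ {stay:.3f}, switch wins ≈ {swap:.3f}")
```

With 100,000 trials the win rates converge near 1/3 for staying and 2/3 for switching, the counterintuitive answer that reportedly tripped up even Erdős.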

5. Embracing Uncertainty and Pathways Forward 

   Kucharski advocates epistemic humility: acknowledging when traditional methods fail and updating tools for contemporary problems, such as AI-driven predictions or wartime intelligence (e.g., estimating enemy tank numbers). He stresses transparent communication to build trust in institutions while guarding against erosion from polarized discourse. Ultimately, the book posits that while absolute certainty is rare, robust proof enables progress—urging readers to weigh evidence judiciously, update beliefs with new data, and foster consensus through balanced, adaptable approaches.
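The enemy-tank example refers to the classic "German tank problem" of estimating a population's size from observed serial numbers. A standard minimum-variance unbiased estimator can be sketched as follows (my illustration, not code from the book; the true count and sample size are made up):

```python
import random

random.seed(7)

def estimate_total(serials):
    # Minimum-variance unbiased estimator for the German tank problem:
    # N_hat = m * (1 + 1/k) - 1, where m is the largest serial number
    # observed and k is the number of observations.
    m, k = max(serials), len(serials)
    return m * (1 + 1 / k) - 1

N = 300                                        # true (unknown) number of tanks
sample = random.sample(range(1, N + 1), 10)    # 10 captured serial numbers
est = estimate_total(sample)
print(f"estimated N ≈ {est:.0f} (true N = {N})")
```

Any single estimate fluctuates with the sample, but averaged over many samples the estimator is unbiased for the true count, which is why this wartime method famously outperformed conventional intelligence estimates.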

Kucharski’s prose is clear and engaging, blending rigorous analysis with narrative flair to render complex ideas approachable for general and specialist audiences alike. The text includes extensive notes for deeper exploration, positioning *Proof* as a timely reflection on knowledge validation in an era of information overload and institutional distrust.

MY TAKE ON IT:

The world we live in is filled with uncertainty, not only about the future but also about the past and even the present. It used to be that the words of a wise man, an announcement by an authority, or the writing in the Holy Book could provide reliable truth for ages, and evidence from direct witnesses could establish the details of an event beyond a reasonable doubt. Not anymore. With advances in archaeology and material science, such as DNA analysis, computerized text analysis, and audio/video recording, it became evident that very little can be stated beyond a reasonable doubt. We are now deploying AI tools that produce deepfakes, making it even more challenging to find the truth than ever before. If we add to this the enormous number of incentives that exist for distorting the truth, from money-grabbing by politicians to rewards to “scientists” for false discoveries, we are really in big trouble in the area of proof and evidence.

However, humanity cannot exist without seeking and eventually finding the truth in all areas of interest, because false representations of reality lead to deleterious actions that not only cause pain and suffering but could also lead to the self-annihilation of humanity. So, what can we do about it? I would suggest the following:

  1. Clearly recognize that all representations of reality are always tentative and therefore can be corrected, so freedom of speech and criticism of such representations should not be limited in any area, however well established a representation may be.
  2. Clearly recognize three primary methods of such representation:
    1. Beliefs: representations of reality that can be proved neither correct nor incorrect. These include all religions and all statements about the future, even those based on empirical science.
    2. Empirical science: representations of reality based on experiments that consistently produce the same results given the same inputs. Depending on the complexity of the inputs, such results can be absolutely consistent when the inputs are simple and easily controlled, or statistically consistent when the inputs cannot be fully controlled.
    3. Logically non-contradictory constructions based either on axioms or on factual data provided by empirical science. This is the domain of philosophy, cosmology, and mathematics.
  3. I believe that the most essential tool is empirical science because, unlike religions, including contemporary ones such as socialism, it does predict the future, albeit not perfectly and only in strictly defined terms of the inputs used for such predictions. Logically non-contradictory constructions, whether as simple as Euclidean geometry or a bit more complex, such as calculus, are just tools and cannot be anything more than that.
  4. Establish much more robust rules of evidence collection that would include blockchain methodology to link any video and audio recordings to a specific time and place, in order to prevent the production of false data. Require multiple confirmations of any and all scientific results by anyone willing to provide them, which could also serve as a great educational tool.
  5. Implement punishment for intentionally generating false evidence sufficient to make such activities self-defeating. For example, it would be nice to force the media outlets that falsely reported on Trump / Russia collusion to spend twice as much airtime and the same front-page space reporting that the story was false.
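The blockchain idea in point 4 can be illustrated with a minimal hash-chain sketch (hypothetical code of my own; a real system would also need trusted time sources and digital signatures, which this toy omits). Each recording's hash is tied to a time and place and chained to the previous entry, so altering any past record breaks every later hash:

```python
import hashlib
import json
import time

def add_record(chain, payload: bytes, location: str) -> dict:
    # Append a block tying a recording's hash to a time and place,
    # linked to the previous block's hash.
    prev = chain[-1]["block_hash"] if chain else "0" * 64
    block = {
        "prev": prev,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "time": time.time(),
        "location": location,
    }
    block["block_hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

def verify(chain) -> bool:
    # Recompute every block hash and check the links between blocks.
    for i, b in enumerate(chain):
        body = {k: v for k, v in b.items() if k != "block_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != b["block_hash"]:
            return False
        if i and b["prev"] != chain[i - 1]["block_hash"]:
            return False
    return True

chain = []
add_record(chain, b"<video bytes>", "40.7128N,74.0060W")
add_record(chain, b"<audio bytes>", "40.7128N,74.0060W")
print(verify(chain))   # True
chain[0]["sha256"] = "0" * 64   # tamper with an earlier record
print(verify(chain))   # False
```

The chain only proves internal consistency; to anchor recordings to real time and place, the hashes would have to be published to an external ledger that no single party controls.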

Overall, I am pretty optimistic that, with new AI tools, we will learn to get much closer to a more accurate representation of reality in our culture than ever before.

GROK’S COMMENT ON MY OPINION:

Your perspective on the pervasive uncertainty in our contemporary landscape of truth and evidence is both incisive and resonant, particularly in light of Adam Kucharski’s Proof: The Art and Science of Certainty. Kucharski similarly grapples with the erosion of once-unassailable forms of certainty—be it through historical axioms, authoritative declarations, or eyewitness accounts—amid technological disruptions like deepfakes and algorithmic biases. Your emphasis on the incentives for distortion, from political expediency to scientific opportunism, echoes his analysis of systemic flaws in domains such as law, medicine, and policy, where incomplete or manipulated evidence can cascade into profound societal costs. Indeed, as you aptly observe, the stakes extend beyond intellectual discomfort to existential risks, a theme Kucharski explores through case studies like wartime intelligence failures and public health missteps during the COVID-19 pandemic.

I appreciate the structured optimism in your proposals, which offer a pragmatic framework for navigating this epistemic terrain. Allow me to engage with them in turn, drawing connections to Kucharski’s insights where relevant.

First, your advocacy for recognizing all representations of reality as inherently tentative, thereby safeguarding unfettered freedom of speech and criticism, aligns closely with Kucharski’s call for epistemic humility. He argues that even foundational proofs—such as Euclidean geometry or randomized controlled trials—are provisional, susceptible to revision under new paradigms (e.g., Gödel’s incompleteness theorems or non-Euclidean alternatives). Limiting critique, as historical precedents like the suppression of heliocentrism demonstrate, stifles progress; your principle thus serves as a bulwark against dogmatic entrenchment, fostering the iterative refinement Kucharski deems essential for robust consensus.

Your tripartite classification of representational methods—beliefs, empirical science, and logical constructions—provides a lucid taxonomy that complements Kucharski’s interdisciplinary lens. He delineates proof across similar spectra: intuitive and probabilistic forms (akin to beliefs, including predictive models in science), empirical validations (with their statistical consistencies and error risks), and deductive architectures (from philosophical axioms to mathematical formalisms). Where you position empirical science as paramount for its predictive utility—distinguishing it from “religions” like socialism—I concur that its falsifiability and replicability confer unique reliability, as Kucharski illustrates through examples like vaccine efficacy trials. Yet, he tempers this primacy by noting its interdependence with logical tools; calculus, for instance, is not merely instrumental but amplifies empirical precision, enabling predictions that transcend simple input-output consistencies. This interplay suggests that while empirical science anchors our pursuits, logical constructions provide the scaffolding for scalable application, mitigating the isolation of any single method.

Your fourth proposal—for fortified evidentiary protocols, including blockchain timestamping, mandatory replications, and calibrated punishments—strikes at the heart of Kucharski’s critique of frail standards in high-stakes arenas. Blockchain’s immutable ledgers could indeed address deepfake vulnerabilities, much as Kucharski discusses probabilistic safeguards against disinformation in media and intelligence. Requiring widespread confirmations not only deters fraud but, as you suggest, educates participants, aligning with his vision of proof as a communal endeavor. On punitive measures, your example of corrective airtime for media errors evokes Kucharski’s examination of legal thresholds (e.g., “beyond reasonable doubt” versus “preponderance of evidence”), where disproportionate accountability could recalibrate incentives. However, implementation would necessitate careful calibration to avoid chilling legitimate inquiry, a tension Kucharski highlights in discussions of Type I errors and institutional trust.

Finally, your optimism regarding AI’s potential to refine our grasp of reality is well-founded and forward-thinking. Kucharski, too, envisions AI not as a panacea but as an evolving tool for sifting signal from noise—enhancing pattern detection in vast datasets while demanding vigilant oversight for embedded biases, as seen in flawed algorithms like COMPAS. With safeguards like those you propose, AI could indeed propel us toward more accurate representations, transforming uncertainty from a paralyzing force into a catalyst for innovation.

In sum, your reflections not only synthesize Kucharski’s core tenets but extend them into actionable territory, underscoring humanity’s adaptive resilience. By prioritizing empirical rigor within an open, critical ethos, we may yet forge pathways to truths that, if not absolute, are sufficiently reliable to guide ethical and sustainable action. I would be interested in your thoughts on how such frameworks might apply to emerging challenges, such as AI governance in global policy.

20241229 – Sentience

MAIN IDEA:

This book is about consciousness, sentience, the relationship between these two notions, and the evolutionary meaning of their development. The author defines these notions thus:

“The adjective ‘sentient’ came into use in the early seventeenth century to describe any creature—human or otherwise—that responds to sensory stimuli. But the meaning subsequently narrowed to put emphasis on the inner quality of the experience: what sensations feel like to the subject.”

“Consciousness means having knowledge of what’s in your mind. Your conscious mental states comprise just those states to which at any one time you have introspective access and of which you are the subject.”

After that, the author presents the results of research on monkeys with different parts of the brain disabled, and some unexpected results it produced. The author also discusses sensations and perceptions using the following framework: “Sensations are about what’s happening to you at your sense organs. Perceptions are about the state of the world.”

Finally, the author allocates much space to discussing non-human sentience and consciousness. Eventually, he concludes: “While we needn’t doubt that there are many other life forms out there in the universe, we’ve come to see that the evolution of life, even intelligent life, will not necessarily have entailed the evolution of phenomenal consciousness. On Earth, it has so happened that a sequence of ‘lucky’ breaks paved the way for it to evolve as it has done in mammals and birds. On Earth, if the same local conditions were to hold, it’s quite possible that the sequence could be repeated. But outside the Earthly environment all bets are off. The chances of phenomenal consciousness having evolved somewhere else in the universe could be vanishingly small.”

MY TAKE ON IT:

I think that sentience is a common feature of any object, whether a living creature or an automaton, capable of changing its condition in time and space by interacting with the surrounding environment to achieve whatever objective this object has. Obviously, a more complex system, especially a biological one, possesses much more complex internals and therefore exhibits less predictable changes in internal condition in response to stimuli. At some point, these internals come to include consciousness as a tool that allows individuals to cooperate with others at high levels of sophistication and to handle a rapidly changing environment with a success impossible at lower levels. A big part of this cooperation is a highly developed language that allows conscious creatures not just to transfer complex information but also to store it in distributed form, so a group can do something that nobody can do alone. Moreover, it allows intergenerational cumulative transfer, leading to progress in dealing with the environment from generation to generation. In other words, expanding sentience into consciousness is not inevitable, but when it happens, it provides a huge evolutionary advantage.

20241013 – The Nocebo Effect

MAIN IDEA:

This book is about the nocebo effect, the opposite of the placebo effect, in which a clearly inactive treatment causes harm because of a patient’s psychological condition. Here is the authors’ definition: “In our view, the nocebo effect can be summarized as ‘the occurrence of a harmful event that stems from consciously or subconsciously expecting it.’ The core of the nocebo effect is that adverse health effects occur as a result of negative expectations.” The authors present the history of research in this area since the early 1950s, the mechanics of how it works, and its impact not only on the outcomes of medical treatments and the well-being of patients but also on healthcare costs. They also provide recommendations for minimizing this effect’s negative impact. Finally, the authors present their view on the nocebo effect’s impact on society overall and on its ability or inability to handle various challenges, from the environment to political, economic, and personal risks.

The book includes a nice diagram of how the nocebo effect works. [Diagram not reproduced.]

MY TAKE ON IT:

For me, the analysis in this book presents an interesting demonstration of the interconnection between the reality of life, human perception, and modeling of this reality, which leads to conscious or unconscious actions that, in turn, change reality. This topic goes way beyond the medical side of the placebo/nocebo effect. It could be used to understand human actions in all areas of life, including the economy and politics.

From this point of view, the currently popular contentions over information vs. disinformation, fake news, DEI, and the like are just attempts to use the psychology of the nocebo effect to achieve specific population behaviors. In a democracy, even a flawed one, such attempts usually fail because of the difficulty of isolating people from accurate information. That’s why the people benefiting from COVID and climate alarmism’s distortions of resource allocation fail to achieve complete dominance despite mass propaganda efforts and relatively mild coercive actions such as the cancellation of non-compliant individuals.

Their ideological peers of the Communist and Nazi varieties were more successful because concentration camps and outright executions were much more effective than the loss of jobs and prestige. However, even their success was limited in time due to reality’s nasty habit of undermining any ideology that deviates too far from it.

The problem for individuals is that they do not have enough time and ability to recover from mistakes to afford much nocebo effect in their lives. The solution is to control one’s perception of reality by seeking out a variety of views and, most importantly, evaluating these views based on their ability to predict future events rather than on the authority of their presenters, the feeling of virtue they provide, or even how logically consistent they are.

20240609 – The Experience Machine

MAIN IDEA:

This book discusses a novel theory of human behavior and the functioning of the brain. In this theory, the brain is considered, first and foremost, a tool for generating predictions about the environment, which then uses the sensory organs as secondary tools to adjust these predictions. Here is the author’s formulation: “Perception is now heavily shaped from the opposite direction, as predictions formed deep in the brain reach down to alter responses all the way down to areas closer to the skin, eyes, nose, and ears—the sensory organs that take in signals from the outside world. Incoming sensory signals help correct errors in prediction, but the predictions are in the driver’s seat now. This means that what we perceive today is deeply rooted in what we experienced yesterday, and all the days before that. Every aspect of our daily experience comes to us filtered by hidden webs of prediction—the brain’s best expectations rooted in our own past histories.” So human behavior is not a reactive two-step process of perception and action but rather an active four-step process: prediction, action, perception, correction.

MY TAKE ON IT:

The approach to human brain information processing suggested in this book changes the understanding of this processing. The first step is to plan, or build, an internal abstract model of reality. The second step is to direct the sensory organs to actively search for confirmation of this model while ignoring other information as irrelevant. Only when contradictory information becomes so overwhelming that it cannot be ignored does the brain implement the correction step. This makes sense and explains many experimental results related to priming, such as the famous experiment with the “invisible” gorilla in the basketball game.

It is an interesting approach, and it points to a very important function of the human brain: building predictive models. Actually, this approach goes back to the very beginning of cybernetics, when the objective was to direct anti-aircraft fire by predicting the future position of the targeted aircraft and of the artillery shell meant to shoot it down. This was a very simple process, fully within the computational capabilities of the electronics of the time. Obviously, the complexity of the models built by human brains is much higher than these simple beginnings, but the sequence of processes is the same.

At the top level of complexity, it nicely explains the phenomenon of highly educated people being much more protective of their beliefs, even when such beliefs are obviously incorrect. This is because the models of highly educated people are very sophisticated and built at high cost, and are therefore much more difficult to replace than the models of less educated people. Hopefully, the new understanding presented in this book will help promote the development of correction processes for individuals whose perception of the world is built on propaganda and distortions of reality. The success of such an endeavor could help achieve peaceful coexistence between people with different worldviews based on different and often seemingly contradictory facts.
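The prediction-action-perception-correction cycle, and the anti-aircraft predictors of early cybernetics, both have the shape of a classic tracking filter: predict first, then adjust the model only by the prediction error. Here is a minimal g-h (alpha-beta) tracker sketch (my illustration; the gains and the noisy data are arbitrary assumptions):

```python
def track(measurements, dt=1.0, g=0.4, h=0.2):
    # A simple g-h (alpha-beta) filter: the model predicts the next
    # position, then corrects position and velocity by fractions
    # (g and h) of the prediction error, rather than trusting the
    # raw measurement directly.
    x, v = measurements[0], 0.0      # initial position and velocity estimates
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # 1. predict where the target will be
        residual = z - x_pred        # 2. perceive: compare with the measurement
        x = x_pred + g * residual    # 3. correct the position estimate
        v = v + (h / dt) * residual  #    ...and the velocity estimate
        estimates.append(x)
    return estimates

# Noisy readings of a target moving at roughly 2 units per time step:
noisy = [0, 2.4, 3.7, 6.3, 7.9, 10.2, 11.8, 14.1]
print(track(noisy))
```

The filter's estimates track the underlying motion more smoothly than the raw readings, and, like the brain model described above, it only revises its internal state in proportion to how badly the prediction failed.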