
MAIN IDEA:
Here is the author’s summary of what this book is about: “This book seeks to tell the story of how knowledge has been passed from its vast passel of sources into the equally vast variety of human minds, and how the means of its passage have evolved over the thousands of years of human existence. In the earliest times—back even in hominid days, before Homo was even on the verge of becoming sapiens—the transmission was effected near-entirely as a consequence of experience. The experience of rain and cold required the seeking of clothing and shelter; to accommodate and reverse the experience of hunger necessitated the finding and preparation of sustenance; to counter the perils of hostility—whether experiencing it from wild beasts or from other humans, and so knowing its dangers—required preparedness and, perhaps, the acquisition of some kind of martial equipment, and which might overcome the approaching challenge.”
In addition to retelling the history of the development of human knowledge, the author also looks at the meaning of new developments such as computers and AI that threaten to take the process of knowledge acquisition away from humans: “If our brains—if we, that is, for our brains are the permanent essence of us—no longer have need of knowledge, and if we have no need because the computers do it all for us, then what is human intelligence good for? An existential intellectual crisis looms: If machines will acquire all our knowledge for us and do our thinking for us, then what, pray, is the need for us to be?”

MY TAKE ON IT:
This book provides a good overview of the history of knowledge. It poses a serious question about what comes next, now that computers with AI technology will very soon be able, with some justification, to sing to humans an old song from the musical “Annie Get Your Gun”: “Anything You Can Do (I Can Do Better)”.
However, I do not see an existential problem here as long as humans remain subjects and computers remain objects. Any activity directed at achieving some objective always includes:
- Explicit or implicit formulation of the purpose: “Why do it?”
- Definition of the goal: “What to do?”
- A detailed, even if flexible, algorithm of actions: “How to do it?”
- Finally, the activity of actually doing it.
The development of knowledge-based technology has moved these stages, one by one, from humans to computers and machines. First, machines replaced multiple humans by multiplying the power of a single operator. For example, unloading a truck full of bags of rice used to require several men carrying the bags on their backs. Then came the forklift, and one man could unload the truck by telling the machine how to do it with the control levers. Next came a computer-based warehouse control system that produces signals telling forklift operators (human or automatic) what to do: go to this truck, unload it, and put the bags over there. These activities, however complex, can be done by computers and machines. However, only humans can define the objective of doing all this and answer the question “Why do it?”, because computers do not recognize themselves as entities with wants and needs.

Is it conceivable to create a self-directing computer with its own wants and needs? I am sure it will be possible someday, even though the human brain has 85 billion neurons, far beyond the current technological level, and even though it takes decades of development to bring a brain to the functional level of a regular human adult. It will probably be done as an experiment within a century or so, but what is the point, beyond the proof that “yes, we can do it”? It would be much like the moon landing: “Yes, we did it,” but do we really need to do it again?
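To make the why/what/how division above concrete, here is a minimal sketch in Python of a hypothetical warehouse control system. All names and details (Truck, plan_unloading, the bag counts) are my illustrative assumptions, not anything from the book: the program works out the “what” (instructions for forklift operators) and delegates the “how” to the forklift, while the “why”, the objective, still has to be supplied by a human.

    from dataclasses import dataclass

    # Hypothetical sketch: a warehouse control system that automates
    # "what to do" and leaves "how to do it" to the forklift,
    # while the objective ("why do it") must come from a human.

    @dataclass
    class Truck:
        truck_id: str
        bags_of_rice: int

    def plan_unloading(objective, trucks):
        """Turn a human-supplied objective into concrete instructions."""
        instructions = []
        for truck in trucks:
            instructions.append(
                f"[{objective}] Go to truck {truck.truck_id}, "
                f"unload {truck.bags_of_rice} bags, put them in the rice bay."
            )
        return instructions

    if __name__ == "__main__":
        # The "why" does not originate inside the program.
        objective = "restock rice for next week's orders"
        trucks = [Truck("T-101", 200), Truck("T-102", 150)]
        for line in plan_unloading(objective, trucks):
            print(line)

Nothing in such a sketch, however elaborate it were made, tells the system whether restocking rice is worth doing in the first place; that judgment still belongs to the human subject.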