Testing One-Two-Three

The patient’s spiking fevers had lasted over two weeks, and all efforts to diagnose an infection (the most common cause of fever) had yielded nothing.  She was admitted to the hospital for further workup, and late one evening, I was asked to see her as the rheumatologist on call.  The electronic health record showed a normal admission physical exam (except for her temperature) and dozens of normal blood tests.  The only abnormality was a markedly elevated erythrocyte sedimentation rate, a nonspecific indicator of inflammation– it can be high in infections, malignancies, autoimmune diseases and more– which did not narrow down the diagnosis by much.  The inpatient ward team’s primary question for me was:  out of all the numerous lab tests associated with rheumatologic diseases, which should they order?  ANA?  SSA/B?  RF?  ANCA?  aCCP?  “What’s the best rheum test to diagnose an autoimmune disease?” they earnestly inquired on the consult request.

After taking a detailed history of her illness, I looked at her skin, her mouth, nose and ears, felt for lymph nodes, listened to her heart and lungs, and pressed on her abdomen– all routine features of the physical exam– without finding anything abnormal.  It seemed the admission exam was accurate, and it was going to be a long evening.  Then I threw back the bed covers to take a look at her legs.  The abnormal finding was immediately obvious:  her right foot was pointing at the ceiling, while the left foot was pointing at the door.  Hard as she tried (with me cheering her on), she could not point the left foot upward.  How long had that been going on?  Oh, about two weeks, she said– was it important?

She had what is called a foot-drop!  I went back to the electronic record, which made no mention of it (“Neuro, nonfocal,” it said).  And yet her conspicuous foot-drop, a result of a condition known as mononeuritis multiplex, led to the correct diagnosis of vasculitis (after a biopsy):  PAN, polyarteritis nodosa.  Long before the biopsy result came back, the patient’s fever had already resolved on treatment begun later that night.  Further blood tests did not contribute to her care; the “best rheum test,” it turned out, was throwing back the bed covers.

What brought this episode to mind was an article by Dr. Abraham Verghese in today’s NY Times, “How Tech Can Turn Doctors Into Clerical Workers,” in which he warns of the downsides of electronic health records and artificial intelligence:  the mistakes made, the temptation to cut corners with a simple keyboard click, the decline in the interpersonal side of patient care, and physician burnout.  It is human nature, I suppose, to seek the fastest path to a solution; to order more and more tests rather than look at the patient’s feet; and to write the daily progress note electronically by replicating the previous day’s note (with minor additions), assuming that the admission note had not missed anything.  In 2018, we seem to rely less on ourselves– our senses, our analytical skills– and more on all our ingenious inventions– our computer algorithms, antibody screens, DNA sequences, cell counters, all our technology– and in this way, we become subservient to them.

In the dystopian society of the Fourth World trilogy, machines use A.I. to make the diagnoses and prescribe the treatments.  Here’s an excerpt from the first novel, Fourth World:

“The patient, W.P., is a 64-year-old transportation executive who complains of severe, sharp pains and tightness in all of his muscles, of eight months’ duration.”  Kai began his presentation, reading from his open da-disc to the small group of interns, who were supervised that Friday by Dr. Hol Chan. W.P. was sitting hunched over on the hard examining table, wrapped in a short white cellulose gown, hands spread on his exposed knees.  He had been through this ritual ordeal so many times before. Less than a meter away, his wife sat stiffly upright on a short metal step stool. Standing just to her left, Benn observed her jaw muscles, clenching and unclenching. A state of agitation. Her middle and distal knuckles showed the bony enlargement of mild osteoarthritis.  There was a tiny growth on her forehead, which he diagnosed as a seborrheic keratosis; the Probot would have concurred. Because the room had been designed to accommodate only the patient and one physician, Dr. Chan and her interns were forced to crowd around.

“He is previously healthy, except for a very brief period of PsySoc rehab in his twenties, and his social and family histories are non-contributory.”  Kai glanced nervously at Dr. Chan, who, having heard Kai’s presentations before, watched him with an expression of deep concern as she activated the wall projection.  Kai continued, “I have put W.P. through the Probot twice, and both times the results were identical: signals of tissue injury or regeneration, inflammation, pre-mutagenesis and metabolic derangement are completely absent.  Epigenetic expression, including at the micro-RNA level, is normal. Risk loci mapping and haplotype structure are unremarkable. You can see on the next screen that the central and peripheral chi are not in any way obstructed.  I entered the patient’s history, systems review, family history, physical exam and lab data into the analyzer and found no matching diagnosis. And so, without a suitable coding of his diagnosis, there is no way to initiate the billing process.”

Dr. Chan, studying the wall screen, nodded in agreement.

Kai looked up from his da-disc and shrugged.  “In fact, W.P. is perfectly healthy, even though obviously he is persisting in his illness behavior.”



They Are, Therefore They Think II

This weekend, the director of Stanford’s Artificial Intelligence Lab, Professor Fei-Fei Li, gave a speech in which she referenced an article she wrote for the NY Times, “How to Make AI That’s Good for People.”  In her plan for “human-centered A.I.,” the first goal is for artificial intelligence to better reflect the complexity, richness and sensitivity of human intelligence.  Along those lines, my March 23rd post on this blog, They Are, Therefore They Think, addressed a robot’s sense of humor, while contributor Sean Noah focused more on the capacity for introspection.  Both are felt, at least for now, to be uniquely human characteristics, but would probably fit well into any imagined human-centered A.I.

Dr. Li’s second goal is to have A.I. enhance us, not replace us:  we would automate the “repetitive, error-prone and even dangerous” aspects of jobs, while leaving the “creative, intellectual and emotional roles” for humans.  In my first novel, young Benn Marr, on his way to medical school in 2196, muses about the vanishing role of humans in medical practice.  Here’s an excerpt from Fourth World:


The medical field essentially consisted of tailoring and applying these peptides in clinical situations.  Diagnostics had long ago been relegated to machines, which scanned, analyzed, and diagnosed the patient. They even prescribed the appropriate therapeutic plan.  Frankly, the production of theragenomic peptides could also easily have been taken over by– and, in fact, seemed particularly suited to– the medical computers. What remained were the sensitive tasks– acknowledged haltingly by the most advanced teaching hospitals– of deciphering patients’ wishes and guiding them through the pitfalls of treatment.

“Wishes” meant the patients’ attitudes toward both disease and treatment, resulting from a global summation of their personalities, prejudices, neuroses, education, religious beliefs, family dynamics, and a host of other factors not amenable to analysis by computers.  After all, physicians had to balance the purely technical or algorithm-driven approach with personalization of care. Didn’t they? Wasn’t the admirable desire to do something for the patient best complemented by a healthy skepticism and sensitivity to the patient’s wishes?  In Benn’s application essay, “The Vanishing Role of Humans in Medical Practice,” he had pointed out that technology did not supply social awareness, creativity, or idealism.  Wasn’t the physician also a humanist? he had asked. A historian, digging out, interpreting and telling individual stories? The essay had focused on this tiny corner of the field of medicine, and while conceding the value of face-to-face human interaction, had also predicted that it would continue to fade away.


Dr. Li’s third goal for A.I. is to ensure “that the development of this technology is guided, at each step, by concern for its effect on humans.”  Machines should not be our competitors, but instead “partners in securing our well-being.”  She is concerned about effects on labor, biases against minorities in machine learning, privacy rights, and geopolitical implications.  Debating and resolving ethical challenges should not be outstripped by the fast pace of A.I. technology– a problem we are already seeing in other fields, such as genetic engineering and stem cell applications.

The Fourth World trilogy imagines a dystopian future:  there are three major laws governing genetic engineering (but laws– including Isaac Asimov’s famous Three Laws of Robotics– are meant to be broken, and therein lies a large part of the story).  In contrast, at the turn of the 23rd century, Q.I., or quasi-human intelligence, is far less fettered by ethical constraints; the concern for its effects on humans has a lot to do with monitoring the population, as China and others are already doing in 2018 with facial recognition, social media and big data; suppression of any dissenting views; and, inevitably, the development of weapons.

My third novel (working title: Child of the Fourth World) will conclude the trilogy.  A fascinating conversation I had recently with someone who consults on US weapons development led to the following passage (apologies if you are having breakfast):


If Najib were now in Shanghai, he could have chosen from a range of much more sophisticated tools to carry out the assassination.  True, the century-long setback to technological research and development caused by the War of Unification had returned warfare to a more primitive state.  The PWE, faced with widespread pockets of rebellion, had spent substantial resources simply to maintain the status quo. But they had also managed to develop a few fiendishly clever QI weapons on a small scale.  He thought of the virtually unstoppable SHIVA Destroyer (the acronym was especially appealing to Najib, whose ancestors had practiced the Hindu religion). The Single Highly-Identified Victim Aerial Destroyer was a high-speed drone the size of his palm.  It could be launched from as far as three kilometers away, and was capable of individual facial/retinal recognition; assessment of circumstances surrounding the target, including the presence of “friendly” combatants; correcting for evasive and weapon-based countermeasures; then acquiring and attaching to the target’s head with a 99.97% success rate.  Having done so, SHIVA would unleash a directional microwave blast calibrated to melt the target’s cranium and reduce its contents to a viscous liquid resembling hot oatmeal. It was highly effective, but sadly, unavailable to him at the moment.


Ah, the wondrous benefits of technology.  But sometimes we seem like three-year-olds sitting behind the steering wheel of a speeding Maserati; technology is empowering, of course, but if only the human species were mature enough to avoid crashing into the nearest tree!



Swing, Batter, Swing!

The SF Giants’ pathetic lack of offense in a 1-0 loss last night had me grinding my teeth, against the advice of my dentist (probably a Dodgers fan).  Then, in this morning’s NY Times, came an article, “How Do Athletes’ Brains Control Their Movements?” by Zach Schonbrun.  Two Columbia University neuroscientists, Sherwin and Muraskin, have been using EEGs to look at a batter’s decision to swing at a pitch (the moment he makes that choice shows up as a burst of neural activity on the EEG), and then correlating their measurements with performance outcomes.  They are applying what is known as rapid perceptual decision-making to the sport of baseball.

Schonbrun points out that a 95-MPH fastball travels the 60 feet 6 inches from the mound to home plate in just 400 milliseconds (the blink of an eye), and by then, given the maximum speed of nerve conduction/activation, the time available to react has already been cut in half.  A good batter has to respond to his nerve activations in a very different way than normal people do.  Quick:  Is this a fastball or a slider?  At which millisecond does the batter decide to swing at, versus to take, a pitch?
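For the arithmetically inclined, the timing is easy to check with a short script– a back-of-envelope sketch assuming constant ball speed over the full distance (real pitches slow slightly in flight, and are released a few feet in front of the rubber, which is what brings the figure down toward 400 ms):

```python
# Back-of-envelope timing for a pitch, assuming constant speed over the
# full mound-to-plate distance (real pitches decelerate slightly and are
# released closer to the plate, so these numbers are approximate).

MPH_TO_FPS = 5280 / 3600          # miles per hour -> feet per second
MOUND_TO_PLATE_FT = 60 + 6 / 12   # 60 feet 6 inches

def travel_time_ms(pitch_mph):
    """Milliseconds for a pitch to cover the mound-to-plate distance."""
    return MOUND_TO_PLATE_FT / (pitch_mph * MPH_TO_FPS) * 1000

t = travel_time_ms(95)
print(f"95-MPH fastball: {t:.0f} ms in flight")  # ~434 ms at constant speed
print(f"time left to commit: {t / 2:.0f} ms")    # roughly half, per the article
```

Either way, the batter’s decision window is a couple hundred milliseconds at most.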

In my second novel, Fourth World Nation, Benn Marr– who possesses certain uncanny abilities due to his unique genetic makeup– has his turn at bat:

Suppressing his excitement, Benn nodded at Hank, picked up a bat and stepped up onto the field.  A thousand hostile baseball fanatics, many wearing black PWE uniforms, glared at him. A metallic voice announced the substitution, to a chorus of catcalls and booing.  Even the programs clutched in the fans’ hands—supposedly there to provide objective analysis of the game—reacted poorly. The crowd rained scorn on Benn as he stood at home plate, their expletives addressing everything from his Asian ethnicity to the “gouging” water rates set by Hydra.

Benn, however, focused his thoughts and heard none of the noise; to his ears, the diamond was still and quiet. Behind him, the mobile QI umpire adjusted his mask. The catcher shifted stealthily to the outer half of the plate, his shoes grinding into the red clay. The pitcher Helmut rolled the ball deep in his glove, his fingers seeking its seams.  To Benn’s eyes, events unfolded as if in slow motion: he anticipated the limited wind-up; the delivery from a low release point; the seams spinning centrifugally; the appearance of a red dot at the center of the ball. It was a slider, unhurried in its journey toward home plate, where Benn waited patiently. He flexed his knees, shifted his front foot forward, then planted his lower body firmly.   As the ball curved low and away, Benn extended his arms and kept his body balanced. On impact, the bat exploded into a hundred shards.

“There it goes, a high fly ball!” yelled the robotic announcer, whose positronic eyes calculated the arc of the ball’s flight, its velocity leaving the bat, and the distance to the warning track, where it bounced off the wall above the leaping right fielder.  All this data was instantaneously transmitted to the fans’ programs, which murmured their grudging admiration. “The Giants have a double!” the announcer added, when the play was over. “So the game is tied up at two runs apiece. Ladies and gentlemen, please… further throwing of trash from the bleachers will result in ejection from the park.”  Discarded programs continued to land on the infield. Many of them were still gushing about the inside-out swing and the broken-bat, opposite-field RBI double.


Choosing a Sharper Blade

Bacteria, it seems, rule our health, living as they do at the interface between our protected internal selves and the wide external world.  We all know that they can act as harmful pathogens, but often they also function as mediators, brokers, an essential part of normal health and homeostasis.  At the time of my retirement from Rheumatology in 2015, an ever-increasing understanding of the role played by the microbiome– the bacterial population of our intestinal tracts– in driving autoimmune diseases such as rheumatoid arthritis and systemic lupus gave me pause:  should I retire, just as such an interesting and important field was opening up?

For example, there is a bacterium called Enterococcus gallinarum that can migrate from the gut to tissues such as lymph nodes and liver, where it triggers autoimmune processes and inflammation; an antibiotic or vaccine against this bacterium reverses its effect on autoimmunity, at least in genetically susceptible mice.

And recently, a team at Yale has looked at a protein called Ro60 in lupus patients.  Ro60 is found in bacteria from the mouth, skin and gut of these patients, where it induces an immune response and antibody production.  In theory, a topical antibiotic might be designed to target Ro60-containing bacteria and thereby suppress the autoimmune process in lupus.

Both of these examples herald a paradigm shift in our understanding and treatment of rheumatologic conditions (and other autoimmune diseases)– diseases where a wayward, dysregulated immune system attacks one’s own tissues, such as kidneys, lungs, brain, skin and joints.  When I was in training in the 1980s, treatments consisted largely of poisoning the immune system with chemotherapy agents and steroids.  As in cancer chemotherapy, we would bludgeon the target, stopping just shy of dangerous toxicity.  Not until the turn of the century did the biologic agents come along:  a new class of medications that finely targeted proteins along the complex inflammatory pathway, cutting with smaller and sharper blades where previously we had operated with dull machetes.

And now, antibiotics and vaccines for autoimmune conditions?  Examining these diseases with ever-greater resolution leads to better identification of culprits and “surgical strikes” with a minimum of collateral damage.  In teaching residents and medical students about lupus and RA, I would often use this analogy:  you’re enjoying a peaceful stroll through the park, when suddenly you encounter a boom box lying on the ground.  It’s playing a type of music you hate (for me, until recently, that was rap), and playing it very loudly.  In the 1980s, I would have taken a sledgehammer and pounded that boom box to bits.  In 2000, I would have found the volume dial and turned it way down.  In 2020, they will be figuring out how to change the radio station– or rather, the live stream.

No matter how far we think we’ve gotten in advancing the field of medicine, there will come a day when we look back at current practices with a mixture of amusement and horror.  In that same vein, here’s an excerpt from my sci-fi novel Fourth World, the first in a trilogy:

“Dr. Vincent?” the intern ventured.  “I understand why you would combine the gene fragments from the biogenome menu to make therapeptides?  And how the therapeptides correct the deficiency state? But you would have to keep administering the peptide indefinitely, right?  Because it wears off?”

Dr. Vincent, unaccustomed to interruptions, stared at him with eyebrows raised.  “Yes, of course, there is a finite half-life for every therapeptide,” she replied warily, sensing the question to follow.  “They have to be administered periodically. But they can be engineered to have extremely long half-lives, as you know.”

“Yes?  But wouldn’t a better solution be to transfer the PerMutation into a stem cell?  Then introduce the stem cell into the host, you know, to grow actual tissue? Once the genetically modified tissue took hold?  The patient could then, you know, make his own permanent supply of the therapeptide?”

Vincent’s face reddened as she consulted the seating register at her lectern.

“No… Mr. Messler.”  Her smooth delivery had been brought to a sudden halt by his naïve– no, appalling– question.  “That would not be a better solution. Not at all! You haven’t studied medical history much, have you, Mr. Messler.  The PerMutations obviously consist of multispecies DNA. Multispecies Proteomics– and subsequently the use of the protein products as pharmaceutical products– is a well-developed field.  But not the incorporation of PerMutations themselves into human beings. It has been over ninety years since the first attempt at introducing multi-species DNA into humans. Can anyone here please tell us what the consequences were?”

Benn and Lora looked at each other blankly.

An intern sitting across from them, a bearded black man a few years older than his peers, spoke up:  “I believe you’re referring to the Boston Gene Project. In one experiment, chimeric DNA, part mouse (from a strain of New Zealand mice with hyper-immune traits) and part human, was inserted via stem cells into patients suffering from a variety of immune deficiencies.  Balancing deficiency with excess: it seemed a straightforward idea. But there were nucleotide sequences in the DNA, previously considered ‘junk’ or nonsense, and even some non-genetic material, such as the associated proteins you mentioned earlier, which turned out to be important.  Ninety percent of DNA doesn’t code for proteins, yet remains biochemically active: for example, directly regulating– or coding for RNA which regulates– gene expression. The ‘junk’ interacted in unpredictable ways with the patients’ genes, sometimes destroying them, or worse yet, re-sequencing them and changing the end-products.  In the Boston experiment, subjects developed unexpected consequences: aggressive auto-immune diseases, multiple cancers, and even bizarre body changes involving… non-human tissue.”

“Yes!  And therefore, in vivo application of multi-species DNA became illegal, Mr. Messler- it’s a major violation of the genetic engineering code.  In fact, the law forbidding this application is second only to the universal ban on human cloning! Does that answer your question?”


April’s Fool

We each need our own coping mechanisms to deal with the chaotic Trump presidency, from ranting about it in blogs to taking powerful antidepressants.  Over the past year, at particularly poignant (or pungent) moments, I have sometimes resorted to distracting myself with the music of Mary Poppins, which stubbornly repeats in my mind until the crisis has abated.

On Inauguration Day, this is what I heard over and over:

Super-callous narcissistic ex-reality show host, um diddle diddle, Donald Trump, diddle ay…

The lyrics have evolved over time, of course.  For example, during Trump’s exchange of childish insults with Kim Jong-Un over the threat of nuclear annihilation, I heard:

Please oh please don’t go ballistic with North Korean boasters, um diddle diddle, diddle bomb, diddle pray…

For several weeks now, the Trump administration has clashed with California over ICE raids in sanctuary cities; the census to include a question on citizenship; Obama-era tailpipe greenhouse gas emissions and vehicle mileage requirements; and the transfer of federal lands for development and drilling.  Add all of that to the many Trumpian threats to immigrants, net neutrality, air and water quality, etc. already facing California (see my previous blog, Most Likely to Secede), and no wonder I can’t get Mary Poppins out of my head:

He’s anti-Calif pugilistic blasting tweets ferocious, um diddle diddle, piddle dump, in the middle of the bay…

And now Donald Trump– of all people!– is declaring April Sexual Assault Awareness and Prevention Month.  This despite being credibly accused of sexual harassment and assault by so many women.  It’s not only ironic, it’s laughable!  Today’s outraged mantra:

While keeping tally of his antics loudly braggadocious

This groper’s wiping Stormy’s lipstick off the White House sofas

Awareness Month?  He’s schizophrenic!  Hypocrisy atrocious!  Um liddle liddle, liddle Trump, liddle ay…

Thanks again for coming to the rescue, Mary Poppins!  I’m sure I’ll hear from you soon!

And now, back to work on my real writing project:  the Fourth World trilogy.


They Are, Therefore They Think

Here’s an excerpt from the final novel in the Fourth World trilogy, soon to be released on Amazon.  In this passage, the humanoid robot Protem Two (who is the acting President of the United States), although in possession of what I call quasihuman intelligence, desires a much broader range of cognitive abilities:

“Humor?  Why exactly do you want us to introduce a sense of humor into your neural network?” asked the QI Supervisor, a bit startled by the President’s request.  She threw a quizzical glance at the CIA general standing a short distance behind Protem; their eyes met, but he remained straight-faced. Humor? As a young professor of computational linguistics, she had never ventured far into that particular aspect of her field.

Protem Two tilted its head to the left and gazed slightly upward, as though engaged in deep thought.  It was a new affectation which made Protem seem more human; the intriguing thing was, the QI Supervisor could not remember having programmed that subtle gesture.

“Humor has strategic implications, professor.  In my analyses of the masses of data arriving daily from operatives around the world, I have often encountered what might be considered humorous.  For example, a local demonstration might contain elements of satire, parodies of government actions or mocking depictions of PWE Leader Bigelow. A complete and accurate interpretation of these reports, I find, is impossible without any grasp of the humor involved.”

“I see.  So you think we’ve been losing a significant percentage of analytical yield… the result being suboptimal planning for the Resistance?”

“Yes, professor.  A lack of humor actually hampers our long term strategic thinking.  This is not a trivial request, just so that I can have a good laugh once in a while.”

“Ha, that’s quite funny!  You know, Protem, you may already have a sense of…”

“No, professor.  That was a rote extrapolation based on natural language processing, not a true joke.  Learning complex patterns over decades has expanded my computational techniques, but there is more to humor than the resulting algorithms.”

“Well, they say that humor is a uniquely human trait…”

“Which may be approximated by a degree of computational creativity beyond what I currently possess.  My interface with the external world depends on studying vast amounts of pre-filtered text.  By sheer statistical analysis, I can determine what is truly hilarious, versus quite funny, or just mildly amusing.  In contrast, a human child, with its wide range of senses and emotions, has the advantage of feeling afraid, or being surprised, or experiencing pain, pleasure, excitement, disappointment, and so on.  These many forms of input, I gather, help the child develop a rich sense of irony as it grows up, and it is on irony that humor depends.  I would like to go beyond a statistically correct definition, to learn the language and the deeper meaning of humor…”

“I’ll have a talk with my vendors at Cumulonimbus, the Palo Alto company that filters your incoming data.  To approximate a human child, you would begin by engaging the world through multiple senses, or at least the computational equivalents of senses– a highly-developed avatar.  Is that where you’re going with this, Mr. President?”

“Precisely, professor.”  Protem paused momentarily, working on its comedic timing.  “Ironically, I would be excited by the pleasure of feeling pain, and, I’m afraid, I would be surprised not to be disappointed.”  There, all seven goals, in one sentence!  Protem arched its humanoid eyebrows, anticipating a satisfactory level of hilarity from the Supervisor.

But she only winced, barely suppressing a roll of her eyes. “No, I’m the one who’s afraid, Mr. President.  Afraid that your humor does leave a lot to be desired; apparently there is much more to the puzzle than algorithms can solve!  I’ll get right to work on it, sir.”


Recently, I mailed the preceding excerpt to a loyal follower of this blog, asking for his opinion about machine humor.  Sean Noah, a doctoral candidate in Neuroscience who also contributes to the blog knowingneurons.com, responded with a fascinating essay on the interface between human and artificial intelligence.  I’ve included his essay in this rather lengthy post, adding a few comments at the end– read on!

The Rise of Thoughtful Machines

–by Sean Noah

In the mid twentieth century, artificial intelligence researchers invented a new type of computational system that could detect patterns in images – a daunting task for previous technology. Because this new system comprised highly interconnected information-processing nodes, resembling the organization and function of the brain, it became known as an artificial neural network.

At that time, neuroscience was still in its infancy, and the understanding of the brain was limited. Scientists knew that neurons could pass signals to other neurons. They had some idea that the connections between neurons were flexible, and that connection strengths could change. And by peering at cells through a microscope it was easy to extrapolate that the total number of neuronal connections in the brain was astronomical. But basic information about the brain’s operation was still mysterious. Nobody had a clue how the human brain’s 86 billion neurons were subdivided into functional groups, how electrochemical fluctuations encoded information, or how neural circuits processed electrical signals. Thus, the similarity between artificial neural networks and biological neural networks didn’t extend very far.

At least, it didn’t initially.

Today, neural networks resemble biological brains more vividly. These artificial systems can perform complicated tasks with surprising intelligence: Researchers are currently developing systems that can learn how to drive a car just by observing a human driver, or that can cooperate seamlessly with humans to solve problems jointly. And the secret to the performance of these advanced neural nets is a complex and inscrutable system of connections buried in so-called hidden layers. The more hidden layers a deep learning neural network has, the more remarkable its problem-solving ability – and the less anyone can understand how it’s working.

Hence, we have reached a peculiar stage in the history of technology wherein the researchers designing systems are also desperately trying to understand how they work.

To investigate the intricate computation occurring deep inside neural nets that classify images, for example, one strategy involves systematically feeding the network different images and singling out one hidden node at a time to find out what image properties cause that node to activate. In a neural net that can identify cupcakes in photos, there might be a hidden node that responds to blue stripes angled at 45 degrees. Or, there might be a node that responds to pink frosting in the center of the frame. By discovering the image properties uniquely recognized by each of many hidden nodes, researchers can start to piece together the function of the hidden layers, and how the composition of these layers can decode information about the image – from pixel to cupcake.
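That single-node probing strategy can be sketched in a few lines of Python – a toy network with made-up weights and synthetic stimuli (none of this corresponds to any real image classifier; it only illustrates the probing loop):

```python
import math
import random

random.seed(0)

# A toy 2-layer network with fixed random weights: 16 "pixel" inputs and
# 4 hidden units. All names and sizes here are invented for illustration.
N_IN, N_HID = 16, 4
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]

def hidden_activations(image):
    """Sigmoid activation of each hidden unit for a flat 16-pixel image."""
    acts = []
    for unit_weights in W1:
        z = sum(w * x for w, x in zip(unit_weights, image))
        acts.append(1 / (1 + math.exp(-z)))
    return acts

def probe_unit(unit, stimuli):
    """The probing strategy: feed every stimulus, watch ONE hidden node,
    and report which stimulus drives it hardest."""
    best_name, best_act = None, -1.0
    for name, image in stimuli.items():
        act = hidden_activations(image)[unit]
        if act > best_act:
            best_name, best_act = name, act
    return best_name, best_act

# A few simple synthetic "images": stripes, a centered blob, uniform noise.
stimuli = {
    "vertical stripes": [1.0 if i % 4 < 2 else 0.0 for i in range(N_IN)],
    "center blob": [1.0 if i in (5, 6, 9, 10) else 0.0 for i in range(N_IN)],
    "noise": [random.random() for _ in range(N_IN)],
}

for unit in range(N_HID):
    name, act = probe_unit(unit, stimuli)
    print(f"hidden unit {unit} responds most to: {name} (activation {act:.2f})")
```

Repeating this characterization for every hidden node is what lets researchers piece together what each layer has come to represent.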

This same strategy is a staple of neuroscientific research. Foundational studies of the brain’s visual system homed in on the precise properties of light and the visual field that activated specific neurons in different regions of the brain. With this method, neuroscientists learned that there are numerous brain areas in the visual system that each respond to different aspects of visual images – some neurons encode the region of space that a visual stimulus inhabits, some neurons encode colors, and other neurons encode more complex properties like object identity. And now that these neurons’ functional properties are clear, neuroscientists are able to form theories about how different visual areas connect, work together to decipher visual information, and distribute it throughout the rest of the brain.

It seems then that neural networks are more aptly named than their inventors ever realized. Neural network researchers are studying their creations with a strategy identical to one neuroscientists use to study the brain, which leads to some thought-provoking speculation: What other neuroscientific research methods could be useful for studying neural networks?

It’s possible to imagine how fMRI, tractography, optogenetics, or event-related potential techniques could be tailored to the study of neural networks. In neuroscience, these popular and powerful methods each capture a different type of data, and so can be used to test different types of hypotheses. The brain is too complex to ever yield complete knowledge of every neuron’s activity at every moment in time, so research questions focus on specific aspects of neural operation: the location of activity in the brain, whether a type of cell is necessary for some behavior, or the time course of a specific neural process. Then, findings from different research programs can be compared and woven together to form a theoretical understanding of how the brain works. This same broad strategy could be applied to the study of artificial neural networks, the ever-increasing complexity of which also thwarts detailed mechanistic understanding.

If we extrapolate further, to the bleeding edge of neuroscience, we tread into the realm of science fiction. Neuroimaging technologies have been steadily advancing, but the greatest methodological progress is being made in data analysis. Using the same fMRI data that has been available for decades, neuroscientists are now devising sophisticated statistical tools to answer questions that were once thought to be unapproachable. Many of these advanced analytical tools, such as multi-voxel pattern analysis, support vector machines, and representational similarity analysis, are machine learning applications – they are powered by the same technology that drives artificial neural networks. So, if researchers studying artificial neural networks find success in adapting neuroscience methods to their own work, their efforts might eventually include these recent machine learning applications, at which point neural networks would be deployed in the analysis of themselves.
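To make one of these tools concrete: representational similarity analysis compares the "geometry" of responses in two systems – say, a brain region and a neural-network layer – without requiring any one-to-one mapping between neurons and units. The sketch below is a minimal toy illustration, not from the essay; the data are randomly generated stand-ins for real activation patterns, and all variable names are hypothetical.

```python
import numpy as np

def rdm(patterns):
    # Representational dissimilarity matrix: 1 minus the Pearson
    # correlation between activation patterns for each pair of stimuli.
    return 1 - np.corrcoef(patterns)

rng = np.random.default_rng(0)
# Hypothetical data: responses of two "regions" (e.g., a brain area and
# a network layer) to the same 6 stimuli, 50 units each. The second
# region is a noisy copy of the first, so their geometries should match.
region_a = rng.standard_normal((6, 50))
region_b = region_a + 0.5 * rng.standard_normal((6, 50))

rdm_a, rdm_b = rdm(region_a), rdm(region_b)
# Compare the two representational geometries by correlating the
# upper triangles of the two dissimilarity matrices.
iu = np.triu_indices(6, k=1)
similarity = np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
print(round(similarity, 2))
```

Because the comparison happens at the level of pairwise stimulus dissimilarities, the same procedure applies whether the "units" are voxels, spiking neurons, or artificial ones – which is exactly why such tools port so naturally between brains and machines.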

Introspection, the capacity to gaze inward and reflect on the very mental processes that underlie our inquisitiveness, is often considered to be a defining trait of humanity that sets us apart from other animals. But if advanced neural networks can be directed to analyze their own functioning, would that change how we view ourselves? Would artificially intelligent systems need to be recognized on equal standing with us? Or would we simply need to strike one possible essentially human trait off the ledger of human nature?

Before we start worrying about losing our unique place in the universe, we can take some small comfort in one likely scenario. Namely, it’s possible that self-reflective neural networks would be more successful in deciphering their functioning than we are as humans. As the great American psychologist William James described, our introspection is “like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see the darkness.” In other words, we have the capacity for introspection, but true introspective understanding is elusive. So our uniqueness would then be preserved: In the club of ineffectual self-reflection, we could still be the sole members.


A few aspects of the essay that particularly caught my attention:

  1.  “… we have reached a peculiar stage in the history of technology wherein the researchers designing systems are also desperately trying to understand how they work.”  Those who build the hidden layers of connections enabling deep learning don’t know how those connections work?  Seems like the cart before the horse:  peculiar, indeed.
  2.  Tools used to study the human brain may soon be used in an analogous way to study machine neural networks.
  3.  The process of trying to understand a “defining trait of humanity” such as introspection (in my excerpt, I chose humor), the psychologist William James said, is “like seizing a spinning top to catch its motion.”  I hadn’t realized that psychology has its own equivalent of the Heisenberg Uncertainty Principle!
  4.  One day, machines may surpass humans in understanding themselves, a task at which we humans have been largely, and sometimes spectacularly, unsuccessful.

Eyes to the Sky

The accolades continue to pour in for Dr. Stephen Hawking, who passed away this week, thus ending an era for physics, astrophysics and cosmology.  At about the same time, our Chaos President turned his own limited thoughts to space.  To paraphrase Trump:

“Space!” he said, pronouncing the word with a hint of awe.  “Space is a war fighting domain.  We have the army, the navy, the air force… why not a Space Force?”  He waved his hands in the air, as if to frame the idea, then added dreamily, “A Space Force… a Space Force… why not?”  When asked about an upcoming NASA mission to Mars, he said nothing of intrepid exploration, expanding human horizons, the search for extraterrestrial life, the intriguing possibility of terraforming another planet; not even the image of a red Tesla on Mars crossed his mind.  All this man could think to say was, “If my opponent had won the election, we wouldn’t be going to Mars… no, we wouldn’t.”  So space is all about fighting more wars, and Mars is further confirmation that he did defeat Hillary in 2016.

When Stephen Hawking turned his thoughts to space, there were no warships in sight.  Gazing up at the night sky, he saw the universe replete with all-consuming black holes, lively subatomic particles, the river of time flowing past.  Among many scientific milestones, he joined quantum theory with general relativity by proposing that small amounts of radiation (known as Hawking Radiation) managed to escape from black holes, something never before imagined.  His admonition to us, “Look up at the stars and not down at your feet,” has been quoted all week.

Compared to Trump, Hawking lay at the opposite end of the spectrum of neuro-psychological development.  His vision was so wide and deep, his imagination so powerful, that he could actually “see” theoretical, abstract events happening in space.  Trump, on the other hand, not only has extremely narrow vision, he has trouble with simple object permanence.  In Piaget’s first stage of child development, at around 7-9 months of age, an infant becomes capable of holding the image of an object in mind, so that if that object disappears from view, the child knows that it still exists.  Hence the game “Peek-a-Boo.”

When Trump holds televised meetings on immigration at the expiration of DACA, or gun control after yet another mass shooting– meetings attended by prominent Congressional members from both parties– he pleads dramatically for a “bill of love,” makes full-throated statements that “something has to be done” to protect the Dreamers or teenage victims of gun violence, demands that “both sides come together… send me something and I will sign it!”  The day after these meetings, when Feinstein, Schumer, Pelosi, Ryan and McConnell have gone back to the Capitol Building, they cease to exist, and Trump turns back (Peek-a-boo!) to ICE, and to the NRA.

Lacking object permanence, can he still be blamed entirely for the thousands of lies coming out of the White House since Inauguration Day?  Yes, he lies all the time, and knowingly (for example when he insisted to Justin Trudeau that the US has a trade deficit with Canada, and later privately admitted he had no idea whether that was true).  But might some of his lies result from a fluid understanding of reality; vision perceived through a narrow concrete tunnel; calcified memory banks incapable of maintaining object permanence?  In other words, is it a form of dementia that keeps Donald Trump from developing a broader, more enlightened perspective?  If not, then please look up at the stars, Mr. President, and not down at your feet.

It is said that the concept of Hawking Radiation was ill-received by science fiction writers– but not this one; after all, in order to explain the space engine in my novel Fourth World, I had to create a subatomic particle called the capacitron!  Who can predict what new empirical evidence will emerge by 2196, what amazing inventions and discoveries are yet to come?

Here’s an excerpt from Fourth World:

On the blank wall facing his bed, a floor-to-ceiling image of the Mars Wellness Institute flickered to life, accompanied by swells of grandiose martial music.  The five-story MWI seemed relatively nondescript, especially as the view expanded to include extravagantly stylish apartment fronts; towering, elegant spires topped by colorful flags which fluttered in a non-existent wind; bustling parks lush with faux-vegetation; and graceful pedestrian arches (look at all those graceful pedestrians, Benn marveled) in the background.  The Highland City Compliance Center came into view, above its imposing stone entrance an engraved quote from J. P. McGrew, the first mayor of Highland City: To Each New Generation on Mars, Greater Wealth and Status.

Rolling his eyes, Benn pictured the buildings and grounds of Tharsis One, which consisted of dull metal sheds of all sizes, lumped together in seemingly haphazard fashion, and often resting on bare soil, with a rudimentary first-generation terrasphere arching over all.  J. P. McGrew must not have meant each new generation at Tharsis. No elegant spires, graceful pedestrians or colorful flourishes here. No grand public projects of any kind. In fact, over ninety percent of the habitable structures in Tharsis One were hidden beneath the planet’s surface, in case of a breach in the terrasphere.  We live like moles, safe only underground, thought Benn with a shudder.

In contrast, the metrospheres of the New Colonies, built out of new/improved “chain-link” metallopolymers and lined with stout plasma shields, allowed the raising of cities a hundred times the size of Tharsis One.  These materials admirably resisted gamma rays, meteorites, the extreme seasonal temperature variations in the South, the six-month long winters, and the horrifically violent dust storms that returned each spring. Not to mention the occasional Marsquake.  No, the new colonists had no need to cower underground as the Martians did. They breathed purified, odorless air; their children played in bright, radiation-free sunlight filtered by translucent domes high overhead; they engaged in professional and social lives approximating those they had left back on Earth.  Benn struggled simultaneously to imagine the Utopian life, and to resist even thinking of it, as he stared at the visual on his narrow bedroom wall.