Eric Vandenbroeck 10 June 2018:

As recently pointed out in Nature, from the nineteenth-century benches of microbiologists Louis Pasteur and Robert Koch to the sequencing of the human genome in 2003, the past 200 years have seen medicine advance at an extraordinary pace.

To avoid antibiotic resistance undoing a century’s worth of progress, researchers are racing to restock the antibacterial armoury. Others are exploiting the data generated by ubiquitous computers and smartphones to better anticipate outbreaks of infectious disease.

With the potential for gain so great, the prevention of illness is playing an ever-larger part in medicine. Intervention to protect people from long-term disease could begin in the first moments after birth. And although a decline in health in later life might seem normal, there is ongoing debate about where healthy ageing ends and disease begins.

Work to exert greater control over rogue immune systems, as well as to develop technological solutions to paralysis, is showing initial promise. The advent of CRISPR–Cas genome editing has raised hopes for widespread use of gene therapy; meanwhile, this technology is also aiding the search for new drugs. As long as barriers to accessing the best treatments available can be negotiated away, the future of medicine could be very bright indeed.

But of course there are also pitfalls.

About hard health data, waste, and how a proper Google search might help you.

We know that authorities encouraged the health care efficiency movement of the 1990s, evidence-based medicine (EBM), as a reaction to the exaggerated claims of pharmaceutical salespeople. Hard data, it was hoped, would curb waste.

It is possible, however, without breaking any conventional ethical rule, to design medical experiments and analyze their results so as to maximize the appearance of effectiveness, even when a careful examination of the data reveals faults such as a small sample size or statistical measures of significance that are less meaningful than they usually seem.
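To make that concrete, here is a minimal, hypothetical simulation in Python (using NumPy and SciPy; the scenario and numbers are invented for illustration, not drawn from any study cited here). It runs many small two-arm trials of a treatment with no real effect and counts how many reach nominal significance; reporting only those “successes” would make a useless treatment look effective.

```python
# Hypothetical illustration: many small trials of a treatment with zero true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_trials = 1000    # independent small studies
n_per_arm = 15     # small sample size in each arm
alpha = 0.05       # conventional significance threshold

significant = 0
for _ in range(n_trials):
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)  # true effect is zero
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:
        significant += 1

# Roughly 5 percent of the null trials look "effective" by chance alone;
# with samples this small, any one of them can also show a large apparent effect.
print(f"{significant} of {n_trials} null trials reached p < {alpha}")
```

Running the sketch yields on the order of fifty “significant” results out of a thousand, exactly what chance predicts; selective design, analysis, and reporting exploit that arithmetic.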

But the epidemiologist John Ioannidis, who first called broad attention to the problem of replicating scientific findings in 2005, has found that in practice reviews and meta-analyses have compounded the problem. Because these genres are cited more often than most original papers, they have become a favorite means of professional advancement.(1)

Not only that: according to the Open Access Directory project, there were 9,156 peer-reviewed titles and over 2.32 million articles in October 2016. As many as 10 percent of these journals could be fraudulent, according to the librarian and researcher Jeffrey Beall, who posted a list of over three hundred suspect publishers on the web.

Confronted with challenges to his methods and threats of litigation, Beall retracted his list in January 2017. In a European biochemistry journal later that year, he explained that he had withdrawn the list specifically to end an aggressive campaign directed at his superiors at the University of Colorado. An anonymous and highly personal anti-Beall site has nonetheless remained, claiming to represent “a group of librarians around the world” united for open access against the alleged “predatory blogger.”(2)

But beyond what we read in print, of course, there are now the many wearables that promise healthier living, and possibly insurance savings, to many or most of their users. Self-monitoring appears to be a logical development in the movement for better living. But there is more than one side to efficiency. The same technology that empowers people through self-surveillance and voluntary sharing of personal data with friends and family members has another potential: monitoring by employers and possibly even by governments. For many information technology enthusiasts, this may be not a bug but a feature, an opening for a benevolent paternalism that can combat deadly trends toward obesity and sedentary living, reduce health care costs, and increase longevity. Studies cited by PwC estimate that there were already 75 million wearables in the workplace by 2016, and that by 2020 fully 8 million people would be required to wear them.

If such surveys are representative, self-monitoring is on the way to becoming a domestic rite. Whether it can add ten years to users’ lives, as 70 percent of respondents now believe, remains to be seen.

The unintended consequences of self-monitoring technology were revealed in a study by Jordan Etkin, a professor of marketing. Etkin performed a series of six experiments with undergraduate and adult subjects (web-based, using Amazon’s Mechanical Turk for payment) engaging in a variety of activities: not only walking (using a pedometer rather than a more advanced device) but also coloring shapes and reading texts.(3)

Two writers have shed light on the problem.(4) Cari Romm, in her New York magazine blog, describes the “existential angst” of giving up the measurement of her steps by Fitbit. Paul Ford, in The New Republic, relates how, in pre-wearables days, he developed a program to track food and exercise in order to reduce his obesity. He ended his self-devised and temporarily successful program, regaining his weight, unable to bear the self-monitoring regime. “Weight loss—the self-improvement industry in general,” he wrote in his diary, “is a kind of natural, physical postmodernism. You become the text you are editing, rewrite your feelings, the body.” He saw his friends’ Fitbits and other wearables as unsustainable.

Romm’s and Ford’s experiences, and those of others, show how self-monitoring can begin as an exciting and rewarding project and become an onerous duty. Depending on the activity, it can take weeks for pleasure to return. It is possible that this is a minority response, despite Etkin’s findings; after all, as we have seen, many experiments in the medical and behavioral sciences cannot be replicated.

On the other side, a number of other social psychologists have reported similar findings, and numerous other papers report the disadvantages of extrinsic motivation.(5)

The mobile technology revolution’s great innovation is to make both data gathering and comparison possible in real time; significantly, the phrase “quantified self” first appeared in Wired magazine in 2007, the year of the iPhone’s introduction. But it remains to be seen who is made more self-confident and competent by the new technology, and who becomes more anxious and dissatisfied.(6)

The artificial intelligence available to most physicians is far more modest and affordable. It can be both efficient and lifesaving, helping professionals adhere to checklists and spot warning signs they might otherwise have overlooked, but it has neither Watson’s near-universal database of medical journals nor the benefit of Watson’s extensive coaching. Harvard Medical School researchers published the first review of these programs in 2016, and their results cooled earlier expectations that software running on off-the-shelf computers could exceed the diagnostic skills of professionals. Instead, the doctors prevailed.

Two hundred thirty-four internal medicine specialists were presented not with live patients but with “vignettes” of forty-five clinical cases, each reviewed by at least twenty physicians. For each case, the physicians proposed the most likely diagnosis followed by two alternatives; the same vignettes were put to the programs. On the first choice, the doctors were right more than twice as often as the programs: 72 percent versus only 34 percent. When the alternates were included, the doctors listed the correct diagnosis among their top three in 84 percent of cases, while the electronic symptom checkers scored barely above half, at 51 percent. The programs fared relatively better with common diagnoses and worse when conditions were more unusual. This may suggest that experienced doctors have acquired not only rules but the ability to recognize more complex presentations: tacit knowledge that they might not be able to articulate in advance for an algorithm but that they can exercise in practice.(7)
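For readers who want to see how such top-one and top-three figures are computed, here is a small, hypothetical sketch in Python (it is not the study’s code, and the diagnoses and answers are invented for illustration).

```python
# Hypothetical sketch: top-k accuracy for ranked diagnoses.
def topk_accuracy(ranked_lists, correct_answers, k):
    """Fraction of cases whose correct diagnosis appears among the top k guesses."""
    hits = sum(1 for guesses, truth in zip(ranked_lists, correct_answers)
               if truth in guesses[:k])
    return hits / len(correct_answers)

# Toy data: three vignettes, each with a ranked list of three proposed diagnoses.
proposals = [
    ["pneumonia", "bronchitis", "asthma"],
    ["migraine", "tension headache", "sinusitis"],
    ["appendicitis", "gastroenteritis", "kidney stone"],
]
truths = ["pneumonia", "sinusitis", "cholecystitis"]

print(topk_accuracy(proposals, truths, k=1))  # 1/3: only the first case is right on the first guess
print(topk_accuracy(proposals, truths, k=3))  # 2/3: sinusitis is recovered among the alternates
```

The study’s 72 versus 34 percent and 84 versus 51 percent are these two measures, computed over the forty-five vignettes for the physicians and the symptom checkers respectively.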

Artificial intelligence can shorten the often harrowing search for correct diagnoses of rare diseases, and it may be able to help find cures. Electronic medical records, properly implemented, may yet fulfill their promise if the burden they place on physicians, and the risks that come with it, can be reduced. Vikas Saini quotes Dr. Bernard Lown, cowinner of the Nobel Peace Prize, as observing that “the usual rules of efficiency are inverted in medicine. The more time a physician spends with patients, the more efficient he or she becomes. Listening costs next to nothing, and so is infinitely more cost-effective than drugs and devices.”(8)

Ultimately, even if schools and colleges, faced with so many financial and curricular pressures, do not create new programs in search skills, searching well is a skill that individuals can develop on their own, through practice and through the resources provided by Google and other search engine companies. The right kind of search, not always the most efficient in the short run, can improve investments, purchases, travel, and health decisions.

 

1. John P. A. Ioannidis, “The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses,” Milbank Quarterly 94, no. 3 (September 2016): 485–514.

2. “OA by the Numbers,” http://oad.simmons.edu/oadwiki/OA_by_the_numbers; Declan Butler, “The Dark Side of Publishing,” Nature 495, no. 7442 (March 28, 2013): 433–35; Andrew Silver, “Controversial Website That Lists ‘Predatory’ Publishers Shuts Down,” Nature, January 18, 2017; Carl Straumsheim, “Academic Terrorist,” Inside Higher Ed, June 2, 2017, https://www.insidehighered.com/news/2017/06/02/librarian-behind-list-predatory-publishers-still-faces-harassment-online; Jeffrey Beall, “What I Learned from Predatory Publishers,” Biochemia Medica 27, no. 2 (2017): 273–79, http://www.biochemia-medica.com/2017/27/273.

3. Jordan Etkin, “The Hidden Cost of Personal Quantification,” Journal of Consumer Research 42, no. 6 (April 2016): 967–83; Alfie Kohn, Punished by Rewards (Boston: Houghton Mifflin, 1993).

4. Rachel Bachman, “To Add Steps, Fitbit Cheats Use Pets, Ceiling Fans, Power Tools,” Wall Street Journal, June 10, 2016; Thomas Heath, “This Employee ID Badge Monitors and Listens to You at Work—Except in the Bathroom,” Washington Post, September 7, 2016.

5. Cari Romm, “The Nihilistic Angst of Quitting Your Fitbit,” New York magazine (blog), August 26, 2016, nymag.com/scienceofus/2016/08/i-quit-fitbit-and-fell-into-nihilistic-despair.html; Paul Ford, “I Tried to Build My Perfect Quantified Self,” New Republic, Fall 2015, 4–5.

6. Deborah Lupton, “Understanding the Human Machine,” IEEE Technology and Society Magazine, Winter 2013, 25–30.

7. Katherine Igoe, “Head-to-Head Comparison Reveals Human Physicians Vastly Outperform Virtual Ones,” October 11, 2016, https://hms.harvard.edu/news/doc-versus-machine-0; Hannah L. Semigran et al., “Comparison of Physician and Computer Diagnostic Accuracy,” JAMA Internal Medicine 176, no. 12 (2016): 1860–61.

8. Vikas Saini, “Improving Health Care with the Simple Act of Listening,” STAT, October 17, 2016; Abigail Zuger, “Are Doctors Losing Touch with Hands-On Medicine?,” New York Times, July 13, 1999; Deborah Lupton, The Quantified Self: A Sociology of Self-Tracking (Cambridge, U.K.: Polity, 2016).

 

 
