“Hollywood Failed Bruce Willis”… Really? Or maybe Hollywood stumbled into doing the right thing?

A month or so ago I started seeing the headlines about how Bruce Willis has retired from acting, due to his “aphasia diagnosis.” Which to me was a slightly curious headline – given the fact that aphasia is really much less a disease or disorder per se, and more a syndrome (defined as “a group of symptoms which consistently occur together, or a condition characterized by a set of associated symptoms”).

Apparently Bruce Willis, who has been in a number of fantastic movies, including all-time favorites of mine like Pulp Fiction and the groundbreaking American action film Die Hard (which spawned a number of somewhat dubious-but-still-passable sequels), has been struggling for years with what look to me like symptoms of progressive cognitive impairment.

Although there’s no way to be particularly sure without knowing Bruce Willis’ medical history, ideally supplemented with a competent dementia diagnostic exam (which he’s no doubt received at this point), one has to wonder: is it Alzheimer’s Disease? It’s possible. While language deficits like anomia (word-finding difficulties) and outright aphasia can occur in Alzheimer’s, remember that Bruce Willis is currently 67. Given that he’s apparently been suffering from his language difficulties – his aphasia – for years, his disease onset was relatively young for Alzheimer’s dementia, which typically has an age of onset sometime after 65.

Another possibility is vascular dementia – like a number of Hollywood actors, Willis has a history of self-admitted excessive drinking, bouts of sobriety, and also famously smoked in his Die Hard movies – all of which raise the possibility of having vascular events, e.g., strokes – which can preferentially damage the parts of the brain that produce or understand language.

Finally – there is the possibility of a somewhat rarer, but still relatively common dementia syndrome at play – that of Frontotemporal Dementia, or FTD, specifically the FTD subtype of Primary Progressive Aphasia, or PPA. Unlike Alzheimer’s Disease, FTD has a typical age of onset that is somewhat younger (50 as opposed to 65), and the primarily language-driven symptoms of Bruce Willis’ “aphasia diagnosis” also fit.

Regardless of what we’re talking about – again, while there’s no way to be 100% sure – it seems his aphasia is really just an expression of a larger dementia diagnosis, one which has apparently been at play for years now. It happens. Given age and the right combination of risk factors, cognitive impairment can and does occur.

Is cognitive impairment, per se, the end of the world? Not at all – in fact, the diagnostic world I live in as a geropsychologist makes allowances for it. I’ve written elsewhere about the entity of mild cognitive impairment, whereby a person can, with increased use of compensatory strategies (like a calendar, address book, reminder system, etc.), still hold down a job and a productive life, despite having some symptoms of memory loss or deficits in other areas of cognitive functioning.

Indeed, this seems to have been what happened to Bruce Willis. Now to get to the point of this article.

In reading some of these articles about Bruce Willis with interest (I am, in fact, a practicing geropsychologist), I found this one, by Joe Ferullo, apparently a filmmaker and former CBS and NBC producer and executive, titled “Hollywood Failed Bruce Willis”:

Mr. Ferullo notes in his opinion piece that over the last few years, Bruce Willis increasingly took roles in lower-budget action and horror films. As the last few years of his film career progressed, he apparently made increasing use of what is called an “earwig” in Hollywood (basically a small earpiece designed to pipe dialogue into the ear of an actor who, for whatever reason, hasn’t been able to memorize their lines), and was often subject to having his lines cut from films he was in.

Joe Ferullo calls this “failing” Bruce Willis – and says in his article that producers were likely “… aware of Willis’s difficulties. But acknowledging those problems to investors, and perhaps to themselves, may have carried too high a cost for some”…. and by not acknowledging these problems, Mr. Ferullo states, Hollywood “failed” Bruce Willis.

Joe Ferullo goes on to say:

“They needed Bruce Willis … and Willis may have needed them. There may have been a sincere effort on the part of his agents and managers to keep him busy and engaged, as a way to delay the worst of what his disease would eventually bring. However, at a certain point, most likely sometime in the last two years if reports are correct, a health line seems to have been crossed. But the work continued. Rumors swirled and stories were swapped — but posters still got made and investors signed checks. Directors and fellow actors reportedly worked hard to protect Willis, to keep the production going and make sure he looked good in the final cut.”

I have some major doubts that Bruce Willis was an unwilling participant in any of the films he appeared in up until his retirement. This is a man who has clearly loved his job – loved playing the tough guy, the action hero – and despite his illness, he got to do that up until last year.

I’m also not sure where this “health line” Mr. Ferullo mentions actually lies. But here’s what seems obvious to me: “Hollywood,” apparently, did its very best to accommodate Bruce Willis’ cognitive and language issues so that he could continue to work and do the job he loved doing. What, exactly, does Mr. Ferullo propose should have been done differently, either before or after the “health line” was crossed? Should Bruce have given up acting years ago, when his symptoms first became apparent? Should he have announced to the world, “well, I have dementia – so I’m done with doing what I love now”?

I have trouble with this idea.

In fact, I think that Bruce Willis – because he’s an important person, a movie star – got the privilege of working longer into his dementia (or “aphasia”) diagnosis and doing what he loved long after many of us would have been forced to retire, because in this case, Bruce Willis’ interests and the interests of the movie industry aligned. Producers and directors provided him with assistive devices (the “earwigs”). They made reasonable accommodations by minimizing his lines and giving him roles that allowed him to maximize his physicality and presence while minimizing his need to rely on memorization and language. This, to me, sounds like a win – maybe even a triumph.

Joe Ferullo’s article may even betray a bit of Eurocentric cultural bias when it comes to the issue of cognitive impairment. My first ever peer-reviewed article (here) talks about an interesting difference between Caucasian and Latina dementia caregivers in the likelihood of institutionalizing their loved one with dementia. It may surprise some, and not others, but Latina caregivers were significantly more likely to delay sending their loved one to a nursing home or assisted living facility. All other things being equal, it wasn’t financial resources or some other factor that best explained this – it was the fact that Latina caregivers, more often than not the ones “less acculturated” into US white dominant culture, were much more likely to see positive aspects to caregiving, which allowed them to see the benefit of keeping their loved ones at home, integrated into the family and into the daily rhythms of life.

That being said – no, I’m not lionizing or making a hero of the movie industry here. As Joe Ferullo mentioned, they were (and are) motivated by money. If they did the right thing, they stumbled into it. Likewise, I’m not ruling out the possibility that in the rush to continue to cash in on the stardom of Bruce Willis even as his cognitive capacities faded, Hollywood may have somehow harmed Bruce’s health – although I’ve seen no evidence of this in any of the articles I’ve read thus far.

So – at the risk of repeating myself – I disagree with Joe Ferullo. Although motivated by profit and money, Hollywood in my opinion did not “fail” Bruce Willis. They appear to have stumbled into helping him to continue to age in place and do what he always loved doing, long after most of us would have otherwise been forced to give it up.

Good for you, Bruce.

MOVIE REVIEW: “The Father” (2020)


 “I feel as if I’m losing all my leaves. The branches, and the wind, and the rain. I don’t know what’s happening anymore.”

(Warning – spoilers aplenty ahead).

I just saw the 2020, Anthony Hopkins-starring film, “The Father,” a film adaptation of a play by Florian Zeller. The shattering quote above is uttered by the protagonist of “The Father” – Antony, an 84-year-old widower, retired engineer, and London resident – as he collapses in the arms of a nurse in the final scene of the film, revealing how truly, desperately removed from reality he has become. In a moment of painful insight of a kind dementia sufferers don’t always possess, he cries and sobs the quote above while she calmly tries to soothe him by promising him prosaic comforts: placid walks in the park, a nice lunch, and a nap to fill his day.

I’ve seen several movies that depict dementia. Many of these depictions are offered in a factual and straightforward narrative style – like the excellent 2014 film starring Julianne Moore, “Still Alice,” or “Iris,” the brilliant and heartrending biopic of Iris Murdoch starring Dame Judi Dench.

While many of these movies are excellent in many ways, as someone who is both a cinephile and who has worked extensively with dementia patients for almost 20 years, it was only after watching “The Father” that I discovered the ingredient that had been missing in all of the films I had seen before. It’s the thing that quality therapists and psychologists always look for when working with clientele – the ability to really peer into the heads of the people they are trying to help – what I’ve heard referred to as empathic understanding.

I like this definition from the massively influential founder of the Humanistic School, Carl Rogers:

“[Empathy is] an accurate, empathic understanding of the client’s world as seen from the inside. To sense the client’s private world as if it were your own, but without losing the ‘as if’ quality–this is empathy.”

Obviously, part of this whole accurate empathy business comes down to just plain inborn skill. Some of us are naturally better empaths than others (while others, I don’t know, are better suited for being physicists or mathematicians). But at the same time, it’s never just a walk in the park trying to communicate our empathy to our clients – as all of us professional therapists know, our clients are surprisingly good at calling bullshit on us when we’re no good at it.

Even with cognitively normal clients, the ability to accurately empathize can be extremely difficult. For example, I’m a white male with a fairly upper-middle-class upbringing: how easy will it be for me to accurately empathize with the trials and tribulations of, say, a low-SES, minority, female, LGBTQ client? Not very (not, at least, without great respect, compassion, and earnest curiosity, amongst other things). But the business of dementia, particularly more advanced dementia like that portrayed in “The Father,” takes the difficulty of trying to ‘crawl into the head’ of one’s clientele and understand their experience, well, to a completely different level.

The movie takes place across only a small handful of sets (it was originally adapted from a stage play, after all), and its pacing and continuity are confusing – but honestly, it couldn’t have been any other way. While I haven’t exhaustively consumed movies about dementia or aging, I can’t remember any movie that attempted to portray the experience of dementia from the first person like this one does – although the neo-noir 2000 Christopher Nolan gem, Memento, does take a stab at this in portraying a protagonist with anterograde amnesia, albeit in a grim, highly stylized and cinematic way.

This first-person take makes the world of Antony confusing, stressful, and unmoored – early on in the movie you get to see demonstrated first-hand one of the long-observed bits of clinical lore from my world, what is often labelled “paranoia.”

Antony has a beloved watch that features heavily throughout the movie, but one that he frequently has trouble finding. Throughout the movie he suspects, or downright accuses, at least two caregivers and his daughter Anne’s husband of stealing his watch – when in fact, of course, he simply keeps losing it.

(As an aside, I always explain this to family or caregivers thusly: when we are cognitively intact and we lose things – as I frequently do – we are intact enough to essentially remember that we forgot.)

This is not a luxury that Antony has. His sense of time, and the identities of the people around him, are also unstable. There is one scene in the movie where his daughter returns to ‘his’ flat (which, it turns out, might never have been his flat at all) with some dinner, and there is a comparatively brief moment – which felt like an eternity – where Antony honestly didn’t recognize his daughter as she came through the door. The brilliance of the movie is that we weren’t quite sure it was his daughter either – somehow, she looked different, and Antony’s brief terror and uncertainty was also our own.

The movie has moments like this throughout, where we share in Antony’s convoluted and confused perceptions and attempts (at times, very unsuccessfully) to make sense of his world, and all of the anxiety and worry that he suffered through as a result. We also get to see the effects it has on the people around him, like the caregivers who his long-suffering and devoted daughter, Anne, tries to pair him up with in a desperate attempt to keep him living in a semblance of normality outside of an institution.

Anne and Antony aren’t completely one-sided or fully sympathetic characters in this movie, either. Antony himself can be manipulative and spiteful – and at one point in the movie, Anne entertains some particularly dark fantasies about her father that simply demonstrate how far she has been pushed (and I know that plenty of dementia caregivers, in their darkest moments, have entertained fantasies like these).

Anthony Hopkins delivers what can only be described as a performance of a lifetime in this movie, with a heart-rending emotional payoff at the end. I’ve literally spent this entire weekend replaying the final scene in my head, haunted by it. If you really want to understand dementia and the caregiving relationship in a way that other movies that have attempted this subject have not been able to do, I highly recommend you watch The Father.

The COVID-19 Economic Depression and our Deflationary Future

As most people have probably noticed, the economy has been in absolutely terrible shape of late. The COVID-19 pandemic, plus the accompanying panicked “lockdown” approach to pandemic disease mitigation (which was also historically novel) imposed by governments, has sent the US into an economic tailspin, with small businesses – particularly restaurants and “Mom and Pop” retail establishments – unable to do business and basically going under.

Sucks to be unemployed.

Unemployment is somewhere in the double digits on average. It’s obviously worst for the “Gen Z” crowd (those in their late teens and 20s) – BLS data has them at around 14 to 17 percent – but older adults aren’t doing a ton better. Back in February of this year (which seems like a decade ago at this point), unemployment for adults over the age of 55 was at 2.6 percent according to BLS data, and now it appears to have basically tripled to 7.7 percent (middle-aged adults like me have apparently fared the best – our unemployment rate has “only” gone from 2.5 to 6.2 percent).

Yes, again – for the Gen Z crowd the statistics are horrible, and I don’t want to discount that. They are angry and they have a right to be: they’re drowning in student loan debt, and housing and medical care continue to hyperinflate away in price for normal people, so there’s reason to feel like the “American Dream” has left them behind.

But for older adults the predicament is even more dire. There’s evidence from the previous “Great Recession” in 2008-2009 that older adults are about half as likely as their younger adult counterparts to become reemployed after being laid off.

Moreover, it’s my opinion that the books, as they say, are cooked when it comes to the unemployment rate:

According to economist John Williams at Shadowstats.com, when you include so-called “discouraged workers” and measures of underemployment in your numbers (as well as some other, broader ways of measuring unemployment), you end up with unemployment numbers likely somewhere in the neighborhood of 5 to 18 percent higher than the officially reported numbers.

Our Deflationary Future?

I don’t read a ton of books these days, I’m ashamed to admit – although I’m reading constantly (articles, Reddit posts, blogs, etc.). I recently encountered someone by the name of Jeff Booth interviewed on a random YouTube channel (the ‘Livorna Podcast’) – never heard of him before, but the guy has a thesis about deflation and monetary policy that is utterly fascinating to listen to and think about. So much so that I bought his book and am now halfway through.

Booth’s thesis is this: deflation, broadly defined as the “reduction of the general level of prices in an economy,” is utterly inevitable in our coming economic future. Regardless of what policies Jerome Powell (our current US central bank chief) pushes through – Quantitative Easing (QE), Zero Interest Rate Policy (ZIRP), the alphabet soup of “credit facilities” they’ve currently created, etc. – the world we are rapidly hurtling towards isn’t inflationary, it’s deflationary. The cost of everything is – and will be – going down.

JPOW and his Money Printing Machine

A few thoughts, I’m sure, come up for folks: wait, didn’t I acknowledge that the costs of medical care, higher education, and real estate are all hyperinflating? They’re all going up, right? That’s not deflation.

But remember JPOW and his money printer, above. There’s a reason why they fight against deflation. To put it simply, the Fed is always fighting the last battle – the spectre of deflation, which, according to conventional wisdom, was the cause of the Great Depression in the 1930s. Because the Federal Reserve didn’t react vigorously enough to it (and apparently tightened the money supply instead of easing it), we suffered from a so-called “deflationary spiral”:

This is why the Federal Reserve feels we need inflation

So here’s the deal – I don’t really buy into the idea that deflation is something to be avoided at all costs. There’s an idea in economics (coined by Joseph Schumpeter) called “creative destruction,” one I am sympathetic to, and one which applies to long-term credit cycles.

The idea is that in times of credit expansion (e.g., when the money supply is increasing, interest rates are dropping, and banks are loaning out more and more money), overcapacity builds up in markets: rampant speculation in assets takes place (bubbles – like the stock market now), and companies that might not normally be able to stay afloat are allowed to continue to exist on a diet of cheap corporate credit. These companies (airlines, luxury cruise lines, and even the likes of Hertz Rent-a-Car being a recent, notorious example) become “zombies”: instead of going bankrupt and undergoing an orderly liquidation process – where their assets are sold off, creditors get the scraps, and the market can “start over” – they are basically allowed to continue on, like the proverbial undead.

Rent a car, anyone?

How does this relate to older adults?

The Big Picture

The big picture is that for the last year or so I’ve become increasingly convinced that there are some big things afoot in the world economy. It started in 2008 with the Great Recession, became more obvious with the 2018–2019 “repo crisis,” and now, in 2020, with all of the economic turmoil that’s ensued, my concerns about and interest in the world economy have heightened significantly.

So I’ll be starting a new blog very soon – and will be therefore migrating a lot of the activity that I used to have here at Aging in America to wherever this new home will be. I’ll also probably be attempting to more formally join the Twitterverse as well… I already have a Twitter account, but I plan to more regularly use it now (follow me @DrGeoffLane).

Stay tuned!

COVID-19 – my Soapbox Post as a Geropsychologist, my Experiences Thus Far

I figured it would be very difficult to write another blog article (which I do fairly infrequently now) without referencing the current COVID-19 pandemic, or what has become colloquially known as, “these challenging times” (good lord I’m getting tired of hearing that phrase – regardless of how accurate it may be).

Anyways, as a geriatric psychologist who has worked for a number of years in a medical setting (skilled nursing), I figure I have a unique perspective to speak about all of this.

COVID and Nursing Homes – We are Ground Zero

I must say it’s been rather interesting working in the nursing home field right now. Typically, skilled nursing very much functions as a forgotten corner of the medical landscape – the armpit, the red-headed stepchild. To put it cynically, in an almost brutal fashion: nursing homes are the place that old people “go to die.” They’re where people go when families have become unable to care for a loved one, or worse, where they are “dumped off.” Ironically, while skilled nursing is arguably one of the most extensively regulated sectors of the US economy, many nursing homes have been very difficult places for older people to live, to put it mildly, with high costs and substandard care often the norm. Generally speaking, nursing homes are, as I said before, forgotten places. People don’t really think about older adults when they’re in nursing homes – they can just be tucked away there and forgotten.

Not anymore! The COVID-19 crisis has made everyone wake up, and wake up big time. One reason for this is the following chart, which I’ve referenced before:


While that chart references data on COVID deaths in China, suffice it to say, the data hasn’t played out much differently in other well-known hotspots like Italy, or the State of New York. COVID-19, while a very serious illness at most any age (although arguably a non-issue for those under 10 years old), has been called an “almost perfect killing machine” for older adults. Moreover, of the over 90,000 COVID-19 deaths that have been counted in the US as of this writing, apparently somewhere in the neighborhood of one-third have occurred among US nursing home residents or workers.

So, no matter one’s personal perspective on how deadly the COVID-19 pandemic is or isn’t, or how at-risk you personally are or aren’t, and no matter how strongly you may feel about so-called “stay at home” or “lockdown” orders, there’s no question: my population, my nursing home patients, are the most vulnerable and need maximum protection from this, no matter what.

Now, on to the rest of us.

COVID – My Initial Response: LOCK IT DOWN!

Initially, when I saw COVID explode onto the scene, it reminded me of my response to 9/11 in the first couple weeks after seeing the Twin Towers fall. It was a response borne entirely of emotion – in this case, a mixture of fear, largely replaced by anger. “Bomb ’em all!” I said (referring to the terrorists who attacked us), and I largely didn’t care about anything other than supporting the government in making sure that, wherever the 9/11 terrorists hid (Afghanistan), they did everything in their power to bomb their butts back to the proverbial stone age.

Of course, after Afghanistan, the government then turned its attention to Iraq, my skeptical nature kicked in, my emotions calmed down, and we know the rest of the story. The “war on terror” remains, with its attendant issues with civil liberties and surveillance at home, and an entrenched and increased military presence abroad that nags us today. We arguably grossly overreacted, and we have paid the price for it – and continue to pay it.

Back in late February and early March, I had a similar response when the national panic about COVID set in, and the escalating lockdowns and shelter-in-place orders began coming out of politicians’ offices at both the local and, most notably, the state level. Initially, I was frightened, scared, and frankly – supportive. Lock it all down!

The Media Base Case for ‘Lock it Down’

Like everyone else, I encountered several influential articles when this all started to happen very quickly.

The first I recall was from the Washington Post, which featured a series of interactive infographics where you could essentially model the spread of disease under different types of social distancing approaches. The article made a strong case (via simulation) that mass social distancing, rather than mere mitigation, was the way to go (and that doing nothing was obviously madness).

Then there’s this one – “The Hammer and the Dance,” published by Tomas Pueyo – which I think everyone ate up simply because he was so eloquent in how he framed the ideas of enforced social distancing (e.g., “shelter in place” orders, or more straightforwardly, what people have been calling “lockdowns”). Similar to the Washington Post article, it made a strong rhetorical and visual case (the Pueyo piece was packed with graphs) for government-enforced social distancing, at least until treatments or cures could be found, or at minimum until we could be assured that the healthcare system would not be overwhelmed by COVID-19 cases – the so-called “flattening of the curve.”
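For the curious, the “flattening the curve” logic can be sketched in a few lines of code. What follows is just a toy SIR (susceptible-infected-recovered) model with made-up parameters – not anyone’s actual COVID-19 model – but it shows why cutting the contact rate lowers and delays the peak number of simultaneous infections:

```python
# Toy discrete-time SIR model, plain Python. All parameter values here are
# illustrative assumptions, not real COVID-19 estimates.

def sir_peak(beta, gamma=0.1, population=1_000_000, days=730):
    """Run a simple SIR simulation and return the peak number infected at once."""
    s, i, r = population - 1.0, 1.0, 0.0  # one initial case
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # ~10-day infectious period
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

no_distancing = sir_peak(beta=0.3)   # higher contact rate
distancing = sir_peak(beta=0.15)     # contact rate halved by distancing

print(f"Peak infected without distancing: {no_distancing:,.0f}")
print(f"Peak infected with distancing:    {distancing:,.0f}")
```

Roughly the same total number of people may eventually get infected, but the peak is far lower and arrives later – which is the whole point when hospital capacity is the binding constraint.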

One of the last major online sources that I personally see as having been hugely influential in the beginning was the following online tool from the Institute for Health Metrics and Evaluation (IHME), located at https://covid19.healthdata.org/united-states-of-america.

Regarding the IHME website – there’s no question about it, it’s a pretty darn cool thing they’ve created. What it does is take the known relevant COVID-19 inputs – confirmed infections, confirmed deaths, hospitalizations, and ventilator usage – and then, using the tried and true tool of non-linear regression, they essentially, well, predict the future.
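To be clear, the IHME’s actual model is far more elaborate than anything I could sketch here, but the general idea of fitting a curve to observed counts and then extrapolating can be illustrated with a toy example. Early epidemic growth is roughly exponential, so a simple least-squares line fit on the logarithm of the counts recovers the growth rate (all the numbers below are made up):

```python
import math

def fit_exponential(days, counts):
    """Fit counts ~ a * exp(b * day) via ordinary least squares on log(counts)."""
    logs = [math.log(c) for c in counts]
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(logs) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, logs)) / \
        sum((x - mean_x) ** 2 for x in days)
    a = math.exp(mean_y - b * mean_x)
    return a, b

# Hypothetical case counts doubling roughly every 3 days:
days = [0, 3, 6, 9, 12]
counts = [100, 205, 410, 790, 1650]

a, b = fit_exponential(days, counts)
doubling_time = math.log(2) / b
print(f"Estimated doubling time: {doubling_time:.1f} days")
# Extrapolation is the dangerous part: a projected count for day 21
print(f"Projected day-21 count: {a * math.exp(b * 21):,.0f}")
```

The fit itself is mechanical; the peril is in the extrapolation, which is exactly where the quality of the inputs starts to matter.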

One could argue that the IHME’s models may have issues with the old “garbage in, garbage out” problem. In the United States, issues with the availability of COVID-19 testing have been an enormous problem, and have made accurately determining the Case Fatality Ratio, or CFR – calculated as the number of confirmed deaths divided by the number of confirmed cases – extremely problematic. In other words, if you don’t know the proper inputs (e.g., the number of people who actually have the disease), you’re never going to figure out how deadly the disease actually is, or how fast it spreads – you know, useful stuff that epidemiologists need to know!
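To make the arithmetic concrete, here’s a toy calculation with made-up numbers showing how limited testing can inflate the apparent CFR: if only the sickest people get tested, the denominator shrinks and the disease looks far deadlier than it really is.

```python
# CFR = confirmed deaths / confirmed cases. All numbers below are
# hypothetical, purely to illustrate the testing-bias problem.

def case_fatality_ratio(deaths, confirmed_cases):
    return deaths / confirmed_cases

deaths = 500
true_infections = 100_000       # hypothetical actual number of infections
confirmed_cases = 10_000        # only 10% found by limited testing

apparent_cfr = case_fatality_ratio(deaths, confirmed_cases)
true_ifr = case_fatality_ratio(deaths, true_infections)

print(f"Apparent CFR (limited testing): {apparent_cfr:.1%}")  # 5.0%
print(f"Infection-fatality rate:        {true_ifr:.1%}")      # 0.5%
```

Same number of deaths, a tenfold difference in apparent lethality – which is why sparse testing made the early numbers so hard to interpret.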

That being said, because testing capacity in the USA was so sparse in the ‘early days’ (back in early March), there’s really no other way to put it – statistical modeling of disease outbreak was all we had. And early on, another highly influential group, from Imperial College London, published this, which made the case that the United States was staring down the barrel of 2.2 million deaths if we collectively “did nothing” in the face of the COVID-19 pandemic.

This scared the heck out of everyone, and so began our march into the new “challenging times” we find ourselves living in.

COVID-19 as seen by a Geropsychologist

I’m not unfamiliar with the whole problem of viral pandemics.

In fact, about 12-15 years ago I had a 3-4 year obsession with influenza pandemics (sparked by worries about H5N1 mutating into a pandemic flu), and read everything I could get my hands on about pandemics and how to survive them, including John M. Barry’s masterpiece of nonfiction, “The Great Influenza.”


As everyone knows about me, I work in a nursing home, and in fact, have worked in long term care on and off for the past 15 years. Infection control and grappling with periodic localized epidemics in our workplace is a common feature of being in long-term care, if not facility-based healthcare writ large.

Moreover, I’m more than aware that disease spread is a social phenomenon, driven by the fact that humans are essentially pack animals and are driven to congregate with each other, often in close contact – something that viruses depend on when it comes to engineering their nefarious activities.

Finally, I’m more than familiar with multivariate statistics and modelling, as I’ve used them in my own scholarly work, and frankly need to evaluate them in the work I do as a clinical scientist.

That being said, I’m not an epidemiologist, physician, or disease specialist, and I am poorly positioned to comment on things such as, for example, the true pathogenicity of COVID-19 in all its various forms and strains that may be circulating the globe right now. Is a vaccine possible in the future? Can people become immune after being infected with COVID-19? Is “herd immunity” possible absent a vaccine? I really don’t know – not my area, and I’m not going to pretend I know one way or another, although I’m watching this all very closely.

Social distancing =/= Lockdown!


Where did the idea of “social distancing” begin?

Before we get into that, we should probably get a couple of ideas firmly into our heads from the very start. The first is that social distancing as a way of slowing or preventing the spread of a communicable disease is *not* the same thing as government-enforced shutdowns of so-called “nonessential businesses,” mandatory mask laws, et cetera.

Second, social distancing is not a way of preventing people from getting infected with a virus – instead, it’s a way of delaying infection, so as to prevent a healthcare system from becoming overburdened with the critically ill.

And it has precedent – the most famous case being the difference in approaches taken between Philadelphia and St. Louis in 1918, as the so-called Spanish Flu was beginning to ravage the United States (there are a number of accounts of this, but I’m taking this from Hatchett, Mecher, and Lipsitch’s 2007 research article in the Proceedings of the National Academy of Sciences of the United States of America).

In the case of Philadelphia, they largely eschewed social distancing, and government policy reflected this, with no bans on public gatherings, no school closures, and no other enforcement of social distancing measures. St. Louis, however, did the opposite and aggressively enforced a broad series of measures like closures, bans on large groups, etc. This apparently yielded results, with Philadelphia being much more adversely affected and suffering a much more pronounced death toll than St. Louis – a peak death rate of 257/100,000 for Philadelphia versus 31/100,000 for St. Louis.

However, Hatchett et al. note a couple of caveats. First, the benefits of these non-pharmaceutical social distancing measures “were limited to the time they remained in effect.” This makes sense – no one would assume that social distancing could ever eliminate a virus; it simply slows it down. Second, they note that social distancing may in fact kick the can down the road to a certain degree, in that “cities that had low peaks…” (produced by aggressive social distancing) “…during the first wave were at greater risk of a large second wave.”
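The “flattening” effect at work here can be illustrated with a textbook SIR epidemic model. This is my own toy sketch with made-up parameters, not anything from Hatchett et al.’s paper – it just shows that halving the transmission rate both lowers and delays the epidemic peak:

```python
# Toy SIR model (Euler integration): halving the transmission rate
# (a stand-in for social distancing) lowers AND delays the peak of
# simultaneous infections. Parameters are illustrative only.
def sir_peak(beta, gamma=0.1, days=400, dt=0.1):
    """Return (peak infected fraction, day of peak) for given beta."""
    s, i = 0.999, 0.001              # susceptible / infected fractions
    peak_i, peak_t, t = i, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt  # new infections this step
        recov = gamma * i * dt       # recoveries this step
        s -= new_inf
        i += new_inf - recov
        t += dt
        if i > peak_i:
            peak_i, peak_t = i, t
    return peak_i, peak_t

high = sir_peak(beta=0.5)    # "Philadelphia": no distancing
low = sir_peak(beta=0.25)    # "St. Louis": transmission halved
```

The catch the paper describes is visible in the same model: the slowed epidemic leaves more susceptibles behind when measures lapse, which is the raw material for a second wave.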

Why would this be?

COVID-19 and Concerns about a Second Wave

Remember, I’m a psychologist by trade, and a social scientist by training. While I don’t know much more than what I read (albeit obsessively) as a more-or-less layperson when it comes to viral pandemics, I do know that for viral pandemics, a so-called “second wave” of illness is all but guaranteed. What that means is: after this initial peak in infections and deaths is over, we’ll get a pause, and then a likely restart somewhere in the fall or winter. This happened in 1918, and the illness was far more severely pathogenic and deadly in its second wave than in its first.

Why do second waves happen? There are a lot of theories, and many of them depend pretty intimately on an understanding of how viruses mutate and spread, the effects of seasonality (e.g., there’s been a lot of talk about why viruses tend to be less problematic in summer, particularly when it’s warm and humid), and other factors that are outside my scope. But I think we can plan for it.

Here’s my concern. In the United States, because we were in the “fog of war” (absent testing capacity, we didn’t really know how severe and how broad-based the lethality of COVID-19 really was), we imposed mass lockdowns on most of the country that have thrown millions out of work and have likely imposed significantly elevated burdens on the poor, on those who work low-wage retail jobs (those who cannot take advantage of the upper-class privilege of telecommuting), on those who live in domestic abuse situations, and on those with mental health issues. Suicide, alcohol abuse, and opioid deaths will almost certainly rise as a result of these lockdowns. This isn’t necessarily an argument against these “shelter in place” and lockdown orders – if the risk of unmitigated spread is high enough, then these costs are perhaps worth it.

However, this *does* suggest that our current, sweeping, “extreme social distancing” policies, typified by these government lockdowns, are placing extreme stress on society at large – stress that will be very difficult to sustain over the long term. And we know that COVID-19 will likely be with us for a long time.

What I worry is that we will not be able to sustain our current path – that in many or most localities, either because governments are pressured into relaxing social distancing, or because individuals begin to skirt these laws and restrictions en masse, we will see COVID-19 roar back, stronger than ever, and we won’t be able to muster the discipline needed to beat it back a second time. As Bjorn Lomborg says in an excellent Forbes article from April 9th:

 “These policies cannot realistically be sustained for many months, let alone years. Already now, cell phone tracking shows that 40 percent of Italians still move around, despite curfews and lockdowns. In France, ‘virusrebels’ are defying bans and young Germans hold ‘corona parties’ while coughing at older people.

“As weeks of shutdown turn into months, this will get much worse. With many more people at home, this will likely lead to higher levels of domestic violence and substance abuse. As schools stay closed, the skills of the next generation erode. One study shows closing schools for just 13 weeks could initially cost the economy 8.1 percent of GDP. As more become unemployed and the economy plunges, we will all be able to afford much less, also leading to lower-quality health care for everyone. Politically, the outcome could be dire — the previous long-term recessions in the 1920s and 1930s didn’t end well.”

Sustainable, Smart Mitigation – The Swedish Approach

I’ve been growing more and more fascinated with the Swedish government’s approach to the COVID pandemic. Initially, like everyone else, I heard that the Swedish government and the UK were both planning on essentially “going it alone” and going for a targeted mitigation approach (with extreme social distancing enforced only for geriatric populations and those living in nursing homes), and from what I had heard and read initially, I thought this was complete and utter madness.

Now, I’m not so sure. Although the UK government initially planned to join Sweden, now it’s only Sweden that has spurned what the rest of the industrialized world (and most of the rest of the world) has done – instead of a mass “lock it all down” policy, Sweden has chosen to keep schools and businesses largely open. Restaurants can still serve customers, but for sit-down service only. You can gather in groups, but no larger than 50. Nursing homes, like everywhere else, are strictly locked down – and this is because of the recognition that older adults are the highest-risk group in this pandemic and should be treated as such.

For some articles on the Swedish approach:

World Health Organization lauds lockdown-ignoring Sweden as a ‘model’ for countries going forward:

https://nypost.com/2020/04/29/who-lauds-sweden-as-model-for-resisting-coronavirus-lockdown/?utm_source=facebook_sitebuttons&utm_medium=site%20buttons&utm_campaign=site%20buttons&fbclid=IwAR1vgcr59ZS5f-tpDIfBizn4HhIYaA00wjy6jL32GLamiwBfgTAaSUoO_r4

Sweden resisted a lockdown, and its capital Stockholm is expected to reach ‘herd immunity’ in weeks

https://www.cnbc.com/2020/04/22/no-lockdown-in-sweden-but-stockholm-could-see-herd-immunity-in-weeks.html?__source=sharebar|email&par=sharebar

‘Life has to go on’: How Sweden has faced the coronavirus without a lockdown

https://www.boston.com/news/coronavirus/2020/04/29/sweden-coronavirus?fbclid=IwAR20vw1JyldMRRp8BBKzvdo_c80RZwPKnUcNzwG-kfDoBvWkfxJOmlsbahg

Finally, there’s a really nice interview by those libertarian types at the American Institute for Economic Research with the brainchild of the Swedish approach to COVID mitigation – Professor Johan Giesecke, Sweden’s former state epidemiologist and an advisor to the World Health Organization.

https://www.aier.org/article/lockdown-free-sweden-had-it-right-says-world-health-organization-interview-with-prof-johan-giesecke/?fbclid=IwAR2BgkE5EnuoqeieYCmGDQ1zaKClHE6vAaew8K6n70FlqaGjLwQJF0sM9Fk

I like the Swedish approach. First of all, it recognizes that pandemic disease mitigation – particularly given that we have no vaccine or effective treatment on the horizon – is a marathon, not a sprint. Second, it recognizes that the populations at highest risk (in this case, the old and infirm – basically the folks who live at my nursing home!) are the ones that need protection with strictly enforced, extreme social distancing. For the rest of us, given what we know about COVID, it’s really much smarter to continue to go about our lives, feed our families, take care of each other, and contribute to society in the ways we are most capable – because locking down most of the economy has severe costs of its own.

Finally, it recognizes that if COVID-19 shifts and mutates into something far more pathogenic and sinister (which it certainly may do, if the worst-case scenario for a second wave comes to fruition), we will have the resources to actually do it right – instead of exhausting ourselves with disproportionately extreme, one-size-fits-all lockdowns implemented on an interminable basis, with no clear endpoint in sight.

[1] As it turns out, “doing nothing” means exactly that – the Imperial College article makes it clear that “doing nothing” doesn’t mean no lockdowns, no government-mandated social distancing, or even no mitigation – they were literally saying that 2.2 million COVID-19 deaths would result if no one in the US changed their behavior at all. Which, in retrospect, seems like a pretty ridiculous assumption to even be talking about.

Conclusions

It’s hard to talk about conclusions at this point. We’re not even into the second wave yet, and we’re already hearing about temporary hospitals being shut down in the US and states going back to business. We’re not even done with the first lap, and yet we’re acting like the marathon is half over.

We’ll see how this goes….

“Incremental Tech” – Gerontech Doesn’t Have to be Bleeding Edge!

A story that I recall reading (or hearing) somewhere ages ago was designed to capture the difference between geriatric medicine and the rest of the medical world, and it goes something like this:

Depending on the procedure – and despite the fact that Medicare has had flat or declining funding formulas for a number of years – we all know that physicians can get reimbursed fairly handsomely under Medicare for conventional medical interventions like general surgery, interventional radiology, et cetera. Specialists like these are not going hungry.

However, geriatric physicians are not whom we look to for invasive, life-saving interventions like those neurosurgeons dispense. Instead, a specialist in Geriatric Medicine (a recognized, board-certified medical specialty) will, say, spend 30 minutes in an examination room with an 80-year-old, having a discussion about ill-fitting shoes. This is not a sexy discussion to have, and it is of course difficult to fit into Medicare billing guidelines for more than a pittance of a payment.


Sometimes significant therapeutic change can happen with better-fitting shoes!

But that 30-minute discussion about shoes may be what makes a world of difference for that 80-year-old in terms of better mobility, less pain, and better safety in their home. It’s incremental, small-scale interventions like these that can sometimes clinch things for this population.

Sexy vs. Incremental Gerontech

So this is what leads me back to my favorite subject. I know that for years, the kinds of discussions I’ve had on my blog and elsewhere about older adults and technology have been somewhat overfocused on ‘sexy’ technology.


My first love was social robotics like Paro, with its promise to soothe the agitation of dementia, ease depression, and even treat dementia in its users. I’ve been enamored with virtual reality (VR), not for its purported effectiveness as a tool to treat specific phobias, but as a way to give frail, chronically ill older adults the experience of walking again, or of visiting faraway lands – a way to raise their quality of life and treat depression. I’ve preached high-flying approaches to connecting long-term-care-dwelling seniors and to fall prevention.

But let’s face it – there are a couple of things wrong with hitching our fortunes to the notion that some killer technological revolution, some set of “killer apps,” will sweep the country and herald a new technological era for older adults.

First, technology adoption and user acceptance is a tricky business, particularly with older adults as end-users. Older adults are, by definition, a generation that to this day remains somewhat unfamiliar with digital technology (although this familiarity gap is rapidly closing).

Second, there’s an egocentric bias built into technology design that is still a problem – innovative technology is still typically developed predominantly by twentysomething, upper-middle-class Caucasian and Asian males in Silicon Valley – not older adults. As such, it’s difficult (unless designers do a lot of legwork at the outset) for them to make sure their products are designed with older adult end-users in mind. Despite the fact that Facebook is now extremely popular amongst Gen Xers and Baby Boomers (the 40+ crowd), it was initially designed for and by college students, and only “seeped into” older cohorts later, after younger people paved the way and worked out the user-experience ‘kinks.’

The Reality (In the Form of some Product Reviews)

Technology *will* be changing the face of aging, but change may come gradually. So, as is my habit, here are a few exemplars of “boring” and “incremental” products (much like the 30-minute conversation about shoes I described above – hat tip to Mark Ray’s excellent article at Nextavenue.org).

Smart refrigerators, anyone?


Samsung Smart Refrigerator

I go to the hardware store with my wife, fairly frequently (she loves home improvement projects). Often when I’m there we pass through the large appliances aisle and (particularly if our kids are with us), we stop at the smart fridges.

Basically these are refrigerators that are tied to the internet. They are typically equipped with a large tablet screen on the outside of the refrigerator, and have cameras and sensors inside that can give real-time information about the contents inside. Of course, you can do fun stuff with the tablet – you can run apps on it and play videos, display a weather feed, etc.

But here’s where it gets really useful. Imagine an 80- or 90-year-old living at home with, say, impaired mobility and increased fall risk (not at all unusual at that age). Instead of having to get up, plod over to the refrigerator, crane their head inside, and dig around to find what they need, they can simply open an app on their phone and look inside. (Note: you can also take a virtual tour of the refrigerator’s contents via the tablet mounted on the outside.)

Combine this feature with the increasingly-common availability of online grocery ordering and delivery, and you have something that could be powerfully useful for older adults looking to “age in place” in their homes. But again – nothing earth-shatteringly revolutionary here. Just a refrigerator that has been outfitted with some additional cameras and sensors, and networked to the internet.

The biggest downside to these so-called smart refrigerators is cost – these are usually the most expensive refrigerators in a typical hardware store lineup.

Induction Cooktops


Samsung Induction Cooktop

I’m not necessarily pushing Samsung products (no conflict of interest here to declare!) but they offer another great example of a simple idea that really can translate into great utility for an older adult who is living at home.

Induction cooktops use electromagnetic induction to heat specially designed cookware directly, while the cooktop surface itself stays cool to the touch. For older adults at home, the utility is obvious: if they forget to turn off the stove, there’s limited risk once the meal is done. There are additional benefits as well – induction ranges are around 60% more energy efficient than gas stoves, and they have obvious safety benefits for households with children in the home.

Again, similar downsides to smart refrigerators – induction cookware (and the stovetops themselves) are expensive compared to their traditional counterparts. But given the safety aspects, they may be worth the cost!

Simplified Clothing

Again, it doesn’t require complex artificial intelligence algorithms or computer technology to innovate on a simple idea and make it (in some cases) light years better. In a separate article I wrote for Psychologists in Long Term Care a few months ago, I reviewed a couple of aging-related startups. The first was Authored.


This is a company leading the charge to make “adaptive apparel” fashionable and mainstream, rather than relegated to a niche market in occupational therapy circles and long-term-care industry catalogs. “Adaptive clothing,” as a quick reminder, refers to clothing that has been engineered to improve usability for people with functional and/or cognitive issues. For example, older adults with arthritis may have a great deal of difficulty with traditional buttons, but would benefit greatly from snaps or Velcro. Older adults with dementia who become agitated when a shirt is pulled over their heads, or combative when it’s buttoned from the front, may benefit from a shirt that opens from the back.

For Authored, though, it’s more than just adding snaps, buttonhooks, and Velcro to clothes and calling it a day – Authored has taken pains to re-engineer adaptive clothing from the ground up. From their website: “We have reduced or eliminated irritating seams, used premium and durable fabrics and positioned closures to reduce discomfort while improving accessibility.” They’ve also made sure their clothes look fashionable, not ‘fuddy-duddy.’

I *like* what Authored has done because they’re as much about innovation in usability as innovation in aesthetics and style – and I feel like the media attention Authored has gotten thus far (e.g., they’ve been featured on the BBC as well as at last year’s Aging 2.0 conference in SF) has probably helped push companies like Tommy Hilfiger, Target, and Zappos to begin innovating their own lines of “adaptive clothing.”

Simplified Eating

Much like what Authored has done for adaptive clothing, Eatwell has been doing the same for adaptive flatware and dinnerware:


And again – these kinds of adaptations can be critical for older adults who have arthritis, or cognitive impairment affecting their perceptual abilities. Adaptive tableware has been utilized in Occupational Therapy for a very long time – and when employed properly, it’s an intervention that can, in the main, extend the amount of time an older adult or person with disabilities can eat independently.

What makes Eatwell’s products different from their competitors’ is, again, that they reengineered adaptive tableware from the ground up, using evidence-based research down to the correct color contrasts to maximize the percentage of each meal their test subjects ate. Much like adaptive clothing, adaptive tableware will be a growing sector of the retail market in coming years – it will soon be a much more mainstream product!

Moral of the Story

So sure – artificial intelligence, machine learning, wearables, mobile apps, etc. – these will all have a place in the gerontechnology landscape as the population ages. But we need to keep an eye out for the “incremental tech” that will change the world and make the greying of the population a little easier as we head into the brave new world that’s out there!

 

Older Adults and Bitcoin / Cryptocurrencies. Not there yet….


Photo by Worldspectrum on Pexels.com

I’ve written a lot about the growing influence of technology on the world that older adults occupy, from the world of cute and fuzzy robots, to smart homes and smart nursing homes, and consumer applications of artificial intelligence (think Amazon Alexa and Apple’s Siri technology).

I’ve written about how the fastest-growing adopters of information technology (internet use) are older adults, and in some areas, older adults (particularly the baby boomers) match or eclipse the degree of adoption and use of certain classes of information technology at this point, specifically health-related technology.

One thing I haven’t written about is the so-called “emerging asset class” of cryptocurrencies – the most well-known of which at this point is Bitcoin.

I ran across an article the other day on the crypto-centric news site NewsBTC, titled “Nearly All American Seniors Don’t Know or Want Bitcoin: But It’s Not a Concern.” It references a survey, apparently conducted by the “Gold IRA Guide” (located here), that lines up some pretty compelling statistics about how older adults (defined as people 50 years and older) seem to have a pretty dim view of this “emerging asset class.”

I have an uncle, active on Facebook (and Twitter, I believe) for a number of years, who has occasionally posted about cryptocurrencies; he suspects that Bitcoin is either a “Ponzi scheme” or a “solution in search of a problem.” Either way, he’s not interested. Then there’s Warren Buffett – one of the most well-known and successful older adults out there – who has famously referred to Bitcoin as “rat poison squared.”

Neither my uncle nor Warren Buffett seems particularly out of step with the results of this survey (from https://goldiraguide.org/new-survey-reveals-american-retirees-currently-have-a-negative-outlook-on-bitcoin/):


(Photo by David Crowder from goldiraguide.org)

The big takeaways from this very interesting survey: over half (56%) of all older adults surveyed have heard of Bitcoin but are “not interested” in it. About a third (32.9%) don’t even know what Bitcoin or cryptocurrencies are. A much, much smaller percentage (3%) own some, and similarly negligible percentages are interested but either don’t know how to invest, or are just “keeping an eye on it” for now.

So, there are a few things to keep in mind here. First, older adults are by nature a risk-averse group when it comes to managing their finances. They’re more likely than not to be on fixed incomes, more likely to be retired, and naturally geared towards wealth preservation and avoiding excessive risk in their retirement portfolios. Also keep in mind: this survey was conducted deep in the Bitcoin bear market of 2018, after Bitcoin and the larger cryptocurrency market had lost around 75% of their value, and all but the most optimistic cryptocurrency supporters had thrown in the towel.

What is Bitcoin? Briefly, Bitcoin is the first example of a purely peer-to-peer digital currency. There will only ever be 21 million bitcoins in existence (capping its supply), and it requires no central issuing authority (like the Federal Reserve). Bitcoin itself is only 10 years old, born of the 2008 “Great Recession” and the Federal Reserve money-printing binges (QE1–QE3) that followed.
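That 21-million figure isn’t arbitrary, by the way – it falls out of Bitcoin’s well-known issuance schedule, which you can check in a few lines (my own sketch):

```python
# Bitcoin's supply cap follows from its issuance schedule: the block
# subsidy starts at 50 BTC and halves every 210,000 blocks, tracked
# in integer satoshis (1 BTC = 100,000,000 satoshis), until it
# rounds down to zero.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

subsidy = 50 * SATOSHIS_PER_BTC      # initial block reward, in satoshis
total_satoshis = 0
while subsidy > 0:
    total_satoshis += BLOCKS_PER_HALVING * subsidy
    subsidy //= 2                    # reward halves (integer division)

total_btc = total_satoshis / SATOSHIS_PER_BTC
print(total_btc)                     # just under 21 million BTC
```

Because of the integer rounding, the true total lands slightly under 21 million – “21 million” is the convenient shorthand.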

It’s also the first example of true digital scarcity that I’m aware of. You see, one of the features of computers and technology, as well as the internet, is that it’s trivially easy to copy things, like music (MP3s) or books, or websites, etc.

Bitcoin is the first completely digital entity that is uncopiable and uncounterfeitable. That’s what makes it unique and valuable. And because it can be sent peer-to-peer, it avoids third parties (like Visa or PayPal, who charge transaction fees) and also resists censorship and confiscation – say, if the government decided to seize your funds, or a bank decided to close your account.

All of that aside, cryptocurrencies are a very risky asset class. Despite the aforementioned bear market, since its inception Bitcoin has gone from being worth fractions of a cent (around $0.008–$0.08) to around $5,300 today. While that’s an impressive run, it’s undeniable that Bitcoin has experienced sickening volatility compared to other assets, which has made it an investment only for those with the most iron of constitutions.

I have a confession to make. I own some cryptocurrencies – mostly Bitcoin, but also several “altcoins.” I bought my first bitcoins in 2011, and then sold some and spent the rest a year or two later, which I kick myself about to this day. Since then, I’ve bought back in (and turned a decent profit at the end of the 2017 bull market), and right now my crypto portfolio stands at a little less than 1% of my net worth, with most of the rest tied up in real estate and a 403(b) and IRA stock portfolio.

Interestingly, I know a lot of millennials (those in their 30s or younger at this point) who have an almost mirror-image view of cryptocurrencies compared to the older adults in this survey – a number of them have double-digit-percentage exposure (or more) in their investment portfolios, and many report that they trust cryptocurrencies more than the stock market. While I don’t advise this approach, given the cohort millennials hail from and their generational experiences (they essentially came of age during the Great Recession), it’s understandable that they view “traditional” investment vehicles the way they do, and that they embrace high-tech innovations like cryptos.

Here’s what I think: much like with any new technological advance or innovation, it takes time for older adults to “catch up.” We’ve seen this with information technology as well – in the case of internet use, the “youngest-old” (baby boomers, mostly) have basically already caught up in terms of comfort and usage rates. Cryptocurrency is a bit different, though – it’s bleeding-edge fintech, both financial innovation and technological innovation of a kind we’ve never seen before.

So, not only will Bitcoin and cryptocurrency have to scale the obstacles that technological innovations normally face in order to be adopted by older cohorts (e.g., generation effects), it will also have to be accepted as a much surer thing – as opposed to the way it seems to most older adults today: a speculative vehicle not much better than throwing it all on black in Vegas. In other words, it will have to truly demonstrate that it’s a store of value, in the same vein as gold and silver.

Likewise, as my uncle has told me, it will have to probably at some point show that it’s a solution that actually addresses an identifiable problem.

However, given the sheer scale of the Federal Reserve’s money printing (QE1–QE3), and the incomprehensible degree of US sovereign debt ($21 trillion and counting), I’m guessing we won’t have that long to wait….

The Renaming of an Illness: “Dementia” vs. “Major Neurocognitive Disorder” – Five Years Out

About five years ago, I noted a big event: after years of debate, the “big APA” folks (e.g., the American Psychiatric Association) had proposed that “dementia” henceforth be renamed “major neurocognitive disorder.” In that article, I noted that there were some good reasons for proposing this change. For one thing, “dementia,” like “senility,” carries some negative etymological baggage:

“the origin of the word ‘dementia’…. (is) a bit harsh as well, from the perspective of those who carry the diagnosis… ‘dementia’ originates from the Latin term demens, literally ‘mad, raving.’”

So, five years out – has this change in terminology caused any big changes in the field? It’s worthwhile looking at exactly why the “big APA” decided to make this change in the first place.

From the DSM-5 online:

“Dementia is subsumed under the newly named entity major neurocognitive disorder, although the term dementia is not precluded from use in the etiological subtypes in which that term is standard… The term dementia is retained in DSM-5 for continuity and may be used in settings where physicians and patients are accustomed to this term. Although dementia is the customary term for disorders like the degenerative dementias that usually affect older adults, the term neurocognitive disorder is widely used and often preferred for conditions affecting younger individuals, such as impairment secondary to traumatic brain injury or HIV infection.”

So, what it appears the “big APA” folks are saying is that they want us to stop using “dementia” when we’re talking about some disease entities where the term may be in less “standard” use – but if practitioners wish to keep using the term dementia as they always have, well, they can.

Where may the term be in less “standard use”? Well, given the association of the word “dementia” with older adults, nursing homes, and the aged, it seems fair to say that “dementia” means “old people” to most. In practice, it’s worth noting that in my tiny corner of the geropsychology world, “dementia” continues to be the standard term. This may be for a number of reasons: for one thing, “major neurocognitive disorder” is, well, rather wordy. Also, family members and consumers are not aware of the new term. They’ve typically heard of “Alzheimer’s,” and may be aware of the term “dementia” (and often ask, “Is dementia different from Alzheimer’s?”).

Moreover, it’s not particularly clear that the term “major neurocognitive disorder” has seeped into the public consciousness yet. Out of curiosity, I did a quick Google Trends dive, comparing the terms “major neurocognitive disorder” and “dementia.” Suffice it to say: while searches for “major neurocognitive disorder” are on the rise, it seems pretty clear the new term has gained comparatively zero traction.

Searches for the term “dementia” appear to have more than forty times the average volume, and continue to rise year after year. The same holds for searches within Google Scholar – a pretty good proxy for search behavior amongst academics and clinicians – where a search for articles published between 2015 and 2019 for the term “neurocognitive disorder” yielded just under eight thousand hits. “Dementia,” in contrast, yielded over a quarter million. Clearly, dementia wins.

This probably reflects a few things. One, the term “dementia” is entrenched as the “standard” term. Two, the clinical wordiness of the new moniker is probably too unwieldy for consumers, and likely for clinicians and academics as well.

Would the world of dementia care be different if somehow we all got on board and dropped the term “dementia” outright? Would this translate into more humane and person-centered care for persons with dementia / major neurocognitive disorder? At this point, we don’t know – because five years out from the APA’s grand semantic shift, “dementia” continues to rule the diagnostic roost.

Artificial Intelligence, Machine Learning, and Long Term Care

I’ve been wanting to write this article for a long time. I’ve been working in geropsychology and the long-term care (LTC) industry for the last 14+ years, mostly in skilled nursing (what most people know as nursing homes), as a consultant and staff member. Skilled nursing facilities are where people, typically older adults (OA), live and receive medically necessary services from nurses and other allied health professionals, including assistance with activities of daily living.

Over this time, I’ve learned a few things. Skilled nursing care, at least as practiced in the United States (though this is largely true across the industrialized world), is highly regulated and very labor intensive – and for these two reasons, very expensive. Caring for medically complex, frail, chronically ill, primarily older adults involves a lot of monitoring and supervision, along with the physically demanding tasks of caring for their bodily needs.

Back to the issue of cost: according to recent statistics, the average cost of a stay in long-term care in the United States is over eight thousand dollars per month ($8,121, to be exact). While that number is obviously high, it’s sobering to recognize that it is 13% higher than it was just 5 years ago (citation here).
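To put that 13% in perspective, a little back-of-the-envelope arithmetic (my own, using only the figures above) shows the implied cost five years ago and the steady annual compounding it represents:

```python
# Back-of-the-envelope: a 13% total rise over five years implies an
# average annual growth rate of (1.13)**(1/5) - 1, about 2.5%/year.
current_monthly = 8121.0                  # average monthly cost today
five_years_ago = current_monthly / 1.13   # implied cost five years ago
annual_rate = 1.13 ** (1 / 5) - 1
print(round(five_years_ago), f"{annual_rate:.1%}")
```

In other words, costs have climbed by roughly $900 a month in five years, and at that compounding rate there is no reason to expect the curve to bend on its own.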

Setting the stage – Recapping the “Demographic Tsunami”

And these trends aren’t going to slow down anytime soon. As I’ve written here, and here, and in a number of other places, the US and the rest of the industrialized world are currently poised to be swept away by a “demographic tsunami.” From the year 2000 to 2050, the proportion of the world’s population aged 60 years or older will approximately double, from about 12% to 22%. The number of “oldest old” (those over age 80) will quadruple.

Moreover, there’s strong data suggesting that there is, and will continue to be, a growing and severe shortage of healthcare professionals available to meet the needs of OA (both in long-term care and otherwise). In my own particular field, geropsychology, the shortages are already severe and are projected to keep growing. The story is not much different for geriatric medicine or geriatric nursing – even on the front lines of LTC, the nursing assistants who do the really demanding, physically laborious work of caring for elders are experiencing growing shortages.

There are a number of reasons why. One is reimbursement. Geriatric medicine has one of the lowest reimbursement rates of any medical specialty, so it’s a rarely sought-out specialty among the hordes of newly-minted physicians who want nothing more than to pay off their increasingly-herculean student loans and start providing a good living for their families (and who can fault them for that?). Geriatric nursing isn’t much different, and in the case of nursing assistants, wages of around $10 per hour for very demanding work provide a poor retention incentive.

So why is this such a problem?

So far, it’s fair to say that the rapidly growing cost of nursing home, elder homecare, and facility care in the United States and other industrialized countries shows no sign of stopping. While our current generation of adults 65 and older is (arguably) the healthiest it’s been in a long time, the fact is – when people get old, they are at far greater risk of developing a whole host of problems that often require significant, and at times round-the-clock, care. For example:

  • Dementia – I never fail to mention this one. The #1 risk factor for developing dementia (such as Alzheimer’s disease) is advanced age – and dementia is endemic in LTC facilities (at the VA nursing home where I work in my day job, around 70% of the residents have dementia, which is roughly in line with US averages).
  • Falls and ambulation problems – chronically ill OA are more likely to lose the ability to walk as they age. Musculoskeletal issues (like degenerative joint disease), deconditioning and muscle wasting, dementia, and other problems can render OA wheelchair-bound or worse. Sometimes they are unable to transfer from wheelchair to bed or toilet without assistance, or require lifts to be moved; sometimes they are even unable to turn themselves in bed. Closely related to ambulation is the issue of falls – older people fall more frequently and are much more prone to injuring themselves when they do, partly due to cognitive impairment and poor judgment stemming from the aforementioned dementia.
  • Incontinence – another major issue is loss of control of one’s bowels or bladder. Again, conditions like dementia, spinal cord injury, or other neurological problems put chronically ill OA at risk. Incontinence, when combined with other issues, requires ongoing care, as incontinence briefs need to be changed regularly to prevent skin breakdown and infection.
  • Difficult-to-heal or nonhealing wounds – the above problems with ambulation, incontinence, and frequent falls put chronically ill OA at risk for developing wounds that are very difficult to heal. This is often because they are less likely to shift position when in bed or in a chair (leading to pressure sores), or because they bang or scrape themselves during a fall. Due to thin skin and reduced ability to heal (often because of poor circulation, diabetes, etc.), their wounds take a very long time to close and, without constant care, can become infected.

And this isn’t even scratching the surface. As you can see, chronically ill OA require significant amounts of monitoring and care by professionals just to get through the day, and without it, they can rapidly become acutely ill and require much more expensive care (such as in an emergency room).

So what’s the solution?

There’s been much discussion about solutions over the years, from making OA healthier, to strengthening home and family caregiving options, to training more doctors, nurses, and psychologists to help OA living in nursing homes, as well as improving existing models of care.

It’s also worth mentioning that with all the hullabaloo of the so-called “Affordable Care Act,” AKA Obamacare, there has been virtually no attention paid to reforming the broken state of LTC funding in the United States (I’ll quickly get off that soapbox!).

So what if there was some other solution? We all know, as was said here by venture capital investor Shourjya Sanyai, that the “rapid(ly) aging demographic will directly affect social, economic and health outcomes for these growing economies. Particularly healthcare delivery pathways need to be readjusted, keeping in mind the prevalence of chronic diseases, comorbidities and polypharmacy requirements of the elderly and geriatric patients.”

Sanyai goes on: “Given the situation, healthcare providers are starting to offload certain parts of the care-pathways to artificial intelligence (AI) based automatization. AI can now be found in every step of the care-pathway, starting from intelligent tracking of biometric information to early diagnosis of diseases.”

So what is AI?

Artificial Intelligence and Machine Learning – Definitions, and the Example of Alexa

The definition of artificial intelligence, as found via Google: “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Note that artificial intelligence (or AI, as it’s frequently called) is frequently combined with machine learning algorithms – yielding systems that can perform specialized cognitive tasks and also learn from experience as they do their work.

Moreover, one of the nice things about today is that AI and machine learning are concepts that are rapidly seeping into the consciousness of consumers worldwide:


Figure 1. Amazon Echo (http://www.bestaiassistant.com/google-home/amazon-echo-vs-google).

Amazon has probably done more than anyone to make the concept of AI (and its capabilities) obvious to the masses with the introduction of the Amazon Echo device, the most well-known vehicle for its so-called Alexa AI technology. What does the Echo do? One of the things we like most about the Echo in our family (we have several) is that it takes rote tasks and automates them.

For example, we have most of the lightbulbs in our house now controlled by Alexa – instead of having to get up and flip a switch, we say “Echo, turn off bedroom lights,” etc. We’ve also hooked it to our music streaming service, so if we want to hear a song, we say “Echo, play (insert favorite song),” and off it goes – this is as opposed to fiddling with our phones, or a CD player, or whatnot.

There’s more – Alexa will now be doing double-duty as a burglar alarm! One of the newer “skills” that Alexa has been enabled with is something called “Guard Mode” – whereby if a user leaves their home, they can say “Alexa, I’m leaving,” and Alexa will listen for the sound of glass breaking and alert the user. So – Alexa is also a smart monitoring system – while you can still purchase an analog burglar alarm system monitored by “24/7 security personnel” (which is expensive and requires people on duty to constantly monitor your home), you don’t have to – because AI (in the form of Alexa), will do it for you.

AI and Machine Learning in Healthcare

So let’s come full circle back to LTC. Why would the LTC industry be interested in AI?

First, remember all the examples I listed above, regarding the kinds of problems regularly addressed by nursing staff in LTC facilities (e.g., dementia, falls, wounds, incontinence, ambulation and movement issues). What they all have in common is that in order to address them in the LTC environment, they require a significant amount of personnel to perform rote tasks relating to monitoring and rounding. I would suggest that a very significant number of these tasks currently performed by nursing staff could be offloaded to so-called “smart systems.”

Smart Device Assisted Living and Monitoring. What happens when an older adult with dementia tries to leave their nursing home (trying to “go home”)? The current standard approach to “wandering” is to attach “wander guards” to at-risk residents; when a resident crosses a perimeter, the device alerts staff so they can redirect the resident back into the facility. The downside is that these alarms often don’t help locate residents (they just signal that the perimeter has been breached), nor do they distinguish between residents who are legitimately trying to leave and those who accidentally trip the alarm while, say, rolling outside for a breath of fresh air.

Or how about when an older adult is bedbound and at high risk for nonhealing wounds, but due to neurological impairment fails to turn themselves? The current standard of care is for nursing staff to regularly “turn” residents (say, on an hourly or per-shift basis). However, this requires nursing staff – who are often busy, overworked, tired (and human) – to remember to remember; moreover, some of these residents may still be turning on their own and not need the extra intervention.

AI is tailor-made for the situation I sketched out above, in the case of “turning” residents. You can apply a wrist-mounted (or bed-mounted) sensor to a resident and monitor their movements. If a resident has not turned after a certain amount of time, the AI system can proactively alert staff, essentially telling them, “hey, Mr. Jones hasn’t turned in a while – can you go help him?” This spares staff from rounding unnecessarily on residents who are not at risk for wounds and are turning in bed on their own, and also relieves nursing staff from having to “remember to remember.” One of the companies mentioned in this article is already developing an AI system to address this very issue.
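
To make the idea concrete, here’s a minimal sketch of that alerting logic in Python. The sensor interface, the two-hour window, and the resident names are all hypothetical – this is not any vendor’s actual system, just an illustration of the concept:

```python
from datetime import datetime, timedelta

# Hypothetical turn-monitoring logic: alert staff only when a resident
# has not repositioned within the allowed window.
TURN_WINDOW = timedelta(hours=2)   # e.g., facility policy: reposition every 2 hours

def residents_needing_turn(last_turn_times, now):
    """Given each resident's last detected turn (from a wrist- or
    bed-mounted sensor), return those overdue for repositioning."""
    return [resident
            for resident, last_turn in last_turn_times.items()
            if now - last_turn > TURN_WINDOW]

# Example: Mr. Jones last turned 3 hours ago, Ms. Smith 30 minutes ago.
now = datetime(2022, 6, 1, 12, 0)
last_turns = {"Jones": now - timedelta(hours=3),
              "Smith": now - timedelta(minutes=30)}
print(residents_needing_turn(last_turns, now))  # -> ['Jones']
```

Staff get paged only about Mr. Jones; Ms. Smith, who is turning on her own, generates no alert – which is exactly the false-positive reduction the text describes.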

Wrist-worn actigraphy (such as what you see with Fitbits and Apple Watches), combined with AI, is also potentially ideal for replacing the old “wanderguard” system. I am familiar with a company called Carepredict that is essentially doing just this – they have facility residents wear proprietary wristbands that detect a resident’s movements within the unit (as well as their level of activity). The system is designed to provide “early warnings” to staff when a resident’s behavior deviates from their established norms – and can precisely locate a resident who is trying to leave (something AICare is also working on).
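
The “deviation from established norms” idea can be sketched as a simple statistical check. To be clear, this is a generic illustration, not Carepredict’s actual algorithm; the step counts and cutoff are made up:

```python
import statistics

# Generic "early warning" sketch: compare today's activity against a
# resident's own established baseline, flagging large downward deviations.
def early_warning(baseline_days, today, z_cutoff=2.0):
    mean = statistics.mean(baseline_days)
    sd = statistics.stdev(baseline_days)
    z = (today - mean) / sd
    return z < -z_cutoff   # True -> alert staff to check on the resident

# A resident who normally logs ~5,000 steps/day suddenly logs 1,200.
baseline = [5100, 4900, 5300, 4800, 5000, 5200, 4700]
print(early_warning(baseline, 1200))   # -> True (sharp drop: possible concern)
print(early_warning(baseline, 4950))   # -> False (within normal range)
```

The key design point is that each resident is compared against their *own* history rather than a population average – a resident who has always kept to their room looks very different from one who suddenly starts to.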

Not only that, Carepredict claims they can provide “early warning” for staff to let them know if a resident is becoming depressed (say, if they begin to isolate in their rooms when their previous pattern was to be out and about regularly), or if they have stopped eating.

Fall Detection and Prevention. How are falls currently addressed by nursing staff?


Figure 2. Bed / chair alarm pressure pad (courtesy of Alimed, Inc).

Currently, it’s the ol’ analog pressure-pad system. In other words, residents identified as high fall risks are issued pressure pads placed on their beds or chairs, and if a resident gets up, the alarm sets off a loud racket and nursing staff come running. The downsides of the current system are manifold. First, it produces a significant number of false positives – residents who merely move in their chair or bed (something we *want* them to do, actually) set the alarms off. Second, the noise is annoying to residents, and for those with dementia it can be agitating – thereby inadvertently raising their fall risk. Third, it leads to “alarm fatigue” in staff (due to the frequent false positives) – staff sometimes don’t respond to the alarms because they know they are often wrong. Despite all of this, residents continue to fall at high rates; staff often find a resident on the floor and are left to question these frequently-memory-impaired residents and otherwise piece together what happened, instituting fall prevention measures after the fact.

Enter Safely-You, a company I’ve been very excited about (although note they’re not the only participants in this space). Instead of pressure pads, they place a camera in a resident’s room, typically at the bedside (since this is where most falls occur), which continuously monitors the resident, capturing video.

However, there is never more than 10 minutes of video in the system’s buffer at any one time, and video is only permanently saved if a fall is detected. The AI and machine learning built into the system detect falls with apparently 94% accuracy and immediately alert staff when a fall occurs. Staff are then able to review the video and institute fall prevention recommendations based on exactly what they see the resident do (as opposed to what they imagine happened).
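
The rolling-buffer idea can be sketched in a few lines of Python. This is only an illustration of the concept described above; the frame rate and the detection hook are assumptions on my part, not Safely-You’s actual implementation:

```python
from collections import deque

# Sketch of a rolling video buffer: only the last 10 minutes of frames are
# ever held in memory; footage is persisted only when a fall is detected.
FPS = 30                             # assumed camera frame rate
BUFFER_SECONDS = 10 * 60             # never keep more than 10 minutes

buffer = deque(maxlen=FPS * BUFFER_SECONDS)   # old frames drop off automatically

def on_frame(frame, fall_detected):
    """Called once per camera frame; returns a clip only on a fall."""
    buffer.append(frame)
    if fall_detected:
        clip = list(buffer)          # snapshot the last <=10 minutes
        buffer.clear()
        return clip                  # hand off for permanent storage / staff review
    return None                      # otherwise, nothing is ever saved
```

The `deque` with a `maxlen` does the privacy-preserving work for free: appending beyond the limit silently discards the oldest frame, so unsaved footage simply ages out of memory.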

Virtual Companions. This is a subject near and dear to my heart (see here, here, and here). Let’s go back to the example of Amazon’s Alexa (and its various competitors, like the Google Home or the Apple HomePod) – these digital assistants are useful, but they aren’t exactly companionable – more just disembodied and mildly robotic voices that do what you tell them (although Alexa can tell jokes or sing songs if you ask).

One of the other “rote” tasks in nursing (hate to put it that way) revolves around the intangible yet extremely important work of providing companionship to residents. The hug, the touch on the shoulder, listening intently to an older adult as they tell a story – these are all vital to the health of OA, but due to the abovementioned staffing issues and sky-high demand for long-term care services, nursing staff are much less able to provide this to their clients.

So, how about this?


Figure 3. Paro robot doing its thing. Courtesy of the Toronto Star.

Above is the Paro robot – a robotic companion that uses its built-in machine learning algorithms to learn the name users give it, to respond preferentially to being stroked, and to avoid being hit or dropped. Moreover, it’s adorable – and research tends to suggest that it delivers beneficial effects to users (including calming dementia patients) by stimulating oxytocin production. Oxytocin, of course, is the feel-good hormone released when parents bond with their children or when new mothers first nurse their babies.

Other Applications for AI and Machine Learning in Facility and Home Care?

This kind of technology has applications that are literally limited only by imagination and a few smart programmers. A worthy mention is the company Winterlight Labs, which has a proprietary assessment tool that claims to detect the presence of dementia from patients’ speech samples to a degree heretofore impossible with standard, human-administered cognitive tools. This kind of innovation has the potential to put geriatric neuropsychologists out of business!! (Well – maybe not quite yet).

An honorable mention also goes to CareAngel – they have a system whereby a digital assistant (like Alexa) calls OA and simply asks them how they are doing (they call these “care touches”), then has an actual conversation with them. For example, if the older adult says “terrible, I’m in a lot of pain,” the system asks additional questions (like what level their pain is, where it’s located, etc.) and then, depending on their answers, summons a live care provider.
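
A minimal sketch of what such a triage flow might look like – purely illustrative; the questions, pain thresholds, and actions here are hypothetical, not CareAngel’s actual logic:

```python
# Hypothetical triage flow for an automated "care touch" call: simple
# branching on the older adult's answers, escalating to a live care
# provider when responses suggest a problem.
def triage(feeling_ok, pain_level=0):
    """Return the next action for the system to take (pain_level: 0-10)."""
    if feeling_ok:
        return "log: doing well, schedule next care touch"
    if pain_level >= 7:
        return "escalate: summon live care provider now"
    if pain_level >= 4:
        return "follow up: ask about location/duration, notify nurse line"
    return "log: minor complaint, re-check tomorrow"

print(triage(feeling_ok=True))
print(triage(feeling_ok=False, pain_level=8))   # -> escalate to a live provider
```

In a real deployment the branching would of course be driven by speech recognition and far richer clinical rules, but the core pattern – cheap automated check-ins with human escalation only when needed – is exactly what makes the economics attractive.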

Bottom Line

The bottom line is that AI and machine learning are poised to revolutionize the care of OA both within and outside the LTC industry. This revolution will lower costs, mostly in the form of fewer staff required for routine, rote monitoring and rounding of residents, but also in the form of fewer costly trips to the emergency room or ICU due to real or even misclassified falls, infections, and injuries. It may even reduce the need for humans to provide companionship to residents, as we might be able to offload some of that work to social robots and digital companions (as creepy, and potentially ethically questionable, as that may be to some).

Nursing homes and OA care are going to see skyrocketing demand over the coming years. In order for our nation to not get completely swamped by the sheer weight of the cost and labor of caring for our most needy and vulnerable citizens, we’ll need to find ways to innovate our way out of this. The AI and machine learning revolution may in fact help us to do just that!