Philosopher at the helm of neural engineering ethics

Mary Guiden

Sara Goering majored in psychology as an undergraduate at the University of Illinois, where she took classes in neuropsychology. She also worked in a learning and memory lab. “It was interesting stuff,” she said. “I liked studying brains, and looking for differences in those brains.”

This early work with brains in a lab has come full circle for Goering, who has a PhD in philosophy. It now helps inform her current gig with the Center for Sensorimotor Neural Engineering (CSNE), an NSF-backed Engineering Research Center (ERC) based at the University of Washington in Seattle. The philosophy professor was tapped to work with the Center based on her work studying disabilities.

She works regularly with CSNE, and her team has sought out best practices by consulting other organizations, including the Quality of Life Technology Center at Carnegie Mellon University.

This year, Goering and her colleagues will talk with people who have had a spinal cord injury to explore potential concerns about technologies such as brain-computer interface-controlled “smart” prosthetics, human exoskeletons, and spinal microstimulation. They’ll then share the findings with CSNE scientists and engineers, and the public.

Below, Goering talks about her work in the emerging field of neuroethics. An abbreviated version of this article also ran on livescience.com.

Name: Sara Goering
Institution: University of Washington
Field of Study: Philosophy

What is your field of research and why did you choose it?

I work in ethics, bioethics and neuroethics. I approach ethics from a philosophical perspective because my PhD is in philosophy, but there are people who do bioethics from legal, religious or other perspectives. I ended up in philosophy in part because I like really big questions about how we understand ourselves and our place in the world.

What was the best professional advice you ever received?

Early on, it was do what you really love to do. I talked with advisors about whether I should really go into philosophy, or not. By and large they said, “If it’s what you love doing, try it. But be aware as you’re going for what you really love that it might not work out as a career, so have a back-up plan.”  As the graduate program director for our department, I share that with students, too.

But beyond that, my teammates and I regularly look at each other and say, “Are we still enjoying it?” 

Please describe your current research.

We’re interested in the ethical issues that are likely to come up with the neural technologies being developed in this engineering center. We are comparing what’s already out there on other interventions for human bodies or brains to neural engineering research.

One question we’ve looked at is: How are pharmaceuticals different from neural engineering?  We as a society seem very comfortable using pharmaceuticals to treat different conditions. But are these drugs significantly different in any way from the sorts of interventions we’re recommending here? It’s an interesting question. It’s not always obvious.

With most pharmaceuticals, there’s a constant reminder: I take a pill. But if I have something implanted, especially if it works well and ends up operating seamlessly, there’s going to be less awareness.

We might also compare neural technologies to a cardiac pacemaker or other engineered devices that are in our bodies that are running our hearts or making sure that the rhythm is normal. Those devices seem less close to our sense of identity than interfering with or assisting with the functioning of our brains. 

Identity is another interesting topic to explore. With cochlear implants, there was a huge debate over whether these implants would destroy deaf identity. So we’re thinking about how some of the neural technologies might alter somebody’s identity. It’s not that altering it is bad, but we want to be aware of how those things might change, and what the tradeoffs might be for people who might use these technologies.

What are you most proud of?

One of the things that makes me most happy about the work I get to do is that it really is interdisciplinary, and people from different schools of thought are collaborating on research. I’m not sitting in philosophy thinking about theories and writing only for other philosophers. I’m trying to do something that will really make a difference.

What was your biggest laboratory disaster, and how did you deal with it?

Disaster seems like a big, bad word. One of the things that I had to grapple with was how scientists and engineers look at ethics. I worry that they see ethics as oversight, as a finger-shaking, “you can’t do that” regulatory hurdle or obstacle to the work that they’re doing. I want it to be a collaborative practice of trying to think through the big questions that matter about the research. I’m not on the regulatory side of things.

On the other hand, I’m also not primarily about public acceptance of the technology. I went to an ERC national meeting and spoke on a panel titled “Social Acceptance of Technology.” And I said, there, “I don’t think I fit.” If we take ethics into account appropriately, my aim is not to increase the likelihood that any technologies that come out will be more socially acceptable. It’s a happy byproduct.

I want to be able to criticize and critique the direction of the research, rather than thinking that what we need to get is people to figure out that it’s a good thing for them. Maybe it’s not, and then maybe we want to redirect what we’re doing.  So this is not a disaster, but it’s a tension that is involved in the kind of work that I’m doing.

What would surprise people most about your work?

It might depend on who we’re surprising.

Non-disabled people often see disability as a bad thing: It’s an individual problem, a pathology or deficit of the person. A lot of the disability studies work that I’ve done focuses on a more socio-political understanding of disability. That’s not to say you ignore differences in the body, but you instead emphasize the ways in which environments can accommodate (or fail to accommodate) different ways of getting through the world.

It’s surprising to most non-disabled people because they never thought of disability that way. In this work in the center, one of our priorities has been to include what we call the “end-user” perspective early on in the process. An end-user is someone who will be using these new technologies.

Sometimes engineers are building things that they think will be helpful for people with disabilities, and then those things don’t actually match up with what people with disabilities want or need. Each party could learn from the other, but it’s important that there’s that interaction, rather than smart, wonderful people thinking they’re going to help another group of people without actually checking in with them about what they might want or what they might be worried about.

What advice would you give to an aspiring engineer, scientist or philosopher?

It’s important to reach out beyond your main discipline, whatever it happens to be. Doing purely theoretical work isn’t going to be productive. You need to know something about other fields. That might mean putting yourself in contact with a lab or getting in touch with a hospital—whatever your specific area of interest is—so that you have that real experience, so you make sure anything you’re theorizing about touches down somewhere, that it can make a difference.

What is the biggest unanswered question in your field?

Neural engineering is a really interesting idea, that we can create these new technologies. But there are lots of unanswered questions about what it means for identity, and about moral, legal and privacy issues. That’s one of the things that we talk about.

We tend to think of our identity as “up here” (Goering motions to her head).  I am a philosopher, so my brain is going to be part of it, the thinking part of it. But also, I’m embodied, I have this body. We tend to think of our identity as extending to our skin and stopping there.

In one of our testbeds, or research areas, we may have a brain-computer interface that controls a robotic device. If I’m actually controlling it with my thinking, is there a way in which my body schema expands, just like when we think of a blind person with a cane?

So with a cane, if I set it down, it’s no longer in my control. But if the robotic device could extend away from me, is there a way in which my identity is now co-located? It’s fascinating what that could do to our notions of identity. It’s just unexplored, uncharted territory at this point. Philosophers pondered that kind of possibility a decade ago, and now it’s becoming something we need to think about much more carefully.

Why should my [mom, kid, sister, grandpa] be excited about your research?

These are technologies that are likely coming. We want to be really clear about what direction they go in, what concerns they bring and how we might address those concerns.  It will be too late to address them if the technologies are out and on the market by the time we start thinking about it. It’s important to talk about it now. 

Our brains really are so intimate to who we are, and that’s exactly the part of us that we’re now talking about engineering or tinkering with. The precision level eventually might be so fine-grained that it could alter our sense of how we understand who we are, and what our brains are.