Building a bionic eye

From Our Neurons to Yours Wu Tsai Neuro Podcast

While it sounds like science fiction, the possibility of engineering an artificial retina, a bionic eye, is closer than you might think.

Listen to the full episode below, or SUBSCRIBE on Apple Podcasts, Spotify, Google Podcasts, Amazon Music or Stitcher. (More options)


We take this for granted, but our eyes are amazing.

They're incredible. We process the visual world so automatically and so instantaneously, we forget how much work our eyes and our brains are doing behind the scenes, taking in light through the eyeball, transforming light into electrical signals in the retina, packaging up all that information, and sending it on to the brain, and then making sense of what it is we're seeing and responding to it.

In fact, new science is showing that the eye itself, meaning the retina, is actually doing quite a bit of the fancy image processing that scientists used to think was happening deeper in the brain. 

Of course, our eyes are not perfect. Millions of people suffer vision loss or even blindness. Often, this is because the tiny cells in the retina that process light die off for one reason or another, but here's something that may surprise you. While it sounds like science fiction, the possibility of engineering an artificial retina, a bionic eye, is closer than you might think, and that brings us to today's guest.

EJ Chichilnisky is the John R. Adler Professor of Neurosurgery and a professor of ophthalmology here at Stanford, where he leads the Stanford Artificial Retina Project. His team is engineering an electronic implant to restore vision to people blinded by incurable retinal disease. In other words, they are prototyping a bionic eye.

Links

Further Reading

Episode Credits

This episode was produced by Michael Osborne, with production assistance by Morgan Honaker and Christian Haigis, and hosted by Nicholas Weiler. Cover art by Aimee Garza.

Episode Transcript

Nicholas Weiler (00:11):

This is from Our Neurons to Yours, a podcast from the Wu Tsai Neurosciences Institute at Stanford University. On this show, we crisscross scientific disciplines to bring you to the frontiers of brain science. I'm your host, Nicholas Weiler.

(00:31)
Here's the sound we created to introduce today's episode. Perhaps that is the sound of a bionic eye. We take this for granted but our eyes, our eyes are amazing. They're incredible. We process the visual world so automatically and so instantaneously. I think we forget how much work our eyes and our brains are doing behind the scenes, taking in light through the eyeball, transforming light into electrical signals in the retina, packaging up all that information, and sending it on to the brain, and then making sense of what it is we're seeing and responding to it.

(01:36)
In fact, new science is showing that the eye itself, meaning the retina, is actually doing quite a bit of the fancy image processing that scientists used to think was happening deeper in the brain. Of course, our eyes are not perfect. Millions of people suffer vision loss or even blindness. Often, this is because the tiny cells in the retina that process light die off for one reason or another, but here's something that may surprise you. While it sounds like science fiction, the possibility of engineering an artificial retina, a bionic eye, is closer than you might think, and that brings us to today's guest.

EJ Chichilnisky (02:14):

My name is EJ Chichilnisky. I'm on the faculty at Stanford in the Department of Neurosurgery and Ophthalmology with a courtesy appointment in electrical engineering.

Nicholas Weiler (02:24):

Dr. Chichilnisky leads the Stanford Artificial Retina Project. His team is engineering an electronic implant to restore vision to people blinded by incurable retinal disease. In other words, they are prototyping a bionic eye. I started the conversation by asking him, how close are we to creating this technology? And can I say bionic eye, is that an offensive word?

EJ Chichilnisky (02:48):

It's not offensive at all. Are we there? No. Is it possible? Absolutely.

Nicholas Weiler (02:54):

Great. Well, I think before we dive into the details of how this thing would work, we probably need to take a step back and maybe understand some of the basics. How does the eye actually work, so we can understand what's going to be needed to eventually reverse engineer it, essentially?

EJ Chichilnisky (03:08):

Sure, absolutely. So the eye is an imaging device. It absorbs light from the outside world, which is imaged by the lens of the eye onto the retina. The retina is a sheet of neural tissue at the rear of the eye upon which that image is projected. The retina does three major things with this pattern of light. First, it converts the light into electrical signals. Second, it processes those signals quite substantially to re-represent the image in patterns of neural activity that are complex and aren't a simple and direct reflection of it. And the third thing it does is transmit those patterns of activity along the optic nerve to the brain. So into the retina comes an image, and out of the retina comes, let's think of it as a million wires containing these electrical impulses that represent the visual scene in a sort of abstracted way.

Nicholas Weiler (04:01):

So you can sort of think of the eye as being a lot like a camera, right? It's got a lens, it focuses the light, there's a sheet in the back that picks up that light and turns it into an image. On modern cameras or on your cell phone, it can do a lot of pre-processing. It can blur out the background or focus on particular colors and things like that. Is that a fair comparison?

EJ Chichilnisky (04:19):

It is. I think those components are there in the retina, the simple focusing of light and the simple absorption of light. It's the additional stuff. As you say, in modern cameras that's the image processing that takes place in the brain of your phone, if you will. The retina, you can think of it as a camera plus an image processing module that re-represents the signal so that it can be transmitted efficiently to a variety of targets in the brain. And it's that last piece, the image processing, that is not well accounted for in present day technology for interfacing to the retina. Existing technologies do restore some vision to people who have lost it, but in part because they don't handle this image processing aspect, the images they restore are of very limited use to an individual.

Nicholas Weiler (05:07):

Just to understand this pre-processing that we're talking about, I think people may have probably heard of rods and cones in the retina, the light sensitive cells that detect the light, but there are also other cells that do processing. Right? I wonder if you could just briefly touch on what kind of processing is actually going on?

EJ Chichilnisky (05:24):

Sure, absolutely. The rods and the cones are two types of, let's call them pixel detectors, very tiny cells that detect light at a particular point in space, and we have rods and cones because the rods are very sensitive and are very useful in night vision. The cones have the ability to adjust their sensitivity over wide ranges of lighting, all the way from bright noonday light to dusk light, so they handle different light levels. Also, the cones provide color information whereas the rods don't. Mainly, we use our rods at night and we use our cones in the daytime. Those cells then create this pixel representation, if you will, of the scene and, as you mentioned, there's a whole bunch of other cell types throughout the retinal circuitry that perform the image processing operations. And when they're all done, the final cells of the retina, which are known as the retinal ganglion cells, package up the information and send it down the optic nerve to the brain.

(06:20)
But in between the rods and the cones and the ganglion cells, a whole bunch of processing has taken place, carried out by dozens of distinct cell types with very particular functions. We don't have a complete understanding of how all that works, but we know a fair bit about what the different cell types are and what kinds of things they do to the visual signal. The basic science has gotten us to a point of pretty good understanding of what that early visual processing is, at least in some of the cell types, and we're beginning to have a view of how the retina decomposes that image and then represents it when it's time to send it to the brain.

Nicholas Weiler (06:50):

Just to get an example of the kind of thing it's doing, what happens between the rods and cones and the brain?

EJ Chichilnisky (06:56):

Sure. So information is pooled from many cones and rods into a subsequent cell, and then different cells select out different aspects of the signal from the rods and cones. For example, some cells convey information about color. Some cells are sensitive to large stimuli but not so much small stimuli. Some cells respond to rapid fluctuations in the visual scene. Some cells respond more to slow fluctuations. One way to conceive of it is to imagine you had a bunch of different Photoshop filters being applied to the initial signal in the retina. Some of them are blurry, some of them are crisp, some of them are colored in certain ways. Crudely speaking, we can imagine that the retina is sending all these different Photoshop-filtered versions of the image to the brain, but it's doing so in a particular representation, which is the spiking activity of retinal ganglion cells.
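[Editor's note: the "parallel Photoshop filters" analogy above can be sketched in a few lines of code. This is purely an illustrative toy, not anything from the Artificial Retina Project: a tiny grayscale image is passed through a "blur" channel (a local average, loosely like a cell pooling over a large region) and an "edge" channel (center minus surround, loosely like an ON-center cell). All function names here are invented for illustration.]

```python
# Toy sketch of the analogy: two "filter" channels applied in parallel
# to the same image, each re-representing it differently.

def neighborhood(img, r, c):
    """Return the 3x3 neighborhood values around (r, c), clamped at edges."""
    rows, cols = len(img), len(img[0])
    vals = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr = min(max(r + dr, 0), rows - 1)
            cc = min(max(c + dc, 0), cols - 1)
            vals.append(img[rr][cc])
    return vals

def blur_channel(img):
    """Smoothing 'receptive field': average over the 3x3 neighborhood."""
    return [[sum(neighborhood(img, r, c)) / 9.0
             for c in range(len(img[0]))] for r in range(len(img))]

def edge_channel(img):
    """Center-surround 'receptive field': center minus surround mean."""
    out = []
    for r in range(len(img)):
        row = []
        for c in range(len(img[0])):
            vals = neighborhood(img, r, c)
            center = img[r][c]
            surround = (sum(vals) - center) / 8.0
            row.append(center - surround)
        out.append(row)
    return out

# A bright square on a dark background.
image = [[1.0 if 1 <= r <= 3 and 1 <= c <= 3 else 0.0
          for c in range(5)] for r in range(5)]

blurred = blur_channel(image)   # smoothed version of the scene
edges = edge_channel(image)     # responds only at the square's border
```

The edge channel is silent inside the uniform square and fires at its border, while the blur channel smooths everything: two parallel "filtered versions" of one scene, which is roughly what the different ganglion cell types send to the brain.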

Nicholas Weiler (07:44):

Got it. So then the brain gets all these different versions of the image that maybe are highlighting different components of what you're seeing.

EJ Chichilnisky (07:50):

Yes.

Nicholas Weiler (07:50):

That makes sense. We can get back, I think, to our original question which is, is a bionic eye possible and who would benefit from that kind of technology?

EJ Chichilnisky (08:00):

Right. So certain blinding diseases arise from degeneration in the rods and cones and the big ones to keep track of are age-related macular degeneration, which is extremely common, and retinitis pigmentosa, which is an inherited condition that's less common. Both are diseases that can render you partly or entirely blind and both of them involve a degeneration of cells in the retina that capture the light in the first place. So now if you lose those cells you're no longer sensitive to light so, of course, you can't see anything. However, interestingly, the other cell types in the retina that do all this other fancy computation remain in large numbers.

(08:37)
The retinal ganglion cells, in particular, that send the information to the brain remain alive and connected to the brain. So what that brings up is the possibility that you can create a device that substitutes for the early parts of the retina, the part that captures the light and transforms it into different representations, and then, with this electronic device that senses the image and transforms the image, use its output to stimulate the retinal ganglion cells electrically, to activate them and tell them when to send pulses to the brain.

(09:09)
If you're able to replicate what the retina does normally, that is, if your device is behaving faithfully like the retina would normally behave, perhaps you can introduce the correct electrical signals into the correct cells of the correct types at the correct times and deliver an image representation to the brain that's very much like what you would have had in natural vision. If you can pull off all those things, it's possible that the brain would be able to respond and you would see again. That's the concept. Now, it's not that easy to do all those things I've said. It's easy to capture the image with a camera. Performing the same processing that the retina performs is really a matter of trying to build circuits that do the same thing that the retina does, and that's hard work. And also, stimulating the retinal ganglion cells electrically is actually itself pretty tricky. You can get a bunch of cells to fire but to get them to fire in the correct patterns at the correct places and correct times is not easy.

Nicholas Weiler (10:06):

Yeah, and I think that's why it's so key to understand that the retina is not just a light detector, it's doing all this processing. Because you could imagine that just putting in a light detector and sending those signals into the brain, the brain will be able to figure it out but really it's a brain computer interface. The retina is a piece of brain tissue and you have to figure out how to talk to it.

EJ Chichilnisky (10:27):

That is really exactly the story and I like the way you said that. It is a brain computer interface because these signals from the retina, they end up deep in the brain, in a bunch of different places in the brain. So when you pass current and stimulate, let's say, a retinal ganglion cell the spike from that cell travels deep into the brain and activates subsequent cells. If you deliver a signal that correctly mimics the natural signal, you might be able to give a good interface to the brain. But if you deliver a signal that's completely unrealistic and unlike the natural signal, the brain may have difficulty understanding that.

Nicholas Weiler (10:59):

And so, I wonder if you could tell us where you are in this project. How is the artificial retina working so far?

EJ Chichilnisky (11:04):

So we don't have a device yet; we are on the way to building that device. There are a few things that a device you implant into the retina, or any other brain interface, needs to do. One thing it needs to do is sense where it is and which cells are present in the region of the device. It needs to know what's there, and we can do that by recording spontaneous electrical activity. The second thing the device needs to do is figure out how to stimulate all the different cells in a selective pattern, to stimulate this cell without that cell or that cell without this cell. So the device needs to calibrate itself: pass current, record, pass current, record, over and over again in order to figure out, okay, how well can we control this pattern of activity in these cells? And then finally, the device needs to mimic the natural retina when a new image comes in.

(11:49)
That is, take that image, process it according to what the retina would do, know, based on all the basic science, what the different cell types should be doing, and then turn around and use our calibration information to correctly stimulate those cells and those cell types and cause them to fire the right patterns of activity. This requires building many things. One thing is hardware, electronic interfaces to the retina that can both record and stimulate faithfully with large numbers of electrodes. That requires development of new integrated circuit technology, which our electrical engineering collaborators have been doing. We also need to build electrodes, tiny little wires, if you will, that connect up closely to the ganglion cells so we have a chance of stimulating them in a specific manner instead of just grossly stimulating all of them in a non-specific manner.

(12:37)
It also requires many algorithms, that is, knowing how to take the recorded activity and the response to electrical stimulation and use that to figure out how to control the different cells and cell types. Wireless data and power transmission is critical.
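[Editor's note: the calibration loop described above, pass current, record, repeat, then use the estimates to stimulate selectively, can be sketched as a toy simulation. Everything here is a made-up stand-in (the electrode names, the response probabilities, the selectivity rule), not real device code or the project's actual algorithms.]

```python
# Toy sketch of the three device steps: (1) a known set of nearby cells,
# (2) calibration by repeated stimulate-and-record trials, and
# (3) choosing how to stimulate a target cell selectively.

import random

random.seed(0)  # make the simulated trials reproducible

# Hypothetical ground truth: probability that stimulating electrode e
# fires cell c. In a real device this is unknown and must be estimated.
TRUE_RESPONSE = {
    ("e1", "ON_cell"): 0.9,  ("e1", "OFF_cell"): 0.1,
    ("e2", "ON_cell"): 0.2,  ("e2", "OFF_cell"): 0.8,
}

def stimulate_and_record(electrode, cell):
    """One pass-current-and-record trial: did this cell fire?"""
    return random.random() < TRUE_RESPONSE[(electrode, cell)]

def calibrate(electrodes, cells, trials=500):
    """Step 2: estimate activation probabilities from repeated trials."""
    est = {}
    for e in electrodes:
        for c in cells:
            fires = sum(stimulate_and_record(e, c) for _ in range(trials))
            est[(e, c)] = fires / trials
    return est

def choose_electrode(est, target_cell, electrodes, cells):
    """Step 3 (simplified): pick the electrode that best activates the
    target cell while minimizing off-target activation."""
    def selectivity(e):
        off = max(est[(e, c)] for c in cells if c != target_cell)
        return est[(e, target_cell)] - off
    return max(electrodes, key=selectivity)

electrodes = ["e1", "e2"]
cells = ["ON_cell", "OFF_cell"]
est = calibrate(electrodes, cells)
# After calibration, the device can map each target cell to the
# electrode that drives it most selectively.
```

The real problem involves hundreds of electrodes, many cell types, and current levels rather than a binary on/off choice, but the shape of the loop, record, estimate, then stimulate under that calibration, is the same.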

Nicholas Weiler (12:53):

Well, it sounds like clearly it's still a work in progress but a huge amount of progress has been made just to look forward to the future of where all this is going. There are a lot of science fiction portrayals of artificial retinas or bionic eyes, Geordi La Forge from Star Trek, The Next Generation comes to a lot of people's minds when they think about this but there are probably countless others. How realistic are these portrayals, particularly the idea that these implants could not only restore vision but also give people other abilities to have augmented reality or see things that aren't possible with the natural retina?

EJ Chichilnisky (13:27):

Very realistic. So even in the existing early retinal implants, which have very limited function and provide only the crudest visual sensations, even in those devices there can be a form of augmentation, in particular if you have a CCD camera that captures the image. CCD cameras are natively sensitive to the infrared, so if you don't put an infrared filter on that camera it will sense infrared light, and then it will stimulate your cells according to the infrared light that's coming in. So right there you've created some infrared image sensation. Now, I'm not saying you're going to get a high quality image, you won't, not with the existing devices, but you can see that you've immediately given a form of vision that didn't exist before, which is infrared sensitivity.

(14:06)
So is it possible that we'll be able to not just restore the natural patterns that create vision but also introduce new patterns of activity that create new kinds of visual sensations unlike anything we've seen before and maybe in some way augment our capabilities? Absolutely, it's possible. It's a very interesting topic, in part because if we take full control of the neural signal and then try to tweak it and "improve it", we don't even know what that's going to look like. It's not like taking an image and passing it through a Photoshop filter. It's creating patterns of activity in the retina and the brain that have never before been created and will never be created on a computer screen. You cannot make those patterns of activity on a computer screen because you're activating cells in ways that the external world will not activate the cells.

(14:50)
So what happens with that, we don't know, but it could be that you may be able to do things like multi-task better with your visual system. So we have aspects of our visual system that are very sensitive to detecting fine pattern and detail, and we have other aspects of our visual system that are very sensitive to, let's say, movement of objects and the signal over space. If we could segregate those out, then it might be that you can do fine detail in the inspection of one visual scene and movement inspection of a different visual scene at the same time. I'll give you a really silly example. If you could do that, you might be able to read your texts and drive your car at the same time safely.

Nicholas Weiler (15:29):

How's that?

EJ Chichilnisky (15:30):

Well, because one part of your visual system would be analyzing the fine-grained information of the text on your phone, and a different, independent part of your visual system might be looking at the movement of the objects all around you, that is, the cars that you need to avoid bumping into. Now, with a normal visual display, those things are yoked together. Those cells are activated simultaneously because the light comes in and activates both cell types, and you cannot read your texts and drive your car safely at the same time because one signal is confusing the other signal. But if you separate them out and take full control of them, as a device like this could, then you could embed one image in one collection of cells and a different image in a different collection of cells, and potentially be able to process those images in parallel safely.

(16:10)
Now, we don't even know what that would look like. It wouldn't look like text superimposed on the moving images of cars, I don't think. I don't know actually what it would look like, because what would it take to get to a point where your visual sensations are sort of decoupled from one another and can be used for more than one thing? That's the kind of future you're talking about. When we're able to stimulate the different cells at different times, under a sort of sensible control, with a device like the kind I'm describing that can read the activity, figure out the cells, and then go in and control them in a directed manner, we will be able to perform those kinds of experiments.

(16:43)
It's the dawning of a kind of scientific revolution where we can introduce new capabilities into the nervous system and the brain. It sounds a bit spooky and I don't think any of us is hankering to have some kind of experience like that with an electronic implant tomorrow, but at some point this may not seem so weird. This may seem natural that of course we can take our devices and expand our capabilities to sense the world and to interact with it.

Nicholas Weiler (17:08):

Well, thank you for giving us that glimpse of the future. I really appreciate you coming and joining us on the podcast and answering our questions.

EJ Chichilnisky (17:16):

It's my pleasure. A lot of fun. Thank you.

Nicholas Weiler (17:26):

Thanks so much again to our guest, EJ Chichilnisky. Dr. Chichilnisky's group hopes to have the first in vivo prototype of their artificial retina ready for testing later this year. We'll keep you updated.

For more info about his work and the Stanford Artificial Retina Project, check out the links in the show notes. 

This episode was produced by Michael Osborne with production assistance by Morgan Honaker and Christian Haigis. I'm Nicholas Weiler, see you next time.