This file contains research from my undergraduate days at MIT, when I was entertaining the notion of a neuroprosthetic interface for my wearable computer. My consequent argument against neuroprosthetics is given in the New Scientist interview on this web site; a riposte is given by Gerald Maguire and Richard Normann in the published article.

While the information is a bit out of date, enough people have expressed interest in neural interfaces that I'm releasing it as a starting point for those interested in researching the subject. I'd love to see someone do a serious survey of the field from a wearables standpoint. Note that this paper has never been peer reviewed (besides receiving an "A" in the course :-). The phrases "augmented reality" and "wearable computing" are markedly missing - I was not yet confident enough in the terms and concepts to "publicly" air them to my professors (other than my advisor). Another notable omission is the artificial cochlea work that was succeeding at the time at Mass Eye and Ear. I can only imagine that I thought headphones were sufficient and preferable to the medical procedure and so did not bother to mention it.

This paper is unedited from the original. It is in older-style LaTeX format, and the images were photocopied from the references. The experiments described are not something to play around with. I hate to say something so obvious, but

DO NOT PERFORM SUCH EXPERIMENTS YOURSELF - PERMANENT DAMAGE OR DEATH COULD RESULT.

For those scientists out there, refer to the human experimentation guidelines of your country.


\documentstyle[12pt,clbiba,fleqn,driver]{article}
\def\vect#1{{\bf #1}}
\def\prop{\quad\propto\quad}
\newcommand{\dspace}{\renewcommand{\baselinestretch}{1.5}\large\normalsize}

\input{psfig}

\begin{document}
\begin{titlepage}
\topskip 6.5cm
\centerline{\LARGE Electrical-Neural Interface as Pertaining to
Virtual Reality}

\vskip 1.0cm

\centerline{Final Paper for 9.01}

\vskip 1.0cm

\centerline{Thad E. Starner}
\centerline{December 12, 1990}



\vskip 0.15in
\end{titlepage}

\dspace
\topskip 0.0cm

\section{Abstract}
	With the advent of faster computers and better man-machine
interfaces, the field of virtual reality was formed.  Virtual reality
deals with providing a user with simulated environments.  Generally,
the more involved the user can become in the simulation, the more
useful it is.  Presently, scientists are concentrating on sight,
hearing, and touch to provide the stimuli necessary to make these
simulations or ``realities'' more believable.  While present
interfaces into these realities are relatively crude, this paper will
explore the future of such interfaces.

\section{Introduction}
	Mankind has improved its standard of living by finding faster
ways to do work.  Historians often speak of the Agricultural
Revolution and the Industrial Revolution as points where discoveries
and inventions caused a distinct increase in man's efficiency.
More recently we have seen the ``Computer Revolution.''  Computers
have allowed quicker access to large amounts of data.  Businesses can
process transactions more quickly, scientists can recover, store, and
process more data, and information can be spread world-wide in an
instant.  Researchers have even tried to create artificial
intelligences to take over the more mundane tasks and help make
complicated decisions.  However, these experiments have repeatedly
fallen short of their goals.  Instead of using computers to develop
artificial intelligences, why not use them to further help their users?
While computers are not very good at making ``intelligent'' decisions,
they are excellent at storing and transferring information.  Humans
seem to have the opposite problem.  While humans easily adapt
to a variety of tasks that require ``intelligence,'' they cannot
accurately store as much information as modern computers.  This
division of labor returns computers to the tasks of data retrieval and
storage.  Thus, the most efficient combination of computer and man is
for the computer to present data to a user who
then makes decisions based on that data.  The more efficient the
communication between the human and the computer, the more tasks can
be completed.  This is where the field of virtual reality comes into
play.

	Virtual reality attempts to utilize all of the user's senses
to produce simulated environments.  If done properly, these
environments succinctly express the data that the user needs and
provide the interface for the user to respond or to query for more
data.  For example, an architect may want to
survey an existing building and determine methods for improving its
design.  A computer can use the blueprints for the existing building
as well as the layout of the surrounding environment to create a
visual simulation.  The architect could then experimentally rearrange
the building simply by grasping sections of a miniaturized simulation
and physically moving them.  The computer can update the appearance of
the building and highlight possible problems based on the physical
properties of the materials used.  Through such an environment, the
architect could make aesthetic decisions about the design and let the
computer keep track of bothersome details.  Through
several iterations of the computer querying the architect on specific
issues and the architect providing decisions, a good model could be
completed without ever having to involve physical materials.  If the
simulation were good enough, the computer should then be able to
provide the revised blueprints, the materials needed, an estimate of
the cost, and the time needed to finish the design.  Interactive
simulations such as this one could be designed for use in surgery,
stock trading, computer programming, market analysis, design, and many
other applications.

	Attempts at creating useful virtual environments
are limited by computation power and interface design.  While the two
fields are often worked on separately, the present state of one
often affects the research of the other.  For instance, bit-mapped
computer displays only became feasible when cheap memory and fast
screen-refresh processors were developed.  This paper
will concentrate on the interface issues, however, and will try to
take the field to its extremes.

	Presently, man-machine interfaces
concentrate on sight, sound, and touch, usually in that order.
Fortunately, the present interface for sound, i.e.\ headphones, is
relatively sufficient.  While the user is aware of the headphones,
realistic sounds can be presented.  Unfortunately, the senses of sight
and touch have a higher information bandwidth than that of hearing.

	While normal computer screens are good for displaying
simple information, a large percentage of the user's field of view must
be covered in order to draw the user into an environment.  For
virtual environments, the optimal situation is for the computer to
have complete control of the entire visual field.  In the past, this
issue has been approached either by creating very large screens or by
making the user strap on a pair of goggles with a small screen placed
in front of each eye.  The second method allows for binocular
disparity, but has serious trouble providing satisfactory
resolution due to both hardware limitations and computational power.
Another method, suggested by modern science fiction, is to interface
directly to the visual system of the user.  This method allows
the computer to directly affect the user's sight.  Since humans
require high resolution only at the fovea, the computer could save
computational power by not providing as much resolution in the
periphery.  Also, such a system allows total control, which means that
any sort of visual environment, no matter how physically improbable,
could be created.  This ability is especially important when
simulations of environments with different physical properties, such
as space, are required.  However, how practical is such a system?  The
answer to this question comes from the field of neural prostheses.
For many years, a visual prosthesis for the blind has been envisioned.
The issues that pertain to such a device are the same issues that are
involved in a direct interface for virtual environments.
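
	For a sense of the scale of the potential savings, consider the
following sketch (written in Python for concreteness).  It compares
the number of samples a uniform-resolution display needs against one
whose resolution falls off with eccentricity.  The falloff model and
its constants are illustrative assumptions, not measured values:

\begin{verbatim}
import math

def acuity(eccentricity_deg, e2=2.0):
    # Assumed model: relative acuity falls off as 1/(1 + e/e2),
    # where e2 is the eccentricity at which acuity halves.
    # e2 = 2 degrees is an illustrative guess.
    return 1.0 / (1.0 + eccentricity_deg / e2)

def samples(field_deg=100.0, peak_samples_per_deg=60.0, step=0.1):
    # Integrate sample counts over concentric rings of the visual
    # field, once at uniform peak resolution and once with the
    # resolution scaled by the acuity falloff.
    uniform, foveated = 0.0, 0.0
    e = 0.0
    while e < field_deg / 2.0:
        ring_area = 2.0 * math.pi * e * step      # square degrees
        uniform += ring_area * peak_samples_per_deg ** 2
        foveated += ring_area * (peak_samples_per_deg * acuity(e)) ** 2
        e += step
    return uniform, foveated

u, f = samples()
print("savings factor: %.1f" % (u / f))
\end{verbatim}

\noindent Under these assumed constants, the foveated scheme needs
roughly two orders of magnitude fewer samples than the uniform
display, which suggests why tracking the fovea is so attractive.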

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Basic visual pathway (from Guyton, p.301).}
\label{fig:visual-pathway}
\end{figure}

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=2in} }
\caption{Visual cortex (from Guyton, p.301).}
\label{fig:visual-cortex}
\end{figure}

	Before discussing how to interface to the visual system, a
general overview is necessary.  Figure 1 shows the basic visual
pathway.  After light strikes the retina, the resulting impulses
traverse the optic nerve to the optic chiasm, where the fibers from
the nasal half of each retina cross to the opposite side of the brain.
These fibers form the optic tracts, which pass into the lateral geniculate
body.  From here the geniculocalcarine fibers carry the signals through
the geniculocalcarine tract to the primary visual cortex (see Figure
2).  After the primary visual cortex, signals are passed to Brodmann
areas 18 and 19, also called the secondary visual cortex, where higher
level processing occurs.

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Visual cortex electrode array device (from Brindley, p.492 plate 3).}
\label{fig:electrode-device}
\end{figure}

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Implanted electrode array (from Brindley, p.492 plate 1).}
\label{fig:implanted-array}
\end{figure}

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Electrode placement (from Brindley, facing p.483).}
\label{fig:electrode-placement}
\end{figure}

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=4in} }
\caption{Phosphenes induced by stimulation (from Brindley, p.486).}
\label{fig:phosphenes}
\end{figure}

	From this simple description, there appear to be two areas
where an interface could be made.  The first is directly to the visual
cortex.  Fortunately, the visual cortex is at the extreme posterior end of
the brain, which is relatively easy to reach.  As such, it is one of
the better known regions of the brain.  In 1968 G. S. Brindley and W.
S. Lewin reported the successful implantation of an 80-electrode array
into the visual cortex.  The device was intended as a preliminary
experiment toward a visual prosthesis.  A copy of the device is shown in
Figure 3 and an X-ray of the implanted device in Figure 4.  The
larger light rectangles are radio receivers while the smaller, sharp
black dots are electrodes.  Signals were sent to the electrodes by
pressing the transmitting coil of an oscillator (tuned to a particular
radio receiver's frequency) directly over the radio receiver.  The
radio receiver then stimulated its electrode.  A typical stimulus was a
100~Hz train of pulses, each pulse lasting 200~$\mu$sec.  The cortical
electrodes were
arranged as in Figure 5.  With this apparatus, Brindley and Lewin were
able to produce very small spots of white light (phosphenes) in the
subject's visual field.  The positions of these phosphenes are shown
in Figure 6.  Unfortunately, some of the phosphenes were not discrete
points of light but rather fuzzy clouds.  Also, some electrodes
produced multiple phosphenes.  Indeed, some of the electrodes produced
one phosphene at lower levels of stimulation and then produced another
phosphene on the opposite side of the visual field at higher levels of
stimulation.  Brindley was optimistic, thinking it possible
that a modified version of his prototype (more electrodes,
finer control) could permit blind patients to read, write, and
avoid obstacles.  From what is now known about the visual cortex, the
author will present ideas explaining some of Brindley's
results and why a visual prosthesis hooked into the visual
cortex may be difficult.
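
	For concreteness, the following Python sketch reconstructs the
timing of such a stimulus.  Only the 100~Hz pulse rate and the
200~$\mu$sec pulse width come from Brindley and Lewin's report; the
sample rate, train duration, and unit amplitude are assumptions:

\begin{verbatim}
def pulse_train(rate_hz=100.0, width_sec=200e-6,
                duration_sec=0.5, sample_rate_hz=1e5):
    # Sampled rectangular pulse train: `width` samples high at
    # the start of each period, zero otherwise.  Amplitude is a
    # unitless 1.0; actual currents would be set per electrode.
    period = int(sample_rate_hz / rate_hz)
    width = max(1, int(width_sec * sample_rate_hz))
    n = int(duration_sec * sample_rate_hz)
    return [1.0 if (i % period) < width else 0.0 for i in range(n)]

train = pulse_train()
print("active fraction: %.4f" % (sum(train) / len(train)))
\end{verbatim}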

	Recordings from layer IV of the visual cortex, where the
geniculocalcarine tract ends, show that the
neurons of the region have center-surround
receptive fields.  Recordings also show that the patterns of excitation in
the visual cortex correspond with contrasting borders in the visual
field.  The greater the contrast, the greater the excitation.  However,
as the signals progress out from layer IV, more and more processing
can be seen.  Three types of cells are normally seen in these outer
layers.
Simple cortical cells respond to small spots of light and stationary
objects, but they respond best to bars in a specific orientation.
Complex cortical cells respond best when a correctly oriented bar of
light moves in a particular direction.  Finally, hypercomplex cortical
cells fire only to moving angles or lines of a particular length.
Neurons that
respond to similar stimuli (like line size) are found close to one
another, and neurons which service the same spot on the retina are
also grouped.  Alternating columns corresponding
to the left or right eye are prevalent.  On top of all this, the
neurons are mapped topographically according to the layout of the
retina.  These groupings lead to the idea of
``hypercolumns,'' which can be thought of as processing modules dedicated to
analyzing the image at one particular spot on the retina.  Due to the
nature of the processing of the hypercolumns, much interconnection
exists.  Brindley could have inadvertently stimulated some
simple cortical cells at some locations and more complex cortical
cells in other regions.  This could have accounted for the fuzzy
phosphenes that were observed as well as some of the multiple-dot
phosphenes.  Furthermore, Brindley reported that when he applied more
current to two adjacent phosphenes, sometimes a line appeared between
the two.  The extra current could have leaked into one of the more
complex cells of the hypercolumns, causing the line.  Therefore, more care must
be taken when interfacing a visual prosthesis.  What is so difficult
about that?  Modern techniques still cannot target a specific
neuron.  Often a connection is made and the researcher must
determine experimentally what he has interfaced to.  This is due to the
size of the neurons and electrodes, the physiological diversity among
subjects, and the fact that tissue tends to be soft and can move.  Even
given these problems, some promising work has been done by W. H.
Dobelle.  Dobelle claims to have made an experimental device which, by
cortical stimulation, allows a blind patient to see white horizontal
and vertical lines on a black background.  He has also used such a
device to allow a blind person to distinguish braille letters.
However, Dobelle's results also suffer from topographic problems,
intensity of phosphenes, and flicker.  Even if these problems are
solved, there are still more issues with interfacing to the visual
cortex.
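
	The orientation tuning of simple cortical cells can be
caricatured in a few lines of Python.  The cosine-power tuning curve
below is an assumed convenience, not a measured profile; it merely
illustrates that a cell responds most strongly at its preferred
orientation and falls off as a bar rotates away:

\begin{verbatim}
import math

def simple_cell_response(bar_deg, preferred_deg=0.0, sharpness=4.0):
    # Toy simple cell: maximal response to a bar at the preferred
    # orientation, falling off with a cosine-power tuning curve.
    delta = math.radians(bar_deg - preferred_deg)
    return max(0.0, math.cos(delta)) ** sharpness

for angle in (0, 30, 60, 90):
    print(angle, round(simple_cell_response(angle), 3))
\end{verbatim}

\noindent An electrode whose current spreads across several such
cells with different preferred orientations could evoke a percept
quite unlike a single clean point, consistent with the fuzzy and
line-like phosphenes described above.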

	So far, we have concentrated on the processing done in the
visual cortex.  However, the visual image is used and processed by
other parts of the brain as well.  The lateral geniculate body is
postulated to use images from both eyes for fusion of vision and
stereoscopic depth perception.  Visual fibers also pass to the
suprachiasmatic nucleus of the hypothalamus (control of circadian
rhythms), the pretectal nuclei (fixation of the eyes and the pupillary
light reflex), the superior colliculus (simultaneous bilateral movement
control of the eyes), and the pulvinar (a secondary visual pathway).
Furthermore, a good deal of processing occurs at the retina itself.
While a first-order visual prosthesis may ignore these issues for
simplicity, a virtual reality system should be concerned with these
problems.  Either a direct interface would have to be provided to these
areas and the appropriate processing done on the image before it is
passed to the cortex, or the entire interface should be relocated to
an earlier part of the system.  Fortunately, work is being done on the
latter.  Again the work is in conjunction with a visual prosthesis,
but the research is very pertinent.

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Structure of the Retina (from Guyton p.296).}
\label{fig:retina}
\end{figure}

	Interfacing to the retina is a very delicate matter.  Figure 7
provides an overview of the retina.  Note that the retina is basically
inverted in its design.  The light-sensing rods and cones are furthest
from the light while the axons leading to the optic nerve are closest.
This may seem like an advantage for our purpose, and perhaps it is,
but it also provides some distinct challenges.  First, a quick
overview of the functions of the various cells in the retina is
necessary.  The rods and cones are the photosensors of the retina.
The rods are associated with monochromatic vision and are the most
numerous in the retina.  The
cones are associated with color vision and are prevalent in the
fovea.  The fovea is the most densely packed region of the retina and
is used when fixating on an object due to its high resolution and
color discrimination.  The rods and cones do not generate action potentials
but instead use a method called electrotonic conduction.  The bipolar and
horizontal cells also use this method.  Bipolar cells exist in two
types: depolarizing and hyperpolarizing.  The depolarizing bipolar
cell is inhibited by the transmitter substance produced by the rods
and cones while the hyperpolarizing bipolar cell is excited by it.
This allows positive and negative signals to be transmitted to the
amacrine and ganglion cells.  The horizontal cells connect the rods
and cones laterally to more bipolar cells.  This allows for the lateral
inhibition found in the retina.  The amacrine cells connect the
bipolar cells to the ganglion cells.  The amacrine cells are believed
to send signals to the brain when sudden changes in light intensity
occur.  There are several types of ganglion cells (as many as 30 or 40
according to some sources).  Most ganglion cells react to
contrast borders, changes in light intensity, or color opponency in
an image.  The output of the ganglion cells is then transmitted to the
optic nerve.  Thus, the ganglion cells represent the highest
level of processing in the retina.
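
	The center-surround behavior of a typical ganglion cell can be
modeled as a difference of Gaussians:  an excitatory center minus an
inhibitory surround.  In the Python sketch below, the sigma values
and the test image are illustrative assumptions; the point is that
such a unit responds near a contrast border and stays nearly silent
in a uniform field:

\begin{verbatim}
import math

def dog_response(image, x, y, sigma_c=1.0, sigma_s=3.0):
    # Difference-of-Gaussians receptive field centered at (x, y).
    height, width = len(image), len(image[0])
    r = int(3 * sigma_s)
    resp = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy = min(max(y + dy, 0), height - 1)   # clamp at edges
            xx = min(max(x + dx, 0), width - 1)
            d2 = dx * dx + dy * dy
            center = math.exp(-d2 / (2 * sigma_c ** 2)) \
                     / (2 * math.pi * sigma_c ** 2)
            surround = math.exp(-d2 / (2 * sigma_s ** 2)) \
                       / (2 * math.pi * sigma_s ** 2)
            resp += image[yy][xx] * (center - surround)
    return resp

# A vertical contrast border: dark left half, bright right half.
img = [[0.0] * 16 + [1.0] * 16 for _ in range(16)]
print("near border:", round(dog_response(img, 18, 8), 3))
print("uniform    :", round(dog_response(img, 27, 8), 3))
\end{verbatim}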

	When looking at Figure 7, an obvious plan comes to mind:  why
not simply stimulate the rods and cones through the pigment layer at
the back of the eye?  In this way, all of the processing and feedback
systems of the visual system could be utilized.  Unfortunately, the
pigment layer is very sensitive and bleeds very easily.  In fact, the
pigment layer is known to detach from the retina, which will
cause blindness if untreated.  Therefore, we must look for another
method.  Interfacing to the axons coming from the ganglion cells is
tempting; however, many axons run over the surface of the retina.
Thus, when an electrode is used to stimulate an area, many axons are
stimulated, causing fuzzy blurs to be perceived.  Also, this method loses
all the processing capability of the retina.  A compromise method is
to interface to the ganglion cells.  This at least preserves one level of
processing in the retina.  Since the other cells in the retina seem
relatively simple, analog hardware or software should be able to
simulate their output to the ganglion cells.  This situation is
fortunate since there seem to be many functions of the ganglion
cells that are not yet well understood; interfacing at this level
saves the effort of mapping all of the ganglion cells.
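
	As a deliberately crude sketch of what simulating these cells
in software might mean, the following Python fragment splits each
photoreceptor level into the two signed channels carried by the
depolarizing and hyperpolarizing bipolar cells, using the local mean
as a stand-in for horizontal-cell lateral inhibition.  Every modeling
choice here is an assumption; real bipolar and amacrine responses are
far richer:

\begin{verbatim}
def retinal_preprocess(photoreceptor_levels):
    # Hypothetical stand-in for the bipolar/horizontal layers:
    # light levels relative to the local mean are split into an
    # ON (depolarizing) and an OFF (hyperpolarizing) drive that
    # would be delivered to the ganglion-cell electrodes.
    mean = sum(photoreceptor_levels) / len(photoreceptor_levels)
    drives = []
    for level in photoreceptor_levels:
        diff = level - mean
        on = max(0.0, diff)      # depolarizing bipolar channel
        off = max(0.0, -diff)    # hyperpolarizing bipolar channel
        drives.append((round(on, 3), round(off, 3)))
    return drives

print(retinal_preprocess([0.1, 0.1, 0.9, 0.1]))
\end{verbatim}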

	While interfacing to the ganglion cells sounds like a good
idea, there are many practical issues.  The first is that, to date,
any electrode interface to the retina has caused the retina to die.
Not only is the retina very
sensitive to electrodes (especially iron-based ones), it is sensitive
to pressure.  An increase in internal eye pressure of 10~mm of mercury
will cause the retina to deteriorate.  Thus, the actual
physical interface is the most important issue.  Research is being done now
on how to overcome this problem, and it is assumed that once this
problem is overcome, the hardware drivers for the interface will be
relatively simple.  Once connections are possible, it is then
necessary to determine which ganglion cells are being stimulated so as
to provide the proper input.  A system which, once implanted, would
send test signals to determine electrode position and then adapt
appropriately may be possible.  Unfortunately, it may be many years
before the first experiments in this field prove fruitful.  Therefore,
direct interface to the visual cortex is still a very active
alternative.
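
	Such a self-calibrating implant might follow the outline of
the Python sketch below.  The {\tt stimulate} and {\tt observe} hooks
are hypothetical placeholders for the implanted driver hardware and
for whatever subject report or recording localizes the evoked
response:

\begin{verbatim}
def calibrate(electrodes, stimulate, observe):
    # Fire a test signal on each electrode and record where the
    # evoked response appears.  Electrodes whose responses cannot
    # be localized are left out of the mapping as unusable.
    mapping = {}
    for e in electrodes:
        stimulate(e)
        position = observe()
        if position is not None:
            mapping[e] = position
    return mapping

# Toy usage: electrodes 0-3 map to known spots; electrode 4 is dead.
fake_positions = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
current = {"e": None}
table = calibrate(range(5),
                  lambda e: current.update(e=e),
                  lambda: fake_positions.get(current["e"]))
print(table)
\end{verbatim}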

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Somatic sensory nerve endings (from Guyton, p.129).}
\label{fig:nerve-endings}
\end{figure}

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=2in} }
\caption{Somatic sensory cortical areas (from Guyton, p.161).}
\label{fig:somatic-cortex}
\end{figure}

	The sense of touch is another important issue in the field of
virtual reality.  While special-purpose joysticks and sensors have
been devised to provide feedback, the field is still very young.  Again
there is a problem with the sheer bandwidth of information that is
passed to the brain from the mechanoreceptors, thermoreceptors, and
nociceptors in the skin (see Figure 8).  Clearly, interfacing directly to these
receptors is out of the question; the user would not be able to move
with the equipment that would be necessary.  What about
interfacing to the somatic sensory cortical areas?  As can be seen in
Figure 9, this area of the brain is not as accessible as the visual
cortex.  Also, something equivalent to the
hypercolumns in the visual cortex appears to exist in the somatic
sensory cortex, which means that making the appropriate connections
could be difficult.  Furthermore, there is evidence of processing by
the spinal cord before the signals reach the cerebral cortex.  All of
these problems complicate the issue of direct interface.  However,
there is a non-intrusive method which holds some promise for
controlling the sensed positions of a user's limbs.

\begin{figure}[btp]
\centerline{\psfig{figure=fig1.PS,height=3in} }
\caption{Vibration induced position illusions (from Goodwin, p. 712).}
\label{fig:vibration-illusions}
\end{figure}

	In 1972 G. M. Goodwin, D. I. McCloskey, and P. B. C. Matthews
published a technique by which illusions of motion could be induced by
vibration.  Basically, when certain areas of a subject's biceps were
vibrated, the subject believed that his arm was further outstretched
than it actually was (see Figure 10).  Experiments by Biguer {\em et
al.} extended this research to vibration of neck muscles to try to
change the apparent direction of a light source.  In the discussion of
results, Biguer theorizes that vibration of a muscle causes its
muscle spindles to discharge, which is the signal that the muscle has
lengthened.  Thus, illusions of posture and limb displacement can be
induced by vibrating muscles.  These results are of use for virtual
realities.  One of the hardest tasks is to provide some form of
feedback to the user about impenetrable or immovable objects.  Small
vibrators attached to the muscles of the user could provide the user
with the sense of not being able to extend or retract his limbs all
the way when an object is immovable.  While this is not as direct as
stimulating free nerve endings, it at least provides some feedback to
the user.  Also, the research of Biguer shows that stimulation of the
neck muscles can cause an illusion of visual displacement.
Unfortunately, most subjects did not have illusions of head
displacement.  This may be due to the lack of agreement of the
stimulus with information provided by the vestibular nuclei.  Even so,
stimulation of the neck muscles may allow the computer to draw the
user's attention to a particular object.  However, when the context of
a scene is provided, the illusion is harder to create.  Thus, control
of the signals from the semicircular canals may be needed as well.
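
	A controller for such vibrators might look like the Python
sketch below.  The mapping from a blocked virtual limb to a biceps or
triceps vibrator, the gain, and the clipping are all assumptions; in
particular, which muscle to vibrate for which direction of blocked
motion would have to be determined experimentally:

\begin{verbatim}
def vibrator_amplitudes(commanded_extension, allowed_extension,
                        max_amplitude=1.0, gain=5.0):
    # The further the user tries to move past the virtual
    # obstacle, the stronger the vibration of the muscle whose
    # spindle discharge signals "lengthening" in the blocked
    # direction.  Which muscle that is, is itself a guess here.
    error = commanded_extension - allowed_extension
    amp = min(max_amplitude, gain * abs(error))
    if error > 0:        # user pushes outward past the object
        return {"biceps": amp, "triceps": 0.0}
    elif error < 0:      # user pulls inward past the object
        return {"biceps": 0.0, "triceps": amp}
    return {"biceps": 0.0, "triceps": 0.0}

print(vibrator_amplitudes(0.8, 0.6))  # arm blocked while extending
\end{verbatim}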

\section{Conclusion}

	While present virtual reality systems suffer from a
lack of believability, they can still be useful in solving problems.
As more research is done on the human nervous system, a greater number
of clever interfaces will be born.  These interfaces will provide the
user with more control and better ``realities.''  While many of the
devices hypothesized in this paper are far from completion, they
still hold promise.  Maybe one day humans will be able to plug
themselves into their computers and experience worlds and
communication freedoms undreamed of today.


\vskip 12pt
\noindent{\bf REFERENCES}
\vskip 6pt

\noindent\hangindent 24pt 
[1]\quad  Agnew, William F. and McCreery, Douglas B. (1990) {\em Neural
Prostheses}.  Englewood Cliffs, New Jersey:  Prentice Hall.

\noindent\hangindent 24pt 
[2]\quad  Biguer, B., Donaldson, I. M. L., Hein, A., and Jeannerod, M.
(1988) ``Neck Muscle Vibration Modifies the Representation of Visual
Motion and Direction in Man.''  Brain, 111, 1405-1424.

\noindent\hangindent 24pt 
[3]\quad  Brindley, G. S. and Lewin, W. S.  (1968) ``The Sensations Produced
by Electrical Stimulation of the Visual Cortex.''  J. Physiol., 196,
479-493.

\noindent\hangindent 24pt 
[4]\quad Class notes from 9.35 Perceptual Information Processing by Prof.
Richard Held, Fall 1988.
	
\noindent\hangindent 24pt 
[5]\quad Class notes from 9.01 Neuroscience by Prof. Nelson Yuan-sheng
Kiang, Fall 1990.
	
\noindent\hangindent 24pt 
[6]\quad  Ferris, Clifford D. (1977) {\em Introduction to Bioinstrumentation}.
Clifton, NJ:  The Humana Press.

\noindent\hangindent 24pt 
[7]\quad  Fryer, Thomas B., Miller, Harry A., and Sandler, Harold
(1976) {\em Biotelemetry III}.  New York:  Academic Press, Inc.

\noindent\hangindent 24pt 
[8]\quad  Goldstein, Bruce E. (1989) {\em Sensation and Perception}.
Belmont, CA:  Wadsworth Publishing Co.

\noindent\hangindent 24pt 
[9]\quad  Goodwin, G. M., McCloskey, D. I., and Matthews, P. B. C.
(1972) ``The Contribution of Muscle Afferents to Kinaesthesia Shown by
Vibration Induced Illusions of Movement and by the Effects of
Paralysing Joint Afferents.''  Brain, 95, 705-748.

\noindent\hangindent 24pt 
[10]\quad  Guyton, Arthur C. (1987) {\em Basic Neuroscience:  Anatomy and Physiology}.
Philadelphia:  W.B. Saunders Co.

\noindent\hangindent 24pt 
[11]\quad Interview with John Wyatt, Nov. 28, 1990.

\noindent\hangindent 24pt 
[12]\quad  Normann, Richard A. (1988) {\em Principles of Bioinstrumentation}.
New York:  John Wiley \& Sons.

\noindent\hangindent 24pt 
[13]\quad  Regan, David (1989) {\em Human Brain Electrophysiology}.
New York:  Elsevier.

\noindent\hangindent 24pt 
[14]\quad  Rizzo, Joseph and Wyatt, John.  (1989) ``Silicon Retinal
Implant to Aid Patients Suffering from Certain Forms of Blindness.''
Unpublished project proposal.

\noindent\hangindent 24pt 
[15]\quad  Wyatt, John, Rizzo, Joseph, Edell, Dave, and Masland, Dick.
(1990) ``Silicon Retinal Implant to Aid Patients Suffering from
Certain Forms of Blindness.''  Unpublished progress report.

\end{document}