
Progress in Brain Research, Volume 215


Advisory Editors

Stephen G. Waxman

Bridget Marie Flaherty Professor of Neurology,
Neurobiology, and Pharmacology;
Director, Center for Neuroscience &
Regeneration/Neurorehabilitation Research
Yale University School of Medicine
New Haven, Connecticut
USA

Donald G. Stein

Asa G. Candler Professor
Department of Emergency Medicine
Emory University
Atlanta, Georgia
USA

Dick F. Swaab

Professor of Neurobiology
Medical Faculty, University of Amsterdam;
Leader, Research Team Neuropsychiatric Disorders
Netherlands Institute for Neuroscience
Amsterdam
The Netherlands

Howard L. Fields


Professor of Neurology
Endowed Chair in Pharmacology of Addiction
Director, Wheeler Center for the Neurobiology of Addiction
University of California
San Francisco, California
USA


Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
225 Wyman Street, Waltham, MA 02451, USA
First edition 2014
Copyright © 2014 Elsevier B.V. All rights reserved
No part of this publication may be reproduced or transmitted in any form or by any means,
electronic or mechanical, including photocopying, recording, or any information storage and
retrieval system, without permission in writing from the publisher. Details on how to seek
permission, further information about the Publisher’s permissions policies and our
arrangements with organizations such as the Copyright Clearance Center and the Copyright
Licensing Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the
Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and
experience broaden our understanding, changes in research methods, professional practices, or
medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in
evaluating and using any information, methods, compounds, or experiments described herein.
In using such information or methods they should be mindful of their own safety and the safety
of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors,
assume any liability for any injury and/or damage to persons or property as a matter of products
liability, negligence or otherwise, or from any use or operation of any methods, products,
instructions, or ideas contained in the material herein.
ISBN: 978-0-444-63520-4
ISSN: 0079-6123
For information on all Elsevier publications
visit our website at store.elsevier.com


Preface
No invention or discovery is ever produced in a vacuum. First, there must be a
perceived unfulfilled need. This will usually be followed by attempts to satisfy that
need which may not always be successful. The most familiar example of persistent
lack of success is the alchemists' failure to transmute base metals into gold. One such sequence, applied to medicine, is the introduction of a new treatment concept. The journey from concept to fruition in the form of a usable new method is painstaking and time-consuming. This part of the process may involve useful but suboptimal new ideas or methods that require repeated adaptation. Chance also plays a part. Moreover, a treatment conceived imperfectly at first may be improved by people quite different from those who initiated the new notions, and the honor may well go to the discoverer of the successful adapted method rather than to the original creative thinker whose investigations ended in success. Furthermore, along the way, a conservative profession, concerned both for the patients under its care and for the standard of living of its members, may well oppose anything new, since unproven novelty may threaten both patients' safety and practitioners' domestic luxury.
This sequence of partial success, acceptance, resistance to change, and final
success of a truly effective new method should be seen as characteristic of medical
advances which, like it or not, are sought and implemented by human beings with all
our talents, virtues, and weaknesses. No better example of the sequences involved
can be found than the series of events which led to the discovery of smallpox
vaccination.

Lady Mary Wortley Montagu (1689–1762), daughter of the Earl of Kingston
upon Hull, was a woman of beauty, wit, and an independence of spirit unusual for her time. Her father pressed her to marry a man of distinction and property with the positively Dickensian cognomen of Clotworthy Skeffington, an Irish nobleman whom she did not fancy. So she eloped in 1712 and married Edward Wortley Montagu in Salisbury. In 1715, she contracted smallpox, which she survived but with some scarring; her brother died from the disease. She had previously been a Court favorite, but her satirical writings about the Princess of Wales, written while she was sick, barred her from Court. She thus joined her husband, who had been appointed British ambassador to Turkey. There she encountered the practice of variolation, whereby matter from an infected person was introduced under the skin of a healthy person to induce a mild attack of the disease, hopefully with minimal scarring and lifelong immunity. The procedure was not without risk, because some inoculated individuals could suffer a severe
form of smallpox which could prove lethal. Nonetheless, its acceptance by the upper
reaches of society led to its increasing use. One of those who had survived variolation
but was never as fit afterward as he had been before was Edward Jenner
(1749–1823). Though trained by the best in London, he was at heart a country boy and returned to practice in Berkeley in Gloucestershire, where his museum stands to this day. As a country doctor, he had heard of the rural practice of deliberate inoculation with cowpox, the bovine form of the disease contracted by milkmaids, which conferred immunity to smallpox. Due to the rarity of cowpox,




it was not easy to perform routine inoculations, but those who were inoculated never
suffered smallpox, including Jenner’s little son. The success of the procedure needs
no further comment. Nonetheless, the method was criticized within the medical profession, not least by those who received substantial fees for performing variolation, so that it was some time before the treatment became universally accepted. This sort of reaction following the introduction of a new method in surgery is not unfamiliar. One could consider Semmelweis and hand washing, or Lister and antisepsis, neither of whom received rapturous applause for their contributions. In the course of this book, it will be seen that the processes which ended in the discovery of smallpox vaccination would also affect the invention of radiosurgery and the perfection of instruments for its satisfactory performance. This is particularly illustrated in Chapter 11.
In the 1930s, the treatment of inaccessible cancers and neurosurgical diseases was frustrating and ineffective. However, this was a time when understanding of
atomic structure and spontaneous breakdown of unstable radionuclides was expanding rapidly. The frustration with the poor results of existing treatments was the spur
to develop new methods. The first to attempt the use of atomic particles in radiation
treatments were the Lawrence brothers in Berkeley, California, spurred on by no less
a person than Harvey Cushing, who contributed to John Lawrence’s training and
clearly had a great respect for him. The elder brother Ernest invented the cyclotron
to accelerate subatomic particles. The younger brother, John Lawrence, pioneered
the use of these particles in the treatment of disease using both radioactive nuclides
and later well-defined narrow particle beams. It should, however, be mentioned that the Berkeley group, while performing extraordinarily creative work, were applying a medical function to a machine designed for a different purpose.
In Sweden, a group of scientists developed and expanded the Berkeley technique
to the point where the clinical treatment of a variety of conditions became possible.
The Swedish group were in contact with the Berkeley group and expressed their indebtedness in a number of their papers. However, while the particle beam method was
elegant, it was also complex and impractical outside of a laboratory containing a
cyclotron which could generate the particles. This led to the design and production
of the only machine in the world which was specifically constructed to perform
radiosurgery, the gamma unit subsequently to be called the Gamma Knife.
The purpose of this book is to trace the history of the ideas and attempts at radiosurgery treatments from the first hesitant steps in California to the production
of the most modern radiosurgery machine, the Gamma Knife Perfexion. The part
played by chance is well illustrated in the above account of vaccination. Mary
Montagu was a girl of spirit who opposed her father, married the man of her choice,
sustained smallpox, wrote the wrong thing, and had to travel to Turkey where she
came into contact with variolation which she was in a social position to introduce
into London society. Jenner was a country lad at heart but during his time in London
suffered uncomfortable effects following variolation and was, as a country doctor, in a position to be aware of cowpox and the smallpox resistance of milkmaids. The
Lawrences were both talented, but by chance John came into contact with Harvey Cushing, who supported the activities of him and his brother; World War II inevitably did no harm to the funding of the laboratory where the work was carried out. In
Sweden, Leksell started a medical career by chance and was possessed of a mindset
which enabled him to design useful instruments, perhaps in part because as a child
he’d had the chance to work under supervision in the machine shop of the factory his
father owned. He also had access to a supremely talented physicist, Börje Larsson, 20 years his junior, without whom the gamma unit would not have been possible.
One consequence of Leksell’s social position was his net of personal relationships,
which included Bo Ax:son Johnson, one of the owners and directors of the wealthy Axel Johnson Group, which during the relevant period owned the Studsvik nuclear power plant, the Motala Verkstad engineering workshop, and Avesta Jernverk, a workshop that also specialized in metalwork. The Johnson Group thus owned
all the industrial facilities which would be required to manufacture a radiosurgery
machine. While there remains evidence of a detailed and comprehensive interest
on the part of the Swedish state to ensure the new machine’s specifications and
patient safety were acceptable, there was no financial assistance from the state.
The contribution from national coffers was limited to grants for the research work
in Uppsala during the 1950s and 1960s which would form the basis for proceeding
with a commercially produced machine. Thus, Leksell’s relationship with senior
levels of the Axel Johnson concern was a happy chance for the development of
the original gamma unit, leading to the entirely private financing of the machine’s
development and manufacture arising out of respect Bo Ax:son Johnson had for
Leksell’s work.
In conclusion, it should be remembered that the nature of scientific advance means that a day will come when the Gamma Knife Perfexion is not the best instrument for its purpose. However, that day has not yet come, and there is no sign that it will come soon.



Acknowledgments
The author would like to thank the following people without whose invaluable advice
and assistance this book could not have been written. First, Dr. Dan Leksell, the son
of the inventor of the Gamma Knife, has been free with information about the early
days of radiosurgery and has given access to relevant papers which would otherwise
have been inaccessible. He has also been an invaluable adviser on textual purity.
Next, there is Dr. Bert Sarby, a physicist who was intimately involved in the development of the early gamma unit and has given freely of his time and his literature to ensure the accuracy of the text. Hans Sundquist, the engineer who turned the ideas of designers into practical machines, has also listened to the author's questions and answered promptly and concisely whenever approached. I should also like to extend my gratitude to Dr. Rich Levy from Berkeley, who was generous with his time and information about cyclotron radiosurgery. Finally, there is my old friend Jürgen Arndt, another physicist, with whom I have roamed the world teaching the practice of radiosurgery from Mexico to Tokyo via Beijing. He has repeatedly advised on the
evolving text.
All of the above persons have not only advised on this project but also have read
through the text to ensure their information is correctly relayed. It would be remiss of
me if I did not also thank Professor Erik Olof Backlund, my chief in Bergen and my
mentor in the mysteries of radiosurgery. He has been a kind and consistently enthusiastic supporter over the years and has also been helpful in supplying valuable and
otherwise unavailable details from the early days.
Finally, I should like to thank my wife, Gao Nan Ping or Annie Gao, as she is
known to her many friends in the radiosurgery milieu. The wife of any man writing
a book has to put up with the absences, trips, and changing moods of the author as he
pursues his aims. Without Annie this book could not have been written.




CHAPTER 1

Background knowledge in the early days

Abstract
The purpose of this chapter is to outline the medical facilities that were available to the inventors of radiosurgery at the time when the technique was being developed. This is achieved by describing in brief the timeline of discoveries relevant to clinical neurology and the investigation of neurological diseases. It provides background for understanding the limitations inherent in the early days, when investigations, and imaging in particular, were fairly primitive.
It also helps to explain the choices that were made by the pioneers in those early days. The
limitations of operative procedures and institutions designed to treat neurological diseases
are also mentioned.

Keywords
clinical neurology, radiology, contrast studies, operating theaters, neurological hospitals

1 INTRODUCTION
Radiosurgery was first defined by Lars Leksell in the following terms: “Stereotactic
radiosurgery is a technique for the non-invasive destruction of intracranial tissues or
lesions that may be inaccessible to or unsuitable for open surgery” (Leksell, 1983).
As noted in the Preface, no human activity occurs in a vacuum, including the development of medical technology. Radiosurgery was developed out of the perceptions
and efforts of a small group of men who passionately believed that such a method
was urgently needed in the battle against a large number of contemporaneously
untreatable diseases. The possibility of developing radiosurgery was a spin-off of
the developing field of nuclear physics, which was such a characteristic development
of the first half of the twentieth century. What was required was not clear at the start, but would become so. There were five essential elements. The first chapters of this book concern the journey toward understanding and eventually implementing these elements; and it was a long journey:




1. Images that enable visualization of the lesion to be treated, an essential part of the method.
2. A three-dimensional reference system common for imaging, treatment planning,
and treatment.
3. A treatment planning system by means of which the irradiation of each case can
be optimized.
4. A means of producing well-defined narrow beams of radiation that selectively
and safely deliver the radiation dose under clinical conditions.
5. Adequate radiation protection.

2 CLINICAL NEUROLOGY
This book concerns neurosurgery and neuroradiosurgery of the central nervous system (CNS). At the time when the processes that would lead to neuroradiosurgery were beginning, around 1930, neurosurgery's contribution to patient welfare, while more rational and scientifically based than at any time in its previous history, had relatively little to offer. Certainly, cell theory had permitted the
analysis of the cellular components of the CNS and their architecture and interrelationships. Based on this new knowledge, clinical neurology had made great strides
with the development of the examination of the CNS based on the understanding of
how its different components were interconnected (Compston, 2009). John Madison
Taylor had introduced the reflex hammer in 1888 (Lanska, 1989). A growing understanding of how to examine the CNS was propounded by Joseph Babinski (1857–1932) in 1896 (Koehler, 2007). Ernst Weber (1795–1878) and Heinrich Adolf Rinne (1819–1868) had introduced means of distinguishing between conductive and neurogenic hearing loss, although the precise date of their tests has proved impossible to determine. These tests require tuning forks, originally invented by John Shore (ca. 1662–1752), who reached the then advanced age of 90 years. He was distinguished enough that parts were written for him by both Händel and Purcell (Shaw, 2004). The tuning fork was first applied to neurological testing in 1903
(Freeman and Okun, 2002). The ophthalmoscope was invented by Helmholtz in
1851 (Pearce, 2009). It was developed and its source of illumination was improved
over succeeding decades. During my time at the National Hospital for Nervous Diseases, Queen Square, London, I was told that such was the value given to ophthalmoscopy that there was a time when junior doctors at Queen Square were required to
examine the fundus of patients suspected of raised intracranial pressure (ICP) every
15 min. In 1841, Friedrich Hofmann invented the otoscope (Feldmann, 1995, 1997).
In the 1930s, the examination of the CNS was becoming fairly precise and this
precision would improve over the decades to come until the arrival of computerized
imaging in the 1970s and 1980s. Until then, clinical examination was the most accurate method for localizing pathological processes. However, not all clinical symptoms arise from identifiable foci of disease. Thus, subacute combined degeneration
of the cord gives a complex picture with some tracts affected more than others.



Again, in multiple sclerosis, with intermittent lesions varying in time and space, simple localization from clinical information would be difficult. However, this matters little for the performance of a surgical technique such as radiosurgery, because surgical conditions are single and focal in the vast majority of cases.
The advances described in the previous paragraphs greatly increased the accuracy
with which a skillful clinician could localize the position of a pathological process
within the CNS. Even so, the first systematic monograph on clinical neurological
localization was published as late as 1921 by a Norwegian, Georg Herman
Monrad-Krohn (1884–1964), writing in English (Monrad-Krohn, 1954). In 1945,
the more or less definitive text by Sir Gordon Holmes (1876–1975) was published
(McDonald, 2007).


3 INVESTIGATIONS
3.1 ELECTRICAL
As far as functional investigations were concerned, the electroencephalogram (EEG) became commercially available in 1935, and electromyography (EMG) arrived in 1950.

3.2 IMAGING
In terms of further radiological investigations, the first visualization of the CNS came
with the use of contrast-enhanced X-ray studies introduced by Cushing’s student
Walter Dandy (1886–1946), specifically pneumoencephalography (1918) (Dandy,
1918) and pneumocisternography (1919) (Dandy, 1919). While these examinations were undoubtedly an improvement, to modern eyes they still look primitive. Then, in 1927, came carotid angiography, which, while a further improvement, was still limited and not without risk. Vertebral angiography became routine in the early 1950s. A brief description of the way these methods work follows. Since the first radiosurgery reports were published in the early 1950s, it is necessary to see how the imaging required for radiosurgery could be achieved at that time. Bearing in mind that the technique was used solely for intracranial targets, there were basically three imaging techniques.

3.2.1 Plain Skull X-Rays
Plain skull X-rays existed but were of little value in showing targets suitable for radiosurgery. The right side of Fig. 4 shows an X-ray of the skull, taken from the side, and indicates that the only reliably locatable intracranial soft tissue structure is the pituitary gland.
Following 1918, it became clear that parts of the brain could be demonstrated
using what are called contrast media. These are fluid substances (liquid or gas) that
affect the passage of X-rays through the skull. Either they let the rays pass more easily, in which case they darken the part of the image where they lie, or they hinder their passage, in which case the portion of the image containing the medium appears lighter. The most frequently used medium in this context was air, and how it worked requires some explanation.

3.2.2 Brain and CSF Anatomy
It is necessary to digress a little and explain some facts about intracranial anatomy.
The brain sits tightly enclosed within the skull but it is floating in a bath of fluid
called cerebrospinal fluid (CSF). This is created at roughly 0.32 ml/min. Figure 1
is a diagram of the anatomy of the brain and the fluid-filled spaces (called ventricles)
that it contains. Figure 2 illustrates how the CSF is made in the ventricles and flows
through the brain. It leaves the ventricles and flows over the brain between two membranes, the pia mater and the arachnoid. Pia mater means "soft mother," so called because the membrane embraces the brain as a mother embraces her child. The arachnoid is so named because some imaginative anatomists, looking through the microscope, considered that the membrane and the space beneath it looked like a spider's web. In Greek mythology, a skillful but arrogant young lady called Arachne challenged Athena, the goddess of, among other things, weaving, to a weaving contest. The girl inevitably lost and was turned into the world's first spider. Thus, spiders are called arachnids, and this explains the use of the term arachnoid in the current context.
It should be remembered that at any one time, there is about 150 ml of CSF in
the system and two-thirds of it is outside the brain in the subarachnoid space.
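The two figures quoted above imply a brisk turnover of the CSF. As a rough, back-of-the-envelope sketch (using only the production rate and volume given in the text, not a calculation from the source itself):

```python
# Rough CSF turnover from the figures quoted in the text:
# production ~0.32 ml/min, ~150 ml present in the system at any one time.
PRODUCTION_ML_PER_MIN = 0.32
TOTAL_VOLUME_ML = 150.0

# Millilitres produced per day, and how often that replaces the whole pool.
daily_production_ml = PRODUCTION_ML_PER_MIN * 60 * 24
turnovers_per_day = daily_production_ml / TOTAL_VOLUME_ML

print(f"~{daily_production_ml:.0f} ml produced per day; "
      f"the CSF pool is replaced about {turnovers_per_day:.1f} times daily")
```

On these figures, roughly 460 ml of CSF is produced each day, so the entire pool is replaced about three times daily, which gives a sense of how dynamic the system is.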

3.2.3 Contrast Studies: CSF Replacement Studies
Let us return to imaging. Plain X-rays were of little help, but in 1918, Cushing's
pupil Walter Dandy had discovered that the introduction of air to replace the CSF
could provide demonstration of the ventricles of the brain and any distortions or displacements of that system. The air could be introduced either into the spinal canal


FIGURE 1
This diagram illustrates the shape of the ventricles within the brain. There are two lateral
ventricles to the side of the midline in each cerebral hemisphere, and the third and fourth
ventricles in the midline are connected by the aqueduct.



FIGURE 2
This picture illustrates the direction of circulation of the CSF, from production in the ventricles
to absorption in the big venous drainage channel, the sagittal sinus. The straight black
arrows connect a label to the point labeled. The curved black arrows indicate the flow of CSF
and the white curved arrows indicate the flow of blood.

using a spinal tap (Compston, 2009) or via a burr hole enabling direct access to the
cerebral ventricles. The air replaces the CSF, and since it absorbs X-rays less than
the watery CSF, the ventricles could be outlined. The appearance of a pneumoencephalogram (as this examination was called) is shown in Fig. 3. It must be obvious
from these images that the findings would not be easy to see and would require great
experience and expertise to interpret reliably.
Attempts were made to use positive contrast media. These are fluids that absorb
X-rays more than CSF and thus show as a positive or white shadow. This took time to achieve, as many of the early fluids were too toxic, but eventually a water-soluble medium called metrizamide was discovered. Even so, the sort of anatomical information that
could be derived inside the brain from these different methods was too imprecise for
radiosurgical work. However, there was another examination that could be used. This
was the cisternogram (with either air or contrast medium). This placed air or contrast
in the subarachnoid space, over the surface of the brain. The beauty of this was that it
could demonstrate the presence of tumors in the pituitary region and in the internal
auditory meatus. In these regions, any tumor was closely related to the fixed skull
base so that its position could be reliably determined. The two tumor types concerned

were the pituitary adenoma, which naturally enough was in the pituitary region (see
Figs. 4 and 5), and the vestibular schwannoma (previously called the acoustic neuroma), which arises in the bony canal (internal auditory meatus) containing the


FIGURE 3
The ventricles can be seen from the front and side. However, in view of the limited contrast of
the air and brain, detailed visualization was difficult. A precise technique like radiosurgery would gain little advantage from this method.

FIGURE 4
This figure shows that the pituitary fossa is at the base of the skull in the midline and is easy
to visualize even on a plain X-ray. Insertion of metrizamide into the subarachnoid space
enables outlining the contours of a tumor in this region.

hearing nerve on its way from the brain to the hearing receptors in the inner ear, as
shown in Fig. 6. The superficial fixed location of these tumors made it possible to
partially visualize them using cisternograms. This is why they became two of the
earliest targets for radiosurgical treatment. Unfortunately, while there are images
of both pneumocisternograms and metrizamide cisternograms still available in publications from that time, they are not really helpful. Their appearance is so unfamiliar to modern eyes, accustomed to computed tomography (CT) and magnetic resonance imaging (MRI), that they would not help to clarify their use; they are thus not included in this text.




FIGURE 5
This illustration shows the relationship of the pituitary gland to the undersurface of the
brain and the third ventricle above.

FIGURE 6
The skull base anatomical specimen on the left shows the location of the internal auditory
meatus, which contains the two balance nerves, the facial nerve, and the hearing nerve. Vestibular
schwannomas grow out of a balance nerve and compress the hearing nerve within the bony
confines of the canal. The CT picture on the right shows these canals in a living patient,
although the luxury of this visualization was not available at the time of the early development
of radiosurgery. However, the anatomy shown here illustrates how it would be possible to
reliably demonstrate the position of a tumor extending from the bony canal using contrast in a
cisternogram.


3.2.4 Contrast Studies: Contrast in Blood Vessels
Another available imaging technique was the angiogram, whereby the arteries and
veins to the brain are shown. This had been introduced in 1927 by the Portuguese
neurosurgeon and Nobel Prize winner Egas Moniz (1874–1955) (Moniz, 1927). This
method is called angiography and consists of injecting a radio-opaque fluid into
the arteries, thereby visualizing them on X-ray film. The method may be used to
show two things: abnormal blood vessels and distortion or displacement of blood

vessels. Abnormal blood vessels in a tumor are illustrated in Fig. 7. Vessel distortion
is illustrated in Fig. 8.

FIGURE 7
The black contrast-filled arteries are shown. The large gray/black object is produced by the
blood vessels in a tumor being filled with the contrast. The tumor is a meningioma.

FIGURE 8
The angiogram on the left is normal. Two black arrowheads indicate the anterior cerebral
(vertical on the left near midline) and middle cerebral (horizontal toward the right) arteries in
their normal position. The angiogram in the middle has two white arrowheads pointing at the
same arteries. The anterior cerebral artery is displaced toward the right and the middle
cerebral artery is displaced upward. The CT on the right shows the tumor responsible for these
changes as we might view it today.



It may be seen that these images are clearer and in many ways easier to interpret
than the air studies. As far as tumors are concerned, the angiogram had no part to play in radiosurgical management. However, there is a dangerous illness in which a
defect in the development of blood vessels within the head produces a blood vessel
abnormality called an arteriovenous malformation (AVM). These may be treated not
only by microsurgery but also by radiosurgery. An example of a case treated with
microsurgery nearly 40 years ago is shown in Fig. 9. The term microsurgery is common usage in the professional literature and has a specific meaning. Prior to the
1960s, neurosurgery was carried out using a head light to focus illumination on
the operating field and loupes (spectacles with minor magnification of 2–3 times).
Zeiss had invented the operating microscope (OPMI 1) back in 1953, but it was
not taken up by neurosurgeons until the 1960s. An operating microscope looks nothing like a laboratory microscope; it magnifies only up to about 20 times. Its advantages, however, are twofold. The surgeon operates looking through binocular eyepieces not unlike those used in binoculars, and the instrument is so made that fiber-optic light is directed along its axis, so that the operating field is both magnified and beautifully illuminated. The technique using this instrument is called microsurgery. Because it demonstrates the anatomy within the head far more clearly, it had a dramatic effect in increasing the success of surgery and reducing complications.

FIGURE 9
This figure shows the abnormal blood vessels of the AVM indicated by the black arrowhead.
The image on the right taken 2 years after the left image shows the absence of the AVM, its
location indicated by the right arrowhead. There are numerous small straight lines in the
image. These are blood vessel clips used to close abnormal arteries. They were standard
technique at the time this surgery was performed. The reason that the second angiogram
was taken so long after the first was patient anxiety. He was so scared of his AVM that even
after seeing the second set of images, it was another year before he summoned up the
courage to go back to work. This entirely rational anxiety needs to be remembered by those
who treat AVMs.


4 OPERATING THEATER LIMITATIONS
Over and above the limitations of diagnostic and investigative neurology in the
1930s, there were practical difficulties in the operating room. Neuroanesthesia
was primitive, with preference given to operating under either local anesthesia or
ether anesthesia. The former permitted contact with the patient, which was for that
time the best indicator of the state of the ICP. Ether was known to be quite safe and is
known even today to have relatively little effect on the ICP. Intravenous drips involved
glass bottles and red rubber tubes, the use of which was cursed with febrile
complications. The rhesus blood groups had not yet been discovered, so proper blood
transfusion was not available. Indeed, there is a story concerning blood transfusion-related
dangers during neurosurgery, told to this author in 1986 by Tormod Hauge,
the then-emeritus professor of neurosurgery from Rikshospitalet in Oslo. The surgeon
concerned was none other than Monrad-Krohn, mentioned above, who had
written a textbook of neurology in 1921. He had no particular surgical qualification
or experience. For undetermined reasons, he decided to perform surgery on the head
of a patient at a time when blood loss was replaced from a suitable third party by
direct transfusion from body to body. According to the story, the operation took place
in one room and the prospective donor lay in the room next door, connected by tubes.
A time came when blood loss needed to be replaced. However, the ceiling of the room in
which the donor was lying was adorned with a chandelier, which at the critical
moment fell, inducing an instant incurable cardiac arrest in the donor.
(It was also claimed that an attending nurse suffered a leg fracture.) The bloodless patient
was thus also unable to survive, so that the procedure claimed a unique operative
mortality of 200%.

5 INTRODUCTION OF SPECIALIZED CLINICAL NEUROSCIENCES
DEPARTMENTS
The earliest neurological departments were opened in the nineteenth century. The first was
the National Hospital for Diseases of the Nervous System including Paralysis and
Epilepsy, later the National Hospital for Nervous Diseases, at 24 Queen Square,
London, which opened in the spring of 1860 (Colville). Neurosurgical departments came
later, and indeed hospital neurosurgical practice comes in many forms even today,
ranging from an outpatient clinic, to a section of (usually) a surgical or neurosciences
department, to a fully independent neurosurgical department. Thus, Harvey Cushing
was Moseley Professor of Surgery, not neurosurgery, from 1912 to 1932. In Europe,
Herbert Olivecrona was appointed professor of neurosurgery in a separate department
located in the Serafimerlasarettet hospital in 1935 (Ljunggren). Both these giants
trained many of their juniors and also published their clinical and operative experience
extensively, thus creating written information and advice that could assist their
successors to advance.




6 CONCLUSION
The first steps toward clinical radiosurgery were not taken before the early 1950s.
The need was there, given the high neurosurgical mortality, but until after the
Second World War nobody had any idea how to approach the issue. The next
chapters outline how the appropriate technology came into being.

REFERENCES
Colville, D. UCL Bloomsbury Project. Retrieved from />
Compston, A., 2009. A short history of clinical neurology. In: Donaghy, M. (Ed.),
Brain's Diseases of the Nervous System, 12th ed. Oxford University Press, Oxford,
pp. 3–25.
Dandy, W.E., 1918. Ventriculography following injection of air into the cerebral ventricles.
Ann. Surg. 68, 5–13.
Dandy, W.E., 1919. Röntgenography of the brain after the injection of air into the spinal canal.
Ann. Surg. 70, 397–403.
Feldmann, H., 1995. From otoscope to ophthalmoscope and back. The interwoven history of
their invention and introduction into medical practice. Pictures from the history of
otorhinolaryngology, illustrated by instruments from the collection of the Ingolstadt German
Medical History Museum. Laryngorhinootologie 74 (11), 707–717.
Feldmann, H., 1997. History of the tuning fork. I: Invention of the tuning fork, its course in
music and natural sciences. Laryngorhinootologie 76 (2), 116–122.
Freeman, C., Okun, M.S., 2002. Origins of the sensory examination in neurology. Semin.
Neurol. 22 (4), 399–408.
Koehler, P., 2007. Joseph Félix François Babinski. In: Bynum, W.E., Bynum, H. (Eds.),
Dictionary of Medical Biography, vol. 1. Greenwood Press, London, pp. 142–143.
Lanska, D.J., 1989. The history of reflex hammers. Neurology 39 (11), 1542–1549.
Leksell, L., 1983. Stereotactic radiosurgery. J. Neurol. Neurosurg. Psychiatry 46, 797–803.
Ljunggren, B. Herbert Olivecrona. Retrieved from />tion.aspx?id=7720.
McDonald, I., 2007. Gordon Holmes lecture: Gordon Holmes and the neurological heritage.
Brain 130 (Pt. 1), 288–298.
Moniz, E., 1927. Encéphalographie artérielle, son importance dans la localisation des tumeurs
cérébrales. Rev. Neurol. 2, 47–61.
Monrad-Krohn, G.H., 1954. The Clinical Examination of the Nervous System. H.K. Lewis,
London.
Pearce, J.M., 2009. The ophthalmoscope: Helmholtz's Augenspiegel. Eur. Neurol. 61 (4),
244–249.
Shaw, W., 2004. Shore, John (c.1662–1752), rev. Oxford Dictionary of National Biography.
Oxford University Press, Oxford. Retrieved from />article/37955.



CHAPTER 2

Some physics from 550 BC to AD 1948

Abstract
This chapter outlines terminology and its origins. It traces the development of ideas in
physics from Thales of Miletus, via Isaac Newton, to the nuclear physics investigations at
the beginning of the twentieth century. It also outlines the evolving technology required to
make the discoveries that would form the basis of radiosurgery. Up to the 1920s, all
experiments on atomic structure and radioactivity had involved the use of vacuum tubes and
naturally occurring radioactive substances. There was a need to produce usable subatomic
particles to gain a better understanding of the internal structure of atoms. Because of this,
machines that could make particles move at high speed were invented, known as particle
accelerators. A new era had dawned. There is a brief mention of the effect of radiation on
living tissue and of the units used to measure it.


Keywords
physics history, vacuum tube experiments, accelerators, units

1 INTRODUCTION
It is a truism that radiosurgery would not be possible without an understanding of radiation.
This chapter concerns the expanding knowledge of atomic structure and the radiation
discovered during research into this topic. This radiation is called electromagnetic.
So where and how did this term originate? The importance of this for the current
purpose lies in the way in which subatomic structure came to be understood before
machines existed that were designed to split the atom into its various components.

2 BEFORE ACCELERATORS
2.1 ANCIENT WORLD
While modern nuclear physics uses mainly particle accelerators of different kinds in
its research, there was a period prior to the invention of these machines when other
methods had to be used. In a sense, knowledge about the relevant phenomena extends
Progress in Brain Research, Volume 215, ISSN 0079-6123. © 2014 Elsevier B.V. All rights reserved.


back to the time of the ancient Greeks, indeed to the first of the pre-Socratic
philosophers, Thales of Miletus (ca. 624–546 BC). This sage is said to have predicted an
eclipse. He measured the height of the pyramids using a method applied in modern
times to measure the height of the mountains of the moon (Sagan, 1980). He also
observed the attraction that lodestones, or loadstones, exert on iron. This stone contains
Fe3O4 (magnetite), which is magnetic in its natural state. The name magnet
comes from Magnesia in Thessaly—on the east side of mainland Greece—the location
of deposits of magnetite (Da Costa Andrade, 1958). Magnesium, manganese,
and milk of magnesia, that appalling peppermint-flavored concoction beloved by
mothers whose children have indigestion, share the same root. The magnesia in this
punishment for ill health is MgO, magnesium oxide, which was also considered to be a
necessary component of the philosopher's stone (Fig. 1).
Thales also noted that if amber is rubbed with fur, it acquires the property of
attracting small pieces of paper and other light articles (Semat and Katz, 1958).
The ancient Greek word for amber was elektron, hence the name electricity. Despite
his genius, Thales would seem to have been an archetypal absent-minded professor.
Writing over 150 years later, Plato put the following words into Socrates'
mouth: “Why, take the case of Thales, Theodorus. While he was studying the stars
and looking upwards, he fell into a pit, and a neat, witty Thracian servant girl jeered
at him, they say, because he was so eager to know the things in the sky that he could
not see what was there before him at his very feet. The same jest applies to all who

FIGURE 1
A small map to illustrate the location of Magnesia.



pass their lives in philosophy” (Fowler, 1921). Thus, curiosity about and knowledge
concerning electricity and magnetism have been of interest for millennia. After this
early work, little happened until the time of Isaac Newton (1643–1727).

2.2 NEWTON TO THE NINETEENTH CENTURY
Following the time of Newton, there were several threads of research that would over
time combine to give an understanding of the nature of electricity. Some of the research was directly related to electricity while some of it related to the nature of light.
(Since light is today considered just one range of electromagnetic radiation, this is
not a problem for us, but in the years succeeding the insights of Newton, this perception was impossible.) So the acquisition of understanding will be unavoidably
fragmented.
Firstly, let us consider research aimed at better understanding electricity itself.

A device called a Leyden jar, invented in 1746, could store a very considerable
charge of static electricity. Such a device is called a capacitor. However, while
it can release its electric charge, that discharge happens virtually instantaneously.
A collection of these capacitors could provide a greater charge, and Benjamin
Franklin (1706–1790) used such an arrangement, calling the collection a battery,
taking the metaphor from a collection of military artillery. Other scientists, particularly
Volta in 1800, invented a source of continuous electricity using a chemical cell.
Thus, an electric current became available, rather than a discharge. All of these
findings broadened knowledge of some characteristics of electricity but not of
its intrinsic nature.
Earlier eighteenth-century work with static electricity had shown that electrified
objects could sometimes attract and sometimes repel each other. Various theories were
proposed, but it was Benjamin Franklin who suggested in 1747 that there was one kind of
electricity that could be added to or removed from objects, making them charged.
If there was too much electricity, the object had a positive charge, and if there
was too little, a negative charge. Positively charged objects would repel each other, as
would negatively charged objects, but positive would attract negative. It remained to
decide which kind was which. Franklin considered that rubbed glass had an excess of
electricity and was positive. He was wrong: electricity in fact flows from negative to
positive according to his classification, yet the convention has been maintained
to this day. Thus, more characteristics had been learned. Electricity could
be static or could flow. It was positive or negative. But still the essence of the
phenomenon remained obscure.
Contemporary relevant research concerned the nature of light. The physicists of
the time were faced with a problem. They knew that sound waves vibrated the air and
that waves in water needed the water for their transmission. The nature of light,
however, was explained by two competing theories: particles, according to Newton
(Newton, 1730), and waves, according to Huygens. Today, it is known that light has
some properties of particles and some of waves, but that duality could not be known
in the seventeenth or eighteenth century.


The above description is a précis of the development of relevant research about
electricity and related topics from Newton to the beginning of the nineteenth century.
During that century, further crucial advances were made. Michael Faraday
(1791–1867) discovered that changing magnetic fields could induce electric currents
and vice versa. Thus, electricity and magnetism were seen to be two aspects of the
same phenomenon. It was from this discovery that Faraday was able to develop an
electric motor and dynamo and permit the construction of machines that could easily
generate electric current in a circuit over long periods. All this research reached a
climax with the work of James Clerk Maxwell (1831–1879), who derived a set of
equations that described all known behavior of electricity and magnetism. The
radiation thus became known as electromagnetic. It may be noted that light,
electricity, and magnetism share the characteristic that they can pass through a
vacuum. From a more modern point of view, it is understood that light is a form of
electromagnetic radiation that is different only in that its frequency and energy are
perceptible to the visual apparatus of living organisms. The ability to cross a vacuum
is a property of electromagnetic radiation in general and is not limited to any
particular frequency of the radiation. It may seem a rather abstruse subject to present
here, but it will be seen that vacuum tubes came to be of central significance in the
early examination of subatomic structure (Fig. 2).
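The set of equations Maxwell derived can be stated compactly. In modern vector notation and SI units (not Maxwell's original formulation, which ran to many component equations), the four equations for fields with charge density ρ and current density **J** read:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}
  && \text{(electric charges are sources of the electric field)} \\
\nabla \cdot \mathbf{B} &= 0
  && \text{(there are no magnetic charges)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(a changing magnetic field induces an electric field)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
  && \text{(currents and changing electric fields induce a magnetic field)}
\end{aligned}
```

In empty space these equations combine into a wave equation whose waves travel at speed $c = 1/\sqrt{\mu_0 \varepsilon_0}$, which matched the measured speed of light; this is why light came to be identified as one form of electromagnetic radiation.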

2.3 THE DEVELOPMENT AND APPLICATION OF VACUUM TUBES
WITH ELECTRODES AT EACH END
1. The process started in 1855, when a glassblower, Johann Heinrich Wilhelm
Geissler (1814–1879), contrived a method for producing a much better vacuum
than had previously been possible.
2. At the request of the physicist Julius Plücker, he made vacuum tubes with pieces
of metal sealed into opposite ends, which could be connected to an electric
current. The end considered to be positively charged was called the anode and the
end considered to be negatively charged the cathode, from the ancient Greek
words meaning way down (cathode) and way up (anode). It was Faraday who
coined the terms.


FIGURE 2
The arrangement of the vacuum tube with electric current that produces a green glow from
the cathode.



3. Passage of electricity through a partially evacuated tube containing some
gas produced light, the color of which depended on the gas concerned. This is the
underlying mechanism of the familiar neon lights. However, when there was a
virtually total vacuum, Plücker noted a greenish glow at the cathode. A later
physicist, Eugen Goldstein (1850–1930), showed that the glow was dependent
neither on the gas evacuated to produce the vacuum nor on the substance of
which the electrodes were made. Thus, he concluded the glow was associated
with the current itself, and he called this emission cathode rays. The tubes
producing them came to be known as cathode ray tubes.
4. William Crookes (1832–1919) developed an even more thoroughly evacuated
tube and demonstrated cathode rays more clearly. They traveled in straight lines
and could even turn a little wheel. An object placed in their path cast a shadow
in the glow they produced.
5. The argument reemerged concerning whether cathode rays were waves or
particles. If they were particles, they would carry a charge and would be bent
in an electric field. If they were waves, they would carry no charge and would not
deviate. From the early 1880s, various experiments all suggested that they were
waves, as they did not appear to deviate in electric fields. However, all these
experiments suffered from technical difficulties, which were finally overcome by
Joseph John Thomson (1856–1940), who demonstrated that the rays were
particles with a negative charge.
6. The degree of deflection of a particle in an electric field depends on the
mass of the particle, the velocity of its movement, and the charge it carries.
A similar deflection occurs in magnetic fields, but in a different way. By
comparing the two kinds of deflection, Thomson could calculate the ratio
between the charge and the mass of the particle, and thereby estimate the mass of
a single cathode ray particle, work for which he received the Nobel Prize in 1906.
The particle came to be called the electron. (Thirty-one years later, his son shared
the Nobel Prize in Physics for discoveries related to the diffraction of electrons.)
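Thomson's comparison of the two deflections can be illustrated with a small numerical sketch. The field strengths and radius below are invented for illustration and are not Thomson's actual measurements; the method is the standard crossed-field analysis: balancing the electric and magnetic forces gives the beam velocity, and the curvature in the magnetic field alone then gives the charge-to-mass ratio.

```python
# Illustrative sketch of the crossed-field method (all input values assumed).
# Step 1: adjust E and B so the electric and magnetic deflections cancel;
#         the balance condition q*E = q*v*B gives v = E / B.
# Step 2: switch off E; the magnetic field alone bends the beam into a circle
#         of radius r, where q*v*B = m*v**2 / r, so q/m = v / (r*B) = E / (r*B**2).

E = 1.0e4      # electric field strength, V/m (assumed)
B = 1.0e-3     # magnetic flux density, T (assumed)
r = 5.69e-2    # measured radius of curvature of the beam, m (assumed)

v = E / B                  # beam velocity from the balance condition
e_over_m = E / (r * B**2)  # charge-to-mass ratio, C/kg

print(f"v = {v:.2e} m/s, e/m = {e_over_m:.2e} C/kg")
# The modern value for the electron is e/m of about 1.76e11 C/kg.
```

Note that this method yields only the ratio of charge to mass; turning the ratio into a mass additionally requires a measurement of the electron's charge.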

2.4 SUBATOMIC STRUCTURE
Thomson discovered that the mass of an electron was considerably smaller than that
of the smallest atom. This blew a huge crater in the accepted wisdom that the atom
was the smallest possible component of matter. It became necessary to accept that
atoms were made up of smaller structures. At this stage, nobody had any idea about
the internal architecture of an atom. Indeed, Thomson's own idea was that an atom
was a featureless positively charged sphere in which electrons were embedded like
seeds in a cake. Thus, for the time being, the nature of subatomic particles remained
unclear. Nonetheless, since the electron has a negative charge and the atom was
known to be electrically neutral, the search was on to discover which part of the atom
carried the balancing positive charge. In this part of the march of ideas, chance played
an important part.


2.5 EXPERIMENTS USING SPONTANEOUSLY RADIOACTIVE
MATERIALS (ASIMOV, 1991)
A naturally fascinating phenomenon is the emission of light by certain substances
when exposed to light. There are two such phenomena. Fluorescence occurs when
a substance exposed to light itself gives off light but ceases to do so the moment
the stimulating external light is extinguished. Phosphorescence is similar but
continues for a period after the external light source is extinguished. One physicist
researching this interesting topic was Wilhelm Conrad Röntgen (1845–1923). He
was investigating how not just light but cathode rays impinging on various chemicals
could produce luminescence. He was using paper coated with barium platinocyanide.
He noticed that such paper, even when not in the pathway of the cathode rays,
fluoresced when the cathode ray tube was turned on. He took this coated paper into
the next room and turned on the cathode ray tube, and again the paper fluoresced. He
reasoned that the tube was emitting a radiation that was not just cathode rays and,
in view of its mysterious nature, called the radiation X-rays. He received the very first
Nobel Prize in Physics in 1901.
Antoine Henri Becquerel (1852–1908) was somewhat later also investigating
fluorescence. He used a known fluorescent substance, potassium uranyl sulfate, which
contains one uranium atom per molecule. He was wondering whether fluorescent
substances gave off spontaneous radiation, and to test this he used an elegant but
simple experimental model. He wrapped some photographic plates in black paper,
which sunlight could not penetrate. He placed this package in sunlight and placed a
crystal of the fluorescent substance upon it. Sure enough, there was fogging of the
plates, suggesting radiation emanating from the crystal. To confirm the finding, he
planned to repeat the experiments. However, as happens in northern Europe quite a
bit, there was a run of cloudy days. During this period, a package of film plates with a
crystal on top was kept in a drawer. It would seem that Becquerel could be impatient,
so after a few days he developed the film without having exposed it to sunlight and
discovered that the plates were strongly fogged in the absence of sunlight-induced
fluorescence. He reasoned that the crystals must be giving off radiation independent
of an external light source, and he set about examining this phenomenon. He found
that the culprit was the uranium in the potassium uranyl sulfate. Later, Marie Curie
demonstrated that thorium, polonium, and radium had similar properties. From then
on, the next steps of the research would continue using radioactive substances rather
than a vacuum tube. After all, radioactive breakdown provided a spontaneous
splitting of the atom into subatomic components.
In 1899, the New Zealand physicist Ernest Rutherford (1871–1937) analyzed the
radiation, observing and quantifying its deflection and penetration, and demonstrated
two of its components, as shown in Fig. 3. They were called alpha and beta particles.
The alpha particles had poorer penetration than the beta. The beta particles were soon
shown to be electrons. In 1900, Paul Ulrich Villard demonstrated the most penetrating
of the rays, which were called gamma rays. This left a query as to the nature
of the alpha particles. They were positively charged and more massive than


FIGURE 3
The pattern of spontaneous radioactive decay as perceived by Rutherford. The proportion of
the different components varies with different elements.

electrons. The research details do not matter here. Various models of the atom had
been proposed previously, but Rutherford provided convincing evidence that the
majority of the mass of the atom lay at its center and that the majority of the volume of
the atom was made up by the circulating electrons. Because he provided quantitative
evidence to underpin his concepts, he received the Nobel Prize in Chemistry in
1908 (Fig. 3). The central portion was called the nucleus, which means little nut
in Latin. Interestingly, it could be argued that he did some of his best work after
receiving a Nobel Prize, which is unusual. He devised a method of separating and
accumulating alpha particles (using a specially adapted vacuum tube), showing them
to be helium nuclei. He continued to search for the smallest positively charged particle
to match the electron. He found nothing that small but instead found the smallest
positively charged particle to have 1836.11 times the mass of an electron. This he
called the proton, from the Greek word meaning first.
The proton and electron had been demonstrated. However, it became clear that all
atoms except hydrogen had a mismatch between charge and weight. For example,
helium has two electrons and two protons. The electrons, each with a mass of about
1/1837 that of a hydrogen atom, contribute almost nothing to an atom's mass. Yet a
helium atom has the mass of about four hydrogen atoms. There must be something
else to account for that mass.
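The mismatch can be made concrete with a back-of-envelope calculation, here with masses rounded and expressed in units of the hydrogen-atom mass:

```python
# Mass bookkeeping for helium, in (approximate) hydrogen-atom masses.
m_proton = 1.0           # roughly one hydrogen mass
m_electron = 1.0 / 1837  # negligible on this scale

helium_mass_observed = 4.0  # helium weighs about four hydrogen atoms
helium_mass_from_parts = 2 * m_proton + 2 * m_electron

missing = helium_mass_observed - helium_mass_from_parts
print(f"unaccounted mass: about {missing:.2f} hydrogen masses")
```

About two hydrogen masses are unaccounted for: these correspond to the two chargeless particles, the neutrons, whose identification is described next.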
The German physicist Walther Wilhelm Georg Bothe (1891–1957) bombarded
beryllium with alpha particles from polonium in the hope of splitting this very light
atom, thereby releasing protons, which had not been done up to that time. Beryllium
was attractive for the purpose, being a very light element with a very small nucleus.
Instead, the bombardment produced very penetrating rays, which he assumed were
gamma rays, since alpha particles do not penetrate well. However, Frédéric
Joliot-Curie (1900–1958) and Irène Joliot-Curie (1897–1956) repeated the Bothe

FIGURE 4
Sir James Chadwick who provided convincing evidence for the existence and nature of
neutrons.

experiment, allowing the rays to strike paraffin. This caused protons to be ejected
from the paraffin, something that did not happen with gamma rays. The stage
was set for a new interpretation. This was undertaken by James Chadwick
(1891–1974). He reasoned that because the radiation ejected the massive proton, it
must be massive itself. Yet it had no charge, so it was not a proton. He suggested that
this was the missing particle that accounted for the difference between atomic number
and atomic weight, and it was called the neutron. He received the Nobel Prize in 1935
for this work (Fig. 4).


3 THE NEED FOR NEW INSTRUMENTS
Investigations into subatomic structure as outlined above had used vacuum tubes and
natural spontaneous radioactive processes, which imposed a limit on what was
achievable. To advance knowledge, more powerful instruments were needed,
specifically designed to examine the internal structure of the atoms of matter. The
question was how to make such a machine. The first successful attempt was carried
out in Cambridge in 1929 by John Douglas Cockcroft (1897–1967) and Ernest
Thomas Sinton Walton (1903–1995).

