Molecular Repair of the Brain: A Scientific Critique
From Cryonics February 1991 and May 1991
by Gregory M. Fahy, Ph.D.
with a Response by Dr. Merkle
The October, 1989 (vol. 10(10)) issue of Cryonics magazine carried an impressive and seminal article by Dr. Ralph Merkle entitled “Molecular Repair of the Brain” (pp. 21-44) [later revised and published in the January and April 1994 issues of Cryonics]. One index of the influence of this article is its citation by Arthur C. Clarke in his November, 1990 book, The Ghost from the Grand Banks (Bantam; pp. 221-222, 259-260), which mentions both Merkle and Alcor (complete with an address) by name. The importance of this paper lies in its attempt to demonstrate the likely feasibility of cryonics through a series of logical and mathematical arguments. Such an attempt, if successful, should send doubting cryobiologists packing and make the world safe for cryonics forever. Dr. Merkle’s article, therefore, should be evaluated carefully and honestly by cryobiologists. Since I am a cryobiologist, and one who likes to consider new ideas, I have undertaken the task of providing such an evaluation; the present article contains the results. Unfortunately for the readers of this periodical, I must report my conclusion that Dr. Merkle’s attempt to provide persuasive arguments for cryonics fails in a number of basic ways.
The Problem of Chemistry
Merkle notes, quite correctly, that “The thawing process. . . causes damage and, once thawed, continued deterioration will proceed unchecked by the mechanisms present in healthy tissue. This cannot be tolerated during a repair time of several years” (p. 32). For this reason, he notes that “it seems likely that repair will take place when the tissue is still frozen” (p. 30). Although he says that the temperature of repair is left open, he clearly favors repair at temperatures below the glass transition temperature, e.g., at liquid nitrogen temperature. For example, there are references to “an assembler operating at (perhaps) liquid nitrogen temperatures” (p. 30), and “Fractures made at. . . temperatures below the glass transition temperature” (pp. 33-34). He also makes the following general statement: “it seems unlikely that reducing the temperature will create a barrier that will inherently require longer synthesis times. Assemblers are basically mechanical in nature, and so they can be designed to operate across a broad range of temperatures. If anything, the reduction in thermal vibration as a consequence of reduced temperature should allow more accurate positioning and facilitate, rather than hinder, the assembler-based synthesis process.” The same basic idea has been restated in two subsequent documents by the same author (an as-yet unpublished update and revision of “Molecular Repair of the Brain” renamed “The Technical Feasibility of Cryonics,” and a short article called “Cold Starting” in the November, 1990 issue of Cryonics (vol. 11(11), p. 11)).
There is just one problem with sub-Tg repair: physical law! The fatal error is that although assemblers may be “basically mechanical in nature,” what they do is not. What they are supposed to do is chemistry. At normal temperatures, this is clearly reasonable: enzymes do chemistry all the time. But enzymes do not work below Tg, and neither will assembler-induced chemical modifications. Enzymes take advantage of thermal energy that is already available within the reacting species to supply the activation energy required for chemistry (the making or breaking of covalent bonds) to occur. Below Tg, this activation energy is not present [1].
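To put rough numbers on this point (an editorial illustration, not Fahy’s own calculation): the Arrhenius relation k ∝ exp(-Ea/RT) compares thermally activated reaction rates at body temperature and at liquid nitrogen temperature. A minimal sketch in Python, assuming a typical biochemical activation energy of about 20 kcal/mol:

    import math

    R = 1.987e-3                 # gas constant, kcal/(mol*K)
    Ea = 20.0                    # assumed typical activation energy, kcal/mol

    def rate(T):
        """Relative Arrhenius rate constant: k proportional to exp(-Ea/RT)."""
        return math.exp(-Ea / (R * T))

    slowdown = rate(310.0) / rate(77.0)    # body temperature vs. liquid nitrogen
    print(f"thermal chemistry ~{slowdown:.0e} times slower at 77 K")

On these assumptions the slowdown is on the order of 10^42, so a reaction that takes a millisecond at body temperature would not occur even once over the age of the universe at liquid nitrogen temperature.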
The breaking and making of chemical bonds under these circumstances can only be achieved mechanically: by ripping atoms from other atoms and/or by slamming or jamming atoms into other atoms with sufficient force as to provide the equivalent of the ordinary thermal activation energy. (Conceivably, spectroscopic approaches could also be used in some cases, but, most likely, not as a general rule.) “Slamming” would involve accelerating the reacting species to velocities comparable to (and perhaps greater than) their velocities at normal body temperature. “Jamming” would involve a vise-like compression of molecule against molecule so as to overcome intermolecular repulsions and thus catalyze the reaction. However, the latter is the rough equivalent of increasing the local hydrostatic pressure, and it appears that absolutely enormous pressures would generally be required to drive chemical reactions at -196C. To give one indication: Whalley and colleagues [2] have shown that pressures on the order of 15,000 atmospheres are required to convert ice into amorphous solid water at liquid nitrogen temperature, and this is a reaction that involves no chemistry! This reaction also involves a decrease in volume. Driving reactions that result in a net increase in volume in this way might not be possible. This seems to leave the “slamming” approach as the main possibility.
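A rough energy estimate (again editorial, with an assumed activation volume) shows why the required pressures are so enormous: the mechanical work available from “jamming” is roughly P·ΔV per molecule, and even at Whalley’s 15,000 atmospheres this falls far short of a typical activation energy:

    P = 15_000 * 101_325          # 15,000 atmospheres, in pascals
    dV = 1e-29                    # assumed activation volume: ~10 cubic angstroms, m^3
    N_A = 6.022e23                # Avogadro's number, 1/mol
    work = P * dV * N_A / 4184    # P*dV work in kcal/mol (4184 J/kcal)
    print(f"~{work:.1f} kcal/mol of mechanical work")   # ~2 kcal/mol

Roughly 2 kcal/mol, an order of magnitude below the ~20 kcal/mol typical of covalent chemistry; hence pressures well beyond 15,000 atmospheres would generally be needed.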
But the “slamming” approach and the “jamming” approach are fundamentally similar, the main difference being the time scale over which energy is applied. In any case, how will the accelerations required for this approach be produced? At a minimum, it seems to me, one must rip the desired molecule free from its embedding medium (without hurting it), attach it to an assembler arm, orient it with extreme precision on that arm in some fashion, and then slam it against the desired reactant, also perfectly oriented on a second assembler arm. The basic problem that arises from these requirements is: How can you attach each of the reacting molecules to the assembler arms using only forces weaker than covalent bonds in such a way that the force of the collision, which must be powerful enough to make or break covalent bonds, does not dislodge them? (This will be an especially large problem for smaller molecules.) Another important complication is waste heat and the limitations it may put on assemblers: How much waste heat will be generated during the acceleration of the assembler arms to sufficiently high velocities, and what is the likelihood that this waste heat will accidentally lead to local warming and diffusion or to the catalysis of some undesired reaction?
The opposite problem is: how does one grip a molecule on both ends in just the right way as to be able to rend it asunder at exactly the correct bond in every case? The answer is likely to be: one doesn’t.
Possibly some technique in which harmonic oscillations of progressively greater magnitude are mechanically induced between individual atoms could selectively break bonds and begin to address these problems. But the problem is, nobody knows. Merkle’s paper simply fails to appreciate the fundamental problems of doing chemistry on stable molecules below Tg, and one is left with only wild speculations about how such a problem could even be approached in principle. It thus seems to be something of an understatement to say that Merkle’s approach of sub-Tg repair (or even near-Tg repair) falls short of providing convincing evidence for the technical feasibility of cryonics. Solving these problems seems to be not just a matter of engineering, but also of creating an entirely new branch of chemistry (or materials science), i.e., cryomechanical chemistry, to use as a basis for the engineering that is needed. But it is by no means obvious that it is possible, even in principle, to duplicate room temperature chemistry using only mechanically-driven reactions at sub-Tg temperatures. At these temperatures, we are not dealing with the kind of concept Feynman and Drexler have considered, in which it is only necessary to position atoms appropriately and lean on them just a little to get what you want. This is not chemistry as cells and nature know it. It is therefore quite obviously inappropriate to assume that normal biological repair processes provide anything comparable to a “proof of principle” that repair can be effected below Tg.
I believe Merkle’s response to this problem may be to disassemble the frozen brain into its individual molecules, warm them to room temperature individually to permit them to react, and then to cool them back to liquid nitrogen temperature and reassemble them at that temperature back into the intact, repaired brain. But even this scenario is doubtful. It supposes that the brain is like a house made of bricks, which only need to be stacked next to each other to complete the edifice. The reality, however, is that there is a significant degree of covalent bonding between many of the molecular components of the brain (e.g., membrane proteins are linked to the cytoskeleton, which in turn is linked to organelles, and so forth). It seems unlikely that an entire brain can be disassembled and reassembled at liquid nitrogen temperature without requiring the performance of any chemistry at that temperature, even without considering the issue of molecular repairs. Another suggestion Merkle has proposed informally is to use free radical chemistry. Unfortunately, once again, it is far from clear that free radical chemistry can entirely or even mostly duplicate ordinary, thermally-driven chemistry.
Problems of Physics
On page 39, Merkle says “we must generate a plan for reassembly of the tissue components (the molecules) back into the healthy state. . . that is, we must determine how to actually rebuild the healthy tissue.” The meaning of this is explained on page 37 by, for example, the following: “If the initial data base describes tissue with swollen or non-functional mitochondria, then the revised data base should be altered so that it describes fully functional mitochondria.” (This idea is repeated also in “The Technical Feasibility of Cryonics.”) Confirmation that this is what Merkle actually proposes be done (i.e., restoration of a healthy functional state at sub-Tg temperatures) is given by Merkle’s “Cold Starting” article.
Unfortunately, this approach is fundamentally nonsensical for a variety of reasons. The simplest of these is simply that tissue cannot exist in a healthy, functional state at -196C! For one thing, a functional mitochondrion contains liquid water and no cryoprotectant. Even if such a state (in vitreous form) could be created at very low temperatures, it would revert to a mitochondrion containing massive amounts of internal ice within microseconds or less on warming (hence Merkle’s proposal in “Cold Starting” for a means of warming fast enough to outrun this crystallization process!).
The more basic and general point is that some kinds of repair would be extraordinarily difficult, futile, or even counterproductive to carry out at the lowest, most protective temperatures for fundamental physical reasons. Consider the following examples.
Osmotically-Induced Cellular Shrinkage. Slow freezing causes cell volume reduction, which in turn may cause the reduction of cellular surface area and a resulting extrusion of lipids and proteins from the membrane. Extruded lipids and proteins cannot be reinserted into the membrane until the cell volume is once again increased because there is no room for them. Restoring cell volume while the cell is in the vitreous state would be a seemingly ridiculous and superfluous task to attempt, and would again create a cell whose interior will freeze within a fraction of a second during warming!
Phase Transitions. Low temperatures and membrane dehydration per se cause membrane lipid species to crystallize or undergo HexII reorganizations. This is therefore the natural state of these lipids at the prevailing temperatures. Any attempt to reorder the membrane lipids into a lamellar phase will lead to spontaneous re-separation of these phases either at the prevailing temperatures or on warming. Thus, simply “repairing” this membrane defect at cryogenic temperatures would be futile. Introduction of alien lipid species to prevent re-separation would be problematic due to the absence of room in the membrane for such species and the need to subtract native lipid to make room. These changes would all have to be reversed later, and might create more problems than the original phase separations.
Denaturation. Any denatured proteins will also prefer to be denatured under the prevailing conditions. Renaturing them will tend to lead to re-denaturation as temperatures inevitably rise later on.
Changes in Tissue Volume: Thermal Expansion vs. Brittleness & Elasticity. A fracture represents anisotropic contraction of cerebral tissue due to temperature reduction. Local rips in axons may arise for similar reasons. To fill in gaps caused by the inherent thermal contraction of cerebral tissue may create a problem when the temperature is raised and all of the existing structure, both the native structure and the added structure, is inevitably forced to expand: expansion lesions such as buckling and shearing of axons may replace the previous contraction lesions. Likewise, many axons may be very stretched while frozen. Destretching them by adding material to them could cause the same buckling problem when warming occurs. Finally, tissue will be brittle below Tg and may be brittle even at temperatures moderately above this. Physically moving structures around under such conditions may damage them, and attempting to close a fracture by physically forcing the two sides together is liable to rip structures on both sides of the gap. Thus, some repairs made below Tg could induce the need for more repairs!
Incidentally, the thermal contraction-expansion cycle may also make Merkle’s “Cold Start” fail: even if the heating rates he wishes to achieve could be attained, the result would quite possibly be a brain macerated or exploded from the stresses of expanding its volume by several percent in a one microsecond interval. (Consider the kinetic energy of brain tissue expanding outward at a speed of 0.5 cm/microsecond, or, in other units, 18,000 km/hr!)
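The figures are easy to check (an editorial back-of-envelope, taking the quoted speed at face value):

    v = 0.5e-2 / 1e-6             # 0.5 cm per microsecond, in m/s
    ke = 0.5 * v**2               # specific kinetic energy, J/kg
    print(f"v = {v:.0f} m/s, KE = {ke:.2e} J/kg")   # 5000 m/s, ~1.25e7 J/kg

That is about 3 kcal per gram of tissue, several times the specific energy released by TNT, if the tissue really moved at the quoted speed.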
Problems of Power
How will nanomachines be powered? No comments from Merkle. At body temperature, nanomachines could be powered by chemical energy the way metabolism is powered. But at -135C? This is not just a detail to be left to future designers: it is a point of principle. Is it feasible in principle to power complex molecular manipulations (not even chemistry per se, but just physical manipulations) at cryogenic temperatures? How can energy be translated from the macroscopic to the molecular level? Without answers to these questions, the central idea of Merkle’s paper stands on a very flimsy foundation.
Presumably the power would have to be supplied via electrical cables or sliding rods going in through the vascular system. How much power is needed? Can it be supplied on wires small enough to thread through capillaries without warming the tissue through resistive (or frictional) heating?
Problematic Time Estimates
On pages 29-31, Merkle tries to estimate the time required for the repair of individual molecules. He does this by multiplying the in vivo synthesis time by 10 to account for the fact that not only molecular synthesis but also computations about such synthesis will be needed. He then notes, on page 30, that “the times for the various biological synthesis steps given here must be viewed as general ‘proofs of principle’ times rather than specific estimates of the actual time that will be required by an assembler operating at (perhaps) liquid nitrogen temperatures.”
But in no way is the time for biological processes a “proof of principle” for estimated cryogenic repair times: the biological processes depend on DIFFERENT PRINCIPLES than the repair processes, both in terms of the mode of operation (diffusion vs. conveyance) and in terms of the power supply. The biological systems, at best, tell us how long molecular reactions take under one set of conditions. However, without more detailed calculations (which, as indicated above, may be impossible), the biological time scales and the nanotechnological repair time scales (assuming that nanotechnological repair is possible at -196C in the first place) cannot be related to one another. Assuming that the two time scales are even in the same ballpark amounts to pure handwaving. This invalidates the entire discussion of the time required for repair, which is a central point of the paper.
Merkle does not really address the issue of determining WHERE molecules ought to be and carrying out the actual procedure of repositioning them. It could, for a variety of reasons, be time-consuming to figure out where to place a molecule if it is misplaced, especially since placing one molecule influences the proper placement of subsequent molecules. Consider that image analysis systems with good resolution store individual images at 1-3 megabytes or more per 2-D frame, exclusive of any analysis of the image. How many 2-D or 3-D images would be necessary to carry out the needed repairs? Possibly a very, very large number, with correspondingly long times required for analysis.
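As a back-of-envelope indication (assumed numbers, not from Merkle’s paper), imaging a roughly 1350 cm^3 brain at 10 nm voxel resolution gives:

    brain_volume = 1.35e-3        # m^3, roughly 1350 cm^3 (assumed)
    voxel = 10e-9                 # assumed voxel edge length: 10 nm
    n_voxels = brain_volume / voxel**3
    print(f"{n_voxels:.1e} voxels")    # ~1.4e21 voxels

At even one byte per voxel, the raw image data alone would exceed 10^21 bytes before any analysis begins.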
The Problem of Vagueness
Merkle says, on page 40, “We will not examine the problem of generating a feasible assembly sequence here. . . [but] it should be clear that it is indeed possible to build living tissue. It is, after all, done by every living creature on the planet. It also follows from the general thesis of nanotechnology: that the construction of almost any chemically stable object that has been specified to the atomic level is feasible. The revised structural data base clearly specifies such an object (the brain) and specifies its structure in precise molecular detail. Its construction should therefore be feasible, particularly when we consider that existing biological systems already demonstrate ‘proof of principle.'”
Thus, Merkle’s paper does not seek to tell us how to repair a frozen brain. It seeks only to describe peripheral issues of information content, computational speed, etc. But it is hard to evaluate the possibility of repair if no actual suggestions for repair are given. We have already exploded the analogies noted in the preceding paragraphs: the workings of living systems have nothing to do with the problem of constructing a brain at cryogenic temperatures, and the tenet that specified structures can be built does not imply that specified structures can be built under impermissive conditions such as in black holes, stars, or vats of liquid nitrogen. Merkle says, on page 37, “if any cracks are present in the initial data base (describing the frozen tissue) then the revised data base (describing the healthy tissue) should be altered to remove these cracks.” But “removing these cracks” is a non-trivial exercise, and we are told nothing about how this might be possible. In the end, we are left only with an apparently unsupportable assertion that it should be possible. And this is the problem that cryobiologists have had with cryonics all along.
Problems of Biology
On page 38, Merkle says “all current estimates of tissue ‘viability’ based on functional criteria [are] irrelevant.” However, functional damage is related to structural damage. The greater the functional loss, the greater the structural loss, and the less likely it is that the previous structure can be inferred.
Conclusions
Ralph Merkle has written an excellent paper which attempts to identify important issues of the repair of frozen brains. He deserves praise for his great intellectual effort and for many of his results. From the point of view of a cryobiologist, however, Merkle’s analysis falls far short of being convincing. It is based on a number of assumptions that have dubious validity, and it fails to be specific. While the present critique by no means rules out the possibility of developing repair technology for frozen brains, it may help to clarify why the disagreements between cryonicists and cryobiologists are not likely to be settled by Merkle’s paper.
References
1. Just below the glass transition temperature, available thermal energy is insufficient to drive even diffusive processes, but ordinary biochemical reactions require much more energy than does diffusion. Thus, the temperature below which ordinary chemistry becomes almost impossible is likely to be considerably higher than the glass transition temperature.
2. E. Whalley, O. Mishima, Y.P. Handa, and D.D. Klug. Pressure below the glass transition: a new way of making amorphous solids. In: Dynamic Aspects of Structural Change in Liquids and Glasses (C.A. Angell and M. Goldstein, Eds.), Ann. N.Y. Acad. Sci., 484, 81-95, (1986).
Dr. Merkle’s Response
A Brief Summary
Greg Fahy recently (February, 1991) wrote a critique of “Molecular Repair of the Brain” (originally published in the October, 1989 Cryonics, and under continuous revision). To provide orientation for the reader who might not have read that article, or whose memory of it might be hazy, a brief summary is in order. It said that the frozen human brain could be repaired by the following general approach: 1) Digitize the frozen structure. A sufficiently accurate digitization for any purpose considered here would be provided by giving the coordinates and orientation of every major molecule in the brain; 2) Once a complete description of the frozen structure is available in digital format, the description can be manipulated and revised to eliminate the damage; 3) Once we have a digital description of a healthy human brain, we can then use that description as a blueprint to rebuild the original.
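Schematically (an editorial sketch; the paper states the strategy in prose, and each placeholder function below stands in for an entire technology):

    def digitize(frozen_brain):
        # Step 1: record the coordinates and orientation of every major molecule.
        return [(m.kind, m.position, m.orientation) for m in frozen_brain]

    def revise(structure_db):
        # Step 2: edit the digital description to remove the damage (placeholder).
        return structure_db

    def rebuild(healthy_db):
        # Step 3: use the revised description as a blueprint for synthesis.
        return healthy_db

    # healthy_brain = rebuild(revise(digitize(frozen_brain)))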
The most obvious concern raised by this strategy is the rather massive amount of raw information and the large amount of computer power being used. The fairly long sections of the paper looking at projected future memory and computational capacities were intended specifically to address that issue. Dr. Fahy’s statement that these issues are “peripheral” is wrong, for they are quite central. The claim that computer power of the magnitude required will likely be available in the future is not immediately obvious. If we expect people to believe this claim, it must be supported by a careful analysis of the relevant facts.
The next problem is how to obtain the necessary information. A simple “divide and conquer” strategy, in which the human brain is divided into pieces small enough to be directly analyzed by high resolution imaging technology (e.g., nanotechnology), was proposed and should be quite adequate.
The paper did not discuss in any detail how “nanotechnology” works, but simply provided some general reasons for believing it is plausible and references for further reading. A detailed discussion of nanotechnology would require writing a rather detailed technical book. Fortunately, Eric Drexler is currently writing exactly such a book. The early drafts look very good. Many of Dr. Fahy’s questions really concern the nature and limitations of nanotechnology, so having a detailed technical description of the subject will be very helpful in creating a common framework within which to carry out further discussions.
The final concern is how to build a structure with atomic precision, given the blueprint. Here, the paper concluded that there are strong arguments supporting the general idea that this should be feasible, and did not pursue the technical issues further. The argument that it should be possible to build human brains because they have in fact been built is very strong, and a sufficiently detailed analysis of the construction process to improve on that argument would have required significant additional work.
An issue which I view as completely irrelevant, but which causes some people concern, is the retention of the “original” atoms. The claim that the original electrons, protons, and neutrons are somehow vital to our continued existence strikes me as absurd. Despite my opinions, some quite intelligent people take the opposite view. As a consequence, the paper examined the technical feasibility of retaining the original atoms, and concluded that this retention (while somewhat increasing the technical difficulties that must be dealt with) would in fact be feasible.
It is interesting that Dr. Fahy’s criticisms are largely concerned with the section of the paper that was not written, the section on synthesis. In several instances, in the absence of a specific proposal in the paper, Dr. Fahy invented a specific proposal and then criticized it. The whole section discussing “jamming” and “slamming” is of this nature.
This form of criticism suffers because the critic’s proposed solution to the perceived problem is in fact a proposal of the critic. It is not surprising that such proposals are often found wanting. . . . The underlying criticism is that the original proposal has not provided sufficient detail to persuade the critic, so the critic has felt obliged to invent something.
Dr. Fahy appears to agree that the synthesis of large structures (e.g., a human brain) will be feasible. His criticisms have focused rather specifically on the suggestion that such synthesis be done at low temperature (e.g., perhaps 130 to 140 Kelvins).
Some General Approaches to Repair
Before addressing the specific issues surrounding low temperature synthesis, it would be advisable to discuss the general issues involved in synthesis at any temperature, and the kinds of structures that might prove satisfactory. The following taxonomy is not intended to be exhaustive, but is intended to provide the reader with a feeling for the range of possibilities available.
1) The least demanding approach would be to build an “artificial brain” using the digitized information provided by the analysis of the frozen brain. This approach allows the selection of the simplest technology available which can adequately support consciousness and human thought. While still controversial, it is very likely that this approach will be technically feasible at some point in the future.
The second class of methods seeks to build an actual human brain, on the grounds that we have a high degree of confidence that a human brain can support consciousness and human thought. Rather than building a human brain directly, however, we actually build a structure which closely resembles the desired structure but which is, for some reason, stable. The difficulty is that the human brain is in a constant state of dynamic change. Directly building a structure which is in a state of constant dynamic change is difficult, so instead we build a static structure which closely resembles the dynamic structure at some specific point in time. The reason for building a static structure is the presumption that it will take some time to build, and that a dynamically changing structure would deteriorate during the synthesis time. The static structure won’t move while it’s being built, so we can take as long as we wish to complete the construction. The obvious methods of doing this are:
2) Synthesize the structure at low temperature.
3) Synthesize the structure in the dehydrated state.
4) Synthesize the structure in a normal “wet” state, but stabilize all major macromolecules by chemical means (cross linkages, etc.). This might be called “full stabilization.”
5) Synthesize the structure in a normal “wet” state, but use minimal stabilization aimed primarily at the membranes (by, e.g., simple mechanical supports), prevent the entry of oxygen or other reactive compounds, and allow “harmless” diffusion to take place. Note that with intact membranes, diffusion outside of well defined compartments will not take place. Some additional stabilization might be required, but the objective in this approach is to stabilize as little as possible. This might be called “minimal stabilization.”
Each of methods (2) through (5) has a “start-up” requirement. If synthesis is done at low temperature, then the temperature must be somehow raised. If synthesis is done in the dehydrated state, then water must be added in a controlled way. If chemical stabilization is used, then the stabilizing agents must be removed, presumably in some appropriate sequence. If minimal membrane stabilization coupled with low oxygen content is used, then oxygen levels (and other reactive compound levels) must be restored and the membrane supports removed.
Finally, we could adopt an approach that takes maximum advantage of the existing technology base: guided growth. In this method, we build the dynamic final structure through a series of dynamic intermediate states, much as an actual human brain is synthesized today by natural methods.
6) Synthesize the structure using the same general intermediate states that are used during normal growth. Achieve selectivity by placing key cellular activities under the control of an on-board computer. Thus, the bulk of the cell’s metabolic machinery would be identical to that of a normal cell, but where a normal cell would spontaneously initiate cell division, the “controlled” cell would be unable to initiate cell division unless the trigger for division were produced by the on-board computer. Changes in cellular shape and movement would likewise be under on-board computer control, as well as the growth of synapses, etc.
Although superficially resembling the growth of a normal person, this process would in fact be carefully controlled and planned. In simple organisms the growth of every single cell and of every single synapse is determined genetically. “All the cell divisions, deaths, and migrations that generate the embryonic, then the larval, and finally the adult forms of the roundworm Caenorhabditis elegans have now been traced.”[2]. “The embryonic lineage is highly invariant, as are the fates of the cells to which it gives rise”[1]. The appendix to reference [1] says: “Parts List: Caenorhabditis elegans (Bristol) Newly Hatched Larva. This index was prepared by condensing a list of all cells in the adult animal, then adding comments and references. A complete listing is available on request. . .” The adult organism has 959 cells in its body, 302 of which are nerve cells[3].
The same principles apply in many insects. Grasshoppers, for example, have about 50,000 neurons whose development is invariant. Other insects have significantly more neurons.
Building a specific biological structure using this approach would require that we determine the total number and precise growth patterns of all the cells involved. The human brain has roughly 10-12 billion nerve cells, plus perhaps ten times as many glial cells and other support cells. While simply encoding this complex a structure into the genome of a single cell and then expecting that cell to grow into the final structure might prove to be overly complex, it would certainly be feasible to control critical cellular activities by the use of on-board nanocomputers. That is, each cell would be controlled by an on-board computer, and that computer would in turn have been programmed with a detailed description of the growth pattern and connections of that particular cell. While the cell would function normally in most respects, critical cellular activities, such as replication, motility, and synapse growth, would be under the direct control of the on-board computer. Thus, as in C. elegans but on a larger scale, the growth of the entire system would be “highly invariant.” Once the correct final configuration had been achieved, the on-board nanocomputers would terminate their activities and be flushed from the system as waste.
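As a schematic illustration of this control relationship (an editorial sketch, not a proposal from the paper), ordinary metabolism runs freely while the gated events wait on a precomputed per-cell script:

    class ControlledCell:
        """Sketch of a cell whose key events are gated by an on-board program."""

        def __init__(self, script):
            self.script = script      # precomputed growth plan for this lineage
            self.step = 0

        def next_event(self):
            # Only division, motility, and synapse growth wait for the trigger;
            # the rest of the cell's machinery is untouched.
            if self.step < len(self.script):
                event = self.script[self.step]
                self.step += 1
                return event
            return "terminate"        # final configuration reached; flush controller

    cell = ControlledCell(["divide", "migrate", "grow_synapse"])
    while (event := cell.next_event()) != "terminate":
        print(event)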
Tradeoffs
The six approaches mentioned here have different technical and philosophical tradeoffs which will appeal to different people. Which approach is “best” is a question which cannot be answered on purely rational bases. A process more akin to an opinion poll is required. Those familiar with a specific technology will naturally be more comfortable with methods in which that technology is prominently used. Those with more conservative philosophical opinions will quite naturally exclude some approaches, even at the cost of some increased technical complexity.
Dr. Fahy, for example, would probably be most comfortable with “guided growth,” for this makes maximal use of existing (proven) technology. On the other hand, for someone worried that “guided growth” might produce a “mere copy,” frozen synthesis or fully stabilized chemical synthesis offers the most precise ability to restore the structure with atomic precision.
Building an artificial brain is the simplest approach technically and would therefore be attractive to those most concerned about technical feasibility. This technical simplicity is gained by relaxing the philosophical criteria, which is a tradeoff that some will not wish to make.
As can be readily seen, the debate about which of these general approaches to use includes factors well beyond the technical issues. A desirable goal would be to show that the most philosophically restrictive objectives are technically feasible, for such a proposal could be used as a “least common denominator” by everyone. This, presumably, would require a highly precise synthesis technique, and would thus favor either frozen synthesis or fully stabilized chemical synthesis. An interesting question is the degree of general acceptance of minimally stabilized chemical synthesis. This approach provides a number of significant technical simplifications and, if it were viewed as generally acceptable, might serve as a reasonable “least common denominator.”
In minimally stabilized chemical synthesis the original molecules would be restored (thus satisfying the concerns of those who wish restoration of the same atoms), but they would be allowed to move in accordance with diffusive forces as they might normally move in a living person. Individual membranes would be anchored (positionally stabilized) by a framework introduced for the purpose. Thus, repair would restore the original person with the original cellular structure and the original molecules, but the molecules would have been allowed to diffuse within their cellular compartments (or diffuse two-dimensionally within a membrane) much as they would normally do.
Computer Analysis is Fundamental
All these methods first use digitization of the human brain and revision of the digitized information to “repair” damage. Changing bits in a data base is a much more general and uniform method of “repair” than attempting to engage in actual physical repair of a specific form of damage using a specific physical repair technique. Given the severe level of damage that might occur when significant pre-suspension injury has taken place, especially when this is compounded by a suspension performed under adverse or sub-optimal conditions, it seems most attractive to digitize the entire structure first rather than to attempt the direct physical repair of specific forms of damage using specific techniques. Such direct physical repair techniques could be overwhelmed by the many synergistically interacting forms of injury that are likely to take place in many current suspensions.
Chemistry at Low Temperatures: Radicals and Pressure
Dr. Fahy devotes a long section to claims that low temperature chemistry is unfeasible, violates physical law, and isn’t what Feynman and Drexler had in mind!
Feynman never made any statements about temperature, nor did he specify in any detail how synthesis of arbitrary objects might take place. By contrast, Drexler’s technical book is very specific about the techniques to be employed, and considers temperature as a significant issue in most settings. Examining the current draft shows that it will include a chapter on “Mechanochemistry” with subsections on radicals, carbenes, and other open-shelled (highly reactive) species, as well as a section on piezochemistry, which will include a discussion of force versus thermal activation. While normally of limited use in chemistry, highly reactive species can be quite useful when their tendency to react with anything they touch (even at low temperature) is controlled by positional capabilities.
Suppose that we wished to bond two compounds, A and B. Let us presume that both A and B are closed-shell “stable” compounds, that we are operating in a high-vacuum low temperature environment, and that we have positional control available. To create the necessary bond, we might proceed as follows: 1) Abstract a hydrogen from compound A. (For reasons not entirely clear to me, chemists like to “abstract” hydrogens with radicals rather than remove them, delete them, or otherwise dispose of them); 2) Abstract a hydrogen from compound B; 3) Place compound A next to compound B, with the dangling bonds created by the hydrogen abstractions of steps (1) and (2) facing each other; 4) Wait for the laws of physics to do their thing. The activation energy for a radical-radical reaction is very low, so it doesn’t look like we have to worry about the temperature being too low to support “chemical reactions.”
Of course, we need to do an atomically precise hydrogen abstraction for this procedure to work: how can this be done?
One approach is to use a hydrogen abstraction tool. The basic requirements for such a tool are clear: one end must be very fond of hydrogen and the other end must form a “handle” which can be safely grabbed. 1-propynyl (the radical derivative of propyne) seems to fill the bill (though we will likely wish to expand the “handle” end of the molecule in some convenient fashion). A carbon radical triple-bonded to another carbon has an affinity for hydrogen which is quite high. The bond dissociation energy for the resulting H-C bond is about 132 kilocalories per mole (data were taken from the Handbook of Chemistry and Physics for the H-C bond in acetylene). Such a structure should be quite effective as a hydrogen abstraction tool.
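The energetics can be checked with approximate handbook bond dissociation energies (the acetylenic figure is the one quoted above; the others are standard approximate values):

    # Approximate bond dissociation energies, kcal/mol
    bde = {
        "H-C (acetylenic, abstraction tool)": 132,
        "H-C (methane)": 104,
        "H-C (typical secondary alkane)": 98,
    }
    tool = bde["H-C (acetylenic, abstraction tool)"]
    for target in ("H-C (methane)", "H-C (typical secondary alkane)"):
        print(f"{target}: dH ~ {bde[target] - tool:+d} kcal/mol")   # negative = exothermic

Because the tool’s C-H bond is roughly 30 kcal/mol stronger than the bonds it attacks, the abstraction is strongly exothermic and needs essentially no thermal activation.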
This is just one example. Chemistry textbooks that discuss reaction mechanisms are filled with hydrogen abstractions by radicals. Activation energies for such abstraction operations can be quite small. Although “normal” compounds don’t react at low temperatures, chemistry using exotic compounds can take place quite readily.
Of course, we can also apply high pressure. Dr. Fahy said that “. . . pressures on the order of 15,000 atmospheres are required to convert ice into amorphous solid water at liquid nitrogen temperature. . .” incorrectly implying that achieving such pressures should be viewed as difficult. The current record for static pressure is almost 1.7 million atmospheres (from the Guinness Book of World Records; much higher pressures have been achieved dynamically). This pressure creates forces at the atomic level that are a substantial fraction of the force required to rupture bonds. We will be able to achieve at least such pressures in the future, and use them in whatever way seems appropriate during the synthesis process. A “molecular vise” is not at all unreasonable. By building a diamond-like “reactive site” that was both extremely hard and whose shape was precisely tailored to promote a specific reaction, we could “squeeze” two compounds together using extremely high force that was very precisely applied. This entirely novel form of synthesis opens yet another broad range of chemical reactions that will occur at low temperature.
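The force scale implied by such pressures can be estimated directly (editorial arithmetic, with an assumed atomic cross-section):

    P = 1.7e6 * 101_325           # ~1.7 million atmospheres, in pascals
    area = (2e-10) ** 2           # assumed atomic cross-section: (2 angstroms)^2
    force = P * area              # force delivered over one atomic site
    print(f"~{force * 1e9:.1f} nN per atomic site")   # ~6.9 nN

Single covalent bonds rupture at forces of a few nanonewtons, so this is indeed a substantial fraction of bond strength.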
And, of course, we can apply modest pressure to highly reactive radicals, thus eliminating the need for even the small thermal activation energy called for in these cases. Chemistry can be done at 0 Kelvin.
Misunderstandings
Many of the criticisms that Dr. Fahy made are based on a massive misunderstanding of the proposal. He devotes an entire section to specific forms of damage and the physical problems that would be involved in attempting to directly repair those forms of damage. However, the major thrust of “Molecular Repair of the Brain” was precisely to avoid the need to worry about the specific physical problems in repairing each individual form of damage. Having once gotten a digitized description of a human brain (and optionally, for those concerned about it, a “filing cabinet” holding every major macromolecule from that brain), the physical problems involved in repairing a fractured axon simply don’t matter. The component molecules of the fractured axon now reside in the filing cabinet, while the coordinate data for the molecules from that fractured axon reside in the data base describing the frozen structure. “Repair” of the frozen axon, at this stage, consists of altering the data base. No physical manipulations are called for, nor would they be useful. Dr. Fahy’s concerns are like asking how a computer text editor can remove the paper when you delete a word. There isn’t any paper to remove. The question, as stated, simply doesn’t make sense. You can ask how the text editor alters the bit-patterns that describe the text. You can ask about the physical process of printing. But you can’t ask how the text editor changes the printed words on a piece of paper because that’s simply not what’s going on.
The alternative to digital modifications of a digital description of the structure is to directly modify the real physical structure, damage and all. Each specific form of damage that might occur would require a separate direct physical repair process. Such a case-by-case analysis is complicated, error prone, and not very confidence-inspiring. If, however, we digitize the original structure and perform the “repairs” on the data base, then we can at once eliminate virtually all problems. The problems that remain are fundamental and are not obscured by a cloud of secondary issues.
There is one case where direct physical repair of the original structure probably makes good sense: when the damage that has been done is minimal, is well defined and well understood, and direct physical repair is not too complicated. One of the major objectives of research in cryobiology is to minimize the damage done by freezing and to better characterize that damage. It seems plausible, therefore, that with continued advances in cryobiology the need for sophisticated repair methods can be avoided entirely. While we can look forward to that happy day, it seems unlikely that direct physical repair methods will produce a satisfactory result when applied to the people suspended using current methods. By contrast, digitization followed by sophisticated computer analysis and repair is likely to produce a good result when applied to a person suspended using the current rather primitive methods (with apologies to those providing us with those much appreciated primitive methods!). Indeed, sophisticated computer analysis should produce a satisfactory outcome under remarkably bad circumstances.
Further research aimed at better characterizing and minimizing freezing damage, as well as aggressive efforts to minimize the damage actually incurred during suspensions, are both very worthwhile objectives that deserve strong support. At the same time, it is essential to consider repair methods that will be able to cope with the most severe damage that might actually occur in practice. By both minimizing freezing damage and maximizing repair capabilities we will achieve the highest possible probability of success.
Dr. Fahy has argued that building a brain at low temperature and then warming it is “nonsensical” because (inter alia) it would explode. Unfortunately for this argument, extremely rapid warming does not impart momentum per se, and volume changes caused by temperature changes can be compensated by a number of mechanisms (e.g., leave space for expansion. . .). The claim that rapid heating of a biological structure from (say) 130 Kelvins to 340 Kelvins or so will inherently cause it to explode is without merit.
Much as the rapid heating proposal is charming, a proposal of Dr. Fahy’s is better: build the frozen structure with an appropriate concentration of cryoprotectant and then heat it slowly. This doesn’t have the technical drama of rapid rewarming, but solves the problem quite effectively. This is, of course, simply one illustration of a general principle: if you are building a structure using technology X, then modifications to the structure to make the job easier for that technology are entirely reasonable. If technology X involves building a frozen structure and then warming it, then banning structural changes that would allow the structure to better resist heating would be plain silly. While certain constraints on the allowed modifications must be made (if the structure is my brain, I have some strong opinions on some of the constraints!) it should be very clear that adding cryoprotectants is acceptable.
As an aside, frozen synthesis would allow the cryoprotectant concentration and even the type of cryoprotectant to be varied from tissue to tissue (or even cell to cell) to achieve optimal tissue-specific cryoprotectant concentrations. Combining this with highly controlled (and perhaps quite rapid) heating rates will result in minimal damage during warming. More sophisticated structural modifications to make the tissue resistant to warming damage would also be feasible.
Power
“How will nanomachines be powered? No comments from Merkle.”
Comment: properly designed electrostatic motors will function quite nicely, however cold it gets. Electrostatic attraction and repulsion are not altered by temperature. “Is it feasible in principle to power complex molecular manipulations . . . at cryogenic temperatures?” Yes. Simple mechanical interactions are not temperature-dependent. If a probe knob runs into a gate knob, it’s blocked regardless of how low the temperature gets. Rod logic will work quite nicely at liquid nitrogen temperature.
“Presumably the power would have to be supplied via electrical cables or sliding rods going in through the vascular system.” No. Such a presumption might be considered for “on board repair.” The off board repair method discussed in the paper eliminates this problem. The structure being examined was taken apart. The issues surrounding power dissipation were largely eliminated. The volume occupied by the repair system could greatly exceed the volume occupied by the brain. The vast bulk of energy dissipation is involved in computation. The computation can take place as far away from any tissue as we desire.
Time Estimates
Dr. Fahy correctly points out that if repair takes place at low temperature, then the time estimates based on biological analogies must be viewed with caution. However, every factor of which I am aware provides a speed advantage to assembler-based methods, rather than the reverse. As a consequence, the biological times are extremely conservative estimates of the time that would actually be required to perform the necessary manipulations. Thermal diffusion and self-assembly are inherently limited in their speed of operation, and it would be truly remarkable if future molecular engineering technology did not exceed these speeds by several orders of magnitude.
Analysis of the fundamental speed limits produces numbers that are shockingly good (and were not needed to support the basic case). Chemical reactions don’t fundamentally require much time. Femtoseconds and picoseconds are the units typically used. If we assume one microsecond per chemical reaction, and something like 10^25 chemical reactions to synthesize a structure as large as the human brain, and if we assume a parallelism of 10^16, then we find the job can be completed in 1000 seconds, or about 17 minutes. There do not appear to be any fundamental physical reasons to doubt that this will be feasible.
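The arithmetic behind the 17-minute figure (using the assumed counts just quoted):

    reactions = 1e25      # assumed chemical operations for a brain-sized object
    assemblers = 1e16     # assumed degree of parallelism
    t_step = 1e-6         # seconds per operation (conservative vs. ps-fs chemistry)
    total = reactions / assemblers * t_step
    print(f"{total:.0f} seconds (~{total / 60:.0f} minutes)")   # 1000 s, ~17 min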
While questions about the fundamental physical limits of computation have attracted a great deal of interest (for rather obvious economic reasons), no one has yet (to my knowledge) published a paper discussing the fundamental physical limits to the speed of synthesis of a complex object. The demonstrated biological speeds are adequate for our purposes, and tend to be less shocking. While Dr. Fahy has argued that the biological speeds cannot be used to estimate the rate of synthesis if non-biological techniques are used, it is in fact reasonable to view the biological speeds as an upper bound on the synthesis time involved provided that the non-biological methods are faster. Positionally-based synthesis techniques should indeed be substantially faster than biological methods, so the assumption is reasonable.
Other items
Dr. Fahy’s claim that building structures at the temperature of liquid nitrogen is like building them in a black hole is clearly poetic hyperbole and not intended to be taken seriously.
I’m puzzled by the claim that “removing cracks” is a non-trivial exercise. It is a trivial exercise. Assignment: given a data base that describes frozen tissue with cracks, modify it so that it describes the same structure, but without the cracks. A student in an advanced data structures course might view this as a reasonably challenging assignment, but any professional in the image analysis field could toss off half a dozen algorithms for doing the job in an hour. (I assume the cracks are “clean” low temperature fractures).
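For a “clean” planar fracture the job is indeed nearly trivial; a minimal sketch (assuming the crack plane and gap width are already known from the data base) simply translates one side rigidly:

    import numpy as np

    def close_crack(coords, x0, gap):
        """Close a clean planar fracture at x = x0 by translating the far side.

        coords: (N, 3) array of molecular coordinates from the data base.
        """
        out = coords.copy()
        far_side = out[:, 0] > x0       # everything beyond the fracture plane
        out[far_side, 0] -= gap         # slide it flush against the near side
        return out

    db = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
    print(close_crack(db, x0=3.0, gap=1.0))    # the point at x = 5 moves to x = 4

A production algorithm would also have to detect the fracture surfaces and re-match molecules across them, but the data-base edit itself is, as stated, not a deep problem.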
The paragraph claiming that rending a molecule asunder at a specific bond is implausible has rather obviously gone too far. Clearly, given a specific molecule, and given that we are pulling on it hard enough to rupture a bond, one of two things will be true. Either: a) two or more bonds are of sufficiently similar strength that random thermal variations will cause one bond or the other to actually rupture, thus leading to difficulty in predicting which bond will break; or b) one bond is sufficiently weak as compared with other bonds that the weak bond will always (or very nearly always) break.
Rather obviously, if one wanted a molecule to break at a certain point, one would design the molecule with a “weak link” at that point. In this way, the molecule would always break exactly where it was designed to break. Typically, when a molecule is broken in two in this fashion, the dangling bonds will be highly reactive radicals that can be used in further reactions. Indeed, rupturing a molecule with a deliberately designed “weak link” is a good way to reliably and predictably create specific radicals. In current chemistry, radicals are often produced by selecting “weak bonds” and breaking them by some process. Oxygen-oxygen single bonds (peroxides) are fairly popular in this regard. The use of mechanical methods to rupture weak bonds simply continues an old and familiar chemical tradition used to generate radicals in support of chemistry.
Conclusion
Dr. Fahy concludes that “From the point of view of a cryobiologist, however, Merkle’s analysis falls far short of being convincing.” Evidently, however, the analysis was convincing as far as it went. The “unconvincing” part was the part not written: e.g., the synthesis method. Even here, Dr. Fahy seems to agree that the synthesis of a human brain is feasible. His only objection is that such synthesis could not be done at low temperature. I have no objections to synthesis at some other temperature, but the objections he raises to low temperature synthesis are incorrect. Low temperature synthesis continues to be a synthetic method with certain advantages (e.g., high precision, stability of intermediate structures) when compared with other approaches.
This exchange on the subject will not be the last, nor should it be. As repair scenarios become more detailed, there will be more points of disagreement, not fewer. Consensus does not emerge at once, full blown. Instead, it emerges bit by bit, a single piece at a time, as the various issues are argued and discussed in greater and greater detail.
REFERENCES
1. J.E. Sulston, E. Schierenberg, J.G. White, and J.N. Thomson, “The embryonic cell lineage of the nematode Caenorhabditis elegans,” Developmental Biology, Vol. 100, pages 64-119 (1983).
2. Jean L. Marx, “Caenorhabditis elegans: Getting to Know You,” Science, Vol. 225, pages 40-42 (July 6 1984).
3. Roger Lewin, “Why is Development So Illogical?” Science, Vol. 224, pages 1327-1329 (June 22 1984).