Reamde-ing me into a diamondcrash

(By Extropia DaSilva)




In part one of this essay, we examined that most infamous of dystopian nanotech outcomes, the ‘grey goo’ of self-replicating machines. In this second part the view shall be widened as we examine how molecular manufacturing might affect society as a whole. I am obviously not the first person to attempt such a thing. In fact, ever since Drexler established the field with his books ‘Engines Of Creation’, ‘Unbounding The Future’ and ‘Nanosystems’, there has been no end of speculation regarding how society will adapt to this paradigm shift in engineering. Some of these speculations are decidedly dystopian, others defiantly utopian, but if there is anything their authors share in common it is that none of them has had first-hand experience of a society built on widespread access to molecular manufacturing. This is simply because the technology is still very much in the theoretical stage of development and no practical nanosystems currently exist.

However, I would argue that there does exist a society built around a manufacturing system that shares certain similarities with molecular nanotechnology. There are no prizes for guessing that I am referring to Second Life. I believe that SL can serve two useful purposes as molecular nanotechnology emerges from vapourware. The first is that we can take those aforementioned speculations and see if they have come true in this prototype nanotech society. The second is that, as the metaverse develops, it may be possible to guide its evolution so that it represents a bridge that helps us cross over to a nanotech society as painlessly as possible.


Before we really get stuck in, there are some issues to clear out of the way. The first is to acknowledge that nanotechnology’s effect on society will be so far-reaching it would not be possible to fully examine every aspect of it. Omissions have to be made. On the dystopian side, I will not be discussing how war will be conducted in a world with widespread molecular nanotechnology, or whether its development decreases or increases the likelihood of conflict. Nor will I be discussing the possible negative environmental consequences of materials built at the nanometer scale. On the utopian side, I shall have nothing to say regarding the augmentation of human beings. There will be no discussions of ageless bodies or uploaded minds here.

Another point that needs to be addressed is the fact that building with prims is not exactly equivalent to molecular manufacturing. The main difference is that SL residents can violate the laws of physics. This is most obviously demonstrated by those floating homes that openly defy the law of gravity. But on a more subtle level, builders in SL use ‘material’ that does not possess physical properties like load bearing, wear, and material fatigue. Because of this, they are able to build structures that, even if they were built out of diamondoid material (which has 55 times the tensile strength-to-density ratio of steel), would inevitably collapse if built in RL. Another weakness of using SL as a model of a nanotech society is that the various scenarios are based on the assumption that molecular manufacturing has replaced the current industrial system. Clearly, SL has not done so, but is instead more like a microcosm of a much larger society. Having said that, molecular manufacturing must itself be developed within the current industrial system, so I don’t consider this difference too damaging to the premise that SL is a prototype nanotech society.


Right, that’s the flaws in the ‘prim-building is comparable to molecular manufacturing’ argument out of the way; now let’s examine how they do compare and why Drexler’s technology contrasts with the current manufacturing system. The main reason why conventional manufacturing is unlike molecular nanotechnology is that it approaches the task of creating useful products from a completely different direction. With the current approach, a purpose-suited device is distilled or carved from a mass of raw materials. Conventional manufacturing begins with large and unformed parts, and this fact has led to a trend towards larger, more centralised factories. The Industrial Revolution also set in motion a trend known as ‘division of labour’, which refers to the economies of scale that result from having a particular task performed by fewer groups, or fewer companies, or in fewer places. Specialization leads to better products for less cost, because it makes use of workers who understand their job better than a generalist could, and because it eliminates redundant factories by consolidating many tasks into only a few.
An outcome of these trends is that factories became equipped with highly specialised tools that can only deal with a very limited range of materials. A sawmill is great for turning lumber into planks, fences, wooden pegs and so forth, but is completely useless if you want to churn out computer chips. A factory built for manufacturing cars is similarly ill-equipped to manufacture anything other than automobiles.

Current manufacturing methods are ’subtractive’. In direct contrast to this, molecular manufacturing is ’additive’. It takes a bottom-up approach to engineering by assembling the building blocks of matter into useful products, following a design that calls for only what’s needed. Defining what is meant by ’feedstock’ is rather difficult with conventional manufacturing, because at the scale at which current systems manipulate matter, material comes in such a wide range of forms. But at the scale at which molecular manufacturing works, there are, at most, 92 different building blocks (the elements of the periodic table). What’s more, almost everything in the material world uses fewer than 20 of these elements.

The process detailed in Drexler’s ’Nanosystems’ makes use of exponential manufacturing, in which integrated systems contain numerous subsystems attached to a framework. The process would begin with a flow of whatever elements are required (typical products require large quantities of carbon; moderate quantities of hydrogen, oxygen, nitrogen, phosphorus, chlorine, fluorine, sulfur and silicon; and lesser quantities of other elements). Molecular mills (mechanisms capable of selectively binding and transporting chemical species from a feedstock) would combine molecules into a diverse set of building blocks in the 10^-7 to 10^-6 m range. Block assemblers would assemble components, component assemblers would piece together subsystems and systems assemblers would manufacture the finished product. As far as a household with a desktop nanofactory is concerned, the basic building blocks are likely to be nanoscale equivalents of Lego bricks. At this point a comparison with prim-building should be obvious. In the physical world of desktop nanosystems, macro-scale products would be assembled from the bottom up by combining a diverse set of nanoblocks. In SL, builders take basic building blocks known as ’prims’ and reshape and combine them into complex products. (The building blocks used by nanosystems could incorporate struts and joints that contain sliding interfaces, thereby allowing them to be extended or twisted to assume a wide range of lengths or angles. Moreover, the blocks could be assembled into as many objects as can be derived by reshaping prims. In that sense, nanoblocks may also be reshaped as needed.)
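The convergent hierarchy just described (mills feeding block assemblers, feeding component assemblers, feeding systems assemblers) can be sketched as a toy calculation. The four stage names come from the text, but the eight-fold fan-in (how many parts each stage joins) is an invented illustrative figure, not one taken from ’Nanosystems’:

```python
# Toy model of the convergent-assembly hierarchy described above.
# Stage names follow the text; the eight-fold fan-in per stage is a
# made-up illustrative assumption, not a figure from 'Nanosystems'.
STAGES = ["molecular mill", "block assembler",
          "component assembler", "systems assembler"]
FAN_IN = 8  # assume each stage joins 8 parts from the stage below

def parts_required(n_products):
    """How many parts each stage must emit to yield n finished products."""
    counts = {}
    need = n_products
    for stage in reversed(STAGES):
        counts[stage] = need
        need *= FAN_IN
    return counts

for stage, count in parts_required(1).items():
    print(stage, count)
```

Whatever the real fan-in turns out to be, the shape of the result is the same: part counts multiply geometrically down the hierarchy, which is why the scheme is called exponential manufacturing.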
The building tools in SL allow content creators to make pretty much anything. Whether it be jewellery, clothes, furniture, houses or cars, it is all constructed from the same elementary building blocks. Admittedly, designers may employ additional tools like Photoshop, but even so prim-building is far more flexible than the highly specialized tools used in RL engineering today. Plastic-moulding machines and metal-cutting machines shape particular kinds of plastic and metal respectively; they do not possess the flexibility that would come from having tiny, fast-cycling parts that form complex patterns of the elementary building blocks of matter. Nanosystems, though, would have precisely that capability.

It was mentioned earlier that prims are the elementary building blocks of all products built in SL. Of course, this is a virtual world and in reality everything is built out of the building blocks of information, which are binary digits. Linden Lab’s prototype metaverse exists inside computers, which are machines that contain tiny, fast-cycling parts that can be directed to form complex patterns of bits. Computers are extremely capable of processing bits and just as adept at copying information. This gives SL’s builders certain advantages that would not be possible using conventional manufacturing. Consider the way Fallingwater Celladoor goes about her job. “Sales vary a lot: Some things hit big, others don’t. I just make what I like and see what happens”. In other words, she makes use of rapid prototyping and deployment. In the real world, rapid prototyping does not exist in any meaningful way. So-called ’rapid prototyping machines’ are very costly, take substantial time to manufacture something, and that something can only be a passive component, not an integrated product. Assembly is still required. As for high-volume manufacturing, overheads include procuring supplies and training workers, and the product must be not only useful but manufacturable as well. A significant part of the total design cost may be taken up by designing that manufacturing process.

A builder in SL does not need to worry about such things. There is no need to design the manufacturing process because the content-creation tools are already in place. There is no need to worry about procurement of supplies because prims are a readily available resource. There is no need to train workers to run the manufacturing process because it’s carried out automatically by the power of computers. What’s more, while it requires time and effort to design and build a product in SL, this only applies to the first of anything. Once you have created your prim-based wonder, it requires zero effort from you to mass-produce it. A person turns up at your store, chooses an item, and the information embodying its design is copied and a perfect reproduction is duly delivered to the customer’s inventory. If the item has been tagged as copyable, the customer can effortlessly give it away to anyone without diminishing their own supply.
You may have noticed that describing the inner workings of computers as ‘fast-cycling parts that can be directed to form complex patterns of information’ bears similarities to desktop nanosystems, which contain fast-cycling parts that can be directed to form complex patterns of the building blocks of matter. You would expect to find nanosystems offering a similar set of advantages, and this is indeed the case. Procurement of supplies is no problem, since all the process requires is a mixture of simple compounds (carbon, the main ingredient, currently costs about $0.10 per kilogram). The nanofactory would then convert that feedstock into the finished product, and the intermediate stages would not require external handling or transport, so there would be no need to train workers to run these factories. It would take about an hour to produce a functional prototype at a cost of a few dollars per kilogram, regardless of the complexity of the product. The approved design could immediately be put into production.
Most of the internal volume of a desktop nanosystem is devoted to open workspaces for manipulators. According to Drexler, ’it should be feasible to design a system that can be unfolded from linear dimensions of ~0.2 m to linear dimensions of <0.4 m…with the use of programmable manipulators to build a diverse set of structures from a smaller set of building blocks, the output of a set of specialized mills can be used to build an identical set of mills, as well as many other structures’. In other words, a desktop nanosystem can build an identical desktop nanosystem. Strictly speaking, it’s also possible to build a duplicate of a conventional factory: if we had the technology to build one of them, we could obtain the supplies to build its twin. But at that scale, it would take more than a year for a system to produce outputs with a complexity equalling its own. A 1 kg desktop nanosystem, however, would manufacture an identical system in about an hour. More than anything else, it is this capability that sets molecular nanotechnology apart from our current system. In principle, a society with access to nanosystems would be different from all previous economies, because the means of production themselves would be replicable from cheap, readily available elements. The only society based on a comparable system is Second Life, with its cheap, readily available prims that can be assembled into complex products and thereafter effortlessly duplicated. Drexler was unflinching in his appraisal of the expected consequences of molecular manufacturing: ’The industrial system won’t be fixed, it will be junked…the wholesale replacement of 20th Century technologies’. But then what? Replace the engines, indeed the entire system, of the Industrial Revolution, and what becomes of the society it supports? What about employment, money, social status? Would current notions of work and leisure still apply, and if not, would they be replaced by something better, or worse?
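The replication figures quoted above are worth making concrete. A minimal sketch, assuming an idealised one-hour doubling time and ignoring the feedstock, energy and heat-dissipation limits a real system would face:

```python
# If a 1 kg nanofactory builds another 1 kg nanofactory in an hour,
# the installed base doubles hourly (idealised: no feedstock, energy
# or cooling constraints are modelled here).
def total_mass_kg(hours, seed_kg=1.0):
    return seed_kg * 2 ** hours

print(total_mass_kg(10))  # after 10 hours: 1024 kg of nanofactories
print(total_mass_kg(24))  # after a single day: over 16 million kg
```

Contrast this with the conventional factory mentioned above, which would need more than a year to reproduce outputs of its own complexity; the hourly doubling, not any one product, is what makes the economics so alien.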


As I said earlier, one can find all manner of speculations about how society will adjust to this paradigm shift in manufacturing. Some of these commentators expect a kind of enlightenment to follow widespread access to nanosystems: ’People will not need to work to make a living. We will be living in a world where true equality exists’. Others see things entirely differently. Damon Knight’s story, ’A For Anything’, is a dystopian tale in which America breaks down into lawlessness, and innovation comes to be seen as intolerably disruptive, following the proliferation of ’gizmos’ that provide for all material needs.

And then there are those who question whether there is an incentive to develop the technology in the first place. In ’The Spike’, Damien Broderick asked, ’why should canny investors choose to move their money into the nano field if they can see…general assembler machines that literally compile material objects, including more of themselves? Where’s the profit in that?’ Another reporter witnessed at first hand the angry response of shop owners at the introduction of a device that could copy their wares: ’One resident created a massive boulder, instantiated it next to the vendor, and as it grew a hundred metres in diameter, flung cat, protesters, and embedded journalist in every direction’.

Well done if you successfully spotted that the last quote (from a blog entry by Hamlet Au) concerned the ’copybot versus SL sellers’ guild’ controversy. LibSL’s ’copybot’ caused a certain amount of panic upon its release in SL, as it was feared the economic system would collapse as everyone ran around making illegal copies of items they would otherwise have to pay for. In the end the Lindens stepped in and declared that using copybot or some similar tool was a violation of the TOS, punishable by exile from SL. The whole controversy seemed to fade away almost as quickly as it arose. Does this episode suggest that we should expect to see practical molecular assemblers similarly banned?

I asked such a question at a Thinkers discussion, and Nite Zelmanov was quite certain that copybot had not been banned. ’Copybot is NOT banned. What it does is 100% allowed. Doing that to things you’ve been told not to is a TOS violation’. When I asked if there were legitimate reasons for using copybot, Zelmanov answered, ’thousands of legitimate uses, (for instance) copybot is currently the only way to backup your prims/texture-based creations outside of SL. Many people use it for that today’. Similarly, would there be an advantage in pursuing molecular manufacturing, even if perfecting this technology would threaten the economic system as we currently understand it?

The answer is that pursuing molecular nanotechnology is more than a luxury we can choose to opt out of; it is a non-negotiable condition for survival. This has been known for over 200 years, following the publication of ’An Essay On The Principle Of Population’ by Thomas Malthus. Malthus noted that growing populations tend to expand exponentially, but the food supply can only increase by a fixed amount per year and so exhibits linear growth. Any rate of exponential growth must eventually outstrip linear growth, which means unchecked population growth must outrun food production. At which point, of course, starvation and death ensue.
Sounds grim, but this essay was published in 1798 and more than two centuries later food production is still keeping up with population growth. This tends to make his argument (and similar ones, like Paul Ehrlich’s ’The Population Bomb’, which in 1968 gloomily declared ’the battle to feed humanity is over…hundreds of millions are going to starve to death’) come across as needless doom-mongering. It is not. Predicting that exponential growth in population size WILL outrun resources is mathematically undeniable. Predicting WHEN limits will pinch is rather more difficult. Malthus failed to anticipate breakthroughs in farm equipment, crop genetics and fertilizer. Similarly, the so-called ’Green Revolution’ averted the catastrophe that Ehrlich foresaw thanks to new generations of high-yield crops and the industrialization of agriculture.
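Malthus’s core claim, that exponential growth overtakes any linear growth no matter how large the starting surplus, can be checked with a trivial loop. The growth rates below are arbitrary illustrations, not historical data:

```python
# Exponential population growth versus linear food growth: however
# large the initial surplus, a crossover year always exists.
# The 2% rate and the increments are arbitrary illustrative numbers.
population = 1.0   # arbitrary units
food = 10.0        # begin with a tenfold surplus
year = 0
while population <= food:
    population *= 1.02   # exponential: 2% per year
    food += 0.5          # linear: fixed increment per year
    year += 1
print(year)  # the crossover arrives after a finite number of years
```

Change the rates however you like; only the timing of the crossover moves, never its existence. That is exactly the gap between predicting WILL and predicting WHEN.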

Technology has delayed the Malthusian catastrophe, but is itself dependent on the Earth’s natural resources. It would be more appropriate to say the Green Revolution diverted the problem rather than averted it. The US food system consumes ten times more energy than it produces in food. Fossil fuels make this disparity possible, but they are in finite supply. It also requires copious amounts of fertilizer. Worldwide, more nitrogen fertilizer is used per year than can be supplied through natural processes. Finally, it is dependent on water. We can grow twice as much food as we could a generation ago, but require three times as much water to do so. Fresh water is being lost at a rate of 6% per year, and in the last three decades we consumed a third of the world’s natural resources.

In four decades’ time, the Earth will be able to provide a maximum of 3.5 acres of land per person. Environmentalists talk about ecological footprints in order to give an idea of how heavily we consume natural resources. A person with a 4-acre footprint requires 4 acres’ worth of resources to maintain their lifestyle. Currently, the Earth can provide 5.3 acres of land per person, and so an ecological footprint of 4 acres would be sustainable. Unfortunately, developed nations are not living within their means. The ecological footprint of the average American is 24 acres’ worth of land. The average Brit uses 11 acres’ worth of land. These and other countries are said to have an ecological deficit, because the number of acres that exist in each country is not sufficient to support the lifestyles of the populace. This fact has led to the conclusion that the Malthusian catastrophe was postponed, not averted.
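The footprint arithmetic is simple enough to restate in a few lines, using only the acreage figures quoted above:

```python
# Ecological deficit = footprint minus available biocapacity per person.
# All figures are the ones quoted in the text (acres per person).
biocapacity_today = 5.3
biocapacity_in_40_years = 3.5
footprints = {"average American": 24, "average Brit": 11,
              "sustainable example": 4}

for who, acres in footprints.items():
    deficit = acres - biocapacity_today
    status = "ecological deficit" if deficit > 0 else "within means"
    print(f"{who}: {status} ({deficit:+.1f} acres)")
```

Note that even the ‘sustainable’ 4-acre footprint exceeds the 3.5 acres per person expected in four decades’ time, which is the whole point of the paragraph above: the margin is shrinking from both ends.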

Now that nearly all the productive land on this planet is being exploited by agriculture, the only way to keep delaying the Malthusian Catastrophe is to learn how to manage what we have more efficiently. In practical terms, this largely involves developing ways of exercising finer and finer control over matter and obviously molecular nanotechnology is the endpoint of such ongoing efforts. We are discovering, as our control over matter heads towards the molecular level, that what we took to be fundamental problems were in fact temporary failings of inferior technology. I suspect most people believe industry is inextricably linked with sewage, waste and pollution. But all of these result from inadequate control over how matter is handled. Toxic wastes consist, generally speaking, of harmless atoms arranged into noxious chemicals; much the same can be said of sewage. Such waste could be converted into harmless forms once we have the tools to work with matter at the molecular level. With much greater control over the handling of matter, waste would no longer be disposed of by dumping it into landfills, rivers and the air. Products we no longer need would be disassembled into simple molecules, ready for near-total recycling. There may still be some waste in the form of leftover atoms, but these would most likely be ordinary minerals and simple gases.

Some forms of material (elements like lead, mercury and cadmium) are intrinsically toxic. Such elements need play no role in molecular manufacturing processes or products, though they may be introduced into the system via a bad mix of raw material. Since molecular manufacturing cannot create elements (only combine them into complex structures), the process cannot be blamed if such elements come out. If toxic elements resulting from a bad mix of raw material do come out, chemically bonding them into a stable mineral and putting them back where they came from would be the best method of disposal.

The pollutants causing the most concern right now are greenhouse gases. Earth would be too cold to support life forms such as humans were it not for gases in the atmosphere, like carbon dioxide, that trap some of the heat from the Sun as it is radiated back from the surface of the Earth. That our planet has long had the right amount of greenhouse gases in the atmosphere to make it hospitable is no coincidence. In fact, the geosphere and biosphere interact in complex ways that control the levels of carbon dioxide, rather like the way a thermostat regulates the temperature in a room. Or, rather, they did. But since the Industrial Revolution, industry powered by the burning of fossil fuels has artificially pumped billions of metric tons of carbon dioxide into the atmosphere, which is far more than those self-regulating systems can cope with. As they break down, the Earth’s climate may change in ways that are detrimental to us.

Time for more scary statistics. As developing countries rise from poverty to prosperity, it is expected that CO2 emissions will rise sharply. Singapore’s development saw its emissions rise from 1 metric ton of CO2 per person to 22 metric tons in three decades. India’s economic development is expected to increase its emissions from 1.1 metric tons per person, to 12 metric tons. It seems that rising prosperity goes hand in hand with worsening air quality and declining climate stability.
But such predictions stem from assumptions that greater wealth means greater resource consumption, that the burning of fossil fuels, deforestation and scarring the Earth to mine for minerals will continue, and that pollution is the inevitable consequence of industry. Drexler pointed out that all of these assumptions depend on the belief that industry as we know it cannot be replaced. The successful development of molecular nanosystems would refute this assumption, and turn what’s now considered a harmful pollutant into a useful resource. After all, carbon is the main building material in molecular manufacturing, and 20th century industries have pumped enough into the atmosphere to provide 31,000 kilos for every person alive today. Using almost nothing but the waste from 20th century industry, a civilization built on nanosystems could support a population of 10 billion to a high standard of living, using just 3% of present US farm acreage to do so.

Of course, this will require energy as well as matter. Molecular nanotechnology can help in two ways. Firstly, it would greatly reduce the energy requirements of manufacturing. Reductions in energy requirements are also something we see in SL. In ‘Second Lives’, the author Tim Guest noted that a flight across the Atlantic to interview Philip Rosedale produced the same amount of carbon as running a small family car non-stop for two years. But an inworld interview between their avatars produced a carbon footprint ‘equivalent to keeping the fridge door open for five minutes’. The same author noted that 16 acres of SL real estate consumes 280 kilowatt-hours of electricity per year, whereas a thousand-square-foot retail outlet in RL consumes as much energy in a week.

The strongest materials that can be produced in bulk today achieve a mere 5 percent of theoretical molecular strengths. ‘Things break down’ seems to be a fundamental law, but a lot of atoms need to be out of place for failures to occur, and so the occurrence of failures can be attributed to the fact that we currently handle matter in a very crude fashion. The more precisely a product is manufactured, the less likely it is to contain the atomic defects, impurities, dislocations, grain boundaries and microcracks from which malfunctions arise. Molecular manufacturing would yield materials that are many times stronger, and a great deal lighter, than steel. Products would be assembled with atomic precision. They would be tough and reliable, making malfunctions virtually non-existent.

The second way molecular nanotechnology can help is by making solar power a viable source of energy. Currently, solar power costs about $2.75 per watt, but it has been estimated that nanotechnology will lower the price by a factor of ten to one hundred. Molecular manufacturing would not only make solar cells cheap, but tough enough to replace asphalt as the material of choice for surfacing roads. Drexler also explained that ‘Molecular manufacturing…could make them tiny enough to be incorporated into the building blocks of smart paint. Once the paint was applied, its building blocks would plug together to pool their electrical power and deliver it through some standard plug’. Thanks to the massive reductions in energy demand made possible by the far lighter and more durable products of molecular nanotechnology, a global industry built on nanosystems would require only 30 trillion watts, which could be obtained by capturing a mere three ten-thousandths of the solar energy striking the Earth.
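That closing fraction can be sanity-checked from textbook values. The 30-terawatt demand is the figure quoted above; the solar constant and Earth’s radius are standard reference values, and the back-of-envelope calculation is my own:

```python
import math

# Sanity check: what fraction of the sunlight intercepted by Earth
# would a 30-terawatt nanosystem industry need? Solar constant and
# Earth radius are standard textbook values.
SOLAR_CONSTANT = 1361.0   # watts per square metre at Earth's distance
EARTH_RADIUS = 6.371e6    # metres

intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2  # ~1.7e17 W
demand = 30e12            # 30 trillion watts

print(f"fraction needed: {demand / intercepted:.1e}")
# comes out on the order of a few ten-thousandths, consistent with
# the figure quoted above
```

The answer lands around two ten-thousandths, the same order of magnitude as the essay’s ‘three ten-thousandths’, which is as close as a back-of-envelope check of this kind can be expected to get.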


At this point, it is worth debunking a few arguments that critics often assume are devastating to the concept. Some people believe that oil companies will use their financial might and political influence to block the development of any technology that could replace fossil fuels as the lifeblood of industry. In all fairness, there is evidence to support such a view. For instance, when municipalities in California decided to purchase cleaner, natural gas buses, the diesel industry sued to block the switch. It’s worth remembering, though, that fossil fuels have a limited lifespan as an energy source. They become unviable, not when the Earth’s stockpiles are completely exhausted, but when the energy required to extract a barrel of oil exceeds the energy that can be obtained from a barrel of oil. Oil discovery and development costs tripled from 1999 to 2006, which led J. Robinson West (chairman of the oil industry consulting firm PFC Energy) to comment, ‘there are no easy barrels left’. Talking about the fundamental limitations that will eventually block further improvements in integrated circuits, Hans Moravec said, ’as long as conventional approaches continue to be improved, the radical alternatives don’t have a competitive chance. But as soon as progress in conventional techniques falters, radical alternatives jump ahead’. I suppose it’s possible that oil barons will block development of alternative energy right up until the moment fossil fuels have utterly exhausted their energy potential, causing the total collapse of civilization and propelling us back to the Stone Age. I think it is rather more likely that the paradigm shift Moravec spoke of will apply to radical alternatives in energy.

Another poor criticism is one I like to call the ’non-existent flying car’ argument, whereby a critic points out that futurists predicted we would have flying cars (or some other device) by the year 2000, devices that, several years after the millennium, have failed to materialise. They then confidently assert that predictions concerning molecular manufacturing are just as dubious. The massive flaw in such arguments is that they fail to recognise that molecular nanotechnology is not a single device (like a flying car) but rather the endpoint of a trend that pervades all of technology and nearly all scientific progress. That trend is the pursuit of ever more precise control of matter. Scientists in the field of chemistry are always striving to synthesize more complex chemicals. This requires the development of instruments that can be used to prod, measure and modify molecules, helping chemists to study their structure, behaviour and interactions at the nanoscale. Biologists strive not only to find molecules but to learn what they do. Molecular manufacturing would provide the means to map cells completely and reveal the molecular underpinnings of disease and genetic disorder.

Materials scientists strive to make better products. With access to molecular manufacturing, a few tons of raw material would produce a billion cubic-micron-sized samples, and one laboratory could do more than all of today’s materials scientists put together. Moreover, molecular manufacturing would allow new materials to be built according to plan, making the field far more systematic and thorough than it is now. On a related note, car, aircraft and especially spacecraft manufacturers are obsessed with chasing the Holy Grail of materials science, which is to produce products that are both lightweight and strong. Reducing mass saves materials and energy. Products made of diamondoid would have an identical size and shape to those we make today but would be simultaneously stronger and 90% lighter. Constructing such material requires advanced mechanosynthesis, made available through molecular nanotechnology.

In fact, manufacturing as a whole continually strives to make better products, and so the natural endpoint is the precise, molecular control of complex structures: the very definition of nanosystems. As each individual field gets closer to this goal, it will become increasingly necessary to collaborate with other fields, because nanotechnology is distinguished by its interdisciplinary nature. We see such developments occurring today. The most advanced research and product development calls for knowledge of disciplines that hitherto operated mostly independently of one another. The full potential of nanotechnology lies in the gaps between academic fields.

Molecular nanotechnology might require expertise in multiple fields, but currently the academic community is not geared towards such multidisciplinary research. This may well be a consequence of the division of labour that we talked about earlier, which favours specialists over generalists, but it has led to a similar division in the research community, where proposals are evaluated by experts within one field who have little or no understanding of developments in fields outside their expertise. Where nanosystems are concerned, this has led to pseudoexperts who debunk their own misunderstood version of Drexler’s proposal. As Drexler explained, ’a superficial glance suggests something is wrong: applying chemical principles leads to odd-looking machines, applying mechanical principles leads to odd-looking chemistry’. What is actually lacking is an appreciation of the deeper view of how these principles interact.

One particularly prevalent misconception (one I myself unfortunately made in part one) is to describe molecular nanotechnology in terms of ’building things atom by atom’. This approach is rightly criticised as infeasible, because unbound reactive atoms would react with and bond to the manipulator (this was dubbed the ’sticky fingers problem’ by nanocritic Richard Smalley). But this fact is not a problem for molecular nanotechnology, because assemblers were never conceived as tiny tweezers picking up and positioning atoms in the first place. What these assemblers actually do is mechanically guide reactive molecules: whereas ordinary chemistry involves a lot of molecules wandering around and bumping together at random, assemblers would control how molecules react via a robotic positioning system that brings them together at a specific location and at the desired time. In short, Drexlerian nanotechnology applies the principles of mechanical engineering to chemistry. It is properly defined as ’the process that uses molecular machinery to guide reactive molecules’; it is a misconception to describe it as ’building things atom by atom’. It IS true that the construction of specific molecules is governed by the physical forces between the individual atoms composing them, and it is equally true that controlling the motions and reactions of individual molecules implies controlling the motions and destinations of their individual constituent atoms, but it is NOT true that molecular nanotechnology builds with individual, unbonded (and, hence, highly reactive) carbon atoms. That is not the only misconception; there are many more, and they can all be attributed to the same root cause: hardly anyone knows both chemistry and mechanical design. Fortunately, there is a solution.


In part one, I said ’if you want to be a nanotechnologist you have to have a grounding in chemistry’. In the 1960s, students in American grade schools and junior high schools were taught to use numbers written in base 2 (binary), as well as the more familiar base 10, because it was assumed that the approaching ’computer age’ would require everyone to be adept at writing assembly language programs. The vision of computers pervading our working and social lives was certainly spot-on, but today hardly anyone needs to know that all these machines ultimately do is addition and multiplication of integers, so neatly are these mathematical operations hidden beneath search engines, graphical user interfaces and pre-installed software packages. Engineers have long been used to computer-aided design software that can reduce what would have been weeks of work with pen and paper to a simple click of the mouse. Now we are beginning to see CAD packages like NanoEngineer-1, a 3D molecular engineering program that ’has been developed with a familiar intuitive user interface for mechanical engineers with experience using CAD…NE1 doesn’t even require the user to know much about chemistry to use it’. If simulations incorporated into CAD software can help engineers absorb chemical rules without learning chemistry in the classic nose-in-a-textbook sense, that would be a step towards opening the bottleneck caused by a shortage of knowledgeable designers.

The metaverse may offer further solutions to the problem of bringing disparate teams of scientists and engineers together. Systems and applications tend to carry a lot of complexity, largely because they have grown from the machine up. IBM’s chief technology strategist, Irving Wladawsky-Berger, put it like this: ’We do our machines, we do middleware, we do applications, then we put in a thin layer of human interface’. But, by recreating person-to-person social and commercial interactions in an online 3D space, SL demands interfaces that give top priority to the user. ’One of our biggest challenges is to make IT systems and applications far, far more useable to human beings. IT systems in business, healthcare, education, everything’.

Now, it must be admitted that ’intuitive’ and ’user-friendly’ are not words that immediately spring to mind when describing SL’s user interface. But in 2007 the viewer software was open-sourced, making the client software accessible for modification and improvement by ’a bigger group of people writing code than any shared project in history, including Linux’ (Cory Ondrejka). A few companies have already taken up the challenge of improving SL’s client code. One such company is Electric Sheep. Its CEO explained, ’LL has done extraordinarily well creating a platform for motivated early adopters, but they have not made the front-end experience ready for the mass-market. These barriers will be addressed very rapidly upon the adoption of the open source initiative’.

Meanwhile, Linden Lab recently announced its plans to team up with IBM to create 3D Internet standards, with the intention of eventually allowing users to ’connect to virtual worlds in a way similar to the way users move across the Web’, seamlessly travelling from one online world to the next while retaining the same name, appearance, and attributes such as digital assets. Also being worked on are ’requirements for standards-based software designed to enable security-rich exchange of assets in and across virtual worlds (allowing) users to perform purchases or sales with other people in VR worlds for digital assets’ (which, with the availability of desktop assemblers, would include the control files that instruct the system in how to assemble building blocks into finished products). And, yes, they are working on developing more user-friendly interfaces.

SL is but one example of a growing range of web-based applications, and collectively these are developing in directions that may supply yet more solutions to the design bottleneck. In an earlier essay (’The Metaverse Reloaded’) I pointed out that ’the Internet is evolving…into a vehicle for software services that foster participation and collaboration’. Gwyneth Llewelyn’s way of explaining what ’Web 2.0’ means was ’List all possible media, list the word “shared” before it, and we’ve covered the whole spectrum of Web 2.0 applications’. Then she asked ’is that all?’, which rather implies that tools promoting the sharing and ’mashup’ of information (i.e. combining two or more sets of data into an integrated whole, for example overlaying air-traffic control data on Google Earth) have little hope of generating genuinely new discoveries.

Yes, well, declarations like that reveal a failure to appreciate that nanotechnology exposes the core areas of overlap in the fundamental sciences (physics, materials science, mechanical engineering, life sciences, chemistry, biology, electrical engineering, computer science, IT). Recall that the problem academia currently has in developing nanosystems stems from the fact that each discipline has developed its own proprietary vernacular, effectively cutting each one off from its neighbouring disciplines and making exploration of the gaps between scientific fields (where the potential of nanotechnology lies) almost impossible. Vernor Vinge proposed a way out of this dilemma that has obvious parallels with Web 2.0: ‘In the social, human layers of the Internet, we need to devise and experiment with large-scale architectures for collaboration (and) extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technical jargon’.

As it happens, many scientists are beginning to use blogs as modern-day intellectual salons, and there is a growing number of science-based social networking sites and data-sharing tools. For example, the publishing group responsible for the journal ‘Nature’ has developed ‘Connotea’, which adds a toolbar to your web browser that allows you to save a link whenever you come across an interesting reference. You then tag your references with keywords, which lets you share your bookmark library. There is much scope for social bookmarking applied to scientific research, bringing together once-disparate groups with hitherto unseen complementary problems and solutions.

Being breezily optimistic for a moment: as online worlds develop into the metaverse, they will incorporate CAD tools that show, using quantum chemistry calculations, how molecular structures affect each other. This information will not be represented by coldly abstract equations and graphs; it will tap into the visual and tactile senses we evolved to be past masters at using. The maths will be hidden deep within beautifully intuitive user interfaces. The gaps between disciplines will be filled in by new generations of Web 2.0 collaborative tools which, together with visualization packages that allow specialists to see what was once visible only to the polymath, will bring about a thorough exploration of nanosystem design space.


It had better be like that, because getting nanosystems out of the conceptual stage and onto the market is a formidable challenge. One of the deceptive things about molecular nanotechnology is that it sounds so simple. Descriptions written for the layperson compare molecules and their bonds to the parts in a Tinkertoy set, and coupled with a reference to Lego bricks when explaining the utility of nanoblocks, the overall picture is one of child’s play. One person who made some headway in dispelling notions that developing productive nanosystems would be easy is Lyle Burkhead, the second person to join up as a senior associate of the Foresight Institute (a non-profit organization dedicated to the development of safe molecular nanotechnology). His reasoning begins with a reference to the one existing proof-of-principle that complex systems can be built from the molecular level up: life. Machines require struts and beams to hold positions; cables, bearings and fasteners to transmit tension and connect parts; motors to turn shafts and drive shafts to transmit torque. In biology, there are molecular structures that perform all of these functions. Nanosystems would need tools to modify workpieces, production lines to control devices, and control systems to store and read programs. Again, nature shows us the feasibility of building such systems on the nanoscale. Enzymes and reactive molecules modify workpieces, ribosomes control devices and the genetic system stores and reads programs.

So what’s the problem? Well, Burkhead pointed out that ‘a general purpose, programmable system would be like a general purpose (programmable) ant colony. How much would you have to know about ants, their society, their genome, before you could make them programmable and able to build structures to specification?’. A further problem is that, because molecules and micron-scale blocks are so tiny, enormous numbers of them are needed in the construction of macro-scale objects. By way of illustration, let’s compare the total number of prims in SL to the number of nanoblocks required to build a 1 kg object. I don’t know what the maximum number of prims SL can render is, but an attack of self-replicating spheres reached 5 billion, so let’s assume that Linden Lab’s servers can handle a maximum of ten billion prims. Well, the number of micron-scale blocks needed to build a 1 kg object is substantially greater: a million billion (10^15). Furthermore, each of these blocks would itself be built from a hundred billion molecular fragments, an order of magnitude above my hypothetical maximum number of prims. Drexler has calculated that a thousand people making a thousand design decisions per second would require a century of eight-hour days to design a single cubic micron. And then you would need a million billion lines of code to specify how 10^15 such blocks should be positioned to build that 1 kg object.
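The scale mismatch can be made concrete with a little arithmetic. The sketch below uses only the figures quoted above; the ten-billion-prim ceiling is, as noted, my own guess rather than a measured limit:

```python
# Back-of-envelope comparison of SL's prim count with nanoblock counts.
# Every input here is a figure quoted in the text, not a measurement;
# the prim ceiling in particular is a hypothetical maximum.

SL_MAX_PRIMS = 10**10          # assumed ceiling: ten billion prims
BLOCKS_PER_KG = 10**15         # micron-scale blocks in a 1 kg object
FRAGMENTS_PER_BLOCK = 10**11   # molecular fragments per block

total_fragments = BLOCKS_PER_KG * FRAGMENTS_PER_BLOCK

# A single nanoblock already exceeds the hypothetical prim ceiling tenfold
print(FRAGMENTS_PER_BLOCK // SL_MAX_PRIMS)   # 10
# And a 1 kg object needs 10^26 molecular fragments in total
print(f"{total_fragments:.0e}")              # 1e+26
```

In other words, even if every prim the grid could ever render were repurposed as a molecular fragment, it would account for a tenth of one nanoblock out of the million billion needed.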

A nanofactory would be composed of systems and subsystems whose components are beyond current engineering feasibility. I don’t think it’s too much of an exaggeration to say that Drexler’s ‘Nanosystems’ (published in 1992) is comparable to some visionary conceptualising Second Life in 1832, when the computer existed only as principles outlined by Charles Babbage, and the communications system invented by Samuel Morse was still five years from realization. All of which raises two questions. How do you build a system when its complexity lies beyond technical feasibility, and how do you write a program with a million billion lines of code when such an endeavour is out of the question on grounds of complexity?


The answer to both questions can be found in computer science. “Bootstrapping” is a term that describes the process of putting together a complex system by hooking up a number of more primitive parts. There is also an analytical tool engineers use known as ‘backward chaining’. The idea is that you start with a goal, and then work backwards through a series of intermediate steps until you arrive at capabilities that are currently accessible. Regardless of size, manufacturing any machine requires two capabilities: fabrication of parts and assembly of parts. Assembly of parts can be achieved in two ways. You can actively position parts in the desired location and orientation, a process known as positional assembly; or you can allow the parts to move at random until they ’settle in’ to the right position: self-assembly.

We already possess three enabling technologies that demonstrate at least a primitive parts fabrication and assembly capability on the nanoscale: biotechnology, supramolecular chemistry and scanning probes. Molecular biologists and genetic engineers have demonstrated the possibility of achieving positional assembly using microbes, viruses, proteins and DNA. Gerald Sussman of MIT said, ’Bacteria are like little workhorses for nanotechnology; they’re wonderful at manipulating things in the chemical and ultramicroscopic worlds’. DNA has proved useful for nanoscale construction purposes, acting as a molecular scaffolding. Simple molecular machines have also been built from DNA. Researchers from Ludwig Maximilians University have built a DNA-based molecular machine that can bind to and release single molecules of a specific type of protein, and which can be made to select any of many types of proteins. A prototype of a nanoscale robot arm has also been constructed using DNA. This controllable molecular mechanical system has two rigid arms that can be rotated between two fixed positions.
Various protein-based components and devices have also been constructed. Kinesin motors attached to flat surfaces in straight grooves were shown passing 25-nanometer-wide microtubules hand over hand in the manner of a ciliary array. Ratchet-action protein-based molecular motors are well known in biology, a special genetic variant of yeast cell prions has been used to self-assemble gold-particle-based nanowires and, according to Rob Freitas and Ralph Merkle, ‘antibody molecules could be used to first recognise and bind to specific faces of crystalline nanoparts, then as handles to allow attachment of the parts into arrays at known positions, or into more complex assemblies’.

Supramolecular chemistry can build up complex molecular parts from simpler molecular parts. Kurt Mislow has synthesized molecular gear systems which ‘resemble, to an astonishing degree, the coupled rotations of macroscopic mechanical gears. It’s possible to imagine a role for these and similar mechanical devices, molecules with tiny gears, motors, levers etc in the nanotechnology of the future’. Also, Markus Krummenacker is developing molecular building blocks, with the intent of opening up a pathway that leads to a primitive, polymer-based assembler.

Protein engineering and macromolecular engineering are examples of self-assembly. A disadvantage is that solution-phase synthesis cannot provide orientational or positional control, and it has a maximum complexity of around 1000 steps. Values in the upper range are seldom achieved, and lie an order of magnitude below the number of steps required to assemble the molecular machine systems in even Drexler’s most primitive design.

The third type of enabling technology, scanning probes, has provided experimental proof that molecules and molecular parts can be mechanically positioned and assembled with atomic precision. A group at the University of North Carolina has created an interactive haptic control system called ‘Nanomanipulator’. This device enables users to ‘feel’ the interatomic forces as atoms are pushed around on a surface, using a hand-held master-slave controller that drives an STM probe while the position of the atom is displayed on a monitoring screen visible to the user.

But, while reproducing a map of the world, a portrait of Einstein, or IBM’s corporate logo by positioning individual atoms is pretty impressive, it is still a very long way from the complex three-dimensional lattices that would be required in molecular nanotechnology. Work is underway to build nanoscale attachments that will act as ‘grippers’ for binding and manipulating specific molecules. These grippers will emphatically not be tweezers, mechanically picking up atoms. Rather, they might be something like fragments of antibody molecules. The trick would be to get the “back” of the molecule stuck onto an AFM tip, which would then allow the “front” to bind and hold molecular tools. According to Drexler, ‘if you want to do something with tool type A, you wash in the proper liquid, and a type A molecule promptly sticks to the gripper…once the tip has positioned a molecule, it reacts quickly, about a million times faster than unwanted reactions at other sites’. Thus, such a modified AFM tip would enable the mutual positioning of the reactive groups, forcing a chemical reaction at the desired location and nowhere else. Drexler has calculated that, because it would accelerate desired reactions by a factor of a million or so, a molecular manipulator could perform up to 100,000 steps with good reliability.
However, solution-phase synthesis is massively parallel, because a chemical reaction typically makes many trillions of molecules at once. In contrast, an AFM-based manipulator may be able to construct a large molecular aggregate, but it would do so one molecule at a time. Manipulator-made products would therefore be trillions of times more expensive, and the procedure would tie up a very expensive scientific instrument for hours in order to build that one large molecule. If you wanted to construct another scanning probe microscope this way, it would take on the order of a million million million (10^18) years for the instrument to manipulate its own mass in molecules.
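That timescale is easy to reproduce with rough figures of my own choosing: say the microscope masses about a kilogram, an average placed molecule masses about 10^-25 kg, and the manipulator places one molecule per second. All three numbers are illustrative assumptions, not values from the text, but they land within an order of magnitude of the figure quoted above:

```python
# Rough sketch of why serial, one-molecule-at-a-time assembly cannot
# build macroscopic objects. All three inputs are illustrative guesses.

INSTRUMENT_MASS_KG = 1.0      # assumed mass of the scanning probe microscope
MOLECULE_MASS_KG = 1e-25      # assumed average mass of one placed molecule
PLACEMENTS_PER_SEC = 1.0      # assumed serial placement rate

molecules_needed = INSTRUMENT_MASS_KG / MOLECULE_MASS_KG   # 1e25 molecules
seconds_needed = molecules_needed / PLACEMENTS_PER_SEC
years_needed = seconds_needed / (365 * 24 * 3600)

print(f"{years_needed:.1e}")   # roughly 3e+17 years
```

Even speeding the placement rate up a millionfold only brings this down to hundreds of billions of years, which is why massive parallelism is non-negotiable.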
Needless to say, AFM-based molecular manipulation won’t be put to such purposes. Instead, it would provide chemists with useful information concerning the building blocks and assembled structures of the components needed to begin the multi-stage development of nanosystems. In order to begin stage one, we would need to develop more than 400 self-assembling building blocks, each assembled from 50 monomers apiece (a monomer is a molecule that is able to bond in long chains). These building blocks would self-assemble into folded polymers (long strings of linked molecules) with 10-100 parts. Brownian assembly of medium-scale building blocks of folded polymers would lay down the technical foundations required to attempt stage two in Drexler’s pathway towards nanosystems. This stage would require arrays of complex molecules serving as feedstocks, suspended in pure liquid rather than the solution of stage one’s working medium. The structural material built from this feedstock would be cross-linked polymers, suitable for solution-based systems that would take an estimated hour to copy themselves and several days to produce a macroscopic quantity of systems. Drexler explained that this would establish a technology base that would allow ‘advance…towards inert interiors, then towards more active reagents’. (A reagent is a chemical structure, such as a molecule, that undergoes change as a result of a chemical reaction).

These advances would allow us to develop the technologies required for stage three: systems flexible enough to assemble a variety of different molecular building blocks. The structural material would still be cross-linked polymers, but now systems would be put together via positional assembly, rather than the folding-block assembly of stage two. Mechanochemical reactions would be capable of generating subassemblies from a smaller range of feedstock molecules. The technical knowhow facilitated by stage three would allow progress towards devices made from diamondoid materials. The system would be able to increase the frequency of its operation, because the working environment would be suitable for more active reagents. Instructions and control, which hitherto had come from outside via acoustic pressure waves, would be replaced with internal control and data storage devices that could activate complex subroutines from brief instructions. These advances would give us the capability to build multiple production lines, and various other developments that would enable feedstocks to be simpler, less pure, and therefore less expensive. From there, we would have the technical capability to attempt stage four: desktop nanosystems whose assemblers would be made up of a billion molecules. Cross-linked polymers would have been replaced by diamondoid solids, assembled in a vacuum environment with each atom bonded in the exact place planned by advanced computational modelling.

Let’s recap this backwards-chaining analysis in Drexler’s own words. ‘A series of steps can enable a relatively smooth transition from solution-phase assembly of monomers…to the assembly of diamondoid mechanisms in an inert (eventually, vacuum) environment using highly reactive reagents’. Simplifying further, developing productive nanosystems would require the ability to build complex macromolecular structures in a solution environment, and eventually the mechanosynthesis of macroscopic structures in a vacuum environment. But now we have fallen into the trap of downplaying the challenge of developing molecular nanotechnology again. Saying that you need to build macromolecular objects and macroscopic structures in order to develop nanosystems is akin to saying you need to combine suitably shaped prims and write scripts in order to build the content of Second Life.

Notice that, while builders are concerned with combining prims and writing scripts, putting it like that does not convey a sense of the enormous practical challenges met by SL’s creative community. It brushes over the long, hard work demanded by the host of sub-problems generated at every step along the path towards building the content of this online world. I would argue that building and scripting the content of SL amounted to researcher-centuries of effort. And you can be quite sure that the pathway towards nanosystems will also be plagued by problems and sub-problems, demanding interdisciplinary collaboration across the fundamental sciences, amounting to researcher-centuries of effort.

However, it’s worth remembering that the successful realization of stage four molecular manufacturing would give us digital control over matter, and therefore building the next nanofactory would be a whole lot easier. Why? Well, recall that in SL, getting an idea from concept to finished build takes some effort, but once completed you can easily duplicate it. A well-designed molecular manufacturing system would essentially treat atoms as bits, and a practical design for its molecular mills, assemblers and all other components requires them to be made from materials they can handle. So what? Because, as Damien Broderick explained, ‘it might cost a zillion dollars and exhaust the mental reserves of an entire generation…but once it’s there in its vacuum tank, once its specs are in the can…it will make its twin, and they’ll make another two, so you have four, then eight, then sixteen…and by god at the end of the day you will look into your garden, at your handiwork, and you will see that it is good’.
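Broderick’s doubling sequence compounds very quickly. A minimal sketch, assuming (purely for illustration; the text gives no replication time) that each nanofactory takes an hour to copy itself:

```python
import math

# How fast does 'it will make its twin, and they'll make another two...'
# scale up? The one-hour replication time is an illustrative assumption.
REPLICATION_TIME_HOURS = 1.0
TARGET_FACTORIES = 10**9   # a billion nanofactories

doublings = math.ceil(math.log2(TARGET_FACTORIES))
print(doublings)                              # 30 doublings suffice
print(doublings * REPLICATION_TIME_HOURS)     # about 30 hours from one to a billion
```

This is the sense in which the first nanofactory is the expensive one: thirty doublings take you from a single zillion-dollar prototype to a billion copies, on any replication timescale short of geological.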


The ability of the means of production to replicate themselves from cheaply available elements is what underlies most of the utopian expectations of a society with molecular nanotechnology. One commentator on an online forum asked ‘why the hell would anyone pay for something nano makes with no effort?’. Second Life, though, suggests such an argument holds no water. After all, this is a world whose content is built from resources instantly available wherever you happen to be at negligible cost, and which can be duplicated with no effort. But most reporting on Second Life does not describe a world where products are given away free. Instead, it’s all about the money: ‘non-existent’ objects being bought and sold for real cash, land barons earning fortunes from virtual property. Also, Gwyneth Llewelyn wrote about the socio-political beliefs that SL residents subscribe to (‘anarcho-syndicalists’, ‘anarcho-capitalists’, ‘libertarian/neoliberalists’). Of these groups, only the first ‘idealise a SL where money, land and prim limits are unnecessary’. I don’t know how many residents consider themselves to be anarcho-syndicalists, but common sense dictates that the group believing money is unnecessary is in a minority compared to those who consider it necessary, for the simple reason that the latter are many groups and the former just one.

Still, it is by no means uncommon to see a reporter expressing surprise that SL has virtual goods trading hands for real money. But the fact that SL’s content has monetary value is not all that surprising when one considers the entire system that supports the likes of Aimee Weber or Fallingwater Celladore. Producing copies of virtual goods does happen automatically with little human intervention, but it’s only automated at one point in the manufacturing process. The design of the goods requires a concentration of effort, and promoting the company and its products requires ongoing work. All of this necessitates the coordination of many tasks, and this activity amounts to a dynamic economy, which is an essential element in building an online world compelling enough to sustain the interest of millions for indefinite periods.

Lyle Burkhead insisted that it would also be a necessary condition for delivering the fabled machine that produces anything you wish for (provided it is physically possible). We already have many goods that are put together via molecular manufacturing: all foodstuffs and timber fall into this category. So, how come oranges are not given away for free? Because, ‘they need fertilizing, watering, protection from insects. Oranges must be picked, put in boxes, shipped to store…The store has human employees, the fertilizer company has human employees and so on. The orange tree doesn’t exist in some separate space by itself, it’s part of the economy’.
This holds true for any material good. Each and every item that ends up in the shops is the end result of a great many tasks that must be done to get that product into our homes. A machine capable of producing anything you want would need to be a self-contained system that can make anything the world economy makes. To do that it would have to pack in the entire logic and process structures that collectively make up the expert knowledge of all the workers and managers who currently toil away in the many corporations that make up the global economy. As Burkhead cautioned, ‘all those jobs still have to be done because if you scale the economy down to the nano-level, it’s still an economy’.

But, didn’t we discover that all that work would be necessary only in developing the first mature nanosystem? Not really, no. Once completed, it would contain the instruction set for manufacturing another nanofactory nearly identical to itself, but that is all it (and its twin) would be capable of producing. Similarly, any individual item in SL’s stores can be copied into your inventory, but that one item can only duplicate itself. True, the store that sold it represents a system capable of turning prims into many products, and is itself one business amongst the many that make up SL’s economy, which is capable of turning prims into almost anything you want. But the many, many people who run that economy are seemingly unwilling to work for free. Why should their attitude change if, instead of building prims into useful products, they are instructing molecular mills and manipulators to organise molecules and nanoblocks into useful products?

Then again, participation in SL requires Internet access and a supply of electricity. It requires constant maintenance of the servers that run the SL grid. Even the most dedicated immersionist, hell-bent on projecting their mind into a digital persona, cannot ignore an empty stomach for too long, and larders don’t get stocked unless you pay money for food, or for whatever is needed to produce it. In short, all SL residents have RL bills to pay. This places an irreducible cost on every build. If our creative community came to the collective decision that they no longer needed to earn money, you’d better pray that the companies supplying their Internet access, electricity and food adopt the same attitude, or else supplying SL with content would soon become impossible.

Admittedly, one could argue that the SL community could engage in money-making work in RL, while being entirely altruistic in SL. But economics is ‘the allocation of scarce goods’. To anyone who has watched residents materialise prims out of thin air, prims can seem an abundant resource. In reality, they are one factor in a system otherwise constrained by scarcity, because the hardware storing and processing their bits is of finite capacity, and the bandwidth streaming that data to users’ PCs imposes more bottlenecks. So long as constraints remain, irreducible costs will be unavoidable, and any new manufacturing process would emerge in the same capitalist economy that SL is part of. Should we expect irreducible costs with advanced molecular nanosystems?

It seems more than likely that this will be the case. In all likelihood, the process of building functional products out of chemical feedstock would not be contained in a single system, but would instead be separated into nanofactories consisting of mills that build nanoblocks out of molecules, and other nanofactories that use manipulators to assemble those nanoblocks into macro-scale products. This scheme makes sense for several reasons. Probably the major one is that it would provide a way of avoiding runaway self-replication, because the mills would only be able to turn molecules into nanoblocks (but could not manufacture complex machinery) and the manipulators would be capable of building complex machinery but could not manufacture nanoblocks.

Drexler reasoned that micron-scale building blocks would be small enough to make almost any macroscopic shape in ordinary use today, to tighter tolerances than those provided by conventional machining. It would also allow construction of almost as wide a range of products as atom-precise nanosystems. Tom Craver suggested that ’products that cannot be made out of nanoblocks and require atom-precise assembly could be built by dedicated-function nanofactories, with the design built in at the lowest level without destroying the factory’.

Another advantage is energy consumption. Building products out of nanoblocks requires far less energy than atom-precise molecular manufacturing. Most of the energy consumed and heat released would occur during the fabrication of the nanoblocks themselves, rather than during the assembly of those blocks into macro-scale products. Assuming the blocks were re-usable, the energy used in manufacturing them would not be wasted.

Should we expect re-usable nanoblocks? Craver reckons that a profitable business could be built if manufacturing systems could copy themselves but the nanoblocks used in constructing most everyday items could not be re-used: that would quickly build up a huge market for nanoblocks. However, Craver also noted that this approach has drawbacks. If the nanoblocks could not be re-used, there would almost certainly be a massive increase in waste. People would be quickly compiling macro-scale objects and, once tired of a product for whatever reason, could only dispose of it via the less-than-ideal methods used today. On the other hand, any product built from re-usable nanoblocks could be broken down, its building blocks fed back into the compiler, ready to be assembled into another product. Craver concluded, ’given the value of recyclable nanoblocks for energy, cost-savings and convenient disposal, and the security risks of self-copying fabber components, it seems wisest to allow recyclable blocks but prohibit fabbers that can self-copy’.

No doubt, the well-publicized dangers of grey goo will make for a powerful reason to deny widespread access to self-copying nanosystems, particularly if block assemblers are quite capable of compiling almost anything a household requires anyway. But, from a commercial point of view, the more compelling reason for suppressing self-copying capabilities is that they would nullify the business model that funds R&D and manufacturing. Exponential assembly must be researched and developed, as it is the only way to build trillions of machine parts in a reasonable timeframe, but it seems doubtful that fully self-replicating nanosystems will make it into general use. This undercuts the scenario in which economies as we know them end, because productive economic activity would still be required in order to afford replacement nanoblocks, should a person’s current stock be tied up in products too useful or treasured to be worth disassembling.


All of which makes the promise of material wealth reduced to zero by molecular nanotechnology sound as hollow as Alvin Weinberg’s claim that nuclear energy would lead to power ’too cheap to meter’. Actually, he never claimed any such thing. Instead, he performed various calculations that apparently showed the power cost ’might have been’ as low as one half the cost of the cheapest coal-fired plant. He never actually claimed that nuclear energy would be too cheap to meter, yet somehow that catchphrase lives on in the public consciousness. Drexler shares something in common with Weinberg. His idea of molecular manufacturing has captured the imagination as the system that reduces manufacturing costs to zero, and yet one person who never claimed this would be the case is Eric Drexler. Rather, he argued that ’there will always be limiting costs, because resources- whether energy, matter, or design skill- always have some alternative use. Costs will not fall to zero, but it seems they could fall very low indeed’.

His reasoning for a dramatic lowering in cost is as follows. The cost of conventional machines is strongly dependent on the number of parts they contain, since more intricate systems require more parts and manufacturing operations. But the reliability and manufacturing cost of nanomachines is pretty much independent of the number of parts they contain. As Drexler noted, ’the number of assembly operations is roughly proportional to the number of atoms in the product, and hence roughly proportional to mass…costs will be insensitive to the number of separate mechanical parts’. In fact, an analysis of molecular manufacturing shows that the basic cost of production will be almost wholly determined by the cost of the chemical feedstocks.
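To put Drexler’s ’roughly proportional to the number of atoms’ point in perspective, here is a back-of-envelope calculation. This is a sketch of my own, not a figure from Drexler, and it assumes a pure-carbon (diamondoid) feedstock purely for simplicity:

```python
# Back-of-envelope: assembly operations scale with mass, not part count.
# Illustrative assumption: the product is pure carbon (e.g. diamondoid).
AVOGADRO = 6.022e23      # atoms per mole
MOLAR_MASS_C = 12.0      # grams per mole of carbon
GRAMS_PER_KG = 1000.0

atoms_per_kg = GRAMS_PER_KG / MOLAR_MASS_C * AVOGADRO
print(f"{atoms_per_kg:.1e} atoms (hence roughly that many assembly operations) per kg")
```

Whether that kilogram contains one moving part or a million, the operation count, and hence the basic cost, stays on the order of 10^25 per kilogram.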

But Rob Freitas made the point that there is a difference between ’cost’ and ’price’, saying ’in a capitalist economy, prices of goods are set by competitive markets’. We have seen that, in SL, the economy that is required to build and maintain a compelling online world imposes intangible costs on the price of inworld goods. Given that nanosystems will also emerge within the economy, they too will be subject to various intangible costs. Freitas argued, ’even if the cost of material and energy inputs fell to zero, say through the use of recyclable nanoblocks, there would still be an amortized capital cost plus a fixed intangible cost built into all products manufactured by the personal nanofactory…adding in the amortized initial capital outlay…plus intangible costs, manufacturing cost for consumer products should be $1/kg’. That certainly is cheaper than today’s manufacturing costs, which currently fall between $10/kg and $10,000/kg.
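Freitas’s $1/kg figure is, at bottom, an amortization argument. As a purely illustrative sketch (the capital outlay, lifetime throughput and per-kg intangible figures below are my own hypothetical inputs, not Freitas’s), the arithmetic looks like this:

```python
# A toy sketch of Freitas-style amortization. All figures below are
# illustrative assumptions, not values from Freitas or from this essay.
def cost_per_kg(capital_outlay, lifetime_kg, intangible_per_kg, feedstock_per_kg):
    """Amortized manufacturing cost per kilogram of product."""
    return capital_outlay / lifetime_kg + intangible_per_kg + feedstock_per_kg

# e.g. a $5,000 nanofactory producing 10,000 kg over its working lifetime,
# with $0.30/kg of intangible costs and $0.20/kg of feedstock
print(cost_per_kg(5000, 10_000, 0.30, 0.20))  # → 1.0
```

The point of the sketch is that even with free matter and energy, the capital and intangible terms keep the per-kilogram cost above zero.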

Molecular manufacturing will not lower the price of everything. Any rare element, like gold or platinum, would retain its value because nanotechnology cannot create elements; that requires nuclear physics, not chemistry. Also, given that the manufacturing cost of houses is already around $1/kg, it seems doubtful that we will all be instructing our nanosystems to build full-scale replicas of our SL mansions and castles.


The main expectation of an economy based on nanosystems is for the cost of material goods to fall to a negligible level, and for information to become close to 100% of the value of any product. In SL, particularly gifted designers charge thousands of Linden dollars for goods that cost next to nothing to produce. The raw materials may have no value, but their design expertise certainly does. It could well be the case that, even if a product costs $1/kg to manufacture, designers could charge much more than that for the all-important blueprints driving the assembly process. During a discussion I held on the societal impact of nanotechnology, Leia Chase argued, ’it will make the mass-produced nearly free, make services more expensive than goods, and make custom-designed items the commodity to those who think of themselves as wealthy’. All of which would sound entirely familiar to a resident of SL, because that is exactly how things work in this online world.

We saw earlier that the optimistic outlook for a society based on molecular nanotechnology stems from the massive drop in manufacturing costs it would enable. The dystopian scenarios are, in one way or another, attributable to the fact that nanosystems must be provided with a set of instructions to guide the assembly process. In this part of the essay, I shall be using the points raised in an article called ’Nanosocialism’, written by David M. Berube, who is a Professor of Communication at the University of South Carolina. The paper pretty much covers every negative possibility regarding the social impact of nanotechnology (those that fall within the scope of this essay, to be precise).

Berube’s first argument is that nanotechnology is a threat to current corporate profitability. Profitability is maximized by limiting production and by blocking supply substitution from competitors, which together keep supply down and demand high. At the same time, that demand is magnified by designing in obsolescence (which has the effect of sustaining levels of consumption) and by persuading customers that they need (rather than merely want) the product.

Berube argues that obsolescence, the aftermarket and substitution are critical to corporate profitability, and that molecular nanotechnology is a threat to the established order. How so? Because handling matter with digital control would make a product ’the final purchase within a product line that the customer needs’. It is digital because atoms in strong material are either bonded or they are not bonded. In-between possibilities do not exist. Because assemblers work by making or breaking bonds, each step in the manufacturing process either succeeds perfectly or fails completely. Unlike current manufacturing, whose parts are always made and put together with small inaccuracies, each step in molecular manufacturing is perfectly precise, so little errors cannot add up. Admittedly, thermal vibrations are likely to cause parts to come together and form bonds in the wrong place, so it is more accurate to say macro-scale products will be ’almost’ perfect, not ’absolutely’ perfect. But, a few misplaced atoms notwithstanding, products manufactured in this way would go significantly beyond the durability of today’s offerings. Eric Drexler visualized a rocket engine, built the nanotechnology way: ’Rather than being a massive piece of welded and bolted metal, it is a seamless thing, gemlike…its empty internal cells, patterned in arrays about a wavelength apart…producing a varied iridescence like that of a fire opal…Because assemblers have let designers pattern its structure to yield before breaking (blunting cracks and halting their spread) the engine is not only strong but tough’.

In all practical definitions of the word, wear and breakdown would be nonexistent for products assembled with atomic precision. The result, according to Berube, is that ’replacement and aftermarkets become irrelevant’. Now, as far as I can tell, wear and breakdown of SL ’products’ is similarly nonexistent. Clothes never fray, buildings never crumble, boots never lose their shine, jewellery never loses its lustre. True, they can mysteriously vanish from your inventory, but that annoyance aside I think it is true to say everything residents have built shall remain just like new until the end of the world. A further challenge for SL’s content providers is that ’needs’ are very much irrelevant. The whole world is a luxury item; nobody NEEDS to log into SL in the way we need to seek shelter and nourishment. The world of SL, then, is built around completely nonessential products that are utterly impervious to wear and tear. But despite all that, every day millions of items continue to be traded, driving an economy that can either be described in triumphant tones as ’the fastest growing economy on the planet’ or ’still a very tiny economy relative even to towns in RL’, depending on which statistics best serve your agenda. Either way, that economy persists, which suggests that a global market based on products invulnerable to wear and tear need not come to a dead end, after all.

So what’s going on? I think we need to consider another kind of obsolescence: ’Design’ obsolescence. Consider, for instance, how fashion designers in SL upped the ante. Clothes progressed from being mere 3D shapes, to shapes textured with images of ’real’ cloth, to clothes sculpted with creases and folds, to dresses that swung naturally with their wearer’s movement. Similar progress was made in all aspects of ’builds’ in SL, and it is clearly a sign of a community pushing a learning curve, discovering what can be done (while the limits of possibility move further out as the tools are debugged, improved, and expanded). Anshe Chung highlighted innovation as the key skill required to run a successful SL venture: ’The nature of the VR economy is that it’s hard to maintain margin when you do something everybody does…But when you are innovative you have even more opportunity than the real world’. So, in SL the bar keeps being raised and obsolescence is very much a part of this world, as items whose design does not incorporate the latest and best techniques look tawdry in comparison.

That earlier reference to the ultra-durable rocket engine did not do justice to the full potential of molecular manufacturing, for it goes way beyond merely improving current materials. Whereas today a single function is incorporated within a volume of the product, molecular manufacturing could see items with trillions of sensors, computers, motors and electronics. This is partly due to the incredible levels of miniaturization it would open up, but also because a nanofactory imposes negligible cost for each additional feature. This is in marked contrast to conventional manufacturing, in which product complexity is limited because the number of operations are minimized in order to reduce manufacturing costs.

Nanotechnology would do much to advance us beyond the expense, bulkiness, clumsiness and unreliability of today’s motors, sensors, computers, electronics and moving parts, and the limited flexibility that stems from all that. Drexler observed that fireflies and some deep sea fish use molecular devices capable of converting stored chemical energy into light. ’With molecular manufacturing, this conversion can be done in thin films, with control over the brightness and color of each microscopic spot’. Various other methods of fine control would give materials the ability to change shape, color, texture and so on, and this would give real world artefacts almost as much flexibility as virtual ones. As a consequence, the SL designer’s augmented ability to experiment fast and strange, get feedback, and experiment again would leak out into real world manufacturing and aftermarkets, resulting in the kind of rapid innovation required to cut it in the SL marketplace.

You can see why information and service jobs will assume a dominant role in the nanosociety. With goods able to pass from final design to mass production with ease, and with products potentially enabling degrees of customization unseen outside of virtual reality, molecular manufacturing would open up a competitive advantage in knowing customer preferences. We should expect a further move away from the traditional make-and-sell, command-and-control organization and toward the sense-and-respond, adaptive organizations that emerged as IT was integrated into businesses and realtime customer feedback became easier to gather and analyze.

The competitive edge in a society with widespread molecular manufacturing will come mostly from being able to focus on and respond to the changing moods of the customer. It’s interesting, then, that we are seeing a move away from a centralized delivery of services in SL (in the shape of welcome areas, orientation islands etc run by the Lindens) toward a more decentralized scheme in which 3rd parties develop customized login processes, welcome areas and other such services. The reason for this move is clearly that the sheer number of people joining SL makes a one-type-fits-all introduction to SL largely infeasible. One company cannot be expected to deliver myriad help islands and other services tailor-made to suit every group and subgroup that have now formed. As Gwyneth Llewelyn observed, ’the whole login process has to clearly focus on bringing someone directly into a community that’s likely to attract the new user and make them stay’.

If anything is required to encourage a person to stay in SL, it is access to services and communities that will nurture their particular talents. Unfortunately, by handing over nearly all of the content-creation duties to residents while at the same time taking it upon themselves to provide help and support, the Lindens created a situation where diversity exploded, communities became lost in the crowd and new arrivals set foot in a world where finding your way around is a baffling task. Lem Skall commented on how it is so very different with most other community websites: ’There’s usually some overlap, but they are either a game, or a social network, or maybe a place to do business. When joining these communities, we know what to expect and what to look for’. Now, on one hand the good thing about SL is that it’s flexible enough to be all those things at once. But, on the other hand, such flexibility must face the bottleneck of individual strength and weakness. Even if all technical constraints were removed, SL would still not really be the place where you can do ’anything’; only a place where your limited skills are less constrained by external factors than in RL. This brings into focus the problem of discovering the right path through a world with near infinite possibilities, most of which are ill-suited to the individual’s preferences and skills. Lem Skall again: ’Things might have been very different if SL had started as a pure software platform that separate providers could use for separate worlds with clear purposes, and if all the worlds had been unified later…so much has been said about the strategies of corporations into SL. Maybe one of the best strategies is to act as portals. No building but an orientation island and Web interface to creating new accounts. Businesses and educational institutions are already creating their own sims…What I’m thinking is…a unification of such separate worlds into sub worlds’.

Notice the parallels that exist between building a useful metaverse, and the anticipated skills required to run a successful business in a society based on productive nanosystems. In both cases, the ability to provide highly tailored services is paramount. It seems to me, then, that as the Lindens pass over more and more of the running of SL to the open source community- depending on 3rd party viewers, welcome areas, themed islands and so on- there will be much opportunity to perfect the kinds of personal services and product advice that would have value in a world where the consumer/producer relationship blurs in the continual choice of the individual to ’make’ or ’buy’.


Specialization has long been understood to be a defining feature of market economics. Individuals are producers of one thing and consumers of everything else. Some commentators expect consumers to be sole producers of finished products of all kinds once productive nanosystems go mainstream, leading to a more equal society. Others (Berube among them) see things entirely differently, believing molecular manufacturing will only lead to the caste-ing of society into those with power and those without.

How inclusive will the development of the technology itself and the manufacturing capabilities it enables be? Another way to phrase this question would be ‘will we see open source designs, or will some centralized group seek to monopolize the technology, perhaps through patents and other legal restrictions?’. Berube sees the latter as most likely, arguing that totally free access to productive nanosystems would jeopardise contemporary hierarchical structures in capitalist corporatism. “A technology paradox occurs when R+D by a corporation actually reduces corporate power. For example, in the present system, as products increase in supply or as the means of production devolve into the hands of consumers, prices fall”. Traditionally, the paradox is avoided by expanding the market so that growth outpaces the declining prices. But, once the means of production becomes completely decentralized and placed in every home, “most avenues of market growth lead nowhere”.
As SL spread its message beyond early adopters and began to attract the attention of commercial giants, there was some uneasiness among the residents. How would those who catered for the fashions in this online world fare against high-street brands? Would these masters of marketing take control of the VR landscape, manipulating desires by spinning a web of concepts, brands, advertising and persuasion, shaping not only the surroundings but the thoughts of the populace to suit themselves? Nowadays, though, one tends not to read about the intense viral growth of corporations in SL. Quite the opposite. What you tend to read about is how familiar brand names came to SL and failed to have any impact at all, beyond a few curious visitors during the first hours of opening.

Is this failure connected with the fact that SL features a massively decentralized means of production, delivered into the hands of each and every user? It must surely be the case that the competitive advantage that corporations have over the little guy is very much reduced in SL because, relative to the real world, everything is so easily accomplished. But, I doubt that this is the only reason. What also needs to be considered is the fact that most RL brand names achieved widespread penetration through traditional media channels, and perhaps what works well there works less well in SL? The main difference between online worlds and traditional media was explained by Rosedale: “We all got TV, and it enabled us to see and learn many things, but unfortunately those things had to be centrally authored, without our participation, by a very small number of people. SL, built and managed by the residents, is a natural correction to our early, disempowering media- a better world, owned by us all”.

Perhaps because the populace has such powerful control over the landscape, and is very much an active contributor using the same tools as any corporation hoping to spread its message in SL, it becomes significantly harder to spread brand awareness using the means of advertising familiar to the high street. As Justin Bovington (who co-founded the branding agency Rivers Run Red with his wife Louise) reasoned, ‘you can’t just dump stuff in here and expect people to take an interest…People think young consumers are apathetic. They’re not apathetic. They’re just very well defended against advertising’. In RL, billboard posters are a part of our landscape whether we wish they were or not. But, in SL, a company’s billboard campaign must contend with the fact that, on Resident-owned land, unwelcome content is deleted with a simple mouse-click.

Really, though, the main reason why high-street names tended to fail in SL can be attributed to the fact that they were remarkably unimaginative when it came to extending their brands in VR worlds. Simply setting up a store and expecting to attract a large and persistent customer base just because it’s ‘popular brand name X’ is not good enough. Perhaps it is true that, in a VR world, ‘most avenues for market growth lead nowhere’, but it must also be the case that new opportunities for raising brand awareness become available. Given that active, realtime collaboration is a major part of SL’s appeal, perhaps involving the customer in the design process would be one such opportunity. Reebok went down this route. They opened up a store in SL that allowed residents to customize virtual sneakers according to taste, and the company planned to take the most popular design and market it in RL.

Open source tends not to put a final polish on its products. Because of this, commercial interests could still make a profit if the means of manufacturing went down the open source route, by repackaging products and adding that final polish. Along with focusing on personal services, goods in a shop could be priced according to the prestige of certain designers. Berube believes that the price of goods and services cannot be expected to decrease with the realization of molecular manufacturing, since the cost of R+D must be recouped. But, once nanosystems are as fully integrated as PCs now are, nearly all capital would be dramatically reduced in value. Capital, by the way, is not ‘money’, which in and of itself has no value. What capital REALLY is, what REALLY has value, are services and the means of production. Labour, raw material, machinery and knowhow are the true lifeblood of industry. “In a world of nearly infinite resources, the value of toil and labour will disappear”, wrote Berube. “The nanotech elite will be the technocrat and the tech-intelligentsia- a small group”. As for the rest of us, Berube argued, “whatever time they have at their disposal will be spent acquiring worth of any and all sorts merely to keep step in the nanoeconomy…economically disenfranchised and socially declassed people could contribute to the genesis of Third World countries in the centre of our cities”. These fears were echoed by Susan (Baroness) Greenfield in her book ‘Tomorrow’s People’: “In times to come…there might be the…invidious distinction of the technological master class versus the- in employment terms- truly useless”.
Remember that quote from SL’s founder, ‘a better world, owned by us all’? Lovely sentiment and all that, but it really isn’t true. Gwyn explained why. “You can see a huge gap between the residents’ classes…while perhaps 5% of all residents are active participants in the economy (who) contribute to the overall content, the remaining 95% are completely out of the loop”. In fact, so imbalanced is the flow of currency in SL that it has been compared by some to a traditional pyramid scheme in which only a few harvest money from a large mass of players. It would be wrong to suggest that SL was deliberately conceived as a pyramid scheme. But, by granting everybody the right to buy and sell services and virtual goods to one another in a free market, it was perhaps inevitable that wealth would accumulate around the gifted few who can produce masterpieces of whatever they make.

This does sound uncannily like Berube’s dystopian vision of a technological master class reaping all the rewards of molecular nanotechnology. What’s more, other observers have seen a parallel between the activities of SL’s residents and Berube’s expectation that the masses will be frantically acquiring worth of any and all sorts. In answering that evergreen question, ‘what are you meant to do in SL’, ‘Play Money’ author Julian Dibbell answered, ‘SL is about getting the better clothes etc. The basic activity is still the keeping up with the Joneses, the rat race game’.

If ‘what am I meant to do?’ is the first question a SL resident asks, the next is likely to be ‘how do I do it?’. If a fundamental aspect of SL is the buying and selling of goods, then the second question is more precisely defined as ’how do I get a foothold on the economic ladder?’. In other words, how do you start acquiring the funds needed to build the capital required to be a player in your chosen business? There is a quick and easy way to get reasonably large amounts of SL currency, which is to purchase them directly. As with all currencies, the value of the Linden dollar against the US dollar continually changes, but on average you can expect to get between L$260 and L$320 for every US dollar spent.
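Taking that exchange band at face value, the conversion is simple arithmetic; here is a small sketch (the L$260–L$320 band comes from the text above, while the `usd` amounts are arbitrary examples of my own):

```python
# Convert a US dollar spend into the range of Linden dollars it buys,
# using the L$260-L$320 per US$1 band quoted in the essay.
def linden_range(usd, low=260, high=320):
    """Return the (minimum, maximum) L$ received for a given US$ amount."""
    return usd * low, usd * high

print(linden_range(10))   # → (2600, 3200)
```

So ten US dollars buys enough Linden currency to cover many weeks of the inworld wages discussed below, which puts those wages in perspective.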

However, a ’New York Times’ article noted that ’although L$ can be bought with a credit card, there’s evidence that the in-world economy is self-sustaining, with many players compelled to earn a living in-world and live on a budget’. You might think everybody would settle for nothing less than the kind of career seen as aspirational in RL- property tycoon, popstar, architect- that sort of thing. But, actually, SL residents are willing to take on jobs as sales clerks, nightclub bouncers, hostesses, for wages ranging from L$50 to L$150 per hour. In a world where owning that ultimate symbol of material wealth, your own private island, is within the budget of most people who can afford a high-end laptop, people sidestep the easy way to big Linden bucks and instead work for them, in jobs that pay a pittance in real money.

It’s probably not the case that anybody comes to SL in order to fulfil a lifelong ambition to work as a shop assistant. Rather, they accept that engaging in the lowest level of work in SL is often the necessary first step an entrepreneur must take. But the fact that such roles are performed at all in what is a fantasy world brings into question the assumption, often expressed, that nobody will be willing to do work of this kind once molecular manufacturing enters the market. But while they may be willing to do such work, the opportunity to do so will only occur if such work is available. There are two great promises and perils commonly associated with molecular manufacturing. The first is the promise that exponential assembly will compile an abundance of goods (with the peril of runaway assembly leading to grey goo), and the second is the promise that nanosystems will dramatically lower the cost of capital (with the peril that labour will be totally devalued).

Is the latter peril really a bad thing? Such a declaration would appear to stand in contrast to the dream of a life free from toil. This vision can be traced back at least 23 centuries, to a time when Aristotle wrote, in ‘The Politics’, ‘we can imagine managers not needing subordinates and masters not needing slaves…if every machine could work by itself…by intelligent anticipation’. And here it is again, this time from a quote in ‘Time’ magazine, 1966: ‘By 2000, the machines will be producing so much that everyone in the US will, in effect, be independently wealthy. How to use leisure meaningfully will be a major problem’.

Ah, there’s the rub. It is generally taken as axiomatic that losing jobs must mean the loss of meaningful activity. And if you examine that Aristotle quote closely you will notice an imbalanced benefit. It is the MANAGERS who no longer need (human) subordinates, the MASTERS who no longer need (human) slaves. It’s an imagined world in which the elite exchange human labour for machines, flexible enough in limb and just flexible enough in mind to be trusted to perform their role in the workforce (but, presumably, not to question their lot in life). But Aristotle makes no suggestion that the displaced subordinate class has been lifted to the status of ‘master’ (in fact, the passage is actually his pragmatic defense of slavery in his own time). We like to think slavery has been abolished now, but the other assumed axiom is that the loss of your job must mean the loss of your income. How would the labouring classes raise the funds needed to become a factory-owning capitalist, if his or her skills have lost all monetary value?

Then again, isn’t the promise of molecular manufacturing that nobody NEEDS to work? If it lowers the cost of capital and profoundly raises the abundance of goods and puts the means of production in everyone’s home, then (as SL resident Ralph Radius asked) ‘why wouldn’t a world of nano be divided into purposeful people and those who hang out? Living will be virtually free’. What might be wrong with this picture is that it assumes a lowering of the COST of manufacturing means a reduction in the PRICE of goods and services. As we have seen, Berube anticipates that this will not be the case (at least initially) because nanoproduced goods and related services will carry the R+D surtax of molecular nanotechnology. As for the hypothetical ability to bring forth an abundance of products (and the implication that they will be given away to anyone who asks for them), perhaps artificial constraints like IP rights will limit this scenario, as is the case with hypothetically copyable products in SL. Some of the products made possible by molecular manufacturing could create huge incentives for profit taking. Nano-manufactured computer components, by today’s standards, would be worth billions of dollars per gram. And something like food has large and intricate molecules providing its taste and smell, minerals for nourishment that would require much research in order to handle them in a nanofactory setting, and it contains a lot of water, which is a molecule that tends to gum up the components of the nanosystem. I’m not saying that compiling food is impossible, only that compiling food from chemical feedstock would be a very stiff challenge. Will this basic requirement of life be distributed for free, or will there be a heavy R+D price imposed on it, as is the case with lifesaving medicine?


Having decided everything will not be ‘free’ once nanosystems become widely available, we seem to have leapt to the opposite extreme, that their products and services need to be very expensive. We also seem to be assuming that molecular manufacturing must exclude the majority of the populace from gainful productivity. What underlies such assumptions? Most likely, it is ‘complexity’. Productive nanosystems would be the most sophisticated products ever built. There is no precedent in manufacturing today for a process that combines 10^25 parts to form a single object. Some assume that using such immensely complicated machines must require a great deal of skill. ‘Yeah, all those unemployed steelworkers can be retrained as molecular biologists’ was one sarcastic reply to the suggestion that the age of molecular nanotechnology need not mean the end of gainful employment. But is this a safe assumption to make? Possibly not. After all, do you need to be a mechanic in order to use a car? There was a time when this was indeed necessary. Lifting up the hood, tweaking and fiddling around with the engine was not an indulgence for the hobbyist or an occasional annoyance for the stranded motorist, it was a regular part of car ownership. One can well imagine early car drivers fearing that if automobiles became more complex all but the very best mechanics would be excluded from motoring. Cars did indeed increase in complexity, but they also became more reliable and easier to operate.

Another, perhaps better, example is computers. The first operational electronic computers were built at Bletchley Park, whose ten-thousand-strong wartime staff included elite thinkers such as Alan Turing. They were a top-secret military tool; the Colossus machines put their 2,400 valves to the chief purpose of decoding scrambled Nazi transmissions (Turing’s own electromechanical ‘bombes’ attacked the better-known cipher machine, ‘Enigma’). It not only required rare skills to construct these mechanized wonders, but also to operate them. A later computer (ENIAC) typically required eight hours of repair for every eight hours of use. Who would have believed that, one day, computers with hundreds of millions of parts, able to outperform those early examples by eight orders of magnitude, would be a standard feature in people’s homes?

The fear that technology will become too complex for all but those highly skilled in some niche discipline is a recurring theme. Another fear is that skills will be lost because of technology. Such concerns did not begin in the 90s with the arrival of competent spell-checking software and the worry that a strong knowledge of grammar would be lost. Nor did they arise in the 70s, with affordable pocket calculators and the fear that fundamental skills in maths would be eroded. They didn’t begin in the 20th century at all, or even this millennium. As far back as the fifth century BC, Socrates feared that reliance on the alphabet (which had by then been in use for centuries) would ‘create forgetfulness in the learners’ souls…they will trust to the external written characters and not remember of themselves’.

You would be hard-pressed to find anyone today who regards literacy as a skill that enfeebles the mind, although you may well hear such voices of concern regarding the tools built into word-processing software or learning aids freely available on the Web. And yet, in both cases there is a common theme. Technology does not just cause the loss of skills, it ENABLES the loss of skills. That last point is expressed by the term ‘encapsulation’, which refers to technology that has become hidden in everyday society, despite being in widespread use. It can be hidden in a literal sense. Personal computers began as home-built construction kits, assembled by keen enthusiasts who naturally became familiar with their innards. These days we buy laptops and risk losing our warranty if we open them up. But mostly the technology becomes hidden because it does its job with minimal fuss. The TV simply starts transmitting sound and visuals. We no longer need to fiddle with manual controls for horizontal and vertical synch, because we get a stable image at the press of the power button. The telephone simply connects your call. Remember how there was a drive to teach everybody binary, in anticipation of the ‘computer age’ when we would all need to know how to write assembly language? Now packaged software enables anybody, not just programmers, to get PCs to perform useful tasks. Similarly, in 1910 the rate of growth in the telephone industry prompted a Bell Telephone statistician to project that every working-age American woman would be needed as a switchboard operator. In his book ‘Future Hype’, Bob Seidensticker reasoned that, according to the definitions of 1910, every single person who uses communication technology to make a call or surf the Web is (thanks to automatic switching technology) connecting calls and doing the job of the switchboard operator.
In 1911, the philosopher Alfred North Whitehead made the following observation: “Civilization advances by extending the number of important operations which we can perform without thinking about them”.

Let’s stick with computers a while longer. Earlier, I asked, ‘how do you write…a million billion lines of code when such an endeavour is out of the question?’ but left this unanswered. A similar dilemma was encountered in computer chip design. At first, draughtsmen designed computer circuitry by hand, but as the parts counts soared into the tens of thousands and beyond it became impossible to design and lay out such chips manually. Fortunately, ready-made computers were there to open up the bottleneck, and today engineers have access to many powerful CAD tools. Some just enable the computer screen to serve as a traditional drawing board, but at the other end of the scale there are so-called ‘silicon compilers’. These software systems can produce a detailed design of a chip, ready to manufacture, with very little human help beyond specifying the chip’s function.

It becomes advantageous to develop compilers only when resources are cheap and abundant. If they are costly and scarce, this puts economic pressure on developing systems that are small and simple, which can be handled by step-by-step human planning. Before the 1960s, processors were orders of magnitude slower and memory was orders of magnitude more expensive than today. This economic environment favoured assembly language and its ability to provide instruction-by-instruction control. But after the 1960s, the number of components rose by a factor of a million, while the manufacturing cost per transistor fell to mere pennies. Drexler explained, ‘if a 10^6 transistor design has an expected market of 10^5 units, then every dollar of design cost per transistor adds tens of dollars to the price of each chip, yet a dollar can’t buy much time from a human design team…silicon compilers emerged…gained a foothold, then steadily improved, becoming an integral part of the design process’.
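Drexler’s arithmetic here is easy to check. A minimal sketch (the 10^6 and 10^5 figures come from his quote; the one-dollar-per-transistor design cost is his hypothetical, not a real industry figure):

```python
# Back-of-envelope check of Drexler's design-cost argument.
# Figures: a 10^6-transistor design, an expected market of 10^5 units,
# and a hypothetical $1 of human design effort per transistor.

transistors = 10**6
units_sold = 10**5
design_cost_per_transistor = 1.0  # dollars (Drexler's hypothetical)

total_design_cost = transistors * design_cost_per_transistor  # $1,000,000
surcharge_per_chip = total_design_cost / units_sold           # amortized over the market

print(f"Design cost added to each chip: ${surcharge_per_chip:.2f}")
```

This reproduces the ‘tens of dollars’ per chip that Drexler cites: the design overhead, not the silicon, is what dominates, which is exactly the economic opening that silicon compilers exploited.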

Current macroscopic hardware designs comprise relatively few parts, and production can be expensive. So, naturally, there has been no incentive to develop compilers to help us plan the design of macrostructures: they would not compete with the quality and cost-effectiveness of detailed human design. But, as we have seen, the parts count of products manufactured via nanosystems (including nanosystem parts) will grow into the trillions and beyond, and production costs will fall dramatically. This would make compilers attractive, even if each compiler-specified system were to waste twice as much space, mass and energy as a system designed with detailed human knowledge. Even inefficient compilers would therefore be attractive, and once they gained a foothold in macroscopic design space, we should expect compiler technology to improve, just as it did in computer chip design.

It’s worth emphasising that compilers do not completely remove humans from the design process. Drexler: ’Human design will remain dominant at the level of parts and subsystems (in the form of knowledge built into the compiler) and at the level of overall system organization and purpose (in the form of specifications given to the compiler when it is used). The intermediate levels will be designed, with considerable inefficiency, using algorithms and heuristics that represent a workable subset of human knowledge of design principles’.

So, computers both encouraged and aided the development of design tools that can assist people in planning the manufacture of systems too complex for unaided humans. They also enabled a radical shift in employment patterns, and molecular manufacturing should really be seen as an evolution of the working practices enabled by information technology, rather than a revolutionary dislocation from current jobs. This becomes even more apparent when you consider that a far greater revolution in working practices occurred in our past. When Berube talks about the cost of labour devaluing in the face of molecular manufacturing, it’s hard to shake the conviction that he equates ‘labour’ with physical effort, wages earned by the sweat of the brow and all that. 150 years ago, 69% of Americans were engaged in just that sort of work, because they worked in agriculture. Today, the number of Americans working in agriculture is just 3%. As for the rest, 28% work in industrial production and 69% work in the service or information industries. ‘Increasingly’, an article in ‘Forbes’ magazine noted, ‘people are no longer labourers; they’re educated professionals who carry their most important work tools in their heads…modern occupations generally give their practitioners more independence, and greater mobility, than did those of yesteryear’.

It is expected that, as productive nanosystems become integrated into society, work will shift towards 100% service and information. This is obviously the state of employment in SL today. Whatever work you are involved in, you can guarantee it either involves finding, evaluating, analysing and creating information (in which case you work in ‘information’) or it involves ways of helping other people (in which case you work in ‘service’). It is obvious that programmers work in ‘information’, but so do lawyers and engineers and librarians and teachers and magazine columnists. One thing that SL has shown is that, at some point, people no longer crave standard goods at ever-decreasing prices, but customized goods tailored to meet individual tastes or needs. The opportunities that exist for gainful employment in SL centre almost entirely on ‘providing creativity and originality, customizing things for other people, managing complexity, helping people with problems, providing old services in new contexts, teaching, entertaining, and making decisions’. I was not quoting an SL analyst, by the way. That list came from a passage written by Eric Drexler, regarding the kind of work that will be valuable in the nanosociety. That SL should favour the sort of work that will retain its value once productive nanosystems become widely available is not all that surprising, since it realises most of the perceived advantages of molecular manufacturing over top-down subtractive manufacturing.


People have occasionally wondered what kind of economic system is at work in SL. Rest assured that this is much more than idle ivory-tower speculation, because defining SL’s economy would enable us to anticipate what economic model would develop under the widespread adoption of productive nanosystems.

One possibility is that SL’s economy is the same as the one we have in RL. This is the viewpoint that the ‘NY Times’ article I mentioned earlier subscribed to. According to the article, SL is a world of ‘mortgage payments, risky investments, land barons, evictions, designer rip-offs, scams and squatters’, where there are shops everywhere ‘so it’s easy to say “oh, OK, I guess I’ll have a better pair of jeans”’. Lured in by tales of ‘residents (who) lived the American dream in SL and built up L$ fortunes through entrepreneurship’, newbies enter a world ‘where we trade our consumerist-orientated culture for one that’s even worse’.

Others, though, have questioned this assumption that the SL economy is simply the same as the one we find in the consumer-orientated parts of RL. One critic argued, ‘what Linden Labs has tried to do is replicate the atom-world scarcity rules in a bit-world environment’. In other words, SL really was intended to be the sort of scarcity-based economy we find in RL, but its fundamental reality is binary digits and ‘it is the nature of bits to be easily copied’. Thus, Linden Lab’s attempt to impose artificial scarcity in an online world was bound to fail sooner or later (as if you didn’t guess, this argument was a response to the CopyBot incident).

However, Wagner James (Hamlet) Au identified a flaw in this argument. ‘I think it’s highly debatable whether SL is a scarcity-based economy. I think it makes more sense to think of SL as a brand or even a personality economy in which there’s a high premium in owning content from the most admired creators’.

There was a time when any press article on SL would feature an interview with at least one of those ‘admired creators’ Au referred to. There were two good reasons for this. First, the quality of their work rightfully brought them recognition. But, secondly, it was the simplest way to highlight the fundamental difference between SL and the MMORPGs with which it shares a nominal similarity. A typical MMORPG comes with draconian licensing agreements that explicitly forbid the end user from claiming ownership over the money and objects they quest for. Attempts to sell your wares over eBay and other such sites meet with instantaneous deletion of accounts and removal from the game (not that such measures have prevented the emergence and growth of a market in VR goods; in fact, it is rumoured to be worth $20 million in the US alone and an order of magnitude more in Asia).

Of course, SL has quite the opposite attitude, in that the objects you create inworld ARE your intellectual property; you DO own the rights. As Cory Ondrejka explained, ‘historically, what you need to drive innovation is markets, and markets derive from ownership’. So, an interview with one of the revered builders of SL was the most efficient way to get across the message ‘no, this is not an MMORPG’, and if you wanted your reader to understand that SL was serious business, what better way to do that than to refer to the serious money some residents were making for themselves?

But, while it’s undeniable that you can, in principle, earn a good living entirely on in-world entrepreneurship, perhaps those articles were misleading. This was especially true if the implication was that you WOULD make a good living (or any profit at all). Just as Dick Whittington found that the streets of London were not paved with gold after all, newcomers to SL discover this is no quick and easy passage to fame and fortune.

The economics page on SL’s official website provides statistics such as ‘monthly spending by amount’ and ‘unique users with positive monthly L$ flow’ (PMLF). Looking at the latter and assuming a PMLF of between $10 and $500 makes you ‘poor’ while $500 to $5,000+ makes you ‘rich’, one can see that, in December 2007, a whopping 48,904 out of 50,678 users with PMLF were ‘poor’. Much the same conclusion arises if we look at the statistics for ‘monthly spending by amount’. According to this chart, out of a total of 341,791 customers spending money inworld (again, during December 07), 269,926 spent between L$1 and L$10,000, and 71,865 spent between L$10,000 and L$1 million. If we assume the strength of the L$ against the US$ was at its highest, that translates to 269,926 people spending between a fraction of a dollar and $30, while 71,865 spent between $156 and $3,125+.

What does this tell us? These days, Googling ‘SL economics’ reveals that the most popular interpretation is that, since the vast majority of residents are not making fortunes (or anything like a profit at all), those old stories of SL as a land of opportunity were overblown hype. Gwyneth Llewelyn recently wrote that a favourite theme amongst journalists is ‘to report how SL’s buzz and hype is dying’ leading inevitably to ‘the downfall of SL’. Google corroborates her opinion, because the most popular ‘hits’ are all articles explaining ‘the phoney economics of SL’, ‘VR world’s supposed economy is a pyramid scheme’ and other such analyses that can hardly be described as flattering.

That ‘NY Times’ article I referred to was therefore one of a great many articles that paint a negative picture of this online world. “What does SL say about us, that we trade a consumerist-orientated culture for one that’s even worse?”. What if this question truly reflects the nature of SL? Does that imply that our future nanotech societies will be dystopian nightmares of rampant consumerism favouring a tiny elite?

Not according to Au, who countered Nick Yee’s question quoted above by pointing out that ‘the latest economic figures simply don’t back up the premise of Yee’s question. In August…91% were spending less than L$ 10,000 (USD 18.50). Only when you get to that remaining 9% do you see any significant spending in terms of real dollars…There’s surely a lot of inworld goods and services that exist inworld, and much of it is trading hands. But what seems more plausible is that the bulk of those transactions are conducted in a barter or gift economy between friends and communities and, just as often, total strangers, sharing and trading what they own. This almost strikes me as a reversal of consumerism as it is commonly understood, for it undermines the economic motives for doing so’.

Perhaps describing this exchange of gifts as ‘economic’ activity is just wrong. This naturally raises the question: OK, but if SL is not an ‘economy’, what is it? I think Au has partly arrived at the answer by acknowledging that ‘the bulk of those transactions’ involve friends and communities and strangers ‘sharing and trading what they own’. Now, Robert Levin has introduced a new term, ‘agalmics’ (derived from the Greek ‘agalma’, meaning ‘a pleasing gift’), by which he means ‘the study and practice of the production and allocation of non-scarce goods’.

Levin’s concept of ‘agalmics’ is therefore the opposite of ‘economics’ (which, remember, is ‘the study of the allocation of SCARCE goods’). Levin argued, ‘we can be certain that, over time, more and more basic goods will become less and less scarce…we need a new paradigm and a new field of study. What we need is agalmics’. When it comes to the gift ‘economy’ of SL, should we adopt the catchphrase ‘it’s an agalmia, stupid’, in reference to what Levin called ‘the sum of the agalmic activity in a region or sphere. Analogous to an “economy” in economic theory’?

Well, this depends heavily on the extent to which SL agrees with Levin’s notion of what agalmic activity is. Earlier, we saw how physical constraints like server capacity impose limits on our freedom to create in SL. This might imply that SL cannot be an ‘agalmia’. However, it’s Levin’s opinion that ‘economics’ gives way to ‘agalmics’ as a result of the MARGINALIZATION of scarcity, not necessarily its ERADICATION. ‘Agalmic goods…are often produced using scarce goods as raw material. An important example is the initial programming work that goes into a free software application. At the current state of the human lifespan, programmer time must be regarded as a scarce good’.

In fact, Levin cites the open source software community as a contemporary example of agalmic activity. This obviously marks SL out as a definite candidate for an agalmia, because it is very much part of the open-source model. Levin identifies several key characteristics of agalmic activity. Let’s look at each one and see how well SL conforms.

1: ‘Economic trade is finite; when I give you a dollar I have one less than I did. Agalmic activity involves goods which are not scarce, so I can give you one without appreciably diminishing my supply’.

In SL, anything can be transferable and copyable, or non-transferable/non-copyable. Objects that are tagged as non-copyable/non-transferable are traded according to ‘economic’ activity, because choosing to pass such items on results in your no longer possessing them. On the other hand, any item that is tagged as copyable can indeed be given away without diminishing one’s supply. In SL’s stores, items for purchase are often (but not always) marked ‘non-copyable’. But what about all those ‘transactions (that) are conducted in a barter or gift economy’ which, according to Au, make up the bulk of ‘economic’ activity in SL? I think it’s highly likely that these transactions involve items that are copyable, allowing individuals to trade what they own without diminishing their supply. If my assumption is correct, this is ‘agalmic’ (not ‘economic’) activity.
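The distinction can be made concrete with a toy sketch (the Item class and give function below are hypothetical illustrations of the permission flags, not the actual SL systems):

```python
# Toy model of SL-style permission flags: giving away a non-copyable
# item is 'economic' (the giver's supply shrinks), while giving away
# a copyable item is 'agalmic' (the giver keeps their copy).

class Item:
    def __init__(self, name, copyable):
        self.name = name
        self.copyable = copyable

def give(giver_inventory, recipient_inventory, item):
    recipient_inventory.append(item)
    if not item.copyable:
        giver_inventory.remove(item)  # economic transfer: supply diminished

mine = [Item("texture", copyable=True)]
yours = []
give(mine, yours, mine[0])
print(len(mine), len(yours))  # 1 1 -- both inventories now hold the item
```

Flip `copyable` to `False` and the same call empties the giver’s inventory, which is Levin’s finite, ‘economic’ case.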

2: ‘It is co-operative. Economic activity often involves competition. Buyers must allocate their limited funds to the supplier who best meets their needs. Since it doesn’t involve scarce resources, agalmic activity rarely involves competition. Efficient agalmic actors know how to encourage cooperation and benefit from the result’.

No doubt, whenever an inworld architect like Scope Cleaver negotiates for the contract to build something like the Estonian Embassy, his prospective client has a limited amount of land (and funds), so only requires a small team of ace designers to construct the virtual property. When it comes to negotiating for such contracts, I think it’s fair to say that this is economic activity.

However, I wonder if, overall, Cleaver feels he co-operates with the architectural community in SL. Does this community freely swap building tips, and are customized tools exchanged between fellow architects in accordance with agalmic activity as defined earlier? And not only architects but all creative communities in SL: do the machinima community, the photographers, the scripters and the fashion designers ‘encourage co-operation and benefit from the result’? My gut feeling is that they do, but further investigation is required before a more definitive answer can be formulated.

3: ‘It is self-interested. Agalmic activity advances personal goals, which may be charitable or profit-orientated, individual or organizational. An agalmia typically contains both individuals and organizations, with a broad mix of charitable and profit-orientated goals. Agalmic profit is measured in such things as knowledge, satisfaction, recognition and often in indirect economic benefit.’

Obviously SL contains both individuals and organizations who pursue both profit-driven and charitable goals. But the real question is what motivates residents to fill SL with content. Of course, we all know that Anshe Chung and Aimee Weber now have joint ownership of all the gold in the Federal Reserve, since that’s the only way to pay what they are now worth. OK, I exaggerate, but (beyond the necessity of earning money to live) one has to wonder whether the financial rewards the elites of SL earn really count as any motivation at all. Cleaver once admitted to me that he would happily work for free, were it not for the fact that we all need money for daily necessities. Moreover, many of SL’s designers have told me that whenever somebody buys one of their products, what is satisfying is the recognition that what they do is appreciated and valued…and I don’t mean in a monetary sense. And then, of course, there are the masses who stock land with builds, galleries with portraits and sculptures, cinemas with machinima, and generally fill SL with content but earn no economic profit for their efforts. I don’t think these people are chasing dreams of financial wealth; I think it is agalmic profit that motivates them.

4: ‘It is self-stimulating. Examples can be seen in free software communities, in which new programmers, documenters and debuggers come from the ranks of free software users’.

Here, I am reminded of an old essay by Gwyn (‘Crowdsourcing in Second Life’) in which she wrote, ‘there wouldn’t be any point of having 3300 sims available on a grid, if they didn’t have any content at all…Instead, Linden Lab learned how to employ the users, very successfully, to develop the content for themselves, without paying a cent’. I could also quote the observation that, ‘near-term, users will create code to address bugs and other problems, as well as do things like enable SL to run on cell-phones, or add support for different kinds of multimedia content inside the world’.

All of which sounds very much like Levin’s example of self-stimulating agalmic activity. (Why is it self-stimulating? Because ‘everybody is inspired to keep topping each other with ever cooler things’-Philip Linden).

5: ‘It is self-directing. Free software users provide feedback to developers in the form of bug reports, patches and requests for new features. Software projects can be forked by users when an existing developer group is no longer responsive to their needs. Maintainers are then free to adopt the new work or go their own way’.

This very much applies to SL, and can only become more relevant in the future. Just ask Gwyn, who wrote, ‘things like SL Brazil show what will happen in the near future: Companies creating high quality content and providing the whole range of services that LL refuses to do: a special client, a logging-in system, a welcome area…inworld patrolling, technical support…’

6: ’It is decentralized and non-authoritarian. In a free software community, developer groups maintain their position only as long as they are responsive to their user bases. No one is forced to participate in a project, and the projects people participate in are the ones in which they are interested. Involuntary activity places limits on exchange and creates scarcity. As such, it is non-agalmic. A particular agalmic group may be organised in a top-down fashion, and non-agalmic groups may act agalmicly. But alternatives are available and participation is voluntary. Authoritarian systems remove personal incentives for agalmic behaviour’.

Nobody is forced to participate in SL, and it’s fairly safe to assume that the inworld projects residents undertake are things that interest them. I do wonder, however, whether Linden Lab conforms to the agalmic ideal of a developer group capable of maintaining its position only as long as it is responsive to the needs of its users. LL is the true owner of SL and, within the TOS, they are the ultimate authority. Of course, users can raise concerns, hold protests and even opt out of using SL altogether. If we all stopped using SL, LL would have no reason to exist. But I don’t think Levin is talking about software projects simply ending because their participants became too pissed-off to work on them. Rather, he is talking about developer groups being replaced if they don’t run things the way the community likes. It seems to me that LL will maintain their position as the ultimate authority in SL whether the users like it or not.

But, then again, that may change in the future, what with Linden Lab’s plans to make the whole code open source. As Gwyn commented, ‘an open source grid is naturally the dream of everybody who’s tired with LL’s recent strong measures in limiting personal freedoms. By distributing grids all over the world, and interconnecting them together…if your country is restricting personal freedom too much you can jump over to the sims hosted in another country’.

7: ‘It is positive-sum. In games theory, a ‘zero-sum game’ is one in which one player’s gain is another player’s loss. Conventional economies often describes zero-sum games. When two suppliers compete for the dollars of a single customer, or when two government agencies compete with each other for fixed budget dollars, a zero-sum game is being played. A ‘positive-sum game’ is one in which players gain by behaviour which enhances the gains of others. Efficient agalmics is a positive-sum game’.

No one could deny that there are zero-sum games being played in SL. Whenever a client awards a building contract to one group rather than any other, or whenever you spend your Linden dollars in this store rather than that one, a zero-sum game is being played. And let’s not forget the griefers. But, while zero-sum games definitely happen in SL, so do positive-sum games. Examples would be the people willing to spend time teaching newcomers the basics of using SL, or more advanced courses on scripting, prim-building and such. It would include the bloggers, prepared to spend a great deal of time hunting down the best SL has to offer (or highlighting its deficiencies) and bringing it to our attention. And, of course, it would include the exchange of items in a gift ‘economy’ and the move to open-source Second Life. Teaching people to use SL efficiently and build competently increases the number of residents who can participate usefully in SL; bloggers with a good reputation attract a readership that they keep informed about goings-on; giving items away in a gift economy enhances the chances of your generosity being reciprocated; and open sourcing SL massively increases the number of people debugging, tweaking and enhancing it. In such ways, users gain by enhancing the gains of others.
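The zero-sum/positive-sum distinction boils down to summing the payoffs of each outcome. A small sketch (the payoff numbers are invented purely for illustration):

```python
# Zero-sum: one player's gain is exactly the other's loss.
# Positive-sum: players can gain by behaviour that enhances others' gains.

def is_zero_sum(payoffs):
    """payoffs: list of (gain_to_A, gain_to_B) tuples, one per outcome."""
    return all(a + b == 0 for a, b in payoffs)

contract_bid = [(1, -1), (-1, 1)]  # one builder wins the contract, the other loses it
teaching = [(1, 2)]                # tutor gains reputation, newcomer gains a skill

print(is_zero_sum(contract_bid))  # True
print(is_zero_sum(teaching))      # False
```

The gift economy, mentoring and open-sourcing examples above all fall in the second category: the payoff sums are positive, so everyone can come out ahead.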


All in all, I think it’s unarguable that Second Life is a textbook example of an agalmia. And yet, very little study of the agalmic activity in SL seems to have been undertaken. It’s now almost eight years since a little-known professor decided to treat Everquest like a real country and collect macroeconomic statistics like GDP, inflation, productivity and wages. The resulting paper (‘Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier’) lifted its author, Edward Castronova, out of obscurity to become a leading authority on the implications of MMOGs.

These days, one is spoilt for choice where looking for information on economic activity in SL is concerned. Putting the keywords ‘Second Life economics’ into Google returns 5,140,000 hits. By comparison, research into agalmic activity in SL is negligible: the keywords ‘Second Life agalmics’ return a paltry 292 hits (and none that I looked at were particularly relevant). And yet, there is every reason to suppose that agalmic activity makes up the bulk of interactions in SL, and that it can only increase as LL hands over more and more of its baby to the open source community. A thorough investigation of the agalmic activity in SL by anthropologists, sociologists and economists could not be more timely. ‘As time goes on’, wrote Levin, ‘the technology of agriculture and manufacture teaches us how to produce goods with more efficiency, at less cost. The trend in technology is an exponential improvement of knowledge and capabilities’.

Thus, the driving forces pushing us towards agalmics are inextricably linked with those pushing towards molecular nanotechnology. Our best hope for ensuring an inclusive nanotech civilization (rather than one that disfavours the majority of citizens), lies in studying the underlying mechanisms of agalmic activity in SL and guiding the evolution of the metaverse so that it may act as a bridge, enabling us to make the transition to the Diamond Age as smoothly as possible.


In the years following the original publication of my essays on molecular nanotechnology, certain criticisms of the theories supporting this hypothetical technology have forced me to re-evaluate my position regarding MNT’s feasibility. This addendum to the ‘Snowcrashing’ essays was originally part of an opinion piece I wrote for the Kurzweilai forums called ‘James G76′s Miracle Machine’…


When thinking about an invention, it is helpful to consider three stages of knowledge: Mystery, Problem, and Solution. According to Steven Pinker, while at the mystery stage of knowledge “we can only stare in wonder and bewilderment, not knowing what an explanation would even look like”. On the other hand, (again, quoting Pinker) “when we face a problem, we may not know its solution, but we have insight, increasing knowledge, and an inkling of what it is we are looking for”. So, basically, the difference between ‘mystery’ and ‘problem’ is all to do with being able to ask useful questions, such that even mistakes and wrong answers help you make progress more than they keep you totally blind as to the correct route toward the solution.

Molecular nanotechnology, surely, is at the ‘problem’ stage? After all, didn’t Eric Drexler design blueprints for all the components, subsystems and systems of nanobots in his book ‘Nanosystems’? Yeah, kind of, but before we get too excited, Drexler has some sobering words for us:

“The work outlined here could rapidly absorb researcher-centuries of effort. Many of the steps described here will, if attempted, spawn a host of sub problems, each demanding long, hard, and creative work… Developments that will one day make molecular manufacturing fast and easy will result from efforts that are slow and difficult”.


Still, despite this cautionary remark, Drexler had confidence in the swift realization of molecular nanotechnology. “Image interpretation software able to determine the types, positions, and orientations of specific sets of reagent binding molecules; control software to automate reagent positioning and sensing, thereby automating the execution of long sequences of reactions” would be feasible, Drexler reckoned, “in a fraction of a decade”.

However, he said that in 1992 and it is safe to say that, here in 2011, nanotechnologists are nowhere close to this level of capability. But, when it comes to Drexler’s prediction that attempts to build his nanotechnology would “spawn a host of sub problems” it seems he was right on the money.

Richard Jones, professor of Physics at the University of Sheffield and senior advisor for the UK Government’s Physical Sciences and Engineering Funding Agency surveyed the progress being made in nanotech for IEEE Spectrum. “In 15 years of intense nanotech research, we have not even come close to experiencing the exponentially accelerating technological progress toward the goals set by singularitarians… the Drexlerian vision… seems to be accumulating obstacles faster than it can overcome them”.

Jones does admit that “impressive advances are emerging from the laboratories of real-world nanotechnologies”, but this kind of nanotech has little to do with the stuff Drexler sketched out in ‘Nanosystems’ and ‘Engines Of Creation’. The same thing could be said for life itself, which is often held up as proof that molecular nanotechnology is achievable. Proponents of nanobots are fond of reminding us of biology’s numerous natural nanoassemblers, molecular motors, software-controlled manufacturing, and more besides. But, they fail to take into consideration the fact that this is a ’nanotech’ of a radically different nature to what people like Ray Kurzweil talk about. That nanotech is basically robotic factories like the kind that automate car manufacturing shrunk down to sub microscopic sizes. But life’s nanotech is nothing like that. According to Professor Jones, life’s nanotech “operates on principles that are fundamentally different to the mechanical principles that macro scale engineering works with”.

In some ways, the argument ‘life provides proof-of-principle of molecular machinery’ is like saying ‘the brain creates an intelligent mind and it is a kind of computer, so we have proof-of-principle that artificial general intelligence is possible’. The flaw in this argument is that the brain is nothing like any computer you are familiar with, so no matter how much more powerful current computing technology becomes, humanlike intelligence may never arise from it. Nothing less than a complete paradigm shift to a new kind of computer (one more ‘brainlike’) might be required before we have genuine AGI. Similarly, nature’s nanotechnology (often referred to as ‘wet’ nano) follows a totally different design philosophy from the diamondoid-based machines outlined by Drexler (‘dry’ nano). Again, this means that progress in ‘wet’ nano does not lead inevitably to the kind of nanotech outlined in ‘Nanosystems’. We require a paradigm leap to an entirely new kind of nanotechnology, one for which nature provides no precedent.

Frankly, the people who claim ‘nanorobots that can build anything physically possible out of atoms or molecules harvested from disassembled garbage are but decades away’ have no clue about the tower of near-intractable problems Drexlerian nanotechnology has stacked up. These include problems with the stability of ‘machine parts’ (i.e., the cogs and gears of nanosystems), problems caused by thermal noise and Brownian motion, problems with friction and energy dissipation, and, perhaps most concerning of all for those who hold out hope of owning a desktop nanofactory by 2020, 2030 or 2040, the problems that arise when trying to bridge two completely incompatible environments and two completely incompatible design philosophies (i.e., bio-nanotechnology such as DNA origami and engineered proteins on the one hand, and diamondoid gears, motors and actuators on the other). And let’s be clear: we are not talking about insignificant little problems here. These are show-stopping problems that seriously challenge the very feasibility of nanobots. This is not to say that zero progress toward Drexlerian nanotechnology has been made. Through heroic efforts, people like Robert Freitas and Ralph Merkle are inching toward theoretical solutions to some of these problems. But the fact remains that the gap between what has been accomplished and what needs to be accomplished before we can even begin to design the kind of nanorobotics transhumanists talk about is (a) immense and (b), if Jones’s survey is correct, growing wider rather than narrower.
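To get a feel for why thermal noise is such a headache, here is a back-of-the-envelope sketch of my own (not a calculation from Jones or Drexler, and the stiffness values are purely illustrative): by the equipartition theorem, a nanoscale part held in place by a restoring stiffness k_spring jitters with an RMS displacement of sqrt(kB·T / k_spring) at temperature T.

```python
# Back-of-the-envelope: thermal positional jitter of a nanoscale machine part.
# Equipartition gives (1/2) * k_spring * sigma^2 = (1/2) * kB * T,
# so sigma = sqrt(kB * T / k_spring).
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Illustrative restoring stiffnesses, N/m (assumed values, not measured ones)
for k_spring in (1.0, 10.0, 100.0):
    sigma = math.sqrt(kB * T / k_spring)  # RMS displacement, metres
    print(f"stiffness {k_spring:6.1f} N/m -> RMS jitter = {sigma * 1e9:.4f} nm")

# A carbon-carbon bond is roughly 0.15 nm long, so for softly-held parts the
# thermal jitter is a sizeable fraction of an atomic spacing -- positioning a
# reactive tool to atomic precision therefore demands very stiff structures.
```

The point of the sketch is simply that room-temperature jostling is not a negligible perturbation at the nanometre scale; it is comparable to the feature sizes being engineered, which is one reason the ‘shrunken factory’ picture is so hard to realise.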

The nanotech situation teaches an important lesson about judging whether the ‘problem’ stage is close to transitioning into the ‘solution’ stage: if subproblems are accumulating faster than partial solutions, you can safely assume the final goal is nowhere in sight.

For more information on the difficulties facing MNT, Google ‘Richard Jones Soft Machines’.

If you have any comments, please email me.

Check out my Disparity SCOOP.IT and my Oil Versus SSPS SCOOP.IT.