
Jeremy Ramsden 







ISBN 978-87-7681-418-2




Guide to the reader

1. What is nanotechnology?
1.1 Definitions
1.2 History of nanotechnology
1.3 Context of nanotechnology
1.4 Further reading

2. Motivation for nanotechnology
2.1 Materials
2.2 Devices
2.3 Systems
2.4 Issues in miniaturization
2.5 Other motivations

3. Scaling laws
3.1 Materials
3.2 Forces
3.3 Device performance
3.4 Design
3.5 Further reading



4. Nanometrology
4.1 Imaging nanostructures
4.2 Nonimaging approaches
4.3 Other approaches
4.4 Metrology of self-assembly
4.5 Further reading

5. Raw materials of nanotechnology
5.1 Nanoparticles
5.2 Nanofibres
5.3 Nanoplates
5.4 Graphene-based materials
5.5 Biological effects of nanoparticles
5.6 Further reading

6. Nanodevices
6.1 Electronic devices
6.2 Magnetic devices
6.3 Photonic devices
6.4 Mechanical devices
6.5 Fluidic devices
6.6 Biomedical devices
6.7 Further reading



7. Nanofacture
7.1 Top-down methods
7.2 Molecular manufacturing
7.3 Bottom-up methods
7.4 Intermolecular interactions
7.5 Further reading

8. Bionanotechnology
8.1 Biomolecules
8.2 Characteristics of biological molecules
8.3 Mechanism of biological machines
8.4 Biological motors
8.5 The cost of control
8.6 Biophotonic devices
8.7 DNA as construction material
8.8 Further reading

9. New fields of nanotechnology
9.1 Quantum computing and spintronics
9.2 Nanomedicine
9.3 Energy
9.4 Three concepts
9.5 Further reading



10. Implications of nanotechnology
10.1 Enthusiasm
10.2 Neutrality
10.3 Opposition and scepticism
10.4 A sober view of the future
10.5 Further reading



Guide to the reader

Welcome to this Study Guide to nanotechnology.

Nanotechnology is widely considered to constitute the basis of the next technological revolution, following on from the first Industrial Revolution, which began around 1750 with the introduction of the steam engine and steelmaking (and which paralleled, or perhaps caused, upheavals in land ownership and agricultural practice). The Industrial Revolution constituted as profound a change in society and civilization as the earlier Stone, Bronze and Iron revolutions, each of which ushered in a distinctly new age in the history of human civilization. A second Industrial Revolution began around the end of the 19th century with the introduction of electricity on an industrial scale (which paved the way for other innovations such as wireless communication), and most recently we have had the Information Revolution, characterized by the widespread introduction of computing devices and the internet.

Insofar as the further development of very large-scale integrated circuits used for information processing depends on reducing the sizes of the individual circuit components down to the nanoscale (i.e., a few tens of nanometres), the Information Revolution has now become the Nano Revolution—just as steam engines powered dynamos for the industrial generation of electricity.

But, nanotechnology brings its own distinctive challenges, notably: (i) handling matter at the atomic scale (which is what nanotechnology is all about—a synonym is “atomically precise engineering”) means that qualitatively different behaviour needs to be taken into account; and (ii) in order for atomically precisely engineered objects to be useful for humans, they need to be somehow multiplied, which introduces the problem of handling vast numbers of entities.

One should not underestimate the multidisciplinary nature of nanotechnology. This forces researchers to adopt a manner of working more familiar to scientists in the 19th century than in the 21st. Many active fields in nanotechnology research demand an understanding of diverse areas of science. Sometimes this problem is solved by assembling teams of researchers, but members of the team still need to be able to communicate effectively with one another. An inevitable consequence of this multidisciplinarity is that the range of material that needs to be covered is rather large. As a result, some topics have had to be dealt with rather sketchily in order to keep the size of this book within reasonable bounds, but I hope I may be at least partly excused for this by the continuing rapid evolution of nanotechnology, which in many cases would make additional details superfluous, since their relevance is likely to be soon superseded.

Fundamental discoveries will doubtless continue to be made in the realm of the very small, and given the closeness of discoveries to technology in this field, in many cases they will be rapidly developed into useful products.

References to the original literature are only given (as footnotes to the main text) when I consider the original article to be seminal, or that reading it will bring some additional illumination. At the end of each chapter I list some (mostly relatively short) authoritative review articles (and a few books) that could be usefully read by anyone wishing to go into more detail.

These lists do not include standard texts on topics such as the general properties of matter, electricity and magnetism, optics, quantum mechanics, and so forth.



Chapter 1

What is nanotechnology?

1.1 Definitions

Let us briefly recall the bald definition of nanotechnology: “the design, characterization, production and application of materials, devices and systems by controlling shape and size at the nanoscale”.1 The nanoscale itself is at present consensually considered to cover the range from 1 to 100 nm.2 A slightly different nuance is given by “the deliberate and controlled manipulation, precision placement, measurement, modelling and production of matter at the nanoscale in order to create materials, devices, and systems with fundamentally new properties and functions” (my emphasis). Another formulation floating around is “the design, synthesis, characterization and application of materials, devices and systems that have a functional organization in at least one dimension on the nanometre scale” (my emphasis). The US Foresight Institute gives: “nanotechnology is a group of emerging technologies in which the structure of matter is controlled at the nanometer scale to produce novel materials and devices that have useful and unique properties” (my emphases). The emphasis on control is particularly important: it is this that distinguishes nanotechnology from chemistry, with which it is often compared: in the latter, motion is essentially uncontrolled and random, within the constraint that it takes place on the potential energy surface of the atoms and molecules under consideration. In order to achieve the desired control, a special, nonrandom eutactic environment needs to be available.

How eutactic environments can be practically achieved is still being vigorously discussed.3

1E. Abad et al., NanoDictionary. Basel: Collegium Basilea (2005).

2This scale (and indeed the definitions) is currently the subject of discussions within the International Organization for Standardization (ISO) aimed at establishing a universal terminology.

3E.g., F. Scott et al., NanoDebate. Nanotechnology Perceptions 1 (2005) 119–146.



A very succinct definition of nanotechnology is simply “engineering with atomic precision”. However, we should bear in mind the “fundamentally new properties” and “novel” and “unique” aspects that some nanotechnologists insist upon, wishing to exclude existing artefacts that merely happen to be small.

Another debated issue is whether one should refer to “nanotechnology” or “nanotechnologies”.

The argument in favour of the latter is that nanotechnology encompasses many distinctly different kinds of technology. But there seems to be no reason not to use “nanotechnology” in a collective sense, since the different kinds are nevertheless all united by (striving for) control at the atomic scale.

Elaborating somewhat on the definitions, one can expand nanotechnology along at least three imaginary axes:

1. The axis of tangible objects, in order of increasing complexity: materials, devices and systems. Note that the boundaries between these three can be crossed by such things as “smart” materials.

2. The axis starts with passive, static objects (such as nanoparticles) whose new properties (i.e., different from those of bulk objects having the same chemical composition) arise from their small size. It continues with active devices (e.g., able to transduce energy, or store information, or change their state), whose dynamical properties are explicitly considered. Further along the axis are devices of ever more sophistication and complexity, able to carry out advanced information processing, for example. Finally, we come to manufacture (nanomanufacturing, usually abbreviated to nanofacture), also called atomically precise manufacturing (APM), i.e., processes, and nanometrology, which of course comprises a very varied collection of instruments and procedures. Sometimes these are considered under the umbrella of “productive nanosystems”, which implies a complete paradigm of sustainable nanofacture.

3. The axis starts with direct nanotechnology: materials structured at the nanoscale (including nanoparticles), devices with nanoscale components, etc.; continues with indirect nanotechnology, which encompasses things like hugely powerful information processors based on very large scale integrated chips with individual circuit components within the nanoscale; and ends with conceptual nanotechnology, which means the scrutiny of engineering (and other, including biological) processes at the nanoscale in order to understand them better.



Within the context of active devices, it is often useful to classify them according to the media on which they operate—electrons, photons or liquid materials, for example. Thus, we have molecular electronics, and single electron devices made from scaled-down bulk materials such as silicon; nanophotonics, which is nowadays often used as an umbrella term to cover planar optical waveguides and fibre optics, especially when some kind of information processing is involved; and nanofluidics, smaller versions of the already well established micromixers used to accomplish chemical reactions. This classification is, however, of only limited utility, because many devices involve more than one medium: for example, nanoelectromechanical devices are being intensively researched as a way of achieving electronic switching, optoelectronic control is a popular way of achieving photonic switching, and photochemistry in miniaturized reactors involves both nanophotonics and nanofluidics.


1.2 History of nanotechnology

Reference is often made to a lecture given by Richard Feynman in 1959 at Caltech (where he was working at the time). Entitled “There’s Plenty of Room at the Bottom”, the lecture envisaged machines making the components for smaller machines (a familiar enough operation at the macroscale), themselves capable of making the components for yet smaller machines, and so on until the atomic realm was reached. Feynman offered a prize of $1000 for the first person to build a working electric motor with an overall size not exceeding 1/64th of an inch, and was dismayed when a student presented him not long afterwards with a laboriously hand-assembled (i.e., using the technique of the watchmaker) electric motor of conventional design that met the specified criteria.

In Feynman we find the germ of the idea of the assembler, a concept later elaborated by Eric Drexler.4 The assembler is a universal nanoscale assembling machine, capable not only of making nanostructured materials, but also copies of itself as well as other machines. The first assembler would be laboriously built atom by atom, but once it was working numbers would evidently grow exponentially, and when a large number became available, universal manufacturing capability, and the nano-era, would have truly arrived.

A quite different approach to the nanoscale starts from the microscopic world of precision engineering, progressively scaling down to ultraprecision engineering (Figure 1.1). The word “nanotechnology” was coined by Norio Taniguchi in 1974 to describe the lower limit of this process.5 Current ultrahigh-precision engineering is able to achieve surface finishes with a roughness of a few nanometres. This trend is mirrored by relentless miniaturization in the semiconductor processing industry. Ten years ago the focus was in the micrometre domain; smaller features were described as decimal fractions of a micrometre. Now the description, and the realization, is in terms of tens of nanometres.

A third approach to nanotechnology is based on self-assembly. Interest in this arose, on the one hand, because of the many difficulties in making Drexlerian assemblers, which would appear to preclude their realization in the near future, and on the other hand, because of the great expense of the ultrahigh-precision approach. The inspiration for self-assembly seems to have come from the work of virologists, who noticed that pre-assembled components (head, tail, tail fibres) of bacteriophage viruses would further assemble spontaneously into a functional virus merely

4K.E. Drexler, Molecular engineering: an approach to the development of general capabilities for molecular manipulation. Proc. Natl Acad. Sci. USA 78 (1981) 5275–5278.

5N. Taniguchi, On the basic concept of nano-technology. Proc. Intl Conf. Prod. Engng Tokyo, Part II (Jap. Soc. Precision Engng).



upon mixing and shaking in a test-tube.

Figure 1.1: The evolution of machining accuracy (after Norio Taniguchi).

Nanoparticles mostly rank as passive nanostructures. At present, they represent almost the only part of nanotechnology with commercial significance. However, it is sometimes questioned whether they can truly represent nanotechnology because they are not new. For example, the Flemish glassmaker John Utynam was granted a patent in 1449 in England for making stained glass incorporating nanoparticulate gold; the Swiss medical doctor and chemist von Hohenheim (Paracelsus) prepared and administered gold nanoparticles to patients suffering from certain ailments in the early 16th century. The fabrication of nanoparticles by chemical means seems to have been well established by the middle of the 19th century (e.g., Thomas Graham’s method for making ferric hydroxide nanoparticles). Wolfgang Ostwald lectured extensively in the USA, and wrote up the lectures in what became a hugely successful book on the subject, “Die Welt der vernachlässigten Dimensionen” (published in 1914). Many universities had departments of colloid chemistry, at least up to the middle of the 20th century; then slowly the subject seemed to fall out of fashion, until its recent revival as part of nanotechnology.



1.3 Context of nanotechnology

Scientific revolutions. The development of man is marked by technological breakthroughs.

So important are they that the technologies (rather than, for example, modes of life) give their names to the successive epochs: the Stone Age, the Bronze Age, the Iron Age, rather than the age of hunting, pastoralism, agriculture, urbanization etc. The most significant change in our way of life during the last two or three millennia was probably that brought about by the Industrial Revolution that began in Britain around the middle of the 18th century; by the middle of the 19th century it was in full swing in Britain and, at first patchily, but later rapidly, elsewhere in Europe and North America. This in turn was replaced by the Information Revolution, marked by unprecedented capabilities in the gathering, storage, retrieval and analysis of information, and heavily dependent upon the high-speed electronic digital computer. We are still within that epoch, but the next revolution already appears to be on the horizon, and it is thought that it will be the Nano Revolution.

There are a couple of things worth noting about these revolutions. Firstly, there is the exponential growth in capabilities. This is sometimes quite difficult to accept, because an exponential function is linear if examined over a sufficiently small interval, and if the technology (or a technological revolution) unfolds over several generations, individual perceptions tend to be strongly biased towards linearity. Nevertheless, empirical examination of available data shows that exponential development is the rule (Ray Kurzweil has collected many examples, and in our present epoch the best demonstration is probably Moore’s law), although it does not continue indefinitely, but eventually levels off. Secondly, very often a preceding technological breakthrough provided the key to a successive one. For example, increasing skill and knowledge in working iron was crucial to the success of steam power and steel, the hallmarks of the Industrial Revolution, which ultimately developed the capability for mass production of the very large-scale integrated electronic circuits needed for realizing the Information Revolution.
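The point about local linearity can be checked numerically. The sketch below assumes the usual textbook Moore's-law figures (roughly 2300 transistors in 1971, doubling about every two years); the numbers are illustrative, not data from this book:

```python
import math

def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Idealized Moore's-law curve: count doubles every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Tangent-line (linear) approximation of the curve at year 2000.
y0 = 2000
f0 = transistor_count(y0)
slope = f0 * math.log(2) / 2  # derivative of f0 * 2^(dt/2) at dt = 0

for dt in (0.1, 1.0, 5.0, 15.0):
    true_value = transistor_count(y0 + dt)
    linear_value = f0 + slope * dt
    rel_err = abs(true_value - linear_value) / true_value
    print(f"window {dt:4.1f} y: linear fit off by {rel_err:.1%}")
```

Over a tenth of a year the linear fit is off by well under 0.1%; over fifteen years it misses by more than 90%, which is why exponential progress is so easy to underestimate from within one generation.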

Why do people think that the next technological revolution will be that of nanotechnology? Because once we have mastered the technology, the advantages of making things “at the bottom” will be so overwhelming that it will rapidly dominate all existing ways of doing things. Once ironmaking and working had been mastered, no one would have considered making large, strong objects out of bronze; no one uses a slide rule now that electronic calculators are available; and so forth.

What are the advantages of nanotechnology? They follow from miniaturization, novel combinations of properties, and a universal fabrication technology. Typical of the benefits of miniaturization is the cell (mobile) phone. The concept was developed in the 1950s, but the necessary circuitry would have occupied a large multistorey building using the technology of the day (thermionic valves). Materials made with carbon nanotubes can be light and very strong, as well as transparent and electrically conducting. Universal fabrication, based on assemblers (personal nanofactories), would enable most artefacts required by humans to be made out of acetylene and a source of energy.

How close are we to realizing the Nano Revolution? Miniaturization of circuitry is already far advanced. Components and chips can now be made with features in the size range of tens of nanometres. The World Wide Web would be scarcely conceivable without the widespread dissemination of powerful personal computers enabled by mass-produced integrated circuits. Materials based on carbon nanotubes are still very much at the experimental stage. Nevertheless, prototypes have been made and the difficulties look to be surmountable.

Assembly-based nanofacture seems still to be some way in the future. To demonstrate feasibility, computer simulations are generally adduced, together with biological systems (e.g., the rotary motor, a few nanometres in diameter, which is at the heart of the ubiquitous enzyme ATPase, found in abundance in practically all forms of life). Nevertheless, actual experiments demonstrating assembly with atomic precision are still in a primitive state.


What might the benefits be? Reports published during the last few years are typically euphoric about nanotechnology and all the benefits it will bring. Many of the examples are, however, of a relatively trivial nature and do not seem to represent sufficient breakthrough novelty to constitute a revolution. Thus, we already have nanostructured textiles that resist staining; self-cleaning glass incorporating nanoparticulate photocatalysts capable of decomposing dirt (Figure 9.3); nanoparticle-based sun creams that effectively filter out ultraviolet light without scattering it and are therefore transparent; even lighter and stronger tennis racquets made with carbon fibre or even carbon nanotube composites; and so forth. None of these developments can be said to be truly revolutionary in terms of impact on civilization. The Industrial Revolution was very visible because of the colossal size of its products: gigantic bridges (e.g., the Forth bridge), gigantic steamships (e.g., the Great Eastern), and, most gigantic of all if the entire network is considered as a single machine, the railway. And the steel for these constructions was produced in gigantic works; a modern chemical plant or motor-car factory may cover the area of a medium-sized town. In sharp contrast, the products of nanotechnology are, by definition, very small. Individual assemblers would be invisible to the naked eye. But of course the products of the assemblers would be highly visible and pervasive, such as ultralight strong materials from which our built environment would be constructed.

Microprocessors grading into nanoprocessors are a manifestation of indirect nanotechnology, responsible for the ubiquity of internet servers (and hence the World Wide Web) and cellular telephones. The impact of these information processors is above all due to their very high-speed operation, rather than any particular sophistication of the algorithms governing them. Most tasks, ranging from the diagnosis of disease to surveillance, involve pattern recognition, something that our brains can accomplish swiftly and seemingly effortlessly, but which requires huge numbers of logical steps when reduced to a form suitable for a digital processor.

Sanguine observers predict that despite the clumsiness of this “automated reasoning”, ultimately artificial thinking will surpass that of humans; this is Kurzweil’s “singularity”. Others predict that it will never happen. To be sure, the singularity is truly revolutionary, but it is as much a product of the Information Revolution as of the Nano Revolution, even though the latter provides the essential enabling technology.

Conceptual nanotechnology implies scrutinizing the world from the viewpoint of the atom or molecule. In medicine this amounts to finding the molecular basis of disease, which has been underway ever since biochemistry became established, and which now encompasses all aspects of disease connected with the DNA molecule and its relatives. There can be little doubt about the tremendous advance of knowledge that it represents. It is, however, part of the more general scientific revolution that began in the European universities founded from the 11th century onwards, and which is so gradual and ongoing that it never really constitutes a perceptible revolution. Furthermore, it is always necessary to counterbalance the reductionism implicit in the essentially analytical atomic (or nano) viewpoint by insisting on a synthetic systems approach at the same time. Nanotechnology carried through to Productive Nanosystems could achieve this, because the tiny artefacts produced by an individual assembler have somehow to be transformed into something macroscopic enough to be serviceable for mankind.

Can nanotechnology help to solve the great and pressing problems of contemporary humanity? Although, if ranked, there might be some debate about the order, most people would include rapid climate change, environmental degradation, depletion of energy, unfavourable demographic trends, insufficiency of food, and nuclear proliferation among the biggest challenges.

Seen from this perspective, nanotechnology is the continuation of technological progress, which might ultimately be revolutionary if the quantitative change becomes big enough to rank as qualitative. For example, atom-by-atom assembly of artefacts implies that discarded ones can be disassembled according to a similar principle, hence the problem of waste (and concomitant environmental pollution) vanishes. More advanced understanding at the nanoscale should finally allow us to create artificial energy-harvesting systems, hence the potential penury of energy disappears. If the manufacture of almost everything becomes localized, the transport of goods (another major contributor to environmental degradation) should dwindle to practically nothing. Localized energy production would have a similar effect. However, the achievement of this state of affairs depends on the advent of the personal nanofactory, or something resembling it, which is by no means inevitable. Perhaps the nanobot is somewhat closer to realization.

Would indefatigably circulating nanobots inside our bodies enable our lives to be extended almost indefinitely? And what would be the consequences?

Nanoscience. Is there a need for this term? Sometimes it is defined as “the science underlying nanotechnology”. But this really is biology, chemistry and physics, or “molecular sciences”. It is the technology of designing and making functional objects at the nanoscale that is new; science has long been working at this scale, and below. No one is arguing that fundamentally new physics emerges at the nanoscale; rather, it is the new combinations of phenomena manifesting themselves at that scale that constitute the new technology. The term “nanoscience” therefore appears to be wholly superfluous if it is used in this sense. As a synonym of conceptual nanotechnology, however, it does have a useful meaning: the science of mesoscale approximation. The description of a protein as a string of amino acids is a good example. At the mesoscale, one does not need to inquire into details of the internal structure (at the atomic and subatomic levels) of the amino acids.
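The protein-as-string example can be made concrete. In a mesoscale description one works with whole residues, not atoms: standard average residue masses suffice to estimate a peptide's mass. The sketch below is illustrative (only a handful of residues are listed, and average rather than monoisotopic masses are assumed):

```python
# Mesoscale description: a protein as a string of one-letter amino-acid codes.
# Each residue is a single "bead" with an average mass in daltons; no atomic
# or subatomic detail is needed at this level of approximation.
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "V": 99.13,
    "L": 113.16, "K": 128.17, "F": 147.18,
}
WATER = 18.02  # mass of one water, added back for the free chain termini

def peptide_mass(sequence):
    """Approximate average mass (Da) of a peptide from its residue string."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(f"GAVLK ≈ {peptide_mass('GAVLK'):.1f} Da")
```

The same string-of-residues abstraction underlies sequence alignment and coarse-grained simulation: the internal structure of each amino acid is deliberately ignored.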



1.4 Further reading

K.E. Drexler, Engines of Creation. New York: Anchor Books/Doubleday (1986).

R. Feynman, There’s plenty of room at the bottom. In: Miniaturization (ed. H.D. Gilbert), pp. 282–296. New York: Reinhold (1961).

R. Kurzweil, The Singularity is Near. New York: Viking Press (2005).

J.J. Ramsden, What is nanotechnology? Nanotechnology Perceptions 1 (2005) 3–17.



Chapter 2

Motivation for nanotechnology

In this chapter, we look at some of the reasons why one might want to make things very small, viewing nanotechnology along the “materials, devices, and systems” axis introduced in Chapter 1.

2.1 Materials

Most of the materials around us are composites. Natural materials such as wood are highly structured and built upon very sophisticated principles. The basic structural unit is cellulose, which is a polymer of the sugar glucose, but cellulose on its own makes a floppy fabric (think of cotton or rayon); hence, to give it strength and rigidity, it must be glued together into a rigid matrix. This is accomplished by the complex multiring aromatic molecule lignin. The design principle is therefore akin to that of reinforced concrete: steel rods strengthen what is itself a composite of gravel and cement.

The principle of combining two or more pure substances with distinctly different properties (which might be mechanical, electrical, magnetic, optical, thermal, chemical, and so forth) in order to create a composite material that combines the desirable properties of each to create a multifunctional substance has been refined over millennia, presumably mostly by trial and error. Typically, the results are, to a first approximation, additive. Thus we might write a sum of materials and their properties like



  cellulose    high tensile strength    self-repellent
+ lignin       weak                     sticky
= wood         strong                   cohesive

Empirical knowledge is used to choose useful combinations in which the desirable properties dominate; one might otherwise have ended up with a weak and repellent material. The vast and growing accumulation of empirical knowledge, now backed up and extended by fundamental knowledge of the molecular-scale forces involved, usually allows appropriate combinations to be chosen. The motif of strong fibres embedded in a sticky matrix is very widely exploited, other examples being glass fibre- and carbon fibre-reinforced polymers.

Essentially, the contribution of nanotechnology to this effort is simply to take it to the ultimate level, in the spirit of “shaping the world atom-by-atom”.1

Rather like the chemist trying to synthesize an elaborate multifunctional molecule, the materials nanotechnologist aims to juxtapose different atoms to achieve multifunctionality. This approach is known as mechanosynthetic chemistry or, in its large-scale industrial realization, as molecular manufacturing. The famous experiment of Schweizer and Eigler, in which they rearranged xenon atoms on a nickel surface to form the logo "IBM",2 represented a first step in this direction. Since then, there has been intensive activity in the area, but it still remains uncertain to what extent arbitrary combinations of atoms can be assembled disregarding chemical concepts, and whether the process can ever be scaled up to provide macroscopic quantities of materials.

Most of the recognized successes in nanomaterials so far have been not in the creation of totally new materials through mechanosynthesis (which is still an unrealized goal) but in the more prosaic world of blending. For example, one adds hard particles to a soft polymer matrix to create a hard, abrasion-resistant coating. As with atomically based mechanosynthesis, the results are, to a first approximation, additive. Thus we might again write a sum like

  polypropylene               flexible   transparent
+ titanium dioxide            rigid      opaque
= thin film coating (paint)   flexible   opaque

This is not actually very new. Paint, a blend of pigment particles in a matrix (the binder), has been manufactured for millennia. What is new is the detailed attention paid to the nanoparticulate additive. Its properties can now be carefully tailored for the desired application. If one of the components is a recognized nanosubstance—a nanoparticle or nanofibre, for example—it seems to be acceptable to describe the blend as a nanomaterial.

1The subtitle of a report on nanotechnology prepared under the guidance of the US National Science and Technology Council Committee on Technology in 1999.

2E.K. Schweizer and D.M. Eigler, Positioning single atoms with a scanning tunneling microscope. Nature (Lond.) 344 (1990) 524–526.

Terminology. According to Publicly Available Specification (PAS) 136:2007,a a nanomaterial is defined as a material having one or more external dimensions in the nanoscale or (my emphasis) which is nanostructured. It seems more logical to reserve the word "nano-object" (which, according to PAS 136:2007, is a synonym of nanomaterial) for the first possible meaning. This covers nanoparticles, nanorods, nanotubes, nanowires, and so forth. In principle, ultrathin paper would also be included in this definition. The term "nanostructured" is defined as "possessing a structure comprising contiguous elements with one or more dimensions in the nanoscale but excluding any primary atomic or molecular structure." This definition should probably be strengthened by including the notion of deliberate in it. Its use would then be properly confined to materials engineered "atom by atom". Nanoparticles in a heap are contiguous to one another, but the heap is not structured in an engineering sense; hence a collection of nanoparticles is not a nanomaterial. Substances made simply by blending nano-objects with a matrix should be called nanocomposites. "Nanosubstance" is not defined in PAS 136:2007.

aPublished by the British Standards Institute.

The biggest range of applications for such nanocomposites is in thin film coatings—in other words paint. Traditional pigments may comprise granules in the micrometre size range; grinding them a little bit more finely turns them into nano-objects. Compared with transparent varnish, paint then combines the attribute of protection from the environment with the attribute of colour. The principle can obviously be (and has been) extended practically ad libitum: by adding very hard particles to confer abrasion resistance; metallic particles to confer electrical conductivity; tabular particles to confer low gas permeability, and so on. Two relatively old products even today constitute the bulk of the so-called nanotechnology industry: carbon black (carbon particles ranging in size from a few to several hundred nanometres) added to the rubber tyres for road vehicles as reinforcing filler; and crystals of silver chloride, silver bromide and silver iodide ranging in size from tens of nanometres to micrometres, which form the basis of conventional silver halide-based photography.

Why nanoadditives? Since it is usually more expensive to create nanosized rather than microsized matter, one needs to justify the expense of downscaling. As matter is divided ever more finely, certain properties become qualitatively different (see Chapter 3). For example, the optical absorption spectrum of silicon vapour is quite different from that of a silicon crystal, even though the vapour and crystal are chemically identical. When a crystal becomes very small, the melting point falls and there may be a lattice contraction (that is, the atoms move closer together)—these are well-understood consequences of Laplace's law, and may be very useful for facilitating a sintering process. If the radius of the crystal is smaller than the Bohr radius of the electron in the bulk solid, the electron is confined and has a higher energy than its bulk counterpart. The optical absorption and fluorescent emission spectra shift to higher energies. Hence, by varying the crystal radius, the optical absorption and emission wavelengths can be tuned.

Chemists have long known that heterogeneous catalysts are more active if they are more finely divided. This is a simple consequence of the fact that the reaction takes place at the interface between the solid catalyst and the rest of the reaction medium. For a given mass, the finer the division, the greater the surface area. This is not in itself a qualitative change, although in an industrial application there may be a qualitative transition from an uneconomic to an economic process.
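The surface-area argument can be quantified: for monodisperse spherical particles of radius r and density ρ, the surface area per unit mass is 3/(ρr). A minimal sketch (the platinum density is a standard handbook value, used here purely for illustration):

```python
# Specific surface area of a collection of spherical particles:
# area/mass = (4*pi*r^2) / (rho * 4/3*pi*r^3) = 3 / (rho * r).

def specific_surface_area(radius_m, density_kg_m3):
    """Surface area per unit mass (m^2/kg) of monodisperse spheres."""
    return 3.0 / (density_kg_m3 * radius_m)

rho_pt = 21450.0  # density of platinum, kg/m^3 (illustrative catalyst)

# Halving the radius doubles the surface area available for catalysis.
a_10nm = specific_surface_area(10e-9, rho_pt)
a_5nm = specific_surface_area(5e-9, rho_pt)
print(a_10nm, a_5nm / a_10nm)  # ~1.4e4 m^2/kg; ratio 2.0
```

The inverse-r dependence is why grinding a catalyst more finely is, for a fixed mass, directly proportional to the gain in active interface.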




Our planet has an oxidizing atmosphere, and has had one probably for at least 2000 million years. This implies that most metals, other than gold, platinum and so forth (the noble metals), will be oxidized. Hence, many kinds of metallic nanoparticles will not be stable in nature.

Carbon-based materials, especially fullerenes and carbon nanotubes, are often considered to be the epitome of a nanomaterial. Carbon has long been an intriguing element because of the enormous differences between its allotropes, graphite and diamond. The carbon nanomaterials are based on another new form, graphene (see §5.4).

2.2 Devices

A device turns something into something else. Synonyms are machine, automaton, transducer, encoder, and so forth. Possible motivations for miniaturizing a device are:

1. Economizing on material. If one can accomplish the same function with less material, the device should be cheaper, which is often a desirable goal—provided that it is not more expensive to make. In the macroscopic world of mechanical engineering, if the material costs are disregarded, it is typically more expensive to make something very small; for example, a watch is more expensive than a clock, for equivalent timekeeping precision.

On the other hand, when things become very large, as in the case of the clock familiarly known as Big Ben for example, costs again start to rise, because special machinery may be needed to assemble the components, and so on. We shall return to the issue of fabrication in Chapter 7.

2. Performance (expressed in terms of straightforward input-output relations) may be enhanced by reducing the size. This is actually quite rare. For most microelectromechanical systems (MEMS) devices, such as accelerometers, performance is degraded by downscaling, and the actual size of the devices currently mass-produced for actuating automotive airbags represents a compromise between economy of material, not taking up too much space, not weighing too much, and still-acceptable performance.



Downscaling. An accelerometer (which transduces force into electricity) depends on the inertia of a lump of matter for its function, and if the lump becomes too small, the output becomes unreliable. Similarly with photodetectors (which transduce photons into electrons): owing to the statistical and quantum nature of light, the smallest difference between two levels of irradiance that can be detected increases with diminishing size. On the other hand, there is no intrinsic lower limit to the physical embodiment of one bit of information. One bit could be embodied by the presence of a neutron, for example. Information processing and storage is the ideal field of application for nanotechnology. The lower limit of miniaturization depends only on practical considerations of "writing" and "reading" the information. Hence nanotechnology is particularly suited to information processors.
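The photodetector limit follows from Poisson photon statistics: a count of N photons fluctuates by √N, so the smallest resolvable fractional difference in irradiance is 1/√N. A sketch of that scaling (the photon counts are illustrative):

```python
import math

def min_detectable_contrast(photon_count):
    """Smallest fractional irradiance difference resolvable at the
    shot-noise limit: delta(N)/N = sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(photon_count)

# A detector scaled down 100x in area collects 100x fewer photons in a
# given interval, so the minimum detectable contrast worsens 10x.
print(min_detectable_contrast(1e6))  # 0.001
print(min_detectable_contrast(1e4))  # 0.01
```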

3. Functionality may be enhanced by reducing the size. Using the same example as in the previous item, it would not be practicable to equip mass-produced automobiles with macroscopic accelerometers with a volume of about 1 litre and weighing several kilograms.

Another example is cellular telephony, already mentioned. A similar consideration applies to implantable biosensors for monitoring clinical parameters in a patient. In other words, miniaturization increases accessibility.

2.3 Systems

The essence of a system is that it cannot be usefully decomposed into its constituent parts.

Two or more objects constitute a system if the following conditions are satisfied:

1. One can talk meaningfully of the behaviour of the whole, of which they are the only parts.

2. The behaviour of each part can affect the behaviour of the whole.

3. The way each part behaves, and the way its behaviour affects the whole, depends on the behaviour of at least one other part.

4. No matter how one subgroups the parts, the behaviour of each subgroup will affect the whole and depends on the behaviour of at least one other subgroup.

Typically, a single nanodevice is complex enough to be considered a system; hence a "nanosystem" generally signifies a system whose components are nanoscale devices. An example of a system that can truly be called "nano" is the foot of the gecko, many species of which can run up vertical walls and across ceilings. Their feet are hierarchically divided into tens of thousands of minute pads that allow a large area of conformal contact with irregular surfaces. The adhesive force is provided by the Lifshitz–van der Waals interaction (see §7.4), normally considered to be weak and short-range, but additive and hence sufficiently strong in this embodiment if there are enough points of contact. Attempts to mimic the foot with a synthetic nanostructure have had only very limited success, because the real foot is living and constantly adjusting to maintain the close-range conformal contact needed for the interaction to be strong enough to bear the weight of the creature.

2.4 Issues in miniaturization

Considering the motor-car as a transducer of human desire into translational motion, it is obvious that the nanoautomobile would be useless for transporting anything other than nano-objects. The main contribution of nanotechnology to the automotive industry is in providing miniature sensors for process monitoring in various parts of the engine and air quality monitoring in the saloon; additives in paint giving good abrasion resistance, possibly self-cleaning functionality, and perhaps novel aesthetic effects; new ultrastrong and ultralightweight composites incorporating carbon nanotubes for structural parts; sensors embedded in the chassis and bodywork to monitor structural health; and so forth.

Scaling up. In other cases, scaling performance up to the level of human utility is simply a matter of massive parallelization. Nanoreactors synthesizing a medicinal drug simply need to work in parallel for a reasonably short time to generate enough of the compound for a therapeutically useful dose. With information processors, the problem is the user interface: a visual display screen must be large enough to display a useful amount of information, a keyboard for entering instructions and data must be large enough for human fingers, and so forth.

2.5 Other motivations

The burgeoning worldwide activity in nanotechnology cannot be explained purely as a rational attempt to exploit "room at the bottom". Two other important human motivations are doubtless also playing a role. One is simply "it hasn't been done before"—the motivation of the mountaineer ascending a peak previously untrodden. The other is the perennial desire to "conquer nature". Opportunities for doing so at the familiar macroscopic scale have become very limited, partly because so much has already been done—in Europe, for example, there are hardly any marshes left to drain or rivers left to dam, two of the most typical arenas for "conquering nature"—and partly because the deleterious effects of such "conquest" are now far more widely recognized, and the few remaining undrained marshes and undammed rivers are likely nowadays to be legally protected nature reserves. But the world at the bottom, as Feynman picturesquely called it, is uncontrolled and largely unexplored.

Finally, the space industry has a constant and pressing requirement for making payloads as small and lightweight as possible. Nanotechnology is ideally suited to this end user—provided nanomaterials, devices and systems can be made sufficiently reliable.





Chapter 3

Scaling laws applied to nanotechnology

The main point to be discussed in this chapter is how properties and behaviour change as the characteristic dimension is reduced. Of particular interest are discontinuous changes occurring at the nanoscale. Some very device-specific aspects of this topic are discussed in Chapter 6.

3.1 Materials

An object is delineated by its boundary. Dividing matter into small particles has an effect on purely physical processes. Suppose a spherical object of radius r is heated by internal processes, and the amount of heat is proportional to the volume V = 4πr³/3. The loss of heat to the environment will be proportional to the surface area, A = 4πr². Now let the object be divided into n small particles, each of radius r/n^(1/3) so that the total volume is unchanged. The total surface area is now n^(1/3) · 4πr². This is the basic reason why small mammals have a higher metabolic rate than larger ones—they need to produce more heat to compensate for its relatively greater loss through the skin in order to keep their bodies at the same steady temperature. This also explains why so few small mammals are found in the cold regions of the earth.
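The n^(1/3) factor is easy to verify numerically: dividing a sphere into n equal spheres of the same total volume multiplies the total surface area by n^(1/3). A quick check:

```python
import math

def total_area_after_division(r, n):
    """Total surface area when a sphere of radius r is divided into
    n equal spheres of the same total volume."""
    r_small = r / n ** (1 / 3)  # each small sphere's radius
    return n * 4 * math.pi * r_small ** 2

r = 1e-3  # a 1 mm sphere
a_whole = 4 * math.pi * r ** 2
# Dividing into 10^9 particles (radius 1 um) multiplies the area
# by (10^9)^(1/3) = 1000.
print(total_area_after_division(r, 10 ** 9) / a_whole)  # ~1000
```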

Chemical reactivity. Consider a heterogeneous reaction A + B → C, where A is a gas or a substance dissolved in a liquid and B is a solid. Only the surface atoms are able to come into contact with the environment; hence, for a given mass of material B, the more finely it is divided the more reactive it will be, in terms of the number of C molecules produced per unit time.

The above considerations do not imply any discontinuous change upon reaching the nanoscale.



Granted, however, that matter is made up of atoms, the atoms situated at the boundary of an object are qualitatively different from those in the bulk (Figure 3.1). A cluster of six atoms (in two-dimensional Flatland) has only one bulk atom, and any smaller cluster is “all surface”. This may have a direct impact on chemical reactivity (considering here, of course, heterogeneous reactions). It is to be expected that the surface atoms are individually more reactive than their bulk neighbours, since they have some free valences (i.e., bonding possibilities). Consideration of chemical reactivity (its enhancement for a given mass, by dividing matter into nanoscale-sized pieces) suggests a discontinuous change when matter becomes “all surface”.

Figure 3.1: The boundary of an object shown as a cross-section in two dimensions. The surface atoms (white) are qualitatively different from the bulk atoms (grey), since the latter have six nearest neighbours (in the two-dimensional cross-section) of their own kind, whereas the former only have four.

In practice, however, the surface atoms may have already satisfied their bonding requirements by picking up reaction partners from the environment. For example, many metals become spontaneously coated with a film of their oxide when left standing in air, and as a result are chemically more inert than the pure material. These films are typically thicker than one atomic layer. On silicon, for example, the native oxide layer is about 4 nm thick. This implies that a piece of freshly cleaved silicon undergoes some lattice disruption enabling oxygen atoms to penetrate deeper than the topmost layer. If the object is placed in the "wrong" environment, the surface compound may be so stable that the nanoparticles coated with it are actually less reactive than the same mass of bulk matter. A one-centimetre cube of sodium taken from its protective fluid (naphtha) and thrown into a pool of water will act in a lively fashion for some time, but if the sodium is first cut up into one-micrometre cubes, most of the metallic sodium will have already reacted with moist air before it reaches the water.



Solubility. The vapour pressure P of a droplet, and by extension the solubility of a nanoparticle, increases with diminishing radius r according to the Kelvin equation

kB T ln(P/P0) = 2γv/r   (3.1)

where kB is Boltzmann's constant, T the absolute temperature, P0 the vapour pressure of the material terminated by an infinite planar surface, γ the surface tension (which may itself be curvature-dependent), and v the molecular volume.
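Equation (3.1) can be evaluated directly. A sketch for a water droplet at room temperature (the surface tension and molecular volume are standard handbook values, inserted here purely for illustration):

```python
import math

K_B = 1.381e-23  # Boltzmann's constant, J/K

def kelvin_ratio(r, gamma, v, T):
    """P/P0 from the Kelvin equation: kB*T*ln(P/P0) = 2*gamma*v/r."""
    return math.exp(2 * gamma * v / (K_B * T * r))

# Illustrative values for water at 298 K:
gamma_w = 0.072  # surface tension, N/m
v_w = 3.0e-29    # molecular volume, m^3

for r in (100e-9, 10e-9, 1e-9):
    print(f"r = {r * 1e9:5.1f} nm  P/P0 = {kelvin_ratio(r, gamma_w, v_w, 298):.2f}")
# The enhancement is negligible at 100 nm but roughly threefold at 1 nm.
```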

Electronic energy levels. Individual atoms have discrete energy levels and their absorption spectra correspondingly feature sharp individual lines. It is a well known feature of condensed matter that these discrete levels merge into bands, and the possible emergence of a forbidden zone (band gap) determines whether we have a metal or a dielectric.

Stacking objects with nanoscale sizes in one, two or three dimensions (yielding nanoplates, nanofibres and nanoparticles, with, respectively, confinement of carriers in two, one or zero dimensions) constitutes a new class of superlattices or superatoms. These are exploited in a variety of nanodevices (Chapter 6). The superlattice gives rise to sub-bands with energies

En(k) = En(0) + ℏ²k²/(2m*)   (3.2)

where En(0) is the nth energy level, k the wavenumber, and m* the effective mass of the electron, which depends on the band structure of the material.

Similar phenomena occur in optics, but since the characteristic sizes of photonic band crystals are in the micrometre range, they are, strictly speaking, beyond the scope of nanotechnology.

Electrical conductivity. Localized states with Coulomb interactions cannot have a finite density of states at the Fermi level, which has significant implications for electron transport within nanoscale material. By definition, at zero kelvin all electronic states of a material below the Fermi level are occupied and all states above it are empty. If an additional electron is introduced, it must settle in the lowest unoccupied state, which is above the Fermi level and has a higher energy than all the other occupied states. If, on the other hand, an electron is moved from below the Fermi level to the lowest unoccupied state above it, it leaves behind a positively charged hole, and there will be an attractive potential between the hole and the electron. This lowers the energy of the electron by the Coulomb term −e²/(4πεr), where e is the electron charge, ε the dielectric permittivity, and r the distance between the two sites. If the density of states at the Fermi level were finite, two states separated by but very close to the Fermi level could be chosen, such that the energy difference was less than e²/(4πεr), which would mean—nonsensically—that the electron in the upper state (above the Fermi level) has a lower energy than the electron located below the Fermi level. The gap in states that must therefore result is called the Coulomb gap, and materials with a Coulomb gap are called Coulomb glasses.

If the size of the conductor is significantly smaller than the mean free path of the electron between collisions, it can traverse the conductor ballistically, and the resistance is h/(2e²) per sub-band, independent of material parameters.

Ferromagnetism. In certain elements, exchange interactions between the electrons of adjacent ions lead to a very large coupling between their spins, such that, below a certain temperature, the spins spontaneously align with each other. The proliferation of routes to synthesizing nanoparticles of ferromagnetic substances has led to the discovery that when the particles are below a certain size, typically a few tens of nanometres, the substance still has a large magnetic susceptibility in the presence of an external field, but lacks the remanent magnetism characteristic of ferromagnetism. This phenomenon is known as superparamagnetism. There is thus a lower limit to the size of the magnetic elements in nanostructured magnetic materials for data storage, typically about 20 nm, below which room temperature thermal energy overcomes the magnetostatic energy of the element, resulting in zero hysteresis and the consequent inability to store magnetization orientation information.
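The size limit follows from comparing the anisotropy energy KV of a particle of volume V with the thermal energy: a commonly used stability criterion is KV ≳ 25 kBT. A sketch of the resulting critical diameter; the barrier factor of 25 and the anisotropy constant (of the order of magnetite's) are illustrative assumptions, not values from the text:

```python
import math

K_B = 1.381e-23  # Boltzmann's constant, J/K

def superparamagnetic_diameter(K_aniso, T=300.0, barrier_factor=25.0):
    """Diameter (m) below which a spherical particle's anisotropy
    energy K*V falls under ~25 kB*T, so thermal agitation flips the
    magnetic moment and the particle behaves superparamagnetically."""
    v_crit = barrier_factor * K_B * T / K_aniso
    return (6 * v_crit / math.pi) ** (1 / 3)

# Illustrative anisotropy constant, of the order of magnetite's (J/m^3):
K_magnetite = 1.35e4
d = superparamagnetic_diameter(K_magnetite)
print(f"{d * 1e9:.0f} nm")  # a few tens of nm, consistent with the text
```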




Electron confinement. The Bohr radius rB of an electron moving in a condensed phase is given by

rB = εh²/(πme²)   (3.3)

where h is Planck's constant, m the (effective) mass of the electron and ε the permittivity of the medium. Typical values range from a few to a few hundred nanometres. Therefore, it is practically possible to create particles whose radius r is smaller than the Bohr radius. In this case the energy levels of the electrons (a similar argument applies to defect electrons, i.e. positive holes) increase, and the greater the degree of confinement, the greater the increase. Hence the band edge of optical absorption (and band-edge luminescent emission) blue-shifts with decreasing r for r < rB. This is sometimes called a quantum size effect in the scientific literature, and nanoparticles with this property are called quantum dots.
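The blue shift can be estimated with the simplest particle-in-a-sphere model, in which the ground-state confinement energy is ΔE = ℏ²π²/(2m*r²); the 1/r² growth is what makes the emission wavelength tunable. A sketch, with an illustrative effective mass typical of a semiconductor (an assumption, not a value from the text):

```python
import math

HBAR = 1.0546e-34     # reduced Planck constant, J s
EV = 1.602e-19        # J per eV

def confinement_shift_eV(r, m_eff):
    """Ground-state confinement energy (eV) of a carrier in a sphere
    of radius r: dE = hbar^2 * pi^2 / (2 * m_eff * r^2)."""
    return HBAR ** 2 * math.pi ** 2 / (2 * m_eff * r ** 2) / EV

m_eff = 0.1 * 9.109e-31  # illustrative: 0.1 electron masses

# The shift grows as 1/r^2, so smaller dots emit at shorter wavelengths.
for r in (10e-9, 5e-9, 2e-9):
    print(f"r = {r * 1e9:4.1f} nm  shift = {confinement_shift_eV(r, m_eff):.2f} eV")
```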

Integrated optics. Light can be confined in a channel or plate made from a transparent material having a higher refractive index than that of its environment. Effectively, light propagates in such a structure by successive total internal reflexions at the boundaries. The channel (or fibre) can have a diameter, or the plate a thickness, less than the wavelength of the light. Below a certain minimum diameter or thickness (the cut-off), however, typically around one third of the wavelength of the light, propagation is no longer possible. The science and technology of light guided in thin structures is called integrated optics and fibre optics, and sometimes nanophotonics. However, the cut-off length is several hundred nanometres, and does not therefore truly fall into the nano realm as it is currently defined.

Chemical reactivity. Consider the prototypical homogeneous reaction A + B → C. Suppose that the reaction rate coefficient kf is much less than the diffusion-limited rate, that is, kf ≪ 4π(dA + dB)(DA + DB), where d and D are the molecular radii and diffusivities respectively. Then1

d⟨γt⟩/dt = kf[⟨a⟩⟨b⟩ + Δ²(γt)] = kf⟨ab⟩   (3.4)

where a and b are the numbers (concentrations) of A and B, the angular brackets denote expected numbers, and γt is the number of C molecules created up to time t. The term Δ²(γt) expresses the fluctuations in γt: ⟨γt²⟩ = ⟨γt⟩² + Δ²(γt); supposing that γt approximates to a Poisson distribution, Δ²(γt) will be of the same order of magnitude as ⟨γt⟩. The kinetic mass action law (KMAL), putting a = a0 − c(t) etc., the subscript 0 denoting initial concentration at t = 0, is a first approximation in which Δ²(γt) is supposed negligibly small

1See A. Rényi, Kémiai reakciók tárgyalása a sztochasztikus folyamatok elmélete segítségével. Magy. Tud. Akad. Mat. Kut. Int. Közl. 2 (1953) 83–101.



compared to ⟨a⟩ and ⟨b⟩, implying that ⟨ab⟩ = ⟨a⟩⟨b⟩, whereas strictly speaking it is not, since a and b are not independent. The neglect of Δ²(γt) is justified for molar quantities of starting reagents (except near the end of the process, when a and b become very small), but not for reactions in nanomixers.

These number fluctuations, i.e. the Δ²(γt) term, will constantly tend to be eliminated by diffusion. On the other hand, because of the correlation between a and b, initial inhomogeneities in their spatial densities lead to the development of zones enriched in either one or the other faster than the enrichment can be eliminated by diffusion. Hence instead of A disappearing as t⁻¹ (when a0 = b0), it is consumed as t⁻³/⁴, and in the case of a reversible reaction, equilibrium is approached as t⁻³/². Deviations from perfect mixing are more pronounced in dimensions lower than three.
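The importance of number fluctuations at small populations can be seen in a minimal Gillespie-style stochastic simulation of the irreversible reaction A + B → C, of the kind relevant to a nanoscale reactor. The rate constant and population sizes below are illustrative, not taken from the text:

```python
import math
import random

def gillespie_ab(a0, b0, kf, seed=1):
    """Stochastic simulation of A + B -> C. Each step: the propensity
    kf*a*b sets an exponentially distributed waiting time, then one A
    and one B are consumed. Returns the time to completion."""
    rng = random.Random(seed)
    a, b, t = a0, b0, 0.0
    while a > 0 and b > 0:
        propensity = kf * a * b
        t += rng.expovariate(propensity)  # exponential waiting time
        a -= 1
        b -= 1
    return t

# With only tens of molecules, completion times scatter widely from run
# to run -- fluctuations the deterministic KMAL cannot capture.
times = [gillespie_ab(20, 20, kf=0.01, seed=s) for s in range(200)]
mean = sum(times) / len(times)
spread = math.sqrt(sum((x - mean) ** 2 for x in times) / len(times))
print(f"mean completion time {mean:.1f}, run-to-run spread {spread:.1f}")
```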

Occurrence of impurities. If p is the probability that an atom is substituted by an impurity, then the probability of exactly k impurities among n atoms is

b(k; n, p) = (n choose k) pᵏ qⁿ⁻ᵏ   (3.5)

where q = 1 − p. If the product np = λ is of moderate size (∼ 1), the distribution can be simplified to

b(k; n, p) ≈ (λᵏ/k!) e⁻λ = p(k; λ)   (3.6)

the Poisson approximation to the binomial distribution. Hence the smaller the device, the higher the probability that it will be defect-free. The relative advantage of replacing one large device by m devices, each 1/mth of the original size, is m¹⁻ᵏ e^(np(1−1/m)), assuming that the nanification does not itself introduce new impurities.
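The defect-free case of equation (3.6) is p(0; λ) = e^(−np), which falls off exponentially with device size. A sketch (the impurity probability is illustrative):

```python
import math

def p_defect_free(n_atoms, p_impurity):
    """Poisson approximation: probability that a device of n atoms
    contains zero impurities, p(0; lambda) = exp(-n*p)."""
    return math.exp(-n_atoms * p_impurity)

p = 1e-9  # illustrative impurity probability per atom

# A device of 10^9 atoms is defect-free only ~37% of the time;
# shrink it to 10^6 atoms and a defect becomes very unlikely.
print(p_defect_free(1e9, p))  # ~0.368
print(p_defect_free(1e6, p))  # ~0.999
```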

Mechanical properties. The spring constant (stiffness) k of a nanocantilever varies with its characteristic linear dimension as l, and its mass m as l³. Hence the resonant frequency of its vibration, ω0 = √(k/m), varies as 1/l. This ensures a fast response—in effect, nanomechanical devices are extremely stiff. Since the figure of merit (quality factor) Q equals ω0 divided by the drag (friction coefficient), Q, especially for devices operating in a high vacuum, can be many orders of magnitude greater than the values encountered in conventional devices. On the other hand, under typical terrestrial operating conditions water vapour and other impurities may condense onto moving parts, increasing drag due to capillary effects and generally degrading performance.
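The 1/l scaling of ω0 can be made concrete for a rectangular end-loaded cantilever, whose stiffness is k = Ewt³/(4l³) and mass m = ρlwt; shrinking every dimension by the same factor s then multiplies ω0 = √(k/m) by 1/s. A sketch with illustrative silicon values:

```python
import math

def cantilever_omega0(length, width, thickness, E, rho):
    """Angular resonant frequency of a rectangular cantilever:
    k = E*w*t^3/(4*l^3) (end-loaded stiffness), m = rho*l*w*t,
    omega0 = sqrt(k/m)."""
    k = E * width * thickness ** 3 / (4 * length ** 3)
    m = rho * length * width * thickness
    return math.sqrt(k / m)

# Illustrative silicon values:
E_SI, RHO_SI = 1.7e11, 2330.0  # Young's modulus (Pa), density (kg/m^3)

w_micro = cantilever_omega0(100e-6, 10e-6, 1e-6, E_SI, RHO_SI)
w_nano = cantilever_omega0(100e-9, 10e-9, 1e-9, E_SI, RHO_SI)
# Shrinking all dimensions 1000x raises the resonant frequency 1000x.
print(w_nano / w_micro)  # ~1000
```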



3.2 Forces

The magnitudes of the forces (gravitational, electrostatic, etc.) between objects depend on their sizes and the distance z between them. At the nanoscale, gravitational forces are so weak that they can be neglected. Conversely, the range of the strong nuclear force is much smaller than the nanoscale, and it too can be neglected. Of particular importance are several forces (e.g., the van der Waals force) that are electrostatic in origin. They are discussed in Chapter 7, since they are especially important for self-assembly.

A cavity consisting of two mirrors facing each other disturbs the pervasive zero-point electromagnetic field, because only certain wavelengths can fit exactly into the space between the mirrors. This lowers the zero-point energy density in the region between the mirrors, resulting in an attractive Casimir force. The force falls off rapidly with the distance z between the mirrors (as z⁻⁴), and hence is negligible at the microscale and above, but at a separation of 10 nm it is comparable with atmospheric pressure (10⁵ N m⁻²), and therefore can be expected to affect the operation of nanoscale mechanical devices.
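The z⁻⁴ dependence and the magnitude quoted can be checked from the ideal-mirror Casimir pressure, P = π²ℏc/(240 z⁴):

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s

def casimir_pressure(z):
    """Attractive pressure (N/m^2) between two ideal parallel mirrors
    separated by z: P = pi^2 * hbar * c / (240 * z^4)."""
    return math.pi ** 2 * HBAR * C / (240 * z ** 4)

# At 10 nm the pressure is ~10^5 N/m^2, comparable with atmospheric
# pressure; at 1 um it has fallen by eight orders of magnitude.
print(f"{casimir_pressure(10e-9):.2e}")
print(f"{casimir_pressure(1e-6):.2e}")
```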



