Give the summary of the text using the key terms.


ARTIFICIAL LIFE

Read the following words and word combinations and use them for understanding and translation of the text:

manifold- разнообразный, многообразный

blanket term- общий термин

design space- пространство проектных решений (параметров)

generalize- обобщать

to conceive- задумать, замыслить, разработать

typified- на примере

crossover- кроссинговер (перекрест хромосом)

to intervene - вмешиваться

full-blown design- полнофункциональная модель (образец)

laypeople- непрофессионалы

carbon chemistry- химия углеродных соединений

species of prey- вид-жертва, добыча

predator- хищник

to validate- подтверждать, проверять правильность

conversely - наоборот, напротив, с другой стороны

to remedy- лечить, исправлять

commitment to the idea- приверженность идее

autopoiesis- самосоздание, самовоспроизводство

The historical and theoretical roots of the field are manifold. These roots include:

· early attempts to imitate the behavior of humans and animals by the invention of mechanical automata in the sixteenth century;

· cybernetics as the study of general principles of informational control in machines and animals;

· computer science as theory and the idea of abstract equivalence between various ways to express the notion of computation, including physical instantiations of systems performing computations;

· John von Neumann's so-called self-reproducing Cellular Automata;

· computer science as a set of technical practices and computational architectures;

· artificial intelligence (AI);

· robotics;

· philosophy and system science notions of levels of organization, hierarchies, and emergence of new properties;

· non-linear science, such as the physics of complex systems and chaos theory;

· theoretical biology, including abstract theories of life processes; and

· evolutionary biology.

Artificial life is a blanket term used to refer to human attempts at setting up systems with lifelike properties all biological organisms possess, such as self-reproduction, homeostasis, adaptability, mutational variation, optimization of external states, and so on. The term is commonly associated with computer simulation-based artificial life, preferred heavily to robotics because of its ease of reprogramming, inexpensive hardware, and greater design space to explore. Artificial life projects can be thought of as attempts to generalize the phenomenon of life, asking questions like, "what would life have looked like if it evolved under radically different physical conditions?", "what is the logical form of all living systems?", or "what is the simplest possible living system?"

The term "artificial life", often shortened to "alife" or "A-Life", was coined in the late 1980s by researcher Christopher Langton, who defined it as "the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth... the ultimate goal is to extract the logical form of living systems."

Probably the first person to actively study and write on topics related to A-Life was the noted mathematician John von Neumann, who was also an early figure in the field of game theory. In the middle of the 20th century, von Neumann delivered a paper entitled "The General and Logical Theory of Automata," in which he discussed the concept of a machine that follows simple rules and reacts to information in its environment. Von Neumann proposed that living organisms are just such machines. He also studied the concept of machine self-replication, and conceived the idea that a self-replicating machine, or organism, must contain within itself a list of instructions for producing a copy of itself. This was several years before James Watson and Francis Crick, with the help of Rosalind Franklin and Maurice Wilkins, discovered the structure of DNA.

The field was expanded by the development of cellular automata as typified in John Conway’s Game of Life in the 1970s, which demonstrated how simple components interacting according to a few specific rules could generate complex emergent patterns. This principle is used to model the flocking behavior of simulated birds, called “boids”.
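
For readers who want to see those "few specific rules" in code, here is a minimal illustrative sketch in Python (it is not part of the original text): one update step of Conway's Game of Life, in which a live cell survives with two or three live neighbours and a dead cell comes to life with exactly three.

```python
# Illustrative sketch: one update step of Conway's Game of Life
# on a small toroidal grid (edges wrap around).

def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count live neighbours of cell (r, c)
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                new_grid[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                new_grid[r][c] = 1 if neighbours == 3 else 0
    return new_grid

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
for row in life_step(blinker):
    print(row)        # the three live cells now form a vertical bar
```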

The development of genetic algorithms by John Holland added selection and evolution to the act of reproduction. This approach typically involves the setting up of numerous small programs with slightly varying code, and having them attempt a task such as sorting data or recognizing patterns. Those programs that prove most “fit” at accomplishing the task are allowed to survive and reproduce. In the act of reproduction, biological mechanisms such as genetic mutation and crossover are allowed to intervene. A rather similar approach is found in the neural network, where those nodes that succeed better at the task are given greater “weight” in creating a composite solution to the problem.
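
The selection, crossover, and mutation cycle described above can be sketched in a few lines of Python. The example below is only an illustration under simple assumptions of this edition (the "task" is to evolve a bit string of all ones, and fitness is simply the number of ones); it is not the algorithm of any particular researcher.

```python
# Minimal genetic-algorithm sketch: evolve bit strings toward all ones.
# Fitness = number of ones; the fitter half survives and reproduces,
# with one-point crossover and occasional mutation intervening.
import random

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    point = random.randrange(1, LENGTH)      # one-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # the programs that prove most "fit" survive ...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ... and reproduce, with crossover and mutation
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(ind) for ind in population))   # approaches LENGTH
```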

A more challenging but interesting approach to AL is to create actual robotic “organisms” that navigate in the physical rather than the virtual world. Roboticist Hans Moravec of the Stanford AI Laboratory and other researchers have built robots that can deal with unexpected obstacles by improvisation, much as people do, thanks to layers of software that process perceptions, fit them to a model of the world, and make plans based on goals. But such robots, built as full-blown designs, share few of the characteristics of artificial life. As with AI, the bottom-up approach offers a different strategy that has been called “fast, cheap, and out of control”—the production of numerous small, simple, insectlike robots that have only simple behaviors, but are potentially capable of interacting in surprising ways. If a meaningful genetic and reproductive mechanism can be included in such robots, the result would be much closer to true artificial life.

Artificial life is a very new discipline, founded only in the late 1980s, and is still very much under development. Like other new fields, it has been the subject of some criticism. Because of its abstract nature, artificial life has taken time to be understood and accepted by the mainstream; papers on the topic have only recently been put into prominent scientific publications like Nature and Science. As with any new discipline, researchers need time to select the most fruitful research paths and translate their findings into terms other scientists and laypeople can understand and appreciate. The field of artificial life is one that seems poised to grow as the cost of computing power continues to drop.

Artificial life may be labeled software, hardware, or wetware, depending on the type of media researchers work with.

Software artificial life is rooted in computer science and represents the idea that life is characterized by form, or forms of organization, rather than by its constituent material. Thus, "life" may be realized in some form (or media) other than carbon chemistry, such as in a computer's central processing unit, or in a network of computers, or as computer viruses spreading through the Internet. One can build a virtual ecosystem and let small component programs represent species of prey and predator organisms competing or cooperating for resources like food.
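
As a rough illustration of such a virtual ecosystem, the sketch below steps a toy predator-prey population model. It is an assumption of this edition rather than part of the source text, and real software artificial life would let many individual component programs interact, but even this coarse version shows two competing populations.

```python
# Toy predator-prey dynamics (illustrative only).
# Prey reproduce, predators eat prey and die off without food;
# both populations are updated in discrete time steps.
prey, predators = 100.0, 20.0
birth, predation, efficiency, death = 0.10, 0.002, 0.001, 0.05

for step in range(10):
    eaten = predation * prey * predators
    prey += birth * prey - eaten
    predators += efficiency * prey * predators - death * predators
    print(f"step {step:2d}: prey={prey:7.1f}  predators={predators:6.1f}")
```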

The difference between this type of artificial life and ordinary scientific use of computer simulations is that, with the latter, the researcher attempts to create a model of a real biological system (e.g., fish populations of the Atlantic Ocean) and to base the description upon real data and established biological principles. The researcher tries to validate the model to make sure that it represents aspects of the real world. Conversely, an artificial life model represents biology in a more abstract sense; it is not a real system, but a virtual one, constructed for a specific purpose, such as investigating the efficiency of an evolutionary process of a Lamarckian type (based upon the inheritance of acquired characters) as opposed to Darwinian evolution (based upon natural selection among randomly produced variants). Such a biological system may not exist anywhere in the real universe. As Langton emphasized, artificial life investigates "the biology of the possible" to remedy one of the inadequacies of traditional biology, which is bound to investigate how life actually evolved on Earth, but cannot describe the borders between possible and impossible forms of biological processes. For example, an artificial life system might be used to determine whether it is only by historical accident that organisms on Earth have the universal genetic code that they have, or whether the code could have been different.

It has been much debated whether virtual life in computers is nothing but a model on a higher level of abstraction, or whether it is a form of genuine life, as some artificial life researchers maintain. In its computational version, this claim implies a form of Platonism whereby life is regarded as a radically medium-independent form of existence similar to futuristic scenarios of disembodied forms of cognition and AI that may be downloaded to robots. In this debate, classical philosophical issues about dualism, monism, materialism, and the nature of information are at stake, and there is no clear-cut demarcation between science, metaphysics, and issues of religion and ethics.

Hardware artificial life refers to small animal-like robots, usually called animats, that researchers build and use to study the design principles of autonomous systems or agents. The functionality of an agent (a collection of modules, each with its own domain of interaction or competence) is an emergent property of the intensive interaction of the system with its dynamic environment. The modules operate quasi-autonomously and are solely responsible for the sensing, modeling, computing or reasoning, and motor control that is necessary to achieve their specific competence. Direct coupling of perception to action is facilitated by the use of reasoning methods, which operate on representations that are close to the information of the sensors.

This approach states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Representations do not need to be explicit and stable, but must be situated and "embodied." The robots are thus situated in a world; they do not deal with abstract descriptions, but with the environment that directly influences the behavior of the system. In addition, the robots have "bodies" and experience the world directly, so that their actions have an immediate feedback upon the robot's own sensations. Computer-simulated robots, on the other hand, may be "situated" in a virtual environment, but they are not embodied. Hardware artificial life has many industrial and military technological applications.
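
A very rough Python sketch of the behavior-based organization described in the two paragraphs above is given below. The sensor readings, thresholds, and motor commands are hypothetical placeholders; the point is only that each module couples a simple perception directly to an action, and the overall behavior emerges from which module gets to act.

```python
# Illustrative sketch of behaviour-based (animat-style) control.
# Each module maps sensor values directly to a motor command, or
# declines to act; modules are consulted in priority order.

def avoid_obstacles(sensors):
    if sensors["front_distance"] < 0.3:      # hypothetical threshold
        return "turn_left"
    return None                              # no opinion

def seek_light(sensors):
    if abs(sensors["light_left"] - sensors["light_right"]) < 0.1:
        return None                          # no clear gradient
    return "turn_left" if sensors["light_left"] > sensors["light_right"] else "turn_right"

def wander(sensors):
    return "go_forward"

MODULES = [avoid_obstacles, seek_light, wander]   # priority order

def control_step(sensors):
    for module in MODULES:
        command = module(sensors)
        if command is not None:
            return command

print(control_step({"front_distance": 0.8,
                    "light_left": 0.7, "light_right": 0.4}))
# -> "turn_left" (no obstacle ahead, light is stronger on the left)
```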

Wetware artificial life comes closest to real biology. The scientific approach involves conducting experiments with populations of real organic macromolecules (combined in a liquid medium) in order to study their emergent self-organizing properties. An example is the artificial evolution of ribonucleic acid molecules (RNA) with specific catalytic properties. (This research may be useful in a medical context or may help shed light on the origin of life on Earth.) Research into RNA and similar scientific programs, however, often takes place in the areas of molecular biology, biochemistry and combinatorial chemistry, and other carbon-based chemistries. Such wetware research does not necessarily have a commitment to the idea, often assumed by researchers in software artificial life, that life is composed of medium-independent forms of existence.

Thus wetware artificial life is concerned with the study of self-organizing principles in "real chemistries." In theoretical biology, autopoiesis is a term for the specific kind of self-maintenance produced by networks of components producing their own components and the boundaries of the network in processes that resemble organizationally closed loops. Such systems have been created artificially by chemical components not known in living organisms.

The philosophical implications arising from the possible development of true artificial life are similar to those involved with “strong AI.” Human beings are used to viewing themselves as the pinnacle of a hierarchy of intelligence and creativity. However, artificial life with the capability of rapid evolution might quickly outstrip human capabilities, perhaps leading to a world like that portrayed by science fiction writers where flesh-and-blood humans become a marginalized remnant population.

Notes:

Cellular Automaton is a collection of "colored" cells on a grid of specified shape that evolves through a number of discrete time steps according to a set of rules based on the states of neighboring cells. The rules are then applied iteratively for as many time steps as desired.

homeostasis is the ability to maintain a constant internal environment in response to environmental changes.

DNA, or deoxyribonucleic acid, is the hereditary material in humans and almost all other organisms that encodes the genetic instructions used in the development and functioning of all known living organisms and many viruses.

Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. The "game" is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.

Hans Moravec (born November 30, 1948, Kautzen, Austria) is an adjunct faculty member at the Robotics Institute of Carnegie Mellon University. He is known for his work on robotics, artificial intelligence, and writings on the impact of technology. Moravec also is a futurist with many of his publications and predictions focusing on transhumanism. Moravec developed techniques in computer vision for determining the region of interest (ROI) in a scene.

animats are artificial animals, a contraction of animal-materials. The term includes physical robots and virtual simulations.

ribonucleic acid (RNA) is a ubiquitous family of large biological molecules that perform multiple vital roles in the coding, decoding, regulation, and expression of genes. Together with DNA, RNA comprises the nucleic acids, which, along with proteins, constitute the three major macromolecules essential for all known forms of life.

autopoiesis (from Greek αὐτο- (auto-), meaning "self", and ποίησις (poiesis), meaning "creation, production") refers to a system capable of reproducing and maintaining itself.

Assignments

1. Translate the sentences from the text into Russian in writing, paying attention to the underlined words and phrases:

1. The term is commonly associated with computer simulation-based artificial life, preferred heavily to robotics because of its ease of reprogramming, inexpensive hardware, and greater design space to explore.

2. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth... the ultimate goal is to extract the logical form of living systems.

3. This principle is used to model the flocking behavior of simulated birds, called “boids”.

4. This approach typically involves the setting up of numerous small programs with slightly varying code, and having them attempt a task such as sorting data or recognizing patterns.

5. As Langton emphasized, artificial life investigates "the biology of the possible" to remedy one of the inadequacies of traditional biology, which is bound to investigate how life actually evolved on Earth, but cannot describe the borders between possible and impossible forms of biological processes.

6. The functionality of an agent (a collection of modules, each with its own domain of interaction or competence) is an emergent property of the intensive interaction of the system with its dynamic environment.

7. This approach states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world.

2. Answer the following questions:

1. What are the origins of A-life as a discipline?

2. What questions are believed to be central to the field of A-life?

3. What does the concept of Cellular Automata involve?

4. Which of the three types of A-life seems to be most promising?

5. What are the distinguishing features of each type?

6. What kind of ethical issues might arise concerning A-life?

3. Translate into English:

Искусственная жизнь создана! Возможно ли такое?

24 мая 2010 года на пресс-конференции известный и талантливый американский биолог и бизнесмен Вентер, первый в мире расшифровавший геном человека, объявил общественности, что под его руководством институтом его же имени создана искусственная жизнь.

Впервые в истории создана искусственная живая клетка, которая всецело управляется рукотворным геномом. Ранее ученые лишь редактировали ДНК по кусочкам, получая генномодифицированные растения и животных.

Это достижение, несомненно, подогреет споры об этичности создания искусственной жизни, а также о юридически-правовых моментах и общественной опасности таких работ. "Это поворотный момент в отношениях человека с природой: впервые создана целая искусственная клетка с заранее заданными свойствами", - пояснил молекулярный биолог Ричард Эбрайт из Университета Рутджерса. По мнению экспертов, вскоре метод будет использоваться в коммерческих целях: некоторые компании уже разрабатывают живые организмы, способные синтезировать топливо, вакцины и др. Компания Synthetic Genomics Inc., основанная Вентером, заключила контракт на 600 млн. долларов на разработку водорослей, способных поглощать углекислый газ и производить топливо.

Ученые фактически претворили компьютерную программу в новое живое существо. Взяв за основу одну из бактерий, они внесли в компьютер полную расшифровку ее генома, заменили некоторые фрагменты в этом "тексте" своими собственными "сочинениями" и получили модифицированный вариант бактерии другого реально существующего вида. "Мы изготавливаем геном из четырех пузырьков химикатов, вносим искусственный геном в клетку, и наш искусственный геном подчиняет клетку себе", - разъяснил один из руководителей проекта Дэниел Гибсон. Чтобы обособить эту новую бактерию и всех ее потомков от творений природы, Вентер и его коллеги вставили в геном свои имена, а также три цитаты из Джеймса Джойса и других авторов. Эти "генетические водяные знаки" помогут ученым предъявить право собственности на клетки.

Topics for essays (you might need additional information):

· Early automatons

· Famous robotics projects

· Swarm intelligence: pros and cons

· Chemically Synthesized Genome

FUTURE COMPUTING

QUANTUM COMPUTING

Read the following words and word combinations and use them for understanding and translation of the text:

property- свойство, качество

quantum- квант, квантовый

spin- вращение

superposition- суперпозиция, наложение, совмещение

to flesh out- конкретизировать, изложить в деталях

to spur- побуждать, стимулировать

in part- частично

to outline- намечать, изложить вкратце

to factor- факторизовать, разложить (на множители)

integer- целое число

to be of great interest (to)- представлять большой интерес (для)

to tackle- заниматься

entanglement- перепутывание (квантовых состояний)

to crack- раскалывать(ся), ломаться

civilian- гражданский

The fundamental basis of electronic digital computing is the ability to store a binary value (1 or 0) using an electromagnetic property such as electrical charge or magnetic field.

However, during the first part of the 20th century, physicists discovered the laws of quantum mechanics that apply to the behavior of subatomic particles. An electron or photon, for example, can be said to be in any one of several “quantum states” depending on such characteristics as spin. In 1981, physicist Richard Feynman came up with the provocative idea that if quantum properties could be “read” and set, a computer could use an electron, photon, or other particle to store not just a single 1 or 0, but a number of values simultaneously. This ability of a quantum system to be in multiple states at the same time is called superposition. The simplest case, storing two values at once, is called a “qubit” (short for “quantum bit”). In 1985, David Deutsch at Oxford University fleshed out Feynman’s ideas by creating an actual design for a “quantum computer”, including an algorithm to be run on it.
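
A qubit in superposition is commonly described by two complex amplitudes whose squared magnitudes give the probabilities of reading 0 or 1. The sketch below is an illustrative classical simulation of that idea (it is not taken from the text and, of course, runs on an ordinary computer).

```python
# Illustrative classical simulation of one qubit in superposition.
# The state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# |a|^2 is the probability of measuring 0, |b|^2 of measuring 1.
import math, random

a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)    # equal superposition of 0 and 1

def measure(a, b):
    return 0 if random.random() < abs(a) ** 2 else 1

samples = [measure(a, b) for _ in range(1000)]
print(samples.count(0), samples.count(1))    # roughly 500 / 500
```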

At the time of Feynman’s proposal, the techniques for manipulating individual atoms or even particles had not yet been developed, so a practical quantum computer could not be built. However, during the 1990s, considerable progress was made, spurred in part by the suggestion of Bell Labs researcher Peter Shor, who outlined a quantum algorithm that might be used for rapid factoring of extremely large integers. Since the security of modern public key cryptography depends on the difficulty of such factoring, a working quantum computer would be of great interest to spy agencies.

The reason for the tremendous potential power of quantum computing is that if each qubit can store two values simultaneously, a register with three qubits can store eight values, and in general, for n qubits one can operate on 2ⁿ values simultaneously. This means that a single quantum processor might be the equivalent of a huge number of separate processors. Clearly many problems that have been considered not practical to solve might be tackled with quantum computers.
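
The exponential growth mentioned here (2ⁿ simultaneous values for n qubits) simply reflects the size of the state vector needed to describe the register. The snippet below prints that size for a few register widths; it is plain arithmetic, not a quantum program.

```python
# Number of amplitudes needed to describe an n-qubit register: 2 ** n.
for n in (1, 2, 3, 10, 50):
    print(f"{n:2d} qubits -> {2 ** n} simultaneous basis values")
```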

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. Unfortunately, quantum particles cannot be observed without being altered. Scientists use their knowledge of entanglement to indirectly observe the value of a qubit. When two subatomic particles become entangled, one particle adopts the properties of the other. Without looking at the qubit itself, scientists can read its value by observing the behavior of a particle with which it is entangled.
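
Entanglement can be illustrated with a tiny state-vector simulation as well. In the Bell state sketched below the two qubits are always found with matching values, so observing one effectively tells you the other, which is the kind of correlation the paragraph refers to. This is an illustrative classical simulation, not material from the original text.

```python
# Illustrative simulation of an entangled (Bell) state of two qubits:
# amplitudes over the basis states 00, 01, 10, 11.
import math, random

amplitudes = {"00": 1 / math.sqrt(2), "01": 0.0,
              "10": 0.0,              "11": 1 / math.sqrt(2)}

def measure(amplitudes):
    r, total = random.random(), 0.0
    for outcome, amp in amplitudes.items():
        total += abs(amp) ** 2
        if r < total:
            return outcome
    return outcome           # floating-point safety net

results = [measure(amplitudes) for _ in range(10)]
print(results)   # only "00" and "11" ever appear: the two qubits agree
```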

There are many potential applications for quantum computing. While the technology could be used to crack conventional cryptographic keys, researchers have suggested that it could also be used to generate unbreakable keys that depend on the “entanglement” of observers and what they observe. The sheer computational power of a quantum computer might make it possible to develop much better computer models of complex phenomena such as weather, climate, and the economy – or of quantum behavior itself.

As of 2014 quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits. Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.

Notes:

Bell Labs (Bell Laboratories) - бывшая американская, а ныне франко-американская корпорация, крупный исследовательский центр в области телекоммуникаций, электронных и компьютерных систем. Штаб-квартира Bell Labs расположена в Мюррей Хилл (Нью-Джерси, США)

Assignments

1. Translate the sentences from the text into Russian in writing paying attention to the underlined words and phrases:

1. However, during the 1990s, considerable progress was made, spurred in part by the suggestion of Bell Labs researcher Peter Shor, who outlined a quantum algorithm that might be used for rapid factoring of extremely large integers.

2. Since the security of modern public key cryptography depends on the difficulty of such factoring, a working quantum computer would be of great interest to spy agencies.

3. Unfortunately, quantum particles cannot be observed without being altered.

4. As of 2014 quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.

5. Both practical and theoretical research continues.

2. Answer the following questions:

1. What is the basis of electronic digital computing?

2. What provocative idea did physicist Richard Feynman come up with?

3. Why could a practical quantum computer not be built at the time of Feynman’s proposal?

4. Describe the reason for a huge potential power of quantum computing.

5. What aspects of quantum mechanics do quantum computers utilize?

6. How can quantum computing be applied?

3. Translate into English:

Современные компьютерные чипы могут содержать до нескольких миллиардов транзисторов на одном квадратном сантиметре кремния, а в будущем подобные элементы не будут превышать размера молекулы. Устройства с такими чипами будут существенно отличаться от классических компьютеров. Это обусловлено тем, что принципы их работы будут основаны на квантовой механике, физических законах, объясняющих поведение атомов и субатомных частиц. Ученые надеются, что квантовые компьютеры смогут решать ряд специфических задач гораздо быстрее, чем их классические собратья.

В действительности создать квантовый компьютер непросто. Основные его элементы - атомы, фотоны или специально созданные микроструктуры, хранящие данные в так называемых кубитах (квантовых битах), особенность которых заключается в том, что они должны отвечать двум противоречивым требованиям. С одной стороны, они должны быть достаточно изолированы от любых внешних воздействий, которые могут нарушить вычислительный процесс, а с другой - иметь возможность взаимодействовать с другими кубитами. Кроме того, необходимо иметь возможность измерить окончательное состояние кубитов и отобразить результаты вычислений.

Ученые во всем мире используют несколько подходов для создания первых прототипов квантовых компьютеров.
