Artificial Intelligence and Intelligence
AI has two goals. The more ambitious one is to produce an artificial system that is about as good as, or better than, a human being at dealing with the real world. The second goal is more modest: simply to produce small programs that are more or less as good as human beings at small, specialized tasks that require intelligence. To many AI researchers, simply performing tasks that in human beings require intelligence counts as Artificial Intelligence, even if the program gets its results by some means that shows no intelligence at all; thus much of AI can be regarded as "advanced programming techniques".
The characteristics of intelligence given here would, I think, seem quite reasonable to most normal people; however, AI researchers and AI critics take various unusual positions, and in the end everything gets quite murky. Some critics believe that intelligence requires thinking, while others say it requires consciousness. Some AI researchers take the position that thinking equals computing, while others do not.
One of the worst ideas to come up in these debates is the Turing test for thinking. People argue the validity of this test endlessly: every few months one of these discussions gets going on the Internet and runs for a month or so before it dies out. Notice that the goal of the Turing test is to fool naive questioners for a little while, so in effect a major preoccupation of AI has been to devise programs that fool the public, and this is a very bad way to give the subject credibility. In fact, one analog to the Turing test is the following test. I give you two test tubes: one has a tiny amount of fool's gold at the bottom and the other has a tiny amount of real gold. I give you a few minutes to JUST look at them (no chemical or physical tests!), and then you must tell me which one has the real gold. Odds are that about half of the ordinary people taking this test will be right and the other half will be wrong. Then, since ordinary people cannot tell the difference between the two after a short amount of time, are they the same? Would you buy fool's gold from me at the current price of gold (about $300 an ounce)? Yet this is exactly what the Turing test proposes for thinking. Maybe this IS one of Turing's jokes?
A much more meaningful method of determining whether or not a computer is thinking would be to find out exactly what people are doing when they think; if the artificial system is doing the same thing, or something very close to it, then it becomes fair to equate the two. One of the positions on intelligence that I mention in this section is that it requires consciousness and that consciousness is produced by quantum mechanics. For those of you who have been denied a basic education in science by our schools, quantum mechanics goes like this. By the beginning of the 20th century, physicists had discovered that electrons, protons and various other very small particles were not obeying the known laws of physics. After a while it became clear that particles also behave like waves. The best-known formula they found to describe how the particles move around is the Schrödinger wave equation, a second-order partial differential equation that uses complex numbers. Since then, quantum formulas have been developed for electricity and magnetism as well. Although, as far as anyone can tell, the formulas get the right answers, the interpretation of what is really going on is in dispute. The formulas and some experiments apparently show that at times information is moved around not just at the very slow speed of light but INSTANTLY. Results like this are very unsettling. One of the developers of QM, Niels Bohr, once said: "Anyone who isn't confused by quantum mechanics doesn't really understand it."
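For readers who want to see the formula being referred to, one standard textbook form of the Schrödinger equation (the time-dependent equation for a single particle of mass m moving in a potential V) can be written in LaTeX notation as follows; it is included here only as a reference point.

    i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \Psi + V \Psi

The imaginary unit i is where the complex numbers come in, and the \nabla^2 term (second derivatives with respect to the spatial coordinates) is what makes the equation second order.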
Task 3. Translate the text into Russian and find exact equivalents for the terms.
Ontolingua
Knowledge sharing is an increasingly important problem for AI. Knowledge-based systems do not solve problems right out of the box; they depend on custom knowledge bases (sometimes called "domain theories" or "background knowledge") to get the job done. To build on existing AI work, one must construct the required knowledge base, which is an expensive and underspecified task. Our hypothesis is that knowledge sharing — using and extending existing, carefully designed knowledge bases — is more productive than building them from scratch. However, knowledge sharing is currently difficult, because knowledge bases are typically ad hoc designs intended for narrow tasks, implemented in a babel of incompatible representation systems. We need mechanisms for developing and using shared knowledge bases. In particular, we need mechanisms for representing the sharable content of the knowledge used by AI systems.
Neches describes a strategy for developing the technology to support large-scale knowledge sharing. An essential part of that strategy is the use of common ontologies to specify the terminology upon which knowledge-based systems depend. Consider a planning system based on a theory in which plans are composed of "steps" that form "sequences" with specific kinds of "resource dependencies," and in which the search for plans is guided by "ordering heuristics" and "optimization criteria." To use this planning system, one would need to understand what these words mean and build a knowledge base in which domain-specific knowledge is formulated in terms that the planning program also understands.
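As an illustration, terminology like this might be pinned down by axioms roughly like the following sketch, written in a KIF-style predicate-calculus notation (KIF is described later in this text). The relation names plan-step, precedes, resource-dependency and requires are invented for this example and are not taken from any actual planning system.

    ;; Hypothetical axioms constraining an invented planning vocabulary.
    ;; Every step of a plan is both a step and attached to a plan.
    (forall (?step ?plan)
      (=> (plan-step ?step ?plan)
          (and (step ?step) (plan ?plan))))

    ;; "precedes" orders steps into sequences within a single plan.
    (forall (?s1 ?s2 ?plan)
      (=> (and (precedes ?s1 ?s2) (plan-step ?s1 ?plan))
          (plan-step ?s2 ?plan)))

    ;; A resource dependency holds between two steps that require
    ;; the same resource.
    (forall (?s1 ?s2 ?r)
      (=> (resource-dependency ?s1 ?s2 ?r)
          (and (requires ?s1 ?r) (requires ?s2 ?r))))

Axioms of this kind are what an ontology contributes: they do not say how to plan, only what the shared vocabulary is committed to mean.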
An ontology is a vocabulary of such terms (names of relations, functions, individuals), defined in a form that is both human- and machine-readable.[1] An ontology, together with a kernel syntax and semantics, provides the language by which knowledge-based systems can interoperate at the knowledge level: exchanging assertions, queries, and answers.
As a software engineering construct, ontologies play the role of a coupling interface among shared knowledge bases and knowledge-based systems, much as the formal argument list of a procedure is a coupling construct for entries in a conventional software library. Ontologies are also like conceptual schemata in database systems, which provide the basis for many application programs and databases to interoperate on the basis of shared definitions. An ontology allows a group of (programmers of) knowledge-based systems to agree on the meaning of a few terms, from which an infinite number of assertions and queries may be formulated. The use of ontologies is not sufficient to guarantee sharability, but it is an enabling mechanism.
Ontolingua is a system for maintaining ontologies portably, in a form that is compatible with multiple representation languages. It is implemented in a program that translates definitions written in a simple syntax into the forms that are input to a variety of implemented representation systems. Ontologies written in Ontolingua can thereby be shared by multiple users and research groups using their own favorite representation systems, and can be easily ported from system to system. The single-source / multiple-translation approach allows one to use the same source for development (e.g., using a terminological reasoning system for classification and a constraint checking system for integrity) and runtime application (e.g., using a special purpose reasoner that ignores the definitions of terms).
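For concreteness, a definition in Ontolingua's input syntax looks roughly like the sketch below. The class name, documentation string, and body are invented for this example, and the exact keyword arguments (such as :def) are assumed from published examples and may differ between versions of the tool.

    ;; A hedged sketch of an Ontolingua-style class definition;
    ;; the name and body are illustrative only.
    (define-class connected-device (?dev)
      "A device with at least one connection to another device."
      :def (and (device ?dev)
                (exists (?other)
                  (and (device ?other)
                       (connected-to ?dev ?other)))))

From a single source of this kind, the translator produces the corresponding input for each target representation system, as described above.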
The development of Ontolingua was motivated by the needs of the Summer Ontology Project, a pilot study in which researchers from several groups and institutions met weekly to design an ontology of terms used in modeling electromechanical devices. The participants used a wide variety of representation systems, yet their models shared many of the same underlying concepts. The project needed a medium for collaboratively developing the ontology and a delivery vehicle for sharing the results.
Ontolingua is now available as a public domain tool, in Common Lisp, as a mechanism for defining common ontologies. It reads definitions, specified in a simple syntax, and translates them into the appropriate forms for implemented knowledge representation systems. It recognizes when definitions contain sentences that cannot be translated into a given implementation, and informs the user.
The syntax of Ontolingua definitions is based on a standard notation and semantics for predicate calculus called Knowledge Interchange Format (KIF) [9, 10, 11]. KIF is intended as a language for communication (i.e., for "literary publication" of knowledge). It is designed to make the epistemological-level [19] content of a knowledge base clear to the reader, but not to support automated reasoning in that form. It is so expressive that it is impossible to implement a tractable theorem prover that can answer arbitrary queries in the language.
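To give a sense of that expressiveness, KIF allows sentences that quantify over relations themselves. The sketch below (using KIF's holds operator; the relation name transitive is invented for this example) is the kind of statement that is easy to publish for human readers but hard for a general-purpose theorem prover to reason with efficiently; it is illustrative only and not taken from the cited sources.

    ;; An illustrative KIF-style sentence quantifying over relations:
    ;; if a relation ?r is transitive, then it chains across its arguments.
    (forall (?r)
      (=> (transitive ?r)
          (forall (?x ?y ?z)
            (=> (and (holds ?r ?x ?y) (holds ?r ?y ?z))
                (holds ?r ?x ?z)))))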
Task 4. Translate the text into Russian, following the stylistic norms of the target language.