Network Fault Management Made Easy

Nokia's OMC solution provides graphical and textual displays containing information about the status of the network. If an equipment problem is indicated, the user can find the location of the faulty equipment by zooming in with a graphical tool. Further text-based failure analysis can be carried out on any faulty network element.

Any alarms generated by the network are collected and stored in a database. Nokia OMC has a modifiable alarm manual which can be accessed easily from the alarm monitoring application. Alarm filtering is available for use when it is necessary to control alarm flow, during site maintenance, for example. Alarm trends can be analysed in order to identify network faults and problem sites.
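The collect-store-filter flow described above can be illustrated with a minimal sketch, assuming a simple in-memory store; the class, field and severity names here are hypothetical illustrations, not Nokia OMC data structures or APIs.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    site: str        # network element that raised the alarm
    severity: str    # e.g. "minor", "major", "critical"
    text: str

# Every alarm is stored, whether or not it is shown to the operator.
alarm_db: list[Alarm] = []

# Sites currently under maintenance: their alarms are filtered out of
# the monitoring view to control alarm flow, but remain in the database.
sites_in_maintenance = {"BTS-17"}

def collect(alarm: Alarm) -> None:
    alarm_db.append(alarm)

def monitoring_view() -> list[Alarm]:
    return [a for a in alarm_db if a.site not in sites_in_maintenance]

collect(Alarm("BTS-17", "major", "link down"))      # filtered: site in maintenance
collect(Alarm("MSC-1", "critical", "unit failure")) # shown to the operator
```

Both alarms remain available in the database for later trend analysis; only the monitoring view is filtered.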

Powerful Performance Management

The DX 200-based network elements collect measurement and observation data. This data is automatically transferred to Nokia OMC and stored in the database. Further processing of the data is easy: the user can view it in graphical form and in textual reports. The Nokia OMC also allows the user to set his or her own threshold values for the measurement data. The system automatically generates an alarm when a preset threshold value is exceeded, for example when an abnormal measurement value is detected.
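The threshold mechanism described above (user-set limits on measurement data, with an alarm raised automatically when a limit is exceeded) can be sketched as follows. The measurement names, threshold values and function name are invented for illustration and are not part of any Nokia OMC interface.

```python
from typing import Optional

# Hypothetical user-configurable thresholds per measurement type.
thresholds = {"call_drop_rate": 0.02, "cpu_load": 0.90}

def check_measurement(name: str, value: float) -> Optional[str]:
    """Return an alarm message when a measurement exceeds its threshold."""
    limit = thresholds.get(name)
    if limit is not None and value > limit:
        return f"ALARM: {name}={value} exceeds threshold {limit}"
    return None

print(check_measurement("call_drop_rate", 0.05))  # abnormal value -> alarm text
print(check_measurement("cpu_load", 0.40))        # normal value -> None
```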

Furthermore, the Nokia OMC supports deeper analysis and post-processing of network data. The user can post-process the measurement data with a spreadsheet program. With that program's various tools, the user can further customise reports according to specific needs.

Open Interfaces

Nokia's OMC open architecture accommodates interfaces to different managed objects and to other management systems. The interfaces to managed objects provide fault management functions and remote sessions to the equipment. These tools let the user see the whole network in one view.

The systems supported by Nokia OMC include:

• DX 200 switches

• Nokia Transmission equipment

• Nokia's IN solution

• Equipment of other vendors

The Nokia OMC has interfaces to other systems. For example, it is possible to forward fault management information to other information systems, or to service support centres via e-mail. Interfaces to the fault management and measurement databases are also provided.

Easy to Use

Graphical user interfaces make Nokia OMC easy to use. Even a network distributed over a large area can be visualised clearly with the help of the hierarchical view system. Telecommunication networks typically undergo continuous expansion and other changes, and the user can easily and immediately adapt the Nokia OMC graphical environment to reflect changes in the network.

The user can tailor the views to suit his or her own specific needs. The overall graphical environment can also be tailored to suit individual user work assignments, where one user may be responsible for monitoring one important geographical area, another for monitoring a type of equipment such as switches, and so on.

Pop-up menus, point-and-click mouse movements and drag-and-drop functions make network management operations simple. The Nokia OMC also provides the user with a context-sensitive help feature containing on-line instructions.

The Nokia OMC is a state-of-the-art network management system. With the Nokia OMC, operators can manage their networks efficiently and flexibly. Nokia is also continuously developing the Nokia OMC to better serve customer needs.

e) Text: "Nations in Race to Produce World's Fastest, Most Powerful Computer"

By Mike Toner, The Atlanta Journal-Constitution

Oct. 18 -- In the rarefied world of supercomputing, a petaflop is the four-minute mile of number crunching -- the kind of raw computational power needed to predict detailed global climate changes, simulate nuclear chain reactions or model the birth of the cosmos.

So far, no computer has come close to such speeds, but several institutions, including the Department of Energy's Oak Ridge National Laboratory in Tennessee, are taking aim at the veritable holy grail of computing: a quadrillion calculations, or floating point operations, a second -- otherwise known as "flops."

That's roughly a million times faster than today's generation of desktop computers.
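The comparison can be checked directly from the unit prefixes: peta- means 10^15 operations per second, while a mid-2000s desktop processor delivered on the order of a gigaflop (10^9) -- a rough figure assumed here for illustration.

```python
PETAFLOP = 10**15  # one quadrillion floating point operations per second
GIGAFLOP = 10**9   # assumed rough throughput of a mid-2000s desktop CPU

speedup = PETAFLOP // GIGAFLOP
print(speedup)  # 1000000: about a million times faster
```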

But when it comes to advances in computing, change is the only constant. Today's most powerful computers are never powerful enough for tomorrow's problems.

The data being analyzed by today's climate models, for instance, already includes millions of temperature and rainfall measurements, trends in air pollution, greenhouse gases, the effects of agriculture, deforestation, snowfall, volcanoes, solar variability and the changing seasons. But greater processing power is needed to "see" further into the future in more detail.

Only one institution will earn the bragging rights to the next round-numbered milestone in computing, but in this race there are lots of winners. Like other "big science" projects that involve alliances between government and industry, advances in supercomputing push technology into the frontiers of the barely possible — paving the way for commercial versions of the machines that follow.

Commercial uses are as varied as commerce itself. Procter & Gamble once used a supercomputer to find a way to keep its Pringles potato chips from being blown off the assembly line. Nor do big-business applications mark the end of the trickle-down effect. Today's laptop computers, in fact, are as powerful as the supercomputers of a generation ago.

The current supercomputing speed champion, at 280 trillion calculations a second, is the IBM BlueGene/L, housed at the Lawrence Livermore National Laboratory in California. Earlier versions of multimillion-dollar supercomputers built for the government by companies like IBM and Cray Research are already commonplace at universities, banks and businesses around the world — and soon, the BlueGene generation of supercomputers will be replacing them.

But today's leaders can become tomorrow's also-rans. Two years ago, Japan's massive Earth Simulator system was the fastest in the world -- and a source of concern that the United States had lost its lead in the field. Today, the Earth Simulator has fallen to 10th fastest.

"Performance improvements at the very high end of scientific computing show no sign of slowing down," said Jack Dongarra, director of the University of Tennessee's Innovative Computing Laboratory and co-keeper of the ever-shifting list of the world's 500 fastest computers. "We are about to see some big changes."

Dongarra says at least nine government-backed computing projects -- in the United States, China, Japan and Europe -- are now in the running to break the petaflop barrier. Japan's effort to reclaim the supercomputing lead, the Life Simulator, will cost $1 billion.

But Dongarra says a number of centers in the United States are likely to achieve petaflop speeds within the next year or two. How they intend to do it -- and what they will do with all that computing power once they succeed -- are very different.

At the Department of Energy's Los Alamos National Laboratory in New Mexico, components for what could be the first machine to break the petaflop barrier began arriving this fall. Thirty-six more moving vans full of equipment will be needed to complete the $110 million Roadrunner project.

By the time the project, named for New Mexico's speedy state bird, is completed in 2008, the racks of computers will fill a room the size of a hockey rink and consume as much power as a small town.

Roadrunner will be used primarily to simulate the first few seconds of a nuclear detonation — virtual explosions that will enable the government to monitor the integrity of its bombs without actually testing them.

Los Alamos' task is no child's play. But the ability to achieve those unprecedented speeds will depend, in part, on builder IBM's ability to harness the power of 16,000 computer chips originally designed for PlayStation 3 video consoles. The video chips will do the computational "grunt work" for 16,000 more conventional processors.

"We need speeds like this to assure the integrity of the nation's nuclear stockpile," said Los Alamos spokesman Kevin Roark. "It's not that important to us to be first. If we are, it's just gravy."

Other centers, however, have their eyes on the gravy bowl, too. The Energy Department's Oak Ridge Laboratory, for instance, is working on a $200 million project -- code-named Baker -- that also aims to have a petaflop system up and running by 2008.

Oak Ridge's existing Jaguar supercomputer, although ranked only 24th in the world overall, is the most powerful system available today for general scientific research. It's currently used for everything from modeling of the Earth's future climate to 3-D animation for DreamWorks, the creators of animated movies like "Shrek" and other popular films.

Oak Ridge associate director Thomas Zacharia says the laboratory intends to open its upcoming petaflop system for use by outside interests. Boeing Co. is hoping to use it to study lighter, more efficient jet planes. European researchers want to use it to help develop fusion energy. And climate experts want to use it to predict the effects of global warming with unprecedented precision.

"There's no question that we would like to be the first center to have a petaflop machine," said Zacharia. But any bragging rights to the fastest computer in the world, he acknowledges, would be short-lived.

"You have to remember that at the rate technology advances, even a petaflop machine will be obsolete in a pretty short time," he said. "That's why what really matters is what you do with this kind of computing power when you have it."

To see more of The Atlanta Journal-Constitution, or to subscribe to the newspaper, go to http://www.ajc.com.

Copyright (c) 2006, The Atlanta Journal-Constitution Distributed by McClatchy-Tribune Business News.



PART 2.

THE COMMUNICATIVE TASKS OF TEXTS IN THE STYLE OF SCIENTIFIC PROSE AND THEIR REALIZATION

Introduction

In every developed literary language one can observe more or less definite systems of linguistic expression, which differ from one another in how they use the common resources of the national language. The systematic character of this use means that, in the different spheres in which the language is employed, the choice of words and the manner of their use become normalized, as do the preferential use of particular syntactic constructions, the handling of the figurative means of the language, the use of various devices for linking the parts of an utterance, and so on. Such systems are called styles of speech, or speech styles.

Speech styles stand out as definite systems within the literary language above all through their purpose of communication. Each speech style has a more or less precise aim, which predetermines its functioning and its linguistic features. Thus, the aim of the style of scientific prose is the proof of certain propositions and hypotheses, argumentation, and so forth.

Each speech style has both general features typical of the style as a whole and particular forms in which it manifests itself. The relation between the general and the particular in speech styles differs at different periods in the development of these styles and within the stylistic system of a given literary language. Thus, for example, the scientific article, the technical text, the textbook text, the encyclopedia entry, the instruction manual, the popular-science article and so on are all forms in which the style of scientific prose manifests itself and exists. They share the common features that underlie their treatment as a single independent speech style. At the same time, each of these varieties has its own specific traits, in which both the general laws of the style and the individual features peculiar to that particular substyle appear. Thus imagery, which is characteristic of the style of belles-lettres but not of scientific prose, may nevertheless be employed in the latter in its own distinctive way without violating the general laws of that style.

The relation between the general and the particular stands out especially clearly in the analysis of an individual manner of using language. In newspaper articles the manifestation of the individual is largely restricted by the general laws of that style; in the style of scientific prose, by contrast, the manifestation of the individual becomes quite permissible.

In the style of English scientific prose the individual element is often so pronounced that much appears in it that is personal, evaluative, subjective and emotional, laying claim to exceptional originality. Yet even in English scientific prose one can speak of the individual only as something permissible, not as an organic property of the style.

Each speech style also has its own communicative task, and the communicative tasks of its substyles likewise differ from one another.
