Question
Direction: Read the passage given below. Choose the best options for the Question.
IoT has had an impact across all fields, be it industries, government, small or large businesses, and even personal consumption. What is IoT (the Internet of Things), you might ask? It has been a growing topic of conversation for some time now. Put in the simplest terms, it means anything that has an on/off switch and is connected to the internet for receiving, analyzing, storing or sending data. This could mean anything, from the watch that you wear to airplanes that can be controlled from a remote location. According to the analyst firm Gartner, by the year 2020 we'll have over 26 billion connected devices. That could mean people connected to people, people connected to things and things connected to things. The new rule of the future is going to be "Anything that can be connected will be connected". Take, for example, the alarm you set to wake up: when it goes off, it not only wakes you but also brews your coffee, sets the right temperature of water for your bath and turns on the television to bring you the latest updates from around the globe, all before you even put a foot out of bed. This is all done simply through a network of interconnected things/devices that have embedded sensors, network connectivity, software and the necessary electronics to collect and exchange data. To show how far we have come with technology and connectivity, we have smartwatches such as Fitbit and Garmin, to name a few, that have changed the way we look at time. We have one device that not only tells us the time but also tracks the number of steps we take, the calories we burn and our heart rate. This watch is actually connected to our phone, so with just one turn of the wrist, one can tell who is calling or what messages have been received without having to dig through pockets or handbags. IoT is making its presence felt in health care as well. Doctors can now remotely monitor and communicate with their patients, and health care providers can benefit from this.
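The passage's "network of interconnected things/devices that have embedded sensors, network connectivity, software and necessary electronics that collect and exchange data" can be sketched in miniature. This is a hedged illustration only: the class, device ID and metric names are invented, and a real device would publish its payload over a network protocol such as MQTT rather than parse it locally.

```python
import json
import random
import time

class SensorDevice:
    """A hypothetical connected device: takes a sensor reading and
    packages it as a small structured message for exchange."""

    def __init__(self, device_id, metric):
        self.device_id = device_id
        self.metric = metric

    def read(self):
        # Stand-in for a real embedded-sensor reading (e.g. heart rate).
        return round(random.uniform(60.0, 100.0), 1)

    def to_message(self):
        # Devices exchange data as compact payloads; JSON is one common choice.
        return json.dumps({
            "device": self.device_id,
            "metric": self.metric,
            "value": self.read(),
            "ts": time.time(),
        })

# A smartwatch-like device producing one message.
watch = SensorDevice("wrist-01", "heart_rate")
payload = json.loads(watch.to_message())
```

In a real deployment the `to_message()` output would be sent to a broker or gateway, where other devices and services could consume it.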
Whether data comes from foetal monitors, electrocardiograms, temperature monitors or blood glucose levels, tracking this information is vital for some patients. Much of it requires follow-up interaction with healthcare professionals. Smarter devices that deliver more valuable data can reduce the need for direct patient-physician interaction.
Take, for instance, the sporting field, where minute chips are being attached to balls and bats to transmit information on how fast the ball is travelling and on a batsman's moves: the timing, the angles, the pressure on the bat at different positions, data on the muscle stretch if he has hit a six, and so on. Formula One cars are also being fitted with these sensors, which relay information on the minute moves being made by the driver. Chips are also being put into the wearable devices of sportsmen to detect suboptimal action of any body part showing signs of stress or strain, which will help in the early detection of injuries and allow preventive measures to be taken. IoT has had an impact across all fields, be it industries, government, small or large businesses and even personal consumption. IBM, Google, Intel, Microsoft and Cisco are some of the top players in the IoT spectrum. With billions of devices connected, security becomes a big issue. How can people make sure that their data is safe and secure? This is one of the major concerns with IoT and has become a hot topic. Another issue is that, with all these billions of devices sharing data, companies will be faced with the problem of how to store, track, analyse and make sense of the vast amount of information being generated. Companies are monitoring their network segments to identify anomalous traffic and to take action if necessary. Now that we have a fair understanding of IoT, let's see what impact it has had on the education sector. The only constant in our lives is change, and learning. From the get-go we learn, be it to walk, talk or run. We adapt to the changing times and constantly learn from them. Education, or learning as we know it in the broader sense, is the most important of all and the one that decides how we handle those changes that impact us and the world. Today's world is fast-paced, and to keep up with it we need an infusion of speed into learning.
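The passage mentions that companies monitor network segments to identify anomalous traffic. One of the simplest illustrations of that idea is a standard-deviation filter over traffic counts. The sketch below is an assumed baseline for illustration, not a real intrusion-detection system; the function name and the sample numbers are invented.

```python
from statistics import mean, stdev

def flag_anomalies(samples, z_threshold=3.0):
    """Return the indices of samples that deviate from the mean by more
    than z_threshold standard deviations (a simple z-score baseline)."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_threshold]

# Hypothetical bytes-per-minute readings on a monitored segment,
# with one obvious spike at index 6.
traffic = [120, 118, 125, 121, 119, 122, 950, 117, 123, 120]
print(flag_anomalies(traffic, z_threshold=2.0))  # → [6]
```

Real monitoring systems use far richer features (flow records, protocol mix, destination entropy), but the flag-what-deviates-from-baseline principle is the same.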
From classroom assignments, lectures, blackboards and chalk, we have come a long way to what is now known as e-learning (electronic learning) or m-learning (mobile learning). With the GenNext, it is imperative to provide the right kind of education. The rise of technology and IoT allows schools to improve the safety of their campuses, keep track of resources and enhance access to information. It not only ensures that data quality remains a top priority but also facilitates the development of content, allowing teachers to use this technology to create smart lesson plans and ensuring that this content reaches every corner of the world.
Technology and IoT have benefited education considerably because they have:
Options
Helped in improving the lesson.
Helped keep track of all resources.
Helped teachers to track.
Helped with keeping up with the change in learning.
Answer
Helped with keeping up with the change in learning.
Related Questions
In 1954, a Bombay economist named A.D. Shroff began the Forum of Free Enterprise, whose ideas on economic development were somewhat at odds with those then influentially articulated by the Planning Commission of the Government of India. Shroff complained against the 'indifference, if not discouragement, with which the state treated entrepreneurs'.
At the same time as Shroff, but independently of him, a journalist named Philip Spratt was writing a series of essays in favour of free enterprise. Spratt was a Cambridge communist who was sent by the party in the 1920s to foment revolution in the subcontinent. Detected in the act, he spent many years in an Indian jail. The books he read in prison, and his marriage to an Indian woman afterward, inspired a steady move rightwards. By the 1950s, he was editing a pro-American weekly from Bangalore called MysIndia. There he inveighed against the economic policies of the Government of India. These, he said, treated the entrepreneur 'as a criminal who has dared to use his brain independently of the state to create wealth and give employment'. The state's chief planner, P.C. Mahalanobis, had surrounded himself with Western leftists and Soviet academicians, who reinforced his belief in 'rigid control by the government over all activities'. The result, said Spratt, would be 'the smothering of free enterprise, a famine of consumer goods, and the tying down of millions of workers to soul-deadening techniques.'
The voices of men like Spratt and Shroff were drowned in the chorus of popular support for a model of heavy industrialization funded and directed by the government. The 1950s were certainly not propitious times for free marketeers in India. But from time to time their ideas were revived. After the rupee was devalued in 1966, there were some moves towards freeing the trade regime, and hopes that the licensing system would also be liberalized. However, after Indira Gandhi split the Congress Party in 1969, her government took its 'left turn', nationalizing a fresh range of industries and returning to economic autarky.
Which of the following statements is least likely to be inferred from the passage?
The question in this section is based on what is stated or implied in the passage given below. For the question, choose the option that most accurately and completely answers the question.
The words invention and innovation are closely linked, but they are not interchangeable. The inventor is a genius who uses his intellect, imagination, time and resources to create something that does not exist. But this invention may or may not be of utility to the masses. It is the enterprising innovator who uses various resources, skills and time to make the invention available for use. The innovator might use the invention as it is, modify it, or even blend two or more inventions to make one marketable product. A great example is that of the iPhone, which is a combination of various inventions. If an invention is the result of countless trials and errors, the same can be the case with innovation. Not every attempt to make an invention is successful. Not every innovation sees the light of day. Benjamin Franklin believed that success doesn't come without challenge, mistakes, and in a few cases failure.
One of the world's most famous innovators, Steve Jobs, says, "Sometimes when you innovate, you make mistakes. It is best to admit them quickly and get on with improving your other innovations." Thus, inventors and innovators have to be intrepid enough to take risks and to consider failures as stepping stones, not stumbling blocks. Some inventions are the result of a keen observation or a simple discovery. The inventor of Velcro, also called the zipless zipper, is the Swiss engineer George de Mestral. He was hiking in the woods when he found burrs clinging to his clothes and his dog's fur. Back at home, he studied the burrs. He discovered that each burr was a collection of tiny hooks, which made it cling to another object. A few years later, he made and patented the strips of fabric that came to be known as Velcro. The world of inventions and innovations is a competitive one. But the race does not end there; it is also prevalent in the case of getting intellectual property rights. There have been inventors who failed to get a single patent, while there have been some who managed to amass numerous patents in their lifetime. Thomas Edison had 1,093 patents to his credit! We associate the telephone with Alexander Graham Bell. It is believed that around the same time, Antonio Meucci had also designed the telephone, but due to a lack of resources and various hardships, he could not proceed with the patent of his invention. It is also believed that Elisha Gray had made a design for the telephone and applied for the patent at the U.S. patent office on the same day as Graham Bell did. By sheer chance, Graham Bell's lawyer's turn to file the papers came first. Hence, Graham Bell was granted the first patent for the telephone. It is not easy, and at times almost impossible, for an inventor to be an innovator too. There are very few like Thomas Edison, who graduated from being an incredible inventor to a successful manufacturer and businessman with brilliant marketing skills.
While innovations that have helped to enhance the quality of life are laudable, equally laudable are the inventions that laid the foundation of these very innovations.
Which of the following texts from the passage clearly indicates failure?
Read the given passage carefully and answer the questions given after the passage:
1. Often, we passionately pursue matters that in the future appear to be contradictory to our real intention or nature; and triumph is followed by remorse or regret. There are numerous examples of such a trend in the annals of history and contemporary life.
2. Alfred Nobel was the son of Immanuel Nobel, an inventor who experimented extensively with explosives. Alfred too carried out research and experiments with a large range of chemicals; he found new methods to blast rocks for the construction of roads and bridges; he was engaged in the development of technology and different weapons; his life revolved around rockets and cannons and gun powder. The ingenuity of the scientist brought him enough wealth to buy the Bofors armament plant in Sweden.
3. Paradoxically, Nobel's life was a busy one yet he was lonely; and as he grew older, he began suffering from guilt of having invented the dynamite that was being used for destructive purposes. He set aside a huge part of his wealth to institute Nobel Prizes. Besides honouring men and women for their extraordinary achievements in physics, chemistry, medicine and literature, he wished to honour people who worked for the promotion of peace.
4. It's strange that the very man whose name was closely connected with explosives and inventions that helped in waging wars willed a large part of his earnings for the people who work for the promotion of peace and the benefit of mankind. The Nobel Peace Prize is intended for a person who has accomplished the best work for fraternity among nations, for abolition or reduction of war and for promotion of peace.
5. Another example that comes to one's mind is that of Albert Einstein. In 1939, fearing that the Nazis would win the race to build the world's first atomic bomb, Einstein urged President Franklin D Roosevelt to launch an American programme on nuclear research. The matter was considered and a project called the Manhattan Project was initiated. The project involved intense nuclear research and the construction of the world's first atomic bomb. All this while, Einstein had the impression that the bomb would be used to protect the world from the Nazis. But in 1945, when Hiroshima was bombed to end World War II, Einstein was deeply grieved and he regretted his endorsement of the need for nuclear research.
6. He also stated that had he known that the Germans would be unsuccessful in making the atomic bomb, he would have probably never recommended making one. In 1947, Einstein began working for the cause of disarmament. But, Einstein's name still continues to be linked with the bomb.
Man's fluctuating thoughts, changing opinions and varying opportunities keep the mind in a state of flux. Hence, the paradox of life: it's certain that nothing is certain in life.
The paradox, 'it's certain that nothing is certain in life', indicates the writer's
Read the passage and answer the question based on it.
Management education gained new academic stature within US universities and greater respect from outside during the 1960s and 1970s. Some observers attribute the competitive superiority of US corporations to the quality of business education. In 1978, a management professor, Herbert A. Simon of Carnegie Mellon University, won the Nobel Prize in economics for his work in decision theory. And the popularity of business education continued to grow: since 1960, the number of master's degrees awarded annually has grown from under 5,000 to over 50,000 in the mid-1980s, as the MBA has become known as 'the passport to the good life'.
By the 1980s, however, US business schools faced critics who charged that learning had little relevance to real business problems. Some went so far as to blame business schools for the decline in US competitiveness.
Amidst the criticisms, four distinct arguments may be discerned. The first is that business schools must be either unnecessary or deleterious because Japan does so well without them. Underlying this argument is the idea that management ability cannot be taught, one is either born with it or must acquire it over years of practical experience. A second argument is that business schools are overly academic and theoretical. They teach quantitative models that have little application to real-world problems. Third, they give inadequate attention to shop floor issues, production processes and to management resources. Finally, it is argued that they encourage undesirable attitudes in students, such as placing value on the short term and ‘bottom line’ targets, while neglecting longer-term development criteria. In summary, some business executives complain that MBA’s are incapable of handling day to day operational decisions, unable to communicate and to motivate people, and unwilling to accept responsibility for following through on implementation plans. We shall analyze these criticisms after having reviewed experiences in other countries.
In contrast to the expansion and development of business education in the United States and, more recently, in Europe, Japanese business schools graduate no more than two hundred MBAs each year. The Keio Business School (KBS) was the only graduate school of management in the entire country until the mid-1970s, and it still boasts the only two-year master's programme. The absence of business schools in Japan would appear to be in contradiction with the high priority placed upon learning by its Confucian culture. Confucian colleges taught administrative skills as early as 1630, and Japan wholeheartedly accepted Western learning following the Meiji restoration of 1868, when hundreds of students were dispatched to universities in the US, Germany, England, and France to learn the secrets of Western technology and modernization. Moreover, the Japanese educational system is highly developed and intensely competitive, and can be credited with raising the literacy and mathematical abilities of the Japanese to the highest level in the world.
Until recently, Japanese corporations have not been interested in using either local or foreign business schools for the development of their future executives. Their in-company training programs have sought the socialization of newcomers, the younger the better. The training is highly specific, and those who receive it have neither the capacity nor the incentive to quit. The prevailing belief, says Imai, 'is that management should be born out of experience and many years of effort, and not learnt from educational institutions.' A 1960 survey of Japanese senior executives confirmed that a majority (54%) believed that managerial capabilities can be attained only on the job and not in universities.
However, this view seems to be changing: the same survey revealed that even as early as 1960, 37% of senior executives felt that the universities should teach integrated professional management. In the 1980s, a combination of increased competitive pressures and greater multi-nationalization of Japanese business made it difficult for many companies to rely solely upon internally trained managers. This has led to the rapid growth of local business programmes and greater use of American MBA programmes. In 1982-83, the Japanese comprised the largest single group of foreign students at Wharton, where they not only learnt the latest techniques of financial analysis but also developed worldwide contacts through their classmates and became Americanized, something highly useful in future negotiations. The Japanese, then, do not 'do without' business schools, as is sometimes contended. But the process of selecting and orienting new graduates, even MBAs, into corporations is radically different from that in the US. Rather than being placed in high-paying staff positions, new Japanese recruits are assigned responsibility for operational and even menial tasks. Success is based upon Japan's system of highly competitive recruitment and intensive in-company management development, which in turn are grounded in its tradition of universal and rigorous academic education, life-long employment and strong group identification.
The harmony among these traditional elements has made the Japanese industry highly productive and given corporate leadership a long term view. It is true that this has been achieved without much attention to university business education, but extraordinary attention has been devoted to the development of managerial skills, both within the company and through participation in programmes sponsored by the Productivity Center and other similar organizations.
The 1960s and 1970s can best be described as a period
Paragraph: A fundamental principle of pharmacology is that all drugs have multiple actions. Actions that are desirable in the treatment of disease are considered therapeutic, while those that are undesirable or pose risks to the patient are called side effects. Adverse drug effects range from the trivial, e.g., nausea or dry mouth, to the serious, e.g., massive gastrointestinal bleeding or thromboembolism; and some drugs can be lethal. Therefore, an effective system for the detection of adverse drug effects is an important component of the health care system of any advanced nation. Much of the research conducted on new drugs aims at identifying the conditions of use that maximize beneficial effects and minimize the risk of adverse effects.
The intent of drug labeling is to reflect this body of knowledge accurately so that physicians can properly prescribe the drug or, if it is to be sold without prescription, so that consumers can properly use it.
The current system of drug investigation in the United States has proved very useful and accurate in identifying the common side effects associated with new prescription drugs. By the time a new drug is approved by the Food and Drug Administration, its side effects are usually well described in the package insert for physicians. The investigational process, however, cannot be counted on to detect all adverse effects because of the relatively small number of patients involved in premarketing studies and the relatively short duration of the studies.
Animal toxicology studies are, of course, done prior to marketing in an attempt to identify any potential for toxicity, but negative results do not guarantee the safety of a drug in humans, as evidenced by such well-known examples as the birth deformities due to thalidomide.
This recognition prompted the establishment in many countries of programs to which physicians report adverse drug effects. The United States and other countries also send reports to an international program operated by the World Health Organization. These programs, however, are voluntary reporting programs and are intended to serve a limited goal: alerting a government or private agency to adverse drug effects detected by physicians in the course of practice. Other approaches must be used to confirm suspected drug reactions and to estimate incidence rates. These other approaches include conducting retrospective control studies; for example, the studies associating endometrial cancer with estrogen use, and systematic monitoring of hospitalized patients to determine the incidence of acute common side effects, as typified by the Boston Collaborative Drug Surveillance Program.
Thus, the overall drug surveillance system of the United States is composed of a set of information bases, special studies, and monitoring programs, each contributing in its own way to our knowledge about marketed drugs. The system is decentralized among a number of governmental units and is not administered as a coordinated function. Still, it would be inappropriate at this time to attempt to unite all of the disparate elements into a comprehensive surveillance program. Instead, the challenge is to improve each segment of the system and to take advantage of new computer strategies to improve coordination and communication.
The author is most probably leading up to a discussion of some suggestions about how to:
Read the given passage carefully and attempt the questions that follow.
MY LOVE OF NATURE, goes right back to my childhood, to the times when I stayed on, my grandparents' farm in Suffolk. My father was in the armed forces, so we were always moving and didn't have a home base for any length of time, but I loved going there. I think it was my grandmother who encouraged me more than anyone: she taught me the names of wild flowers and got me interested in looking at the countryside, so it seemed obvious to go on to do Zoology at University.
I didn't get my first camera until after I'd graduated, when I was due to go diving in Norway and needed a method of recording the sea creatures I would find there. My father didn't know anything about photography, but he bought me an Exacta, which was really quite a good camera for the time, and I went off to take my first pictures of sea anemones and starfish. I became keen very quickly, and learned how to develop and print; obviously I didn't have much money in those days, so I did more black and white photography than colour, but it was all still using the camera very much as a tool to record what I found both by diving and on the shore. I had no ambition at all to be a photographer then, or even for some years afterwards.
Unlike many of the wildlife photographers of the time, I trained as a scientist and therefore my way of expressing myself is very different. I've tried from the beginning to produce pictures that are always biologically correct. There are people who will alter things deliberately: you don't pick up sea creatures from the middle of the shore and take them down to attractive pools at the bottom of the shore without knowing you're doing it. In so doing you're actually falsifying the sort of seaweeds they live on and so on, which may seem unimportant, but it is actually changing the natural surroundings to make them prettier. Unfortunately, many of the people who select pictures are looking for attractive images and, at the end of the day, whether it's truthful or not doesn't really matter to them. It's important to think about the animal first, and there are many occasions when I've not taken a picture because it would have been too disturbing. Nothing is so important that you have to get that shot; of course, there are cases when it would be very sad if you didn't, but it's not the end of the world. There can be a lot of ignorance in people's behaviour towards wild animals and it's a problem that more and more people are going to wild places: while some animals may get used to cars, they won't get used to people suddenly rushing up to them. The sheer pressure of people, coupled with the fact that there are increasingly fewer places where no-one else has photographed, means that over the years, life has become much more difficult for the professional wildlife photographer.
Nevertheless, wildlife photographs play a very important part in educating people about what is out there and what needs conserving. Although photography can be an enjoyable pastime, as it is to many people, it is also something that plays a very important part in educating young and old alike. Of the qualities it takes to make a good wildlife photographer, patience is perhaps the most obvious -you just have to be prepared to sit it out. I'm actually more patient now because I write more than ever before, and as long as I've got a bit of paper and a pencil, I don't feel I'm wasting my time. And because I photograph such a wide range of things, even if the main target doesn't appear I can probably find something else to concentrate on instead.
How is she different from some of the other wildlife photographers she meets?
Read the given passage carefully and answer the questions that follow.
There is a fairly universal sentiment that the use of nuclear weapons is clearly contrary to morality, and that their production probably is as well; but this sentiment does not go far enough. These activities are not only opposed to morality but also to the law, and if the legal objection can be added to the moral one, the argument against the use and the manufacture of these weapons will be considerably reinforced. Now the time is ripe to evaluate the responsibility of scientists who knowingly use their expertise for the construction of such weapons, which have a deleterious effect on mankind.
To this must be added the fact that more than 50 percent of the skilled scientific manpower in the world is now engaged in the armaments industry. How appropriate it is that all this valuable skill should be devoted to the manufacture of weapons of death in a world of poverty is a question that must touch the scientific conscience.
A meeting of biologists on the Long-Term Worldwide Biological Consequences of Nuclear War added a frightening dimension to those forecasts. Its report suggested that the long-term biological effects resulting from climatic changes may be at least as serious as the immediate ones. Sub-freezing temperatures, low light levels, and high doses of ionizing and ultraviolet radiation extending for many months after a large-scale nuclear war could destroy the biological support system of civilization, at least in the Northern Hemisphere. Productivity in natural and agricultural ecosystems could be severely restricted for a year or more. Post-war survivors would face starvation as well as freezing conditions in the dark, and would be exposed to near-lethal doses of radiation. If, as now seems possible, the Southern Hemisphere were affected also, global disruption of the biosphere could ensue. In any event, there would be severe consequences, even in areas not affected directly, because of the interdependence of the world economy. In either case, the extinction of a large fraction of the earth's animals, plants and microorganisms seems possible.
The population size of Homo sapiens conceivably could be reduced to prehistoric levels or below, and extinction of the human species itself cannot be excluded.
Which of the following statements I, II, III and IV are definitely true in the context of the passage?
(I) There is every likelihood of survival of the human species as a consequence of nuclear war.
(II) Nuclear war risks and harmful effects are highly exaggerated.
(III) The post war survivors would be exposed to the benefits of non-lethal radiation.
(IV) Living organisms in the areas which are not directly affected by nuclear war would also suffer.
The question in this section is based on the passage. The question is to be answered on the basis of what is stated or implied in the passage.
Although the legal systems of England and the United States are superficially similar, they differ profoundly in their approaches to and uses of legal reasons: substantive reasons are more common than formal reasons in the United States, whereas in England the reverse is true. This distinction reflects a difference in the visions of law that prevail in the two countries. In England, the law has traditionally been viewed as a system of rules; the United States favours a vision of law as an outward expression of the community's sense of right and justice.
Substantive reasons, as applied to law, are based on moral, economic, political and other considerations. These reasons are found both "in the law" and "outside the law", so to speak. Substantive reasons inform the content of a large part of the law: constitutions, statutes, contracts, verdicts, and the like. Consider, for example, a statute providing that no vehicles shall be taken into public parks, whose purpose, explicitly written into the statute, was to ensure quiet and safety in the park. Now suppose that a veterans' group mounts a World War II jeep (in running order but without a battery) as a war memorial on a concrete slab in the park, and charges are brought against its members. Most judges in the United States would find the defendants not guilty because what they did had no adverse effect on the park's quiet and safety.
Formal reasons are different in that they frequently prevent substantive reasons from coming into play, even when substantive reasons are explicitly incorporated into the law at hand. For example, when a document fails to comply with stipulated requirements, the court may render the document legally ineffective. A will requiring written witnessing may be declared null and void and, therefore, unenforceable for the formal reason that the requirement was not observed. Once the legal rule, that a will is invalid for lack of proper witnessing, has been clearly established, and the legality of the rule is not in question, application of that rule precludes from consideration substantive arguments in favour of the will's validity or enforcement.
Legal scholars in England and the United States have long bemused themselves with extreme examples of formal and substantive reasoning. On the one hand, formal reasoning in England has led to wooden interpretations of statutes and an unwillingness to develop the common law through judicial activism. On the other hand, freewheeling substantive reasoning in the United States has resulted in statutory interpretations so liberal that the texts of some statutes have been ignored.
The author of the passage suggests that in English law a substantive interpretation of a legal rule might be warranted under which one of the following circumstances?
Read the given passages and answer the question with the help of information provided in the passage.
Rural development in India has witnessed several changes over the years in its emphasis, approaches, strategies and programmes. It has assumed a new dimension and perspectives as a consequence. Rural development can be richer and more meaningful only through the participation of the clienteles of development. Just as implementation is the touchstone for planning, people's participation is the centre-piece in rural development.
People's participation is one of the foremost prerequisites of the development process, both from procedural and philosophical perspectives. For development planners and administrators, it is important to solicit the participation of different groups of rural people, to make the plans participatory.
Rural development aims at improving rural people's livelihoods in an equitable and sustainable manner, both socially and environmentally, through better access to assets and services and control over productive capital. The basic objectives of Rural Development Programmes have been alleviation of poverty and unemployment through creation of basic social and economic infrastructure, provision of training to rural unemployed youth and providing employment to marginal farmers/labourers to discourage seasonal and permanent migration to urban areas.
Rural development is the main pillar of our nation's development. In spite of rapid urbanisation, a large section of our population still lives in the villages. Secondly, rural India has lagged behind in development because of many historical factors. Though the 11th Plan began in very favourable circumstances, with the economy having grown at the rate of 7.7% per year in the 10th Plan period, there still existed a big challenge to correct the development imbalances and to accord due priority to development in rural areas.
The Ministry of Rural Development is implementing a number of programmes aimed at sustainable holistic development in rural areas. The thrust of these programmes is on all-round economic and social transformation in rural areas, through a multi-pronged strategy aiming to reach out to the most disadvantaged sections of society.
Although concrete efforts have been initiated by the Government of India through several plans and measures to alleviate poverty in rural India, there still remains much more to be done to bring prosperity into the lives of the people in rural areas. At present, technology dissemination is uneven and slow in rural areas.
According to the passage, the experiences of many countries suggest that technological development fuelled by demand has a higher ............ rate.
Read the given passages and answer the question with the help of the information provided in the passage.
Thomas Edison was born in 1847 in Milan, Ohio. He was nicknamed 'Al' at an early age. At age 11, Edison moved to Michigan, where he spent the remainder of his childhood. Thomas Edison struggled at school but learned to love reading and conducting experiments from his mother, who taught him at home. At age 15, Edison became a 'tramp telegrapher', sending and receiving messages via Morse code, an electronically-conveyed alphabet using different clicks for each letter. In 1870, Edison moved to New York City and improved the stock ticker. He soon formed his own company that manufactured the new stock tickers. He also began working on the telegraph and invented a version that could send four messages at once. Edison then moved with his family to New Jersey, where he started his famous laboratory. In 1877, Edison, with help from 'muckers', individuals from around the world looking to make a fortune in America, invented the phonograph. The phonograph was a machine that recorded and played back sounds. In 1878, Edison invented the light bulb as well as the power grid system, which could generate electricity and deliver it to homes through a network of wires. He subsequently started the Edison Electric Light Company in October of 1878. Edison continued to invent or improve products and made significant contributions to X-ray technology, storage batteries and motion pictures (movies). Edison was a prolific inventor, holding 1,093 US patents in his name, as well as many patents in the United Kingdom, France, and Germany.
Which of the following describes Morse Code most appropriately?
