

Alfred Wegener's Theory of Continental Drift In 1912, the German geologist Alfred Wegener proposed that Earth's continents are mobile rafts of lighter crust that have shifted over time by plowing their way through the denser crust of the oceans. The theory, called continental drift, was partly motivated by the apparent fit, like puzzle pieces, of the coastlines of South America and Africa. Wegener first presented his theory of continental drift at a meeting and in a paper, and then as a book, The Origin of Continents and Oceans, published in 1915. He continued to write updated versions of this work until his death in an ill-fated expedition to Greenland in 1930. Wegener maintained that Earth is composed of concentric shells of increasing density from crust to core. The outermost shell is not continuous but made of continental blocks of lighter rock called sial (an acronym for silica- and aluminum-rich rock) floating in the denser sima (silica- and magnesia-rich rock) underlying the oceans. All the continents had once been joined in the supercontinent of Pangaea. As the supercontinent broke up, the pieces moving apart left bits behind, explaining the presence of nonvolcanic islands and island chains, according to Wegener. Where the moving pieces collided, mountains formed. They were thrust up either by the plowing of the continents through the sima, as in the case of the Andes, or by the colliding of two blocks of sial, as in the case of the Himalayas. As for the force driving continental drift, Wegener initially invoked Polfluchtkraft, a force causing flight from the poles as a result of Earth's rotation, and later the tidal force resulting from the gravitational attraction between Earth and the Moon and Sun. One of the most influential geologists to join the mobilist camp, as the drifters' school became known, was Arthur Holmes.
Holmes recognized the importance of radioactive heat, which had recently been discovered, and realized that there must be a mechanism to remove it from Earth's interior. That mechanism, he argued, is convection, the rising of less dense material and the sinking of denser material. He went on to propose that the mantle (the part of Earth's interior below the outer crust and above the core) convects in large, circulating patterns, and that this motion carries the continents across Earth's surface. He also related crustal movement and mantle convection to the evolution of mountain belts. Wegener adopted Holmes's mechanism in the last rendition of his theory. Holmes, for his part, presented his grand concept of a dynamic Earth in his influential and popular text Principles of Physical Geology, published in 1944. Although it was eventually supplanted by the theory of plate tectonics, Wegener's theory of continental drift influenced science because it explained disparate observations, because it was placed in the context of existing theories, and because it offered a coherent view of Earth's evolution. For example, Wegener showed not only that the coastlines on opposite sides of the Atlantic fit together, but that geologic features on the different continents fit as well. He asserted that the Appalachians, which can be traced northward through the Canadian Maritime Provinces, match the Caledonian Mountains in Scotland and Norway. He marshaled evidence from the distributions of fossil and living species to argue that land bridges joining continents were less likely than a single continent. The example commonly cited is that of Mesosaurus, a shoreline scavenger reptile that lived in the Permian period and is found as fossils in rocks on both sides of the South Atlantic Ocean. Mesosaurus was thought not to be a great swimmer, certainly not able to cross an ocean. Wegener also found supporting evidence in ancient climates.
Mounting observations indicated that the past climate of many regions was much different from the present climate. In the tropics, geologists had found sand and gravel left by ancient glaciers, and in rainy regions they had located prehistoric deserts. Then there was the discovery, by a British expedition, of plant fossils only 600 kilometers (370 miles) from the South Pole. Particularly puzzling was evidence suggesting that widely different climates in different regions had occurred at the same time, so one could not account for different climates by claiming simply that the whole of Earth was once hotter or once cooler than now. Wegener solved this dilemma by showing that observations of paleoclimate could be explained if the positions of the continents had shifted relative to those of the poles.

Representative Government in Colonial North America Before 1750, colonists in North America had little occasion to think of themselves as a distinct people. There was no American government, no single political organization in which all the colonies joined to manage their common concerns. There was not even a wish for such an organization except among a few eccentric individuals. America, to the people who lived in it, was still a geographic region, not a frame of mind. Asked about nationality, the typical American colonist of 1750 would have said English or British. In spite of substantial numbers of Dutch, Germans, and Scotch-Irish, English people and English institutions prevailed in every colony, and most colonists spoke of England as home even though they had never been there. Yet no American institutions were quite like their counterparts in England: the heritage of English ideas that went with these institutions was so rich and varied that colonists were able to select and develop those that best suited their situation and forget others that meanwhile were growing prominent in the mother country. This variety sometimes led to regional differences: in some ways New Englanders were set off from Virginians even more than from people in England. But some ideas, institutions, and attitudes became common in all the colonies and remained uncommon in England. Although colonial Englishmen were not yet aware that they shared these Americanisms with one another or that English people in England did not share them, many of the characteristic ideas and attitudes that later distinguished United States nationalism were already present by the mid-eighteenth century. English people brought with them to the New World the political ideas that still give English and American government a close resemblance. But American colonists very early developed conceptions of representative government that differed from those in England.
Representative government in England originated in the Middle Ages, when the king called for men to advise him. They were chosen by their neighbors and informed the king of his subjects' wishes. Eventually, their advice became so compelling that the king could not reject it, and the representatives of the people, organized as a legislature known as the House of Commons, became the most powerful branch of the English government. At first, the House of Commons consisted of representatives from each county, or shire, and from selected boroughs. Over the centuries many of these boroughs became ghost towns with only a handful of inhabitants, and great towns sprang up where none had existed before. Yet the old boroughs continued to send members to the House of Commons, and the new towns sent none. Moreover, only a fraction of the English population could participate even in county elections. In order to vote, a man had to own property that would, if rented, yield him at least 40 shillings yearly. Few could meet the test. A number of English people thought the situation absurd and said so. But nothing was done to improve it; in fact, a theory was devised to justify it. A member of the House of Commons, it was said, represented not the people who chose him but the whole country, and he was not responsible for any particular constituency. Not all Englishmen could vote for representatives, but all were virtually represented by every member of the Commons. The assemblies of American colonial representatives were more democratic. Although every colony had property qualifications for voting, probably the great majority of adult white males owned enough land to meet them. Moreover, the system for apportioning representation was more balanced. New England colonies gave every town the right to send delegates to the legislature. Outside New England, the unit of representation was usually the county.
The political organization of new counties and the extension of representation seldom kept pace with the rapid advance of settlement westward, but nowhere was representation so uneven or irrational as in England. American colonists knew nothing of virtual representation. A colonial representative was supposed to be an agent of the people who chose him. He was supposed to look after their interests first and those of the colony second. In New England, where town meetings could be called at any time, people often gathered to tell their delegate how to vote on a particular issue.

Urban Heat Islands Climatic changes such as changes in temperature, precipitation, humidity, or wind speed that are produced by urbanization involve all major surface conditions. Some of these changes are quite obvious and relatively easy to measure. Others are more subtle and sometimes difficult to measure. The amount of change in any of these elements, at any time, depends on several variables, including the extent of the urban complex, the nature of industry, site factors such as topography and proximity to water bodies, time of day, and existing weather conditions. The most studied and well-documented urban climatic effect is the urban heat island. The term simply refers to the fact that temperatures within cities are generally higher than in rural areas. The heat island is evident when temperature data are examined. For example, the distribution of average minimum temperatures in the Washington, D.C., metropolitan area for the three-month winter period (December through February) over a five-year span clearly represents a well-developed heat island. The warmest winter temperatures occurred in the heart of the city, while the suburbs and surrounding countryside experienced average minimum temperatures that were as much as 3.3°C lower. Remember that these temperatures are averages; on many clear, calm nights the temperature difference between the city center and the countryside was considerably greater, often 11°C or more. Conversely, on many overcast or windy nights the temperature differential approached zero degrees. The radical change in the surface that results when rural areas are transformed into cities is a significant cause of the urban heat island. First, the tall buildings and the concrete and asphalt of the city absorb and store greater quantities of solar radiation than do the vegetation and soil typical of rural areas.
In addition, because the city surface is impermeable, the runoff of water following a rain is rapid, resulting in a severe reduction in the evaporation rate. Hence, heat that once would have been used to convert liquid water to a gas now goes to increase the surface temperature. At night, while both the city and the countryside cool by radiative losses, the stone-like surface of the city gradually releases the additional heat accumulated during the day, keeping the urban air warmer than that of the outlying areas. A portion of the urban temperature rise must also be attributed to waste heat from sources such as home heating and air conditioning, power generation, industry, and transportation. Many studies have shown that the magnitude of human-generated energy in metropolitan areas is great when compared to the amount of energy received from the Sun at the surface. For example, investigations in Sheffield, England, and Berlin, Germany, showed that the annual heat production in those cities was equal to approximately one-third of that received from solar radiation. Another study of densely built-up Manhattan in New York City revealed that during the winter, the quantity of heat produced by combustion alone was 2 times greater than the amount of solar energy reaching the ground. In summer, the figure dropped to 1/6. There are other, somewhat less influential, causes of the heat island. For example, the blanket of pollutants over a city contributes to the heat island by absorbing a portion of the upward directed long-wave radiation emitted at the surface and re-emitting some of it back to the ground. A somewhat similar effect results from the complex three-dimensional structure of the city. The vertical walls of office buildings, stores, and apartments do not allow radiation to escape as readily as in outlying rural areas where surfaces are relatively flat. 
As the sides of these structures emit their stored heat, a portion is re-radiated between buildings rather than upward, and is therefore slowly dissipated. In addition to re-radiating the heat loss from the city, tall buildings also alter the flow of air. Because of the greater surface roughness, wind speeds within an urban area are reduced. Estimates from available records suggest a decrease on the order of about 25 percent from rural values. The lower wind speeds decrease the city’s ventilation by inhibiting the movement of cooler outside air which, if allowed to penetrate, would reduce the higher temperatures of the city center.

Pests and Pesticides Around 1870, a little fruit-eating insect arrived in San Jose, California, on some nursery stock shipped from Asia. The pest, which became known as the San Jose scale, quickly spread through the United States and Canada, killing orchard trees as it went. Farmers found that the best way to control the scale was to spray their orchards with a mixture of sulfur and lime. Within a few weeks of spraying a tree, the insect vanished completely. Around the turn of the century, however, farmers began to notice that the sulfur-lime mixture was not working all that well. A handful of scales would survive a spraying and eventually rebound to their former numbers. In Clarkston Valley in Washington State, orchard growers became convinced that manufacturers were adulterating their pesticide. They built their own factory to guarantee a pure poison, which they drenched over their trees, yet the scale kept spreading uncontrollably. An entomologist named A. L. Melander inspected the trees and found scales living happily under a thick crust of dried spray. Melander began to suspect that adulteration was not to blame. In 1912, he compared how effective the sprays were in different parts of Washington. In Yakima and Sunnyside, he found that sulfur-lime could wipe out every last scale on a tree, while in Clarkston between 4 and 13 percent survived. On the other hand, the Clarkston scales were annihilated by a different pesticide made from fuel oil, just as the insects in other parts of Washington were. In other words, the scales of Clarkston had a peculiar resistance to sulfur-lime. Melander wondered why. He knew that if individuals eat small amounts of certain poisons, such as arsenic, they can build up an immunity. But San Jose scales bred so quickly that no single scale experienced more than a single spray of sulfur-lime, giving them no chance to develop immunity. A radical idea occurred to Melander: perhaps mutations made a few scales resistant to sulfur-lime.
When farmers sprayed their trees, these resistant scales survived, as did a few nonresistant ones that hadn't received a fatal dose. The surviving scales would then breed, and the resistant genes would become more common in the following generations. Depending on the proportions of the survivors, the trees might become covered by resistant or nonresistant scales. In the Clarkston Valley region, farmers had been using sulfur-lime longer than anywhere else in the Northwest and were desperately soaking their trees with the stuff. In the process, they were driving the evolution of more resistant scales. Melander offered his ideas in 1914, but no one paid much attention to him; they were too busy discovering even more powerful pesticides. In 1939 the Swiss chemist Paul Muller found that a compound of chlorine and hydrocarbons called DDT could kill insects more effectively than any previous pesticide had. DDT was cheap and easy to make, it could kill many species of insects, and it was stable enough to be stored for years. It could be used in small doses, and it didn't seem to pose any health risks to humans. Between 1941 and 1976, 4.5 million tons of DDT were produced. DDT was so powerful and cheap that farmers gave up old-fashioned ways of controlling pests, such as draining standing water or breeding resistant strains of crops. DDT and similar pesticides created the delusion that pests could be not merely controlled but eradicated, so farmers began spraying pesticides on their crops as a matter of course, rather than to control outbreaks. Meanwhile, public health workers saw in DDT the hope of controlling mosquitoes, which spread diseases such as malaria. DDT certainly saved a great many lives and crops, but even in its early days, some scientists saw signs of its doom.
In 1946 Swedish scientists discovered houseflies that could no longer be killed with DDT. Houseflies in other countries became resistant as well in later years, and soon other species could withstand it. Melander's warning was becoming a reality. By 1992 more than 500 species were resistant to DDT, and the number is still climbing. As DDT began to fail, farmers at first just applied more of it; when more no longer worked, they switched to newer pesticides.

The Debate over Spontaneous Generation Until the second half of the nineteenth century, many scientists and philosophers believed that some forms of life could arise spontaneously from nonliving matter; they called this hypothetical process spontaneous generation. Not much more than 100 years ago, people commonly believed that toads, snakes, and mice could be born of moist soil; that flies could emerge from manure; and that maggots, the larvae of flies, could arise from decaying corpses. A strong opponent of spontaneous generation, the Italian physician Francesco Redi, set out in 1668 to demonstrate that maggots did not arise spontaneously from decaying meat. Redi filled three jars with decaying meat and sealed them tightly. Then he arranged three other jars similarly but left them open. Maggots appeared in the open vessels after flies entered the jars and laid their eggs, but the sealed containers showed no signs of maggots. Still, Redi's antagonists were not convinced; they claimed that fresh air was needed for spontaneous generation. So Redi set up a second experiment, in which three jars were covered with a fine net instead of being sealed. No larvae appeared in the net-covered jars, even though air was present. Maggots appeared only when flies were allowed to leave their eggs on the meat. Redi's results were a serious blow to the long-held belief that large forms of life could arise from nonlife. However, many scientists still believed that tiny microorganisms were simple enough to be generated from nonliving materials. The case for spontaneous generation of microorganisms seemed to be strengthened in 1745, when John Needham, an Englishman, found that even after he heated nutrient fluids (chicken broth and corn broth) before pouring them into covered flasks, the cooled solutions were soon teeming with microorganisms. Needham claimed that microbes developed spontaneously from the fluids.
Twenty years later, Lazzaro Spallanzani, an Italian scientist, suggested that microorganisms from the air probably had entered Needham's solutions after they were boiled. Spallanzani showed that nutrient fluids heated after being sealed in a flask did not develop microbial growth. Needham responded by claiming the "vital force" necessary for spontaneous generation had been destroyed by the heat and was kept out of the flasks by the seals. This intangible "vital force" was given all the more credence shortly after Spallanzani's experiment, when Antoine-Laurent Lavoisier showed the importance of oxygen to life. Spallanzani's observations were criticized on the grounds that there was not enough oxygen in the sealed flasks to support microbial life. The issue was still unresolved in 1858, when the German scientist Rudolf Virchow challenged spontaneous generation with the concept of biogenesis, the claim that living cells can arise only from preexisting living cells. Arguments about spontaneous generation continued until 1861, when the work of the French scientist Louis Pasteur ended the debate. With a series of ingenious and persuasive experiments, Pasteur demonstrated that microorganisms are present in the air and can contaminate sterile solutions, but air itself does not create microbes. He filled several short-necked flasks with beef broth and then boiled their contents. Some were then left open and allowed to cool. In a few days, these flasks were found to be contaminated with microbes. The other flasks, sealed after boiling, were free of microorganisms. From these results, Pasteur reasoned that microbes in the air were the agents responsible for contaminating nonliving matter such as the broths in Needham's flasks. Pasteur next placed broth in open-ended long-necked flasks and bent the necks into S-shaped curves. The contents of these flasks were then boiled and cooled. The broth in the flasks did not decay and showed no signs of life, even after months.
Pasteur's unique design allowed air to pass into the flask, but the curved neck trapped any airborne microorganisms that might have contaminated the broth. Pasteur showed that microorganisms can be present in nonliving matter: on solids, in liquids, and in the air. His work provided evidence that microorganisms cannot originate from mysterious forces present in nonliving materials. Rather, any appearance of "spontaneous" life in nonliving solutions can be attributed to microorganisms that were already present in the air or in the fluids themselves.

The Problem of Narrative Clarity in Silent Films Beginning in 1904, American commercial filmmaking became increasingly oriented toward storytelling. Moreover, with the new emphasis on one-reel films, narratives became longer and necessitated a series of camera shots. Filmmakers faced the challenge of making story films that would be comprehensible to audiences. How could techniques of editing, camerawork, acting, and lighting be combined so as to clarify what was happening in a film? How could the spectator grasp where and when the action was occurring? Over the span of several years, filmmakers solved such problems. Sometimes they influenced each other, while at other times two filmmakers might happen on the same technique independently. Some devices were tried and abandoned. By 1917, filmmakers had worked out a system of formal principles that were standard in American filmmaking. That system has come to be called the classical Hollywood cinema. Despite that name, many of the basic principles of the system were being worked out before filmmaking was centered in Hollywood and, indeed, many of those principles were first tried in other countries. In the years before the First World War, film style was still largely international, since films circulated widely outside their country of origin. The basic problem that confronted filmmakers early in the silent-movie era was that audiences could not understand the causal, spatial, and temporal relations in many films. If the editing abruptly changed locales, the spectator might not grasp where the new action was occurring. An actor's elaborate pantomime might fail to convey the meaning of a crucial action.
A review of a 1906 film lays out the problem: "Regardless of the fact that there are a number of good motion pictures brought out, it is true that there are some which, although photographically good, are poor because the manufacturer, being familiar with the picture and the plot, does not take into consideration that the film was not made for him but for the audience. A movie recently seen was very good photographically, but the story could not be understood by the audience." In a few theaters, a lecturer might explain the plot as the film unrolled, but producers could not rely on such aids. Filmmakers came to assume that a film should guide the spectator's attention, making every aspect of the story on the screen as clear as possible. In particular, films increasingly set up a chain of narrative causes and effects. One event would plainly lead to an effect, which would in turn cause another effect, and so on. Moreover, an event was typically caused by a character's beliefs or desires. Character psychology had not been particularly important in early films. Comical chases or brief melodramas depended more on physical action or familiar situations than on character traits. Increasingly after 1907, however, character psychology motivated actions. By following a series of characters' goals and resulting conflicts, the spectator could comprehend the action. Every aspect of the silent-film style came to be used to enhance narrative clarity. Staging or framing action in depth could show the spatial relationships among elements. Intertitles could add narrative information beyond what the images conveyed. Closer views of the actors could suggest their emotions more precisely. Color, set design, and lighting could imply time of day, the milieu of the action, and so on.
Some of the most important innovations of this period involved the ways in which camera shots were put together, or edited, to create a story. In one sense, editing was a boon to the filmmaker, permitting instant movement from one space to another or cuts to closer views to reveal details. But if the spectator could not keep track of the temporal or spatial relations between one shot and the next, editing might also lead to confusion. In some cases, intertitles could help. Editing also came to emphasize continuity among shots. Certain visual cues indicated that time was flowing uninterruptedly across cuts. Between scenes, other cues might suggest how much time had been skipped over. When a cut moved from one space to another, the director found ways to orient the viewer.