The Ancient World
Ebla

The ancient city of Ebla is identified with modern Tell Mardikh in north Syria, 34 miles south of Aleppo. It created a sensation when archaeologists uncovered the largest single find of third-millennium cuneiform tablets there. The University of Rome has excavated the site since 1964, under the leadership of Paolo Matthiae.

Ebla is poorly attested in the Early Bronze I–II Periods (c. 3200–2700 b.c.e.), with an absence of Uruk pottery. This suggests that Ebla did not emerge directly from the development of Sumerian colonies along Syrian trade routes, by which means the "Uruk culture" was disseminated. By 2400 b.c.e. Ebla had grown into an urban center of more than 135 acres. A palace constructed on the acropolis (designated "Royal Palace G" by archaeologists) testifies to the increasing importance of centralized administration. Ebla's urbanization may be interpreted in terms of the sociopolitical climate then prevalent in Syria: as the Mesopotamian city-states extended their influence through long-distance trade, the cities of Syria felt pressure to organize and assert their political independence.

Upon excavation Royal Palace G revealed a large archive of cuneiform texts, dated to 2400–2350 b.c.e. The texts span the reigns of Kings Igrish-Halam, Irkab-Damu, and Ishar-Damu, as well as the tenure of important court officials such as Ibrium, Ibbi-Zikir, and Dubukhu-Adda. The archive contained about 1,750 whole tablets and 4,900 tablet fragments. A severe fire, which destroyed the palace, had fortuitously baked and hardened the tablets, thus helping to preserve them.

Scholars generally agree that these cuneiform texts were intended to be read in the local language, Eblaite. However, the texts tend to be written with numerous Sumerian logograms (word signs), so Eblaite pronunciation and grammar are often not reflected in the writing.
Some have considered Eblaite to be northwest Semitic, possibly an antecedent of the later Canaanite dialects. Others have noted its affinities to east Semitic languages such as Old Akkadian. It is conceivable that Eblaite represents a time before the northwest and east branches of the Semitic family were clearly distinguished. Alternatively, Eblaite may represent the dialect of a geographical region shaped by extensive interaction with both East and West.

Among the tablets are lexical texts that list Sumerian logograms followed by their Eblaite translations. These represent the earliest attested bilingual dictionaries. Other lexical texts list words by category, such as human vocations, names of fishes, and names of birds. The sequence and arrangement of these lists are identical with those in southern Mesopotamia, signifying Ebla's indebtedness to the Sumerian scribal tradition. Several texts mention that "young scribes came up from Mari," which may suggest a means by which Mesopotamian scribal practices passed into the Syrian regions. The Ebla scribes, nonetheless, preferred their own method of number notation and system of measures instead of adopting Mesopotamian forms.

The vast majority of tablets consist of administrative and economic records, which elucidate much of Ebla's society. The highest authority at Ebla was designated by the Sumerian title EN, translated in Eblaite as malikum (king). The Sumerian title LUGAL was used in Ebla for governors, who were subordinate to the king. This contrasts with the usage in Mesopotamia, where LUGAL typically denotes an individual of higher rank than an EN. Royal inscriptions, which laud the king's power and legitimize his reign, have not yet been found at Ebla. Nor does Ebla follow the usual Mesopotamian practice of naming years after significant acts of the king.
Such reticence has encouraged the view that Ebla's king did not rule as an absolute monarch but relied on leading tribal elders for aspects of state administration. The cult of dead kings is attested at Ebla, with ritual texts describing various sacrifices offered to previous rulers of the dynasty.

Ebla was divided into eight administrative districts; those on the acropolis were named saza, while those in the countryside were named ebla. It was the palace, rather than the temple, that chiefly directed the city's economy. The palace was responsible for the ownership of land, the sustenance of Ebla's workforce, and even the record of animals used in religious sacrifices. In Ebla, however, the system of labor management was not as highly developed as that of Mesopotamia. Agriculture and industry often remained under the management of local communities, which in turn reported to supervisors from the palace.

The most important deity at Ebla was Kura, who functioned as the patron god of the royal household. The pantheon at Ebla included a core of Semitic deities that persisted into later times and appear in Canaanite religion. Native names were used for deities, and Sumerian gods were worshipped only when there was no Semitic equivalent. This selective appropriation of Sumerian deities suggests that the people of Ebla were well familiar with divine roles and cultic practices in Sumerian religion.

Ebla was strategically located at the junction of major trade routes and engaged in the commerce of products such as wool, flax, olive oil, barley, and wine. Its treasury of gold and silver was immense for its time. International contact extended as far as Egypt, and Ebla's access to Anatolia supplied it with tin, prized for making bronze. Various cities between the Euphrates and Balikh Rivers, though far from Ebla itself, came under Ebla's control.
Ebla was interested in northern Mesopotamian trade routes, which would allow it to bypass Mari on the way to southern Mesopotamia, and perennial conflicts ensued between Ebla and Mari. Both Sargon and Naram-Sin boasted that they conquered Ebla, and the fire that destroyed Royal Palace G most likely dates to one of their reigns. Ur III records, however, imply that Ebla was rebuilt and that its citizens bore name types showing continuity with those of pre-Sargonic Ebla. The archaeology of the Old Syrian Period (c. 1800–1600 b.c.e.) indicates that Ebla experienced a resurgence during this time. However, around 1600 b.c.e. the Hittite king Murshili I destroyed Ebla and effectively ended its political power.

See also Akkad; Damascus and Aleppo; Fertile Crescent; Hittites; Sumer; Ur.

Further reading: Gordon, Cyrus H., Gary A. Rendsburg, and Nathan H. Winter, eds. Eblaitica: Essays on the Ebla Archives and Eblaite Language. 4 vols. Winona Lake, IN: Eisenbrauns, 1987–2002; Milano, Lucio. "Ebla: A Third-Millennium City-State in Ancient Syria." In Civilizations of the Ancient Near East, Vol. 2, edited by Jack M. Sasson, 1,219–1,230. New York: Charles Scribner's Sons, 1995.

John Zhu-En Wee

Ecbatana

See Persepolis, Susa, and Ecbatana.

Edessa

Both Edessa and its successor, Nisibis, were in northern Mesopotamia, in an area known for its military and religious importance. Edessa has been called the Athens of Syriac learning; but after its educational institutions were shut down in 489 c.e., Nisibis, a city less controlled by Byzantine authorities, became the heir to the learning traditions of Syriac culture and church.

Edessa was founded in 303 b.c.e. Legends tell of its king Abgar, who was so taken by Jesus (Christ) of Nazareth that he sent him a letter. Jesus responded by sending the famed apostle Addai to convert Edessa and the rest of Mesopotamia to Christianity.
Other ancient traditions indicate that Edessa was at center stage in early church development: The body of Thomas the Apostle is buried there, and the Syriac translation of the Bible (the Peshitta), the synthesis of the Gospels (Tatian's Diatessaron), the Acts of Thomas, Odes of Solomon, Gospel of Truth, and Psalms of Thomas all were written in Edessa. Nearby Dura-Europos was the site of the first-known Christian building dedicated to worship. The area was also known as a potpourri of religious diversity, perhaps accounting for its powerful creative productivity. Besides Judaic, Mithraic, Greek, and Syrian influences, currents of Gnosticism and monasticism vied for popular attention. As time went on Edessa succumbed to imperial pressures toward Orthodox Christianity.

In the fourth century c.e. Edessan Christianity tended toward zealous monasticism. Along with this movement came the intellectual bards of Syriac literature: Ephrem the Syrian (fourth century), Jacob of Sarug (fifth century), and Philoxenus of Mabbug. From 363 until 489 Edessa was the major intellectual center for Syriac Christians.

The successors of Alexander the Great established Nisibis. Because of its strategic position the city often changed hands, as armies and kings perennially coveted control of its resources. In the first five centuries of the first millennium c.e. Roman caesars and Persian shapurs laid many sieges upon its population. The modern city offers an ancient two-nave church, where Ephrem's hallowed teacher, Jacob of Nisibis, is entombed. Jacob's academy itself is located in the no-man's-land between the barbed-wire boundaries separating modern Turkey and Syria, just south of the modern city of Nisibis. When the Persians took possession of the city in 363, the Syriac Christians were expelled and resettled in Edessa.
Less than 120 years later disaffected Syriac Christians fled from Byzantine persecution (instigated by the Greek Church) in Edessa to find refuge in Nisibis under Persian protection. Greek authorities had officially shut down the theological school at Edessa, and the axis of dissent, led by followers of Nestorius, shifted back into Nisibis. Ties with the Byzantine Christian world foundered—and still suffer today. Nisibis eclipsed Edessa as a center for the Syriac Church.

The School of Nisibis would dominate Syriac Christianity in Persia for the next two centuries. One of Edessa's refugees, Narsai, led the school for 40 years, and such stability allowed his successor to gather more than 1,000 students. These graduates then became the leaders of the Assyrian Church and other churches outside the Byzantine Christian pale. Eventually Ctesiphon displaced Nisibis as the Syriac intellectual center, but not until the eighth century.

See also Byzantine-Persian War; Christianity, early; Oriental Orthodox Churches; Roman Empire; Sassanid Empire.

Further reading: Le Coz, Raymond. Histoire de l'Église d'Orient. Paris: Fayard, 1994; Segal, J. B. Edessa: The Blessed City. Piscataway, NJ: Gorgias Press, 2001.

Mark F. Whitters

Egeria
(4th century c.e.) pilgrim and writer

In the middle of the fourth century c.e., a Christian woman took a journey lasting four years to the Middle East. She wrote a journal of her travels, and the manuscript lay dormant until the late 1800s. Other Latin writers made mention of her, so her accounts circulated among religious pilgrims before they were lost for centuries. Her name was Egeria (also known as Eutheria, Aetheria, and Silvia), and she was writing for other religious women who lived in Europe, perhaps on the Atlantic coast of Spain or France. Most likely she was a nun commissioned by her community to put her curious and adventurous mind to work for the benefit of the spiritual life of her sisters.
She went on pilgrimage to the most important sites of the Christian and Jewish world of her day. Her account is one of the most valuable documents scholars have of the fourth-century world of travel, piety, early monasticism, women's roles, and even the development of late Latin.

Her book has two parts. The first part is a travelogue, simply her report of her pilgrimage. She tells her sisters of her visits to such hallowed and historical places as Jerusalem, Edessa, sites in Mesopotamia, Mount Sinai, Jericho, the Jordan River, Antioch, and Constantinople, and of meeting the people (usually monks and mystics) staffing those places. She follows the itinerary of the people who made the places famous and prays there. Often her comments about the rustics at the sacred sites show a bit of dry humor. Her tourist program has many other objectives, such as following the path of Moses through the desert to Mount Sinai, visiting the home of Abraham's family (Carrhae, or biblical Harran, southeast of Edessa), and reaching Thomas the Apostle's tomb in Edessa. The travelogue is incomplete, for like any good pilgrim she concocted ever more schemes to visit other places, such as Ephesus, to pray at the tomb of John the "Beloved" Apostle. This part of her travels is missing from the manuscript.

The second part is more a journalistic report on the church of Jerusalem's liturgical practices over the three years she lodged there. Her record of the practices surrounding the daily life and prayer of the church is the first one that scholars have on the topic. She also reports on how the church's celebrations correspond to its unique location in the Holy Land. The liturgies she describes are hardly stationary ceremonies in one church location; rather, they involve processions from place to place according to the occasion. In addition, her descriptions are useful for historians of church architecture.
Her account allows modern readers to see such things as the need for military escorts in various places of the Holy Land, the unfailing hospitality of the monasteries along the way, the road network, and the system of inns maintained by the empire. She speaks of the monks, the nuns, and the religious laity in the Holy Land, their patterns of fasting, and the instruction of candidates for entrance into the church. Finally, she epitomizes the heart of the pilgrim and shows pluck and pithiness as she describes each stage of her spiritual journey.

See also Apostles, Twelve; Christianity, early; Greek Church; Judaism, early (heterodoxies); King's Highway and Way of the Sea.

Further reading: Burghardt, Walter J., ed. Ancient Christian Writers. Long Prairie, MN: Neumann Press, 1970; Wilkinson, John. Egeria's Travels to the Holy Land. Rev. ed. Jerusalem: Ariel, 1981.

Mark F. Whitters

Egypt, culture and religion

The civilization of ancient Egypt lasted about 30 centuries—from the 30th century b.c.e. to 30 b.c.e., when it became part of the Roman Empire. Egypt was significant for its size and longevity, retaining a strong continuity of culture despite several periods of turmoil. Egypt developed along the valley surrounding the Nile River in northeast Africa, extending into the desert and across the Red Sea. Ancient Egyptians traced their origins to the land of Punt, an eastern African nation that was probably south of Nubia, but their reasons for this are unclear.

As early as the 10th millennium b.c.e., a culture of hunter-gatherers using stone tools existed in the Nile Valley, and there is evidence over the next few thousand years of cattle herding, large building construction, and grain cultivation. The desert was once a fertile plain watered by seasonal rains but may have been changed by climate shifts or overgrazing.
At some point the civilizations of Lower Egypt (in the north, where the Nile Delta meets the Mediterranean Sea) and Upper Egypt (upstream in the south, where the Nile gives way to the desert) formed; the Egyptians called them Ta Mehu and Ta Shemau, respectively, and their inhabitants were probably ethnically the same and culturally interrelated. By 3000 b.c.e. Lower and Upper Egypt were unified by the first pharaoh, whom the third-century b.c.e. historian Manetho called Menes. Lower and Upper Egypt were never assimilated into one another—their geographical differences ensured that they would retain cultural differences, as the peoples of each led different lives—but during the Dynastic Period that followed they were ruled as a unit. Each had its own patron goddess—Wadjet and Nekhbet—whose symbols were eventually included in the pharaoh's crown and the fivefold titular form of his name. The first pharaoh also established a capital at Memphis, where it remained until 1300 b.c.e. The advent of hieroglyphics and of trade relations with Nubia and Syria coincides with the Early Dynastic Period.

HISTORY

The history of ancient Egypt is traditionally divided into dynasties, each of which consists of rulers from more or less the same family. Often a dynasty is defined by certain prevailing trends resulting from the dynastic family's interests—many of the significant pyramid builders of ancient Egypt were from the Fourth Dynasty, for instance. For the early dynasties we have little solid information about the pharaohs, and even our list of their names is incomplete.
The dynasties are organized into broad periods of history: the Early Dynastic Period (the First and Second Dynasties), the Old Kingdom (Third through Sixth), the First Intermediate Period (Seventh through Tenth), the Middle Kingdom (Eleventh through Fourteenth), the Second Intermediate Period (Fifteenth through Seventeenth), the New Kingdom (Eighteenth through Twentieth), the Third Intermediate Period (Twenty-first through Twenty-fifth), and the rather loosely characterized Late Period (Twenty-sixth through Thirty-first). Ancient Egypt essentially ends with the Thirty-first Dynasty: For the next 900 years Egypt was ruled first by Alexander the Great, then by the Ptolemaic dynasty founded by Alexander's general Ptolemy, and finally by Rome directly.

RELIGION

Ancient Egyptian religion can be described through syncretism, the afterlife, and the soul. Syncretism refers to the merging of religious ideas or figures, usually when disparate cultures interact. In the case of ancient Egypt, it refers to the combination and overlapping of local deities. Many sun gods (Ra, Amun, Horus, the Aten) were first worshipped separately and then later in various combinations. This process was a key part of Egyptian polytheism and likely helped preserve the nation's cultural continuity across its vast life.

Mortal life was thought to prepare Egyptians for the afterlife. The Egyptians believed that the physical body would persist in the afterlife and serve the deceased, despite being entombed and embalmed. Amulets, talismans, and sometimes even mummified animals were provided for the deceased's use. As described in the Book of the Dead (a term referring to the corpus of Egyptian funerary texts), in later stages of Egyptian religious history the deceased was judged by the god Anubis.
The god weighed the heart, which was thought to hold all the functions of the mind and therefore a record of the individual's life and behavior, against a single feather. Those judged favorably were ushered on to the afterlife; those who were not had their hearts eaten by the crocodile-lion-hippopotamus demon Ammit and remained in Anubis's land forever.

The different parts of the soul—or different souls—included the ba, which developed from early predynastic beliefs in personal gods common to the ancient Near East, and which was the manifestation of a god, a full physical entity that provided the breath of the nostrils and the personality of the individual and existed before the birth of the body; the ka, the life power that comes into existence at birth and precedes the individual into the afterlife to guide their fortunes; the akh, a kind of ghost that took many different forms in Egyptian religion over the dynastic era; the khaibut, the shadow; the ren, or name; and the sekhu, or physical body.

LANGUAGE AND MATH

Egyptian writing dates as far back as the 30th–50th centuries b.c.e. Early Egyptian—divided into the Old, Middle, and Late forms—was written using hieroglyphic and hieratic scripts. Although hieroglyphs developed from pictographs—stylized pictures used for signs and labels—they included symbols representing sounds (as our modern alphabet does), logographs representing whole words, and determinatives used to explain the meaning of other hieroglyphs.

Translation of ancient Egyptian writing was nearly impossible for modern Egyptologists until the discovery of the Rosetta Stone in 1799 by an army captain in Napoleon Bonaparte's campaign in Egypt. When the French surrendered in 1801, the stone was claimed by the British forces and sent to the British Museum, where it remains today. The stone was a linguist's dream come true, the sort of find that revolutionizes a field.
Upon it was written a decree by Pharaoh Ptolemy V in 196 b.c.e., not only in hieroglyphics and Demotic but also in Greek. Since ancient Greek was well known, this allowed Egyptologists to compare the versions line by line and decipher the meaning of many of the hieroglyphs. Much work and refinement has been done since, receiving a considerable boost from the archaeological finds of the 19th and 20th centuries.

The hieratic numeral system used by the Egyptians had limitations similar to those of the Roman numeral system: It was poorly suited to anything but addition and subtraction. As attested in the Rhind and Moscow papyri, the Egyptians were capable of mathematics including fractions, geometry, multiplication, and division, all of which were much more tedious than in modern numeral systems but were required for trade and timekeeping. Like other ancient civilizations, the Egyptians lacked the concept of zero as a numeral, but some historians argue that they were aware of and consciously employed the golden ratio in geometry.

See also New Kingdom, Egypt; Old Kingdom, Egypt; Ptolemies.

Further reading: Aldred, Cyril. Egyptian Art. Oxford: Oxford University Press, 1980; Andrews, Carol. Egyptian Amulets. Austin: University of Texas, 1994; Hobson, Christine. The World of the Pharaohs. London: Thames and Hudson, 1987; Hornung, Erik. The Valley of the Kings. New York: Timken, 1990; James, T. G. H. Ancient Egypt: Its Land and Its Legacy. Austin: University of Texas, 1988; Redford, Donald R. From Slave to Pharaoh: The Black Experience of Ancient Egypt. Baltimore, MD: Johns Hopkins, 2004; Shaw, Ian. The Oxford History of Ancient Egypt. Oxford: Oxford University Press, 2000; Watterson, Barbara. Amarna. Charleston, SC: Tempus, 1999; Weeks, Kent R. Valley of the Kings. New York: Friedman, 2001.

Bill Kte'pi
Elam

The country of Elam encompassed the southwest of modern Iran. The Elamites designated themselves as Haltamti, from which the Akkadians derived Elamtu. Susa and Anshan were the two major centers of Elamite civilization. The proto-Elamite period (c. 3400–2700 b.c.e.) witnessed the emergence of writing at Susa. This proto-Elamite script differed sufficiently from Mesopotamian scripts to indicate a different language. Indeed, the Elamite language is unrelated to other known ancient languages.

From the Old Elamite Period (c. 2700–2100 b.c.e.) Mesopotamian texts begin to mention Elam. Because native texts remain sparse, Elamite chronology derives much from Mesopotamian records. Elam's encounter with the Agade kings illustrates a recurring trait in its history: Elam's lowlands are readily accessible from the west and periodically came under Mesopotamian control, while the Zagros highlands are more isolated by topography and experienced more autonomy. Accordingly, Susa and the Khuzistan plains (in southwest Iran) were the regions that fell most completely under Akkadian dominion.

After Agade's fall, the Elamite Puzur-Inshushinak claimed the title king of Awan (an Iranian region). His incursion into Mesopotamia was deterred by Ur-Nammu, and Susa fell under the rule of the Ur III dynasty. The Shimashki dynasty (c. 2100–1900 b.c.e.) emerged in the highlands, a result of alliances formed against Ur. The Elamites eventually regained control of the lowlands, even moving west to destroy Ur and capture its last king, Ibbi-Sin. The Elamite king Kindattu managed to occupy Ur but was soon expelled by Ishbi-Erra, founder of the first dynasty of Isin.

The Sukkalmah era (c. 1900–1600 b.c.e.), when Elam experienced unprecedented prosperity, is the best attested in the records. A triumvirate from the same dynastic family ruled the country: the sukkalmah (grand regent), the sukkal (regent) of Elam and Shimashki in second place, and the sukkal of Susa in third place.
The ascendancy of the city-state of Larsa over Isin was propitious for Elam, and the Elamites even established a dynasty at Larsa. Elam dominated the eastern edge of Mesopotamia, exerting its economic (especially tin-trading) and diplomatic presence as far as northern Syria. Hammurabi's conquests, however, led Elam to decline in the following centuries.

The Middle Elamite Period (c. 1600–1100 b.c.e.) saw a resurgence of Elamite power, marked by the use of the title king of Anshan and Susa. From c. 1400 to 1200 b.c.e. relations between Elam and Kassite Babylonia were promoted by means of several royal intermarriages. With the rise of a new Elamite dynasty, however, Shutruk-Nahhunte put an end to Kassite hegemony by raiding southern Mesopotamia, bringing to Elam such important monuments as Naram-Sin's Victory Stela and Hammurabi's Law Code. His son, Kutir-Nahhunte, terminated the Kassite dynasty by deposing Enlil-nadin-ahi and brought home Marduk's cult statue from Babylon. Nebuchadnezzar I eventually reclaimed this statue when he sacked Elam around 1100 b.c.e.

During the Neo-Elamite Period (c. 1100–539 b.c.e.) the country faced pressures on two fronts: Medo-Persian encroachment on the highlands and Assyrian aggression toward the lowlands. The Elamites suffered Assyrian vengeance for being Babylonian allies against the growing Neo-Assyrian Empire, and in 647 b.c.e. Ashurbanipal despoiled Susa. In 539 b.c.e. Cyrus II conquered Babylonia and extended Achaemenid control over Elam. Until the Hellenization of the region, Elamite culture continued to assert itself through the use of the Elamite language in official documents, the worship of Elamite deities, and the importance of Anshan and Susa in the Persian Empire.

See also Akkad; Assyria; Babylon, later periods; Fertile Crescent; Medes, Persians, and Elamites; Persepolis, Susa, and Ecbatana.

Further reading: Carter, Elizabeth, and Matthew W. Stolper. Elam: Surveys of Political History and Archaeology.
Berkeley: University of California Press, 1984; Potts, Daniel T. The Archaeology of Elam. New York: Cambridge University Press, 1999.

John Zhu-En Wee

Elamites

See Medes, Persians, and Elamites.

Eleusis

Eleusis was a city in Attica in Greece, located some 12 miles northwest of Athens. From early times Eleusis was associated with the Eleusinian mystery rites of Demeter, a Mother Goddess and maternal figure of power, and with the development of a cult that had existed since the early Greek culture of the Cyclades islands. Festivals like the Eleusinian Mysteries were part of the annual celebration of birth and rebirth in the early Mediterranean. The rites also included worship of the god of wine and pleasure, Bacchus (or Dionysus). While Athens took over the rites around 600 b.c.e., there is much evidence that the mystery rites had their origin in the dawn of Greek civilization and formed part of the Mother Goddess cults found throughout both the western and eastern Mediterranean in ancient times.

As fertility rites, the festivals can only be fully understood as one act of a two-act annual drama. The first act, the Lesser Mysteries, was held in the spring at Agrae and corresponded with the traditional time of sowing the new crops and the joy of rebirth. The mystery celebration at Eleusis marked not only the harvest but also the hope that life would return again after the winter, as Persephone, Demeter's daughter, would return from the Underworld, or Hades. It is possible that in the earliest times the mysteries also included human sacrifice, the blood of the sacrificial victim being shed to bring fertility back to the land. Even into classic Grecian times after 600 b.c.e. the celebration of the Eleusinian Mysteries formed a major milestone in Greek religion.
However, something—a taboo or a fear of retaliation from the gods or from those who celebrated the mysteries—kept even the most rational minds of the day from relating what happened at the Eleusinian Mysteries, or even from describing the buildings used in their celebration.

See also Greek mythology and pantheon; mystery cults.

Further reading: Angus, S. The Mystery Religions and Christianity. New Hyde Park, NY: University Books, 1966; Frazer, James George. The Golden Bough. New York: Touchstone, 1995; Graves, Robert. The White Goddess: A Historical Grammar of Poetic Myth. New York: Farrar, Straus and Giroux, 1966; Hesiod and Homer. Hesiod, the Homeric Hymns, and Homerica. Trans. by H. G. Evelyn-White. Cambridge, MA: Harvard University Press, 1974; Mylonas, G. E. Eleusis and the Eleusinian Mysteries. Princeton, NJ: Princeton University Press, 1961.

John F. Murphy, Jr.

Ephesus and Chalcedon, Councils of

The third and fourth ecumenical councils, held at Ephesus in 431 c.e. and at Chalcedon in 451 c.e., respectively, discussed and formulated how Christians were to speak of the relationship of Christ's human and divine natures to one another. Whereas the earlier ecumenical Council of Nicaea (325) and Council of Constantinople (381) had defined doctrinal perspectives on belief in the Trinity as the conviction that there was one ousia (essence) of God in three hypostases (persons), Christology was now the main concern of the councils.

At the request of Bishop Nestorius of the eastern capital, Constantinople, Emperor Theodosius II called together all the major bishops of the Eastern Roman Empire and a few of the Western to meet at Pentecost 431 to resolve questions that had arisen concerning teachings advanced by Nestorius. Nestorius, who had been trained in the theological tradition of the school of Antioch, resisted calling Mary Theotokos (Mother of God) and preferred to speak of her as Christotokos (mother of Jesus as the one united with the Logos).
Taking advantage of travel delays among the supporters of Nestorius and of hostile attitudes among delegates from Asia Minor who resented Nestorius's claims of authority over them, Bishop Cyril of Alexandria took the lead at the council with a group of Egyptian monks. Following irregular proceedings, Cyril had Nestorius condemned and deposed from his position. When other Eastern bishops, who had arrived late, met separately under their leader, Bishop John of Antioch, they criticized Cyril's anathemas as fraught with Apollinarianism and Arianism and in turn condemned and deposed Cyril and Bishop Memnon of Ephesus. Joined by the Roman delegates, Cyril reconvened the council and condemned and deposed John of Antioch as well as 34 Eastern bishops. Emperor Theodosius II approved the depositions of Nestorius, Cyril, and Memnon in early August and formally dissolved the council, yet in the confusion that followed Cyril succeeded in returning to his see of Alexandria as victor. Nestorius, however, withdrew from the capital city to a monastery near Antioch, from which he was expelled first to Petra and then to the Great Oasis in Libya until his death after 451.

Accusations against Nestorius had focused on the claim that he divided Christ and affirmed two Christs and two Sons, a man and God, by considering the union of man and God in Christ as merely an external union. One of the main opponents of this supposed Nestorian teaching, the influential monk Eutyches of Constantinople, promoted an extreme Monophysite (one [divine] nature) teaching that denied that Christ is homoousios (consubstantial) with humankind.
Having been condemned at a local council under the leadership of Bishop Flavian of Constantinople, Eutyches was rehabilitated by Bishop Dioscorus of Alexandria at the so-called latrocinium (Robber Council) at Ephesus in 449, a meeting remembered both for its irregularities and for the violence that led to the death of Flavian of Constantinople. Summoned by Emperor Marcian, the Council of Chalcedon met on October 8, 451, to resolve disputes about Monophysitism. It rehabilitated Flavian of Constantinople and accepted as definitive the Christology formulated in Pope Leo I of Rome’s Letter to Flavian, which stated that in Christ, who is a single prosopon (person) and a single hypostasis, the complete and entire divine and human natures coexist “without mixture, without transformation, without separation, and without division.” Thus Christ is homoousios both to the Father with regard to his divinity and to us with regard to his humanity. See also Bible translations; Christianity, early; heresies; Jesus (Christ) of Nazareth. Further reading: Chadwick, Henry. The Early Church. New York: Penguin, 1990. Cornelia Horn and Robert Phelix Ephrem (306–373 c.e.) theologian and composer Known as the Harp of the Spirit, Ephrem was perhaps the most creative voice of the Syriac culture and church and one of the most influential theologians of early Christianity. He was born in 306 c.e., just south of the holy region of Syriac monks and spirituality in Mesopotamia. Formed into a mature Christian by the mentor and holy man Jacob, Ephrem became proficient enough to join his teacher in the school of Nisibis. When Julian the Apostate ignominiously lost his war with the Persian Sassanid Empire, the Romans were forced to withdraw from Nisibis in 363. The Christians associated with Ephrem also retreated to the eastern frontier city of Edessa. There Ephrem served his people in two primary ways.
First, he threw himself into the distribution of food and alms among the refugees of the retreat. For all of his impact on the church, the only office he held was deacon. He had no desire to be a priest, and he avoided the popular call to be a bishop by feigning madness. His personality aided him in his quest to steer clear of the hierarchy, for he had an irascible temperament that only personal sanctity could control. Second, he produced a corpus of hymns, homilies, poems, and commentaries that scholars marvel at today. His genius lay in combining his hymnology with a unique method of interpreting the Bible and spiritual mysteries. On one hand he accepts the plain sense of scripture (this is called the Antiochene method of interpreting the Bible); on the other hand he relies on allegory and poetic license when logic and historical circumstances do not offer a relevant application (this is called the Alexandrian method). His hybrid thinking represents the Syriac church’s penchant for dealing with the forces of Gnosticism and early Judaism. The central event of history and nature is the Christ event: the incarnation of the divine in the life of Jesus (Christ) of Nazareth. Ephrem finds symbols of this mystery in history and nature, and even the Bible speaks in a typological way about this event. He has no hesitation about borrowing a vocabulary and hymnology that the Syriac church encountered in the Mesopotamian world of heterodox currents and Jewish influences. While his method was unique, his own doctrine was orthodox. His hymns and elevated speech show him to be steadfast in opposing Arianism, Marcionism, Manichaeanism, and other Christian dualisms. His images stand in support of such ideas as the Last Judgment, purgatory, original sin, free will and its reliance on grace, the primacy of Peter, the intercession of the saints, the real presence of Jesus in the bread of communion, the sacraments, and the Trinity.
All these doctrines were then in the crucible of theological development, so Ephrem’s genius is valuable and ahead of his time. He had a special devotion to the mother of Jesus and foreshadowed the concept of her Immaculate Conception. His compositions are in Syriac, and many were immediately translated into Greek. People so loved his metaphors and analogies that they used them in their own languages and liturgies; thus, some works exist only in their Latin or Armenian forms. A complete inventory of his compositions has yet to be accomplished. Ephrem died in the epidemic of 373, while caring for Edessa’s refugees. See also Alexandria; Cappadocians; Christianity, early; Greek Church; monasticism; Origen. Further reading: Griffith, Sidney H. Images of Ephraem: The Syrian Holy Man and His Church. New York: Traditio, 1989–90; ———. Faith Adoring the Mystery. Milwaukee, WI: Marquette University Press, 1997. Mark F. Whitters Epicureanism Epicureanism is named after the philosopher Epicurus, who founded a school of teaching in Athens that continued for seven centuries after his death. Epicurus (342–270 b.c.e.) was a citizen of Athens, raised on the island of Samos. His contribution was aimed at the practical application of philosophy and its role in enabling people to lead a pleasurable and virtuous life. His ideas are completely distinct from the slanderous attacks made on him by later thinkers, who have given Epicureanism a pejorative sense of gluttony and unbridled hedonism. The writings of Epicurus that have survived, notably the letters to Herodotus and Menoeceus, contain only the restatement of the works of others, with one principal exception. This was in the area of atomism. The basis of atomism is that the phenomena of the universe can be explained by the interactions of the smallest independently existing particles of matter (atoms) that follow observable physical laws in predictable ways.
In other words, there is no fundamental need for the gods to exist in order for the universe to exist. Consequently, humanity should be freed from the terror inflicted upon it by anxiety over what new diseases and disasters the gods might next release, and from the worry that those disasters are the fault of the people suffering from them. To this belief Epicurus added the innovation that some atoms will voluntarily swerve from the paths that would otherwise cause them to descend from the skies to the earth. This voluntary movement, the cause of which was never properly explained, had the effect of preventing people from feeling that they were trapped in a mechanistic universe with no fundamental meaning or purpose. Epicurus taught in a garden in Athens from approximately 306 b.c.e. until his death. Athens had also witnessed the brilliance of Aristotle within the preceding 20 years and the astonishing conquests of Alexander the Great in Asia Minor and Egypt. Greek culture was becoming one of the most dynamic forces of the world. Yet a time of such change and innovation also led to a sense of impermanence and the fear of the unknown. According to Epicurus, the sensible approach of any adult was to seek the maximization of pleasure and peace, rather than appease or appeal to the supernatural. This is the philosophy of hedonism, which holds the seeking of pleasure to be the ultimate purpose of life. This does not mean, however, that people should heedlessly chase after any immediate pleasurable sensation without consideration of the future or of any other person. The sensible person should recognize that different forms of pleasure inevitably bring pain. For example, the thoughtless drinking of wine will lead to the pain of a hangover, while lusts of a criminal nature will lead to the misery of prosecution. As a result, the properly hedonistic person is also a virtuous person who selects pleasures that do not cause pain to themselves or to others.
However, owing to his atomistic beliefs, Epicurus would have acknowledged that virtue is intrinsically of no value. This made his position rather paradoxical. His belief in the gods, whom he conceived of as living blissful existences while completely ignoring humanity, is also illogical. If there is no necessity for the gods to exist, and no meaningful way to detect their presence, then why would Epicurus claim their existence? It is possible that, bearing in mind the practicality of his teaching, he simply wished to avoid the political danger of denying the existence of the gods. The Epicurean school continued after the death of its founder and became particularly prominent in Rome. Two of the tutors of Cicero were Epicureans, and Seneca defended the beliefs of Epicurus against the attacks of the religious-minded. The conversion of the emperor Constantine the Great to Christianity in 313 c.e. signaled the end of Epicureanism as part of the mainstream of intellectual discourse. One problem was that Epicureanism showed little ability to innovate or develop. Once it was accepted that Epicurus had identified the proper way to live, there was little to discuss, and people should just go about practicing what he taught. As Lucretius wrote, he was “. . . the man in genius who o’er-topped / The human race, extinguishing all others, / As sun, in ether arisen, all the stars.” The inability to adapt effectively condemned Epicureanism to continual condemnation by religious believers who characterized it as little more than egotistic selfishness. Humanists who tried to support Epicureanism were condemned as libertines. The poem “De Rerum Natura” (“On the Nature of the Universe”) by the Roman poet and Epicurean Lucretius explains Epicureanism in its fullest form. See also Greek mythology and pantheon; Stoicism. Further reading: Epicurus. The Epicurus Reader: Selected Writings and Testimonia. Trans. and ed. by Brad Inwood and L. P.
Gerson. Cambridge, MA: Hackett Publishing, 1994; Lucretius, Titus Carus. On the Nature of the Universe. Trans. by R. E. Latham. New York: Penguin Books, 1994. John Walsh Era of Division (China) The first part of the Era of Division that followed the Han dynasty, between 220 and 280 c.e., is called the Three Kingdoms period. It ended in 280, when the Jin (Chin) dynasty, led by the Sima (Ssu-ma) family, reunified China. But the unity was fragile because the founding ruler divided his realm among his 25 sons on his death, giving each a principality under the nominal control of his principal heir. The princes and other noblemen soon fell upon one another in civil war, and one of them called on the Xiongnu (Hsiung-nu) northern nomads for help. The Xiongnu chief claimed descent from a Han princess, called himself Liu Yuan, and named the Shanxi (Shansi) region he controlled the Han but later changed its name to Zhao (Chao). Liu Yuan’s forces sacked both ancient capitals, Chang’an (Ch’ang-an) and Luoyang (Loyang), burning the imperial library of the Han dynasty. The Jin court fled south in 316 and set up a new capital in Nanjing (Nanking), which had been the capital of the Wu state during the Three Kingdoms period. The year 316 marked the division of China into two halves that lasted until 589, called the era of the Northern and Southern dynasties. Chinese rule was superseded in northern China, which became the battleground of different nomadic groups: the Xiongnu and the Xianbei (Hsien-pei), both Turkic in ethnicity, and the Toba (T’o-pa), who were ethnically Tungusic. In 383 nomad forces from the north attempted to conquer the south, but the watery southern terrain was unsuited to their cavalry, and they were decisively repulsed at the Battle of Feishui (Fei Shui) in modern Anhui (Anhwei) Province. As a result, the situation between the north and south was stalemated.
In 386 a new nomadic group from the northeast defeated both the Xiongnu and Xianbei and established the Northern Wei dynasty in northern China that lasted until 557. The Tungusic rulers of the Northern Wei dynasty first established their capital city at Datong (Tatung) in modern Shanxi Province, a logical place for a nomadic dynasty because it was located near the Great Wall of China. Fierce warriors (the Toba population was estimated to be no more than 200,000 people), with no written language and a primitive culture, the Toba soon embraced Buddhism, ordering the excavation of extensive cave temples outside Datong at a site called Yungang (Yunkang). They also embraced Chinese culture with enthusiasm. In 494 the Northern Wei moved the capital to Luoyang to be near the heartland of Chinese culture and ordered the excavation of another series of caves devoted to Buddhist worship nearby at a site called Longmen (Lungmen). At the same time the government forbade the Toba people to wear their traditional clothing or use their tribal titles, ordering them to adopt Chinese surnames and speak Chinese instead. Some Toba people revolted against sinicization, which split the dynasty into two short-lived rival kingdoms, the Eastern Wei and Western Wei, which were followed by the Northern Qi (Ch’i) and Northern Zhou (Chou). None of the dynasties that followed the Northern Wei ruled all of North China. The Era of Division ended in 581 c.e. when a Northern Zhou general, Yang Jian (Yang Chien), usurped the throne and went on to unify the north and south under his new dynasty, the Sui. Meanwhile in southern China, from the Yangtze River valley south, five dynasties followed one another: the Jin (Chin), 317–420; Liu Song (Sung), 420–479; Qi (Ch’i), 479–502; Liang, 502–557; and Chen (Ch’en), 557–589. Nanjing was capital to all five. There was large-scale immigration of northerners to southern China during the Era of Division.
The refugees who fled the nomads brought the refinements and advanced culture of the north to southern China and absorbed the aboriginal populations into mainstream Chinese culture. Thus, whereas southern China was a frontier region during the Han and a place of exile for officials and convicts, by the end of the sixth century c.e. it had become developed and economically advanced. Culturally, the most remarkable change during the Era of Division was the phenomenal growth of Buddhism in China, an Indian religion that first entered China at the beginning of the Eastern Han dynasty (25–220 c.e.), brought by missionaries and traders along the Silk Road. While making inroads, Buddhism had remained an exotic religion of foreigners and some Chinese during the Han dynasty. Confucianism as a state ideology collapsed with the fall of the Han dynasty. The primitive religions of North China’s nomads had little to offer when confronted with the appealing theology of Buddhism and its stately rituals and ceremonies. Thus, a nomadic ruler stated in 335: “We are born out of the marches and though We are unworthy, We have complied with our appointed destiny and govern the Chinese as their prince . . . Buddha being a barbarian god is the very one We should worship.” The nomads’ Chinese subjects also embraced Buddhism for consolation in times of trouble and for its attractive and universalistic teachings. Adherence to Buddhism made the nomadic rulers less cruel to their Chinese subjects and built bridges between the rulers and the ruled. Buddhism also became dominant in southern China because its teachings assuaged the pain of exile for northern refugees and because of its theology, which answered questions that Confucianism and other Chinese schools of thought failed to address.
Similarly, the general chaos and the collapse of Confucianism as state ideology during the Era of Division revived interest in Daoism (Taoism), allowing some disillusioned intellectuals to take refuge in an escapist philosophy. Seeking longevity and immortality, some learned Daoists delved into the properties of elements and plants and produced a vast pharmacopoeia. Popular Daoism was enriched as a result of borrowing ceremonies and monastic institutions from Buddhism. The Era of Division was politically a dismally chaotic period in Chinese history. However, intellectually it was not a dark age, principally due to the rapid growth of Buddhism, which contributed enormously to Chinese civilization. The nomads became rapidly sinicized, and intermarriages between northern urban upper-class Chinese and nomads leveled their differences. See also Fa Xian; Wen and Wu. Further reading: Dien, Albert E., ed. State and Society in Early Medieval China. Stanford, CA: Stanford University Press, 1990; Eberhard, Wolfram. Conquerors and Rulers: Social Forces in Medieval China. 2nd ed. Leiden, Netherlands: E. J. Brill, 1965; Zürcher, Erik. The Buddhist Conquest of China: The Spread and Adaptation of Buddhism in Early Medieval China. 2 vols. Leiden, Netherlands: E. J. Brill, 1959. Jiu-Hwa Lo Upshur Essenes Several ancient informants discuss the Jewish sect known as the Essenes. The three most famous are Josephus, Philo, and Pliny, whose writings date to around the first century c.e. The etymology of the name Essenes remains uncertain. One theory, proposed by the ancient Jewish philosopher Philo (c. 20 b.c.e.–50 c.e.), suggests that the name is related to the Greek word for holiness. It is perhaps more likely that the name goes back to a Hebrew or Aramaic word that could be related to the word for “council,” or “doers (of the law),” or even “healers.” A renewed interest in this ancient Jewish sect coincided with the discovery of the Dead Sea Scrolls near Qumran.
The idea that the Dead Sea Scrolls and Qumran are to be identified with the Essenes has many strong supporters. The Jewish historian Flavius Josephus (b. 37 c.e.) has left us the most detailed information about this group. He mentions the Essenes several times throughout his writings and even claims to have been a member of this group during his youth. If indeed this was true, he could not have spent more than a few short months with them, according to the chronology given in his own autobiography. The lengthiest description of the Essenes appears in his multivolume work called The Jewish War, c. 73 c.e., where Josephus identifies them as one of three Jewish philosophical schools. What is notable about this description of the Essenes is this: Josephus’s comments concerning them are far more detailed and much lengthier than his comments about the other two sects, the Pharisees and the Sadducees. Within this description Josephus gives an account of their procedure for admitting new members into the community and details many of their practices, including their shunning of marriage and their idiosyncratic practice of avoiding bowel movements on Shabbat. A candidate for membership is initiated into this hierarchical sect with a full year of probation during which time he is expected to live according to the terms of the community but not among them. After this year the candidate is permitted to draw closer to the group but may not participate in the meetings of the community and is also barred from the “purer kind of holy water.” A proselyte was expected to swear a series of oaths that insist upon the strict observance of various communal laws and also secrecy to the group. Josephus also gives an account of the provisions for those who have been expelled from the community for serious crimes. In addition to this lengthy account in The Jewish War, Josephus refers to the Essenes twice in his multivolume work the Jewish Antiquities.
All three of the ancient informants, Pliny, Philo, and Josephus, write that the Essenes were characterized by their shared wealth and their avoidance of married life, but Josephus does not say that celibacy is the condition for membership in the community. Scholarly interest in the Essenes grew alongside the study of Qumran and the Dead Sea Scrolls. Even though other theories for the identification of the wilderness community at Qumran existed, many scholars found the Essene identification to be the most convincing, and it enjoyed the widest popularity from the earliest days of scholarship on the scrolls. There are two major arguments advanced in favor of the Essene identification of the Qumran community. The first argument relies upon the Roman historian Pliny the Elder (23–79 c.e.), who locates the Essene community near the Dead Sea. In his multivolume work Natural History, Pliny gives what has now become a much-cited reference to the solitary group known as the Essenes in his description of the geography of the land of Judaea. While there is some question as to how to translate the Latin infra hos, as “below” (with respect to altitude) or “downstream from,” Pliny’s reference identifies an Essene community in a location very close to the region of the Qumran settlement and caves. The second argument relies on the correlation of Josephus’s description of the Essenes with the Qumran community’s own account of their belief system. Many diverse categories of writings were discovered at Qumran, including copies of biblical texts and pseudepigraphic writings that would probably have been common to collections and libraries of different kinds of Jewish sects during that time. In addition to these texts were scrolls that appeared unique to this community and were likely composed by them. This latter category of writings expresses a distinctive theology and worldview and is categorized as sectarian or unique to the community at Qumran.
A comparison of these sectarian writings with Josephus’s description of the philosophy of the Essenes and other ancient informants provides some interesting points of correlation in theology and worldview. One point of correlation concerns the doctrine of predestination. Josephus writes that the Essenes understood “fate” to determine all things. In another place in The Jewish War, Josephus writes that this sect leaves everything in the hands of God. This idea that events have been predestined is found in several places among the sectarian writings, including column three of the sectarian scroll the Community Rule, which reads as follows: “All that is now and ever shall be originates with the God of knowledge. Before things come to be, He has ordered all their designs, so that when they do come to exist—at their appointed times as ordained by His glorious plan—they fulfill their destiny, a destiny impossible to change. He controls the laws governing all things, and He provides for all their pursuits.” However, because of inconsistencies, some scholars are not persuaded by the Essene identification of the Qumran community. Those scholars who remain unconvinced that the Qumran community is Essene note other points of difference in the worldview and ideology of the two groups. These scholars hold out for the possibility that the Qumran group could be an otherwise unknown ancient Jewish sect or propose alternative identifications. Some scholars have attempted to reconcile the Qumran sectarian writings with these ancient descriptions of the Essenes by theorizing that there was a split in the Essene movement or a schism that might account for the variations. This is known as the Groningen hypothesis. See also apocalypticism, Jewish and Christian; John the Baptist; Judaism, early (heterodoxies); messianism. Further reading: Beall, T. S. Josephus’ Description of the Essenes Illustrated by the Dead Sea Scrolls. Cambridge: Cambridge University Press, 1988; Cansdale, L.
Qumran and the Essenes. Texte und Studien zum Antiken Judentum 60. Tübingen, Germany: J.C.B. Mohr (Paul Siebeck), 1997; Goodman, M. D. “A Note on the Qumran Sectarians, the Essenes and Josephus.” Journal of Jewish Studies 46 (1995); Mason, S. “What Josephus Says about the Essenes in His Judean War.” In S. G. Wilson and M. Desjardins, eds. Text and Artifact in the Religions of Mediterranean Antiquity: Essays in Honour of Peter Richardson. Waterloo, ON: Wilfrid Laurier University Press, 2000; Rajak, T. “Ciò Che Flavio Giuseppe Vide: Josephus and the Essenes.” In F. Parente and J. Sievers, eds. Josephus and the History of the Greco-Roman Period: Essays in Memory of Morton Smith. Leiden, Netherlands: E. J. Brill, 1994; Roth, C. “Why the Qumran Sect Cannot Have Been Essenes.” Revue de Qumran (1959); Stegemann, H. “The Qumran Essenes—Local Members of the Main Jewish Union in Late Second Temple Times.” Madrid Qumran Congress: Proceedings of the International Congress on the Dead Sea Scrolls, Madrid, 18–21 March 1991. Leiden, Netherlands: E. J. Brill, 1992; Vermes, G., and M. D. Goodman, eds. The Essenes According to the Classical Sources. Sheffield, UK: JSOT, 1989. Angela Kim Harkins Esther, book of The book of Esther tells the story of the Persian queen Esther and her cousin Mordechai, who foil the plot of Haman, a wicked Persian courtier, to exterminate the Jews. Haman is hanged on the very scaffold where he intended to hang Mordechai, and Mordechai replaces him as an adviser of the Persian king. The book, written originally in biblical Hebrew, survives in various forms. The differences among these forms determine the many afterlives that the book has enjoyed. The date of the original form of the book is uncertain. The Jewish book of Esther is best known from the liturgy.
The scroll (Hebrew: megillah) of Esther is read on the feast of Purim, the origins of which are commemorated in the book. Esther is grouped in the Jewish Bible with other festival scrolls, the Five Megilloth. Purim is an early spring feast associated with revelry and even drinking. Its ultimate origins are Mesopotamian or perhaps Iranian. Esther tells the story of the Jewish form of the feast: Jews commemorate the success of Esther and Mordechai in halting Haman’s wicked plans for Jewish genocide, an echo of the genocide planned in Exodus. Esther has also served as a talisman for the historically large community of Iranian Jews, who call themselves “Esther’s children.” The Protestant book of Esther is identical to the Jewish version except in being categorized as a historical book, put in sequence with Joshua, Judges, and others as describing the history of God’s chosen people. Much Protestant exegesis and scholarship has focused on Esther as a historical text. To be sure, Jews through the ages have thought of Esther as a genuine historical figure but with much less urgency than most Christian students of scripture. Protestant interpreters were also influenced by the identification of the Persian king Ahasuerus as one of the historical rulers called Artaxerxes. The Catholic–Orthodox book of Esther is the result of another facet of the early Greek translation. The Hebrew text of Esther, and thus the Jewish and Protestant versions, do not mention the name of God at all, although the story as it unfolds is clearly overshadowed by divine providence. The Greek translators remedied this supposed deficiency by inserting two long prayers (one spoken by Mordechai, the other by Esther), along with a variety of other elements. These passages, called the “Additions to Esther,” are canonical parts of scripture for Catholics and Orthodox; they are among the Apocrypha of Protestant Bible translations.
The only portions of Esther that contributed to Catholic and Orthodox liturgies are the two prayers. Traditional Catholic–Orthodox interpretation has also taken Esther as a historical book. The literary character of the book has been emphasized by modern biblical scholarship: The book is a story explaining how Jews in a non-Jewish world can be successful and protect themselves. It has some historical basis in that Jews lived all over the Persian Empire, but there is no evidence for a king named Ahasuerus, no Jew ever rose to the status Mordechai attains of controlling the empire, and no Jew ever became the queen of Persia. See also Judaism, early (heterodoxies); Medes, Persians, and Elamites; Moses; Persepolis, Susa, and Ecbatana; Pseudepigrapha and the Apocrypha. Further reading: Hazony, Yoram. The Dawn: Political Teachings of the Book of Esther. Jerusalem: Shalem Press, 2000; Vanderkam, James C. An Introduction to Early Judaism. Grand Rapids, MI: Eerdmans Publishing, 2000. M. O’Connor Ethiopia, ancient Ethiopia is known to be one of the earliest places inhabited by humans. Bone fragments found in November 1994 near Aramis, in the lower Awash Valley, by Yohannes Haile-Selassie, an Ethiopian scientist trained in the United States, have been connected with Australopithecus afarensis, an apelike creature that lived some 4 million years ago and may be an ancestor of modern humans. Subsequently, other bones were found attesting to very early hominid activity in the country. There are also stone hand tools and drawings from a much more recent period of prehistory in limestone caves near Dire Dawa, with the initial discoveries being made by H. Breuil and P. Wernert in 1923, further work at the site in the late 1940s by the Frenchman H. Vallois, and then in the 1970s by C. Howell and Y. Coppens. Work in the Awash Valley and also at Melka-Kunture during the 1960s and early 1970s was conducted by Jean Chavaillon, N. Chavaillon, F. Hours, M.
Piperno, and others. Another prominent anthropologist, Richard Leakey, has worked in the Omo River region of southwest Ethiopia and participated in much research in neighboring Kenya, where his father, Louis Leakey, was involved in many excavations. It appears that some time between the eighth and sixth millennia b.c.e. people were beginning to domesticate animals, and archaeological evidence has shown that by 5000 b.c.e. communities were being formed in the Ethiopian highlands; it seems probable that the region’s languages started developing at this time. Linguists posit an ancient tongue, ancestral to the modern Afro-Asiatic (formerly Hamito-Semitic) languages, that later developed into the Cushitic and Semitic languages used today. By 2000 b.c.e. there is evidence of the cultivation of cereals, the use of the plow (probably introduced from Sudan), and animal husbandry. It is believed that people during this period would have spoken Geez, a Semitic language that became common in Tigray and is thought to be the origin of modern Amharic and also Tigrinya. There were many early links between ancient Ethiopia and Egypt, beginning with Piye, the Kushite ruler of Egypt’s Twenty-fifth Dynasty (eighth century b.c.e.), and there were occasions when the two countries were recorded as having the same ruler, whose capital was at Napata, in the north of modern-day Sudan. Indeed, Pharaoh Sahure sent a voyage to the land of Punt during the Fifth Dynasty (c. 2500 b.c.e.), and most scholars believe that this represents a part of modern-day Ethiopia, although some place Punt in modern-day Yemen or even as far south as Zanzibar or the Zambezi. This expedition sent by Sahure returned with 80,000 measures of myrrh; 6,000 weights of electrum, an alloy made from silver and gold; and 2,600 “costly logs,” probably ebony.
The most famous expedition to Punt was that sent by Queen Hatshepsut in about 1495 b.c.e., according to inscriptions detailing it that have been found on the temple of Deir el-Bahri in Thebes. The carvings show traders bringing back myrrh trees, as well as sacks of myrrh, incense, elephant tusks, gold, and also some exotic animals and exotic wood. DA’AMAT From about 800 b.c.e., several kingdoms started to emerge in Ethiopia. The first was the kingdom of Da’amat, which was established in the seventh century b.c.e. and dominated the lands of modern-day northern Ethiopia, probably with its capital at Yeha. A substantial amount is known about Yeha, owing to the excavations of the Frenchman Francis Anfray in 1963 and again in 1972–73, as well as work by Rodolfo Fattovich in 1971. Much of the early work of the former was concentrated in rock-cut tombs, with the latter working extensively on pottery fragments. From their work and the work of other archaeologists it was found that Yeha was an extensive trading community, well established in the sale of ivory, tortoiseshell, rhinoceros horn, gold, silver, and slaves to merchants from south Arabia. It also seems to have had close links with the Sabaean kingdom of modern-day Yemen, as all the surviving Da’amat inscriptions refer to the Sabaean kings. The kingdom of Da’amat used iron tools and grew millet. It flourished for about 400 years but declined with the growing importance of other trade routes and possibly because the kingdom could not sustain itself, having killed many of the animals in its region and possibly exhausted its mines.
Substantial archaeological work has been carried out on this period of Ethiopian history, with one survey by Jean Leclant in 1955–56 finding two sites at Haoulti-Melazo with a statue of a bull, incense altars, and some fragmentary inscriptions. AXUM The next kingdom, which gradually took over from Da’amat, was the kingdom of Axum (Aksum), from which modern Ethiopia traces its origins. The large temple at Yeha dates to 500 b.c.e., and scholars question whether it was built by the kingdom of Da’amat or that of Axum. Axum may have emerged from 1000 b.c.e., but it was not until 600 b.c.e. that it became important. Like Da’amat, it also relied heavily on trade with Arabia, forming a power base in Tigray and controlling the trade routes from Sudan and also those going to the port of Adulis on the Gulf of Zula. The kingdom of Axum used Geez as its language, with a modified south Arabian alphabet as its script. Indeed, so much of Axum’s architecture and sculpture is similar to earlier designs found in South Arabia that some historians have suggested the kingdom might have been largely established by people from Arabia. This is reinforced by the fact that Axum also worshipped deities similar to those of the Middle East. During the eighth century b.c.e. it is thought that Judaism reached Ethiopia—the modern-day Falashas are the descendants of the Ethiopian Jews. It seems likely that Jewish settlers from Egypt, Sudan, and Arabia settled in Ethiopia, but attempts to link them chronologically with a specific biblical event such as Moses leading the Jews from Egypt or the Babylonian Captivity have not been successful. Into this debate enters the legend of the queen of Sheba. She was known locally as Queen Makeda and is believed to have ruled over an area of modern-day southern Eritrea and to have made a pilgrimage to Jerusalem.
There she met the Israelite king Solomon, and they may have had an affair that led to the birth of a son who became Menelik I, the ancestor of the Ethiopian royal house that ruled the country until 1974, although this rule was interrupted by the Zagwe dynasty. Certainly the dynasty tracing its ancestry from Menelik called itself the Solomonic dynasty. One version of the legend has Menelik I returning to Jerusalem, where he takes the Ark of the Covenant, which some believe is still in Ethiopia. By the fifth century c.e. Axum had emerged as the major trading power in the Red Sea, with coins bearing the faces of the kings of Axum widely distributed in the region. Mani (c. 216–274 c.e.), the Persian religious figure, listed the four great powers during his life as Rome, Persia, China, and Axum. During the third century b.c.e. Ptolemy II and then Ptolemy III of Egypt both sent expeditions to open up trade with Africa and, it has been suggested, also to obtain a source of war elephants for their battles against their rival, the Seleucid Empire. The latter tended to gain a military advantage by using Indian elephants, while the Ptolemies used either Indian elephants or North African elephants, which are smaller than Indian elephants. Although the Ptolemies soon stopped sending missions to the Red Sea and beyond, trade relations continued. The Roman writer Pliny, writing before 77 c.e., mentioned the port of Adulis, and the first-century c.e. Greek travel book Periplus Maris Erythraei describes King Zoskales living in Adulis—then an important trading destination and the port for the kingdom of Axum—as being the source for ivory taken from the hinterland to the capital of Axum, eight days inland from Adulis.
Zoskales in Adulis was described as “a covetous and grasping man but otherwise a nobleman and imbued with Greek education.” The writer of Periplus Maris Erythraei also notes that there was a large number of Greco-Roman merchants living at Adulis, and it seems likely that it was through them that the ideas of Judaism and then Christianity started to flourish. The arrival of Christianity in Ethiopia is ascribed to Frumentius, who was consecrated the first bishop of Ethiopia by Athanasius of Alexandria in about 330 c.e. He came to Axum during the reign of the emperor Ezana (c. 303–c. 350), converting the king, as is evident in the design of his coins, which changed from an earlier motif of a disc and a crescent. This meant that the Monophysite Christianity of the eastern Mediterranean region was established firmly in Axum during the fourth century, and two centuries later monks were converting many people to Christianity in the hinterland to the south and the east of Axum. The Christianity of Axum became the Ethiopian Orthodox Church, heavily influenced by the Egyptian Coptic Church. The last stela at Axum, late in the fourth century, mentions King Ouszebas. At its height Axum not only dominated the Red Sea in commerce but even held land controlling the South Arabian kingdom of the Himyarites in modern-day Yemen, with King Ezana described on his coins not only as “king of Saba and Salhen, Himyar and Dhu-Raydan” but also “King of the Habshat”—all these places being in South Arabia. He had also, by this period, adopted the title negusa nagast (“king of kings”). On the African continent its lands stretched north to the Roman province of Egypt and west to the Cushite kingdom of Meroë in modern-day Sudan. Indeed, it seems that the forces of Axum had captured Meroë in about 300 c.e. However, during the reign of Ezana the kingdom experienced a decline in fortune, though it regained its former strength over the next century.
This is borne out by the few inscriptions that survive, which were either in Geez or in Greek. AXUM’S DECLINE When Christians were attacked in Yemen in the early sixth century, Emperor Caleb (r. c. 500–534) sent soldiers to prevent them from being persecuted by a Jewish prince, Yusuf Dhu Nuwas, who had attacked the Axum garrison at Zafar and burned all the nearby Christian churches. This was a time when Axum was probably at its height in terms of its power and diplomatic connections. The Book of the Himyarites revealed previously unpublished information about Caleb’s attack on Yemen. King Caleb spent his last years in a monastery, but by this time Axum was in control of land on both sides of the Red Sea and was in regular communication with the Byzantine Empire at Constantinople. Axum’s power waned when the Sassanid Empire invaded the region in 572. Although it is not thought that the Sassanids conquered the kingdom of Axum, they probably did defeat its armies in battle and certainly cut off its trade routes not only to Arabia but also into Egypt, thus ensuring its gradual decline. The political influence of Axum was ended, and the city declined. Some 30–40 years later the whole of South Arabia and also Egypt were controlled by the Arabs, cutting off the connections between Axum and the Mediterranean. See also Christianity, early; Oriental Orthodox Churches. Further reading: Butzer, Karl. “The Rise and Fall of Axum, Ethiopia: A Geo-Archaeological Interpretation.” American Antiquity 46 (1981); Connah, Graham. African Civilisations: Precolonial Cities and States in Tropical Africa: An Archaeological Perspective. Cambridge and London: Cambridge University Press, 1987; Doresse, Jean. Ancient Cities and Temples: Ethiopia. Woking, UK: Elek Books, 1959; Fattovich, Rodolfo. “Remarks on the Late Prehistory and Early History of Northern Ethiopia.” In Proceedings of the Eighth International Conference of Ethiopian Studies, Addis Ababa, 1984.
Addis Ababa, Ethiopia, 1989; Hable Sellassie, Sergew. Ancient and Medieval Ethiopian History to 1270. Addis Ababa, Ethiopia: Haile Selassie I University, 1972; Hable Sellassie, Sergew. Bibliography of Ancient and Medieval Ethiopian History. Addis Ababa, Ethiopia: privately published, 1969; Kobishchanov, Yuri M. Axum. Philadelphia: Pennsylvania State University Press, 1979; Landström, Björn. The Quest for India. London: Allen and Unwin, 1964; Marcus, Harold Golden. A History of Ethiopia. Berkeley: University of California Press, 1994; Munro-Hay, Stuart C. H. Aksum: An African Civilisation of Late Antiquity. Edinburgh, Scotland: Edinburgh University Press, 1991. Justin Corfield Etruscans The Etruscans left no historical or written records other than tomb inscriptions with brief family histories. Apart from this burial genealogy, most writing about the Etruscans comes from later sources, including the Romans. Only recently has archaeology begun to unravel the mystery of the Etruscans. During the Renaissance, in 1553 and 1556, two Etruscan bronzes were discovered, but excavation of Etruscan sites did not begin in earnest until the 18th century. After the Etruscan cities of Tarquinia, Cerveteri, and Vulci were excavated in the 19th century, museums began collecting objects from the digs. More than 6,000 Etruscan sites have been examined. Dionysius of Halicarnassus in the first century b.c.e. thought the Etruscans were Pelasgians who settled in modern-day Tuscany and were absorbed by the native Tyrrhenians. Livy and Virgil in the first century b.c.e. thought the Etruscans came after the fall of Troy and the flight of Aeneas. Herodotus, in the fifth century b.c.e., claimed a Lydian origin, with the Tyrrhenians being named for the Lydian leader Tyrrhenos. Until recently scholars agreed with Herodotus and Dionysius that the Etruscans were migrants from Asia Minor between 900 and 800 b.c.e.
Modern scholars believe that the Etruscans descended from the Villanovans, whose peak was in the ninth and eighth centuries b.c.e. In the seventh century b.c.e. Etruscan villages supposedly took the place of Villanovan villages. The Etruscans were neighbors to a small village of Latins in northern Latium. The Etruscan city-states were located in the marshy coastal areas of west-central Italy, that is, modern Tuscany. Permanent settlement dates from the end of the ninth century b.c.e., including Vetulonia and Tarquinii (now Tarquinia). Burial chambers of that era differ from those of earlier eras and contain amber, silver, gold, and gems from Egypt, Asia Minor, and other parts of the world. The Etruscans were a sea people as well as miners of copper, tin, lead, silver, and iron. The Etruscan alphabet was based on the Greek, but the language had a distinctively Etruscan grammar. The Etruscan language is similar to that of a sixth-century b.c.e. inscription found on Lemnos but differs from the other languages of the Mediterranean. Inscriptions use this Greek-derived alphabet but are written from right to left. Precise definitions of some words are still not known. By the seventh and sixth centuries b.c.e. the Etruscans had conquered Rome, much of Italy, and non-Italian areas such as Corsica. This success brought their political and cultural peak in the sixth century b.c.e. The Etruscans were largely agrarian, as were the surrounding peoples, but they had a powerful military that allowed them to dominate their neighbors, using them as labor on their farms, and devote their own time to commerce and industry. Greek influence was strong in Etruscan religion, with human-type gods and highly sophisticated rituals for divination, but Etruscan mythology also included some unique elements. Etruscan religion clearly separated the human and divine, and it established exact procedures for keeping the goodwill of the gods. Religion mattered greatly to the Etruscans.
They built tombs resembling their houses and gave the deceased household objects for use in the afterlife. Rome inherited Etruscan religion, including books of divination and the Lares, their household gods. Scholars of the 19th and 20th centuries assessed Etruscan painting and sculpture as original and creative but not nearly as great as the art of the Greeks. The preference at that time was for the Greek mathematical ideal of beauty. Etruscan art is better able to capture feeling and the essence of the subject. Many of the remaining examples of Etruscan art are funerary, but there is evidence from existing frescoes and other works of art that the Etruscans used color liberally. Etruscan art was a major stylistic influence on Renaissance artists who lived in the area of the old Etruria. Etruscan jewelry, pottery, and portable art were so prized during the Renaissance and after that collectors destroyed many Etruscan sites to obtain them, making periodization of Etruscan styles difficult. Etruscan cities were fortified and ruled by a king. An aristocracy ruled Etruscan society and controlled the government, military, economy, and religion. Cities such as Tarquinii and Veii dominated their regions and began colonizing adjacent areas. Independent city-states entangled themselves in economic and political alliances. Rule by kings gave way to rule by oligarchs. In some cases the kings or oligarchs allowed governance by council or by elected officials. The Etruscan city alliances provoked responses from the Romans, Greeks, and Carthaginians, who regarded the Etruscans as a threat. Etruscan technology, such as the engineering that moved water via canals and irrigation channels, long predated the Roman aqueducts. The Etruscans built much of Rome, including the Cloaca Maxima, the walls around the town, and the Temple of Jupiter. The Etruscans implemented an efficient administrative system for Rome.
Legendary Etruscan kings of Rome may have used warrior status to gain their crowns. Among these were the Tarquins Lucius Priscus and Lucius Superbus. The last of Rome’s seven kings was the Etruscan Tarquin the Proud (Tarquinius Superbus), replaced in 510 b.c.e. when Rome chose a republic. In 504 b.c.e. the Etruscans were expelled from Latium, beginning the end of Etruscan power and the rise of Roman culture. The Etruscans kept north of the Tiber, and their influence on Rome diminished. Etruscan power was further weakened in the fifth century b.c.e. when the navy of Syracuse defeated the Etruscan coalition fleet off Cumae in 474 b.c.e. The Etruscan confederation allied with Athens in a futile attack on Syracuse in 413 b.c.e. Rome besieged Veii and, after 10 years, defeated the city in 396 b.c.e. In 386 b.c.e. the Etruscans lost their trading routes over the Alps after the Gauls conquered Rome and the Po valley. Rome’s century-long conquest of Etruria was finished in 283 b.c.e., and in 282 b.c.e. Rome defeated the Etruscans a final time. The Etruscans accepted a peace treaty. Increasing control by Rome cost the Etruscans their cultural identity. Cities such as Caere, Tarquinia, and Vulci ceded territory and paid tribute to Rome. Decline fueled dissension among the aristocracy, and the lower classes rose in protest. Cities such as Volsinii lost their social structure. Some Etruscan cities allied with Rome and came under Roman law. Rome helped the Etruscan cities to defeat their rebellions, even when the Etruscans had help from the Gauls, Samnites, Lucanians, and Umbrians. In the first century b.c.e. the Etruscans accepted the offer of Roman citizenship, but their new status was lowered when they supported the losing side in the Roman civil wars of 88–86 b.c.e. and 83 b.c.e. Lucius Cornelius Sulla, the winner, razed cities, seized land, and limited Etruscan civil rights.
Subsequent Etruscan rebellions failed, and Romans colonized Etruria in the next century, furthering the Romanization of the region. Rome absorbed every Etruscan city, and Etruria was no more. Etruscan culture and society dominated the Italian Peninsula in the eighth through fourth centuries b.c.e. The Etruscans were a strong influence on Roman culture and society; however, they fell to Roman dominance in the fourth century b.c.e., and shortly after that their language and writings disappeared, only to be recovered in the 20th century. See also Roman Empire. Further reading: Barker, Graeme, and Tom Rasmussen. The Etruscans. London: Blackwell Publishers, 2000; Haynes, Sybille. Etruscan Civilization: A Cultural History. Los Angeles: J. Paul Getty Museum, 2005; Spivey, Nigel. Etruscan Art. London: Thames and Hudson, 1997. John H. Barnhill Euripides (c. 484–406 b.c.e.) Greek playwright Euripides was one of the three great Athenian tragic dramatists, along with Aeschylus and Sophocles. He was reputed to have been the author of some 92 plays and received a considerable level of public and critical acclaim; on some 20 occasions he was chosen as one of the three playwrights whose work was staged at the annual dramatic festival of Athens. Nineteen of his plays have been preserved. The life and career of Euripides are not known in great detail, and it is likely that some sources, for example the constant references to him in the plays of Aristophanes, are scurrilous or at least satirical. It does appear that he was born into a wealthy family and was talented in a number of fields other than drama. His parents were named Mnesarchus and Cleito. He married a woman named Melito and had three sons; one became a poet of some distinction. Euripides participated in just one known public activity, when he served on a diplomatic mission to Syracuse.
A great deal of his later life was lived during the Peloponnesian War with Sparta, and the travails of living in a city involved in a seemingly endless war may have contributed to his decision to accept an invitation from King Archelaus of Macedonia to live in that country in 408 b.c.e., and it is there that he died. Euripides has been compared unfavorably to Aeschylus and Sophocles on account of his greater reliance on poetic oratory and rhetoric rather than genuine dramatic intensity, and because of his use of Socratic or Sophistic philosophy in his works. However, those same qualities have in some ways made him more popular than his contemporaries in the modern world, because his themes and language appear more accessible and comprehensible. Even so, he received less critical acclaim than his rivals, supposedly to his chagrin. The best-known plays of Euripides include The Bacchae, Medea, Electra, and Iphigenia at Aulis. Medea was first produced in 431 b.c.e. and highlights the oppression of women, a central theme in the works of Euripides. In this play the hero Jason takes Medea as a wife, and they go to live in Corinth. After some years of happy marriage, which included the birth of two children, Jason announces his intention to abandon his wife and pursue the princess of Corinth. Deeply distressed, Medea ultimately resolves to murder the princess and her own children, thereby denying Jason the consolation of family in his later life. She is able to escape from his revenge by riding away in the chariot of the sun god, who is her grandfather. Despite this overturning of all accepted proprieties, Euripides succeeds in causing the audience to sympathize with the plight of the abandoned woman.
The play Iphigenia at Aulis demonstrates another theme of importance to Euripides: the struggle between the dictates of public duty and those of personal morality and decency. The play is set at the beginning of the Trojan War and depicts the Greek fleet, led by Agamemnon, becalmed at Aulis through the ill will of the goddess Artemis. Agamemnon determines that the only way to placate Artemis is to sacrifice his daughter Iphigenia. Unwilling though he is, Agamemnon feels he cannot escape his duty and his destiny, so he tricks his daughter into coming to join him. Once she has arrived, she learns the truth, and after some heart-rending scenes she willingly volunteers for her own sacrifice. However, it is clear that this initial act of violence will lead to a spiral of such acts in the future, and many of these episodes are explored in other plays of Euripides. The Bacchae is perhaps the one existing play of Euripides that attempts to reconcile the outbreak of violence with the possibility of returning society to harmony once more. The action centers on the god Dionysus and his attempt to introduce the unbridled, ecstatic worship that he accepts as his due into the city of Thebes. This is resisted by the king, Pentheus, and Dionysus takes violent revenge against the king and his people. However, those women who had been convinced to enter a state of divinely inspired frenzy are then depicted as having the opportunity to return to a rational, human state. This play demonstrates some opportunity for redemption from violence and oppression on Earth. Euripides was one of the great exponents of the tragic art, which involved the classical elements of chorus, divine intervention, and savage scenes that can distance the work from modern sensibilities. However, the power of Euripides’s language is, compared to all the Greek tragic dramatists, perhaps the best able to bridge that gulf.
See also Greek drama; Greek mythology and pantheon; Greek oratory and rhetoric. Further reading: Euripides. Medea and Other Plays. London: Penguin Classics, 1997; ———. The Bacchae and Other Plays. Trans. by Philip Vellacott. London: Penguin Classics, 1973; Foley, Helene P. Ritual Irony: Poetry and Sacrifice in Euripides. Ithaca, NY: Cornell University Press, 1985. John Walsh Eusebius (c. 260–339 c.e.) historian and religious leader Born in Caesarea, Eusebius studied under the director of the theological school in that city, Pamphilus (a future martyr), whom he so admired that he adopted his name, calling himself Eusebius Pamphili. A devoted disciple of Origen, Pamphilus expanded the library that Origen had established at Caesarea and passed on to Eusebius the great master’s critical and scientific approach to texts. After Pamphilus’s martyrdom, Eusebius fled to Tyre and then to Egypt, where he may have been imprisoned for the faith. When the persecution ended in 313 c.e., he returned to Caesarea and was made its bishop. As bishop of an important diocese—and one not far from Alexandria—Eusebius naturally became involved in the controversy over Arianism. He was present at the Council of Nicaea in 325 and signed the Orthodox statement produced by the council known as the Nicene Creed, but he signed more for peacekeeping reasons than from a genuine conviction of its theological precision. He was wary of the term homoousios (of one being) because he felt it smacked of Sabellianism (an earlier heresy that taught that the Trinity was three modes of being God, with no real distinction between Father, Son, and Holy Spirit). After Nicaea, Eusebius became a leader of the moderate (or semi-Arian) party, which sought compromise and harmony over precise theological expression; this was a position favored also by the emperor Constantine the Great.
The oft-presented view that Eusebius enjoyed a close friendship with the emperor and was his trusted adviser has been criticized, given their relatively few personal or literary exchanges. It is more accurate to say that Eusebius’s uncritical admiration of the first Christian emperor—whom he believed God had sent to bring the church into an era of peace—enthusiastically colored his theology of the church with a certain triumphalism. In spite of his tainted theological associations with the Arians and his biased treatment of Constantine, Eusebius will always be honored as the “Father of Ecclesiastical (Church) History.” He was the first to attempt to compose a work chronicling the important people and events in the early church up to his own day, c. 324. Entitled Church History, it is a rich collection of historical facts, documents, and excerpts from pagan and Christian authors, some of which are extant only in this work. Some of the principal themes traced throughout the work are the lists of bishops in the most important cities, Orthodox Christian writers and their defense of the faith against the heresies of their day, the periods of persecution along with authentic stories of the martyrs and confessors of each era, the fate of the Jews, and the development of the canonical books of the New Testament. Although it contains a number of errors, the very “plethora of details” it gives and the eyewitness accounts of the persecutions and martyrdoms make it a work of inestimable value. Eusebius is witness to an understanding of ecclesiology that is both rooted in tradition and energetically engaged in expressing the faith in terms called for by the shifting winds of time. Also, his candid approach to the canon of the New Testament allows the reader a splendid glimpse of early Christianity in the process of discerning an important issue.
Besides his Church History, Eusebius’s writings include Life of Constantine (an unfinished work, which is more of an encomium than a historical biography), apologetical works (against pagans and Jews), biblical works (including commentaries, a harmony of the Gospels, and a geographical dictionary of the Bible), dogmatic works (such as Defense of Origen), sermons, and a few extant letters. See also Christianity, early. Further reading: Barnes, Timothy D. Constantine and Eusebius. Cambridge, MA: Harvard University Press, 1981; Eusebius. The Church History. Translated by Paul Maier. Grand Rapids, MI: Kregel Publications, 1999. Gertrude Gillette Ezana (Abreha) (c. 320–350 c.e.) Ethiopian king Abreha, also known as King Ezana, was a fourth-century c.e. king who converted to Christianity and subsequently established this faith as the state religion in Axum (Aksum), part of modern-day Ethiopia. Scholars do not agree on the details of Ezana’s life, but several have documented information about his reign through trilingual inscriptions on stone tablets of the period. Most Ethiopians believe that Abreha, along with his twin brother Atsbeha, inherited the throne of Axum when their father died. Since the boys were too young to take over the reins of government, their mother, Sawya (Sophia), served as queen regent from around 325 to 328 c.e. Upon ascending to the throne, Abreha took Ezana as his throne name, and Atsbeha opted for Sayzana. Ezana and Sayzana were tutored by two Hellenic Syrians who had been rescued as young boys after the other occupants of their ship had been murdered or killed in a shipwreck. The king subsequently accepted responsibility for the brothers, who were classified as slaves. However, recognizing their unique abilities, he named Aedesius as the royal cupbearer and placed Frumentius in the position of royal treasurer and secretary. After the king’s death the Syrians continued to tutor the royal twins and served as advisers to the queen.
Although the exact date is not known, it is believed that Ezana and Sayzana ascended the throne sometime between 320 and 325. As monarch, Ezana claimed many titles and is credited with being the first to call himself the “king of kings.” He identified himself as the king of Axum, Saba, Salhen, Himyar, Raydan, Habashat, Tiamo, Kasu, and of the Beja tribes. The kingdom over which King Ezana ruled stretched out on both sides of the Red Sea and extended into what is modern-day Sudan and Somalia. Between 330 and 360 the outside world was made aware of his kingdom. At the time, outsiders referred to Nubia and all of tropical Africa as Ethiopia. However, residents of Axum generally referred to themselves as Habashats. The term Ethiopian, which means “burned faces,” originated with Greek traders and was first used by Ezana in inscriptions that appeared on stone tablets between 333 and 340. Ezana is considered to have been the abler and more politically astute of the brothers, and some scholars doubt that he even had a twin. At any rate, Ezana reigned over Axum at a time when it was flourishing as a viable political, economic, and agricultural African state. His tenure was marked by territorial expansion and significant economic growth, and Ezana opened up a major trade route with Egypt. Consequently, a large number of Greek traders immigrated to Ethiopia in order to take advantage of its rich resources of gold, ivory, spices, and tortoiseshell. By some accounts it was these Greek merchants who first introduced Christianity to Ethiopia. However, some scholars believe that Frumentius and his brother were entirely responsible for converting the royal family to Christianity. Most sources agree that Frumentius, either on his own initiative or on orders from Ezana, traveled to Alexandria to ask Patriarch Athanasius (c. 293–373) to send a bishop to start a church in Axum. Instead, the patriarch appointed Frumentius himself as bishop.
From the date of his return, somewhere around 330, Frumentius devoted his life to evangelizing. Within a few months tens of thousands of Ethiopians from all social classes had become Christians. Evidence shows that early in their tenure as monarchs of Axum, Ezana and Sayzana paid allegiance to pagan gods. Ezana often called himself the “Son of Mahrem,” which was equivalent to identifying himself with Ares, the Greek god of war. After the brothers’ conversion to Christianity, Axumite coins most often depicted the cross, or sometimes multiple crosses. After his death on the battlefield at around 25 years of age, Ezana was buried in a rock-hewn church that still stands in present-day Ethiopia. Sayzana became the sole monarch, governing for the next 14 years. Upon Sayzana’s death, he was buried beside his brother. The church of Ethiopia subsequently canonized both Abreha and Atsbeha, and Ethiopians honor these saints each year on October 14. There is some evidence that the Ark of the Covenant was brought to Axum from Jerusalem in the 10th century, where it was placed in the sanctuary of St. Maryam Tseyon. As a result of this belief, Axum is considered Ethiopia’s holiest city. Archaeologists are in the process of uncovering relics that have traced the history of the area back to the first century c.e. See also Christianity, early; Ethiopia, ancient; Fertile Crescent. Further reading: Harris, Joseph E. Pillars in Ethiopian History: The William Leo Hansberry African History Notebook, Vol. 1. Washington, D.C.: Howard University Press, 1974; Henze, Paul B. Layers of Time: A History of Ethiopia. New York: Palgrave, 2000; Hess, Robert L. Ethiopia: The Modernization of Autocracy. Ithaca, NY: Cornell University Press, 1970; Munro-Hay, Stuart. Ethiopia, the Unknown Land: A Cultural and Historical Guide. London: I. B. Tauris Publishers, 2002; Shinn, David H., and Thomas P. Ofcansky. Historical Dictionary of Ethiopia. Lanham, MD: Scarecrow Press, 2004. Elizabeth Purdy
The Expanding World 600 CE to 1450
East African city-states The Bantu migration from the central Sahara, perhaps the defining event in the history of Africa south of the Sahara, brought people to the region of East Africa as the nucleus of the emerging city-states. From the 10th century, Arab traders noticed the importance of such settlements to their trade. From the outset the city-states were fiercely independent, and no East African empires emerged in the way that Ghana, Mali, and Songhai did in the west. As Richard Hooker wrote in Civilizations in Africa: The Swahili Kingdoms: “The major Swahili city-states were Mogadishu, Barawa, Mombasa (Kenya), Gedi, Pate, Malindi, Zanzibar, Kilwa, and Sofala in the far south. These city-states were Muslim and cosmopolitan and they were all politically independent of one another; nothing like a Swahili empire or hegemony was formed around any of these city-states. In fact they were more like competitive companies or corporations each vying for the lion’s share of African trade.” However, while the Arabs provided much of the impetus for economic and cultural development, the original settlements were definitely rooted among Africans. Joseph E. Harris writes in Africans and Their History that “the most important pre-Islamic commercial town on the coast seems to have been Rhapta, about which little is known except that it was the center for the export of ivory that Arab merchants controlled. Rhapta was probably located on the northern coast of Tanganyika [now Tanzania].” The culture of the region became increasingly diverse, as Persians and Indians joined the Bantu Africans and the Arabs in the city-states. When Idi Amin Dada drove the Indians from what is now Uganda during his rule (1971–79), he was ending an Indian presence in his country that had its roots in the first traders from India hundreds of years before. While ivory, sandalwood, and gold were important exports, tragically the largest part of the economy was the slave trade.
Zanzibar and Mombasa became the eastern terminus points for slaves the Arabs took out of Africa for shipment to Arabia and Yemen. Kilwa emerged by the 12th century as perhaps the most powerful of the city-states, containing a mosque made from coral. The great Arab traveler Ibn Batuta stopped in Kilwa in 1331. It was ruled, as Harris notes, by the Shirazis, who had originally left the Persian city of Shiraz and intermarried with the Bantu population. Kilwa spread its influence south into the region of Zimbabwe and became a decisive factor in the trade in southern Africa as well. Symbolic of the wide-ranging trade was the voyage from Malindi to China in 1414. On that trip, the ruler of the city-state of Malindi sent a live giraffe to the Ming emperor of China, Emperor Yongle (Yunglo). The early 15th century saw the great Ming dynasty exploring fleets sailing from China, perhaps even as far as the Americas, under Admiral Zheng He. Admiral Zheng would ultimately make seven historic expeditions from 1405 to 1433. The Chinese fleets made several stops on the East African coast, making the Swahili city-states part of a vast pan-oceanic trading economy. Trade was determined by the prevailing winds of the monsoon seasons. From November to March, Arabs, Indians, and Persians would sail south toward the Swahili coast and make their return voyages north between July and September. The Indian Ocean trade was monopolized by Arabs until the arrival of the Portuguese in 1498, which marked the beginning of the end of the prosperous East African city-states. Moçambique, also spelled Mozambique, would not be free from Portugal’s imperial rule until 1975. See also Ethiopian Empire; gold and salt, kingdoms of; Hausa city-states; Zimbabwe. Further reading: Harris, Joseph E. Africans and Their History. New York: Meridian, 1998; Hourani, George F., and John Carswell. Arab Seafaring. Princeton, NJ: Princeton University Press, 1995; Indakwa, John, and Daudi Ballali.
Beginner’s Swahili. New York: Hippocrene, 1995; Kusimba, Chapurukha. The Rise and Fall of Swahili States. Walnut Creek, CA: Altamira Press, 1999; Levathes, Louise. When China Ruled the Seas. Oxford: Oxford University Press, 1994; Nicolle, David. Historical Atlas of the Islamic World. New York: Checkmark Books, 2003; Reader, John. Africa: A Biography of the Continent. New York: Vintage, 1991. John F. Murphy, Jr. Edward I and II kings of England The origins of the modern British parliament can be traced to the reigns of two medieval English kings: Edward I (1239–1307) and Edward II (1284–1327). Parliament had its roots in the king’s Great Council, a primarily judicial and executive body in which prominent barons counseled the king and considered petitions for the redress of grievances. Although Parliament had legislative, judicial, and fiscal responsibilities, it was the fiscal duties, in the form of grants of taxation, that transformed it from a feudal council into a more representative body. Prior to 1200 English kings drew most of their income from independent sources, such as land rents, judicial fines, and feudal aids. However, by the accession of Edward I in 1272, taxation provided most royal revenue. The wars of Edward I necessitated the frequent summoning of Parliament to approve the granting of taxes. However, taxation required the consent of Parliament, which became less forthcoming as the costs of Edward’s military campaigns mounted. The need to secure tax revenue forced Edward to accept a Parliament that included not only barons and bishops but also country gentlemen and burgesses from the towns. The result was the meeting of the Model Parliament in 1295, a landmark on the road to representative government in England. Growing resistance to taxation forced Edward I to consent to further concessions.
In 1297 Edward agreed to issue a confirmation of the charters of liberties, including the Magna Carta and the Provisions of Oxford (1258), in exchange for taxation. The king promised to collect taxes only with the consent of Parliament. In the reign of Edward II (1307–27) the barons sought to recover the political power they had lost during the reign of Edward I. The barons regarded themselves as the king’s rightful councilors and resented the influence of Piers Gaveston, a royal favorite. Opposition to Gaveston grew until 1310, when the barons compelled the king to consent to the appointment of a committee of 21 to reform government and reassert baronial authority. The committee drafted the Ordinances of 1311, which placed restrictions upon royal power. In 1321 the barons rebelled against another royal favorite, Hugh Despenser (1262–1326), but were defeated at the Battle of Boroughbridge in Yorkshire. In 1322 Edward summoned a parliament at York, which revoked the Ordinances and restored the authority of the king. Edward failed to redress the barons’ grievances, and they soon joined with Queen Isabella and Roger Mortimer to invade England in 1326. Thereafter the barons summoned a parliament, which charged Edward with rejecting good counsel. A delegation from Parliament demanded his abdication in 1327, and he was murdered later that year. The community of the realm had served notice on future kings that they were to govern by the law, of which Parliament was the guardian. See also English common law. Further reading: McKisack, May. The Fourteenth Century, 1307–1399. London: Oxford University Press, 1959; Powicke, Maurice. The Thirteenth Century, 1216–1307. London: Oxford University Press, 1962; Prestwich, Michael. English Politics in the Thirteenth Century. London: Macmillan, 1990. Brian Refford El Cid (c.
1043–1099) medieval Spanish warrior The title El Cid was given to a Spanish early medieval warrior called Rodrigo (or Ruy) Díaz de Vivar, also known as El Campeador (“the Champion”). After his death, he became a folk hero, with many Spanish ballads written of his rise from obscurity to lead the Castilians against the Moors. He was born at Vivar, near Burgos, in the kingdom of Castile; his father was a minor Castilian nobleman, but his mother was well connected and ensured that from a young age he attended the court of King Ferdinand I as a member of the household of the king’s eldest son, Sancho. When Sancho succeeded his father as King Sancho II of Castile, he appointed the 22-year-old Rodrigo Díaz de Vivar as his standard bearer, as he had already achieved a reputation for valor in battle, taking part in the Battle of Graus in 1063. When Sancho attacked Saragossa in 1067, Rodrigo accompanied him and took part in the negotiations that led the ruler of Saragossa, al-Muqtadir, to acknowledge the overlordship of Sancho. In 1067 Sancho went to war with his brother Alfonso VI, who had been left the kingdom of León. Some ballads portray El Cid as unwilling to support this invasion, which went against the will of Ferdinand I, but he was likely a willing participant. During the following five years El Cid was a vital military leader on behalf of Sancho. Sancho was killed while laying siege to Zamora. Alfonso, deposed from León, was the heir, and the new king found himself in a difficult political position. Count García Ordóñez, a bitter enemy of El Cid, became the new standard bearer, but El Cid was able to remain at court, as Alfonso did not want to make an enemy of such a tough opponent. It was probably Alfonso who planned the marriage of El Cid to Jimena, daughter of the count of Oviedo. They had a son, Diego Rodríguez, and two daughters. In 1097 Diego was killed in battle in North Africa.
Castilians who had supported Sancho were naturally nervous about Alfonso’s becoming king, and these simmering resentments began to be expressed through El Cid, who served as a conduit for them. In 1079 El Cid was sent to Seville on a mission to the Moorish king. Coinciding with this trip, García Ordóñez aided Granada in its attack on Seville, but El Cid defeated the forces from Granada at Cabra, capturing García Ordóñez. His easy victory gained him enemies at court. When El Cid attacked the Moors in Toledo (who were allied to Alfonso), the king exiled him, and although he returned some years later, he was never able to remain for long. El Cid went to work for the Moorish king of Saragossa, serving him and his successor for several years. This gave him a better understanding of Muslim law, which would help him in his later career. In 1082 he led the forces of Saragossa to victory over the Moorish king of Lérida and the count of Barcelona; two years later, undefeated in battle, he defeated the forces of the king of Aragon, Sancho Ramírez. When the Almoravids from Morocco invaded Spain in 1086 and defeated Alfonso’s army, the two were briefly reconciled, but soon afterward El Cid returned to Saragossa and did not help prevent the Christians from being overwhelmed. Instead El Cid focused his attention on becoming the ruler of Valencia. This required political machinations, and El Cid had to reduce the influence of other neighboring rulers. The importance of the counts of Barcelona came to an end when Ramón Berenguer II’s forces were decisively defeated at Tébar in May 1090 by El Cid’s Christian and Moorish forces. El Cid then utilized loopholes in Muslim law when Ibn Jahhaf killed al-Qadir, the ruler of Valencia. He besieged the city, which was controlled by Ibn Jahhaf, and when an Almoravid attempt to lift the siege in December 1093 failed, the city realized it could not hold out much longer, and in May 1094 it surrendered.
El Cid then proclaimed himself the ruler of Valencia, serving as the chief magistrate and governing for both Christians and Muslims. In law El Cid still owed fealty to Alfonso VI, but in practice he was totally independent of the king. El Cid’s victories encouraged many Christians to move to Valencia, and a bishop was appointed. El Cid ruled Valencia until his death on July 10, 1099. Had El Cid’s only son survived him, there would have been a dynasty, and possibly a new royal house. However, that was not the case, and Valencia was ruled by Muslims again until 1238. As he had never been defeated in battle, the story of El Cid, with increasing literary license, became a great ballad for Christians, who overlooked his years working for the Moors and hailed him as the hero of the “Reconquista”—the retaking of Spain from the Moors. See also Almoravid Empire; Christian states of Spain; Muslim Spain; Reconquest of Spain. Further reading: Barton, Simon, and Richard Fletcher. The World of El Cid. Manchester: Manchester University Press, 2000; Clissold, Stephen. In Search of the Cid. London: Hodder & Stoughton, 1965; Fletcher, Richard. The Quest for El Cid. London: Hutchinson, 1989. Justin Corfield English common law Common law developed after the Norman Conquest of England. In 1066 England was peopled with Angles, Saxons, Vikings, Danes, Celts, Jutes, and other groups who were suddenly ruled by French-speaking Normans. Most law at the time was customary law that had been handed down orally from generation to generation. In addition there were the legal code of Alfred the Great, which was biblical in nature, and the Danelaw of the Vikings and Danes. Most of the courts were communal courts (folk-moot), the hundred and shire courts, and baronial, or manorial, courts administering justice in the interest of the local nobility. Immediately after the Norman Conquest the king would hear cases coram rege (before the king) that involved royal interests.
However, the king with the royal court tended to be on the move in England or away in France. Consequently the legal work was soon delegated to an appointed tribunal, the Curia Regis. From it came the three royal common law courts that were used to unify the kingdom. The first of the royal common law courts was the Exchequer. Originally concerned with the collection of taxes and the administration of royal finances, by 1250 it had become a court exercising full judicial powers. The second royal common law court to develop was the Court of Common Pleas (or Common Bench), which was probably established during the reign of Henry II (1154–1189). This court heard cases that did not involve the king’s rights. It was firmly established at Westminster after King John was forced to sign the Magna Carta in 1215. The third royal common law court to evolve from the Curia Regis was the King’s Bench. Eventually this court heard cases involving the king’s interests, criminal matters, and cases affecting the high nobility. It also developed the practice of issuing writs of error for review of cases decided in Common Pleas. One factor promoting the development of the common law courts was their ability to settle land disputes. All of the land in England belonged to the king by right of conquest. He then awarded it to his vassals to hold and utilize in exchange for loyalty and for services. Because economic production was almost exclusively agricultural, title to the use of land was extremely valuable. Disputes over who was entitled to possess land created innumerable cases. As the justices in eyre traveled their assigned circuits to hold court, they would decide cases using the Bible, canon law, and most especially reasoning applied to the customary law of that place. When the judges returned to London they would go to their places of permanent residence in taverns or cloisters.
These residences of the judges, who were often monks or bachelors, eventually became the Inns of Court, where cases were heard and experts were trained in law. Over the course of more than 200 years the judges “discovered” the law common to all the people of England. The belief was that underlying the thicket of unwritten customary law was a common foundation that could be discovered by reason. In effect the judges were developing legal principles or laws as they made judicial rulings in particular cases. Among the principles of the common law is stare decisis (let the decision stand). Stare decisis means that a judge in deciding a case should look to similar cases from the past for guidance. The use of similar cases is itself a legal principle, namely, that like cases should be tried alike. However, in the absence of a precedent-setting rule the judge would in effect “legislate” and create a new rule. This meant that the common law was case law, or judge-made law, created by legal reasoning about legal problems. It was well established centuries before the rise of Parliament. The developing common law had the virtue of stability; however, it lacked flexibility. To bring a case into a common law court was often too costly for common people. The common law courts also moved slowly; that could mean that justice delayed was justice denied. To lodge a complaint in a common law court an appropriate writ had to be obtained. If the wrong kind of writ was used, of which there were eventually over 100 kinds, the case would be dismissed. In addition some of the rules of the common law were injurious to justice. For example, before bringing a suit for an injury to a person or to property in a common law court, real injury had to be sustained. The common law lacked a mechanism for preventing irreparable harms from happening.
Since the king was believed to be the fountainhead of justice in England—that is, the person who ruled by divine right and through whom the justice of heaven flowed to the people—equity courts were established to restore fairness or equity to the legal system. People would appeal to the king for justice. In response the kings ordered the court chancellor to issue decrees of equity. Chancery courts developed to hear cases of equity and to correct the common law. See also Norman and Plantagenet kings of England. Further reading: Cantor, Norman F. Imagining the Law: Common Law and the Foundations of the American Legal System. New York: HarperCollins, 1997; Caenegem, R. C. van. The Birth of the Common Law. Cambridge: Cambridge University Press, 1990; Hudson, John. The Formation of the English Common Law: Law and Society in England from the Norman Conquest to Magna Carta. New York: Longman, 1996; Megarry, Hon. Sir Robert. Inns Ancient and Modern: A Topographical and Historical Introduction to the Inns of Court, Inns of Chancery, and Serjeants’ Inns. London: Selden Society, 1972. Andrew J. Waskey Ericson, Leif (c. 980–1025) Icelandic explorer Leif Ericson was an Icelandic explorer who is believed to have been the first European to discover North America and, more specifically, the region that would become known as Newfoundland and then Canada. It is believed that Ericson was born around 980 to Erik the Red, a Norwegian outlaw and explorer who founded two Norse colonies in Greenland. During a stay in Norway in 999, Leif converted to Christianity, as did many other Norse around that time. He also traveled to Norway to serve King Olaf I (Tryggvason). When he returned to Greenland, he purchased the boat of Bjarni Herjólfsson and set out to explore the land that Bjarni had sighted, which later became known as North America.
In 986 Bjarni had been driven off course by a fierce storm between Iceland and Greenland and had sighted hilly, heavily forested land far to the west, but he never set foot on it. One of the sagas, “The Saga of the Greenlanders,” states that Leif embarked around the year 1000 to follow Bjarni’s route in reverse. Leif was motivated by a sense of adventure and a desire to find more land to farm. The expedition made three landfalls. The first land they met was covered with flat rock slabs and was probably present-day Baffin Island. Leif called it Helluland, which means “land of the flat stones” in Old Norse. Next he sailed to a land that was flat and wooded, with white sandy beaches, which he called Markland, meaning “woodland” in Old Norse. Markland is commonly assumed to have been Labrador. Continuing south, Leif and his men discovered land again, disembarked, and built some houses. They found the land pleasant. Salmon were plentiful in the rivers, the climate was mild, and the land was lush and green for much of the year. Leif’s 35-member party remained at this site over the winter. The sagas mention that one of Leif’s men, Tyrkir, a German warrior, found grapes. As a result, Leif named the country Vinland, meaning “land where the grapes grow” in Old Norse. Historians disagree on the exact location of Vinland. However, several sites along the eastern coast of the United States and Canada, from Newfoundland to Virginia, have been suggested. Many believe that the Norse settlement at L’Anse aux Meadows in Newfoundland was Leif’s colony. Others argue that Vinland must have been more southerly, since grapes do not grow as far north as Newfoundland; however, grapes may have grown there during the Medieval Warm Period. On the return voyage to Greenland, Leif rescued an Icelandic castaway and his crew. This deed earned him the nickname “Leif the Lucky” and made him rich from his share of the rescued cargo.
Another saga, “The Saga of Erik the Red,” asserts that Leif discovered the American mainland purely by accident. According to this saga, Leif was blown off course while returning from Norway to Greenland around 1000 and landed on the shores of North America. However, the saga does not mention any attempt to settle there. “The Saga of the Greenlanders” is generally considered to be the more reliable of the two. Leif’s father, Erik the Red, died shortly after his return home. As a result Leif stayed in Greenland to govern his father’s settlements. He died in 1025. All historical sources agree that Leif never returned to North America; his brother, Thorvald, led the next voyage to the new territory. Subsequent attempts to settle Vinland were unsuccessful because of friction between the Norse settlers and the native North Americans. Nevertheless Leif stands as one of history’s greatest explorers, besting Christopher Columbus’s discovery of the New World by almost five centuries. See also Vikings: Iceland; Vikings: North America; Vikings: Norway, Sweden, and Denmark. Further reading: Barrett, James H. Contact, Continuity, and Collapse: The Norse Colonization of the North Atlantic. Turnhout: Brepols, 2003; Fitzhugh, William. Vikings: The North Atlantic Saga. Washington, D.C.: Smithsonian Institution Press in association with the National Museum of Natural History, 2000; Hreinsson, Vidar. The Complete Sagas of Icelanders, Including 49 Tales. Reykjavik: Leifur Eiríksson, 1997; Ingstad, Helge. The Viking Discovery of America: The Excavation of a Norse Settlement in L’Anse aux Meadows, Newfoundland. St. John’s: Breakwater, 2000; Seaver, Kirsten. The Frozen Echo: Greenland and the Exploration of North America, ca. A.D. 1000–1500. Stanford, CA: Stanford University Press, 1996. Scott Fitzsimmons Ethiopian Empire Ethiopia’s unique and venerable identity stems from its claims to have deep roots in the ancient and biblical world.
On one hand it continued the ancient civilization represented by Axum, the trading intermediary for Rome and India on the Red Sea. On the other hand, it promoted its mythical link to King Solomon by keeping a tenacious grip on its Christian (and even Jewish) faith all through the Muslim period, and its strong church-state hegemony gave teeth to its claims. The Christian king of late Axum, El-Asham, distinguished himself in Muslim memory by giving sanctuary to followers of the prophet Muhammad who were driven out of the city of Mecca in 615. Nonetheless, conflict broke out when Muslims from North Africa moved southward in the eighth century and cut off Ethiopia from the Christian world. For the next 900 years the history of Ethiopia would be muted in its contact with the West, with the outlines of its history hinted at through folklore and archaeology. Hemmed in, the kingdom of Axum spread to the south into the highlands. Here non-Semitic peoples resided, the Agaws of Cushitic ethnicity. For reasons unknown some had been previously touched by Jewish influences and called themselves Falashas; others were Christians already or were converted gradually. The latter group merged with the Axum elites and eventually replaced the ruling house with the Zagwe dynasty. The Zagwes moved the capital 160 miles south to Roha. According to tradition they were especially devout, and one of their more venerated emperors, Lalibela (1195–1225), directed the construction of 11 churches hewn out of solid rock. These churches served as a pilgrimage destination for believers when they could not overcome Muslim refusals to let them visit the Holy Land.
The dynasty following the Zagwes in 1270 returned to the age-old tradition that their kings had descended from King Solomon. Their epic tale, Kebre Negast (Glory of the Kings), speaks of their ancient ancestor and first king, Menelik I, being born of the queen of Sheba (Saba) and Solomon. The son Menelik returned to his native land with the Ark of the Covenant, having shrewdly taken it from his father. Ethiopians to this day believe that the Ark is present in their country. Its presence guarantees a biblical covenant with their nation. The new Solomonids were proactive, cultivating their biblical image. They merged their role in government with the role of the clergy in the church, placing their sons in a monasterylike community on Mount Geshen so that they would pray and learn while they waited for their call to govern the nation. Their reputation was one of priest-kings. By the mid-14th century they advanced against the Muslims and pagans surrounding them. Their king, Amde-Siyon, was reported by one Arab historian, Al-Omari, to have a following of 100 kingdoms. Zar’a Ya’kob (1434–1468) achieved even greater status, aspiring to greatness by being crowned in ancient Axum. During his reign he developed new evangelistic campaigns to convert all subjects to the Christian faith. Church schools were opened for priests, as well as for ruling-class students and seminarians. Ethiopian arts flourished, and these writings, artifacts, and murals still exist. When Zar’a Ya’kob died, the Muslims and outsiders rose up in rebellion. Sultan Ahmad Gran conducted a devastating jihad against the monasteries and churches (1527–43). Meanwhile, for several centuries Europeans had heard reports of a legendary African emperor named Prester (“priest”) John who would help Christian armies in the Crusades against the Muslims of the Near East and Africa.
In hopes that Ethiopia was Prester John’s domain, 400 Portuguese troops were sent to put a stop to Sultan Ahmad Gran’s campaign against Ethiopia. Further reading: Burstein, Stanley M., tr. and ed. Ancient African Civilizations: Kush and Axum. Princeton, NJ: Markus Wiener Publisher, 1997; Jones, A. H. M., and Elizabeth Monroe. A History of Ethiopia. Oxford: Clarendon, 1966. Mark F. Whitters
The First Global Age 1450 to 1750
Eck, Johann Maier von (1486–1543) religious humanist and polemicist Johann Eck is best remembered for his debates with Martin Luther during the initial period of the Reformation. He was born Johann Maier in the town of Eck in southern Germany on November 13 (some say November 15), 1486, and later took the name of his town as his surname. At age 12, he entered Heidelberg University and went on to Tübingen, where he received his master’s degree. He continued his studies in both theology and classical languages. In 1508, at age 22, he was ordained a Roman Catholic priest. In 1510, at age 24, he received a doctorate in theology. After receiving his doctorate, he went to the University of Ingolstadt in southern Germany as a full professor. Eck was a humanist in the tradition of Desiderius Erasmus of Rotterdam and was well versed in Greek and Hebrew. He was interested in many theological topics, and when the monk Martin Luther posted his 95 Theses on the castle church door in Wittenberg in 1517, Luther at first received a cordial reception from his fellow humanist Eck. Luther’s expectation in posting the 95 Theses was a debate with fellow academics and church theologians, and he hoped for gradual reform of the Roman Catholic Church. As Luther’s writings became almost instantly popular, Eck came to see Luther’s theology as both wrong and dangerous for the Roman Catholic Church and decided to take action against him. In 1518, he circulated among other academics a work attacking Luther’s theology titled Obelisci, in which he accused Luther of being a follower of John Huss, a Bohemian reformer from the previous century who was burned at the stake for his views. Luther’s fellow professor Carlstadt responded to the Obelisci with a document refuting Eck and declared himself ready to meet Eck in a public disputation. This series of debates took place at the University of Leipzig, beginning in June 1519 and continuing through July.
The debate was academic in style (as would befit university professors). Eck clearly won the debate against Carlstadt, forcing Luther to defend his doctrines. While Eck and Luther were more evenly matched in intellect and debating ability, most agree that Eck won the debates. Returning to Ingolstadt, Eck attempted to get the other universities to condemn Luther’s theological writings but failed. He continued to write against Luther and in 1520 went to Rome to help with the official Catholic attack on Luther. Eck was a significant contributor to the papal document Exsurge Domine (Arise, O Lord), which condemned Luther’s teaching as heretical. Eck continued to write and campaign against Luther as well as other Protestants, particularly Ulrich Zwingli. Eck debated supporters of Zwingli in 1526 near Zürich, Switzerland. He never succeeded in his goal of bringing about a clear condemnation of Luther by the political authorities. Luther was seen in the eyes of many Germans as a champion for Germany against the influence of Rome and was simply too popular among both the nobles and the common people to be suppressed effectively. Eck is also known for his translation of the Bible into German, published in 1537. (Luther had published his own translation into German about 10 years previously.) Roman Catholics normally used the Latin Bible, but Eck as a humanist followed Erasmus and others in promoting the Bible in the vernacular, the language of the people. Eck died on February 13 (some say February 10), 1543, in Ingolstadt. See also Counter-Reformation (Catholic Reformation) in Europe; humanism in Europe. Further reading: Bainton, Roland H. Here I Stand: A Life of Martin Luther. New York: Pierce and Smith, 1950; Dillenberger, John, ed. Martin Luther: Selections from His Writings. New York: Random House, 1958; Ziegler, Donald, ed. Great Debates of the Reformation. New York: Random House, 1969. Bruce D.
Franson Edo period in Japan The Edo period in Japanese history dates from 1600 to 1867. It denotes the government of the Tokugawa Shogunate from Edo. The shogunate was officially established in 1603 with the victory of Tokugawa Ieyasu over supporters of Toyotomi Hideyori in the Battle of Sekigahara (1600). The Tokugawa shoguns ruled Japan for more than 250 years with iron fists and tight discipline. Ieyasu centralized control over the entire country with his strategic power-sharing arrangement between daimyo (feudal lords) and samurai (warriors). Daimyo were ordered to be present every second year in Edo to give an account of their assigned work. Tokugawa Ieyasu promoted economic development through foreign trade. He established trading relations with China and the Dutch East India Company (Batavia, Indonesia). While Osaka and Kyoto became emerging centers for trade and handicraft production, his capital, Edo, became the center for the supply of food, construction, and consumer items. To ensure its control, the shogunate banned all Japanese people from traveling abroad in 1633. Japan was thus isolated except for limited commercial contact with the Dutch in the port of Nagasaki. All Western books were banned in Japan. Despite Japan’s cultural isolation from the rest of the world, new indigenous art forms such as Kabuki theater and ukiyo-e, woodblock prints and paintings of the emerging urban popular culture, gained increasing popularity. Intellectually the most important state philosophy during the Edo period was Neo-Confucianism. Neo-Confucianism stressed the importance of morals, education, and hierarchical order in the government. A rigid class system also took shape during the Edo period, with samurai at the top, followed by the peasants, artisans, and merchants. Below them were outcasts (burakumin), or pariahs, those who were deemed impure. Neo-Confucianism contributed to the development of kokugaku (national learning), which stressed the study of Japanese history.
In 1720, with the lifting of the ban on Western literature, some Japanese began studying Western sciences and technologies, known as rangaku (Dutch studies). The fields that drew most interest were related to medicine, astronomy, the natural sciences, art, geography, and languages, as well as physical sciences including mechanical and electrical engineering. External pressure on Japan grew toward the end of the 18th century. The Russians tried to establish a trade link with Japan to export their goods, particularly vodka and wine. Other European nations also became interested. Finally the United States forced Japan to open to the West when Commodore Matthew Perry sailed into Edo Bay with a flotilla of warships. Meanwhile, anti-Tokugawa sentiments had been growing that demanded the restoration of imperial power. In 1867–68, the Tokugawa government collapsed, partly due to the foreign threat and partly to tensions that had been growing against a political and social system that had outlived its usefulness. The shogunate surrendered power in 1867 to Emperor Meiji, who began the Meiji Restoration in 1868. See also Bushido, Tokugawa period in Japan; Tokugawa bakuhan system, Japan. Further reading: Gordon, Andrew. A Modern History of Japan: From Tokugawa Times to the Present. New York: Oxford University Press, 2002; Jansen, Marius B. The Making of Modern Japan. Cambridge, MA: Harvard University Press, 2002; McClain, James L. Japan: A Modern History. New York: W. W. Norton, 2002. Mohammed Badrul Alam Edward VI (1537–1553) king of England Edward VI was the only son of Henry VIII, king of England, born of his marriage to his third wife, Jane Seymour, on January 28, 1537. He succeeded to the English throne at age nine by his father’s last will and by the parliamentary statute of 1543, and died unmarried at the age of 16 on July 6, 1553.
The young king inherited from his father a constitution under which he was not only the secular king but also the supreme head of the Church of England. However, the kingdom was deeply divided among factions of great nobles at court, and, in the countryside, the people were unsettled by the direction of religious policy under the new king. In spite of his lovable personality, good education, and well-respected intellectual capacity, the young king could hardly design and dictate policies on his own. Edward Seymour, the duke of Somerset and the king’s maternal uncle, ran the kingdom as lord protector in loco parentis (in the place of a parent) for the first three years. After his dismissal from the court in 1549, John Dudley, the earl of Warwick, who became duke of Northumberland in 1551, ruled the nation as chief minister under the pretense that the king had assumed full royal authority.

The two chief ministers shared a similar interest in moving the Church of England toward Protestantism. In 1547, Parliament repealed the Six Articles, enacted in 1539 to keep Catholic doctrines and practices in the Church of England. In 1549, the publication of Thomas Cranmer’s Book of Common Prayer, followed by the adoption of his Forty-two Articles in 1553, pushed the Anglican Church closer to Calvinism. In 1552, Parliament enacted the Act of Uniformity, requiring all Englishmen to attend Calvinist-styled Anglican Church services. Moreover, Parliament stopped enforcing laws against heresy, permitted priests to marry, and even confiscated the property of Catholic chantries, where for centuries local priests had been praying for souls wandering in purgatory. To the Protestants on the Continent, these policy changes made England a safe haven and an escape from persecution by the Catholic Church. In England, the Protestants welcomed the reforms, although they felt that the policies did not fully satisfy their Calvinist ideals.
The Catholics, however, were shocked by their loss of properties, privileges, and powers and were provoked into rebellions in 1549.

Neither of the two chief ministers was a master of statesmanship. They failed to curb runaway inflation and continuous devaluations of the English currency. They lacked competence in pacifying domestic unrest caused by the enclosure of land and the worsening living conditions of the rural poor. They appeared shortsighted and clumsy in maneuvering diplomacy to meet increasingly complicated challenges from other European nations. Most of all, they mismanaged the young king’s marriage, the great affair of state. The duke of Somerset invaded Scotland in 1547, intending to conclude the negotiation, begun under Henry VIII, for the marriage of Edward VI to Mary Stuart, the four-year-old daughter of King James V. Although the duke defeated the Scots at the Battle of Pinkie, the Scots betrothed the princess to Francis, the dauphin of France, in 1548. After the fall of Somerset, the duke of Northumberland appeared to be actively negotiating a marriage of Edward to Elizabeth, the daughter of the French king Henry II, in 1551. The marriage never materialized. In 1553, rumors spread around the diplomatic circle in Paris that the duke was going to arrange a marriage between Edward VI and Joanna, a daughter of Ferdinand, the brother of Charles V, the Holy Roman Emperor.

Despite his apparently busy diplomacy, the duke was secretly carrying out a plan of his own, probably with the king’s knowledge, that would enable Lady Jane Grey, his daughter-in-law and the granddaughter of Henry VIII’s sister Mary, to succeed Edward and thus disinherit Mary I, the Catholic sister of the king, who had been bastardized by her father but later restored to the succession after her brother by his last will. Following the death of Edward VI, Lady Jane Grey was proclaimed queen with the military support of her father-in-law.
However, much of the nation, though favoring a Protestant ruler, rallied against the conspiracy of the duke of Northumberland. The “reign” of Lady Jane Grey lasted only nine days, and Mary I succeeded to the throne in 1553. The dramatic turn toward Protestantism under Edward VI and the even more dramatic restoration of Catholicism under Queen Mary have been viewed by many historians as the major aspects of the so-called mid-Tudor crisis.

See also Calvin, John; Reformation, the.

Further reading: Alford, Stephen. Kingship and Politics in the Reign of Edward VI. Cambridge: Cambridge University Press, 2002; Loach, Jennifer. Edward VI. New Haven, CT: Yale University Press, 1999; Loades, David. Intrigue and Treason: The Tudor Court, 1547–1558. Upper Saddle River, NJ: Pearson/Longman, 2004; Jones, Whitney R. D. The Mid-Tudor Crisis, 1539–1563. New York: Macmillan, 1973; Jordan, Wilbur K. Edward VI: The Threshold of Power; the Dominance of the Duke of Northumberland. New South Wales, Australia: Allen & Unwin, 1970.

Wenxi Liu

Elizabeth I
(1533–1603) English monarch

Queen Elizabeth I is regarded as one of the greatest monarchs in English history, reigning as queen of England and queen of Ireland from 1558 until her death in 1603, and, in name only, styling herself queen of France. Elizabeth was born the second daughter of King Henry VIII. King Henry had his marriage to his first wife, Catherine of Aragon, annulled after she had given him only a daughter, Mary, and he had started a romance with Anne Boleyn, whom he married. She gave birth to Elizabeth on September 7, 1533. Although Anne Boleyn was pretty, intelligent, witty, clever, and a devout Protestant, her inability to give Henry VIII a son essentially caused her to be executed, although the charge leveled against her was incestuous adultery. As a result, Elizabeth, who was three when her mother was executed, grew up secluded from the court.
When Henry VIII died in 1547, he was succeeded by his sickly son Edward VI. By this time Elizabeth could speak and read not only English and Latin but also ancient Greek, French, Italian, and Spanish. She managed to keep a low profile during the reign of Edward VI and tried to do the same during the reign of her older sister, Mary, after Edward died in 1553. Mary, however, was a devout Roman Catholic, determined to rebuild the Catholic Church in England. Elizabeth, by contrast, was Protestant, but she was careful to keep herself removed from plots against her Catholic sister. The most serious of these was Wyatt’s Rebellion of 1554, which sought to depose Mary and replace her with Elizabeth. Even though she was not involved, Elizabeth was nevertheless arrested and placed in the Tower of London, making the entry by boat through “Traitors’ Gate.”

The death of Mary on November 17, 1558, led to Elizabeth’s succeeding to the throne. She was crowned on January 15, 1559, by Owen Oglethorpe, bishop of Carlisle, as the Roman Catholic archbishop of Canterbury, Reginald Pole, had died shortly after Queen Mary, and the senior Catholic bishops refused to take part in the coronation. It was to be the last coronation where the Latin service was used; all subsequent coronations except that of George I in 1714 were in English. In 1559, Queen Elizabeth enacted the Act of Uniformity, whereby all churches had to use the Book of Common Prayer. In the same year, she also signed into law the Act of Supremacy, whereby all public officials had to acknowledge, by oath, Elizabeth’s right, as sovereign, to be head of the Church of England. In these two acts, her main adviser, who would remain as such for the rest of her reign, was Sir William Cecil (later Lord Burghley).

There were many stories regarding whether Queen Elizabeth I wanted to marry. Certainly she enjoyed a long affair with Robert Dudley, earl of Leicester, whom she appointed master of the horse.
She was acutely aware of her sister’s bad move in marrying Philip II of Spain and was anxious not to marry any foreign Roman Catholic prince, although there were moves made by the French. With constant plots against Elizabeth, she faced trouble in Scotland from Mary, Queen of Scots, who was her first cousin once removed. Mary was the granddaughter of Margaret, sister of Henry VIII. Mary was, however, unpopular in Scotland, and after the death of her first husband in France, she returned to Scotland, where her second husband was murdered, most probably by the man whom she was subsequently to marry, Lord Bothwell. Mary was hounded out of Scotland, fleeing to England, where she was arrested and held in close confinement for the next 18 years.

In 1569, the Northern Rebellion, led by Thomas Howard, the fourth duke of Norfolk; Charles Neville, the sixth earl of Westmoreland; and Thomas Percy, the seventh earl of Northumberland, failed, although it led to Elizabeth’s being excommunicated by the pope. With Elizabeth allying herself to the Protestants in France and the Netherlands (United Provinces), she viewed the developments in Europe with concern, especially when Philip II of Spain became the king of Portugal after the last Portuguese king, Henry, died childless. There was also a rebellion in Ireland. When Sir Francis Walsingham, Elizabeth’s main spymaster, uncovered the Babington Plot implicating Mary, Queen of Scots, Mary was put on trial for treason, sentenced to death, and beheaded on February 8, 1587, at Fotheringhay Castle.

With Mary having willed her lands to Philip II, Elizabeth faced a major threat from the Spanish king, who was also angered at the way in which English ships attacked his treasure ships and others bringing wealth from the Americas. Francis Drake, who circumnavigated the world in 1577–80, Walter Raleigh, John Hawkins, and Martin Frobisher were among the “sea dogs” preying on the Spanish ships.
In 1588, Philip II sent a massive navy and expeditionary force known as the Spanish Armada against England. By a mixture of luck and good planning, the Spanish Armada was crushed, with a few ships managing to escape around the northern coasts of Scotland and Ireland. Queen Elizabeth I’s speech at Tilbury, rallying her soldiers and sailors, is one of the most famous in history: “I know I have the body of a weak and feeble woman, but I have the heart and stomach of a king, and a king of England too.”

The reign of Queen Elizabeth I, known as the Elizabethan age, was also a period of great prosperity in England, with the Levant Company leading to the later formation of the East India Company. Many books were published, and many playwrights, notably William Shakespeare and Christopher Marlowe, wrote large numbers of plays. During the 1590s, Elizabeth continued to face threats to her rule in Ireland, and in 1601 a revolt was mounted by Robert Dudley’s stepson, Robert Devereux, the earl of Essex, who had emerged as Elizabeth’s new favorite. Essex was executed on February 25, 1601. Elizabeth gradually came to see that her heir would be King James VI of Scotland, and when she died on March 24, 1603, James succeeded her.

Further reading: Haigh, Christopher. Elizabeth I. New York: Longman, 1998; Jenkins, Elizabeth. Elizabeth and Leicester. London: Phoenix, 2002; Ridley, Jasper. Elizabeth I. London: Constable, 1987; Somerset, Anne. Elizabeth I. London: Weidenfeld & Nicolson, 1991; Weir, Alison. Elizabeth the Queen. London: Jonathan Cape, 1998.

Justin Corfield

encomienda in Spanish America

Encomienda ranked among the most important institutions of early colonial Spanish America. Described as a kind of transitional device between the violence of conquest and the formation of stable settler societies, encomienda has been the topic of enormous research and debate among scholars.
Rooted in the verb encomendar (“to entrust,” “to commend”), an encomienda was a grant of Indian labor by the Crown to a specific individual. Holders of such grants, called encomenderos, were said to hold Indians in encomienda or “in trust.” The institution and practice of encomienda originated during the Spanish Christian reconquest of Iberia from the Moors (718–1492 c.e.), creating an institutional template that was quickly transferred to the New World after 1492. Unlike its Iberian predecessor, encomienda in the Americas did not include land grants, except occasionally in marginal areas. Instead, it was primarily a mechanism of labor control that also permitted the Crown to maintain the legal fiction that Indians held in encomienda were technically free, were not chattel, and could not be bought or sold. It also served as an effective way to reward conquistadores and others in service to the Crown, including priests and bureaucrats.

The term encomienda was often used interchangeably with repartimiento (“distribution” or “allotment”) during the early years of conquest and colonization, though the two were legally distinct. The later practice of compelling subject Indian communities to purchase Spanish goods, common in the 17th and 18th centuries, was also called repartimiento. This later forced-sale repartimiento had little relation to the institution of encomienda.

The first substantial effort to codify encomienda in the New World was the Laws of Burgos (1512–13), which required encomenderos to “civilize,” “Christianize,” protect, and treat humanely the Indians held in encomienda. A vast corpus of subsequent laws, proclamations, and edicts further refined and limited the institution. The practical effect of these laws was minimal.
In practice encomienda was akin to slavery, especially during the early years of the conquests. Abundant evidence exists of the abuses and mistreatment inflicted upon encomienda Indians, who were bought and sold, worked to death, and in other ways treated for all practical purposes as slaves. These abundant abuses prompted some Spaniards to condemn the institution as unchristian, most prominently the priest Bartolomé de Las Casas, beginning in 1514. In response to this simmering debate, in 1520 Holy Roman Emperor Charles V decreed that the institution of encomienda was to be abolished. In the Americas the decree had little practical effect, as most encomenderos and officials ignored it.

The Crown, concerned that encomenderos not become a permanent aristocracy, continued its efforts to impose strict limits on the institution, culminating in the so-called New Laws of 1542–43, which from the perspective of encomenderos were far more draconian than the Laws of Burgos issued 30 years earlier. The major features of the New Laws included provisions preventing the inheritance of encomiendas; the forbidding of new grants; the requirement that royal officers and ecclesiastics give up their encomiendas; and prohibitions against Indian enslavement for any reason. The New Laws provoked an outcry across the colonies, especially in Peru, where factions of colonists rose in rebellion against them. In 1545–46, three years after they were issued, the New Laws were repealed as unenforceable.

Encomienda nevertheless died a slow death over the next half-century. The principal cause of its decline was not royal decree but Indian depopulation. Grants of Indian labor became moot when there were so few Indians left to grant. Encomienda lasted less than a century in most areas, enduring into the late colonial period only in peripheral regions such as Yucatán.
The transition from encomienda to hacienda (private landownership) was neither direct nor clear-cut, and it constitutes another major arena of scholarly research and debate.

See also voyages of discovery; Yucatán, conquest of the.

Further reading: Gibson, Charles. Spain in America. New York: Harper & Row, 1966; Hanke, Lewis. The Spanish Struggle for Justice in the Conquest of America. Philadelphia: University of Pennsylvania Press, 1949; Simpson, Lesley Byrd. The Encomienda in New Spain: The Beginning of Spanish Mexico. Berkeley: University of California Press, 1950.

Michael J. Schroeder

epidemics in the Americas

The European encounter with the Americas after 1491 set in motion a demographic catastrophe among indigenous peoples across the hemisphere, caused above all by epidemic and pandemic diseases against which native peoples had no biological immunities. These diseases were a crucial component of the larger Columbian exchange between the Old World and the New. The precise characteristics and magnitude of this catastrophe remain a matter of scholarly debate. Population estimates for the Americas on the eve of the encounter vary widely. The most reputable estimates fall between 40 and 100 million for the hemisphere as a whole, a population reduced by an estimated overall average of 75 to 95 percent after the first 150 years of contact, with tremendous variations in time and space.

Colonial Latin America and the circum-Caribbean

Central Mexico is the most intensively studied region regarding the impact of European diseases on indigenous demography. Where in 1520 there lived an estimated 25 million native peoples, in 1620 there lived some 730,000, a decline of 97 percent, attributed overwhelmingly to disease. Similar catastrophes unfolded across the hemisphere. The most precipitous decline is thought to have occurred in the Caribbean, where the precontact indigenous population of several millions had been all but exterminated by the 1550s.
Such diseases spread rapidly in all directions, preceding and accompanying military incursions, weakening indigenous polities, and facilitating the process of conquest and colonization in the Caribbean, Mexico, the Andes, Brazil, New England, and beyond. This process of demographic catastrophe, an unintended consequence of the European encounter with the Western Hemisphere, affected every aspect of the subsequent history of the Americas. In the English-speaking world, the predominant view for centuries regarding Indian depopulation in postconquest Spanish America centered on the “Black Legend” of Spanish atrocities, a view most forcefully articulated and propagated by the Spanish bishop Bartolomé de Las Casas in the 1500s. By the early 2000s, a scholarly consensus had emerged that the principal cause of indigenous population declines was in fact pandemic and epidemic diseases. The exact sequence and timing varied greatly from place to place. Every locale had its unique history of demographic decline, with periodic outbreaks of various pathogens: smallpox, measles, typhus, influenza, yellow fever, diphtheria, bubonic plague, malaria, and others. Far and away the deadliest killer was smallpox, the first documented New World outbreak occurring in the Caribbean in 1518. Spanish friars, reporting to King Charles V in January 1519, estimated that the disease had already killed nearly one-third of Hispaniola’s Indians and had spread to Puerto Rico. In these earliest outbreaks, influenza probably accompanied the spread of smallpox. By the early 1520s, three principal disease vectors, mainly of smallpox and influenza, were spreading rapidly through indigenous populations. One had entered through northern South America near the junction with the Central American isthmus, and by the late 1520s had spread far into the interior along the northern Andes. 
The second had entered along the gulf coast of Mexico, from Yucatán to present-day Veracruz, and by mid-1521 was decimating the population of the Aztec capital of Tenochtitlán. By the late 1520s, this second vector had bifurcated, spreading south into Central America and north into western and northern Mexico, where it was poised to sweep farther north. The third disease vector was launched with the first exploratory expeditions along the Pacific coast of Central America and Peru, beginning in the early 1520s. By the late 1520s, this third vector had also bifurcated, spreading north through Nicaragua and Guatemala, and in less than a decade racing 3,000 miles south down the Andes, reaching as far as southern Bolivia. A fourth set of vectors began spreading inland from the Brazilian coast from the beginning of permanent settlements in the early 1550s. By the late 1550s and early 1560s, the epidemics had spread along much of the Brazilian coast and were sweeping into the interior. Widespread death from disease weakened indigenous polities, engendering profound cultural crises and facilitating processes of conquest and colonization. The most dramatic and extensively documented such instance occurred in Tenochtitlán during the conquest of Mexico, where a major smallpox outbreak coincided with the Spanish invaders’ siege of the island city. From May to August 1521, as many as 100,000 of the city’s inhabitants succumbed to the disease. The smallpox virus typically enters the victim’s respiratory tract, where it incubates for eight to 10 days, followed by fever and general malaise, then the eruptions of papules, then vesicles, and finally large weeping pustules covering the entire body, followed soon after by death. Scholars agree that this smallpox epidemic, occurring just as their empire and capital city were under assault by the Spanish and their Indian allies, fatally weakened the Aztec capacity to mount an effective resistance. 
A similar if distinctive dynamic is thought to have unfolded before and during the conquest of Peru. Again, the timing of the Spanish invasion could not have been more propitious. Less than a decade before the incursion of Francisco Pizarro in 1532, the vast Inca Empire was in relative tranquility under a unified ruling house. Around 1525–28, at the height of the Inca Huayna-Capac’s northern campaign against recalcitrant indigenous polities around Quito, an unknown pestilence, probably smallpox, ravaged the northern zones. During this epidemic, the Inca was struck by fever and died. The Spanish chronicler Pedro de Cieza de León recorded that the first outbreak of the disease around Quito killed more than 200,000 people. Other chroniclers offered similar descriptions of a wave of pestilence in the northern districts during this same period. Huayna-Capac’s death set in motion a crisis of dynastic succession and civil war that Pizarro deftly exploited to the Spaniards’ advantage. Contributing to the spread of the disease was the Andean tradition of venerating mummified corpses, as thousands of indigenous Andeans came into contact with the dead Inca and those who had ritually prepared his body.

During this early period, more politically decentralized zones, including the Central American isthmus, the Maya regions, northern South America, and the Brazilian coast and hinterlands, were also severely stricken, facilitating Spanish and Portuguese incursions less by exacerbating elite divisions or shattering cosmologies than by the sheer magnitude of the deaths. Almost everywhere that Europeans intruded, indigenous polities, societies, and cultures became profoundly weakened by maladies with no precedent and no cure, as emphasized repeatedly in scores of locales by a diversity of Spanish, mestizo, and indigenous chroniclers.

The second major pandemic to sweep large parts of the Americas was measles, beginning in the early 1530s.
From the Caribbean islands the pathogen quickly spread to Mesoamerica, South America, and Florida, causing mortality rates estimated at 25–30 percent. Outbreaks of bubonic and pneumonic plague began erupting around the same time. In the mid-1540s came another wave of epidemics across large parts of Mesoamerica and the Andes. The precise bacterial or viral agents responsible for the “great sickness” that swept Central Mexico in the 1540s remain the subject of debate, though the evidence suggests typhus, pulmonary plague, mumps, dysentery, or combinations of these. There is little disagreement that the death rates thus generated were extremely high, as upward of a million natives in New Spain succumbed to the collection of epidemic diseases in the 1540s. By this time, bubonic plague, typhus, and other pathogens had spread to the Pueblo Indians in the Southwest and to Florida.

Epidemic diseases swept inland from Florida beginning in the 1520s, and perhaps earlier. The odyssey of Álvar Núñez Cabeza de Vaca and his small party of shipwreck survivors across the U.S. South and Southwest (1528–37) is thought to have introduced numerous diseases to the native inhabitants. In particular, the expedition of Hernando De Soto from Florida through the North American Southeast to the Mississippi River Valley (1538–42) is believed to have wreaked tremendous ecological damage, introducing previously unknown pathogens across large parts of the interior. By the time of sustained European encounters with these regions, beginning in the 1680s, the dense populations and many towns and settlements described by De Soto more than a century before had vanished, leaving behind a landscape largely denuded of its human inhabitants. Local and regional studies show endless variations on these more general themes, with wave after wave of epidemic diseases wreaking demographic havoc for centuries after the initial encounter.
In Brazil, the creation of numerous disease vectors along the coast from the 1550s to the 1650s, the diseases often carried by African slaves, generated repeated epidemics of smallpox, typhus, and other pathogens that dramatically reduced populations in the interior. The disease chronology of northwestern Mexico in the first half of the 17th century illustrates the more general pattern of repeated outbreaks, which in this case were recorded in 1601–02, 1606–07, 1612–15, 1616–17, 1619–20, 1623–25, 1636–41, 1645–47, and 1652–53. In his classic study of the postconquest Valley of Mexico, Charles Gibson recorded major disease outbreaks every few years, with 50 major epidemics from 1521 to 1810, an average of a major epidemic every six years.

Colonial North America

The Pilgrims in Massachusetts and the first Europeans to settle on the coast of Maryland and Virginia found a nearly empty country. Almost nine-tenths of the former Native American population had been wiped out by smallpox in an epidemic of 1618–19. John Winthrop, the leader of colonial Massachusetts, commented in 1634: “For the natives, they are neere all dead of the small Poxe, so as the Lord hathe cleared our title to what we possess.” This Puritan leader and others felt that this disease was God’s plan to make land available for Europeans by eliminating the Native Americans who had previously occupied it.

Smallpox followed the priests, explorers, traders, soldiers, and settlers from Europe into the heartland of the North American continent. The Hurons were affected in 1640, the Iroquois in 1662.

In British North America, smallpox indirectly promoted the growth of institutions of higher learning. Wealthy colonial families sent their sons to England to educate them. Many of these young men, born in North America, did not have the immunity to smallpox their fellow students in England possessed.
Enough of these young men from the colonies contracted and died from smallpox while being educated in Europe that colonial North Americans founded their own colleges, including Harvard, William & Mary, and Yale.

In some cases, smallpox was spread to North American indigenous peoples intentionally, as a form of germ warfare. During the American Revolution, American troops were victims of the disease during a campaign in Quebec. George Washington successfully had the susceptible American troops inoculated. British troops, who had grown up in England and Ireland, had immunity to the disease. By the time George Vancouver explored the Pacific coasts of what would become Washington State and the Province of British Columbia, he found entire villages of Native Americans in ruins, deserted, with skeletons lying all around. By the 20th century, smallpox had wiped out as much as 90 percent of the preconquest Native American population.

In sum, the impact of hitherto unknown European diseases on indigenous societies unleashed a demographic cataclysm across the Western Hemisphere, representing one of the most important chapters in the history of the postconquest Americas, whose characteristics and impacts scholars are still grappling to comprehend.

Further reading: Alchon, Suzanne Austin. Native Society and Disease in Colonial Ecuador. Cambridge: Cambridge University Press, 1991; Cook, Noble David. Born to Die: Disease and New World Conquest, 1492–1650. Cambridge: Cambridge University Press, 1998; Cook, Noble David, and W. George Lovell, eds. Secret Judgments of God: Old World Disease in Colonial Spanish America. Norman: University of Oklahoma Press, 1992; Hopkins, Donald R. The Greatest Killer: Smallpox in History. Chicago: University of Chicago Press, 2002; Tucker, Jonathan B. Scourge: The Once and Future Threat of Smallpox. New York: Atlantic Monthly Press, 2001.

Nancy Pippen Eckerman and Michael J.
Schroeder

Erasmus of Rotterdam
(1466–1536) Renaissance humanist and writer

Desiderius Erasmus was an internationally acclaimed celebrity and the greatest European scholar of the 16th century. Despite the polemics of the Protestant Reformation, he could make friends among kings and lords in every land and on all sides of the central questions of his day, and this trait led him to reside in Holland, France, England, Switzerland, and Italy. His pursuit of Christian humanism and his intellectual curiosity led to a lifetime of travel and writing, seeking to promote the values of the Italian Renaissance in northern Europe.

Erasmus was born in Rotterdam on October 27, 1466, an illegitimate child. His father was Roger Gerard, who later became a priest, and his mother was Margaret, the daughter of a physician. One of the major Catholic renewal groups of the Low Countries, the Brethren of the Common Life, took him in and no doubt instilled in him an unpretentious and broadminded orientation toward spirituality. For the rest of his life, Erasmus never was enticed by the outward show of formal religion, whether it came from Catholic pomp or Protestant sectarianism. For Erasmus, virtue came not from religious ritual but from each person’s desire and ability to emulate the life of Christ. He never held an office in the church, even though he was offered the cardinal’s hat by the pope; he also rejected the pandemonium caused by the likes of Martin Luther, Henry VIII, and Ulrich Zwingli.

At first, he spent time in a religious order, though he probably chafed at requirements that he remain in a monastery under a superior. What attracted him were the disciplined study and fraternal companionship a monastic life afforded. He found an excuse to leave when he took up a position with a local bishop and later obtained permission to study theology in Paris. It was not theology that interested him as much as the life of
intellectual stimulation and the possibilities of travel. After leaving the monastery, he never looked back.

In the university he gravitated toward the literature and humanism of the Renaissance more than toward the theology and philosophy of Scholasticism. He made friends with Italian scholars in Paris, who kept him informed about the intellectual currents of the Renaissance. His skill at Latin and his need for income led him into contact with English students, who in turn invited him to England. At the age of 33, he accepted their invitation and moved there. The English intellectuals he met included John Colet, Sir Thomas More, John Fisher, and Archbishop Warham, men of the “New Learning” school who were interested in reviving the Greek and Latin classics instead of the hidebound studies of medieval Europe. Erasmus began to realize that such a philological methodology could also be applied to the church fathers and the scriptures, the literary pillars of his traditional Catholic faith. His object was not to undermine the established religious doctrines of his time, but simply to make the writings more available and understandable to the broader public.

Erasmus discovered the advantages of travel and friends in high positions. Whereas other scholars had to worry about financial support and institutional approval, Erasmus attracted the favor of benefactors in many countries, especially those who were outside the church hierarchy. This new life afforded him independence of thought, though it meant that he never lived in one place more than eight years. His celebrity status as an intellectual can only be compared to that of Herodotus among the ancient Greek and Persian officials or Voltaire among the Enlightenment thinkers. He was a trendsetter in bringing the ideas of the Renaissance to northern Europe. His book of commonplace wisdom, Adagia, propelled him into the limelight and was published more than 12 times between 1500 and 1535 in several languages.
On the topic of religion he wrote Enchiridion militis Christiani (Handbook of a Christian Knight), a book that found its way throughout Europe. This book attempted to make Christianity practical by teaching how to choose a virtuous life. For Erasmus, this choice did not come through rite or ceremony; nor was it mental speculation or Scholastic dialectic, but it was learned through practice and imitation of Christ. However, Christ was Savior as well as supreme teacher, and only Christ and conversion of heart could make Christian life possible. Enchiridion stays within Catholic bounds by stressing the need for the external church as a peaceful and orderly environment where such learning about Christ can occur. Erasmus’s most lasting contribution lies in the field of biblical studies and patristics. He can only be compared to Origen and Jerome, Christian scholars of the third and fourth centuries. He compiled the manuscripts that led to five new editions of the New Testament. His historical-critical methodology for studying the Bible laid the groundwork for a new generation of interpretation and modern thinkers. He edited and commented on many writings of the church fathers, including Jerome (1516), Augustine (1529), John Chrysostom (1530), his favorite, Origen (1536), and also Athanasius and Ambrose. Erasmus died a Catholic in Basel, a Protestant city, without Catholic last rites and was buried under a cathedral that had been converted to a Protestant church. Many of his writings were put on the Index of Forbidden Books by the Council of Trent as supportive of the Protestant critique of the Catholic Church. Protestants maintained that they brought into the light what Erasmus had already hinted at in the dark. Yet Erasmus never refused to submit to the Catholic Church. He feared that the Protestants’ invectives against the church destroyed the irenic atmosphere so necessary for learning and dialogue.
He also believed that the church was, in spite of its flaws, the necessary environment where virtue could be lived out. He stood in the lonely middle ground, saying that the Apostles’ Creed held both groups together. As early as 1516, his opposition to Luther was known. Finally, in 1524 he wrote De libero arbitrio (On free choice) against Luther’s ideas, arguing that the consensus of the church was authoritative for biblical interpretations. By the end of his life, Erasmus had alienated many erstwhile Protestant friends and allies, including Luther, Zwingli, and Henry VIII. The principles that animated his life and inspired a whole generation of thinkers were his respect for conscience and the rule of reason over coercion and military might. Both of these principles proved to be impossible to live out in the politics of the Reformation. He saw his best friend in England, Thomas More, executed by Henry VIII for these humanist ideals the year before his own death. See also Bible traditions; Bible translations; humanism in Europe. Further reading: Erasmus, Desiderius. The Praise of Folly. New Haven and London: Yale University Press, 2003; Huizinga, Johan. Erasmus and the Age of Reformation. Princeton, NJ: Princeton University Press, 1984. Mark F. Whitters Ewuare the Great (1440–1473) king of Benin Oba Ewuare the Great of West Africa was one of the most celebrated kings of Benin. However, since most of the history of Benin during this period was oral, it is sometimes difficult to separate legend from reality in the accounts of this powerful and charismatic monarch. Known as the first of the warrior kings of West Africa, Ewuare belonged to a group of 15th- and 16th-century kings of Ife origin who transformed Benin City from a group of small villages into a thriving metropolis. Ewuare’s three brothers, Egbeka, Orobiru, and Uwaifiokun, occupied the throne of Benin for 70 years. After succeeding Uwaifiokun, Ewuare continued to reign for 33 years.
As oba, Ewuare designated his eldest son as the heir-apparent, discontinuing the practice of collateral transmission to the throne. Ewuare subsequently bestowed the title of Ihama upon his family. Ewuare is credited with conquering at least 201 surrounding towns and villages during his reign. By the time his new subjects had been resettled, Ewuare’s kingdom had grown from a small group of villages to a substantial kingdom. To solidify his position, Ewuare built a palace and fortified the city’s defenses. He also proceeded to rid the Beninese government of hereditary tribal heads. In their place, Ewuare created a patrimonial bureaucracy in which freemen served as military and administrative chiefs. Ewuare did not strip these chiefs of all powers, however, but divided Benin into departments and placed each department under the control of a group of chiefs. Ewuare also persuaded the tribal chiefs to allow their firstborn sons to serve him in the palace. Together, Ewuare and his son and successor Oba Ozolua were responsible for establishing a viable foreign trade in Benin. Consequently, by the time the Portuguese arrived in Benin in 1486, trade was already well established. After the arrival of the Europeans, Benin became the entry point for arms and other European goods designated for transport to points around Africa. Oba Ewuare was a monarch of wide interests and was responsible for establishing a number of religious and cultural rituals. He was also widely known for his celebration of Beninese arts. During this period, art in Benin was practiced chiefly by hereditary craftsmen who lived in the palace. To honor members of the royal family, Ewuare had brass smiths cast the heads of the royal family, both past and present, on a variety of objects. According to Beninese lore, Ewuare preferred the likenesses of himself created by brass smiths to those created in other forms because he believed he looked younger in the brass casts. 
It was common practice at the time to depict all kings as young men rather than the way they looked later in life. The technique used by the brass smiths of Benin combined European techniques with those handed down among the Ife people. Ewuare also had a more practical side and was responsible for massive architectural innovations and extensive town planning in Benin. The monarch was a great lover of ceremony, and he established the practice of holding annual ceremonies in which the participants wore elaborate costumes and used ritualistic paraphernalia to depict various religious and cultural elements. Ewuare commanded the Beninese people to wear distinctive facial markings that identified them according to their status and barred all foreigners from the palace. Among the Beninese people, Ewuare was highly esteemed for his introduction of coral beads, which became an essential part of royal symbolism. The Beninese people also greatly admired Ewuare for his discovery of red flannel, which he had probably received from a source with European connections. Under Ewuare, ivory and woodcarvings became common in Beninese works of art. Somewhat surprisingly, Ewuare was also a noted herbologist. Dedicated to building up the treasures of Benin, Ewuare founded the Iwebo Palace Association, which was given the responsibility of caring for all royal regalia. However, during Ewuare’s reign, the royal storehouses were twice burned down, and an untold number of priceless relics were destroyed. Further relics were lost when the royal storehouses were looted in the early 18th century under the rule of Oba Ewuakpe and when they were again burned during the reign of Oba Osemwede in the early 19th century. In Benin, the Emeru were designated as caretakers of all iru, the sacred brass vessels used in Beninese rituals.
The more contemporary irus were replicas of those used during Ewuare’s time, when it was believed that the vessels had mystical powers that allowed spirits who resided in the vessels to affirm the prayers of the faithful in audible voices. These vessels were placed on the Ebo n’Edo shrine in Ewuare’s palace. According to the legend of the iru, after Ewuare died, a successor broke the pots in an attempt to discover what was inside. Because the spirits supposedly fled from the broken pots, new vessels were cast. Thereafter, the royal family was required to mimic spirit voices during ceremonies. Another legend has it that Ewuare predicted that if a king named Idova ascended to the throne of Benin, the country would experience a major change in government. He declared that he did not know whether the change would be for good or ill. When Oba Ewuakpe became king in 1700, it was noted that his given name was Idova. Whether Oba Ewuare had had some premonition of what would happen during Ewuakpe’s reign, or whether events were a result of his being expected to institute major changes, Oba Ewuakpe responded to political conflicts by initiating a number of reforms in Benin. However, the monarch later fell out of favor with the people. When his mother died, he ordered that human sacrifices be made in her honor. Outraged, the people rebelled and thereafter boycotted the palace. See also Africa, Portuguese in. Further reading: Ben-Amos, Paula Girshick. Art, Innovation, and Politics in Eighteenth Century Benin. Bloomington and Indianapolis: Indiana University Press, 1999; Iliffe, John. Africans: The History of a Continent. New York: Cambridge University Press, 1995; Morton-Williams, Peter. Benin Studies. New York: Oxford University Press, 1973. Elizabeth Purdy exclusion laws in Japan In 1543, the first Portuguese ship arrived in southern Japan, bringing a cargo that included firearms.
For the next hundred years, Japanese-Western trade flourished and Christian missionaries converted many Japanese to Catholicism. However, in 1636 strict isolation laws were enforced: foreigners were expelled, Japanese Christians were compelled to renounce their religion on pain of death, and Japanese were forbidden to leave the country. These strict exclusion laws would last until 1854. The Japanese had known about gunpowder since the 13th century. However, in the midst of extensive civil wars in the 16th century, Japanese feudal lords were immediately impressed by the accurately firing arquebuses and cannons the Portuguese traders introduced and quickly began to buy and then make them in Japan. These new weapons changed the nature of warfare and led to the building of heavily fortified castles. Catholic missionaries followed merchants. Francis Xavier, associate of Ignatius Loyola, founder of the Society of Jesus, arrived in Japan in 1549. Franciscan and Dominican missionaries soon followed. Many feudal lords, anxious to increase trade with European merchants, and seeing the deference Portuguese and Spanish merchants showed to priests, welcomed missionaries to their domains; some converted and even ordered their subjects to convert as well. Oda Nobunaga, the most powerful military leader of Japan, became a patron of the Jesuits. The number of converts increased dramatically, to 150,000 and two hundred churches by 1582 and perhaps to as many as 500,000 by 1615. The very success of the Catholic missionaries created a backlash against Christians. Some opponents were Buddhists. Significantly, political leaders began to fear the political loyalty of their Christian subjects. Thus Oda’s successor Toyotomi Hideyoshi (1536–98) banned Christianity in 1587 but did not strictly enforce his edict until 10 years later.
It was Hideyoshi’s successor Tokugawa Ieyasu (1542–1616) who seriously persecuted Christians, beginning in 1612 when, as shogun, he ordered all Japanese converts to renounce Christianity on pain of death and then to be registered in a Buddhist temple. He also executed some missionaries and expelled all others. His policies were ruthlessly carried out, with military force where there were large Christian communities. Tens of thousands were killed and only isolated clandestine communities remained. The Tokugawa Bakufu, or shogunate, expanded the ban on missionaries to include all Spanish, Portuguese, and English traders also. Only the Dutch among Europeans were allowed to send two ships annually to Nagasaki under strict supervision. Chinese ships were also allowed under license. In 1636, another law was promulgated that prohibited all Japanese from leaving Japan and members of the sizable Japanese communities in Southeast Asia from returning. Shipbuilding was limited to small coastal vessels to prevent Japanese from secretly trading with foreigners. Fear and insecurity motivated the newly established Tokugawa Shogunate (1603–1868) to ban Christianity and foreign contacts. Seclusion became Japan’s national policy. See also Christian century in Japan; Jesuits in Asia. Further reading: Boxer, C. R. The Christian Century in Japan, 1549–1650. Berkeley: University of California Press, 1967; Totman, Conrad. Politics in the Tokugawa Bakufu, 1600–1843. Cambridge, MA: Harvard University Press, 1967; Totman, Conrad D. Early Modern Japan. Berkeley: University of California Press, 1993. Jiu-Hwa Lo Upshur expulsion of the Jews from Spain (1492) and Portugal (1497) In a year most remembered for Christopher Columbus’s discovery of the New World, the Spanish monarchs were also making history at home.
In March 1492, Ferdinand V and Isabella I of Spain issued the Alhambra Decree, which ordered the expulsion of all Jewish men and women from the newly united kingdom. Also known as the Edict of Expulsion, this decree gave the Jews four months to depart and forbade their return to Spain. Those who did not comply with the decree would be stripped of all belongings and put to death. Traditional accounts of the expulsion contend that as many as 400,000 Jews fled to North Africa and Turkey in response to this decree. Recent scholarship has challenged this account and reduces the number of refugees to a total of 30,000–40,000, with only 10,000 fleeing to Turkey from the western provinces. The remaining refugees from Spain fled overland to neighboring Portugal, where tensions were already growing between the native Christian and Jewish populations. The addition of 20,000 Jewish refugees led to increased persecution, and just four years after the Alhambra Decree was issued in Spain, King Manuel of Portugal followed suit by ordering the expulsion of all Jews residing within the borders of his kingdom. Hoping to avoid the logistical problems of the Spanish expulsion, Manuel gave the Jewish community 10 months to prepare, moving the actual date of expulsion to October 1497. In the interim, many of the Jews chose to convert to Christianity to avoid the treacherous journey across the Mediterranean. The Spanish refugees were also able to return to their homeland as “new Christians” if they were willing to convert. The small number of Jews unwilling to make this sacrifice had no choice but to travel across the Mediterranean to North Africa. It is most likely these Jews, expelled from Portugal and not Spain, made up the first population of Sephardic Jews in North Africa. The expulsion of the Jews from Spain has been a subject of great historical interest, and numerous scholars have weighed in with varying accounts of the causes, processes, and consequences of this event. 
All agree that the expulsion was the inevitable result of the Spanish Inquisition, instituted by Ferdinand and Isabella in 1478. Traditional theories hold that the Inquisition was created to combat the growing number of Jewish converts (conversos), who were thought to be practicing Jewish rituals in secret. According to this approach, the expulsion was an attempt to rid the kingdom of genuine Jews, who were assumed to be a bad influence on the conversos. Revisionist historians have challenged this account of the expulsion in one of two ways. Some have argued that the conversos were genuine converts to Christianity and that the Inquisition against them was instituted to undermine their economic and political success. According to this theory, the expulsion was an unintended consequence of an Inquisition that had gained its own inertia among the populace. Others have argued that the Inquisition was largely a political institution instituted to secure the religious unity of the newly united Spanish kingdom. On this account, the expulsion was actually less about removing the Jews from the kingdom than it was about forcing them to convert to Christianity by default. The historical events leading to the expulsion of the Jews from Portugal are less enigmatic. As was mentioned, tensions had been rising between Jews and Christians within Portugal for some time. In fact, King João II was considering an expulsion as early as 1493. After João’s death in 1495, the situation of the Jews improved for a brief period under the reign of Manuel. Yet, all hope was crushed when the Spanish Crown interfered, pressuring Manuel to expel his own Jews for the sake of greater Christendom. Ferdinand and Isabella were able to force this second expulsion because Manuel was intent upon marrying their daughter, Isabella. This marriage was an important political move for Portugal, and Ferdinand and Isabella made the expulsion of Portuguese Jews a necessary condition of the marriage contract. 
Thus, despite his unease over the expulsion, Manuel issued his decree in 1496. Further reading: Beinart, Haim. The Expulsion of the Jews from Spain. Oxford: Littman Library of Jewish Civilization, 2002; Edwards, John. The Spanish Inquisition. Charleston: Tempus Publishing, 1999; Kamen, Henry. “The Mediterranean and the Expulsion of Spanish Jews in 1492.” Past and Present (v. 199, May 1988); Peters, Edward. “Jewish History and Gentile Memory: The Expulsion of 1492.” Jewish History 9 (1995); Roth, Norman. Conversos, Inquisition, and the Expulsion of the Jews from Spain. Madison: University of Wisconsin Press, 1995. Elizabeth A. Barre
Age of Revolution and Empire 1750 to 1900
Eastern Question The Eastern Question, or what was to become of the declining Ottoman Empire, was one of the major diplomatic issues of the 19th century. The major European powers, Britain, France, the Austro-Hungarian Empire, and Russia, had differing and sometimes conflicting attitudes about what to do with the “Sick Man of Europe.” Each European power had territorial ambitions over parts of the Ottoman holdings and sought to further those ambitions through a variety of diplomatic and military means. The Eastern Question had similarities with the diplomatic maneuverings, known as the Great Game, by which Britain sought to prevent Russian expansion into Afghanistan and other Asian territories, and may be divided into several different phases. In phase one, 1702–1820, Russia and the Ottomans engaged in a number of wars over control of territories around the Black Sea. The long-term Russian strategy was to gain warm-water ports and entry into the Mediterranean through the Dardanelles. Generally, the British, and to a lesser extent the French, sought to thwart Russian expansion into the Mediterranean and supported the Ottomans diplomatically. The Austro-Hungarians, who wanted to expand into Ottoman territory in the Balkans and feared growing Russian strength, also sought to halt Russian power. But in general, during the first phase of the Eastern Question, Russia won its wars against the Ottoman Empire and steadily extended its control around the Black Sea. The Napoleonic conquest of Egypt and the brief French occupation in 1798 highlighted the importance of the region and the growing weakness of the Ottomans. Phase two of the Eastern Question was a period when nationalist sentiments arose within the Ottoman Empire. This culminated in the Greek War of Independence, 1821–33, or phase three, when the Greeks, with the sympathy and support of France and Britain, rose up in armed rebellion against Ottoman domination.
The Greek War culminated in the independence of Greece under the Treaty of Adrianople and the Protocols of London in 1830. The European powers guaranteed Greek independence in 1832. Although the British and French supported the Ottomans in their struggles against Russian encroachment in the Black Sea and the Balkans during the 19th century, both powers took territories away from the Ottomans in North Africa and along the Arabian Peninsula and Persian Gulf. Phase four of the Eastern Question culminated in the Crimean War, when the British and French, with small support from Piedmont-Sardinia, joined forces with the Ottomans against Russia. Although Russia gained some territory, the support of the European powers gave the “Sick Man of Europe” a new lease on life and forced a series of domestic reforms within the empire. During the Congress of Berlin in phase five, the British agreed to defend the Ottoman Empire against Russian ambitions to partition it, but at the same time assented to the French takeover of Tunisia in North Africa. France had already taken the former Ottoman territory of Algeria in 1830. Britain gained the important island of Cyprus in the eastern Mediterranean, while Austria-Hungary expanded into the Balkans. As Germany emerged as a major European power, it, too, entered into the diplomatic maneuverings involving the Ottoman Empire. Kaiser Wilhelm II visited Istanbul in 1889 and again in 1898 and announced his support for the aging empire. As a result, ties between the German and Ottoman military increased, and Germany began to invest in the Ottoman Empire. The Berlin to Baghdad railway was the cornerstone of German financial interests. The growing German influence within Ottoman territories raised British opposition. The British were particularly opposed to the possibility that the Berlin to Baghdad Railway might extend the German presence into the Persian Gulf and eastern Asia, where it would compete with the British.
The Germans managed to gain a concession for the railway in 1903 and hoped that it would link up with rail lines in the eastern Mediterranean. Parts of the railway were constructed through Anatolia, but the railway was never completed to Baghdad. The Ottoman government was not a passive participant in the Eastern Question but took an active role in playing off the conflicting diplomatic policies of the European powers to prevent the dissolution of its empire. The territorial and economic rivalries of the European nations enabled the Ottoman Empire to prolong its existence while at the same time it continued to lose territories in the Balkans, North Africa, Egypt, the Sudan, and the Arabian Peninsula to the imperial European powers. See also Algeria under French rule; Anglo-Russian rivalry; Tanzimat, Ottoman Empire and. Further reading: Anderson, Matthew S. The Eastern Question, 1774–1923. New York: St. Martin’s, 1966; Marriott, John A. R. The Eastern Question: An Historical Study in European Diplomacy. Oxford: Clarendon Press, 1917, 1956; Moon, Parker Thomas. Imperialism and World Politics. New York: Macmillan, 1926, 1967. Janice J. Terry Eddy, Mary Baker (1821–1910), and the Christian Science Church The Church of Christ, Scientist (official name) was established in 1879. However, the notion of Christian Science was cultivated by Mary Baker Eddy after her instantaneous recovery in 1866 from severe injuries sustained in an accident, in her words, “which neither medicine nor surgery could reach.” What did reach her serious condition were the healing words of Jesus, which became the foundation of her method for achieving authentic health. Born in a small New Hampshire village in 1821 to Congregational parents who were devoted to her education and her study of the Bible, Mary Baker had always been an unhealthy child and adolescent.
Over the course of her life, she married three times: first to George Washington Glover in 1843, who died suddenly six months later; then to Daniel Patterson in 1853, whom she divorced 20 years later after tolerating his numerous infidelities; and, finally, in 1877, to Asa Gilbert Eddy, who died in 1882. Mary, having survived ill health, marital tragedy, and injuries, lived into her 90th year, dying in 1910. Mary Baker Eddy’s discovery of Christian Science is documented in her book Science and Health, a title that she later extended to include With Key to the Scriptures. This book, first published in 1875, was quickly adopted as the textbook of a new religious movement. Besides a short autobiographical sketch of her recovery, it offers practical advice on family relationships and engages in analyzing literary issues such as the Genesis creation stories and scientific discussions on subjects such as Darwinism. But what sets her book apart as a new religious text is its exploration of a philosophy of radical idealism, in which only the divine mind exists, while matter is mere illusion. This illusion is what leads to intellectual error and ill health, and ultimately evil and death. Awareness of this illusion and the salvific need for a sense of “at-one-ment” with the divine mind of the biblical God is what leads to both spiritual and physical health. Eddy sustained considerable critique of her philosophy from both Joseph Pulitzer, who accused her of senility, and Mark Twain, who made her the target of his stinging wit, as well as numerous Christian theologians, who believed she had abandoned essential orthodoxy. Deeply influenced by her encounter in 1862 with Phineas P.
Quimby, the famous mentalist and ridiculed progenitor of the mind-over-matter philosophy, Eddy developed more than enough resolve to withstand a lifetime of criticism; she went on to publish several books and to found the Boston Mother Church, the Massachusetts Metaphysical College, the Christian Science Journal, and a world-class newspaper, the Christian Science Monitor. Each local branch church, without the benefit of ordained clergy and guided by Eddy’s Church Manual, conducts simple Sunday services that consist of hymn singing and the reading of biblical texts and complementary passages from Science and Health. While the membership of the church is difficult to assess, given its prohibition on publishing statistics (though it claims 2,000 worldwide Branch Churches and Societies), and while the movement has faced legal challenges, given its practice of a strict form of faith healing that encourages the avoidance of hospitals, it is generally believed to have well over 300,000 American adherents and a growing European and Asian mission. See also Mormonism; transcendentalism. Further reading: Gill, G. Mary Baker Eddy. New York: Perseus Books Group, 1999; Gottschalk, S. Rolling Away the Stone: Mary Baker Eddy’s Challenge to Materialism. Bloomington: Indiana University Press, 2005; Schoepflin, R. B. Christian Science on Trial: Religious Healing in America. Baltimore, MD: The Johns Hopkins University Press, 2002; Wilson, B. Blue Windows: A Christian Science Childhood. New York: Picador, 1998. Rick M. Rogers enlightened despotism in Europe Enlightened despotism represented one of the most enduring experiments before the old order was forever turned upside down by the forces unleashed by the French Revolution in 1789.
Ironically, enlightened despotism was fostered by the thoughts of French philosophers like Voltaire; Charles-Louis de Secondat, baron de Montesquieu; and Denis Diderot, who would provide the ideological gunpowder that exploded with the revolution in 1789. Embraced by rulers in 18th-century Europe like Catherine the Great of Russia, Maria Theresa of the Austrian Empire, and Frederick the Great of Prussia, enlightened despotism provided a philosophy of government that motivated rulers to pursue political changes, forever breaking any ties with the monarchies of the past. At its basis, enlightened despotism attempted to apply the rational spirit of the Enlightenment to guide governance, pushing states forward from the superstitions and sometimes barbarous practices of past centuries. It embraced not only what we would now call a progressive view of government but also the sciences and the arts. GENERAL WELFARE Above all, enlightened despots began to see themselves as the first servants of the state, whose duty was to provide for the general welfare of their subjects. When Frederick II became king of Prussia after his father’s death on May 31, 1740, he wrote, “Our grand care will be to further the country’s well-being and to make every one of our subjects contented and happy.” A more mature Frederick later wrote in Essay on the Forms of Government, “The sovereign is the representative of his State. He and his people form a single body. Ruler and ruled can be happy only if they are firmly united. The sovereign stands to his people in the same relation in which the head stands to the body. He must use his eyes and his brain for the whole community, and act on its behalf to the common advantage. If we wish to elevate monarchical above republican government, the duty of sovereigns is clear. They must be active, hard-working, upright and honest, and concentrate all their strength upon filling their office worthily.
That is my idea of the duties of sovereigns.” Above all, it was Montesquieu in his The Spirit of Laws (1748) who had the most practical influence on the enlightened despots and the American Revolution of 1775. Montesquieu wrote, “In every government there are three sorts of power; the legislative; the executive, in respect to things dependent on the law of nations; and the executive, in regard to things that depend on the civil law. By virtue of the first, the prince or magistrate enacts temporary or perpetual laws, and amends or abrogates those that have been already enacted. By the second, he makes peace or war, sends or receives embassies; establishes the public security, and provides against invasions. By the third, he punishes criminals, or determines the disputes that arise between individuals. The latter we shall call the judiciary power, and the other simply the executive power of the state.” A SAY IN DESTINY While none of the enlightened despots like Frederick, Maria Theresa, or Catherine would willingly accept limitations on their sovereignty, they all accepted at least in principle the idea that those they governed should have some say in their own destiny. All used consultative assemblies, drawn from all the classes in society, at least several times during their reigns. In 1785, for example, Catherine issued two charters, one for the nobles and one for the towns. The Charter for the Nobility inaugurated councils of nobles who could offer their opinions on laws that she proposed. The Charter for the Towns created municipal councils whose membership included all those who owned property or a business within the towns. At the same time, the legal status of Russia’s peasantry continued to slip under the control of the landowning nobility until they were hardly considered as human beings.
The rationalization of government saw considerable progress in the Austrian Empire, where the Empress Maria Theresa reigned jointly with her son Joseph II after 1765. Reforms during their reign centralized government, making political and monetary bureaucracy answerable to the Crown. Reform of the legal codes provided a keystone for enlightened rule. In June 1767 Catherine gathered a Legislative Commission to hear her proposals for a new legal code for Russia. Although the code was never adopted, the document shows the direction of the political thought that guided her long reign (1762–96). She wrote, “What is the true End of Monarchy? Not to deprive People of their natural Liberty; but to correct their Actions, in order to attain the supreme Good.” For the sake of the subjects, perhaps the most important part of enlightened despotism was the general belief that the use of torture to extract information was a savage relic of the Middle Ages and had no place in the judicial system of any enlightened monarch. In Russia, Catherine’s refusal to use torture was put to the test in the 1773–74 rebellion of the Cossack Emilian Pugachev. Although Pugachev’s revolt proved a distinct threat to her reign, after he was captured, Catherine refused to let his interrogators resort to the use of torture to find out if he acted alone or was the representative of some conspiracy hatched to overthrow and kill her. All the enlightened monarchs were influenced by the thought of the Italian Cesare Beccaria, the author of the historic Of Crimes and Punishments in 1764. Enlightened despots attempted to improve their countries through advances in the sciences, industry, and agriculture as well. After Catherine II’s annexation of the khanate of the Crimea in 1783, she opened it to cultivation by German immigrants to improve agricultural production.
When the Treaty of Jassy ended a war with the Turks in 1792, large new areas were opened in what is now southern Russia for improved agricultural production. Significantly, one of the countries most resistant to the ideas of enlightened despotism was France, where the revolution that overturned the old order began. Religious tolerance also saw great advances in this age. Most of the prohibitions against Jews, existing from the Middle Ages, were lifted throughout much of Europe. In France, however, such a change would have to wait to take maximum effect in the reign of Napoleon I, after he crowned himself emperor in 1804, 11 years after Louis XVI of France had been sent to the guillotine in January 1793. Muslims also benefited from the general enlightenment. After the conquest of the Crimea in 1783 and the Treaty of Jassy with the Ottoman Turks in 1792, large numbers of Muslims became Catherine the Great's subjects. It was perhaps the greatest irony of this age of enlightened despotism that it was brought to an end by the French king Louis XVI summoning to Versailles in 1789 the Estates General, the representative body of the French aristocracy, clergy, and emerging middle class. The Estates General, having not been convened for over 150 years, had much to discuss with the king. When Louis XVI refused to hear its grievances and threatened to dissolve it, the Third Estate refused to disperse. Instead, the Third Estate met in an old tennis court and swore to remain in session until its grievances were heard and redressed by the king. The French Revolution had begun. Further reading: Alexander, John T. Catherine the Great: Life and Legend. New York: Oxford University Press, 1989; Erickson, Carolly. Great Catherine: The Life of Catherine the Great, Empress of Russia. New York: St. Martin's Press, 1994; Troyat, Henri. Catherine the Great. Pinkham, Joan, trans. New York: Meridian, 1994. John F. Murphy, Jr. 
Enlightenment, the The Enlightenment in Europe came on the heels of the age of science. It dates from the end of the 17th century to the end of the 18th century. Beginning with John Locke, thinkers applied scientific reasoning to society, politics, and religion. The Enlightenment was especially strong in France, Scotland, and America. The Enlightenment may be said to have culminated in the revolutions that occurred in America, France, and Latin America between 1775 and 1815. In attempting to justify England's Glorious Revolution of 1688, Locke argued that man had inherent rights. Man, he posited, was a blank page who could be filled up with good progressive ideas. He laid the basis for popular sovereignty: people voluntarily came together to form a government that would protect individual rights. Government, therefore, had a contract with the people. When the government violated people's natural rights, it violated the social contract, and the people had the right to withdraw their allegiance. Ironically, the rationale used to justify the triumph of Parliament over the Crown in England was used against Parliament and Britain nearly a century later in the American Revolution. Influenced by Newtonian science, which posited universal laws that governed the natural world, the Enlightenment placed its emphasis on human reason. According to major Enlightenment thinkers, both faith in nature and belief in progress were important to the human condition. The individual was subject to universal laws that governed the universe and formed nature. Using the gift of reason, people would seek to find happiness. Human virtue and happiness were best achieved by freedom from unnecessary restraints imposed by church and state. Not surprisingly, Enlightenment thinkers believed in education as an essential component of human improvement. They also tended to support freedom of conscience and checks on absolute government. 
EARLY ENLIGHTENMENT The early Enlightenment was centered in England and Holland. It was interpreted by conservative English figures to justify the limits on the Crown imposed by Parliament. The limited government supported by the Whigs who took over was spread abroad by the newly created Masonic movement. The earliest writings appeared in Holland, nominally a republic and a haven for refugees from absolutist rule, including exiles from England under the later Stuart monarchy and from France after the revocation of the Edict of Nantes. Its most famous philosopher, Spinoza, argued that God existed everywhere in nature, even in society, which meant that society could rule itself. This philosophy was applied in arguments against state churches and absolute monarchs. BASIC ENLIGHTENMENT IDEAS The most famous figures of the 18th-century Enlightenment were Frenchmen, including Charles-Louis de Secondat, baron de Montesquieu; Voltaire; Denis Diderot; and Jean-Jacques Rousseau. Montesquieu, in his greatest work, The Spirit of Laws, argued that checks and balances among executive, legislative, and judicial branches were the guarantors of liberty. Voltaire, the leading literary figure of the age, wrote histories, plays, pamphlets, essays, and novels, as well as correspondence with monarchs such as Catherine the Great of Russia and Frederick the Great of Prussia. In all of these works he supported rationalism and advocated reform. Diderot edited an encyclopedia of over 70,000 articles covering the superiority of science, the evils of superstition, the virtues of human freedom, the evils of the slave trade in Africa, and unfair taxes. Rousseau, however, was not a fan of science and reason. Rather, in The Social Contract, he spoke of the general will of the people as the basis of government. His ideas were to be cited by future revolutions from the French to the Russian. Enlightenment thought spread throughout the globe and was especially forceful in Europe and the Americas. 
In Scotland, the ideas of the Enlightenment influenced the writings of David Hume, who became the best known of the skeptics of religion, and Adam Smith, who argued that the invisible hand of the market should govern supply and demand and that government economic controls should not exist. In America, deism (the belief that God is an impersonal force in the universe) and the moral embodiment of the Newtonian laws of the universe attracted Thomas Jefferson and Thomas Paine. On the political side, thinkers such as Thomas Hooker and John Mayhew spoke of government as a trustee that must earn the trust of its constituency, like a financial institution with a fiduciary duty to its depositors. It was in the realms of politics, religion, philosophy, and humanitarian affairs that the Enlightenment had its greatest effect. The figures of the French Enlightenment opposed undue power as exemplified by absolute monarchy, aristocracy based on birth, state churches, and economic control by the state in the form of mercantilism. Enlightened thinkers, like the leaders of the American Revolution after them, saw the arbitrary policies of absolute monarchies as contradictory to the natural rights of man. The most fundamental part of human nature was reason, the instrument by which people realized their potential. The individual was a thinking and judging being who must have the greatest possible freedom in order to operate. The best government, like the best economy, was the government that governed least. THE ENLIGHTENMENT IN POLITICS The Enlightenment extended to the political realm and was especially critical of monarchs who were more interested in their divine right than in the good of their people. Man was innately good; however, society could corrupt him. Anything that corrupted people, be it an absolutist government or brutal prison conditions, should be combated. Absolutist policies violated innate rights that were a necessary part of human nature. 
Ultimately, political freedom depended on the right social environment, which could be encouraged or hindered by government. Absolutism, for this reason, was the primary opponent of political freedom. The progenitors of the political Enlightenment, John Locke and his successors, maintained that government should exist to protect the property of subjects and citizens, defend against foreign enemies, secure order, and protect the natural rights of its people. These ideas found their way into the U.S. Declaration of Independence and Constitution. The Declaration asserted that every individual had "unalienable rights," including the rights to "life, liberty, and the pursuit of happiness." Similarly, the preamble to the Constitution claimed that government existed to "provide for the common defense" and "promote the general welfare," in direct descent from Enlightenment thinkers. The ideas of the social contract and social compact did not originate with either Locke or Rousseau, but with Thomas Hobbes. Hobbes viewed the social contract as a way for government to restrain base human nature. Locke, by contrast, maintained that people came together in a voluntary manner to form a government for the protection of their basic rights. Government was therefore based on their voluntary consent, and if their basic natural rights were violated, they could withdraw that consent. This theory had echoes in the arguments of leaders of the American Revolution, who argued that their revolt against various tariffs and taxes such as the tea tax was a protest against taxation without representation. Social contract theory argued that individuals voluntarily cede their rights to government, including the responsibility to protect their own natural rights. Consequently, government's authority derived from the governed. To keep the potential for governmental abuse of power in check, Enlightenment figures argued for a separation of powers. 
Before Montesquieu made his specific suggestion in The Spirit of Laws, Locke had proposed that kings, judges, and magistrates should share power and thereby check one another. Spinoza also proposed the need for local autonomy, including a local militia, to guard against power concentrated in the center, such as a standing army. These ideas found their way first into the Articles of Confederation, which gave almost excessive power to the various states. The U.S. Constitution specifically stated that all powers not expressly given to the national government are reserved to the states and the people. The emphasis of the political writers of the Enlightenment was on limited government rather than on direct democracy. Although their great enemy was arbitrary absolute central government, they were not enamored of the influence of the mob. Even though they saw the voting franchise as a check on overpowerful government, they limited the franchise to property owners. Universal white male suffrage in America did not come into existence until the age of Jackson. The framers of the Constitution were careful to include the electoral college as the final selector of presidents. Direct election of senators did not occur until 1912–13, and it was not until 1962, with Baker v. Carr and the principle of "one man, one vote," that there were direct elections to all legislative bodies in the United States. Technically, the United States remains a representative, not a direct, democracy. Even Rousseau, considered the advocate of direct democracy, felt that direct democracy was most suited to small states like his home city of Geneva rather than a large state like France. Along with Diderot, he advocated rule based on the "general will." However, both Rousseau and Diderot defined the general will as representing the nature of the nation or community as opposed to the selfish needs of the individual. 
The law should secure each person's freedom but only up to the point that it does not threaten others. In this way, they prefigured the utilitarian philosophy of Jeremy Bentham, which stressed the goal of human happiness as long as it did no harm to others. REACTIONS AGAINST ENLIGHTENMENT In the latter 18th century, there was a reaction against the overuse of reason and science in securing human potential. Religious, philosophical, and humanitarian movements put new emphasis on idealism and emotionalism in religious, philosophical, and social reform. Philosophically, the Newtonian vision of God as the great scientist in the sky, Locke's equation of knowledge with the mind's organization of sensory experiences, and the rise of atheism together provoked a reaction. IMMANUEL KANT The foremost philosopher of the later Enlightenment was Immanuel Kant, whose Critique of Pure Reason argued that innate ideas exist before sensory experiences. Taking a page from Plato, Kant argued that certain inner concepts such as depth, beauty, cause, and especially God existed independently of the senses. Some ideas were derived from reason, not the senses. Kant went beyond pure reason: reason was based on intuition as well as interpretation of sensory experiences. The conscious mind was integral to a person's thinking nature; therefore, abstract reason could have moral and religious overtones. This came to be called the new idealism, as opposed to classical idealism. Another reaction to this scientific perspective on religion was a movement in favor of a feeling, emotional deity ever-present in daily life. Known as Pietism in Europe and in America variously as evangelism and charismatic Christianity, the movement known as the Great Awakening swept the Americas and Europe in the 1740s and 1780s. Preachers such as George Whitefield and the Wesley brothers gave stirring sermons with overtones of fire and brimstone in response to excessive rationality in church doctrine. 
Their style of preaching appealed to the masses, whereas the intellectualized religion of the Enlightenment too often seemed like a creation for the educated upper classes. By the end of the century, the movement coalesced into the Methodist movement. A new movement from Germany that stressed Bible study and hymn singing as well as preaching, the Moravians, earned a following in both Europe and America. Similar movements occurred among Lutherans and Catholics. The Great Awakening in the United States led to the formation of new individual-centered denominations such as the Unitarians and Universalists. Both aspects of the religious side of the Enlightenment, rationalist and Pietist, were concerned with human worth. This desire for the improvement of human conditions led to humanitarian impulses. The antislavery movement gained momentum in the later 18th century. Other movements, such as the push for prison reform, universal elementary education, Sunday schools, and church schools, were all evident by 1800. Whether rationalist or Christian evangelical, reformers supported these movements. Even absolute sovereigns such as Frederick the Great and Catherine the Great promoted reforms: Frederick abolished torture and established national compulsory education, while Catherine established orphanages for foundlings and founded hospitals. For reforms such as these, certainly not for their belief in human rights, they and other monarchs were termed enlightened despots. Enlightenment thinkers sought human betterment, and the movement took many forms. Political figures sought to deliver people from the arbitrary use of power. Deists questioned the use of power by established churches. Economic thinkers argued for liberation from state control of the economy. All believed in an implicit social contract and natural human rights, whether political, economic, religious, or moral. 
Separate currents of rationalism, idealism, and Pietism all contributed to the humanitarian and revolutionary movements that emerged at the end of the period. See also enlightened despotism in Europe; Freemasonry in North and Spanish America; French Revolution. Further reading: Anchor, Robert. The Enlightenment Tradition. Philadelphia: University of Pennsylvania Press, 1987; Brewer, Daniel. The Discourse of Enlightenment in Eighteenth Century France. Cambridge: Cambridge University Press, 1993; Dowley, Tim. Through Wesley's England. Abingdon, PA: Abingdon Press, 1988; Gay, Peter. Voltaire's Politics. New Haven, CT: Yale University Press, 1988; Yolton, John. Philosophies, Religion, and Sciences in the Seventeenth and Eighteenth Centuries. Rochester, NY: University of Rochester Press, 1990. Norman C. Rothman Ethiopia/Abyssinia Ethiopia, formerly also known as Abyssinia, has a population of about 70 million in an area of approximately 435,000 square miles. It has a history going back more than two millennia. Topographically, it is a high plateau with a central mountain range dividing the northern part of the country into eastern and western highlands. The central mountain range is in turn divided by the Great Rift Valley, which runs from the Dead Sea region to southern Africa. The country, formerly thought to be predominantly Christian, is in fact multiethnic and multireligious. The largest group, the Oromo, predominantly Muslim and formerly called the Galla, make up 40 percent of the population. Other non-Christian groups are the Sidamo, Somali, Afars, and Gurages. The Christian element is mostly made up of Amharic and Tigrean speakers, who comprise about 32 percent of the population. Reasonably reliable census reports estimate that 50 percent of the population is Muslim and 40 percent Christian, with the remainder animist. 
The Christians have tended to live in the highlands in the north and central parts of the country, while, historically, non-Christians have lived in the lowlands. The country's history extends as far back as the 10th century b.c.e. Tradition has it that the kings of Ethiopia descended from the union of King Solomon with the queen of Sheba (whose realm is identified with the northern part of the country) in that era, and they claimed to constitute the Solomonic dynasty. Historically, the first Ethiopians appear to have been at least in part descended from immigrants who crossed the Red Sea from the part of southwestern Arabia known as Sabea (present-day Yemen) before the first century c.e. However, the preexisting population had been engaged in agriculture in the highlands before 2000 b.c.e., so the population was most likely sedentary. The end result was that by the first century, a kingdom called Axum had been established, with a great port at Adulis on the Red Sea. Trading with Greeks and Romans as well as Arabs and Egyptians, and as far east as India and Ceylon, and having agriculture based on then-fertile volcanic highlands, the trade empire became a great power between 100 and 600 c.e. It was so powerful that in the fourth century it was able to destroy its great rival Kush/Merowe in what is now Sudan, and in the sixth century it conquered Yemen. An important element in the emerging Ethiopian identity was the conversion of Axum to Christianity in the fourth century by the missionary Frumentius to the Monophysite (non-Chalcedonian) version of Orthodox Christianity, also called Coptic Christianity. The ancient Geez language remains the language of the church and is still used in services, and the modern languages spoken in Ethiopia (Amharic and Tigrean) derive from it. The common language, religion, dynasty, and system of fortified monasteries were to be key elements in the formation and survival of Ethiopian culture. 
These features were critical after the seventh century, when the expansion of Islam cut off Ethiopia from the coast, as Muslim invaders occupied the lowlands. From this time forward, Ethiopia, as the country had become known by the ninth century, endured long struggles between the Christian highlands and the mostly Muslim lowlands. When a strong "king of kings," or negus, was in power, he united the Christian highlands and expanded into the lowlands. At other times, the highlands were divided among rival chieftains, leaving Ethiopia vulnerable to attack from Muslim and non-Muslim lowlanders. However, the legend of a remote Christian kingdom (the kingdom of Prester John) fascinated Europeans. The arrival of European visitors, especially the Portuguese, who had established trade routes to India and identified Ethiopia with the legendary kingdom, proved most timely. At this time, 1540–45, the Christian highlands faced their greatest challenge: a Muslim chieftain, Mohammed al-Gran, threatened to overrun the highlands. The intervention of Portuguese military might at this critical juncture led to the defeat and death of Mohammed al-Gran. Thereafter, the Portuguese were prominent in Ethiopia, but their zeal in promoting Roman Catholic Christianity led to their expulsion in 1633. The country lapsed once more into feudalism, and for two centuries various chieftains claiming Solomonic ancestry fought for the throne, until Theodore reunited the kingdom in 1855. He was succeeded by John in 1868 and Menelik II in 1889. Under Menelik, who was from the central Amharic province of Shoa, with its capital city Addis Ababa, the Ethiopian state as it exists today was formed. With Western military weapons, Menelik expanded into the lowlands and stunned the world by defeating the Italians in 1896 when they tried to make Ethiopia into a protectorate. 
After the death of Menelik in 1908, the chieftain Ras Tafari gradually gained power, especially after 1916, and was crowned emperor in 1930. As the most powerful black leader of his time, he inspired the Rastafarian cult in Jamaica (from his name and title Ras, or chief, Tafari). On his assumption of the title of emperor, he took the name Haile Selassie. Acclaimed for his resistance against Fascist Italy in 1935, Haile Selassie enjoyed great prestige after Ethiopia was liberated in 1941. He was given the former Italian possession of Eritrea in 1952 (much against Eritrean wishes), and Addis Ababa was made the headquarters of the Organization of African Unity (now the African Union), formed in the early 1960s. See also Africa, exploration of. Further reading: Bahru, Zawde. A History of Modern Ethiopia, 1855–1974. Athens: Ohio University Press, 1991; Erlich, H. Ethiopia and the Challenge of Independence. Boulder, CO: Lynne Rienner, 1986; Gebru, T. Ethiopia: Power and Protest: Peasant Revolts in the Twentieth Century. Cambridge: Cambridge University Press, 1991; Marcus, Harold G. Ethiopia, Great Britain, and the United States, 1941–1974: The Politics of Empire. Berkeley: University of California Press, 1983; Marcus, Harold G. Haile Sellassie I. Berkeley: University of California Press. Norman C. Rothman
Crisis and Achievement 1900 to 1950
Edison, Thomas (1847–1931) American inventor The Wizard of Menlo Park, as journalists called him in reference to his New Jersey research laboratory, Thomas Edison was the quintessential American innovator. While many inventors are remembered a century later for one principal invention (Bell's telephone, Whitney's cotton gin), Edison is responsible for or associated with the phonograph, the lightbulb, the microphone used in telephones until the end of the 20th century, and direct current, along with more than 1,000 patents for lesser-known creations. Only the more fanciful Nikola Tesla, his rival in the "war of the currents," approached the breadth and variety of his work. The seventh child of an Ohio family, Edison had less than a year of formal schooling and was largely educated by his mother, a retired schoolteacher. For the rest of his life, he praised her for encouraging him to read as a child and to experiment on what intrigued him. For some years he worked as a telegraph operator, but at the age of 30 he became famous for his invention of the phonograph, a device that recorded sound on tinfoil, later wax cylinders, then vinyl. Though the sound quality was poor, the mere fact of its existence in 1877 was held as a marvel and captured the public attention, helping to create the fascination the public would have with inventors and cutting-edge technology. More inventions followed, as well as refinements of earlier work; his incandescent lightbulb was not the first of its kind but was the first to be a success, efficient and bright enough to be used on a wide scale. His Edison Electric Light Company provided not only electric lamps but the power needed to use them. Though Nikola Tesla had also developed a lightbulb, it was the "war of the currents" that made rivals of Edison and Tesla. While Edison had developed direct current (DC) for power distribution, Tesla developed alternating current (AC), which could be carried by cheaper wires at higher voltages. Edison's famous tactic was to promote AC power for use in the electric chair in order to demonstrate the dangers of the method; his employees publicly electrocuted animals as a scare tactic. The effort was in vain. AC slowly replaced DC as the power distribution method of choice and remains so today. By the time of Edison's death in 1931, his inventions had helped lead to a world lit by incandescent lights and powered by electricity; entertained by radio plays, records, and motion pictures; connected by telephone and telegraph; and home to such works as James Joyce's Finnegans Wake and Marcel Duchamp's "Nude Descending a Staircase," both of them inspired by and possible only in the Edisonian world. Further reading: Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001; Israel, Paul. Edison: A Life of Invention. Indianapolis: John Wiley & Sons, 1998; Jonnes, Jill. Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World. New York: Random House, 2003. Bill Kte'pi Egyptian Revolution (1919) The revolution in Egypt broke out in March 1919 after the British arrested Sa'd Zaghlul, the leader of the Wafd Party, the main Egyptian nationalist party, and several other Wafdists. They were then deported to Malta. The exile of these popular leaders led to student demonstrations that soon escalated into massive strikes by students, government officials, professionals, women, and transport workers. Nationalist discontent had been fueled by the protectorate established by the British at the beginning of the war, wartime shortages of basic goods, increased prices, the forced conscription of peasants as laborers for the military, and the presence of huge numbers of Western soldiers in Egypt. Within a week all of Egypt was paralyzed by general strikes and rioting. 
Violence resulted, with many Egyptians and Europeans killed or injured when the British attempted to crush the demonstrations. European quarters and citizens were attacked by angry crowds who hated the special privileges and economic benefits given to foreigners. Rail and telegraph lines were cut, and students refused to attend classes. Zaghlul had worked hard in the weeks prior to his arrest to mold the Wafd into an efficient political party. He traveled around the countryside gathering support and collecting money. In spite of martial law, which had been imposed by the British at the beginning of the war, large-scale public meetings were held. In his absence Zaghlul's wife, Safia, played a key role in party politics. Led by Safia and Huda Shaarawi, upper-class Egyptian women staged a political march through the streets of Cairo, throwing off their veils, waving banners, and shouting nationalist slogans. Wafdist cells throughout the country coordinated the demonstrations and strikes through a central committee chain of command. Religious leaders, especially the sheikhs at al-Azhar, the premier Muslim university, also participated. Propaganda leaflets, posters, and postcards with pictures of Sa'd and Safia Zaghlul were distributed throughout the country. The Wafd's central committee maintained an active role within unions, student groups, and professional organizations. Determined to maintain control over Egypt, the British government replaced High Commissioner Reginald Wingate, who was considered weak and too moderate, with General Edmund Allenby, the greatest British hero of World War I. Allenby promptly met with leading Egyptians, who convinced him that the only way to restore order was to release the Wafd leaders. A realist, Allenby complied and permitted Zaghlul and others to travel to Paris. 
The Wafd kept up the pressure in Egypt, organizing boycotts of British goods and refusing to meet with the Milner Mission that had been sent out from London to investigate the situation. Steps were taken toward greater economic independence, and Talat Harb established an Egyptian bank in 1920. Negotiations were held between the Wafd and the British in London in 1920, but the Wafd failed to secure a withdrawal of British troops, the end of the protectorate or the capitulations (favored status granted to foreign residents), or full independence. Nevertheless, the Wafd leaders were greeted as heroes when they returned home. When Zaghlul was again arrested and deported in 1921, a new wave of nationalist demonstrations erupted. In light of the determined nationalist movement, Allenby forced a reluctant Foreign Office to end the protectorate, and in 1922 the British unilaterally declared Egyptian independence under a constitutional monarchy led by King Fu'ad. However, Britain retained widespread powers, including the stationing of troops in Egypt and a role in determining Egyptian foreign affairs, as well as control over the Sudan. Consequently, Egyptian nationalists continued in opposition to Britain throughout the interwar years. Further reading: Berque, Jacques. Egypt: Imperialism and Revolution. London: Faber and Faber, 1972; Goldschmidt, Arthur, Jr. Modern Egypt: The Formation of a Nation State. Boulder, CO: Westview Press, 2004. Janice J. Terry Einstein, Albert (1879–1955) scientist Albert Einstein was perhaps the most significant individual of the 20th century; his contributions to science reshaped physics in ways that continue to be explored and led to the development of atomic energy and the atomic bomb. A nonobservant German Jew, he was a late bloomer as a student, showing slow language development. 
Although folklore claims Einstein was a poor math student, he had a knack for mechanics and geometry at an early age, teaching himself geometry from a copy of Euclid's Elements and later teaching himself calculus. Any reputation he may have had as a poor student came from his dissatisfaction with the curriculum at the German gymnasiums; at age 16 he left school, failed his university entrance exam for the Federal Polytechnic Institute (FPI), and took his first steps toward formulating his theories of relativity. He was accepted at the FPI the following year, and four years after that he was granted a teaching position. His first published paper, "Consequences of the Observations of Capillarity Phenomena," hinted at his hopes for universal physical laws, binding principles that would govern all of physics. When he graduated from the FPI, he took a job as a patent clerk and continued to work on scientific papers in his spare time. Four such papers were published in the journal Annalen der Physik in 1905, each of them a major contribution to the shape of modern physics. Today they are called the "Annus Mirabilis" ("Extraordinary Year") papers. The Annus Mirabilis papers concerned the photoelectric effect; Brownian motion, Einstein's treatment of which helped provide more evidence for the existence of atoms; matter–energy equivalence, the paper that included Einstein's equation E = mc²; and special relativity, which contradicted Newtonian physics by stipulating the speed of light as a constant. The importance of these papers cannot be overstated: they continue to be relevant to physicists today, and the photoelectric effect paper had a huge effect on the development of quantum mechanics and earned Einstein a Nobel Prize. It was during the war years that Einstein introduced his theory of general relativity, more radical than his special relativity. The general theory replaces that most basic and intuitive of concepts from Enlightenment physics, Newtonian gravity, with the Einstein field equation. 
Under general relativity there is no ether or constant frame of reference, and gravity is reduced simply to an effect of curving space-time. Because of World War I, Einstein's writings were not readily available to the rest of the world, but by war's end general relativity had become a controversial topic. Einstein's importance to the scientific field of his day was assured when journals reported that experiments conducted during a 1919 solar eclipse confirmed general relativity's predictions about the bending of starlight, in contradiction to the effects demanded by Newtonian models. Throughout the next two decades Einstein sparred in papers and debates with other scientists, particularly about quantum theory, which he viewed as an inherently incomplete model of physical reality and hence an incorrect one. When the Nazis came to power, he was working at the Institute for Advanced Study in Princeton, New Jersey, where he remained after renouncing his German citizenship. Fearing the Germans would develop nuclear weapons, Einstein wrote to President Franklin Delano Roosevelt advising the research and testing of fission bombs, a suggestion that led to the United States' Manhattan Project, the outcome of which was the development of the first atomic bomb and its use to end the war in the Pacific. Einstein continued to search for a "unified field theory" that would describe all physical laws in one theory, the quest that had driven everything from his capillarity paper to his theory of general relativity. He lived a quiet life, refusing the request of the government of Israel that he serve as its president, and died in 1955 of an aneurysm. Further reading: Brian, Denis. Einstein: A Life. Hoboken, NJ: Wiley, 1996; Galison, Peter. Einstein's Clocks, Poincaré's Maps: Empires of Time. New York: W.W. Norton, 2003; Pais, Abraham.
Subtle is the Lord: The Science and Life of Albert Einstein. Oxford: Oxford University Press, 1982. Bill Kte'pi El Alamein El Alamein is a railway station west of the Egyptian port of Alexandria where a series of three battles was fought in 1942. The result was the end of German and Italian aspirations of conquering Egypt and advancing into the Middle East. El Alamein was one of the most decisive battles of World War II, not only because of its strategic results but also because of how it altered perceptions about who could win the war. In 1940 Libya was an Italian colony that bordered on Egypt, where British troops were stationed in force. When the Italians joined the war in June 1940 on Germany's side, they expected that they would be able to attack France and gain some quick and easy concessions. They did not expect that the British, outnumbered by the Italians, would attack. Yet under the direction of Generals O'Connor and Wavell, that is exactly what the British did. They captured the port city of Benghazi and were well on their way toward capturing all of Libya when they were counterattacked by the Germans, who had reinforced the Italians. The Germans took Benghazi back from the British and then advanced toward Egypt and the Suez Canal. The German commander Erwin Rommel next attacked the city of Tobruk, was in turn attacked by the British, and was forced to retreat. The British managed to force him back deep into Libya. In the next year Rommel counterattacked, retaking Benghazi and capturing Tobruk. From there he again advanced and crossed the border into Egypt. He was stopped at the First Battle of El Alamein in July 1942. Both sides then waited for a time. The British solidified their positions, while Rommel gathered his dwindling supplies, including fuel for his vehicles.
At the end of August, Rommel felt he had to attack and so launched an assault on the British positions at a place called Alam Halfa, in what became known as the Second Battle of El Alamein. Repulsed by the British, Rommel now began efforts to fortify his positions, creating obstacles through the use of minefields. He had no realistic expectation of attacking again and so had to remain in place. Although he had advanced far into Egypt, the situation now favored the British. The troops under the British commander Montgomery outnumbered Rommel's nearly two to one and had at least twice as many tanks. In all aspects the British supply situation was much better. Rommel had so little fuel that his ability to move was severely limited. At the same time, British gasoline was more plentiful than water. That logistical superiority was to translate into immense tactical superiority on the night of October 23, 1942. That night the British opened with a massive artillery barrage using over 600 guns. This extensive artillery preparation lasted several hours and moved its focal point up and down the German line. Then it moved forward to allow the combat engineers, with supporting tanks, to disarm the extensive minefields that formed the backbone of Rommel's defenses. The attack by the British 8th Army was slow and methodical, concentrating first on one part of the German line and then on another. Montgomery referred to this process as "crumbling" the enemy's defenses. The Germans counterattacked but failed in their attempt to drive the British back. Finally, on November 2 the British broke through the last belt of minefields, and the breakout could begin. By November 4 Rommel decided to retreat to the west.
From the west, in French North Africa, the Americans landed an army of over 400,000 men, who advanced eastward. Caught between the British and the Americans, Rommel's army surrendered in Tunisia in May 1943. The war in Africa was over. At El Alamein Rommel was at his farthest point from his base of supplies. He had one route that followed the coastline; everything had to come to him that way from the Italians in Libya. Their bases were supplied by ship from Italy. This supply route was under constant attack by British submarines and aircraft. Italian and German ships were sunk, and much of what was supposed to go to Rommel never reached him. British supply superiority translated into tangible benefits: more tanks and cannon as well as massive air superiority. It also translated into intangibles such as better morale, helped by ample stores of food and other supplies. If the British had lost at El Alamein, they could have lost all of Egypt, which would have been catastrophic. The Germans had hoped to link up with their forces in Russia by this route. They could have captured the Suez Canal and controlled the Mediterranean. A push farther would have brought them into Palestine. There was no oil there, but a pipeline from Iraq built in the 1930s would have given them access to it. Considering that the government in Iraq was pro-Nazi, that would have solved Germany's oil problems. Further, the loss would have damaged British prestige and credibility. Further reading: Barnett, Correlli. The Desert Generals. Bloomington: Indiana University Press, 1982; Barr, Niall. Pendulum of War: The Three Battles of El Alamein. London: Jonathan Cape, 2004; Bierman, John. Alamein: War Without Hate. London: Viking, 2002; Latimer, Jon. Alamein. Cambridge, MA: Harvard University Press, 2002; McKee, Alexander. El Alamein: Ultra and the Three Battles. London: Souvenir Press, 1991; Stewart, Adrian. The Early Battles of Eighth Army: "Crusader" to the Alamein Line 1941–1942.
Barnsley, UK: Leo Cooper, 2002. Robert N. Stacy El Salvador/La Matanza La Matanza, a Spanish phrase translated as "the massacre" or "the slaughter," refers to the aftermath of an indigenous, communist-inspired uprising in El Salvador in 1932. Although precise figures of the dead are difficult to discern, it is estimated that between 8,000 and 30,000 Salvadoran Indians were killed in the state-sponsored violence. The roots of the insurrection lay in the appropriation of communal lands for coffee production by the elites and the resulting dislocation of a large number of peasants, many of them indigenous. In the 1880s the Salvadoran government passed laws outlawing Indian communal landholdings and passed vagrancy laws that forced the landless peasants to work on the large coffee plantations owned by the elites. In response, peasants in El Salvador launched four unsuccessful uprisings in the late 19th century. Coffee production expanded into the 20th century, as the country was ruled by a coalition of the coffee-growing oligarchy, foreign investors, military officers, and church officials. During the 1920s the land used to grow coffee expanded by more than 50 percent, leaving the Salvadoran economy heavily dependent on the international price of coffee. This expansion also displaced more peasants, creating a population with vivid memories of its recent dispossession. The Great Depression, beginning in 1929, resulted in a dramatic decline in coffee prices. By 1930 prices were at half of their peak levels, and by 1932 they were at one-third of the peak levels of the mid-1920s. In response, the coffee producers cut the already low wages of their laborers by up to 50 percent in some places, in addition to cutting employment. Meanwhile, the country was experiencing a period of democratic reform unusual in Salvadoran history. In 1930 President Pío Romero Bosque announced that the 1931 election would be a free and open election.
This democratic opening allowed Arturo Araujo to win the presidency with the support of students, peasants, and workers. Araujo was distrusted by much of the elite, whose distrust grew with his attempted implementation of a modest reform program. Araujo's presidency was marked by increasing social and political unrest and a deepening economic crisis, accompanied by the growth of leftist unions and political groups. On May Day 1930, 80,000 farm workers marched, demanding better conditions and the right to organize. On December 2, 1931, Araujo was deposed in a military coup, and his vice president, General Maximiliano Hernández Martínez, assumed the presidency. Martínez quickly ended Araujo's program of social reform and also ended the democratic opening. In early 1932 Salvadoran Communist Party (PCS) members led by Agustín Farabundo Martí planned a revolt against the landowning elite. The insurrection was to be accompanied by a revolt in the military. Before the revolt could begin, Martí was captured, and the rebels in the army were disarmed and arrested. Martí was executed in the aftermath of the failed revolt. Despite these setbacks, Indian peasants heeded the call of the PCS and revolted in western El Salvador. On the night of January 22 farmers and agricultural workers armed with machetes and hoes launched attacks against various targets in western El Salvador, occupying Juayúa, Izalco, and Nahuizalco in Sonsonate and Tacuba in Ahuachapán. The military counteroffensive quickly defeated the rebels and retook the towns that had fallen to them. While an estimated 20 to 30 civilians were killed in the initial revolt, thousands would die in its aftermath. The military, along with members of the elite organized into a civic guard, carried out reprisals singling out Indian peasants, those who wore Indian dress, and those with Indian features.
In the town of Izalco, groups of 50, including women and children, were shot by firing squads on the outskirts of town. These reprisals lasted for about a month after the insurrection. It is estimated that between 8,000 and 30,000 Salvadoran Indians were killed in the aftermath of the insurrection. In addition to the loss of life suffered by the indigenous community, La Matanza had other long-term effects. The massacre influenced many Indians to abandon traditional Indian dress, language, and other identifiable cultural traits in many communities in western El Salvador, although recent research has suggested that Indian identity was not completely destroyed. For the Salvadoran elites the revolt combined their strong fears of Indian rebellion and communist revolution. When the violence of La Matanza subsided, a combination of racism and anticommunism became the leading ideology of the elite. This ideology served to block social change and to justify repression. Politically, El Salvador would be governed by a series of military juntas until the El Salvador civil war of the 1980s. Further reading: Anderson, Thomas P. Matanza: The 1932 "Slaughter" That Traumatized a Nation, Shaping US-Salvadoran Policy to This Day. Willimantic, CT: Curbstone Press, 1992; Booth, John A., Christopher J. Wade, and Thomas W. Walker. Understanding Central America: Global Forces, Rebellion, and Change. Boulder, CO: Westview Press, 2006; Paige, Jeffrey M. Coffee and Power: Revolution and the Rise of Democracy in Central America. Cambridge, MA: Harvard University Press, 1997. Michael A. Ridge, Jr. Ellis Island Ellis Island was the chief port through which immigrants came to the United States from 1892 to 1954. Located at the mouth of the Hudson River in New York Harbor, Ellis Island witnessed the arrival of more than 12 million immigrants into the United States, most of whom were European.
Of the millions who came through Ellis Island, nearly 2 percent were denied entrance to the United States for one reason or another. Immigrants coming into the United States were classified according to the manner in which they arrived. Those who came in first- and second-class accommodations were presumed to be of good enough social standing that they would not prove to be a burden on American society. First- and second-class passengers came to Ellis Island only if they had particular legal or medical problems that could deny them entry into the country. Third-class, or steerage, passengers were not so lucky. The accommodations of their crossing were substandard, located on the bottom of the ship, often in cramped quarters near the ship's supplies. The conditions in steerage were often unsanitary, crowded, and uncomfortable. Unaccompanied women were often in danger of sexual assault from the other passengers. The trials of third-class passage did not stop with the arrival of the ship in the United States. Because of the low cost of their passage, steerage passengers carried the risk of becoming a financial burden to the country. Hence, steerage passengers were sent to Ellis Island to gain entry. On Ellis Island these immigrants underwent legal and medical inspections that could last as long as five hours. Immigrants with debilitating medical conditions or significant legal problems were denied entrance. These inspections were performed by the U.S. Public Health Service and the Bureau of Immigration, which referred to manifest logs from the ships at the time of the inspection. These manifests included personal information about the passengers such as name, date of birth, country of origin, current amount of available funds, and an address to which the person was traveling, generally that of a relative. All passengers needed a destination and could be denied entrance if they did not have a specific place to which they were going.
Examiners asked questions that were used to determine the general health of immigrants, to detect chronic disease and mental health concerns, and to highlight legal problems. Those who did not possess the basic skills to work or had chronically poor health were sent back to their country of origin. Others were quarantined to prevent the spread of infectious disease. More than 3,000 immigrants died in the hospital on the island. Once through the inspection, many of the new immigrants changed their names. Sometimes this was strictly for convenience, but often it was because both the immigrants and the inspectors tended to be uneducated. Names were often spelled incorrectly, made more American, shortened, or spelled phonetically. Frequently, passengers came to Ellis Island without papers. These passengers, called "WOPs" by the examiners, were generally allowed to enter the country. Passengers traveling without papers tended to be Italian, and the term WOP quickly became an epithet for all Italian immigrants. While the immigration process was long and often frustrating, many underwent it multiple times. Men frequently traveled back and forth between Europe and the United States as seasonal workers. Because of this, the immigration figures from Ellis Island are skewed; at the time there was no technology to accurately identify repeat immigrants. In 1897 a fire destroyed many of the Ellis Island facilities, causing them to close for a substantial renovation. During this time the Barge Office in Battery Park served as a temporary immigration station until the Ellis Island facilities could be reopened on December 17, 1900. After the renovation the processing of immigrants became more efficient. The facility expanded by 10 acres, and the island was capable of processing thousands of immigrants per day at a much faster pace than had previously been possible.
Additionally, the facilities expanded to encompass a nearby island that included an administration building and hospital wards; 10 years later, a third island was added, housing additional hospitals for use as quarantine zones. Throughout much of its history, corruption was one of Ellis Island's biggest problems. In 1901 President Theodore Roosevelt fired several high-ranking officials, including the commissioner of immigration and the head of the Bureau of Immigration. Investigation found frequent instances of immigrants being pressured into bribing inspectors, with many being detained if they questioned the need for the bribe or did not (or were not able to) produce the money. Attractive young women, having survived the passage in steerage, were forced to grant sexual favors to inspectors to guarantee admittance to the country. Inspectors sold items such as lunches and railroad tickets at exorbitant prices, forcing the new immigrants to pay, with the officials and inspectors taking the additional revenue for themselves. Workers frequently lied about the exchange rate, pocketing the extra money, while other inspectors sold fake citizenship certificates, giving a cut of the proceeds to ship officers. To Roosevelt's mind such corruption could not stand and needed to be stopped. Roosevelt appointed William Williams, a New York lawyer, as commissioner in April 1902. Williams created an environment in which the immigrants were treated with respect, consideration, and kindness. Signs were posted throughout the island promoting kindness and respect, serving as a constant reminder to workers on how to conduct themselves. Williams's duty was to undo the damage caused by corruption.
Many European immigrants came to the United States during World War I, but passage was eventually prohibited. Many immigrants stayed on Ellis Island because they could not be sent back to their home countries, and the island served as a confinement center for 1,500 German sailors and 2,200 suspected enemy agents and other foreigners. Travel by ship was hazardous because of the frequency of submarine attacks, and many European nations shut down their borders. Additionally, the navy took over the island's large hospital during the war in order to care for injured sailors and soldiers. As a result, from 1918 to 1919 many immigrants and suspected subversives were taken off the island and sent elsewhere. During the Red Scare, immigrants suspected of involvement with radical organizations or under suspicion of fomenting revolution were deported from Ellis Island. Such views were reinforced by the sabotage inflicted near Ellis Island on July 30, 1916. The Black Tom Wharf on the New Jersey shore was located about 300 yards from Ellis Island. Here there was a railroad yard and a place for barges to load cargo. On July 30 several railroad cars and as many as 14 barges were loaded with dynamite, ready to have their cargoes transferred to waiting freighters. The cargoes exploded early that morning, causing extensive damage to Ellis Island and creating a blast that was felt as far away as Pennsylvania. The damage to Ellis Island was estimated at $400,000: broken windows, jammed doors, and demolished roofs. During the chaos 125 workers transferred nearly 500 immigrants to the eastern part of the island and ferried them over to the Manhattan Barge Office. Ellis Island reopened in 1920. Throughout the history of Ellis Island, laws and regulations were enacted to decrease the number of immigrants entering the United States. For instance, restrictionist groups such as the Immigration Restriction League pressed for such measures; the Chinese Exclusion Act of 1882 prohibited Chinese immigration for 10 years.
This act continued to be reassessed and renewed until 1943. In 1917 a new Immigration Act came into effect, further reducing immigration; its mandatory literacy tests allowed for the exclusion of more and more potential immigrants. While these acts did limit the number of new people entering the United States, more than half a million passed through Ellis Island in 1921 alone. In 1924 quota laws and the National Origins Act were passed by Congress; these laws allowed limited numbers of specific ethnic groups to be given entry into the country, as determined by the 1890 and 1910 censuses. Some 33 different classes of immigrants to be denied entrance were named in the legislation. In effect, the laws favored northern European settlers over what were at the time predominantly southern and eastern European immigrants. Adding an additional layer of bureaucracy for potential immigrants, following World War I it became necessary to apply for visas in one's home country before being allowed to enter the United States. This increased the complexity of the immigration process, as it required a great deal of paperwork and medical inspection before arrival in the United States. After 1924 Ellis Island stayed in use, but more as a quarantine and detention center than as a center for the processing of immigrants. Those who stayed on Ellis Island tended to be those with complications in their medical records or those who had been displaced. Immigrants in general entered the United States through other locations. Proposals were made as early as 1924 to close down the island, but this did not occur until 1954. Before that, Ellis Island was used as a place to confine enemy foreign nationals during World War II. In 1986 the island's main building underwent a significant restoration, and Ellis Island reopened in 1990 as a museum.
Here visitors can access the records of family members who came to or passed through Ellis Island during its tenure as the largest entry point for immigrants into the United States. Further reading: Anderson, Dale. Arriving at Ellis Island: Landmark Events in American History. Strongsville, OH: World Almanac Library, 2002; Brownstone, David M., Irene M. Franck, and Douglass Brownstone. Island of Hope, Island of Tears: The Story of Those Who Entered the New World through Ellis Island—In Their Own Words. New York: Barnes and Noble, 2000; Houghton, Gillian. Ellis Island: A Primary Source History of an Immigrant's Arrival in America. New York: Rosen Publishing Group, 2003; Reeves, Pamela. Ellis Island: Gateway to the American Dream. New York: Barnes and Noble, 1998. Nicole DeCarlo environmentalism/conserving nature New conceptions of how humans should interact with the natural world put down roots in 19th-century America. Aristocratic Europe's pastoral perspective valued neatly kept farms and artfully landscaped vistas. Some Americans had different views. Mid-19th-century Massachusetts transcendentalist Henry David Thoreau studied natural processes and experimented with a new kind of natural simplicity at Walden Pond, bemoaning the noisy incursion of trains. Gaining influence after his death in 1862, Thoreau fathered what eventually became an environmental movement. By the first half of the 20th century, a growing U.S. conservation movement had saved some of the nation's most spectacular natural landscapes. In 1872 President Ulysses S. Grant and Congress created Yellowstone National Park in Montana and Wyoming, officially described as "a pleasuring-ground for the benefit and enjoyment of the people." Grant was the first in a series of presidents to protect certain lands from most kinds of human exploitation. Many individual states mounted smaller parks projects. By 1890, when the U.S.
census revealed that America's frontier—its stock of unclaimed land—had virtually disappeared, rescuing remaining natural treasures took on new urgency. California's Yosemite became a national park in 1890. Taking office in 1901, Theodore Roosevelt, an outdoorsman himself, initiated conservation programs that truly reshaped the nation. During his progressive presidency, Arizona's Grand Canyon and four other national parks were established. Advised by forester Gifford Pinchot, Roosevelt set aside more than 231,000 square miles of forested land and established the National Forest Service. His 1906 Antiquities Act helped to identify and preserve prehistoric and historic sites of special significance, including some Indian structures and major Civil War battlefields. President William Howard Taft in 1910 created Montana's 1,600-square-mile Glacier National Park, long the dream of Forest and Stream editor George Bird Grinnell. But later that year a controversy between Taft and Pinchot over the proper use of forest set-asides led to Pinchot's firing and became a factor in Roosevelt's "Bull Moose" campaign against Taft in 1912. Their political feud revealed some of the difficulties and ironies of a nation legislating "wilderness" and scenic beauty. Especially in the American West, where the federal government owned a large percentage of the land, many interests clamored for greater commercial and personal access. Was providing seemingly untouched natural beauty to awed urban visitors really more important than a rancher, miner, or farmer making a decent living? Conservationists were often a minority in these local and regional arguments, although railroad interests often supported conservation projects that enhanced tourist travel by train.
Additionally, although this would hardly have bothered most white people at that time, many conservation, preservation, and set-aside programs effectively severed Native American tribes from their traditional uses of Yellowstone, Yosemite, Glacier, and other new American shrines. What conservationists worshipped as "virgin land" or "wilderness" had in many cases been used by Indians for centuries as habitat and hunting and fishing grounds. Conservation leaders like Scots-born John Muir, a founder in 1892 of the Sierra Club, and Iowa native Aldo Leopold, cofounder in 1935 of the Wilderness Society, were naturalists who were primarily interested in protecting the natural environment as much as possible from human disturbance. Although they and their many allies worked closely with government agencies, there was a constant struggle over how protected lands could be used. Mining, grazing, farming, and timbering rights in park reserves were clearly a source of tension. So too was the very purpose of a growing national parks system—to expose large numbers of human visitors to "nature." Tourism also could, and certainly would, endanger truly wild places. Teddy Roosevelt once spent four days in Yosemite with Muir camping and hiking, but that did not mean that conservationists always had the ear of politicians. President Woodrow Wilson, who in 1916 authorized creation of the National Park Service, had three years earlier accepted congressional approval of the Hetch Hetchy dam that flooded part of Yosemite in order to provide San Francisco with drinking water. It was a bitter defeat for the Sierra Club and Muir's last great wilderness crusade. In 1907 Pinchot had defined conservation as "the use of the Earth for the good of Man." By the 1930s the New Deal was siting and building huge dams for navigation, irrigation, and hydroelectric power across the American landscape.
Especially in Appalachia, site of the Tennessee Valley Authority (TVA), and along the Columbia River in the Pacific Northwest, these dams permanently reshaped ancient landscapes and affected fish and wildlife, usually for the worse. In this same era President Franklin D. Roosevelt's young men's work initiative, the Civilian Conservation Corps, was a boon for neglected or underfunded national parks. Trails were cut, scenic overlooks created, and benches and tourist facilities provided or improved. But when the economy recovered, this meant that even more people could easily leave their own imprint on the landscape. Starting his career with the Forest Service in 1909, Aldo Leopold came to believe that managing forested areas was not the same as protecting trees and their ecosystem. Leopold and others began to believe that nature's "rights" should and sometimes must trump human needs and desires. In his influential 1949 book, A Sand County Almanac, published after his death in a fire near his Wisconsin home, Leopold called for a "land ethic" that would encompass respect for "soils, waters, plants and animals." It was an early intimation of what emerged in the 1960s as a new environmental, or "Green," movement that looked beyond scenery and natural magnificence to the fundamental health of "soils, waters, plants and animals" and humans worldwide. Further reading: Nash, Roderick. Wilderness and the American Mind. 3rd ed. New Haven, CT: Yale University Press, 1982; Guha, Ramachandra. Environmentalism: A Global History. New York: Longman, 2000. Marsha E. Ackermann Espionage and Sedition Acts On June 15, 1917, a little over two months after the United States entered World War I as an associated power of the Allies, Congress passed the Espionage Act, which criminalized conveying information intended to interfere with the success of the American armed forces.
The wording of the law was general rather than enumerating specific potential instances, and a year after its passage the socialist Eugene Debs was arrested for obstructing military recruiting with an antiwar speech delivered in Canton, Ohio. He ran for president from prison as a way to draw public attention to his fate, and his sentence was commuted by President Harding after he had served about a third of it. Dozens of socialist and antiwar newspapers and magazines were forced to avoid coverage of the war, suspend publication, or risk having the postmaster general revoke their right to use the mails. The law was challenged in Schenck v. United States, when Charles Schenck was arrested for circulating a pamphlet calling for resistance to the draft; the Supreme Court upheld the law, and its decision introduced two common phrases of American legal language. Justice Oliver Wendell Holmes, the author of the decision, wrote that the guarantee of free speech did not protect words that presented a "clear and present danger," and that "the most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre." In 1918 the Sedition Act extended the bounds of the Espionage Act, outlawing various forms of speech against the government. Most of the provisions associated with the two acts were repealed in 1921. Further reading: Holmes, Oliver Wendell. The Common Law. Library of Essential Reading Series. New York: Barnes & Noble, 2004; Murphy, Paul. World War I and the Origins of Civil Liberties in the United States. New York: Norton, 1979. Bill Kte'pi Estrada Cabrera, Manuel (1857–1923) Guatemalan president Manuel José Estrada Cabrera was president of Guatemala from 1898 to 1920 and established a tradition of Guatemalan strongmen that was to be revived by Jorge Ubico and later presidents. Estrada Cabrera is also credited with running the longest one-man dictatorship in Central American history.
Born on November 21, 1857, in Quezaltenango in the southwest of Guatemala, the nation’s second-largest city, Estrada Cabrera was educated in Roman Catholic schools, training as a lawyer. After many years practicing in Quezaltenango and then in Guatemala City, he became a judge of the Guatemalan supreme court before entering politics. Elected to congress, he became minister of public instruction, minister of justice, and then minister of the interior during the presidency of José María Reina Barrios. On February 8, 1898, the president was assassinated, and Estrada Cabrera, who was in Costa Rica, returned to Guatemala City. He was the second in line to the presidency. Estrada Cabrera was said to have burst in on the cabinet meeting where the politicians were discussing the succession. Charging in unannounced, he walked around the cabinet ministers and then drew a revolver from his pocket. Placing it on the table, he announced: “Gentlemen, you are looking at the new president of Guatemala.” Estrada Cabrera was sworn in as the provisional president, elected soon afterward, and officially inaugurated on October 2, 1898. During his first term in office he respected the constitution, which forbade presidents’ serving more than one term. Before this first term was over, Estrada Cabrera changed the constitution to allow himself to be reelected in 1904, again in 1910, and a third time in 1916, remaining president until April 15, 1920. Political commentators credit him with no personal popularity and no program of action or change beyond whatever might keep him in office. During his time as president, Estrada Cabrera certainly gave Guatemala internal peace, and this was welcomed by the landowners and the Guatemalan middle class, although the latter gradually tired of his rule. There had been a financial crisis just before he came to power, and he managed to steer the country through it.
He also encouraged investment by the United Fruit Company, which during his presidency started to take over the economic life of the country. Minor Keith of the United Fruit Company was also granted the rights to establish a railway across Guatemala in 1906. When it was completed, the company took ownership not only of the railway but also of 170,000 acres of agricultural land. The actions of the United Fruit Company led to increased control of the Guatemalan economy by U.S. business interests, in contrast to the situation faced by U.S. companies in Nicaragua, where the reformist president, José Santos Zelaya, was trying to replace U.S. businesses with European ones. In 1910 the Chicago Tribune sent Frederic Palmer to visit Guatemala and other parts of Central America. He found that the president was living not in the presidential palace but in a nearby building that was easier to secure. In a meeting with the president, the journalist was told that the Guatemalan army numbered 15,000 to 16,000 but that in time of war 60,000 could be fielded, which meant that, relative to its population, Guatemala had one of the largest standing armies in the world. Certainly Estrada Cabrera used the army and, more importantly, his secret police, controlled by Justo Rufino Barrios, to ensure he had no opposition, rolling back the liberal measures that had been introduced just before he came to power. He also used the presidency to loot the treasury and amass a large personal fortune. Estrada Cabrera was also responsible for building a few schools; improving sanitation, especially in Guatemala City, the nation’s capital; and raising the level of agricultural production. However, he kept the Indians in a terrible state, marginalizing them politically and economically. One of Estrada Cabrera’s eccentricities was to establish a cult of Minerva in Guatemala, with Greek-style “Temples of Minerva” built in many cities throughout the country.
In 1906 rebels supported by other governments in Central America threatened to push him from office. However, Estrada Cabrera managed to get help from the neighboring dictator Porfirio Díaz of Mexico. The Mexicans later became worried by Estrada Cabrera’s power, and after the Mexican Revolution he was to face bitter political opponents on Guatemala’s northern borders, although internal strife in Mexico prevented them from intervening in Guatemala. In April 1920 an armed revolt overthrew Estrada Cabrera, and the former dictator was thrown into jail. On April 15 the congress declared Estrada Cabrera medically unfit to hold office. He was replaced by Carlos Herrera and then by José María Orellana. This change ushered in a period of liberal political laws and a new reform government, which recognized opposition parties. Estrada Cabrera had hoped for U.S. intervention to save him, but the U.S. president, Woodrow Wilson, decided not to intervene. In fact, the conspirators who overthrew Estrada Cabrera moved only when they had information that Wilson would not act. Manuel Estrada Cabrera died on September 24, 1924, in jail in Guatemala City. Further reading: Rendón, Mary Catherine. Manuel Estrada Cabrera: Guatemalan President 1898–1920. Oxford: University of Oxford, 1988. Justin Corfield Ethiopia (Abyssinia) and Italian aggression In October 1935 Italian armies invaded Abyssinia (Ethiopia), beginning an eight-month war and a six-year occupation. What started purely as an Italian colonial venture to expand Italy’s control and to impress the European powers came to have a significance out of all proportion to its original objectives. Italy, as a unified nation, did not come into existence until the Risorgimento, completed in 1870. For that reason, it was very late in developing an overseas empire; most of the colonial pickings had been taken by France and Britain.
Italy had managed in the closing years of the 19th century to establish itself in eastern Africa (Eritrea), although a sound beating by the Abyssinians in 1896 at the Battle of Adowa stopped its progress there. Although Adowa was the most severe defeat ever suffered by a European army in Africa, Italy managed not only to keep its Eritrean possessions but to gain a bit more as well. In 1908 Somalia was declared an Italian colony, and the border between Somalia and Ethiopia was agreed on. Additionally, in 1911–12 Italy managed to seize the Ottoman possessions in Libya. None of this, however, satisfied a nation that as part of its mythic past looked back on the Roman Empire. Compounding that sense of unfulfilled entitlement, Italy, although an ally in World War I, had not gained the territory it believed was its due. The sense of injury and historic destiny was given added impetus in the 1920s and 1930s with the rise of the Fascists. In the interim several events occurred. Although Abyssinia was an independent nation, it was not altogether considered the equal of other nations; when it applied for membership in the League of Nations, several delegates opposed its entry. At first Italy opposed Abyssinia’s application but then supported it. Abyssinia became a full member of the league in 1923. That fact would have later consequences, as membership meant that Italy could not attack Abyssinia without the threat of action by the entire league. Italy and Abyssinia signed a treaty of friendship in 1928, but the Italians maintained a very strong military presence on the borders and on occasion sent military detachments across them to see how far they could push without starting a war. By 1932 Benito Mussolini was committed to an eventual war of conquest in the area, and military planning began at about this time.
Finally, in 1934 the Italians engineered a border incident that would become the official cause of the war, which started in October 1935. Military planning and the allocation of Italy’s resources for this war became a major effort. While in retrospect the campaign was one of tanks, aircraft, and machine guns against a primitively armed native population, there was no assumption of an easy military victory. Adowa, less than 40 years before, had been a serious and sobering defeat. Even new weapons, as the British, Spanish, and French had learned, did not guarantee victory in colonial wars. The Abyssinians, with their population of an estimated 12 million living in a rugged and wide-ranging homeland, could not be counted on to surrender at the first sight of an Italian tank or airplane. On October 3, 1935, Italian forces attacking from Eritrea in the north and Italian Somaliland in the south invaded Abyssinia, meeting with substantial opposition from the very beginning. Mechanized and motorized forces and aircraft overpowered organized resistance. By May 5, 1936, the Italians had managed to defeat the Abyssinian army and entered the capital of Addis Ababa. Italian forces suffered about 5,000 casualties; most of these were natives serving as part of the Italian force. With the capture of Abyssinia’s capital, the Italians believed their mission accomplished and organized their African possessions into one large colony, Africa Orientale Italiana (AOI), which they divided into six governorships. Occupying and controlling all of the territory turned out to be a different matter: They never succeeded in holding more than half of the country. There was widespread opposition throughout the countryside that grew in severity. In 1937 an attempted assassination of Marshal Rodolfo Graziani, the Italian viceroy, spurred extensive reprisals. This opposition kept up until the Italians were finally driven out in 1941 by the British.
Aside from the military aspects of the campaign, which showed how new technology could be effectively applied against native armies, the war had a political significance on an international scale. The conflict showed very quickly the ineffectiveness of the League of Nations. Further, it demonstrated both splits between what were supposed to be solid allies and the lack of internal resolution of those allies. On October 10, 1935, the league agreed to impose economic sanctions against Italy as punishment for its unprovoked invasion in direct defiance of the league’s rules. The sanctions were not enthusiastically endorsed, although Canada suggested additional oil sanctions be applied. Part of the problem was that the league’s standing did not support strong measures. Another factor was that despite the fact that Abyssinia was a member, many other members considered it little more than a very backward region. In their view, despite the unanimous declaration of 1923, Abyssinia should not be thought of as an independent nation. Also, sanctions were useless unless they were supported by everyone. The United States, which was not a member of the league, increased its exports of oil to Italy at this time. There were attempts by individual nations to resolve the crisis through diplomacy, but these were not only ineffective but did not reflect well on the proposing nations. In negotiations with the Italians, the British and French offered to let Italy have large parts of the country. Britain would then cede to Abyssinia part of British Somaliland, including one of its ports. Neither Haile Selassie nor any member of his government was brought into these talks. These negotiations were not looked on well by several members of the league, who rightly thought they rewarded aggression. Thus, the plan died, and Italy continued its war. Abyssinian emperor Haile Selassie went to the League of Nations for assistance in June 1936.
He got nothing for his efforts. Italian claims of atrocities partially undermined Ethiopia’s case, although it was clear that the league would not have supported Ethiopia in any event. The occupation of Abyssinia was not a quiet experience for occupiers or occupied. The Italians brought in the machinery and infrastructure of a colonial government, but nothing went exactly as planned. For one thing, there was the active opposition of the natives, which never decreased from the day Addis Ababa fell until the British liberated the country. In 1935 the Italians opened a concentration camp in Somalia. Eventually, more than 6,000 people from all over the AOI, but principally Abyssinia, were processed there. Its peak operating period was from the major repression of 1937 until the British arrived in 1941. In 1937 some opponents of the regime were sent to Eritrea and from there on to Italy. In a reversal, political detention camps were opened in the AOI that were used to house Italian political dissidents. There were reported to be mass executions as well. There were some positive developments. The Italians did bring an improvement in health care. Also, they stopped much of the intertribal fighting that had always plagued Abyssinia. These advantages must be seen, however, against the larger issue of Italy forcibly occupying a nation and repressing its people. One of the major reforms was a negative one concerning education. Italy feared the educated elite in Abyssinia, which it correctly saw as the backbone of opposition. The Italians repressed this elite and also ensured that there would be no schooling beyond the most basic for the general population. Finally, the area was liberated in 1941 and administered by the British until after the war. Then Italy returned, but only as a trusteeship power in Somaliland. Both Eritrea and Somalia eventually gained their independence.
Abyssinia, more commonly referred to now as Ethiopia, regained its independence with the return of its emperor. For what started as a colonial venture, the war between Italy and Abyssinia had far-reaching consequences. It demonstrated what military force could do against civilian populations and how far international bullying could go, and it improved the chances of a war in Europe. Mussolini’s popularity and political strength in Italy were improved by the war. In the minds of many, the victory and acquisition of land removed some of the perceived disgrace that came from the consequences of World War I. Mussolini, who often ruled by the creation and management of crises, mobilized a great deal of support for the prosecution of the war. In addition, the threat of league sanctions helped strengthen popular resolve, because the Italian government managed to stir the population into a feeling that it was united against the league, improving the degree of political cohesion, at least for a while. Even the Catholic Church, which sometimes opposed Mussolini’s policies, came down publicly in favor of the Italian effort in Africa. Another development of great significance was the deployment of the technology of destruction. The Italians used their air force extensively in this war. Pioneers in the use of aircraft against ground targets, they had used aircraft in Libya against the Ottomans and later against the Libyan natives from 1921 to 1931. Now, after also leading the world in developing the theory of air power, they showed themselves to be expert practitioners. The latest in modern weaponry was used more widely and ruthlessly than ever against not only combatants but also the civilian population. The Italians also bombed Red Cross stations, hospitals, ambulances, and civilian targets. In a way, the air attacks on the Abyssinians prefigured not only Guernica but later Warsaw, Rotterdam, and London.
On the continental scale, the war accelerated the political decisions and rivalries in Europe. It destroyed the goodwill that had existed between Britain and Mussolini’s Fascist government. The crisis surrounding the war highlighted and increased the mutual suspicion between France and Britain. That impression of Anglo-French disunity was reinforced at Munich in 1938, leading Adolf Hitler and Mussolini into assumptions that would take them to war in 1939 and 1940. The alienation of Italy from its former allies and Europe at large brought it closer to Hitler’s Germany. At the same time it deepened the contempt that Hitler and Mussolini had for the Western powers, in large part because of their inability to do anything constructive. Finally, it signaled the effective end of the League of Nations as a body capable of protecting small nations from aggression and preventing aggressive war. There had been defections from the league at least as far back as the 1920s, based on smaller nations stating that the league was useless in protecting them. The Japanese invasion of Manchuria in 1931 and the invasion of Abyssinia only demonstrated and reinforced the perceived weaknesses of the league. While the league could point to accomplishments in areas such as improving the health of people in poorer nations, it could not stop a war. Further reading: Andall, Jacqueline, and Derek Duncan, eds. Italian Colonialism: Legacy and Memory. Oxford: Peter Lang, 2005; Ben-Ghiat, Ruth, and Mia Fuller, eds. Italian Colonialism. New York: Palgrave Macmillan, 2005; Lamb, Richard. Mussolini as Diplomat: Il Duce’s Italy on the World Stage. New York: Fromm International, 1999; Larebo, Haile M. The Building of an Empire: Italian Land Policy and Practice in Ethiopia, 1935–1941. New York: Oxford University Press, 1994; Mockler, Anthony. Haile Selassie’s War. New York: Olive Branch Press, 2003; Sbacchi, Alberto. Legacy of Bitterness: Ethiopia and Fascist Italy, 1935–1941. Lawrenceville, NJ: Red Sea Press, 1997.
Robert Stacy eugenics Sir Francis Galton, a cousin of Charles Darwin, coined the term and concept of eugenics in 1883. Eugenics, often defined as “well-born,” was an effort to apply Darwinian evolution and Gregor Mendel’s recently recognized genetic discoveries to the physical, mental, and moral improvement of human beings. Eugenics gained many supporters in the progressive-era United States, Canada, and much of Europe. But the concept was riddled with class and racial biases that inflicted harm on thousands of supposedly “inferior” humans. When the excesses of Adolf Hitler’s World War II eugenics programs became known, this effort at human engineering fell into disrepute. Galton was a respected scientist and statistician, but his eugenics notions were based less on evolution than on Social Darwinism, a philosophy that conveniently justified growing inequities in industrializing societies. Nations could no longer wait for evolution to weed out the weak and stupid; rather, experts would facilitate the process of improving the race, by which most eugenicists meant white northern Europeans. Positive eugenics tried to encourage “superior” men and women to produce superior offspring. (The Galtons were childless.) Negative eugenics went much further. It proposed to discourage “defective” humans from reproducing at all. Soon, eugenics agencies and research facilities were springing up. A eugenics laboratory, later named in Galton’s honor, was founded at London’s University College in 1904. In the United States Charles Davenport created a Eugenics Record Office on Long Island. U.S. president Theodore Roosevelt, fearing “race suicide,” heartily approved of this burgeoning movement to weed out the “unfit.” The state of Indiana in 1907 was the first to pass a eugenics sterilization law. Buck v. Bell, a eugenics sterilization case from Virginia, came before the U.S. Supreme Court in 1927.
Speaking for eight of the nine justices, Oliver Wendell Holmes, Jr., ruled in favor of the state. Carrie Buck, he noted, “is a feeble-minded white woman . . . the daughter of a feeble-minded mother . . . and the mother of an illegitimate feeble-minded child,” adding, “Three generations of imbeciles are enough.” By 1933, 28 states had sterilized more than 16,000 unconsenting women, men, and children. In Canada interest in eugenics peaked among English speakers during the Great Depression, when the poor and sick seemed an impossible burden. The Soviet Union and many European nations also promoted fitter families while trying to minimize the “unfit.” Everywhere the poor and uneducated, racial and ethnic minorities, and criminals were overwhelmingly the targets of “genetic cleansing.” But none took eugenics as far as Nazi Germany, where Hitler copied many aspects of U.S. eugenics practices and passed laws in the 1930s that foreshadowed the elimination of millions of Jews, Gypsies, gays, and others considered unfit. In the wake of these atrocities, most eugenics organizations disbanded or rethought their goals. In 1942 the Supreme Court struck down involuntary sterilization of criminals; in 2001 Virginia apologized for the Buck decision and other eugenics interventions. As genetic science has expanded dramatically, the ethics of genetic improvement remains a very touchy topic. Birth control pioneers Margaret Sanger of the United States and Marie Stopes in Britain were both ardent eugenicists, leading today’s abortion foes to distrust the underlying aims of family planning. New technologies raise the specter of prenatal engineering for “perfect” babies—a concept Galton did not precisely foresee but would probably have applauded. Further reading: Kevles, Daniel J. In the Name of Eugenics: Genetics and the Uses of Human Heredity. Cambridge, MA: Harvard University Press, 1995; Paul, Diane B. Controlling Human Heredity, 1865 to the Present.
Atlantic Highlands, NJ: Humanities Press, 1995. Marsha E. Ackermann existentialism Existentialism is a chiefly philosophical and literary movement that became popular after 1930 and that provides a distinctive interpretation of human existence. The question of the meaning of human existence is of supreme importance to existentialism, which advocates that people should create value for themselves through action and by living each moment to its fullest. Existentialism serves as a protest against academic philosophy and possesses an antiestablishment sensibility. It contrasts with both the rationalist tradition, which defines humanity in terms of rational capacity, and positivism, which describes humanity in terms of observable behavior. Existential philosophy teaches that human beings exist in an indifferent, objective, ambiguous, and absurd context in which individual meaning is created through action and interpretation. Although there is a diversity of thought in the movement, its thinkers agree that all individuals possess the freedom and responsibility to make the most of life. Existentialists maintain the principle that “existence precedes essence,” an observation made by Jean-Paul Sartre (1905–80), atheist humanist and the only self-proclaimed “existentialist.” This principle holds that there is no predefined essence of the human being and that essence is what a human makes for itself. Each of the existentialist thinkers, however, worked out his or her own interpretation of existence. Søren Kierkegaard (1813–55), a religious Danish philosopher known as the “father of existentialism,” possessed a belief in the Christian God. He attacked abstract Hegelian metaphysics and the worldly complacency of the Danish Church. Kierkegaard believed that individual existence entails being withdrawn from the world, which causes individual self-awareness. Individuals despair when confronted with the truth that their finite existence emerged detached from God.
This despair, thus, gives rise to faith, despite the absurdity of that faith. Other philosophical precursors who are believed to have influenced modern existentialist philosophy include St. Thomas Aquinas (1224–74), Blaise Pascal (1623–62), Fyodor Dostoyevsky (1821–81), and Friedrich Nietzsche (1844–1900). German philosopher Martin Heidegger (1889–1976) believed that the starting place for philosophy should be the study of the nature of the existence of the human being. In his book Being and Time (1927; English translation 1962), he intended to provoke people to ask questions about the nature of human existence. He intended that such questioning would cause people to live a desirable life and “possess an authentic way of being.” Several French authors possessed existentialist beliefs. Parisian-born Gabriel Marcel (1889–1973) advocated that the purpose of philosophy was to elevate human thinking to the point of being able to accept divine revelation. He coined the term existentialism in order to characterize the thought of Sartre and Sartre’s lifelong friend and associate Simone de Beauvoir (1908–86). De Beauvoir, a Parisian existentialist author and feminist, penned She Came to Stay (1943) and The Blood of Others (1945). These works suggested that the viewpoint of someone else is necessary for an individual to have a self or be a subject. Jean-Paul Sartre, also a Paris native, popularized existentialism in his widely known 1945 lecture “Existentialism and Humanism.” The lecture set out the main tenets of the movement. Taking Sartre’s lead, existentialists rejected the pursuit of happiness, as it was believed to be nothing but a fantasy of the middle class. Sartre’s existential thought can best be observed in his novel Nausea (1938), credited as the manifesto of existentialism, and his play No Exit (1944). Existential thought became further disseminated through Sartre’s colleagues, who included Maurice Merleau-Ponty (1908–61) and Albert Camus (1913–60).
Merleau-Ponty sought to provide a new understanding of sensory phenomena and a redefinition of the relationship between subject and object and between the self and the world. Perhaps the most influential and well-known 20th-century existential writers, Sartre and Camus, also took part in the French Resistance, having been galvanized by the atrocities of World War II. Although the only self-professed existentialist was Sartre, the other thinkers associated with the movement are associated with it because of their similar beliefs. Camus wrote novels concerned with the existential problem of finding meaning in an otherwise meaningless world and taking responsibility for creating human meaning. He advocated that the chief virtue of humanity was the ability to rebel against the corrupt and philosophically undesirable status quo. From the 1940s on, the movement influenced a diversity of other disciplines, including theology, and thinkers such as Rudolf Bultmann (1884–1976), Paul Tillich (1886–1965), and Karl Barth (1886–1968), whose 1933 biblical commentary on the Epistle to the Romans inspired the “Kierkegaard revival” in theology. The principles of existentialism entered psychology through the 1965 work of Karl Jaspers (1883–1969), General Psychopathology, and influenced other psychologists such as Ludwig Binswanger (1881–1966), Otto Rank (1884–1939), R. D. Laing (1927–89), and Viktor Frankl (1905–97). Other writers who expressed existentialist themes included the marquis de Sade (1740–1814), Henrik Ibsen (1828–1906), Hermann Hesse (1877–1962), Franz Kafka (1883–1924), Samuel Beckett (1906–89), Ralph Ellison (1914–94), Marguerite Duras (1914–96), and Jack Kerouac (1922–69). The work of artists Alberto Giacometti (1901–66), Jackson Pollock (1912–56), Arshile Gorky (1904–48), and Willem de Kooning (1904–97) and filmmakers Jean-Luc Godard (b. 1930) and Ingmar Bergman (1918–2007) also became understood in existential terms. Further reading: Cooper, David E. Existentialism. 
2nd ed. Oxford: Blackwell, 1999; Grossmann, Reinhardt. Phenomenology and Existentialism: An Introduction. London: Routledge, 1984; Kaufmann, Walter Arnold. Existentialism from Dostoevsky to Sartre. New York: Penguin Group, 1988; Luper, Steven. Existing: An Introduction to Existential Thought. New York: McGraw-Hill, 2000; Oaklander, L. Nathan. Existentialist Philosophy: An Introduction. New York: Prentice Hall, 1995; Raymond, Diane Barsoum, ed. Existentialism and the Philosophical Tradition. Englewood Cliffs, NJ: Prentice Hall, 1991. Christopher M. Cook expatriates, U.S. Since the beginning of the U.S. republic, artists and writers have felt the need to study, paint, and write in Europe while maintaining their U.S. citizenship. For these artists, insecure about their young nation’s rawness, Europe long represented true civilization, steeped in aristocratic traditions. Before 1850 some U.S. painters trained in Europe, but few stayed beyond their apprenticeships. By the middle of the 19th century, some found it more advantageous to their careers to stay. John Singer Sargent and Mary Cassatt spent major parts of their painting careers in Europe; James McNeill Whistler, who left for Europe at age 21, never returned home. By 1904 the California impressionist Guy Rose observed that Giverny, where Claude Monet lived and painted, was overrun by American artists. Affluent writers like Henry James and Edith Wharton began to establish residences in Europe during the late 19th century. In 1908 Ezra Pound installed himself in London; a few years earlier Gertrude and Leo Stein had left Baltimore for Paris, where they became important patrons of modern art. U.S. artists understood that they could only keep up with trends in modern art (cubism, fauvism) by going to Paris, and in 1913 two of them, Stanton Macdonald-Wright and Morgan Russell, created a movement called synchromism, which applied methods of musical composition to painting by using a color wheel.
It was the only school of modern painting up to that time founded by Americans. St. Louis–born poet T. S. Eliot made his home in London after 1914. By the 1920s artists including Man Ray and Thomas Hart Benton and musicians George Gershwin and Virgil Thomson were living in Europe for extended periods. The flow of writers accelerated greatly as politically committed writers came to Europe to assist the British in World War I, and others, who had been too young for military service, arrived once the war ended. Many gravitated to the salon led by Gertrude Stein, who coined the phrase the lost generation to describe them. This was a generation disgusted with U.S. materialism and prudery, including Prohibition; they included Ernest Hemingway, F. Scott Fitzgerald, John Dos Passos, E. E. Cummings, Djuna Barnes, and Thornton Wilder. Expatriates even had a meeting place in Paris at Shakespeare and Company, a bookstore run by the American Sylvia Beach. The literary critic Malcolm Cowley described expatriation during the 1920s as a rite of passage based on the idea that “the creative artist is . . . independent of all localities, nations and classes.” African Americans particularly found Europe to be a refuge from racial discrimination. Harlem Renaissance writers Langston Hughes, Claude McKay, and Countee Cullen lived in Europe during the 1920s, as did dancer Josephine Baker. Many expatriates were forced home by the Great Depression; the scandalous writer Henry Miller was an exception, spending the decade in France. After World War II writers continued to expatriate. African Americans Richard Wright and James Baldwin traveled to avoid continuing bigotry; others such as Irwin Shaw, William Styron, and several beat writers left to avoid the excesses of the U.S. Red Scare. Writers, like many other Americans trying to avoid the military draft, sat out the Vietnam War in Canada and Europe.
Now, as historian Michel Fabre notes, expatriation has come to refer to “living abroad” and has none of the characteristics of exile. See also art and architecture; literature. Further reading: Benstock, Shari. Women of the Left Bank: Paris, 1900–1940. Austin: University of Texas Press, 1986; Crunden, Robert M. American Salons: Encounters with European Modernism, 1885–1917. New York: Oxford University Press, 1993; Fabre, Michel. From Harlem to Paris: Black American Writers in France, 1840–1980. Urbana: University of Illinois Press, 1991. David Miller Parker
The Contemporary World 1950 to the Present
Eastern bloc, collapse of the The end of the cold war was the collapse of the binary international power structure instigated by the military and political rivalry of the United States and the Soviet Union in the wake of World War II. It was also a consequence of the reforms initiated by the first secretary of the Soviet Communist Party in the years 1985–91, Mikhail Gorbachev; the result was the collapse of the Soviet Union and the demise of communist systems in the countries of eastern Europe. east germany One of the symbolic moments announcing the end of the cold war was the fall of the Berlin Wall in November 1989. The Berlin Wall was built in August 1961 in order to prevent refugee migration from the communist German Democratic Republic (GDR) to the Western Federal Republic of Germany (FRG). During the era of the leadership of Erich Honecker, the first secretary of the Socialist Unity Party of Germany from 1971, the GDR remained an orthodox socialist and highly repressive state. The GDR leadership maintained its dictatorial and conservative character. In fact, a diplomatic discord developed between the reform-oriented Gorbachev and Honecker in the late 1980s. In the summer of 1989 Hungary decided to open its border with Austria. A number of GDR residents moved to West Germany through Hungary and Austria. In connection with Gorbachev’s visit for the 40th anniversary of the establishment of the GDR, pro-reform and pro-democracy demonstrations were organized in Leipzig and Berlin, which subsequently spread through the whole GDR. The protesters demanded government guarantees that human rights and civic rights would be respected, as well as that democratic restructuring be initiated. On November 9, 1989, the vehement civic protests and the confusion of the party leadership resulted in an unanticipated decision to annul the requirement for exit visas for East German residents crossing the border between the GDR and FRG.
On November 10, five crossing points in the Berlin Wall were opened, and approximately 40,000 East Berliners crossed into West Berlin. An atmosphere of festivity and celebration prevailed among the crowds, and people on both sides of the Berlin Wall began to make openings in the wall and bring parts of it down. On December 22 the Brandenburg Gate officially opened. The image of East and West Berliners jointly destroying the Berlin Wall became a powerful symbol of the end of the cold war and of the termination of the division of Europe. Honecker resigned from his posts as first secretary of the party and chairman of the Council of State of the GDR on October 18, 1989, and was temporarily replaced by another Communist politician, Egon Krenz. Honecker later fled to Moscow and was extradited in 1992 but avoided trial for health reasons. In March 1990 the first postcommunist democratic elections took place, and the Christian Democratic Union of Germany achieved victory. The fall of the Berlin Wall also paved the way for the reunification of Germany. On October 3, 1990, the GDR ceased to exist, and its territory was absorbed into the Federal Republic of Germany. In 1990 the Socialist Unity Party of Germany transformed itself into the Party of Democratic Socialism.

Poland
In Poland, an indication of political relaxation came between 1986 and 1987 with a general amnesty of political prisoners. A series of strikes in 1988 pressured the communist authorities to re-legalize the independent trade union Solidarity, which had been outlawed after martial law was instituted in Poland in 1981. At that time the first secretary of the Polish Communist Party and the head of state was General Wojciech Jaruzelski. In spring 1989 the reform-oriented factions of the Polish Communist Party decided to enter talks with the dissident groups associated with the Solidarity movement.
The negotiations were chaired jointly by Lech Wałęsa, leader of the Solidarity movement and a winner of the Nobel Peace Prize, and by Czesław Kiszczak, chief of the Polish secret services and minister of internal affairs from 1981. The negotiators agreed that Solidarity would be re-legalized and that partially free parliamentary elections would be organized. In the parliamentary elections of June 4, 1989, Solidarity, which had transformed into the Solidarity Citizens' Committee, won an overwhelming victory, taking 161 of the 460 seats in the Sejm (all of the seats it was permitted to contest) and 99 of the 100 seats in the Senate. A coalition government was formed with a Catholic dissident, Tadeusz Mazowiecki, as prime minister. Jaruzelski served as president of Poland from 1989 to 1990, when Wałęsa was elected to that post. In 1991 free parliamentary elections were organized, and a coalition government of anticommunist groups emerged. The party Social Democracy of the Republic of Poland was formed in 1990 with no official ideological ties, but with evident personal ones, to its communist predecessor.

Hungary
In Hungary, the deteriorating economic situation, due to increasing foreign debt, spurred public debates on the possibility of introducing radical reform policies. These debates facilitated the creation of an opposition movement, the Hungarian Democratic Forum, on September 27, 1987. Its leader was József Antall, a historian known for his involvement in the Hungarian revolt of 1956. In May 1988 the first secretary of the party, János Kádár, was removed from his post and replaced by Károly Grósz.
Grósz was inclined to introduce moderate economic reforms within the socialist framework but was opposed to the idea of organizing a roundtable discussion between the party and the anticommunists. In March 1989, the Hungarian Democratic Forum held a national meeting at which it demanded democratic reforms and agreed to enter negotiations with party representatives at the elite level. On March 22, 1989, the National Roundtable Talks were organized, and their results were a series of reforms: The power monopoly of the party was abandoned, the constitution was amended, and multiparty democracy was reconstituted in 1989. At its congress in October 1989, power within the Hungarian Socialist Workers' Party was seized by soft-liners such as Gyula Horn and Imre Pozsgay, and the party renounced its Marxist-Leninist legacy and endorsed a social-democratic political direction, changing its name to the Hungarian Socialist Party. In April 1990 democratic parliamentary elections took place. The result was the victory of the Hungarian Democratic Forum, and its leader, József Antall, became prime minister of the coalition government.

Prior to its eventual dismantling, the Berlin Wall became a site for graffiti, much of it in favor of the destruction of the wall.

Czechoslovakia
In Czechoslovakia in January 1989, students organized a peaceful rally to commemorate the anniversary of the death of Jan Palach, a student who had immolated himself in protest against the Warsaw Pact invasion of the country in 1968, during the Prague Spring. The student demonstrations were brutally broken up by the riot police. Another student demonstration was organized by the Socialist Youth Union on November 17, 1989, in Prague and Bratislava.
More than 30,000 participating students commemorated the anniversary of the death of another Czech student, Jan Opletal, who was killed in 1939 by pro-Nazi forces. On November 19, various opposition and human rights groups created Civic Forum, with Václav Havel as its spokesman. Together with its Slovak counterpart, Public Against Violence, Civic Forum demanded the resignation of the first secretary of the Czechoslovak Communist Party, Miloš Jakeš, holding him responsible for the maltreatment of the demonstrating students. On November 24 Jakeš resigned from his post. On the same day Alexander Dubček, the architect of the Prague Spring, made a public speech. On November 28 Prime Minister Ladislav Adamec declared the abandonment of the power monopoly of the Czechoslovak Communist Party. On December 17 the barbed-wire border between Czechoslovakia and Austria was ceremonially cut. The first postcommunist democratic parliamentary elections took place in June 1990. Two anticommunist blocs, Civic Forum and Public Against Violence, emerged victorious with over 50 percent of the votes. The Czechoslovak transformation was called the Velvet Revolution because, in spite of the deeply orthodox and dictatorial character of the Czechoslovak communist regime, the collapse of the system and the initiation of democratic change were accomplished without violence.

Bulgaria
In Bulgaria the late 1980s witnessed the emergence of discriminatory nationalistic policies authored by the president of Bulgaria and first secretary of the Bulgarian Communist Party, Todor Zhivkov. These were directed against Bulgaria's large Turkish minority. The result was a massive emigration of Bulgarian Turks and a rapidly deteriorating economic situation in the country, which increased opposition to Zhivkov among reform-oriented members of the party.
During the Central Committee meeting on November 10, 1989, the foreign minister, Petar Mladenov, condemned Zhivkov's hard-line economic policies and authoritarianism and managed to secure Zhivkov's removal from his leadership position. Mladenov subsequently took over Zhivkov's party and presidential posts and publicly pledged a turn toward political democratization, far-reaching economic reforms, and amnesty for political prisoners. In January 1990 pro-democracy demonstrations involving 40,000 people took place in Sofia. As a consequence the Bulgarian National Assembly made a number of landmark decisions: The power monopoly of the Bulgarian Communist Party was revoked, the Bulgarian secret police was dismantled, and Zhivkov was charged with fraud and corruption. In April 1990 the Bulgarian National Assembly elected Mladenov as president and subsequently dissolved itself. The Bulgarian Communist Party renounced its ideological attachment to Leninism and transformed itself into the Bulgarian Socialist Party, with Alexander Lilov as its chairman. In June 1990 postcommunist democratic elections were organized, and the Bulgarian Socialist Party achieved a narrow victory. Mladenov was later forced to resign from the presidency after it was made public that he had considered using force against the pro-democracy demonstrators in Sofia earlier that year. On August 1, 1990, he was replaced by Zhelyu Mitev Zhelev, a former oppositionist, professor of philosophy, and founder of the dissident Club for the Support of Glasnost and Restructuring. Zhelev was reelected in 1992 and remained president until 1997. He represented the Union of Democratic Forces, a party formed in December 1989 from various anticommunist groups.
In November 1990 a series of general strikes instigated a sense of political and economic crisis in the country and thoroughly discredited the Bulgarian Socialist Party. In 1991 a new democratic constitution was adopted, and in 1992 the Union of Democratic Forces took power after the national elections and embarked on a series of radical economic and political reforms.

Romania
In Romania the Communist dictatorship of President Nicolae Ceauşescu was particularly oppressive. Although in other East European countries popular demonstrations and negotiations took place throughout 1989, it seemed that Ceauşescu's position would remain unchallenged. On December 17 street protests were organized in the city of Timişoara against the decision by the Romanian secret police (Securitate) to deport the local pastor László Tőkés. The protests against Tőkés's eviction were transformed into anticommunist and anti-Ceauşescu demonstrations. Hundreds of demonstrators who gathered on the streets of Timişoara were attacked by military forces; nearly 100 of them were killed, and many more were injured. Beginning on December 20, antiregime demonstrations and a wave of strikes took place in Romania's other large cities. Ceauşescu condemned the protests in Timişoara and ordered the organization of a pro-regime gathering in the center of Bucharest on December 21. Mass mobilization and civic unrest continued throughout the country, and the regime made extensive use of violence to suppress the revolution. On December 22 the National Salvation Front was formed in a national TV studio. It was led by the Communist politician Ion Iliescu; its other members included Silviu Brucan, a former diplomat and an opponent of Ceauşescu, and Mircea Dinescu, a dissident poet.
Subsequently, the National Salvation Front restored peace and formed a temporary government with Iliescu as provisional president, following Ceauşescu's execution on December 25, 1989. The National Salvation Front was later transformed into a political party and won a majority of votes in the democratic elections of May 1990. The important difference between the postcommunist elections in Romania and those in other East European countries was that in Romania the victorious National Salvation Front comprised former socialist officials. In 1992 it divided into two leftist Romanian parties: the Democratic Party and the Social Democratic Party.

See also Gorbachev, Mikhail; Reagan, Ronald.

Further reading: Kenney, Padraic. A Carnival of Revolution: Central Europe 1989. Princeton, NJ: Princeton University Press, 2002; Offe, Claus. Varieties of Transition: The East European and East German Experience. Cambridge, MA: MIT Press, 1997; Okey, Robin. The Demise of Communist East Europe: 1989 in Context. London: Arnold, 2004; Ross, Corey. The East German Dictatorship: Problems and Perspectives in the Interpretation of the GDR. London: Arnold, 2002; Schweizer, Peter. The Fall of the Berlin Wall. Stanford, CA: Hoover Institution Press, 2000.

Magdalena Zolkos

East Timor
The Democratic Republic of Timor-Leste, or East Timor, was a Portuguese colony until 1975. On the eve of the Portuguese departure in August 1975, a civil war broke out, leading to the deaths of 1,500 to 2,000 people. The East Timorese unilaterally declared independence on November 28, 1975. With U.S. assistance, Indonesia invaded East Timor in December and incorporated it as its 27th province in July 1976, a move the United Nations (UN) never recognized. A guerrilla war against the Indonesian occupation followed, marked by brutality, loss of life, and human rights abuses by the army.
From 1982 onward, the UN secretary-general endeavored to bring about a peaceful solution to the conflict. In 1998 Indonesia offered to grant autonomy to East Timor, but its proposal was rejected by the East Timorese. A plebiscite was then held on August 30, 1999, in which the East Timorese voted overwhelmingly for independence. The Indonesian army, along with pro-Indonesian militias, unleashed a reign of terror in East Timor: In the ensuing campaign more than 1,300 people were killed and 300,000 more were forcibly sent into West Timor as refugees. The ethnic conflict and genocide by Indonesian troops devastated East Timor; over the course of the occupation, the Timorese tragedy had taken the lives of 21–26 percent of the population. The violence was brought to an end by an international peacekeeping force. East Timor was placed under the United Nations Transitional Administration in East Timor (UNTAET) on October 25, 1999, with about 8,000 peacekeepers and civilian police assisting the administration. The National Consultative Council (NCC), consisting of 11 East Timorese and four UNTAET members, served as a political body in the transitional phase. An 88-member Constituent Assembly was elected in August 2001 to frame a new constitution. East Timor became a fully independent nation, with international recognition, on May 20, 2002. Nation-building was difficult for the East Timorese: The reconstruction of their damaged infrastructure and the creation of viable administrative machinery became priorities for the new regime. The United Nations Mission of Support in East Timor (UNMISET), which replaced UNTAET, gave necessary support to the new government, headed by Xanana Gusmão.

Further reading: Emmerson, Donald K., ed. Indonesia Beyond Suharto: Polity, Economy, Society, Transition. New York: M.E. Sharpe, 1999; Kiernan, Ben.
“Genocide and Resistance in East Timor, 1975–1999: Comparative Reflections on Cambodia.” In War and State Terrorism: The United States, Japan, and the Asia-Pacific in the Long Twentieth Century. Mark Selden and Alvin Y. So, eds. New York: Routledge, 2003; Ricklefs, M. C. A History of Modern Indonesia: c. 1300 to the Present. 2nd ed. Stanford, CA: Stanford University Press, 1993.

Patit Paban Mishra

Ebadi, Shirin (1947– )
Iranian human rights activist
Shirin Ebadi is a democracy and human rights activist and a lawyer. She was born in northwestern Iran to a Shi'i Muslim family in 1947 and studied law at Tehran University. In 1975 she became the first woman judge in Iran and was appointed president of the Tehran City Court. Following the Islamic revolution in 1979, all female judges, including Ebadi, were removed from the bench and given clerical duties. Ebadi quit in protest and wrote books and articles on human rights, particularly on the rights of children and women, for Iranian journals. After many years of struggle, Ebadi obtained her lawyer's license in 1992 and opened her own practice. She is known for taking cases at the national level, defending liberal and dissident figures. In 2000 she was arrested and imprisoned for “disturbing public opinion”; she was given a suspended jail sentence and barred from practicing law (a restriction later removed). She campaigns to strengthen the legal rights of women and children, advocating a progressive version of Islam. Her legal defense in controversial cases, pro-reform stance, and outspoken opinions have caused the conservative clerics in Iran to oppose her openly. In 2003 Ebadi became the first Muslim woman and the first Iranian to receive the Nobel Peace Prize, honored for her efforts to promote democracy and human rights at home and abroad. She teaches law at Tehran University, writes books and articles, and runs her own private legal practice.
Her books include The Rights of the Child (1993), Tradition and Modernity (1995), The Rights of Women (2002), and Iran Awakening: A Memoir of Revolution and Hope (2006).

See also Iran, contemporary; Iranian revolution.

Further reading: Frängsmyr, Tore, ed. Les Prix Nobel. The Nobel Prizes 2003. Stockholm: Nobel Foundation, 2004; Parvis, Leo. Understanding Cultural Diversity in Today's Complex World. London: Lulu.com, 2007.

Randa A. Kayyali

Economic Commission for Latin America (ECLA)
One of the world's most influential schools of economic thought grew out of the Economic Commission for Latin America (ECLA; in Spanish, Comisión Económica para América Latina, or CEPAL), founded by United Nations Economic and Social Council Resolution 106(VI) on February 25, 1948, and headquartered in Santiago, Chile. Under the intellectual leadership of the Argentine economist Raúl Prebisch, the Brazilian economist Celso Furtado, and others, the ECLA offered an analysis of Latin American poverty and underdevelopment radically at odds with the dominant neoclassical “modernization” theory espoused by most economists in the industrial world. The ECLA pioneered an approach to understanding the causes of Latin American poverty commonly called the “dependency school” (dependencia), which later informed world-systems analysis. In this view the creation of poverty and economic backwardness, manifested in “underdevelopment,” was an active historical process, caused by specific and historically derived international economic and political structures, as conveyed in the phrase “the development of underdevelopment.” This approach was then appropriated by scholars working in other contexts, especially Asia and Africa, as epitomized in the title of the Guyanese historian Walter Rodney's landmark book How Europe Underdeveloped Africa (1972).
Since the 1950s, the theoretical models and policy prescriptions of the ECLA have proven highly influential, sparking heated and ongoing debates among scholars. From its foundation the ECLA rejected the paradigm of the neoclassical “modernization” school, which posited “stages of growth” resulting from the transformation of “traditional” economies into “modern” economies, a perspective epitomized in the U.S. economist Walter W. Rostow's book The Stages of Economic Growth (1960). Instead, the ECLA's model posited a global economy divided into “center” and “periphery,” with the fruits of production actively siphoned or drained from “peripheral” economies based on primary export products (including Latin America) to the “center” (the advanced industrial economies of Europe and the United States). Based on this model, in the 1960s ECLA policy prescriptions centered on the promotion of domestic industries through “import substitution industrialization” (ISI), diversification of production, land reform, more equitable distribution of income and productive resources, debt relief, and increased state intervention to achieve these aims. Key analytic concepts of these years included “dynamic insufficiency,” “dependency,” and “structural heterogeneity.” In the 1970s attention shifted to “styles” or “modalities” of economic growth and national development. The economic crisis of the 1980s generated another shift, toward issues of debt adjustment and stabilization, while the 1990s saw heightened emphasis on globalization and “neostructuralism,” in opposition to the “neoliberalism” promoted by the International Monetary Fund and related international financial bodies. In 1984 the United Nations (UN) broadened the mandate of the ECLA to include the Caribbean, and it became the Economic Commission for Latin America and the Caribbean (ECLAC); its Spanish acronym, CEPAL, remained the same.
It is one of five UN regional commissions and remained highly influential into the 21st century.

Further reading: Cockroft, James D., André Gunder Frank, and Dale L. Johnson. Dependence and Underdevelopment: Latin America's Political Economy. Garden City, NY: Anchor Books, 1972; Furtado, Celso. Accumulation and Development: The Logic of Industrial Civilization. Oxford, UK: M. Robertson, 1983; Raúl Prebisch and Development Strategy. New Delhi: Research and Information System for the Non-Aligned and Other Developing Countries, 1987.

Michael J. Schroeder

ecumenical movement
In 1517 Martin Luther nailed his Ninety-five Theses to the door of a church in Wittenberg, a university town in the German province of Saxony, to start a debate over indulgences and related questions about Christian salvation. His action is often understood to be the beginning of the Protestant Reformation. In 1530 Protestant representatives to the imperial Diet at Augsburg presented the Augsburg Confession, which enshrined the Protestant position at that time and is still accepted by all the Lutheran churches. The rejection of that confession by the Roman Catholics, with the support of the emperor Charles V, has since been understood by many historians as the definitive division between the Protestant and Roman Catholic Churches, resulting in a plurality of churches in Western Christendom no longer in communion with one another. Central to the Augsburg Confession was the doctrine of “justification by faith alone,” which together with “grace alone” and “scripture alone” summarized the Protestant concerns. In addition, the Reformers insisted on changes in worship (especially the Mass) and the sacraments, changes unacceptable to the Roman Catholics and viewed by them as heretical. What began as a movement for the reformation of the Western (Latin) Church ended in doctrinal division and ecclesiastical separation. The Lutheran churches were not the only churches that came from the Reformation.
Shortly after the Lutheran movement began, a similar movement arose in Switzerland under the leadership of Ulrich Zwingli in Zurich, resulting in the formation of the Reformed churches. From there churches were established in many countries of Europe, with the predominant theological influence coming from John Calvin in Geneva. In addition, groups of radical reformers (termed Anabaptists by their opponents) were formed and were persecuted by Reformed, Lutheran, and Roman Catholic Churches alike. Each of these groups developed distinct theological positions. From them, and especially from the Church of England and its American colonies, came many new churches, including the Baptists, Methodists, and Pentecostals. In the 20th century the ecumenical movement was born. The 1910 World Missionary Conference in Edinburgh is often considered its beginning. The conviction of missionaries that church division was harmful to their outreach gave rise to a worldwide (ecumenical) movement to overcome those divisions. By 1948 many of the churches affected by that movement had formed the World Council of Churches, an interchurch body representing a large percentage of Protestant churches. They were joined by many Orthodox churches, bodies that had become separate from the Roman Catholic Church long before the Protestant Reformation but that are much closer in theology and practice to Catholicism than to Protestantism. At first opposed to ecumenical endeavors, the Roman Catholic Church accepted the ecumenical movement during the Second Vatican Council of 1962–65 as a fruit of “the grace of the Holy Spirit.” Afterward, it entered into more active cooperation with other churches and began a series of dialogues over doctrinal differences with Orthodox and many Protestant churches, even though it did not join the World Council of Churches.
Many Evangelical churches also did not join the World Council of Churches but have formed their own world alliance and national associations for cooperation. As a result of the ecumenical movement, the climate among a large number of Christian churches has changed from hostility to friendliness and growing cooperation. In addition, dialogues among theologians representing their churches have produced a number of accords on previous doctrinal differences. The Faith and Order section of the World Council of Churches has sponsored multilateral dialogues. The wide-ranging 1982 statement Baptism, Eucharist and Ministry, focusing on disputed areas of worship and sacraments, is often cited as the most successful result of those endeavors. In addition, various churches or church bodies have entered into bilateral dialogues with one another. The bilateral dialogues have produced some notable doctrinal accords, and many Protestant churches have joined together or established communion with one another as a consequence. Some of the more significant accords have been concluded by the Roman Catholic Church with the Oriental Orthodox and Assyrian churches. Commissions of Eastern Orthodox theologians have come to agreements with their Oriental Orthodox counterparts. Doctrinal differences that antedate the Reformation by a millennium are now discussed, if not reconciled. The Roman Catholic Church has, in addition, conducted a series of dialogues with the Anglican Communion that have produced a body of agreed statements on many of the disputed points between the two church bodies. Not all of the important dialogues have been official dialogues between church bodies. Informal study groups like the Groupe des Dombes have made independent contributions.
Perhaps the most significant result produced by such groups has been the series of statements by Evangelicals and Catholics Together, a committee of prominent Evangelicals and Roman Catholics in the United States. The first statement, The Christian Mission in the Third Millennium (1994), was widely influential in fostering rapprochement between two Christian groups that are sometimes considered to be the farthest apart from one another. Symbolically, one of the most notable results of the bilateral accords has been the Joint Declaration on the Doctrine of Justification (JDDJ), signed by official representatives of the Roman Catholic Church and the Lutheran World Federation. The JDDJ was preceded by 35 years of dialogue between Lutheran and Roman Catholic theologians at the international and national levels, most notably in the United States and Germany. In 1983 the United States dialogue produced an agreement, The Doctrine of Justification. This was followed by The Condemnations of the Reformation Era: Do They Still Divide?, a significant 1986 statement produced by a German study group. Then, in 1993, the international commission produced Church and Justification: Understanding the Church in the Light of the Doctrine of Justification. On the basis of these and other works, the JDDJ was produced and agreed to. The JDDJ is noteworthy as the only agreement officially accepted by the highest authority in the Roman Catholic Church and a Protestant church body. It is even more noteworthy as an accord on the doctrine of justification, the point of disagreement that began the Reformation. While the JDDJ acknowledges that it did not resolve all questions about justification, it resolved enough of the most fundamental ones that, in the view of the two parties, the doctrine of justification no longer had to be church dividing.
Although other points of disagreement remain, the JDDJ in effect marked an official recognition by the two church bodies that they do not have incompatible views of what it is to be a Christian. The JDDJ was signed in 1999, just in time for the beginning of the new millennium. It was signed in Augsburg, the city where the Augsburg Confession was presented to the emperor, and on Reformation Sunday, the day that commemorates the posting of the Ninety-five Theses. The JDDJ did not put an end to the disunity caused by the Reformation. It was, however, in the minds of those who signed it, an indication that a crucial step toward ending that disunity had been taken.

Further reading: Lutheran World Federation and the Roman Catholic Church. Joint Declaration on the Doctrine of Justification. Grand Rapids, MI: Eerdmans, 2000; Toon, Peter. What's the Difference? Basingstoke, Hants: Marshalls, 1983.

Stephen B. Clark

Egyptian revolution (1952)
In 1952 a group of Free Officers led by Gamal Abdel Nasser overthrew the corrupt monarchy of King Farouk in a bloodless coup. After World War II and the loss to Israel in the 1948 Arab-Israeli War, Egypt gradually slid into political chaos. The king was known internationally for his profligacy, and the Wafd Party, the largest Egyptian party, led by Mustafa Nahhas, had been discredited by charges of corruption and of cooperation with the British during the war. Other political parties vied for power: some supporting the monarchy, the small Egyptian Communist Party on the left, and the far larger Muslim Brotherhood on the right; these groups sometimes engaged in terrorism and assassinations of rivals. Attacks against the British forces still stationed along the Suez Canal also escalated. The British reinforced their troops, and after fighting broke out between British soldiers and Egyptian police forces, a massive riot erupted in Cairo in January 1952.
During “Black Saturday,” angry Egyptian mobs stormed European sectors, burning European-owned buildings and businesses in a demonstration of nationalist discontent and opposition to imperial control and the British refusal to leave Egypt. On July 23, 1952, the Free Officers, who had secretly been plotting to overthrow the government for some time, seized key government buildings, and on July 26 they deposed King Farouk. Farouk was permitted to go into exile, and his infant son Ahmad Fu'ad II was made the new king. The young officers, most of whom were in their 30s, chose the older and better-known Brigadier General Muhammad Naguib as their figurehead leader, although it was known within the group that Nasser was the real political force. They formed an executive branch, the Revolutionary Command Council (RCC), including Anwar el-Sadat, Abd al-Hakim Amr, and Zakariyya Muhi al-Din. In January 1953 political parties were abolished in favor of a single party, the Liberation Rally, and in June the monarchy was abolished in favor of a republic with Naguib as president. The new government was anti-imperialist and anticorruption, and it was eager to develop the Egyptian economy and to secure full and complete Egyptian independence. Naguib and Nasser soon argued over the course Egyptian politics was to take, and, after a failed assassination attempt against Nasser, allegedly by the Muslim Brotherhood, Naguib was forced to resign. Under a new constitution, Nasser was elected president in 1956, a post he would hold until his death in 1970. In 1954 the new regime negotiated an agreement with the British for the full withdrawal of British troops and an end to the 1936 treaty between the two nations. Under the agreement the old conventions regarding control of the Suez Canal by private shareholders were maintained; this issue led to a major war in 1956 after Nasser nationalized the canal. Economic development was the cornerstone of the new regime's program.
Under a sweeping land reform program, land ownership was limited to 200 feddans, and major estates, many formerly owned by the royal family, were redistributed to the peasants. Plans for the construction of one of the largest development projects of its type at the time, the Aswān Dam, were announced. Although the financing and construction of the dam became a major point of conflict between Egypt and the United States, it was duly built with Soviet assistance. With the formation of the United Arab Republic with Syria in 1958, the pan-Arab policies of Nasser seemed ascendant in the Arab world; however, the union collapsed in 1961. Egypt also became bogged down in the Yemeni civil war. In 1962 pro-Nasser forces in Yemen overthrew the weak Imam Muhammad al-Badr and established a republic. Pro-monarchy forces, assisted by arms and money from Saudi Arabia, fought to restore the imam, while Egypt assisted the republican forces with arms, money, and troops. The war dragged on, draining Egyptian resources, and Nasser referred to the conflict as his “Vietnam.” Following the disastrous Arab defeat in the 1967 Arab-Israeli War, Saudi Arabia and Egypt agreed to withdraw their support from both sides and, although it adopted a far more moderate and pro-Saudi stance, the Yemeni republic survived. In the 1960s Egypt turned increasingly toward the Soviet Union and state-directed socialism. In 1961 large businesses, industry, and banks were nationalized. Cooperatives for the peasants were established. With the creation of a new class of technocrats and officers, the power of the old feudal and bourgeois elites was gradually eliminated. In 1962 a new political party, the Arab Socialist Union (ASU), with a worker-peasant membership, was created. Under the 1962 National Charter the authoritarian state held political power, exercising control from the top.
The charter outlined an ambitious program of education, health care, and other social services; it also addressed the issue of birth control and family planning, and mandated equality of rights for women in the workplace. Many conservative forces in Egypt opposed the social changes, especially as they pertained to the family and the status of women, and consequently the social programs fell far short of their original intentions. Under Nasser, Egypt became the dominant force in the Arab world and attempted to steer a neutral course in the cold war. The Egyptian revolution failed to meet many of its domestic goals, and the state-run economy was often inefficient. Egypt’s neutrality in the 1950s alienated many Western powers and conservative Arab regimes, especially Saudi Arabia. Following Nasser’s sudden death in 1970, Anwar el-Sadat became the Egyptian president. Sadat, who showed far more political acumen than he had previously been credited with, gradually turned away from the Soviet bloc. Sadat forged alliances with the United States and gradually dismantled most of the revolution’s economic and social programs. See also Arab-Israeli War (1956). Further reading: Gordon, Joel. Nasser’s Blessed Movement: Egypt’s Free Officers and the July Revolution. New York: Oxford University Press, 1992; Mansfield, Peter. Nasser’s Egypt. Harmondsworth, UK: Penguin Books, 1965; Mohi El Din, Khaled. Memories of a Revolution: Egypt 1952. Cairo: American University in Cairo Press, 1995; Sadat, Anwar el-. Revolution on the Nile. New York: The John Day Company, 1957. Janice J. Terry El Salvador, revolution and civil war in (1970s–1990s) In the 1980s the small Central American country of El Salvador made world headlines as a key site of struggle in the cold war, in consequence of its leftist revolutionary movements and a civil war (conventionally dated 1980–92) that left some 70,000 dead and the economy and society ravaged.
The long-term roots of the crisis have been traced to the country’s history of extreme poverty, economic inequality, and political oppression of its majority by its landholding and power-holding minority. Important antecedents include the 1932 Matanza (Massacre), in which the military and paramilitaries killed upwards of 30,000 people, ushering in an era of military dictatorship that continued to the 1980s. The 1969 Soccer War with Honduras is also cited as an important antecedent. By the mid-1970s numerous leftist revolutionary groups were offering a sustained challenge to military rule, groups that in April 1980 came together to form the revolutionary guerrilla organization Farabundo Martí National Liberation Front (Frente Farabundo Martí para la Liberación Nacional, or FMLN). Open civil war erupted soon after July 1979, when the leftist Sandinistas overthrew the Somoza dictatorship in Nicaragua. Fearing a similar outcome in El Salvador, the U.S. government increased its military aid to the Salvadoran regime, which launched an all-out assault against revolutionary and reformist organizations. From 1979 to 1981, approximately 30,000 people were killed by the military and associated right-wing paramilitaries and death squads. On March 24, 1980, a right-wing death squad assassinated the archbishop of El Salvador, Óscar Romero, after his numerous public denunciations of the military regime and its many human rights violations. In December 1980 centrist José Napoleón Duarte assumed the presidency, the first civilian to occupy that post since 1931. Interpreted by many as a civilian facade installed to obscure a military dictatorship, his administration failed to staunch the violence. Especially after Ronald Reagan became U.S. president in January 1981, U.S. military and economic assistance to the Salvadoran regime skyrocketed.
Framing the issue as a cold war battle, and despite much evidence to the contrary, the Reagan administration claimed that the FMLN and its political wing, the FDR (Frente Democrático Revolucionario), were clients of Cuba and the Soviet Union. It also alleged Sandinista complicity in funneling arms to Salvadoran revolutionaries, thus legitimating U.S. support for anti-Sandinista forces in the contra war. In 1982 the extreme right-wing party, the Nationalist Republican Alliance (Alianza Republicana Nacionalista, ARENA), won the presidency in an election marred by violence and fraud. The rest of the 1980s saw continuing civil war waged under a series of ostensibly civilian governments dominated by the military. In 1991, following United Nations–sponsored talks, the government recognized the FMLN as a legal political party. In January 1992 the warring parties signed the UN-sponsored Chapultepec peace accords, and in 1993 the government declared amnesty for past violations of human rights. The civil war and its aftermath left an enduring legacy throughout the country and region. Further reading: Armstrong, Robert, and Janet Shenk. El Salvador. Cambridge, MA: South End Press, 1982; United Nations Security Council. From Madness to Hope: The 12-Year War in El Salvador: Report of the Commission on the Truth for El Salvador. New York: United Nations, 1993. Michael J. Schroeder environmental disasters (anthropogenic) Several major environmental disasters, those that are man-made rather than naturally occurring, have taken place since World War II due to the emphasis on heavy industrial development. In developed countries in the late 1960s, environmental movements led the public to be more concerned about the pollution of air, water, and soil, and the danger of chemical agriculture. Several governments developed more policies for the preservation of the environment.
Environmental concerns became internationalized at the Stockholm conference of 1972, the United Nations Conference on the Human Environment. Environmental nongovernmental organizations started to play an important role in the deliberations. During the period 1971–75, 31 important national environmental laws were passed in the OECD countries. In 1983 the World Commission on Environment and Development (WCED), also known as the Brundtland Commission, was created to seek sustainable development. In December 1984 the world’s worst industrial disaster occurred in Bhopal, a city located in the northwest of Madhya Pradesh in central India. The leakage of a highly toxic gas (methyl isocyanate) from a Union Carbide pesticide plant killed more than 3,800 persons and affected more than 200,000 with permanent or partial disabilities. It is estimated that more than 20,000 people have died from exposure to the gas. Union Carbide was manufacturing pesticides, which were in demand because of the Green Revolution in India. This environmental disaster raised the public’s concern about chemical safety. Similar concerns are related to severe accidents in nuclear power plants such as the Chernobyl nuclear power plant accident in the Soviet Union on April 26, 1986. The accident occurred in reactor number 4 of the Chernobyl Nuclear Power Plant. This nuclear power complex is located 100 kilometers northwest of Kiev, close to the border of Belarus. The initial explosion was followed by a fire and a core meltdown that lasted 10 days. The result was the discharge of radionuclides, which contaminated large areas in the Northern Hemisphere. This release of radioactive material has damaged the immune system of people in the area and has contaminated the local ecosystem. While natural processes, some as simple as rainfall, have helped restore the local environment, problems are still widespread.
More than 750,000 hectares of agricultural land and 700,000 hectares of forest have been abandoned. In 2000, 4.5 million people were living in areas still considered radioactive. Two opposing explanations, poor reactor design and human error, have been advanced for the Chernobyl accident. The Chernobyl accident occurred during the glasnost/perestroika era of the Soviet Union. So, while the government performed its own investigations of the tragedy, additional citizens’ advisory boards, some without any government involvement, were set up. Chernobyl was not the first civilian nuclear power plant disaster. Accidents in nuclear power plant installations occurred at Windscale in Great Britain in 1957 and in the United States at Three Mile Island, where Unit 2 was damaged during an accident in 1979. Since Chernobyl, other accidents, like those at Tokaimura (1999) and Mihama (2004)—both in Japan—have occurred. These accidents have brought the nuclear industry under greater scrutiny from the general public. Many feel that not only should the overall safety of such plants be improved, but also the preparedness and response to such disasters need to be more fully developed. The Bhopal and Chernobyl cases are disasters of similar magnitudes in terms of damage to people and the environment. The concerns go beyond safety to local populations. Today, such questions as environmental impact and sustainability have become at least as important as concerns over health and human welfare. See also environmental problems. Further reading: Dembo, David, Ward Morehouse, and Lucinda Wykle. Abuse of Power. Social Performance of Multinational Corporations: The Case of Union Carbide. Far Hills, NJ: New Horizons Press, 1990; Dinham, Barbara. Lessons from Bhopal, Solidarity for Survival. Newburyport, MA: Journeyman, 1989; Fortun, Kim. Advocacy after Bhopal: Environmentalism, Disaster, New Global Orders.
Chicago: University of Chicago Press, 2001; Lapierre, Dominique, and Javier Moro. Five Past Midnight in Bhopal: The Epic Story of the World’s Deadliest Industrial Disaster. New York: Warner Books, 2003. Nathalie Cavasin environmental problems From the 1950s, with the massive rise in the human population, the expansion of cities and towns, and the increasing use of natural resources, some scientists, such as Rachel Carson, wrote about impending problems. However, most people only became aware of major environmental problems from the 1980s, with the environment becoming a major political issue from the 1990s. After World War II, the increasing use of pesticides in industrialized countries, especially the United States, led to Rachel Carson writing her book Silent Spring (1962), which highlighted the side effects of D.D.T. on the local environment. It led to a reduction in the amount of pesticides used, and this was followed by the banning of D.D.T. in the United States in 1972. Other environmental campaigns saw protests against the killing of seals in Canada, and also against whaling mainly undertaken by the Japanese and the Norwegians. The International Whaling Commission introduced a moratorium on whaling in 1986, although Japan has continued to conduct whaling under the guise of science. International environmental organizations such as Friends of the Earth and Greenpeace have been prominent in leading protests around the world, the latter becoming famous for taking part in direct action. Developing environmental problems around the world have been added to by many natural occurrences such as hurricanes in the Caribbean, in the United States and elsewhere, floods in Florence and Venice in 1966, the eruption of volcanoes such as Mount St. Helens in 1980, and the Indian Ocean tsunami of December 26, 2004.
In some cases these have been compounded by man-made environmental disasters, such as those involving the feeding of sheep and cattle with substandard “food.” The destruction of forests either for timber or to clear land for cash crops continues, as does the contamination of rivers and the countryside by waste from mines. Even a natural disaster such as Hurricane Katrina, which flooded New Orleans in August 2005, subsequently led to an environmental disaster by creating a toxic stew of sewage, household chemicals, gasoline, and industrial waste that will take years to clean up. In addition there have been a large number of man-made environmental problems. The one that has resulted in the largest number of deaths in the short term was undoubtedly the Bhopal poison gas explosion in India on December 3, 1984. The biggest disaster on an international scale was the Chernobyl nuclear power station accident in the Soviet Union in 1986. Others have included the venting of oil into the Persian Gulf by Saddam Hussein in 1991, and also a large number of oil spills around the world created by damage to oil tankers and the like, the largest being that of the Exxon Valdez in Prince William Sound, Alaska, in March 1989. In the 2000s, the major environmental issue became that of global warming, especially after the screening of former U.S. vice president and Nobel laureate Al Gore’s film An Inconvenient Truth. assault on forests The assault on the world’s forests is as old as humankind. Early people were quick to learn the many uses of wood: fuel for cooking, warmth, and the smelting of metals; materials for durable shelter; and a sign of fertile lands for the growing of crops. Wood was abundant in most places where early humans chose to settle. It was relatively easy to obtain and work with, and there was always more. Archaeologists are finding widespread evidence of wood-burning and log construction that began much earlier than anyone expected.
Clearing of the land was rarely mentioned in the chronicles of the Western or Eastern world in the early modern period, but it seems obvious that as the population grew, the forests shrank. In central and northern Europe an estimated 70 percent of the land was covered by forest in 900 c.e.; by 1900 it had shrunk to only 25 percent. During this long period of growth and expansion, people learned how to fashion wood into sailing ships, opening up new sources of timber to exploit and new lands to settle. Clearing land in the tropics and subtropics helped the slave trade by creating vast plantations for the cultivation of sugar, coffee, tobacco, tea, rubber, rice, and indigo. The birth of the industrial age accelerated the onslaught. Trees could suddenly be turned into pulp for paper, wood for mass-produced furniture, plywood for lightweight construction, and countless other useful products. Rubber trees produced the raw materials for automobile tires and other items for a growing consumer marketplace. By the mid-20th century, the development of chainsaws and heavy machinery had made the clear-cutting of entire forests easier than ever before. Today, the clear-cutting of forests is driven by a need for both wood and cropland, as the swelling global population demands more and more food. Our evolutionary ancestors faced widespread shifts in the climate as glacial periods, referred to as “ice ages,” came and went every 100,000 years or so. The impact of those early ice ages on human development is difficult to judge; it is likely that some proto-human species adapted and others did not. Some scientists now believe that it was climate change that spurred the migration of humans out of Africa.
The fossil record, incomplete as it is, shows that Homo sapiens had emerged between 150,000 and 120,000 years ago in southern and eastern Africa, yet it took another 100,000 years or more for them to move into Europe, Asia, and beyond. Ice core samples and excavation of ancient seabeds indicate that the climate in that part of Africa underwent significant changes between 70,000 and 80,000 years ago, with annual precipitation rates fluctuating wildly for a long period of time, putting a strain on the food chain and forcing humans to look for new habitats. There is some evidence that there was a major volcanic eruption at Mount Toba in modern Indonesia around 73,000 years ago, which could have caused most of the planet to suffer the effects of a “volcanic winter,” lasting up to seven years. Some believe this could have caused the mass extinction of proto-human groups outside Africa, reducing the competition when humans from Africa began moving into their territories. climate change In climatological terms, we are just coming out of the latest cold period, known as the Little Ice Age. This period began sometime between the 13th and 16th centuries and lasted until around 1850. In the Northern Hemisphere the period was marked by bitterly cold temperatures, heavy snowfalls, and the rapid advance of glaciers. Unseasonable cold spells and precipitation led to periodic crop failures and famines. Most notable was the Great Famine, which struck large parts of Europe in 1315. Heavy rains began in the spring of that year and continued throughout the summer, rotting the crops in their fields and making it impossible to cure the hay used to feed livestock. This cycle of rainy summer seasons would continue for the next seven years. Food scarcity hit Europe at the worst possible time: at the end of a long period known as the Medieval Warm Period, when good weather and good harvests had led to population growth that had already begun to push food supplies to the brink.
Few seem to have died from outright starvation, but an estimated 15–25 percent of the population died from respiratory diseases such as bronchitis and pneumonia, the natural result of immune deficiency. The Great Famine had far-reaching effects on society. Crime increased along with food prices, with property crimes and murders becoming more common in the cities. There were stories of children being abandoned by parents unable to find food for them, and even rumors of cannibalism. This was during the height of the Catholic Church’s hegemony in Europe, and people naturally turned to the church in times of fear. When prayer failed, the church’s power was diminished. It was the beginning of a long drift towards the Protestant Reformation of the 16th–17th centuries. The Little Ice Age was releasing its grip in the early part of the 1800s when a massive volcanic eruption on Mount Tambora in present-day Indonesia ejected a huge amount of volcanic ash into the atmosphere. This ash cloud encircled the Northern Hemisphere over the next year or more, creating climatological havoc throughout Europe, the United States, and Canada. In May 1816 a killing frost destroyed newly planted crops. In June, New England and Quebec saw two major snowstorms, and ice was seen on rivers and lakes as far south as Pennsylvania. The crop failures that year led to food riots across Europe. Many historians believe that the summer of 1816 spurred the process of westward expansion in America, with many farmers leaving New England for western New York State and the Upper Midwest. At the beginning of the 21st century, signs of another great climate shift seem to be everywhere. Glaciers are receding at an unprecedented rate. Polar ice caps are shrinking. Sea levels are on the rise. Severe weather events, including droughts, heat waves, and hurricanes, are growing in length and intensity. 
Controversy continues among academics and policy makers over the exact cause of the warm-up: Is it being caused by humans, or is it simply the latest in a long series of climate changes? There is some support for the idea that this is an inevitable rise in temperatures growing out of the end of the Little Ice Age in the mid-19th century, but the majority of scientists now believe that humans are playing a significant role in global warming. World population has reached 6 billion, all of whom consume and burn biomass to survive. Whether from industrial smokestacks, millions of car exhaust systems, or open fires used to cook food across the developing world, more and more CO2 is being expelled into the atmosphere, creating a thick blanket of heat around the globe. The threat to both the environment and human life cannot be overstated. Up to a third of the world’s species may go extinct by the beginning of the next century. While northern climates may see an initial surge in crop yields, high temperatures and persistent droughts in the southern climates will reduce yields and increase the threat of widespread famines. Water scarcity will become severe. The latest projections indicate that by 2030, hundreds of millions of people in Latin America and Africa will face severe water shortages. By 2050 billions of Asians will also be running far short of their freshwater needs, with the Himalayan glaciers all but gone as early as 2035. By 2080, 100 million people living on islands and coastlines will be forced to flee their homes. The struggle for an increasingly small share of food, water, and other natural materials could spark “resource wars” among nations. The possibility of reversing this trend is not clear, but many scientists believe we have reached the “tipping point,” making a full reversal unlikely. See also Kyoto Treaty. Further reading: Hardoy, Jorge E., Diana Mitlin, and David Satterthwaite, eds.
Environmental Problems in Third World Cities. London: Earthscan, 1993; Sandler, Todd. Global Challenges: An Approach to Environmental, Political, and Economic Problems. Cambridge: Cambridge University Press, 1997. Heather K. Michon Equal Rights Amendment The Equal Rights Amendment (ERA), first proposed in the U.S. Congress in 1923, would guarantee the equality of rights for all people in the United States. The amendment has been pushed by women’s groups since 1920. Following the Great Depression and World War II, the rise of a second, more sweeping women’s rights movement led to reconsideration of an amendment to secure women the rights to equal wages and equal consideration under the law. The 1970s and 1980s saw the congressional approval of the amendment but the failure of enough states to ratify it into the Constitution. The failure of the ERA in 1982 was a political step backward for the movement. The historical landmark for increased rights for women was the 1848 Seneca Falls Convention, a clarion call by concerned females to the rest of the country for increased rights. The unity of the convention was quickly disturbed by the Civil War and the subsequent passage of the Fourteenth Amendment, which was meant to give rights and liberties to freed slaves. However, women’s groups argued over how loosely the amendment could be interpreted and whether such equality was given to black women if the Constitution did not include rights for white women. Legal interpretations of women’s rights in the United States became more sophisticated in the early 20th century, as the Supreme Court saw fit to deal with issues of labor. In Lochner v. New York (1905), the Supreme Court ruled that the number of hours worked by bakers was not related to the maintenance of public health. In Muller v. Oregon (1908), however, the Supreme Court ruled in favor of a 10-hour workday for women passed by the Oregon state legislature and aimed toward regulating industry in favor of employees.
The first protests for suffrage began in front of the White House in January 1917, led by future members of the National Woman’s Party (NWP), including equal rights advocate Alice Paul. Agitation by women dedicated to the cause of women’s suffrage, along with rights to fair wages, was successful, as the Nineteenth Amendment was ratified in 1920. Even with this success, a major rift developed between activists like Paul who sought quicker strides for women, and experienced professionals like Carrie Chapman Catt and Florence Kelley. Catt and Kelley feared the NWP’s agenda was too sweeping and harmful to progress already being made for women in the areas of judicial review and minimum-wage legislation. The 1920s–1930s saw several phases in the battle between the NWP and other women’s groups—including vacillation on whether an equal rights amendment would be effective, whether protective legislation for women should be incorporated with an amendment, and whether courts should be more active in providing equality for women. The NWP remained active not only in working for an amendment for equal rights but in creating a better work environment for women in the United States and expanding equal rights throughout the globe. However, the NWP was not successful in fulfilling many of its goals because of strong-arm tactics by more conservative groups in the United States, more conservative governments globally, and the devastation of the Great Depression. The idea of an equal rights amendment was not lost with the diminishing influence of the NWP. In every session of Congress between 1923 and the passage of the ERA in 1972, an amendment was introduced dealing with equal rights based on gender. The Republican Party included a fairly progressive plank in its 1940 platform. The U.S.
Senate passed the Equal Rights Amendment three times—in 1949, 1953, and 1959—but each passage was marred by an irreconcilable rider exempting existing sex-specific legislation from the amendment. The period between the Great Depression and the rise of feminism was one of slow progress toward public acceptance of the ERA. The feminist movement that arose in the 1960s was broad and quickly became well organized. The movement encompassed all aspects of female life in the United States. The expression of sexuality by women was made a topic of discussion after Betty Friedan’s The Feminine Mystique was published in 1963. The creation of a marketable birth control pill in 1960 made a woman’s control over her own body an important aspect of public health. The establishment of the National Organization for Women in 1966 and its rapid acceptance among other lobbying groups gave the feminist movement a political organization that would be unrivaled within a few years. The House of Representatives passed the Equal Rights Amendment several times in the early 1970s. In August 1970 the House passed the ERA 352-15, and in the fall of 1971, on the back of Representative Martha Griffiths (D-MI), the House passed the ERA 354-23; full congressional approval followed in 1972, sending the amendment to the states for ratification. The amendment failed when, by the 1982 deadline, it had fallen three states short of the 38 ratifications needed. One cause of public trepidation toward the amendment was the activism of antifeminists such as Phyllis Schlafly, who saw the amendment as an unnecessary exercise and a waste of energy for women. However, the amendment’s process and the rise of feminism and antifeminism have opened a dialogue for women’s issues and legal interpretations of equal rights in already existing amendments. See also Aung San Suu Kyi. Further reading: Becker, Susan.
Origins of the Equal Rights Amendment: American Feminism between the Wars. Westport, CT: Greenwood Press, 1981; Berry, Mary Frances. Why ERA Failed: Politics, Women’s Rights, and the Amending Process of the Constitution. Bloomington: Indiana University Press, 1986; Stalcup, Brenda. The Women’s Rights Movement: Opposing Viewpoints. San Diego, CA: Greenhaven Press, 1996. Nicholas Katers Eritrea Eritrea is an African country lying along the southwestern coast of the Red Sea and to the northeast of Ethiopia. Its capital and largest city is Asmara. Eritrea gained its independence from Ethiopia in 1993. The country’s diverse population speaks many languages and reflects many cultures. About half the inhabitants are Christians and about half are Muslims. In spite of this diversity, Eritrea has had little internal conflict in part because most factions were united in a struggle for independence from Ethiopia. The Eritrean region was one of the first areas in Africa to produce crops and domesticate animals. Early people also engaged in extensive trade from Eritrea’s Red Sea ports. In the fourth century, Eritrea was a relatively independent part of the Aksum Empire. In the 16th century the area became part of the Ottoman Empire, and in 1890 it became a colony of Italy. Italian rule lasted until World War II, when Britain conquered the territory in 1941. In 1952 the United Nations (UN) approved a federation of Eritrea and Ethiopia in an attempt to settle the dispute between Ethiopian claims of rights to the land and Eritrea’s desire for independence. Ethiopia’s emperor, Haile Selassie, quickly acted to end the federation and to annex Eritrea as a province. Eritrea began a war for independence from its long-lasting domination by other countries. The Eritrean Liberation Front (ELF) was formed in 1958 and initiated armed resistance in 1961. The next three decades were filled with bitter warfare before Eritrea finally gained its independence in 1993.
In the 1970s, due in part to the internal conflicts within the ELF, a new and more tightly organized group—the Eritrean People’s Liberation Front (EPLF)—emerged. This group became dominant in the struggle against Ethiopian rule. The Soviet Union and Cuba came to the aid of Ethiopia’s new regime after Haile Selassie was deposed in 1974, but the alliance was unable to dominate the rural districts of Eritrea. By 1980 the EPLF was increasing its control over more areas of the province, and in 1990–91 it gained possession of two major cities, including the capital. At that point the EPLF was recognized as the provisional government by many other countries. Ethiopia and Eritrea agreed to hold a referendum on independence in 1993, which resulted in almost unanimous approval for the initiative. In May 1993 the United Nations admitted Eritrea to membership and granted a four-year transitional period for the formation of a constitution. The EPLF dominated the early years of independence, and Isaias Afwerki—former general secretary of the EPLF—was elected the first president of the National Assembly. The constitution, formally approved in 1997 but not yet implemented, outlines a government directed by the National Assembly—whose members are elected for five-year terms—a president, and a supreme court. The president holds great power, since he appoints the members of the Supreme Court and the administrators of each of Eritrea’s six regions. The only legal political party is the People’s Front for Democracy and Justice (formerly the EPLF). The National Assembly elections scheduled for 2001 were postponed indefinitely. Eritrea’s independence and democratic government have been threatened by a number of factors, including the government itself and the economic and physical damages of the long war for independence. During the 1970s–1980s, nature dealt Eritrea devastating blows in the form of droughts and famine.
In addition, the government pursued policies that led to engagement in several wars. Eritrea fought the Sudanese on a number of occasions. Eritrean forces invaded the Red Sea island of Hanish al-Kabir, a possession of Yemen, in 1995 and claimed ownership; arbitration settled the dispute in Yemen’s favor in 1998. Conflict broke out again between Ethiopia and Eritrea in 1998 over disputed territory, leading to thousands of deaths. In 2000 the two countries agreed to a cease-fire, but a formal agreement on the border between them was not approved. A UN peacekeeping force in Eritrea continued to patrol a 25-kilometer-wide Temporary Security Zone along the countries’ border. With less than 5 percent of its land arable, Eritrea continues to face severe economic and ecological problems arising from deforestation, soil erosion, overgrazing, and its decayed infrastructure. See also Ethiopia, Federal Democratic Republic of. Further reading: Jacquin-Berdal, Dominique, and Martin Plaut, eds. Unfinished Business: Ethiopia and Eritrea at War. Lawrenceville, NJ: Red Sea Press, 2004; Pateman, Roy. Eritrea: Even the Stones Are Burning. Lawrenceville, NJ: Red Sea Press, 1998. Jean Shepherd Hamm Ethiopia, Federal Democratic Republic of The Federal Democratic Republic of Ethiopia is situated in eastern Africa, in the region known as the Horn of Africa. Its capital is Addis Ababa. The country is bounded to the west and northwest by Sudan, to the south by Kenya, to the east and southeast by Somalia, to the east by Djibouti, and to the north by Eritrea. Ethiopia is 1,221,900 square kilometers in size. Its topography consists of rugged mountains and isolated valleys. It has four main geographic regions from west to east: the Ethiopian Plateau, the Great Rift Valley, the Somali Plateau, and the Ogaden Plateau. The diversity of Ethiopia’s terrain determines regional variations in climate.
The country has three climatic zones: a cool zone, where temperatures range from near freezing to 16°C; a temperate zone; and a hot zone, with both tropical and arid conditions, where temperatures range from 27°C to 50°C. The semiarid part of the region receives fewer than 500 millimeters of precipitation annually and is highly susceptible to drought. The most important current environmental issues are deforestation, overgrazing, soil erosion, desertification, and water shortages in some areas, to which water-intensive farming and poor management contribute. Another problem the country faces is the continuing loss of biodiversity and the resulting threat to its ecosystems. Ethiopia’s population is mainly rural and has a high annual growth rate. In 2004 the United Nations estimated Ethiopia’s population at more than 70 million. There are more than 70 distinct ethnic groups in Ethiopia. The principal groups include the Oromo, who account for 40 percent of the population; the Amhara, 25 percent; and the Tigre, 12 percent. Smaller groups are the Gurage, 3.3 percent; the Ometo, 2.7 percent; the Sidamo, 2.4 percent; and other ethnic minorities. More than half of Ethiopians, 53 percent of the population, are Orthodox Christians, and around 31 percent are Muslims; indigenous beliefs are also practiced. Ethiopia is one of Africa’s oldest countries. Although Ethiopia was considered a strategically important territory by the great powers during the colonial period, its monarchy maintained its independence, except during the Italian invasion of 1895–96 and the Italian occupation during World War II. In 1974, during the cold war era, a military junta deposed Emperor Haile Selassie and established a socialist state, which maintained a close relationship with the Soviet Union.
After a long period of violence, massive refugee problems, famine, and economic collapse, the regime fell in 1991 to a coalition of rebel forces, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). In 1994 a new constitution was approved, and Ethiopia’s first multiparty elections were held in 1995. At the international level Ethiopia has been engaged in several disputes. Ethiopia-Eritrea Conflict In 1889 Ethiopia ceded control of Eritrea to Italy, and between 1941 and 1952 Eritrea was under British administration. An agreement was then signed, and the two countries formed a federation. However, 10 years later Haile Selassie abolished the federation and imposed imperial rule throughout Eritrea, which became a province, provoking a series of guerrilla attacks. In 1991 a provisional government was established in Eritrea, and it became an independent nation in 1993. But the border between Ethiopia and Eritrea was never precisely demarcated, so in 1998 Eritrean forces occupied the disputed Ethiopian town of Badme, and a new war began, lasting until 2000, when both countries signed a treaty. Despite an international commission that delimited the border, the relationship between the two countries remains hostile. Ethiopia-Somalia Conflict Ethiopia has always sought access to the sea, while Somalia long claimed the Ogaden region, inhabited for the most part by ethnic Somalis, as part of a reunified Somali territory. During the conflict Ethiopia controlled almost the whole region, and diplomatic relations were broken off. In 1988, after 11 years of constant confrontation, Ethiopia removed its troops from the border with Somalia, reestablished diplomatic relations, and signed a peace treaty.
But the central section of Ethiopia’s border with Somalia was never fully demarcated and is only provisional. Ethiopia and Somalia have also both aspired to control the territory of Djibouti. The Nile Basin Dispute The Nile River runs through nine states: Egypt, Burundi, Tanzania, Uganda, Sudan, Kenya, Rwanda, Ethiopia, and Congo. The river serves as a constant source of water for these countries; it plays a vital role in agriculture and a major role in transportation. The Blue Nile rises in Ethiopian territory, and Ethiopia controls some 85 percent of the river’s water, but Egypt is the country that benefits most from its flow. Egypt, with its military superiority and economic and political stability, puts pressure on the upstream countries, which in recent years have been unable to divert the water flow because of the constant tensions. Though the conflict currently involves mainly Sudan, Egypt, and Ethiopia, it is probable that all the countries of the Nile basin will be affected as their populations continue to grow and water needs increase. Ethiopia’s economy is based on agriculture, which accounts for some 90 percent of exports. The principal crops are cereals, pulses, oilseed, cotton, sugarcane, beans, and potatoes, but the most important is coffee. The sector suffers from frequent drought and poor cultivation practices, and as a consequence the country has to rely on massive food imports. Ethiopia does not have many mineral resources; it has small reserves of gold, platinum, copper, potash, and natural gas, and for other minerals it likewise depends on imports. The leading industries include cement and other construction materials, food processing, and textiles. The country has extensive hydropower potential, but its transportation network is poor. During the 1990s Ethiopia abandoned its exclusive bilateral relationship with the Soviet Union and began to liberalize.
It became a decentralized, market-oriented economy, with privatization and the cooperation of international financial organizations. Agreements were also made to form regional organizations. But participation in the world economy remained marginal, and dependence on international financing increased Ethiopia’s external debt. In 2001 Ethiopia qualified for debt relief under the Heavily Indebted Poor Countries (HIPC) initiative, and in 2005 the International Monetary Fund voted to forgive Ethiopia’s debt. Ethiopia is among the poorest countries in the world according to the Human Development Index established by the United Nations. About 50 percent of the population lives below the poverty line, and food shortages have reached alarming levels. Climate conditions, the lack of means to develop agriculture, displacement, refugees, and AIDS all contribute to worsening the situation. Foreign aid is therefore constantly needed to prevent disease and famine, particularly in times of drought. Further reading: Fage, J. D., and Roland Oliver, eds. The Cambridge History of Africa. Cambridge and New York: Cambridge University Press, 1975–86; Keller, Edmond J. Revolutionary Ethiopia: From Empire to People’s Republic. Bloomington: Indiana University Press, 1988; Klare, Michael T. Resource Wars: The New Landscape of Global Conflict. New York: Henry Holt, 2001; Zewde, Bahru. A History of Modern Ethiopia, 1855–1974. Athens: Ohio University Press, 1991. Verónica M. Ziliotto European Economic Community/ Common Market The European Economic Community (EEC), also known as the Common Market, was established by the Treaty of Rome among France, Italy, West Germany, Belgium, Luxembourg, and the Netherlands. It was the core of what would become the European Community in 1967 and the European Union after the ratification of the Maastricht Treaty (1992). The EEC aimed to create a single economy among its members.
Its acts were devised to achieve free labor and capital mobility; the prohibition of trusts and cartels; and the implementation of common policies on labor, welfare, agriculture, transport, and foreign trade. The idea of a united European market has its roots in the aftermath of World War II. After Europe had been divided and ravaged by two brutal world wars, politicians such as German chancellor Konrad Adenauer, Italian prime minister Alcide De Gasperi, and French foreign minister Robert Schuman agreed on the necessity of securing a lasting peace among former enemies. They believed that European nations should cooperate as equals and should not humiliate one another. In 1950 Schuman proposed the creation of a European Coal and Steel Community (ECSC), which was established the following year with the Treaty of Paris. France, Italy, West Germany, Belgium, Luxembourg, and the Netherlands consented to have their production of coal and steel monitored by a High Authority. This was at once a practical and a symbolic act: steel and coal, the raw materials of war, became the tools of reconciliation and common growth. These first years of cooperation proved fruitful, and ECSC members began to plan an expansion of their mutual aid. Negotiations among the six countries making up the ECSC led to the Treaty of Rome (1957), which created the European Economic Community, a common market for a wide range of goods and services. The process of integration continued during the 1960s, with the lifting of trade barriers among the six nations and the establishment of common policies on agriculture and trade. Denmark, Ireland, and the United Kingdom joined the EEC in 1973. As the EEC grew, its leaders realized that European economies needed to be brought in line with one another. This conviction, reached during the 1970s, was the starting point of the tortuous path that would finally lead to monetary union in 2002 with the circulation of the euro.
To stabilize the fluctuations of European currencies caused by the breakdown of the Bretton Woods system, the European Monetary System (EMS) was created in 1979. The EMS helped to make exchange rates more stable and promoted tighter policies of economic solidarity and mutual aid among EEC members. It also encouraged them to monitor their economies. Such monitoring became vital during the 1980s, when membership in the EEC rose to 12 with the entries of Greece in 1981 and Spain and Portugal in 1986. The first Integrated Mediterranean Programme (IMP) was launched with the aim of making structural economic reforms and thus reducing the gap among the economies of the 12 member states. With its enlarged membership the EEC also started to play a more prominent role on the international stage, signing treaties and conventions with African, Caribbean, and Pacific countries. The worldwide economic recession of the early 1980s seemed to endanger the process of market integration. However, the European Commission, led from 1985 by the French socialist Jacques Delors, gave new impetus to European integration. It was under Delors’s leadership that the Single European Act, the first major revision of the Treaty of Rome, was signed, setting a precise schedule for the removal of all remaining barriers between member states by 1993. The Delors Commission also worked to create a single currency for the European Common Market, to be managed by a European Central Bank charged with unifying monetary policies. The choice was made explicit in the Treaty of Maastricht (1992), which set up a timetable for the adoption of a single currency. With the Maastricht Treaty, the European Economic Community was renamed the European Community, and the process of European integration continued with the creation of the EU. Austria, Finland, and Sweden joined the union in 1995.
Ten more countries (Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and Slovenia) joined in 2004, making the EU the world’s largest trading power. Bulgaria and Romania joined in 2007. In some countries the introduction of the euro was marked by controversies and heated debates. Yet economists have argued that the European Common Market has much to gain from the euro: Frankel and Rose suggest that membership in a single-currency zone tends to triple a country’s trade with the zone’s other members, leading to increases in the country’s per capita income. Further reading: European Union, www.europa.eu; Mowat, R. C. Creating the European Community. London: Blandford Press, 1973; Sapir, A., and J. Alexis, eds. The European Internal Market. Oxford: Oxford University Press, 1989; Walsh, A. E., and J. Paxton. The Structure and Development of the Common Market. New York: Taplinger Publishing Company, 1968. Luca Prono European Union The European Union (EU), founded with the signing of the Maastricht Treaty in 1992, represents a large project of economic and political integration among an ever-growing group of European countries. The EU quickly became the world’s major trading power and enjoyed fast economic growth. Free internal trade and common customs duties, which member countries have enjoyed since the beginning of the union, led to significant growth in trade among the members. By 2006 the EU included 25 member states; Bulgaria and Romania became members in January 2007. Croatia and Turkey were negotiating their membership, a process slowed by concerns over human rights violations in both countries. Of the 25 members, 12 adopted a single currency—the euro—for financial transactions in 1999. The euro entered circulation in January 2002. As the union expanded, however, it increasingly met resistance and obstacles.
A powerful movement of Euro-skeptics emerged throughout the EU in the late 1990s, pointing to a supposed lack of democracy in the EU institutions and to the danger of losing national sovereignty to a centralized body. Some politicians in the countries with more developed economies looked upon the enlargement of the union with suspicion, fearing a wave of uncontrollable migration. These concerns led to several serious defeats: referenda in Denmark and Sweden showed that the majority of citizens were against adopting the euro, and French and Dutch voters rejected the European Constitution in 2005. Although the Maastricht Treaty was signed in 1992, the idea of a united Europe dates back to the aftermath of World War II. After two world wars had divided European countries and massacred their people, statesmen such as German chancellor Konrad Adenauer, Italian prime minister Alcide De Gasperi, and French foreign minister Robert Schuman agreed on the necessity of building a lasting peace between former enemies. The cooperation among these countries led to the Treaty of Rome in 1957, which established the European Economic Community (EEC) and the first European Commission, led by the German Christian Democrat Walter Hallstein. Customs duties among member states were entirely removed by 1968, and common policies for trade and agriculture were also devised. The fall of the Berlin Wall in 1989 and the disintegration of the Soviet Union in 1991 progressively shifted eastern European countries toward the EU. The most important EU institutions include the Council of the European Union, the European Commission, the European Court of Justice, the European Central Bank, and the European Parliament. The origins of the European Parliament, which convenes in Strasbourg, date back to the 1950s. Since 1979 it has been directly elected by the European people, with elections held every five years.
The European Central Bank manages the union’s single currency, and the EU has common policies on agriculture, fisheries, and foreign affairs and security. Although the policies devised by the EU range across a wide variety of areas, not all are binding on the union’s members. The EU’s status therefore varies according to the matter at hand. The union has the character of a federation for monetary affairs; agricultural, trade, and environmental policy; and economic and social policy, while each member state retains wider independence in home and foreign affairs. Policy making in the EU is thus an interplay of supranationalism and intergovernmentalism. Following the Maastricht Treaty, the areas of intervention of the EU can be divided into three pillars: the European Communities, Common Foreign and Security Policy, and Police and Judicial Cooperation in Criminal Matters. Supranational concerns are strongest in the first pillar, while the European Council, and thus intergovernmental opinion, counts most in the second and third pillars. The Council of the EU, together with the European Parliament, forms the legislative branch of the union, while the European Commission holds its executive powers. The council is formed by ministers of the member states; its presidency rotates among the members, and it meets in Brussels in nine different configurations. The European Commission, whose president is nominated by the European Council and then confirmed by the European Parliament, has 25 members, one from each member state. Yet, unlike the council, the commission is completely independent of the member states. Commissioners, therefore, are not supposed to take instructions from the government of the country that appointed them; their only goal should be to propose legislation that favors the development of the whole union.
The major setback for the EU was the rejection of the constitution by two of its founding members, France and the Netherlands. Signed in 2004, the constitution—whose elaboration was particularly difficult and thorny—aimed to make human rights uniform throughout the union and to make decision making more effective in an organization that now includes as many as 25 members, each with priorities and agendas of its own. The main challenge that the EU will have to face in the years to come is, paradoxically, a direct result of its success and its capacity to attract new nations: with more member countries, the EU is threatened by growing regional interests that endanger the pursuit of shared policies. Further reading: European Union, www.europa.eu; Karadeloglou, Pavlos, ed. Enlarging the EU. London: Palgrave Macmillan, 2002; McCormick, John. Understanding the European Union: A Concise Introduction. London: Palgrave Macmillan, 1999; Pinder, John. The European Union. New York: Oxford University Press, 2001. Luca Prono