Encyclopedia of World History A
The Ancient World

Adrianople, Battle of (378 c.e.)

On August 9, 378 c.e., the Eastern Roman army under the command of Emperor Valens attacked a Gothic army (made up of Visigoths and Ostrogoths) that had camped near the town of Adrianople (also called Hadrianopolis) and was routed. The battle is often considered the beginning of the collapse of the Roman Empire in the fifth century.

During the 370s c.e. there was a movement of peoples from Mongolia into eastern Europe. Called the Huns, they were driven from Mongolia by the Chinese. From 372 to 376 the Huns drove the Goths westward, first from the region of the Volga and Don Rivers and then the Dnieper River. This pushed the Goths into the Danube River area and into the Eastern Roman Empire. Seeking refuge from the Huns, the Goths received permission from Emperor Valens to settle in the empire as long as they agreed to serve in the Roman army. In return the Romans agreed to provide the Goths with supplies. Greedy and corrupt Roman officials tried to use the situation to their advantage, either selling the Goths supplies that should have been free or withholding them altogether. During a conference between the Visigoth leadership and Roman authorities in 377, the Romans attacked the Visigoth leaders. Some of the leaders escaped, joined with the Ostrogoths, and began raiding Roman settlements in Thrace.

Throughout July and August of 378 the Romans gained the upper hand and rounded up the Gothic forces. The majority of the Goths were finally brought to bay near the town of Adrianople. The Western and Eastern emperors had agreed to work together to deal with the Goths, and the Western emperor Gratian was on his way with his army to join Valens when Valens decided to attack the Goths without him. Moving from Adrianople against the Gothic wagon camp on August 9, Valens began his attack before his infantry had finished deploying.
As the Roman cavalry charged the camp, the Gothic cavalry, having been recalled from its raids on the surrounding countryside, returned, charged the Roman cavalry, and routed it from the battlefield. The combined force of Gothic infantry and cavalry then turned on the Roman infantry and slaughtered it. The Goths killed two-thirds of the Roman army, including the emperor. It took the new emperor, Theodosius I, until 383 to gain the upper hand. Theodosius was able to drive many of the Goths back north of the Danube River, while others were allowed to settle in Roman territory as Roman citizens. In the short term this ended the problems with the Goths but set the stage for problems for the Western Roman Empire. With the peace the Eastern Roman Empire gained a source of soldiers for its army. These soldiers would eventually rebel and march against Rome. In 401 the Gothic leader Alaric led a Goth-Roman army on an invasion of Italy. The invasion was turned back in 402, and Alaric finally agreed to stop hostilities in 403. The peace lasted only until 409, when Alaric invaded Italy again and eventually captured and sacked Rome on August 24, 410 c.e. See also late barbarians; Roman Empire. Further reading: Collins, Roger. Early Medieval Europe, 300–1000. New York: St. Martin's Press, 1999; Dupuy, R. Ernest, and Trevor N. Dupuy. The Harper Encyclopedia of Military History, from 3500 B.C. to the Present. New York: HarperCollins Publishers, 1993; Ermatinger, James W. The Decline and Fall of the Roman Empire. Westport, CT: Greenwood Press, 2004; Ward-Perkins, Bryan. The Fall of Rome and the End of Civilization. Oxford: Oxford University Press, 2005; Wolfram, Herwig. The Roman Empire and Its Germanic Peoples. Trans. by Thomas Dunlap. Berkeley: University of California Press, 1997. Dallace W. Unger, Jr.

Aeneid

Virgil's Aeneid is arguably the most influential and celebrated work of Latin literature. Written in the epic meter, dactylic hexameter, the Aeneid follows the journey of Aeneas, son of Venus, after the fall of Troy. According to an ancient mythical tradition, Aeneas fled the burning city and landed in Italy, where he established a line of descendants who would become the Roman people.

Virgil (70–19 b.c.e.) draws on the works of numerous authors, such as Lucretius, Ennius, Apollonius of Rhodes, and, especially, Homer. Virgil consistently adopts Homeric style and diction (a good example of this is the first line of the poem: "I sing of arms and a man . . ."). He also re-creates entire scenes from the Iliad and the Odyssey. Books 1 to 6 of the Aeneid show such close parallels to the Homeric epics that they are often called the "Virgilian Odyssey." Books 7 to 12, meanwhile, closely echo the Iliad. Virgil's use of Homeric elements goes beyond mere imitation. Virgil often places Aeneas in situations identical to those of Odysseus or Achilles, allowing Aeneas's response to those situations to differentiate him from (and sometimes surpass) his Homeric counterparts.

Virgil constructs his epic in relation to the Roman people and their cultural ideals. He defines Aeneas by the ethical quality of piety, a concept of particular importance for Rome at the time of the Aeneid's composition. The Aeneid also contains several etiological stories of interest to the Roman people, most notably that of Dido and the origin of the strife between the Romans and the Carthaginians.

The Dido episode is one of the most famous vignettes of the Aeneid. Dido, the queen of Carthage—also known by her Phoenician name, Elyssa—aids Aeneas and his shipwrecked Trojans in Book 1. Through Venus's intervention, Dido falls desperately in love with Aeneas and wants him and his men to remain in Carthage. But a message from Jove reminds Aeneas that his fated land is in Italy.
Immediately, he orders his men to depart. Dido is heartbroken over Aeneas's leaving: She builds a pyre out of Aeneas's gifts and commits suicide on it, prophesying the coming of Hannibal before she dies. When Aeneas descends to the Underworld in Book 6, Dido's shade refuses to speak with him. Dido's character shows a great deal of complexity. She appears first as an amalgam of Alcinous and Arete as she hospitably receives her Trojan guests but soon becomes a Medea figure, well acquainted with magic and arcane knowledge. Dido is a sympathetic character throughout the epic, though much of how Virgil describes her would have brought to the Roman reader's mind the Egyptian queen Cleopatra (associated with Mark Antony and the civil war).

Interpretations of the Aeneid are numerous and far from unanimous. The Aeneid's composition coincides with the end of the civil wars and the beginning of Augustus's regime. Virgil ostensibly endorses the new princeps by referring to him as the man who will usher in another golden age. Yet several elements of the epic might suggest that Virgil did not wholeheartedly support Augustus. Much of the debate centers on the war in Italy that occupies the second half of the epic, in which some scholars see a reference to the Battle of Perusia in 41 b.c.e., an event Augustus would have preferred to forget. Scholars also point to the end of the Aeneid, where Aeneas kills Turnus as he pleads for his life, as an unambiguous criticism of the new leadership. This anti-Augustan view of the Aeneid has, however, met with opposition. Many scholars find more evidence of the Iliad than of Augustus's campaign in the latter half of the Aeneid. Others suggest that in killing Turnus, Aeneas acted appropriately for his cultural circumstances. The Aeneid has also been proposed to represent, not Virgil's view of Augustus, but rather the condition of the Roman people.
Virgil seems to offer conflicting evidence for his perspective on Augustan Rome and may intentionally leave the matter ambiguous so that the reader may decide for him- or herself. The Aeneid was highly anticipated even before publication and has since enjoyed immense popularity. Quintilian regarded Virgil as nearly equal to Homer and credited him with having the more difficult task. Latin epic writers after Virgil looked to the Aeneid as their model. Statius even acknowledges that his epic, the Thebaid, cannot surpass that of Virgil. The Aeneid became a standard school text of the ancient world and was a critical part of a good education. Virgil, however, considered the work unfinished. At the time of his death he famously called for the Aeneid to be burned rather than published. Augustus saved the Aeneid from the flames and ordered its publication. See also Caesar, Augustus; Roman golden and silver ages; Roman pantheon and myth. Further reading: Galinsky, Karl. Augustan Culture. Princeton, NJ: Princeton University Press, 1996; ———. "The Anger of Aeneas." American Journal of Philology (v.109, 1988); James, Sharon L. "Future Perfect Feminine: Women Past and Present in Vergil's Aeneid." In Anderson, William S., and Lorina N. Quartarone, eds. Approaches to Teaching Vergil's Aeneid. New York: Modern Language Association of America, 2002; Knauer, Georg Nicolaus. "Vergil's Aeneid and Homer." Greek, Roman, and Byzantine Studies (v.5, 1964); Putnam, Michael C. J. "Vergil's Aeneid and the Evolution of Augustus." In Anderson, William S., and Lorina N. Quartarone, eds. Approaches to Teaching Vergil's Aeneid. New York: Modern Language Association of America, 2002; Thomas, Richard F. Virgil and the Augustan Reception. New York: Cambridge University Press, 2001. Jeffrey M. Hunt

Aeschylus

(525–456 b.c.e.) Greek playwright

The son of a wealthy family in sixth-century b.c.e. Attica, Aeschylus was a tragedian at a time when Greek theater was still developing from its beginnings as a form of elaborate dance. In contrast to the first dramas, performed in honor of Dionysus and under the influence of copious amounts of wine, Aeschylus's work emphasized natural law and punishment at the hands of the gods, examining the role of his characters in a larger world. His participation as a soldier in the Battle of Marathon in 490 b.c.e., when the invading Persians were successfully repelled by vastly outnumbered Greek forces, probably informed his approach. The Persians, first performed 18 years later, told the story of the Persian defeat at Salamis.

Of Aeschylus's 70-some plays, only seven survive. They are the earliest known Greek tragedies, as he is one of only three tragedians (with Euripides and Sophocles) whose works have survived to the modern era. Seven against Thebes is another battle narrative, concerning the assault of "the Seven" mythic heroes against Thebes in the aftermath of the death of the sons of Oedipus. The Suppliants is a simpler story about the daughters of Danaus fleeing a forced marriage, while the Oresteia is a trilogy of plays about the house of Atreus, starting with the return of Agamemnon from the Trojan War.

The Oresteia has had enduring appeal in the modern world: 20th-century playwright Eugene O'Neill's Mourning Becomes Electra was based on it, substituting the Civil War for the Trojan War in the backstory of O'Neill's trilogy. Composers Richard Strauss and Sergey Taneyev each based operas on the Oresteia, and many more writers and artists have found compelling the idea of the Furies, who in Aeschylus's trilogy bring down the wrath of the gods upon Orestes for having killed his mother. In a sense the Oresteia is not just the earliest surviving trilogy of Greek plays.
It is also one of the earliest horror stories, with the Furies tracking Orestes by following the scent of his mother Clytemnestra’s blood, and the play’s emphasis on the idea, so resonant in horror literature and ghost stories, of the supernatural exacting horrible justice on transgressors. Legend claims that Aeschylus met his death under the strangest of circumstances, when a passing eagle dropped a turtle on his head. Further reading: Aeschylus. Various works available online. URL: http://www.gutenberg.org/browse/authors/a#a2825; Griffith, Mark, ed. Aeschylus’ Prometheus Bound. Cambridge: Cambridge University Press, 1983; Sommerstein, Alan H. Greek Drama and Dramatists. London: Routledge, 2002. Bill Kte’pi

Aesop

(c. mid-sixth century b.c.e.) Greek writer

A slave in ancient Greece in the sixth century b.c.e., Aesop was the creator or popularizer of the genre of fables that bear his name. Little about him is known: More than half a dozen places have claimed him as a native son, and although Herodotus records that he was killed by citizens of Delphi, he gives no indication of motive.

Aesop's fables were brief stories, appropriate for children and structured around a simple moral lesson. Most of them featured anthropomorphized animals—animals who spoke and acted like humans, often motivated by some exaggerated human characteristic. Unlike the animal tales of many mythic traditions—the Coyote stories of North America, for instance—Aesop's animals did not represent spiritual or divine beings, nor did they explain the nature of the world. They were comparable instead to modern children's literature and cartoons, though with an educational bent.

The fables remain some of the best-known stories in the Western world, often lending themselves to proverbs. Some of the most famous include The Fox and the Grapes, from which the idiom sour grapes is derived, referring to something that, like the grapes the fox cannot reach, is assumed to be not worth the trouble; The Tortoise and the Hare, which concludes that "slow and steady wins the race" and has been adapted to a number of media, including a Disney cartoon; The Ant and the Grasshopper, the latter of which suffers through a harsh winter he had not prepared for as the ant did; and, perhaps most evocatively, The Scorpion and the Frog. In this tale a scorpion asks a frog to carry him across the river, and when the frog refuses out of fear of being stung, the scorpion brushes the concern aside, pointing out that should he sting the frog, both will die as the scorpion drowns.
Nonetheless, the frog’s fear proves warranted—when the scorpion stings him partway across the river, he reminds the frog that such behavior is plainly the nature of a scorpion. Further reading: Aesop. Aesop’s Fables. New York: Barnes and Noble Books, 2003; Daly, Lloyd. Aesop Without Morals. New York: Thomas Yoseloff, 1961. Bill Kte’pi

African city-states

The emergence of African city-states began in North Africa with ancient Egypt and the later formation of the Carthaginian empire. These civilizations are both heavily documented by written accounts, as are the other North African kingdoms of Numidia and Mauretania. However, apart from surviving secondhand accounts from early travelers from Egypt or Carthage, knowledge of city-states in the rest of Africa relies entirely on archaeological evidence.

Carthage ruled the area around its capital directly and the remainder of its territories through client kings such as those of Numidia. The Numidians threw their support behind the Romans at the Battle of Zama in 202 b.c.e., which saw the defeat of the Carthaginians and set the scene for the destruction of Carthage itself in 146 b.c.e. Numidia had a brief period of independence before it too fell under Roman control.

The best-known African city-states outside North Africa are thought to have emerged in modern-day Sudan and Ethiopia, with many settlements near the confluence of the Blue and White Niles and ancient megaliths in southern Ethiopia. Gradually two city-states, those of Meroë (900 b.c.e.–400 c.e.) and Axum (100–1000 c.e.), emerged, both transformed from powerful cities into significant kingdoms controlling large tracts of land and relying heavily on the early use of iron. The importance of bronze and iron in war is also clearly shown by the location of some of these settlements. The remains of many ancient villages and small townships have been found in Sudan, which show that protection from attack was considerably more important than access to fertile arable land.

The other area that seems to have seen the emergence of city-states in the ancient period was sub-Saharan West Africa. The finding of large numbers of objects and artifacts at Nok in modern-day Nigeria, which flourished from 500 b.c.e., has demonstrated the existence of a wealthy trading city on the Jos Plateau.
It seems likely that there would have been other settlements and small city-states in the region, with people from that area believed to have started migrating along the western coast of modern-day Gabon, Congo, and Angola, and also inland to Lake Victoria. The major African city-state emerging toward the end of this period was Great Zimbabwe. Its stone buildings, undoubtedly replacing earlier wooden ones, provide evidence of what the society in the area had developed into by the 11th century c.e. Further reading: Fage, J. D., ed. The Cambridge History of Africa: Volume 2: From c. 500 b.c.e. to c.e. 1050. Cambridge: Cambridge University Press, 1978; ———. A History of Africa. London: Routledge, 1997. Justin Corfield

African religious traditions

Little contemporary written material has survived about religious traditions in ancient Africa, except in inscriptions by the ancient Egyptians about their beliefs and in accounts by Herodotus describing the religions and folklore of North Africa. The Egyptian beliefs involved gods and the monarchs as descendants of these deities and their representatives on earth. Many of the Egyptian gods have different forms, with some, like Horus and Isis, being well known, and changes in weather, climate, and the well-being of the country reflecting the relative power of particular contending deities. Briefly during the eighteenth Dynasty, the pharaoh Akhenaten (14th century b.c.e.) tried to establish monotheism with the worship of the sun god Aten. The move eroded the power of the priests devoted to the sun god Amun-Ra, who struck back. After establishing a new capital at Tel el Amarna, the pharaoh died under mysterious circumstances, and the old religion was restored and continued until the Ptolemies took over Egypt in the fourth century b.c.e., which saw the introduction of Greek gods, and later Roman gods when Egypt became a part of the Roman Empire.

Although these concepts started in Egypt, similar ideas, almost certainly emanating from Egypt, can be found in Nubia and elsewhere. At Meroë in modern-day Sudan, there is evidence of worship of gods similar to the Egyptians'. It also seems likely that similar ideas flourished for many centuries at Kush and Axum; the latter, in modern-day Ethiopia, was influenced by south Arabia and introduced some deities from there into Africa.

In Carthage many beliefs followed those of the Phoenicians. The deity Moloch was said to be satisfied only by human sacrifice, with some historians suggesting that one of Hannibal's own brothers was sacrificed to Moloch as a child.
Modern historians suggest that the Romans exaggerated the bloodthirsty nature of the worship of the Carthaginian deity Moloch in order to better justify their war against Carthage, and that the large numbers of infant bodies found by archaeologists in a burial ground near Carthage may have resulted from disease rather than mass human sacrifice of small children. The kingdoms of Numidia and Mauretania to the west of Carthage would have been partially influenced by Carthaginian ideas but later came to adopt Roman religious practices, both becoming parts of the Roman Empire.

Much can be surmised about religious practices in sub-Saharan Africa during this period from the statuary found in places such as Nok, in modern-day northern Nigeria. Carved stone statues of deities have survived there, showing possible similarities with some Mediterranean concepts of Mother Earth. However, it seems more likely that ancestor worship was the most significant element of traditional African religion, as it undoubtedly was for many other early societies. Human figurines, such as the hundreds of soapstone figures carved at Esie in southwest Nigeria and the brass heads from Ife, are thought to represent ancestors, chiefs, or other actual people. At Jenné-jeno and some other nearby sites, the bones of relatives were sometimes interred within houses or burial buildings.

The arrival of Islam dramatically changed the religious beliefs of the region. Islam led to the building of many mosques, with cemeteries located in the grounds of these mosques or on the outskirts of cities. The graves of holy men became revered places of pilgrimage and veneration. In some places Islam adapted to local customs, but in other areas, such as Saharan Africa, it totally changed the nature of religious tradition.
In some parts of West Africa there was a clash between the fundamental concepts of Islam and tribal customs, but in most areas ancestor worship was replaced by filial respect for ancestors. Further reading: Charles-Picard, Gilbert and Colette. Daily Life in Carthage at the Time of Hannibal. London: George Allen and Unwin, 1961; Fage, J. D. A History of Africa. London and New York: Routledge, 1997; Lange, Dierk. “The Dying and the Rising God in the New Year Festival of Ife.” In Lange, Dierk, ed. Ancient Kingdoms of West Africa. Dettelbach, Germany: Roll, 2004. Justin Corfield

Ahab and Jezebel

(9th century b.c.e.) king and queen of Samaria

King Ahab and Queen Jezebel were the royal couple of Israel most vilified by later biblical writers, yet it was Ahab who made Israel and its army one of the strongest on the stage of Near Eastern nations and powers in the early ninth century b.c.e. He fortified and beautified the newly founded capital of Israel, Samaria. Archaeological excavations show that during his reign cities in various regions of his kingdom were built up so that Israel could withstand attack from neighboring peoples. His reputation gained the attention of the Phoenicians to the north, so that one of their priest-kings offered his daughter Jezebel to Ahab in an arranged political marriage.

The Bible records that Ahab fought three or four wars with the dreaded Aramaeans and won two of them. The genius of Ahab's foreign policy seems to be his peacemaking with Judah to the south, the Philistine states to the west, and Phoenicia to the north. Conserving his resources and limiting his battles allowed him to gain concessions from the Aramaeans. The real challenge came from the traditional hotbed of imperial ambition, Mesopotamia. Here the fierce Assyrians were mobilizing their forces to reestablish their empire in the western end of the Fertile Crescent. Only a makeshift alliance of all the kingdoms could stand in Assyria's way. The Assyrian records tell of a battlefield victory at Qarqar (853 b.c.e.) in the Orontes Valley in the coastal region of present-day Syria, but it was not decisive enough for the victors to push on toward their goal. Phoenicia was not even touched, much less Israel. Other minor losses for Israel during this time are reported in the Moabite Stone: A small region far to the southeast (present-day Jordan) seceded from the hegemony. Ahab also knew how to run the internal affairs of a state.
He relied on the new capital of Samaria to integrate the non-Israelite interest groups, chiefly the advocates of Baal and Asherah worship, while the older city of Jezreel served as residence to the traditional elements of Israelite culture. This balance suggests that Ahab allowed the building of foreign temples, though he showed some wavering attachment to the Israelite God. The explanation for this double-mindedness, according to the Bible, was his increasing submission to his Phoenician wife, Jezebel.

According to the genealogies given in Josephus and other classical sources, Jezebel was the great-aunt of Dido, banished princess of Phoenicia and legendary founder of Carthage. She was an ardent devotee of Baal, working behind the scenes to achieve dominance for her religion and dynasty. She tried to eliminate all the traditional prophets in Israel and plotted against the famous prophet Elijah. She outlived her husband by 10 years and died only when her personal staff turned against her in the face of a rebellious general. Her sons and daughter went on to rule: Ahaziah was king for two years after Ahab's death; then her son Joram ruled for eight years; her daughter Athaliah married the king of Judah and later ruthlessly killed all offspring of her own son so that she could rule for six years after her son died.

In the biblical account Elijah, the prophet of Israel, is the unadulterated light that casts the reputation of Ahab and Jezebel into dark shadows. Ahab stands as a pragmatist who compromises his faith and coexists with idolatry, while Jezebel takes on the role of a self-willed and idolatrous shrew whose drive for power undermines divinely balanced government. In the New Testament, Jezebel becomes a type of seductive false prophetess who gives license to immorality and idolatry under the cloak of religion. See also apocalypticism, Jewish and Christian; Christianity, early; prophets. Further reading: Becking, Bob. Fall of Samaria.
Boston: Brill Academic Publishers, 1992; Thiel, Winfried. “Ahab.” In Anchor Bible Dictionary, pp. 100–104. New York: Doubleday, 1992. Mark F. Whitters

Akhenaten and Nefertiti

(d. c. 1362 b.c.e. and fl. 14th century b.c.e.) Egyptian rulers

Akhenaten, a pharaoh of the eighteenth Dynasty of Egypt, was the second son of Amenhotep III (r. 1391–54 b.c.e.) and Tiy (fl. 1385 b.c.e.). His reign ushered in a revolutionary period in ancient Egyptian history. Nefertiti was his beautiful and powerful queen. He was not the favored child of the family and was excluded from public events during the reign of his father Amenhotep III. Akhenaten ruled with his father in coregency for a brief period. He was crowned at the temple of the god Amun, in Karnak, as Amenhotep IV. From his fifth regnal year, he changed his name to Akhenaten (Servant of the Aten). His queen was renamed Nefer-Nefru-Aten (Beautiful Is the Beauty of Aten).

The pharaoh initiated far-reaching changes in the field of religion, doing away with 2,000 years of Egyptian religious history. In his monotheism, only Aten, the god of the solar disk, was to be worshipped. The changed names he took for himself and his queen both referred to Aten, as did the name of the new capital that he constructed, Akhetaton (Horizon of Aten). Making Aten the "sole god" curbed the increasing power of the priesthood. Earlier Egyptians had worshipped a number of gods represented in animal or human form, and particular towns had their own gods. The sun god received the new name Aten, the ancient name of the physical Sun. The king was the link between god and the common people.

Akhenaten was the leader taking his followers to a new place, where royal tombs, temples, palaces, statues of the pharaoh, and other buildings were constructed. In the center of the capital city, a sprawling road was built. Designed for chariot processions, it was one of the widest roads in ancient times. The capital city of Akhetaton, in the desert, was surrounded by cliffs on three sides and bounded to the west by the river Nile. The tombs of the royal family were constructed in the valley leading toward the desert.
Near the Nile, a gigantic temple for Aten was built. The wealthy lived in spacious houses enclosed by high walls. Others resided in houses built between the walled structures of the rich. About 10,000 people lived in the capital city of Akhetaton during Akhenaten's reign.

Artwork created during the reign of Akhenaten departed from thousands of years of Egyptian artistic tradition by adopting realism. Akhenaten, possibly suffering from a genetic disorder known as Marfan's syndrome, had a long head, a potbelly, a short torso, and prominent collarbones. Representations of the pharaoh did not follow the age-old tradition of a handsome man with a good physique. The sculptor portrayed what he saw in reality, presumably at the direction of Akhenaten.

The background of the exquisitely beautiful and powerful queen Nefertiti is unclear. Some believe that Queen Tiy was her mother. According to others, she was the daughter of the vizier Ay, who was a brother of Queen Tiy. Ay occasionally called himself "god's father," suggesting that he was the father-in-law of Akhenaten. Nefertiti was of great importance in her husband's reign, and pictures show her in the regalia of a king executing foreign prisoners by smiting them. According to some Egyptologists, she was a coregent with her husband from 1340 b.c.e. and instrumental in the religious reforms. Some scholars believe that in the same year she fell from royal favor or might have died. Nefertiti was probably buried in the capital city, but her body has never been found. Some researchers think that she ruled for a brief period after the death of Akhenaten. She had no sons, but the future king Tutankhamun was her son-in-law.

The reign of Akhenaten, who has been called the "first individual in human history," forms an important period in Egyptian history. Despite his revolutionary changes, Egypt reverted to its earlier religious discourse after his death. See also Egypt, culture and religion. Further reading: Aldred, Cyril. Akhenaten, King of Egypt.
London: Thames and Hudson, 1991; David, A. Rosalie. The Making of the Past: The Egyptian Kingdoms. New York: E. P. Dutton, 1975; Freed, Rita, Yvonne Markowitz, and Sue D'Auria, eds. Pharaohs of the Sun: Akhenaten, Nefertiti, Tutankhamun. Boston: Museum of Fine Arts, 1999; Kemp, B. J. Ancient Egypt: Anatomy of a Civilization. New York: Routledge, 1989; Redford, Donald B. Akhenaten: The Heretic King. Princeton, NJ: Princeton University Press, 1984; Reeves, Nicholas. Akhenaten: Egypt's False Prophet. London: Thames and Hudson, 2001; Shaw, I. The Oxford History of Ancient Egypt. New York: Oxford University Press, 2000. Patit Paban Mishra and Sudhansu S. Rath

Akkad

Mesopotamia's first-known empire, founded at the city of Akkad, prospered from the end of the 24th century b.c.e. to the beginning of the 22nd century b.c.e. Sargon of Akkad (2334–2279 b.c.e.) established his empire at Akkad; its exact location is unknown but was perhaps near modern Baghdad. His standing army allowed him to campaign from eastern Turkey to western Iran. Although it is still unclear how far he maintained permanent control, it probably ranged from northern Syria to western Iran. His two sons succeeded him, Rimush (2278–70 b.c.e.) and Manishtushu (2269–55 b.c.e.), who had military successes of their own, suppressing rebellions and campaigning from northern Syria to western Iran. Yet it was Manishtushu's son Naram-Sin (2254–18 b.c.e.) who took the empire to its pinnacle. He established and maintained control from eastern Turkey to western Iran. In contrast to his grandfather, who was deified after his death, Naram-Sin claimed divinity while he was still alive. The rule of Naram-Sin's son Shar-kali-sharri (2217–2193 b.c.e.) was mostly prosperous, but by the end of his reign the Akkadian Empire controlled only a small state in northern Babylonia. Upon Shar-kali-sharri's death anarchy ensued until order was restored by Dudu (2189–2169 b.c.e.) and Shu-Durul (2168–2154 b.c.e.), but these were more rulers of a city-state than kings of a vast empire. The demise of the Akkadian Empire can be explained by internal revolts of local governors as well as external attacks from groups such as the Gutians, Elamites, Lullubi, Hurrians, and Amorites.

The Akkadian Empire set the standard toward which Mesopotamian kings throughout the next two millennia strove. Because of this, much literature appeared concerning the Akkadian kings, especially Sargon and Naram-Sin. In the Sargon Legend, which tells of his illegitimate birth, Sargon is placed in a reed basket in the Euphrates before he is drawn out by a man named Aqqi and raised as a gardener.
From this humble beginning Sargon establishes himself as the king of the first Mesopotamian empire. The King of Battle is another tale, in which Sargon travels to Purushkhanda in central Turkey in order to save the merchants there from oppression. After Sargon defeats the king of the city, Nur-Daggal, the local ruler is allowed to continue governing as long as he acknowledges Sargon as king. Naram-Sin, however, is often portrayed as incompetent and disrespectful of the gods. In The Curse of Akkad, Naram-Sin becomes frustrated because the gods will not allow him to rebuild a temple to the god Enlil, so he destroys it instead. Enlil then sends the Gutians to destroy the Akkadian Empire. As we know, however, the Akkadian Empire enjoyed 25 prosperous years under Shar-kali-sharri after the death of Naram-Sin, and the Gutians were not the only reason for its downfall. In fact, there is no evidence of the Gutians causing problems for the Akkadians until late in the reign of Shar-kali-sharri. Although this story had an important didactic purpose, the discrepancy shows that caution must be used in reconstructing the history of the Akkadian Empire from myths and legends. In the Cuthean Legend, Naram-Sin goes out to fight a group that has invaded the Akkadian Empire. Naram-Sin seeks an oracle about the outcome of the battle, but since it is negative, he ignores it and mocks the whole process of divination. As in The Curse of Akkad, Naram-Sin’s disrespect of the gods gets him in trouble, as he is defeated three times by the invaders. He finally seeks another oracle and receives a positive answer. Naram-Sin has learned his lesson: “Without divination, I will not execute punishment.” Despite these tales, there are others that paint Naram-Sin in a more positive light as an effective king with superior military capabilities. Along with a centralized government comes standardization.
This included the gradual replacement of Sumerian, a non-Semitic language, with Akkadian, an East Semitic language, in administrative documents. Dating by year names, that is, naming each year after a particular event such as “the year Sargon destroyed Mari,” became the system used in Babylonia until 1500 b.c.e., when it was replaced with dating by regnal years. There was also a standardized system of weights and measures. Taxes were collected from all regions of the empire in order to pay for this centralized administration. The Akkadian ruler appointed governors in the territories the empire controlled, but many times the local ruler was simply reaffirmed in his capacity. The governor had to pledge allegiance to the Akkadian emperor and pay tribute, but at times, when the empire was weak, the local rulers could revolt and assert their own sovereignty. This meant that the Akkadian rulers were constantly putting down rebellions. But perhaps the most important precedent set by the Akkadian Empire was the installation of Sargon’s daughter Enheduanna as the high priestess of the moon god Nanna at Ur. She composed two hymns dedicated to the goddess Inanna, making her the oldest known author in Mesopotamia. The practice of installing royal daughters as high priestesses provided much-needed legitimacy for the kingdom in southern Babylonia and was continued by Mesopotamian kings until the sixth century b.c.e. See also Babylon, early period; Babylon, later periods; Elam; Moses; Sumer. Further reading: Franke, Sabina. “Kings of Akkad: Sargon and Naram-Sin.” In Jack Sasson, ed. Civilizations of the Ancient Near East. New York: Charles Scribner’s Sons, 1995; Gadd, C. J. “The Dynasty of Agade and the Gutian Invasion.” In I. E. S. Edwards, C. J. Gadd, and N. G. L. Hammond, eds. The Cambridge Ancient History, 3rd ed., Vol. 1, Part 2, pp. 417–463. Cambridge: Cambridge University Press, 1971. James Roames

Alcibiades

(450–404 b.c.e.) Greek statesman and general Alcibiades was an Athenian who was influential in creating turmoil in his home city, turmoil that goes a long way toward explaining the defeat by Sparta in the Peloponnesian War (431–404 b.c.e.). Alcibiades was a controversial and divisive figure, and his legacy continues to be colored in part by his character flaws even millennia after his death. Thucydides, Plato, and Plutarch recount the adventures of Alcibiades in their writings. Alcibiades was born into a powerful family; his father commanded the Athenian army and was killed in battle. Alcibiades was then only about seven years old, and he became the ward of the statesman Pericles. He subsequently entered Athenian public life in the political and military fields. Owing in part to his background, he quickly achieved high office and served with distinction. At the Battle of Delium, he assisted Socrates, who had been wounded, and in turn benefited from the older man’s advice. However, Alcibiades was too extravagant a personality to abide by the moral strictures that Socrates required of his pupils. Indeed, association with Alcibiades was later part of the charge brought against Socrates for corrupting the youth. Alcibiades was busy establishing himself as a leading personality in the Athenian assembly, the Ekklesia, while also becoming known as a budding socialite. His family had enjoyed personal relations with Spartan interests, and he had anticipated that he could call on these connections to broker a peace agreement to end the Peloponnesian War. However, Spartan leaders refused to countenance this personal approach and insisted on formal arrangements. Subsequently, Alcibiades pursued an anti-Sparta policy that probably perpetuated the war, arguably from a sense of pique. He organized the alliance with the Peloponnesian city-states of Argos, Elis, and Mantineia.
The alliance was defeated at the Battle of Mantineia in 418, which led to Spartan dominance of the land and forced Athens and its allies to seek new fronts in the war. It was the necessity of opening a new front that led to the Syracusan campaign in Sicily. Alcibiades positioned himself to be one of the leaders of this campaign, but on the verge of the expedition’s departure, statues of the god Hermes were found to have been mutilated, and, on rather circumstantial evidence, Alcibiades was accused of violating the Eleusinian Mysteries. He sailed with the expedition, but inquiries continued during his absence. When it was determined that he should return to Athens to answer the charges against him, Alcibiades fled to Sparta and ensured his safety by providing the Spartans with valuable military advice. He made himself less popular by supposedly seducing the wife of the king of Sparta. Eventually the Spartans tired of Alcibiades, and he sought to make a new career for himself by courting the Persians, who saw the turmoil on the Greek mainland as a possible opportunity to expand their influence. For several years Alcibiades switched sides among Persia, Athens, and neutrality, depending on the political winds. Brilliance of expression and savoir faire were combined with a total lack of scruples as he sought the best advantage for himself. Finally Spartan naval victories secured a decisive advantage, and the Spartans took the opportunity to prevail upon the governor of Phrygia, where Alcibiades had taken shelter, to have him killed. Thus ended the life of one of the most vivid personalities of ancient Athens, who could surely have achieved genuine greatness had he married his gifts with some sense of personal integrity. See also Greek city-states; Persian invasions. Further reading: Kagan, Donald. The Peloponnesian War. New York: Penguin, 2004; Plutarch. Life of Alcibiades. Trans. by John Dryden. Available online. URL: http://classics.mit.edu (March 2006); Thucydides. The History of the Peloponnesian War. Trans. by Rex Warner. New York: Penguin Classics, 1954. John Walsh

Aleppo

See Damascus and Aleppo.


Alexander the Great

(356–323 b.c.e.) Macedonian ruler Alexander the Great was born in the town of Pella in the summer of 356 b.c.e. His father was Philip of Macedon, and his mother was Olympias. Philip II ascended to the throne in 359 b.c.e., at the age of 24. Under Philip II, Macedonia thrived and emerged as a strong power. Philip reorganized his army into an infantry phalanx using a new weapon known as the sarissa, a very long (18-foot) spear. This was a devastating force against all other armies, which used the standard-size spears of the time. Alexander’s birth and early childhood are obscure, related only by Plutarch, who wrote his Life of Alexander around 100 c.e., many centuries later. In his youth Alexander received a classical education, with Aristotle as one of his teachers. One of his tutors, Lysimachus, promoted Alexander’s identification with the Greek hero Achilles. Later, Philip II took another wife, Cleopatra, who bore him a son named Caranus and a daughter. This created a second heir to the throne. Olympias was a strong-willed woman who jealously guarded her son’s right to succession. She had given Philip his eldest son; however, she was no longer in his favor. At the age of 18, Alexander joined his father in leading a cavalry force against the armies of Athens and Thebes, which were fighting the last line of Greek defenses against Philip’s conquest. Philip had set a trap with his maneuver, and at the decisive moment Alexander, with his cavalry, sprang it. This victory at the Battle of Chaeronea in August 338 completed Philip’s conquest of Greece. In 336 Philip was murdered by Pausanias, a bodyguard. Upon the death of his father, Alexander and his mother, Olympias, did away with any political rivals who were vying for the throne. Philip’s second wife and children were slain.

ALEXANDER THE KING

Alexander became king in 336. He was an absolute ruler in Macedonia and king of the city-states of Athens, Sparta, and Thebes.
As a new king, he had to prove that he was as powerful a ruler as his father, Philip II, had been. Revolts against his rule first occurred in Thrace. In the spring of 335, Alexander and his army defeated the Thracians and advanced into the Triballian kingdom across the Danube River. Alexander also faced the challenge of pacifying the recently conquered Greek city-states. While Alexander was in the Triballian kingdom, the Greek cities rebelled against Macedonian rule. The Athenian orator Demosthenes spread a rumor that Alexander had been fatally wounded in an attack. News of Alexander’s death sparked rebellions in other Greek states, such as Thebes. The Thebans attacked the Macedonian garrison of their city and drove out the Macedonian general Parmenio. Their victory was due to a Greek mercenary named Memnon of Rhodes, who defeated Parmenio at Magnesia and pushed him back to northwest Asia Minor. Alexander returned to Thebes after his victories and faced strong opposition from the Thebans, but he defeated them swiftly.

CAMPAIGN AGAINST PERSIA

Alexander embarked on a campaign against Persia in the spring of 334. The Persians had attacked Athens in 480, burning the sacred temples of the Acropolis and enslaving Ionian Greeks. Alexander, a Macedonian, won great favor with the Greeks by uniting them against Persia. He set out with an army of 30,000 infantry, 5,000 cavalry, and a fleet of 120 warships. The core force was the infantry phalanx, with 9,000 men armed with the sarissa. The Persian army had about 200,000 men, including Greek mercenaries, and Memnon, the Greek mercenary general, led the Persian force. Alexander had an excellent knowledge of Persian war strategy from an early age. In the spring of 334 he crossed the Hellespont (Dardanelles) into Persian territory. The Persians stationed themselves uphill on steep, slippery, rocky terrain on the eastern bank of the river Granicus. Here they met Alexander’s army for the first time in May 334.
Alexander was attacked on all sides but managed to escape, though he was wounded. The Persians left the battle thinking they had claimed victory, leaving behind only their Greek mercenaries to fight, which resulted in a very high casualty rate on the Persian side. Alexander’s armies advanced south along the Ionian coast. Some cities surrendered outright. Greek cities, such as Ephesus, welcomed him as a liberator from the Persians. Memnon’s forces still presented a threat to Alexander. They stationed themselves at sea, but as Alexander did not wish to join in a sea battle, they were unable to stop his advances on land. In the city of Halicarnassus, Alexander and Memnon met in battle again. Alexander took the city, burned it down, and installed his ally Ada as queen. The Persian cities of Termessus, Aspendus, Perge, Selge, and Sagalassus were taken afterward without much difficulty. This ease of conquest continued until he reached Celaenae, where he ordered his general Antigonus to pacify the region.

“DIVINE” RULER OF ASIA

Throughout his military campaign people perceived Alexander to be divine. Even the ocean, according to legend, seemed to be servile toward him and his armies. There was a legend involving a massive knot of rope, stating that he who could unravel the knot would rule the world. Many had tried, while Alexander merely cut through the knot with his sword. Upon hearing this, King Gordius of Gordium surrendered his lands. The story of this divine prophecy being fulfilled spread quickly. Memnon’s death was also regarded as proof of Alexander’s divine quality. This hastened Alexander’s progress through the Persian territories of the eastern Mediterranean, which were long-held, conquered Greek states. The Battle of Issus in the gulf of Iskanderun was a decisive battle fought in November 333.
The Persian king Darius himself led the Persian forces. Darius had a massive force, much larger than Alexander’s army. Darius maneuvered brilliantly, approaching Alexander’s army from the rear and cutting off its supplies. The battle occurred on a narrow plain not large enough for the massive armies; it was fought across the steep-sided river Pinarus. This cost the Persians their advantage, and Alexander emerged victorious as King Darius III fled. The Battle of Issus was a turning point. Alexander moved from the Greek states that he had liberated to lands inhabited by the Persians themselves. He conquered Byblos and Sidon unopposed. In Tyre he faced real opposition. The city fortress was on an island in the sea, and his prospects were worsened by his lack of a fleet. To his aid came liberated troops that had defected from the Persian fleet. The army and the people of Tyre were defeated; most were tortured and slain, and some were sold into slavery. Other coastal cities then readily surrendered. In 331 Alexander marched on to Egypt. The Egyptians welcomed him, as he was freeing them from Persian control, and the city of Alexandria was founded in his name. Alexander took a journey across the desert to the temple of Zeus Ammon, where an oracle told him of his future and that he would rule the world. From Egypt, Alexander corresponded with Darius, the Persian king. Darius wanted a truce, but Alexander wanted the whole of the Persian Empire. The same year he marched into Persia to pursue Darius. He conquered the lands around the Tigris and Euphrates Rivers. Alexander encountered Darius at Gaugamela and defeated the Persian army. Babylon and Susa fell, and he reaped their riches. After conquering the Persian capital of Persepolis, he rested there for a few months and then resumed his pursuit of Darius. By then, however, Darius’s own men had assassinated him. Alexander started to adopt Persian dress and customs in order to combine Greek and Persian culture in a new, larger empire.
He married Roxane, creating a queen who was not Greek, and this cost him some of his Greek supporters. Still he gathered enough military support to invade India in 327. After many conquests he encountered Porus, a powerful Indian ruler, who put up a great battle near the river Hydaspes. After this his men were reluctant to advance further into India. Alexander was seriously injured with a chest wound, and his armies retreated from India. Alexander died on June 10, 323 b.c.e., at the age of 33. Different scenarios have been proposed for the cause of his death, including poisoning, illness that followed a drinking party, or a relapse of the malaria he had contracted earlier. Rumors of his illness circulated among the troops, making them more and more anxious. On June 9, the generals decided to let the soldiers see their king alive one last time, and guests were admitted to his presence one at a time. Because the king was too sick to speak, he just waved his hand. The day after, Alexander was dead. See also Persian invasions of Greece. Further reading: Fox, Robin Lane. Alexander the Great. Malden, MA: Futura Publications, 1975; Green, Peter. Alexander of Macedon, 356–323 B.C.: A Historical Biography. Berkeley: University of California Press, 1991; Hammond, N. G. L. Alexander the Great, King Commander and Statesman. Park Ridge, NJ: Noyes Press, 1980; Stoneman, Richard. Alexander the Great. New York: Routledge, 2004. Nurfadzilah Yahaya

Alexandria

Alexandria, also known by its Arabic name al-Iskandariyya, was named after Alexander the Great. Alexandria was built on the Mediterranean coast of Egypt at the northwest edge of the Nile Delta. The city lies on a narrow strip of land between the sea and Lake Mariut (Mareotis in Greek). Alexander the Great founded the city in 331 b.c.e. He ordered the Greek architect Dinocrates of Rhodes to build the city over the site of the old village of Rakhotis, which was inhabited by fishermen and pirates. Alexander left the city under the charge of his general Ptolemy (also known as Ptolemy I). The city would later become Alexander’s final resting place. After it was built, Alexandria evolved into an important economic hub in the region. It began by taking over the trade of the city of Tyre, whose economic prominence declined after an attack by Alexander. Alexandria soon surpassed Carthage as well, an ancient city that had been the center of civilization in the Mediterranean. Although the city rose to great prominence under the Ptolemaic rulers during the Hellenistic period, it was soon surpassed by the city of Rome. During its peak Alexandria was the commercial center of the Mediterranean. Ships from Europe, the Arab lands, and India conducted active trade in Alexandria, and this contributed to its prosperity as a leading port in the Mediterranean Basin. The inhabitants of Alexandria consisted mainly of Jews, Greeks, and Egyptians. The Egyptians provided the bulk of the labor force. Alexandria was not only a bastion of Hellenistic civilization; it occupied a very prominent position in Jewish history as well. The Greek translation of the Hebrew Old Testament was first produced there. Known as the Septuagint, this translation took between 80 and 130 years to complete. Thus, Alexandria was a major intellectual center in the Mediterranean. The city boasted two great libraries with huge collections, one in a temple of Zeus and the other in a museum.
As early as the third century b.c.e., the libraries housed somewhere between 500,000 and 700,000 papyri (scrolls). A university was built near the libraries, attracting renowned scholars to Alexandria. One of them was the great Greek mathematician Euclid, a master of geometry and author of the famous work Elements. After Cleopatra, the queen of Egypt, committed suicide in 30 b.c.e., the city of Alexandria came under the rule of Octavian, later known as Augustus, the first Roman emperor. Augustus installed a prefect in Alexandria, who governed the city in his name. Trade continued to flourish in the city under the Romans, especially in grain. Nevertheless, the city went into decline under Roman rule. A Jewish revolt in 116 c.e. weakened the city, resulting in the decimation of the Jewish population residing there. Nearly a century later, in 215 c.e., for reasons that are unclear, the Roman emperor Caracalla decreed that all male inhabitants be massacred, perhaps as punishment. This further undermined the city’s importance in the region, a decline worsened by the rise of other important cultural, economic, and intellectual centers such as Constantinople, founded in 330 c.e. by the Roman emperor Constantine the Great. In both 638 and 646 c.e. Muslim Arabs invaded the city. During this time Cairo became another rival city. Alexandria soon weakened, and it was not resurrected until the 19th century. See also Jewish revolts; libraries, ancient. Further reading: Forster, Edward M. Alexandria: A City and a Guide. New York: Anchor Books, 1961; Parsons, Edward A. The Alexandrian Library, Glory of the Hellenic World: Its Rise, Antiquities, and Destructions. Amsterdam: Elsevier Press, 1952; Vrettos, Theodore. Alexandria: A City of the Western Mind. New York: Free Press, 2001. Nurfadzilah Yahaya

Alexandrian literature

Alexandrian literature was very rich due to its multicultural heritage, as Alexander the Great’s empire encompassed Europe, Asia, and Africa. Alexander’s conquests opened up trade and travel routes across his empire, and Alexandria developed as a center of commerce between the Middle East, Europe, and India. The city was also known as a center of learning. Greek was the lingua franca in Egypt for the people of different origins residing there. Owing to the distinguished community of intellectuals living in Alexandria, Alexandrian literature was of high quality. The excellent libraries also attracted scholars of diverse origins, further enriching intellectual life in the vibrant city. In 283 b.c.e. a synodos, formed by 30 to 50 scholars, set up a library with several wings, shelves, covered walkways, lecture theaters, and even a botanical garden. The library was built under the direction of a scholar-librarian who held the post of royal tutor, appointed by the king. By the third century b.c.e. the library had an impressive collection of 400,000 mixed scrolls and 90,000 single scrolls. The earlier scrolls on which scholars wrote were made of papyrus, a product monopolized by Alexandria for a period of time. Later scholars switched to parchment when the king, in a bid to stifle rival libraries elsewhere, stopped exporting papyrus. These scrolls, which constituted books, were stored in linen or leather jackets. In the library there were numerous translators, known as charakitai, or “scribblers.” The translators performed a vital function in transmitting the wisdom found in manuscripts that had been written in other languages in Greece, Babylon, India, and elsewhere.
These manuscripts were meticulously copied and stored in the libraries of Alexandria, as the kings wished to amass all the knowledge available in the world of antiquity. This contributed greatly to Alexandria’s position as a center of knowledge in ancient civilization. Among the eminent scholars based in Alexandria was Euclid (325–265 b.c.e.), the famous mathematician, who composed his influential masterpiece Elements in the city in about 300 b.c.e. Euclid provided useful definitions of mathematical terms in Elements. Apollonius of Perga wrote an equally seminal work in mathematics known as Conics, in which he discussed a new approach to defining geometrical concepts. Another Apollonius, Apollonius of Rhodes, a mathematician and astronomer, wrote his epic Argonautica in about 270 b.c.e. The epic was dubbed the first real romance and regarded as an enjoyable read, as it was written for pleasure and not for any explicitly didactic purpose. Alexandrian prose was often criticized for being pedantic, ornamental, and pompous, though some perceived Alexandrian literature to be erudite and polished. The novel is said to be an invention of Alexandrian writers. Archimedes of Syracuse (287–212 b.c.e.), the famous Hellenistic mathematician, observed the rise and fall of the Nile, invented the screw, and initiated hydrostatics. The basis of calculus began in Alexandria, as it was where Archimedes started to explore formulas to calculate area and volume. Another brilliant scholar of Alexandria was the librarian Eratosthenes, a geographer and mathematician. Eratosthenes correctly calculated the duration of a year, postulated that the Earth is round, and theorized that the oceans were all connected. There was also Claudius Ptolemy, whose great work was the Mathematical Syntaxis (System), usually known by its Arabic name, the Almagest. It is an important work of trigonometry and astronomy.
From the middle of the first century c.e., Christian hostility pushed scholars away from Alexandria. As a result the city declined as a center of learning in the Mediterranean. The library in Alexandria was destroyed during a period of civil unrest in the third century c.e. In the fourth century not only were pagan temples destroyed, but libraries were also closed down under the orders of Theophilus, the bishop of Alexandria, further eroding Alexandria’s role as a bastion of literature. See also libraries, ancient. Further reading: Battles, Matthew. Library: An Unquiet History. New York: W. W. Norton, 2003; El-Abbadi, Mostafa. The Life and Fate of the Ancient Library of Alexandria. Paris: UNESCO, 1990; Keeley, Edmund. Cavafy’s Alexandria. Princeton, NJ: Princeton University Press, 1996; Watson, Peter. Ideas: A History from Fire to Freud. New York: HarperCollins, 2005. Nurfadzilah Yahaya

Ambrose

(c. 340–397 c.e.) bishop and theologian Ambrose, bishop of Milan, was born in Trier of the noble Aurelian family. His mother moved the family to Rome after the death of his father. Educated in rhetoric and law, Ambrose was first employed in Sirmium and then, in c. 370 c.e., as governor of Milan. After the death of the Arian bishop of Milan, a violent conflict broke out in the city over whether the next bishop would be a Catholic or an Arian. Ambrose intervened to restore peace and was so admired by all that both sides accepted him as a candidate for bishop, although he was not even baptized at the time. He was baptized and consecrated a bishop within a week. He immediately gave his wealth to the poor and devoted himself to the study of scripture and the Greek fathers of the church. As a bishop, he was famous for his preaching, which was partly responsible for the conversion of the great theologian Augustine of Hippo, whom Ambrose baptized at Easter in 387. Ambrose’s career was heavily involved with politics. He continually defended the position of the Catholic Church against the power of the various Roman rulers during his episcopate: Gratian, Maximus, Justina (the pro-Arian mother of Valentinian II), and Theodosius I. He was able to maintain the independence of the church against the civil power in his conflicts with paganism and Arianism. Regarding the former, Ambrose battled with Symmachus, magistrate of Rome, over the Altar of Victory in the Senate: The emperor Gratian had removed the altar in 382, and after Gratian’s death Symmachus petitioned Valentinian II for its restoration. Under Ambrose’s influence, the request was denied. Arianism received a blow when Ambrose refused to surrender a church for the use of the Arians. His decision was taken as sanctioned by heaven when, in the midst of the controversy, the bodies of the martyrs Gervasius and Protasius were discovered in the church.
Ambrose further strengthened the church’s authority before the state in two incidents in which he took a firm stand against the emperor Theodosius I. One incident involved the rebuilding of the synagogue at Callinicum in 388; the other had to do with the emperor’s rash order that resulted in the massacre of thousands of innocent people at Thessalonica in the summer of 390. Ambrose refused to allow Theodosius to receive the sacraments until he had performed public penance for this atrocity. The reconciliation took place at Christmas 390. One reason for Ambrose’s influence over Theodosius was that, unlike most Christian emperors who delayed their reception into the church until their deathbed, he had been baptized and so fell under the authority of the church in his private life. Ambrose’s knowledge of Greek enabled him to introduce much Eastern theology into the West. His works include hymns, letters, sermons, treatises on the moral life, and commentaries on scripture and on the sacraments. He was also a strong supporter of the monastic life in northern Italy. See also Christianity, early; Greek Church; Latin Church; Monasticism. Further reading: Deferrari, Roy. Early Christian Biographies. Washington, DC: CUA Press, 1952; Dudden, F. Homes. Life and Times of St. Ambrose. Oxford: Clarendon, 1935; McLynn, Neil B. Ambrose of Milan: Church and Court in a Christian Capital. Berkeley: University of California Press, 1994. Gertrude Gillette

Andes: Neolithic

In order to impose temporal order on the variety of cultures and civilizations that emerged in the Andes in the millennia before the Spanish invasion (early 1530s c.e.), scholars have divided Andean prehistory into “horizons” and “periods,” with horizons representing eras of relatively rapid change and periods being eras of relative stasis:

Late Horizon: 1400–1533 c.e.
Late Intermediate Period: 1000–1400 c.e.
Middle Horizon: 600–1000 c.e.
Early Intermediate Period: 100 b.c.e.–600 c.e.
Early Horizon: 700–100 b.c.e.
Initial Period: 1800–700 b.c.e.
Preceramic Period: 3000–1800 b.c.e.
Lithic Period: before 10,000–3000 b.c.e.

The boundaries between these temporal divisions are fluid and are mainly a matter of scholarly convenience and convention. Spatially, the Andes region is generally divided into coast and highlands, with each subdivided into northern, central, and southern, yielding a total of six broad geographic zones.

ÁSPERO

The earliest evidence for the formation of complex societies in the Andes region dates to between 3200 and 2500 b.c.e. along the Pacific coast. Altogether more than 30 rivers cascade down to the Pacific from the Cordillera Occidental of the Andes, and many of their valleys saw the development of complex societies during the Preceramic Period. One of the most extensively researched of these coastal zones is the North Chico, a 30-mile-wide ribbon of coastland just north of present-day Lima, encompassing the Huaura, Supe, Pativilca, and Fortaleza river valleys. Archaeological excavations in the North Chico beginning in the 1940s have revealed evidence of at least 20 large settlements with monumental architecture, whose origins date to between 3200 and 1800 b.c.e. The most intensively researched of these sites are Áspero, at the mouth of the Supe River, and Caral, about 13½ miles upstream from Áspero. It was his work at the site of Áspero that in 1975 prompted U.S. archaeologist Michael E.
Moseley to propose a hypothesis conventionally called the “maritime foundations of Andean civilizations” (MFAC). According to the MFAC hypothesis, the initial formation of complex societies in the Andean region took place along the coast and was made possible through the intensive exploitation of maritime resources. This, in turn, was made possible largely through the cultivation of cotton, which was used to manufacture the nets needed to harvest the coast’s abundant fish, especially anchovies and sardines. Evidence unearthed at Áspero and other sites in the North Chico since the 1970s strongly supports the MFAC hypothesis, though debates continue regarding the origins and characteristics of these societies. The site of Áspero presents numerous anomalous features. It contains no pottery, only a few maize cobs, and some 17 large earthen mounds, some nearly 16 feet tall. The largest structure at the site, a flat-topped pyramid called Huaca de los Ídolos, covers some 16,145 sq. feet, upon which, it is hypothesized, Áspero’s elite undertook ritual and ceremonial displays. The site also contains some 30 to 37 acres of domestic middens (refuse areas), along with evidence that its residents were continually rebuilding the mounds and other structures. This latter characteristic is also apparent at other Pacific coast sites. Upriver from Áspero, at the site of Caral, which covers some 150 acres, investigations have revealed some 25 pyramids or mounds, one reaching 82 feet in height and covering some 247,570 sq. feet; two large, rounded, sunken ceremonial plazas; arrays of other mounds and platforms; extensive residential complexes; and evidence of long-term sedentary inhabitation. Radiocarbon dates indicate that Caral was founded before 2600 b.c.e. The same dating procedure applied to other sites in the North Chico indicates that most were founded between 3000 and 1800 b.c.e. 
Middens at Caral and other North Chico sites indicate that maritime resources exploited through cotton cultivation and net manufacture were supplemented by a variety of cultigens, including legumes and squash, and by the gathering of diverse wild foods. In addition to Áspero and Caral, the most extensively researched of these sites to date include Piedra Parada, Upaca, Huaricanga, and Porvenir, and, in the Casma Valley, the sites of Sechín Alto, Cerro Sechín, and Pampa de las Llamas-Moxeke. All fall within what is called the Áspero tradition. Other major Preceramic Pacific coast traditions are the Valdivia tradition (on the coast of contemporary Ecuador); the El Paraíso tradition (just south of the Áspero sites); and the Chinchoros tradition (centered at the Chinchoro complex near the contemporary Peru-Chile border). Archaeological excavations at these and other Preceramic coastal sites continue, as do scholarly efforts to understand the civilizations that created them.

HIGHLANDS

A related arena of debate among Andean archaeologists concerns the relationship between the Pacific coast settlements and the formation of complex societies in the highlands. Most scholars agree that complex societies began to emerge in the Central and South-Central Highlands soon after the florescence of complex societies in the North Chico and other coastal valleys. In the Central Highlands scholars have investigated what is called the Kotosh religious tradition at the Kotosh site. Not unlike those in the North Chico, this site includes a series of raised mounds with platforms, sunken plazas, and an array of small buildings. Sites exhibiting similar characteristics in the Central Highlands include Huaricoto, La Galgada, and Piruru. In the South-Central Highlands the emergence of complex societies evidently began in the Lake Titicaca Basin around 1300 b.c.e. 
Excavations at the site of Chiripa (in present-day Bolivia) have revealed that by this date there had emerged a nucleated settlement that included an array of small rooms, built of stone, with plastered floors and walls. By 900 b.c.e. the settlement of Chiripa included a ceremonial center surrounded by residential complexes. Between 1000 and 500 b.c.e. complex societies had emerged throughout much of the Lake Titicaca Basin. To the north the Qaluyu culture reached florescence in the five centuries after 1000 b.c.e. The Qaluyu type site, covering 17 acres, includes a large ceremonial mound, sunken plazas, and extensive residential complexes. Other Qaluyu sites in the north Titicaca Basin include Pucará, Ayaviri, and Putina.

TITICACA BASIN

The overall trajectory of this period was marked by the decline of North Coast polities and the rise of a series of civilizations and culture groups in the Central and Southern Highlands and Central Coast. After 1000 b.c.e. the Titicaca Basin constituted one broad locus of complex society formation. A second such locus emerged further north, in the Central Highlands and Central Coast, most commonly associated with the Chavín state and culture complex, which first emerged around 800 b.c.e. and declined some six centuries later. At the Chavín type site, Chavín de Huantar, excavations indicate a population of at least several thousand in a settlement covering some 104 acres. At the site’s core lie the ruins of a large and imposing ceremonial temple, dubbed El Castillo, built in the U shape characteristic of the North Chico architectural style. The evidence indicates that Chavín de Huantar was the political center of an expansive polity that extended through much of the Central Highlands and Central Coast. By this time exchange relations throughout the Andes and adjacent coastal regions were highly developed. 
These exchanges were based less on markets than on institutionalized reciprocal exchanges between extended lineage groups tracing their descent to a common ancestor, called ayllu, as well as between political networks resulting from the growth of state and imperial power. Such exchanges followed what anthropologist John Murra described in the 1970s as the “vertical archipelago,” a concept that has gained broad scholarly acceptance. In the simplest terms the basic idea is that the Andes region consists of a vertical environment and that exchanges of goods and services took place among members of ayllus who lived in different “resource oases” or “islands” in different altitudinal zones. From the high plateau (or puna, elevation higher than 11,810 feet) came wool, meat, and minerals such as gold, silver, and copper; from the upper mountain valleys (between 9,840 and 11,810 feet) came potatoes, grains, including maize and quinoa, and other crops; and from the lowlands (below 6,560 feet) came maize, cotton, coca, legumes, and many fruits and vegetables. Scholarly consensus holds that large-scale state systems such as the Chavín built upon these lineage-based reciprocal exchange networks in order to extend their reach across vast expanses of territory without recourse to long-distance trade, as the concept of “trade” is generally understood. For these reasons, “markets” and “trade,” as understood in European, Asian, and African contexts, played little or no role in the formation and growth of complex societies and polities in the Andean highlands or coastal regions during the whole of the preconquest period. This was also the case with the Inca. As Chavín declined around 400 b.c.e., there emerged in the northern Titicaca Basin, in the six centuries between 400 b.c.e. and 200 c.e., a site and polity known as the Pucará, with architectural features similar to those described above, and ceramic styles suggesting Chavín influence. 
On the opposite side of the lake, in the southern Titicaca Basin during roughly the same time period, there emerged the settlement and state of Tiwanaku—again, with similar architectural features. By around 400 c.e. Tiwanaku had developed into a formidable state system. Scholarly debates continue on whether, during the period under discussion here, these were true urban centers or ceremonial sites intended principally for ritual observances and pilgrimages.

NAZCA

Another enigmatic culture complex to emerge during the Early Intermediate Period was the Nazca, centered in the southern coastal zone around the watersheds of the Ica and Nazca Rivers. Nazca pottery styles went through at least eight distinct phases, until their decline around 600 c.e. The Nazca are especially well known for their geoglyphs, or large-scale geometric symbols etched into the coastal desert. Further north, the Moche were another important coastal culture group and state to emerge in the Early Intermediate. The site of Moche, in the Moche River valley, has been identified as the capital of the Moche polity. Archaeologists consider Moche to have been a true city, perhaps South America’s first. The largest structure at the Moche type site, a pyramid dubbed Huaca del Sol, measures 525 by 1,115 feet at its base and stands some 131 feet tall, making it one of the Western Hemisphere’s largest preconquest monumental structures. All of these developments laid the groundwork for the subsequent emergence of two other major state systems, or empires, toward the end of the period discussed here: the Huari and the Tiwanaku. See also Maya: Classic Period; Maya: Preclassic Period; Mesoamerica: Archaic and Preclassic Periods; Mesoamerica: Classic Period. Further reading: Burger, R. Chavín and the Origins of Andean Civilization. London: Thames and Hudson, 1995; Haas, J., S. Pozorski, and T. Pozorski, eds. The Origins and Development of the Andean State. London: Cambridge University Press, 1987; Haas, J., W. 
Creamer, and A. Ruiz. “Dating the Late Archaic Occupation of the North Chico Region in Peru.” Nature 432 (2004); Moseley, M. E. The Incas and Their Ancestors. London: Thames and Hudson, 2001; ———. The Maritime Foundations of Andean Civilization. Menlo Park, CA: Cummings, 1975. M. J. Schroeder

Antonine emperors Edit

The four Antonine emperors of Rome—Antoninus Pius (r. 138–161 c.e.), Marcus Aurelius (r. 161–180 c.e.), Lucius Verus (r. 161–169 c.e.), and Commodus (r. 180–192 c.e.)—ruled over a time extending from the height of the Pax Romana to one where the Roman Empire was having increasing difficulty carrying its many burdens. The founder of the dynasty, Antoninus Pius, was born to a family that already numbered several consuls among its members. He served for many years in the Senate and as a Roman official before being adopted as successor to the emperor Hadrian in 138 c.e. Part of the arrangement was that Antoninus would in turn adopt two boys as his heirs. One was the nephew of his wife, Annia Galeria Faustina. This was Marcus Antoninus, the future Marcus Aurelius. The other was Lucius Verus, the son of Hadrian’s previous choice as successor, Lucius Aelius Caesar. When Hadrian died the same year, Antoninus succeeded peacefully. Antoninus was more than 50 when he became emperor. The reign of Antoninus was marked by peace and by an emphasis on Italy and Roman tradition that broke with the practices of the globetrotting philhellene Hadrian. His dedication to traditionalism was one of the qualities for which the Senate gave him the title of “Pius.” Antoninus also cut back on the heavy spending on public works that had marked Hadrian’s reign. Antoninus spent most of his time in Rome, by some accounts never leaving Italy during his reign. The 900th anniversary of the city’s legendary founding took place in 147 c.e., and a series of coins and medallions with new designs stressing Rome’s ancient roots was issued to commemorate the occasion. In foreign policy Antoninus preferred peace to war and did not lead armies himself, but the empire waged war successfully on some of its borders. Antoninus’s death was followed by a dual succession, the first in Roman history. 
Lucius Verus and Marcus Aurelius became co-emperors, although Marcus was clearly the dominant partner in the relationship. The new emperors faced many challenges. In the east, the king of Parthia hoped to take advantage of the inexperienced new rulers with an intervention in the buffer state of Armenia. Marcus sent Lucius, accompanied by a number of Rome’s best generals, to deal with the Parthians. The Parthian war was successful but was followed by a devastating plague and by pressure from the Germanic peoples across the Danube; the Marcomanni and Quadi penetrated as far as northern Italy. The relationship between the emperors was troubled, as Marcus’s austere dedication to duty clashed with Lucius’s sometimes irresponsible hedonism. Lucius died on campaign against the Germans, however, before any open break could occur, and Marcus referred to him fondly in his Meditations. Marcus’s long campaigns against the Germans were successful, but he died before he could organize the conquered territories into Roman provinces, and his son and successor Commodus (who received the title of emperor in 177) quickly abandoned his father’s conquests, returning to Rome in order to enjoy the perquisites of empire. Commodus was the first son to succeed his natural father, rather than to be adopted by an emperor, since Titus. The hedonistic and exhibitionistic Commodus contrasted with his grim, duty-bound father. His policy of generosity made him popular among Rome’s ordinary people, particularly in the early part of his reign, but the Senate despised him. Commodus was extraordinarily arrogant, renaming the months, the Senate, the Roman people, and even Rome after himself. Unlike Marcus, Commodus had little interest in persecuting Christians, and subsequent Christian historians remembered his reign as a golden age. In 192 he was removed in the traditional fashion for “bad emperors,” through an assassination plot—the first emperor since Domitian to be assassinated. 
Commodus left no heirs, and his death marked the end of the Antonine dynasty. See also Hadrian; Roman Empire. Further reading: Birley, Anthony. Marcus Aurelius: A Biography. London: Routledge, 2000; De Imperatoribus Romanis: An Online Encyclopedia of Roman Emperors. Available online. URL: http://www.roman-emperors.org (September 2006). William E. Burns

Anyang Edit

Anyang is the modern town where the last capital (Yin) of the Shang dynasty (c. 1766–c. 1122 b.c.e.) of China was located. The discovery of inscribed oracle bones there early in the 20th century and the scientific excavation of the site beginning in 1928 ended the debate on whether the Shang dynasty was historic. It is located south of the Yellow River in present-day Henan Province. The Shang dynasty, founded by Tang (T’ang) the Successful, moved its capital several times until it settled at Yin in 1395 b.c.e. and remained there until its end in 1122 b.c.e. The last phase of the dynasty is therefore also called the Yin dynasty. After the city was destroyed when the dynasty was overthrown by the Zhou dynasty (c. 1122–256 b.c.e.), the site was known as Yinxu, which means the “waste of Yin.” The discovery of the Shang era ruins at Anyang came by accident. In Beijing (Peking) in 1900 an antiquarian scholar became ill, and among the ingredients for traditional medicine that were prescribed for him were fragments of old bones carrying incised marks. The apothecary called them dragon bones. This scholar and his friend made inquiries into the bones’ origins and traced them to Anyang, where farmers had found them in their diggings. They began to collect the bones and decipher the writings on them, which they established as the earliest extant examples of written Chinese. Archaeological excavations around Anyang found the foundations of palatial and other buildings but no city walls. They also found a royal cemetery with 11 large tombs, believed to belong to kings, which had all been robbed in centuries past. This authenticates ancient texts that identify 12 kings who ruled from Yin, but the last one died in his burning palace and so did not receive a royal burial. In 1976 an intact tomb belonging to Fu Hao (Lady Hao), wife of King Wuding (Wu-ting), the powerful fourth king to reign from Yin, was discovered. 
Although her body and the coffin had been destroyed by time and water, more than 1,600 burial objects were found, some with inscribed writing, which included her name, on elaborate bronze ritual vessels. Bronze vessels, jade, ivory, and stone carvings, and other objects show the advanced material culture of the late Shang era. More than 20,000 pieces of inscribed oracle bones (on the scapulae of cattle and turtle shells) provide important information on Shang history. Kings frequently asked questions and sought answers from the high god Shangdi (Shang-ti) on matters such as war and peace, agriculture, weather, hunting, pregnancies of the queens, and the meaning of natural phenomena. The questions, answers, and sometimes outcomes contain dates, names of the rulers, and their relationship to previous rulers, including those of the pre-Anyang era. They were preserved in royal archives. The writing is already sophisticated and must have developed over a long period, but earlier evidence of writing has not been found. It is the ancestor of modern written Chinese, and deciphering the characters, together with archaeological evidence, has enabled historians to reconstruct Shang history. See also Wen and Wu. Further reading: Chang, Kwang-chih. The Archaeology of Ancient China, 4th ed. New Haven, CT: Yale University Press, 1986; Creel, Herrlee G. The Birth of China, a Survey of the Formative Period of Chinese Civilization. New York: Frederick Ungar, 1961; Keightley, David N. Sources of Shang History: The Oracle-Bone Inscriptions of Bronze Age China. Berkeley: University of California Press, 1978. Jiu-Hwa Lo Upshur

apocalypticism, Jewish and Christian Edit

The scholarly use and understanding of the word apocalypticism has varied much in the history of research on these topics. The different words associated with apocalypticism each possess their own subtle connotations. The specific term, apocalypticism, and the many forms associated with it are derived from the first Greek word in the book of Revelation, apokalypsis (revelation). The noun apocalypse refers to the revelatory text itself. The particular worldview found within an apocalypse and the assumptions that it holds about matters concerning the “end times” is referred to as “apocalyptic eschatology.” The noun apocalypticism refers broadly to the historical and social context of that worldview. When scholars use the word apocalyptic, they typically assume a distinction between the ancient worldview and the body of literature associated with it. Apocalypticism refers to a worldview that gave rise to a diverse body of literature generally dating from the time of the Babylonian exile down to the Roman persecutions. Characteristic elements of this literature include a revelation of heavenly secrets to a privileged intermediary and the periodization of history. In these texts the eschatological perspective of the text reinforces the expectation that the era of the author will reach its end very soon. This apocalyptic eschatology suggests that the historical setting of these writings is one of crisis and extreme suffering. Scholars who work in the area of ancient Jewish and Christian apocalypticism are aware that Jewish apocalyptic literature survived due to ancient Christian appropriation and interest in it. This is because Jewish apocalypticism and the literature associated with it were generally viewed unfavorably by later forms of rabbinic Judaism after the destruction of the Second Temple. 
The lack of a developed Jewish interpretive framework for these texts accounts for part of the scholarly problem in determining the precise origins and influences of this phenomenon. Many historical questions about the social context and the use of these Jewish apocalyptic writings in ancient Jewish communities remain unclear and largely theoretical. What is certain is that Christian communities were responsible for the preservation and transmission of these writings, and they appropriated the worldview and the literary forms of Jewish apocalypticism. Scholars have long sought to identify the origins of Jewish apocalypticism with little consensus. Many have presumed that Jewish apocalyptic eschatology grew out of earlier biblical forms of prophetic eschatology. Other scholars have proposed a Near Eastern Mesopotamian influence on Jewish apocalypticism. While there is no clear trajectory from Mesopotamian traditions to Jewish apocalyptic, and admittedly no Mesopotamian apocalypses exist, there exist some striking resemblances between the two. Some shared characteristics include an emphasis on the interpretation of mysterious signs and on predestination. The motifs of otherworldly journeys and dreams are also prominent in both Mesopotamian traditions and Jewish apocalypticism. Other scholars have observed a Persian influence upon Jewish apocalypticism. Present in both is the struggle between light and darkness (good and evil) and the periodization of history. Identifying the relationship between Jewish apocalypticism and other traditions has been complex because some of these elements (e.g., otherworldly journeys and revelatory visions) become common to the Greco-Roman world as well. While early Jewish apocalyptic was rooted in biblical prophecy, later forms of apocalypticism from the Greek period have more in common with wisdom literature. 
Literary Genre

Scholars often make a distinction between the general phenomenon of apocalypticism and the literary genre of “apocalypse.” A group of scholars led by J. J. Collins formulated the following frequently cited definition of the literary genre of apocalypse in 1979: “‘Apocalypse’ is a genre of revelatory literature with a narrative framework, in which a revelation is mediated by an otherworldly being to a human recipient, disclosing a transcendent reality which is both temporal, insofar as it envisages eschatological salvation, and spatial, insofar as it involves another, supernatural world.” Texts associated with apocalypticism are characterized by an understanding that salvation from a hostile world depends on the disclosure of divine secrets. The only example of an apocalypse from the Hebrew Bible is the book of Daniel. Other well-known examples of apocalypses include the writings of Enoch and Jubilees and the traditions associated with them, 4 Ezra, 2 Baruch, 3 Baruch, and the Apocalypse of Abraham. Some texts from Qumran and the Dead Sea Scrolls present a worldview that is properly described as apocalyptic but do not qualify as examples of the literary genre (e.g., “Instruction on the Two Spirits” from the Community Rule text and the War Scroll). The last book in the New Testament, known as the Apocalypse of John, is an example of a Christian apocalypse. The canonicity of this book was not at first accepted in the East. The book is a record of the visions of John while he was exiled on the island of Patmos and possesses a prophetic authority among Christian communities throughout history. Highly symbolic language, the presumption of a cataclysmic battle, and the disclosure of heavenly secrets to a privileged intermediary make this text a classic example of the genre. Other examples of Christian apocalypse outside the Bible include the Ascension of Isaiah and the Apocalypse of Paul. 
See also Babylon, later periods; Christianity, early; Fertile Crescent; Hellenization; Homeric epics; Judaism, early (heterodoxies); messianism; Persian myth; prophets; Pseudepigrapha and the Apocrypha; Solomon; wisdom literature; Zoroastrianism. Further reading: Collins, J. J. “Introduction: Towards the Morphology of a Genre.” Semeia 14 (1979): 1–19; ———. Apocalypticism in the Dead Sea Scrolls. New York: Routledge, 1997; Hanson, P. D. The Dawn of Apocalyptic. Philadelphia: Fortress, 1975; VanderKam, J. C., and W. Adler, eds. The Jewish Apocalyptic Heritage in Early Christianity. Minneapolis: Fortress Press, 1996; Yarbro Collins, A. The Combat Myth in the Book of Revelation. Missoula, MT: Scholars Press, 1976. Angela Kim Harkins

Apostles, Twelve Edit

The word disciple is used most often in Greek philosophical circles to describe a committed follower of a master (such as Socrates). Jesus (Christ) of Nazareth had many such disciples, besides the 12 who became the apostles of the church. For example, Luke 6:13 hints at the existence of a larger circle of disciples: “And when it was day, he called his disciples, and chose from them 12, whom he named apostles.” Among the disciples who were not chosen as the 12 were women. This is noteworthy because few masters in the time of Jesus had female disciples. Beyond these disciples, many men and women were drawn to Jesus and followed him casually. The Gospels call them “crowds.” Jesus shared with the disciples thoughts that were kept from the crowds. For example, according to Mark, after Jesus had finished telling parables to the crowds, the disciples came to Jesus to learn their hidden meanings. The reason for this private tutoring was that the disciples were expected to develop ears and eyes to discern the true and deeper meaning of Jesus’ teachings. The 12 who were chosen, however, followed Jesus even more fully than the other disciples by leaving behind everything they had, including their jobs and families. The 12 were allowed to witness private details of Jesus’ life not available to the other disciples. For example, only the 12 were with Jesus on the night of his arrest. According to the synoptic Gospels and Acts, the names of the 12 were Simon Peter; James, son of Zebedee; John; Andrew; Philip; Bartholomew; Matthew; Thomas; James, son of Alphaeus; Thaddaeus (Judas); Simon the Cananaean; and Judas Iscariot, who betrayed Jesus. Unlike the other names, Simon Peter, Philip, and James, son of Alphaeus, consistently occupy the same positions (first, fifth, and ninth, respectively) on the list. 
Based on this observation, it has been suggested that the 12 were organized into groups of four and that Peter, Philip, and James, son of Alphaeus, were their group leaders. This intriguing suggestion, however, has no hard evidence for support. As far as we know, the 12 were all from Galilee. Peter, Andrew, James, and John were fishermen, who, except perhaps Andrew, constituted the innermost circle of Jesus’ apostles. Simon Peter was the undisputed leader of the 12. Andrew was his brother and introduced him to Jesus. According to tradition, Andrew preached in Greece, Asia Minor (Turkey), and the areas north and northwest of the Black Sea. Tradition claims that he was martyred in Patras. A late tradition claims him to be the founder of the church of Constantinople, the seat of the Greek Church. James and John, sons of Zebedee, were also brothers. Possessors of a fiery temper and ambition, they asked Jesus to appoint them to sit at his left and right hand when his kingdom came. James (known also as James the Greater to distinguish him from James, son of Alphaeus) became the first of the apostles to be martyred under Herod Agrippa I. According to tradition, James had preached in Spain before meeting his untimely death in Jerusalem. As for John, tradition claims that he was the beloved disciple who wrote the Gospel of John, the three Epistles of John, and possibly also the book of Revelation. Tradition also claims that John, having survived a boiling cauldron of oil and banishment to Patmos under Emperor Domitian for preaching the Gospel in Asia Minor, died a natural death in Ephesus in the company of Mary, mother of Jesus. Modern critical scholarship rejects most of these claims. Philip is best remembered in the New Testament for introducing Nathaniel to Jesus and for asking Jesus to show him the Father. According to tradition, Philip’s ministry and martyrdom took place in Asia Minor. Not much is known about Bartholomew in the New Testament. 
According to tradition, he is the same person as Nathaniel in John 1:43–51, the man whom Jesus said was without guile. Tradition claims Bartholomew preached in Armenia and India, among other places. Thomas, known also as Didymus (Twin), is best remembered as the cynical doubter who wanted to touch the scars on the hands and the body of the resurrected Jesus. Thomas is a prominent figure in the Syriac culture and church, and according to tradition, he preached in India, where he was martyred. He is also credited with the Gospel of Thomas (reportedly of the Gnostics), which some scholars date to the middle of the first century c.e. Matthew was a tax collector who, according to ancient tradition, was the writer of the Gospel of Matthew. Many scholars reject this tradition, largely because of Matthew’s apparent literary dependence on Mark. The New Testament gives virtually no information about James, son of Alphaeus (known also as James the Lesser). James and Matthew would be brothers if Matthew is Levi, who is also called son of Alphaeus in Mark 2:14. Tradition makes the questionable claim that James the Lesser was a cousin of Jesus. According to one tradition, he preached in Palestine and Egypt, but according to another, he preached in Persia. Thaddaeus (of Mark 3) is probably the same figure as Judas, son of James (of Luke 6 and Acts 1). Not much is known in the New Testament about this man. According to tradition, he preached in Armenia, Syria, and Persia. In some manuscripts, his name appears as Labbaeus. Simon the Cananaean is also called Simon the Zealot. It is unclear whether he was a militant type. According to some tradition, his missionary zeal took him to North Africa, Armenia, and possibly even Britain. Judas Iscariot, the treasurer for the 12, betrayed Jesus to the Jewish authorities who were seeking to kill him. According to Matthew, Judas hanged himself afterward from guilt. 
After the death of Jesus, Matthias, a man about whom nothing is known in the New Testament except the name, replaced Judas. According to Armenian tradition, however, Matthias evangelized Armenia alongside Andrew, Bartholomew, Thaddaeus, and Simon the Cananaean. The fact that the disciples of Jesus felt compelled to replace Judas Iscariot with Matthias to complete the number 12 seems to indicate that the 12 were believed to be the heads of a newly constituted Israel. Simon Peter is also referred to as Cephas in Paul and John. It is perhaps his unaffected humanity, accompanied by unrefined manners, that endeared him to Jesus and the rest of the group. He appears to have been the spokesman for the 12. For example, on the night Jesus was transfigured, he offered to build huts for Jesus as well as Elijah and Moses, who had come to visit Jesus. The leadership of the church, however, eventually appears to have gone to James, the brother of Jesus. According to ancient tradition, Peter went to Rome, which eventually became the seat of the Latin Church, and preached there and died a martyr, crucified upside down. See also Christianity, early; Herods. Further reading: Goodspeed, Edgar J. The Twelve: The Story of Christ’s Apostles. Philadelphia: John C. Winston Co., 1957; Wilkens, M. J. “Disciples.” In J. B. Green, S. McKnight, and I. Howard Marshall, eds. Dictionary of Jesus and the Gospels. Downers Grove, IL: InterVarsity Press, 1992. P. Richard Choi

Arabia, pre-Islamic Edit

Arabia, which spans an area of 1.25 million sq. miles, is a rugged, arid, and inhospitable terrain. It consists mainly of a vast desert, with the exception of Yemen on the southeastern tip, a fertile region with ample rain and well suited for agriculture. The southwestern region of Arabia also has a climate conducive to agriculture. The first mention of the inhabitants of Arabia, or “Aribi,” appears in the ninth century b.c.e., in Assyrian inscriptions. The residents of northern Arabia were nomads who owned camels. In pre-Islamic Arabia, there was no central political authority, nor was there any central ruling administrative center. Instead, there were only various Bedu (Bedouin) tribes. Individual members of a tribe were loyal to their tribe, rather than to their families. The Bedu formed nomadic tribes who moved from place to place in order to find green pastures for their camels, sheep, and goats. Oases can be found along the perimeter of the desert, providing water for some plants to grow, especially the ubiquitous date palm. Since there was a constant shortage of green pastures for their herds to graze in, the tribes often fought one another over the little fertile land available within Arabia, made possible by the occasional desert springs. Since warfare was a part of everyday life, all men within the tribes had to train as warriors. By the seventh century b.c.e. Arabia was divided into about five kingdoms, namely the Ma’in, Saba, Qataban, Hadramaut, and Qahtan. These civilizations were built upon a system of agriculture, especially in southern Arabia, where the wet climate and fertile soil were suitable for cultivation. Of the five kingdoms Saba was the most powerful and most developed. By 300 c.e. the kings of Saba had consolidated the rest of the kingdoms. Inhabitants of northern Arabia spoke Arabic, while those in the south spoke Sabaic, another Semitic language. 
As Yemen lay along a major trade route, many merchants from the Indian Ocean trade passed through south Arabia. The south was therefore more dominant for more than a millennium, as it was more economically successful and contributed much to the wealth of Arabia as a whole. By the seventh century b.c.e. the oases in Arabia had developed into urban trading centers for the lucrative caravan trade. The agricultural base of Arabia contributed to its economy, enabling inhabitants to pursue trade in luxury goods alongside an ongoing agrarian economy. The commercial network in Arabia was facilitated mainly by the caravan trade in Yemen, where goods from the Indian Ocean Basin in the south were transferred onto camel caravans, which then traveled to Damascus and Gaza. Arabia dealt in the profitable products of the day—gold, frankincense, and myrrh, as well as other luxury goods. The role of the Bedu, likewise, evolved. Instead of just being military warriors engaged in tribal rivalries, they were now part of the caravan trade, serving as guardians and guides while caravans traveled within Arabia. These Bedu were different from other nomadic tribes, as they tended to settle in one place. Assyrians, followed by the neo-Babylonians and the Persians, disturbed unity in Arabia. From the third century c.e. the Persian Sassanids and the Christian Byzantines fought over Arabia. Later on, just before the rise of Islam, there emerged two Christian Arab tribal confederations known as the Ghassanids and the Lakhmids. The city of Petra in northwest Arabia was under the control of the Byzantines (through the Ghassanids), followed by the Romans, while the northeastern city of Hira fell under Persian influence (the Lakhmids). Under the Lakhmid and Ghassanid dynasties Arab identity developed, as did the Arab language. 
The central place of worship for the nomadic Bedu tribes was the Ka'ba, a cubic structure in the city of Mecca housing a black stone believed to be a piece of meteorite. The Ka'ba was the site of an annual pilgrimage in pre-Islamic Arabia. According to tradition, Abraham first laid the foundations of the Ka'ba. Over the following millennium the function of the Ka'ba changed drastically, and just before the coming of Islam through Muhammad, idols were found within the shrine; the Bedu prayed to the idols of the different gods found within. Although the various nomadic Bedu tribes often formed warring factions, within the sacred space of the Ka'ba tribal rivalries were often put aside out of respect for the place of worship. Mecca became a religious sanctuary and a neutral ground where tribal warfare was put on hold.

By the seventh century c.e., besides being an important religious site, the city of Mecca was also a significant commercial center of the caravan trade, owing to the rise of south Arabia as a mercantile hub, and merchants of different origins converged in the city. Just before the rise of Islam, the elite merchants of the Quraysh tribe led Mecca loosely, although it was still difficult to discern a clear form of authoritative government there. Mecca, like southern Arabia, was home to many different peoples of various faiths. Because of Arabia's strategic location along the merchant trade routes of the Red Sea and the Indian Ocean, different groups had settled there, especially in the coastal regions of Yemen, where a rich variety of religions originating in India, Africa, and the rest of the Middle East had coexisted. Among these settlers were Jews, Christians, and Zoroastrians who had migrated from the surrounding regions. These migrants were markedly different from the indigenous inhabitants of Arabia in that they adhered to monotheistic faiths, recognizing and worshipping only one God.
Thus, the inhabitants of pre-Islamic Arabia were familiar with other monotheistic faiths prior to the coming of Islam; nevertheless, subsequent Muslim society would refer to those living in pre-Islamic Arabia as living in jahiliyya, or "ignorance." See also Sassanid Empire. Further reading: Cleveland, William L. A History of the Modern Middle East. Boulder, CO: Westview Press, 2000; Inamdar, Subhash C. Muhammad and the Rise of Islam: The Creation of Group Identity. Madison, WI: Psychosocial Press, 2001; Mantran, Robert. Great Dates in Islamic History. New York: Facts On File, 1996; Von Grunebaum, Gustav E. Classical Islam: A History, 600 A.D. to 1258 A.D. Somerset, UK: Transaction Publishers, 2005. Nurfadzilah Yahaya

AramaeansEdit

The Aramaeans interest historians because of the two sources of information about them, the biblical and the archaeological, and part of the challenge in understanding the Aramaeans lies in the effort to link the two sets of data. According to the biblical source, the people of ancient Israel and Judah considered themselves ethnic Aramaeans who became a distinct religious group as a result of their experience in Egypt. According to the archaeological source, the Aramaeans were a people who experienced the brunt of Assyrian aggression in the 12th century b.c.e. The 1993 discovery of the Tel Dan Stela, an Aramaic-language stone inscription that mentions Israel and David and apparently was written by Hazael, the king of Aram and the greatest Aramaean warrior, brings these two strands together in a historical and religious debate.

ARCHAEOLOGICAL EVIDENCE

The historian is faced with the dilemma of determining when this people first came into existence versus when there is a historical written record about them. The Aramaeans presumably were a West Semitic–speaking people who lived in the Syrian and Upper Mesopotamian region along the Habur River and the Middle Euphrates for the bulk of the second millennium b.c.e., if not earlier. Their first incontestable appearance in the written record occurred when the Assyrian king Tiglath-pileser I (1114–1076 b.c.e.) claimed to have defeated them numerous times. They may well be connected to the Amorites, who had previously been in that area before spreading out across the ancient Near East, just as the Aramaeans would do 1,000 years later.

The early stages of Aramaean history are known not through their own writings but from what others wrote about them. When the Assyrian Empire went into decline, the Assyrian references to the Aramaeans ceased. Presumably they continued to be the primarily pastoral people the Assyrians had first encountered and lacked the urban-based political structure of the major powers of the region.
They used this time to establish themselves in a series of small polities centering in modern Syria. The void in the record changed in 853 b.c.e. when, thanks to the Assyrians, the Aramaeans again appear in a historical inscription. They do so in the records of Shalmaneser III (858–824 b.c.e.), an Assyrian king who sought repeatedly to extend his empire to the west all the way to the Mediterranean Sea. His primary obstacle to achieving this goal was a coalition of peoples including Arabs, Egyptians, Israelites, and Aramaeans. According to the Assyrian inscriptions, it was Hadad-idr (Hadad-ezer, c. 880–843 b.c.e.) of Aram who led the coalition. The king was named after the leading deity of the Aramaeans, Hadad, the storm god. That deity is probably better known as Baal, a title meaning “lord,” than by his actual name. Shalmaneser tried again in 849, 848, and 845 b.c.e. to no avail. At that point the coalition crumbled, enabling Shalmaneser to focus on the new ruler of Aram, Hazael (c. 843–803 b.c.e.), a “son of a nobody” (meaning a usurper). Even though Hazael now stood alone, Assyria was unable to prevail in 841, 838, and 837 b.c.e. Shalmaneser then stopped trying. The withdrawal of Assyria from the land provided Hazael with the opportunity to expand his own rule. His success produced the pinnacle of Aramaean political power during the remaining years of the ninth century b.c.e. Hazael’s stature in the ancient Near East is attested by the Assyrian use of “House of Hazael” for the Aramaean kingdom in the eighth century b.c.e., and later Jewish historian Josephus’s discussion of Hazael’s legacy in Damascus in the first century c.e. Eventually Assyria did prevail over Aram. Around 803 b.c.e. Adad-nirari III (810–783 b.c.e.) attacked Aram and its new king, Ben-Hadad (c. 803–775 b.c.e.), the son of Hazael. The weakening of Aram aided Israel, which enjoyed resurgence during the first half of the eighth century b.c.e. 
The political life of the Aramaeans soon ended when Tiglath-pileser III (745–727 b.c.e.) absorbed all the Aramaean states into the Assyrian Empire. In a great irony of history, the Assyrians required a more flexible and accessible language through which to govern their multi-peopled empire, and their cuneiform script was inadequate for the task. Centuries earlier, perhaps around 1100 b.c.e., the Aramaeans had adopted the 22-letter Phoenician alphabet. Following the Assyrian conquest of the Aramaeans, the latter's language was accorded special status within the empire and then became the lingua franca of the realm. Its usage continued for centuries, including among the Jews.

BIBLICAL EVIDENCE

The writers of the Jewish Bible were of mixed opinion concerning the origin of the Aramaeans. In some biblical translations they appear as Syrians, reflecting the Greek-derived name for their land, a name that continues to be used to this very day. In Genesis 10:22, Aram is a grandson of Noah and son of Shem. This genealogy puts the Aramaean people in Syria on par with the Elamites (in modern Iran) and the Assyrians (in modern Iraq). By contrast, in Genesis 22:20–21 the Aramaeans are grandsons of Abraham's brother Nahor and thus comparable to Jacob, the grandson of Abraham. In Amos 9:7, the Aramaeans had their own exodus relationship with Yahweh from Kir (sometimes spelled Qir), west of the Middle Euphrates, just as Israel had had from Egypt under Moses.

Just as the archaeological record of the Aramaeans contains information involving Israel not found in the Bible, the Bible contains information about the Aramaeans during a time of minimal archaeological information about them. Biblical scholarship has struggled to integrate the archaeological and biblical data into a single story. Examples of points of contention include:

1. Do the references to the Aramaeans in the stories of the biblical Patriarchs better fit the circumstances of the 10th century b.c.e., in the time of David and Solomon?
2. What was David's relationship with the Aramaeans, particularly as recounted in II Samuel 8 and 10?

3. What was the Israelite king Ahab's relationship with the Aramaeans, particularly as recounted in I Kings 20 and 22?

4. What was Hazael's relationship with Israel during the Jehu dynasty, given the contrasting comments by the Israelite prophet Elijah in I Kings 19:15–17 and his successor the prophet Elisha in II Kings 8:8–29? According to the biblical text, Elisha was right to weep when he named Hazael king of Aram, given the devastation the new king would wreak on Israel (see II Kings 10:32, 12:17–18, 13:3). These biblical accounts do agree with the Assyrian account that Hazael was not heir to the throne.

5. What is the solution to the double murder mystery of the Israelite king Jehoram and the Judahite king Ahaziah: Was the murderer the Israelite usurper Jehu (II Kings 9–10) or the Aramaean king Hazael (Tel Dan Stela)?

According to the biblical record, during the last century of Aram's existence, Ramot Gilead in the Transjordan and the northern Galilee appear to have been a continual source of contention between Israel and Damascus. The biblical accounts in II Kings describe the ebb and flow of ownership of the land, with Hazael representing the pinnacle of Aramaean conquest and Jeroboam II (c. 782–748 b.c.e.) the height of Israelite success. During this time Assyria occasionally ventured into this arena, generally to attack Aram, indirectly benefiting Israel. All this political maneuvering came to an end when Tiglath-pileser III ended the independent political existence of Aram in 732 b.c.e. Just over a decade later Israel fell to the Assyrians. See also Bible translations; Elam; Syriac culture and church. Further reading: Dion, Paul E. "Aramaean Tribes and Nations of First-Millennium Western Asia." In Jack M. Sasson, ed. Civilizations of the Ancient Near East. New York: Charles Scribner's Sons, 1995; Pitard, Wayne. "Aramaeans." In Alfred J.
Hoerth, et al., eds. Peoples of the Old Testament World. Grand Rapids, MI: Baker Books, 1998; ———. Ancient Damascus. Winona Lake, IN: Eisenbrauns, 1987. Peter Feinman

Archaic GreeceEdit

The Archaic Period in Greek history (c. 700–500 b.c.e.) laid the groundwork for the political, economic, artistic, and philosophical achievements of the Classical Period. Perhaps one of the greatest gifts of the ancient Greeks to Western civilization was the beginning of democratic government and philosophy.

The seventh century b.c.e. witnessed the decline of the old aristocratic order that had dominated Greek politics and the rise of the tyrant. For the Greeks the term tyrant referred to someone who had seized power through unconstitutional means. Tyrants were often accomplished men from aristocratic families who had fallen from political grace. They rode the tide of discontent and the demand for more opportunities spawned by population and economic growth to lead the charge against the old aristocracy. To help solidify their positions they often encouraged trade and business and sponsored ambitious building projects throughout their city-states. Tyrannies rarely lasted beyond the third generation, as the sons and grandsons of tyrants typically lacked the political skills and base of support enjoyed by their fathers and grandfathers.

The Archaic Period saw the continuation of the Greek migration that had begun late in the Greek Dark Ages. An increase in population and the resulting land shortage, combined with economic growth, primarily in trade, spurred the movement in search of new lands, colonies, and trading posts. This economic expansion brought the Greeks into extensive contact with other peoples and led to the development of Greek colonies throughout the Mediterranean, Ionia, and even the Black Sea region. The growing economic prosperity of the Archaic Period led to cultural changes, as city-states viewed building projects, particularly of temples, as expressions of their civic wealth and pride. During this period the Greeks used with greater frequency the more graceful Ionic style in their public buildings.
Colonization and trade had brought the Greeks into more frequent contact with other great civilizations, such as Egypt. Some scholars credit Egypt and its development of large columned halls with influencing the Greeks in their move toward monumental architecture. This move was further encouraged as stone replaced wood in public buildings such as temples and treasuries, and as the agora transformed from a public meeting site into a local marketplace. In addition to the use of the Ionic column, relief sculptures illustrating mythological scenes increasingly appeared on the pediments and entablatures of late sixth-century b.c.e. temples.

The seventh century b.c.e. saw the rise of lyric poetry, a song accompanied by a lyre. Unlike epic poetry (such as Homer's Iliad and Odyssey), lyric poetry is set in the present and tells of the interests and passions of the author. Lyric poetry provides us with a rare insight into the travails of an individual, as opposed to the epic sagas involving entire states. The poet Archilochus wrote a poem wishing harm to a man who had rejected the author as unsuitable for his daughter. Sappho, a poetess from the island of Lesbos, wrote a hymn to Aphrodite asking for assistance in a matter of love—her love for another woman. Both poems speak directly and passionately to the audience on matters of a very personal nature.

In this period the Greeks took the creation of a practical item, pottery, and turned it into such beautiful art that it spawned cheap imitations and demand for the pieces throughout the Mediterranean. Greek pottery in the seventh century b.c.e. was dominated by Corinthian ware and its portrayal of animal life. Athenian pottery and its portrayal of mythical themes rose to prominence in the sixth century b.c.e.
The same century also saw the shift from black figures engraved on a red background to red figures drawn on a black background, which allowed for more detail and movement in the figures. Perhaps the greatest contribution made to Western civilization by the Archaic Greeks was in the realm of ideas further developed during the Classical Period that continue to influence us, such as the search for a rational view of the universe, a "scientific" explanation for the world, and the birth of philosophy among the cosmologists of sixth-century b.c.e. Miletus. In addition, the Archaic Greeks bequeathed to humanity the concept of democratic government, wherein members of the polis (i.e., free men) enjoyed social liberty and freedom and willingly submitted to laws enacted directly by their fellow citizens. See also Greek Colonization; Greek Drama; Greek mythology and pantheon; Greek oratory and rhetoric. Further reading: Freeman, Charles. Egypt, Greece and Rome: Civilizations of the Ancient Mediterranean. Oxford: Oxford University Press, 1999; Perry, Marvin, ed. Western Civilization: Ideas, Politics, and Society. Boston: Houghton Mifflin, 2007; Pomeroy, Sarah B. Ancient Greece: A Political, Social, and Cultural History. Oxford: Oxford University Press, 1999. Abbe Allen DeBolt

ArianismEdit

Arianism receives its name from Arius, a Christian priest of Alexandria who taught that the Son of God, the second person of the Trinity, is not God in the same sense as the Father. He believed that the Son of God did exist before time, but that the Father created him and that the Son of God is therefore not eternal like the Father. Arius was accustomed to say of the Son of God: "There was a time when he was not." When his bishop, Alexander, opposed him, Arius took his case to Eusebius, bishop of Nicomedia, who had the ear of Emperor Constantine the Great. In order to put an end to the disputes that arose because of Arius's teaching, Constantine called for a general council, which met at Nicaea in 325 c.e. Arius and his followers were condemned by the 318 bishops at Nicaea, who also drew up a creed laying down the orthodox view of the Trinity. Known as the Nicene Creed, it states that the Son of God is "God from God, Light from Light, True God from True God, begotten not made, consubstantial with the Father . . ." The term homoousios, used to express the idea that the Son of God is consubstantial, or of the "same substance," with the Father, became a rallying cry for the orthodox side, expressing the unity of nature between the Father and the Son of God.

The years following the Council of Nicaea were turbulent ones, in which many groups opposed the teaching of the council. The reason Arianism continued to exert influence after its condemnation was due in large part to the emperors of this period. Some were openly sympathetic to the heresy, while others—wanting political peace and unity in the empire—tried to force compromises that were unacceptable to those fighting for the Son of God's equality with the Father. Some bishops were orthodox in their understanding of the Son of God as truly God but were opposed to the word homoousios because they could not find it in scripture.
Others feared that the word smacked of Sabellianism—an earlier heresy that had made no ultimate distinction between the Father and the Son of God, holding that the divine persons were merely different modes of being God. The defender of the orthodox position was Athanasius, the successor to Alexander in the diocese of Alexandria. Athanasius vigorously opposed all forms of Arianism, teaching that the Son must be God in the fullest sense since he reunites us to God through his death on the cross. One who is not truly God, he argued, cannot bring us a share in the divine life. Athanasius went into exile five times for his indefatigable defense of Nicaea. A synod held under his presidency in Alexandria in 362 rallied together the orthodox side after clearing up misunderstandings due to terminology. This synod, along with the efforts of the Cappadocians, theologians who took up the banner of orthodoxy after Athanasius’s death, paved the way for the Council of Constantinople in 381, which reaffirmed the Nicene Creed and its condemnation of Arianism. See also Christianity, early; Ephesus and Chalcedon, Councils of; Greek Church; Latin Church. Further reading: Ayres, Lewis. Nicaea and Its Legacy: An Approach to Fourth-Century Trinitarian Theology. Oxford: Oxford University Press, 2004; Williams, Rowan. Arius: Heresy and Tradition. London: Darton, Longman and Todd, 1987. Gertrude Gillette

AristophanesEdit

(c. 450–c. 388 b.c.e.) Greek playwright

Aristophanes was a leading dramatist of ancient Athens and, owing to the quantity and quality of his works that have been preserved, is customarily recognized as the leading comic playwright of his society and age. Greek comic drama passed through two main phases, referred to as Old Comedy and New Comedy. The transition between the two stages included Middle Comedy, which is largely conjectural, although the last work of Aristophanes is often ascribed to this stage. Old Comedy featured a chorus, which commented on the action in verse and song, mime and burlesque, as well as a sense of ribaldry, broad political satire, and farce. New Comedy dispensed with the chorus and adopted more of a sense of social realism, although this is still relative. As a representative of the end of one phase, Aristophanes was working in a time of innovation and change, and as might be expected, his works excited both favorable and unfavorable comment.

The entire canon of Aristophanes' works is not known, but it is believed to have extended to perhaps 40 works, of which 11 have survived in partial or complete forms. His career coincided with the Peloponnesian War, and this formed the backdrop of many of his surviving major works. Aristophanes' most fantastical play is The Birds, which follows two Athenians who become so disaffected by life in their home city that they leave to establish a new one among the birds, called Cloud Cuckoo Land and suspended between heaven and earth. The Birds can be read as an attack on the rulers of Athens and the idea that people would be better off elsewhere. Acharnians is an earlier play, which more directly addresses the misery of war.
In Frogs the actions of the gods are explicitly brought into the sphere of humanity as Dionysus descends into hell to retrieve a famous tragedian to produce work that could enlighten the lives of the people of Athens, given the currently woeful state of that art in the city. See also Greek drama. Further reading: Aristophanes. Aristophanes: The Complete Plays. Trans. and ed. by Paul Roche. New York: Penguin, 2005; Bowie, A. W. Aristophanes: Myth, Ritual and Comedy. Cambridge: Cambridge University Press, 1996; Strauss, Leo. Socrates and Aristophanes. Chicago: University of Chicago Press, 1996. John Walsh

AristotleEdit

(384–322 b.c.e.) Greek philosopher

Aristotle is one of the greatest figures in the history of Western thought. In terms of the breadth and depth of his thought, together with the quality and nature of his analysis, his contribution to a variety of fields is almost unparalleled. His areas of investigation ranged from biology to ethics and from poetics to the categorization of knowledge.

Born in Stagira in northern Greece, the son of a doctor, he studied under Plato for 20 years, until Plato's death, and then left to travel to Asia Minor and the island of Lesbos. In about 342 b.c.e. he received a request from King Philip of Macedon to supervise the education of his son Alexander, who was 13 at that time. He consented and prepared to teach Alexander the superiority of Greek culture and the way in which a Homeric hero in the mold of Achilles should dominate the various barbarians to the east. Alexander went on to conquer much of the known world, although he ignored Aristotle's instruction to keep Greeks separate from barbarians, instead pursuing a policy of intermarriage and the adoption of eastern cultural institutions. Alexander proved to be an obstinate student, and Aristotle's influence on him was slight.

Once this tutelage was completed, Aristotle retired first to Stagira and then to Athens to establish his own academy, which became known as the Lyceum. He continued to be accompanied by former pupils of Plato such as Theophrastus. Aristotle wrote his most developed works at this time, but much of what has been passed down through the ages was subsequently edited, and much of his work gives the impression of containing interpolated material and other notes. His works were translated into Latin and Arabic and became immensely influential throughout the Western world. Aristotle departed Athens for the island of Euboea in 322 b.c.e. and died that year.
SCIENTIFIC WORKS

At the basis of Aristotle's works is his close observation of the world and his astoundingly powerful attempts to understand and reconcile the nature of observed phenomena with what might be expected. This is perhaps most easily witnessed in Aristotle's scientific works, including the Meteorologica, On the Movement of Animals, and On Sleep and Sleeplessness. Aristotle's works were deeply rooted in the real world, since the establishment of fact is central to the inquiry. This is the strand of Aristotle's work that was later developed by scholars such as Roger Bacon and the early scientific experimenters.

CATEGORIES

Aristotle's classification of all material phenomena into categories is contained in his work of the same name. According to this method, everything could be classified either as a substance or as something predicated of an individual substance; the latter are considered to be qualities rather than essential parts of substance. The ways in which Aristotle organized these categories do not always appear intuitively correct, which reflects differences in methods of thinking and language.

He also distinguished between form and matter. Form is a specific configuration of matter, which is the basis or substance of all physical things. Iron is a substance or representation of matter, for example, which can be made into a sword: The sword is a potential quality of iron, just as a child is potentially a fully grown person. It is in the nature of some matter, therefore, to emerge in a particular form. Form that could be said to emerge from no matter at all would be god.

Whether one thing is itself or another thing depends on the four causes of the universe. The material cause explains what a thing is and what is its substance; the final cause explains the purpose or reason for the object; the formal cause defines it in a specific physical form; and the efficient cause explains how it came into existence.
According to Aristotle's thinking, all physical items can be explained and accounted for fully by reference to these four causes. In a similar way his exposition of the syllogism in all its possible forms, and his definition of which of these are valid and to what extent, are an effort to establish a system that is inclusive and universal and is both elegant and parsimonious in construction. The syllogism is Aristotle's principal contribution to the study of logic.

POETICS

Aristotle's methods enabled him to make a number of influential contributions to language and to discourse. His Sophistical Refutations, for example, analyzes the use of language to identify the forms of argument that are valid and to discard false or disreputable discourse aimed at winning an argument rather than seeking the truth. Aristotle, like Socrates and Plato before him, was convinced of the primacy of the search for truth, no matter how uncomfortable this might prove to be. This placed him in occasional conflict with the Sophists, who were more willing to teach pupils to use philosophical discourse for self-advancement. Aristotle's Posterior Analytics was aimed at determining the extent to which scientific reasoning rested on appropriately considered and evaluated premises that flow properly from suitable first principles. He applied the same rigorous approach to his examination of the Athenian polis and also to the study of tragedy in the Poetics.

The Poetics remains one of Aristotle's most influential works. It aims to outline the various categories of plot and the chains of cause and event that are appropriate for the stage, and the ways in which the various elements of theater should interact.
His conception of the properly tragic character as one whose inevitable downfall is brought about by a character flaw, with the peripeteia, or reversal of fortune, as the plot device by which this was most commonly brought about, dominated the production of drama until the modern age.

ARISTOTELIANISM

A number of prominent scholars and thinkers of the medieval age, called Aristotelians, seized upon Aristotle's methods. From the time of Porphyry (c. 234–305 c.e.), the Aristotelian method of analysis was used as a weapon to attack Christianity. This raised a theme that recurred numerous times throughout western Europe, particularly in the subsequently developed universities. While Arabic scholars generally saw no problem in utilizing the dialectical method as a tool in helping to understand the ways in which the physical universe worked, those from Christian countries faced opposition when Aristotelian thought was classified as irreligious or blasphemous. This was determined by the prevailing political and religious environment and meant that some scholars were able to avail themselves of Aristotelian thought quite freely, while others were constrained from doing so, and their insights were lost to history. Among the former are, notably, Thomas Aquinas (1225–74 c.e.), whose writings investigated the canon of Aristotle with considerable intensity and clarity. Albertus Magnus (c. 1200–80 c.e.), an important tutor of Aquinas, achieved a great deal in integrating Aristotelian thought and methods into the mainstream of Christian thought as responsible philosophical inquiry. Together with Roger Bacon (c. 1220–92 c.e.), the Aristotelians made progress toward the experimental science that would eventually flourish with the scientific method.
In the Islamic world Aristotelianism is perhaps best known in the person of Ibn Sina (980–1037 c.e.), the Persian physician and philosopher whose ideas perhaps came closest among all Muslim thinkers to uniting Islamic belief with the philosophy of Plato and Aristotle. Ibn Sina shared Aristotle's devotion to the systematic examination of natural phenomena, and his support for logical determinism brought him into conflict with religious authorities. His religious beliefs tended toward the mystic, possibly as a means of resolving the difficulties inherent in the gap between observable and comprehensible phenomena and divine revelation. The eastern part of the Islamic world had enjoyed the infusion of ideas from the Hellenistic tradition for some centuries and so was better able to integrate such concepts peaceably than were, for example, the western Islamic states of the Iberian Peninsula. Consequently the beneficial impact of Aristotle's protoscientific method may be discerned in many of the scholarly works of the medieval Islamic world. This also provided a route by which ideas could be transmitted further east. See also Platonism; sophism. Further reading: Aristotle. The Complete Works of Aristotle: The Revised Oxford Translation. Ed. by J. Barnes. Princeton, NJ: Princeton University Press, 1995; Aquinas, Thomas. Summa Theologica. Trans. by the Fathers of the English Dominican Province. Christian Classics, 1981; Bernays, Jacob. "On Catharsis." American Imago (v.61/3, 2004); Broadie, Sarah. "Virtue and Beyond in Plato and Aristotle." Southern Journal of Philosophy (v.43, 2005); Clegg, Brian. The First Scientist: A Life of Roger Bacon. New York: Carroll and Graf Publishers, 2003; Halliwell, Stephen. Aristotle's Poetics. Chicago: University of Chicago Press, 1998; Morewedge, Parviz. Metaphysica of Avicenna. New York: Global Scholarly Publications, 2003; Shiffman, Mark.
“Shaping the Language of Inquiry: Aristotle’s Transformation of the Meanings of Thaumaston.” Epoche: A Journal for the History of Philosophy (v.10/1, 2005); Weishepl, James A., ed. Albertus Magnus and the Sciences: Commemorative Essays. Toronto, Canada: Pontifical Institute of Medieval Studies, 1980. John Walsh

Ark of the CovenantEdit

The political and cult symbol of Israel before the destruction of the Temple was the Ark of the Covenant. This cult object was constantly found with the Israelites and treasured by them from the time of Moses until the time of the invasion of the Babylonians. It was a rectangular chest made of acacia wood, measuring 4 feet long by 2.5 feet wide by 2.5 feet high. The Ark was decorated and protected with gold plating and carried by poles inserted in rings at the four lower corners. There was a lid (Hebrew: kipporet, “mercy seat” or “propitiatory”) for the top of the Ark, and perched on top of the monument were two golden angels or cherubs at either end with their wings covering the space over the Ark. The first interpretations about the Ark were simple: It was simply the repository for the stone tablets of laws that Moses received on Mount Sinai. It was housed in a tent and on pilgrimage alongside the children of Israel in the desert. Ancient peoples would preserve treaties or covenants in such a fashion. Soon, however, the Ark became charged with deeper latent powers and purposes. For one thing it was the place where the divine being would choose to make some revelation and communication with Israel. Moses would go there for his meetings with God. So the Ark became more than a receptacle for an agreement; God’s presence filled the Ark. A parallel to this notion is the qubbah, the shrine that Arab nomads carry with them for divination and direction as they search for campsites and water. In a similar way the Ark was a supernatural protection—called a palladium—that ensured that Israel would never lose in battle. In this sense many Near Eastern cities and nations often had some token of divine protection. Similarly, the Greeks often symbolized their military invincibility through divine emblems such as Athena’s breastplate in Athens and Artemis’s stone in Ephesus. When the Jerusalem temple was built under Solomon, the Ark took on a more complex meaning. 
Its meaning now had to take into account the kingdom of David and Solomon, the capital city of Jerusalem, and the rituals of temple and sacrifice. So the Ark became the throne, or at least the divine contact point, for God's rule over the world. The Ark was no longer housed in a tent; it had its own inner chamber. The angelic figures over the chest became a divine seat, or at least a footstool. Ancient artistic representations of this concept have been discovered in other cultures of the Fertile Crescent: Human or divine kings are often depicted sitting on a throne supported by winged creatures. The Ark disappeared from Jerusalem after the Babylonians invaded in the sixth century b.c.e., but it did not disappear from later popular imagination. Some believed that Jeremiah the prophet or King Josiah hid it; others, that angels came and took it to heaven; and to this day Ethiopian Christians believe that they have it safeguarded in their country. That the Ark might fall into godless hands was considered more catastrophic than the destruction of the Temple itself. Whatever its fate, Josephus reports that it was not present in the rebuilt temple of Herod. See also Babylon, later periods; Ethiopia, ancient; Greek mythology and pantheon. Further reading: De Vaux, Roland. Ancient Israel. Vol. 2: Religious Institutions. New York: McGraw-Hill, 1965; Price, Randall. In Search of the Ark of the Covenant. New York: Harvest House, 2005. Mark F. Whitters

Armenia

Located at the flashpoint between the Roman and Persian Empires, "Fortress Armenia" stretched through eastern Anatolia to the Zagros Mountains. Armenia was a kingdom established during the decline of Seleucid control; its independence ended with its incorporation into the Roman Empire in the third century c.e. The region was inhabited after the Neolithic Period, and high culture is attested from the Early Bronze Age. Urartu was an important regional power from the eighth to the sixth centuries b.c.e. Indo-Europeans arrived from western Anatolia in this period and formed a new civilization that was Armenian-speaking and based on the local culture. The conversion of Armenia to Christianity is associated with a number of stages or traditions. The most important was the work of Gregory Lusavorich, the "Illuminator" (d. 325 c.e.). Armenians greatly treasure their heritage as the first nation to convert officially to the Christian faith. Syriac Christianity influenced Armenia first: The Armenian version of the Abgar legend makes Abgar an Armenian king, and the evangelization of Addai is described as a mission to southern Armenia. The influence of Syriac literature and liturgy on Armenia remained strong even after Greek influence, primarily from Cappadocia, increased in the third century c.e. The Greek tradition states that Bartholomew was the apostle to the Armenians, though the Abgar/Addai legend is earlier than that of Bartholomew. The traditions of the female missionaries and martyrs Rhipsime and Gaiane are among the earliest accounts of the conversion of Armenia. Tertullian (c. 200 c.e.) also mentions that there were Christians in Armenia. The conversion of the royal house of Armenia dates officially to 301 c.e., predating the conversion of the Georgian king Gorgasali and the Ethiopian Menelik by a generation. In that year Gregory the Illuminator persuaded King Tiridates III (Trdat the Great, 252–330) to be baptized. 
Gregory is identified as the founder of the Christian Armenian nation and the organizer of the Armenian Church. Gregory founded Ejmiatsin, the mother cathedral of the Armenian Church, after an apparition of Jesus Christ, who descended from heaven at the site of a significant pagan temple (Ejmiatsin means "The Only-begotten Descended"). Gregory's original church was at Vagharshapat. The revelation to found the church at Ejmiatsin coincided with changing political circumstances. Politically, Armenians were always at the mercy of the great powers of Persia and Rome, and in 387 the Roman emperor Theodosius I and the Persian emperor Shapur agreed to partition Armenia, thus ending its independence. As the site of a dominical apparition, the place of Gregory's episcopal see, the residence of the Armenian Catholicoi, and the most important administrative center of the Armenian Church, Ejmiatsin is for Armenians a holy site on a par with the Church of the Anastasis (Resurrection) in Jerusalem or the Basilica of Bethlehem, where Jesus of Nazareth was born. The second most important event of the formative period of Armenian history was Mesrob Mashtots's invention of the Armenian alphabet (c. 400), which made possible the translation of the Bible and the liturgy into Armenian and a rapid introduction of Christian and classical works translated from Greek and Syriac. During the Christological controversies of the fifth and sixth centuries, the Armenian Apostolic Church rejected the decisions of the Council of Chalcedon (451) and remains to this day one of the non-Chalcedonian churches that adhere to a strict interpretation of Cyril of Alexandria's "one nature of the incarnate Logos" formula. 
For this reason, Armenians are often erroneously and polemically labeled "Monophysites." See also Cappadocians; Diadochi (Successors); Ephesus and Chalcedon, Councils of; Medes, Persians, and Elamites; Oriental Orthodox Churches; Roman Empire; Seleucid Empire; Syriac culture and church. Further reading: Garsoïan, Nina G. Church and Culture in Early Medieval Armenia. Brookfield, VT: Ashgate, 1999; Thomson, Robert W. Studies in Armenian Literature and Christianity. Brookfield, VT: Variorum, 1994. Robert R. Phenix, Jr.

Artaxerxes

(5th–4th centuries b.c.e.) Persian emperors
The Persian Empire reached its greatest strength under Darius I; under the reigns of the three Artaxerxes it declined, a decline that ended with Alexander the Great's conquests in 330 b.c.e. Artaxerxes I, third son of Emperor Xerxes I, acceded to the throne in 465 b.c.e. following the murder of his father and of his brother Darius, who was first in line to the throne. According to Josephus, the first-century c.e. Jewish historian, Artaxerxes' pre-throne name was Cyrus. The Greek biographer Plutarch (first–second century c.e.) adds that he was nicknamed "long-armed" because his right arm was longer than his left. Earlier kings of the Persian Empire, namely Cyrus II, Darius, and Xerxes, were discussed in the comprehensive works of the near-contemporary Greek historian Herodotus of Halicarnassus, but Herodotus's work did not cover much of Artaxerxes' reign, and none of the reigns of later kings.
ARTAXERXES I
The Bible refers to Artaxerxes explicitly in Ezra 4:7, in reference to a letter written by the Jews' enemies in Samaria. Both Ezra and Nehemiah, significant figures in the later history of the biblical Israelite people, arrived in Judah in Palestine to serve the Jews there during the reign of Artaxerxes. If this is accurate, then it was Artaxerxes for whom Nehemiah was cupbearer (Nehemiah 2:1), a position that gave him close access to the king, and it was he whom Nehemiah asked for permission to go to Jerusalem to oversee the rebuilding of the city walls. A. T. Olmstead in History of the Persian Empire argues that it was also Artaxerxes to whom Ezra went in 458 to ask permission to take a group of Jewish exiles back to Judaea in order to reestablish proper worship (Ezra 7:1, 8:1). During his reign Artaxerxes generally followed the administrative practices of his father Xerxes. 
However, it was increasingly clear that the empire, having reached its maximum extent under Darius I, Artaxerxes' grandfather, was weakening. Undoubtedly, a key cause was the high level of taxation, which was stripping the satrapies, the regions of the empire, of gold and silver, enriching Persia's vaults but fostering discontent among the king's subjects. In 460 ancient Egypt rebelled, drove out the Persian tax collectors, and requested aid from Athens. The Athenians, who were looking for a fight with Persia, sent a fleet, and by 459 nearly all of Egypt was in the hands of the rebel alliance. It was probably in this turbulent period that Ezra made his application to Artaxerxes to allow a contingent of Jews to organize the worship of the returned exiles in Judaea. The Jews of Babylonia were probably among the more loyal citizens, and since Persian policy supported organized religion, Ezra's appeal met with sympathetic ears. In the meantime Artaxerxes sent money to the Athenians' Greek rival, Sparta, in order to counter Athens's support of the Egyptian rebellion. Consequently, Athens was defeated at Tanagra (457), and with Judaea quieted, Artaxerxes sent his general Megabyzus at the head of a huge army down through the Levant to Egypt, taking back the country after a year and a half of siege. The resulting defeat left Athens severely weakened and demoralized. In 449 the Peace of Callias was concluded between Athens and Persia at Susa, in which the parties accepted the maintenance of the status quo in Asia Minor: Those Greek city-states that were in either party's control at the time of the treaty stayed under that party's control. A few years later the general Megabyzus resigned from the army, retired to the satrapy he governed, "The land beyond the River" (modern-day Israel, Lebanon, and Syria), and there led a revolt. 
Possibly it was the rebellious courage stirred up by Megabyzus's actions that led local authorities to pull down the Jerusalem walls lest there be another uprising. In 431 hostilities broke out between Athens and Sparta, beginning the long Peloponnesian War. Artaxerxes decided on a position of noninterference and made no effort to slow the course of events, ignoring the entreaties for support from both sides. Artaxerxes I died of natural causes toward the end of 424 b.c.e.
ARTAXERXES II
Artaxerxes II, the grandson of Artaxerxes I, acceded to the throne in March 404 b.c.e. on the death of his father, Darius II. The following year, however, his younger brother Cyrus began plotting his overthrow. Cyrus gathered an army, significantly including 10,000 Greek mercenaries, and marched east. Battle was finally joined in 401 against his brother's army at Cunaxa in central Mesopotamia, but despite initial success on Cyrus's part, his rashness led to a crucial mistake that resulted in his death, and Artaxerxes won the day. This notwithstanding, the Greek mercenaries were allowed to march the thousand miles home, since Artaxerxes was unwilling to engage them. This "March of the Ten Thousand" from the heart of Persian territory became a symbol of the internal weakness of the Persian Empire at that time. In 396 Sparta began a new war to take back control of the Greek cities of Asia Minor. While the Spartans played off one Persian satrap against another, Artaxerxes, aware of the empire's military weakness, used its vast wealth to buy an alliance with Athens, Sparta's local rival. The Athenians aided the strengthened Persian navy, successfully countering the Spartan threat, with the result that in 387–386 a peace was struck, which once again required Sparta to give up any claims to sovereignty over the Greek cities in Asia Minor. Egypt had revolted in 405 and remained independent from Persia throughout most of Artaxerxes' reign. 
In 374 Artaxerxes sent a force to retake Egypt. The attempt failed, reinforcing the impression that the central authority was weakening. With rebellion rife, the situation seemed to be slipping out of control and auguring the end of the empire. However, the rebels' Egyptian ally, Pharaoh Nekhtenebef, died unexpectedly in 360, leaving Egypt in chaos and the satraps of Asia Minor to face the wrath of the emperor alone. Rather than risk losing to the central authority, the rebels made peace with Artaxerxes, and many were in fact returned to their satrapies.
ARTAXERXES III
In 358 b.c.e., after a long and moderately successful tenure, though one rife with revolts, Artaxerxes II died. His son Ochus acceded to the throne, taking the name Artaxerxes III. Ochus's bloodthirsty reputation, possibly the worst in this regard of any of the Achaemenid kings, was compounded by the murder of all his relations, regardless of sex or age, soon after his accession. However, his ruthless ferocity did not stop revolts from rocking the empire. Ochus made a fresh attempt to take back Egypt in 351 but was repulsed, and this encouraged further rebellions in the western satrapies. In 339 Persia misplayed its hand with Athens by refusing Athenian aid against the rising power of Philip of Macedon. Persia took on Philip alone but failed to defeat him, and in 338 Philip took overlordship of the whole of Greece. Greece united under Philip proved impervious to Persian might, and within eight years Persepolis, the Persian royal capital, and the whole empire were to collapse at the hands of Philip's son, Alexander the Great. Ochus's physician, at the command of the powerful eunuch Bagoas, murdered Ochus, and Bagoas made Ochus's youngest son, Arses, king (338–336 b.c.e.). Arses attempted to kill the too-powerful Bagoas and was himself killed, allowing Darius III to become king. Darius survived until 330 b.c.e., when he was murdered by his own satrap Bessus while fleeing from Alexander. 
See also Babylon, later periods; Greek city-states; Herodotus, Thucydides, and Xenophon; Medes, Persians, and Elamites; Persepolis, Susa, and Ecbatana; Persian invasions; pharaoh. Further reading: Fensham, Charles. The Bible: Books of Nehemiah and Ezra. Grand Rapids, MI: Eerdmans, 1982; Olmstead, A. T. History of the Persian Empire. Chicago: University of Chicago Press, 1959; Yamauchi, Edwin M. Persia and the Bible. Grand Rapids, MI: Baker Book House, 1990. Andrew Pettman

Aryan invasion

The conquest and settlement of northern India by Indo-Europeans began c. 1500 b.c.e. The event marked the end of the Indus civilization and altered the civilization of the subcontinent. In ancient times seminomadic peoples lived in the steppe lands of Eurasia between the Caspian and Black Seas. They were light skinned and spoke languages that belong to the Indo-European or Indo-Aryan family. They were organized into patrilineal tribes, herded cattle, practiced farming, tamed horses and harnessed them to chariots, and used bronze weapons. For reasons that are not clear, the tribes split up around 2000 b.c.e. and began massive movements westward, southward, and southeastward to new lands, conquering, ruling over, and in time assimilating with the local populations. Those who settled in Europe became the ancestors of the Greeks, Latins, Celts, and Teutons. Others settled in Anatolia and became known as the Hittites. Another group settled in Iran (Iran is a cognate of the word Aryan). The most easterly group crossed the mountain passes of the Hindu Kush into the Indus River valley on the Indian subcontinent. Many tribes who called themselves Aryas (anglicized to Aryans) moved into India over several centuries. While there are several theories on the decline and fall of the Indus civilization, there is no doubt that the Indus cities were destroyed or abandoned around 1500 b.c.e., at about the same time that the newcomers began to settle in the Indus region. These newcomers lived in villages, in houses that did not endure. Thus, there are few archaeological remains in India from the protohistoric age between 1500 and 500 b.c.e. Historians must therefore rely in part on the literary traditions of the early Aryans for knowledge of the era. The earliest oral literature of the Aryans consisted of hymns and poems composed by priests to celebrate their gods and heroes and used in religious rites and sacrifices. They were finally written down c. 600 b.c.e., when writing came into use in India. 
This great collection of poems is called the Rig-Veda, and it is written in Sanskrit, an Indo-European language. Although primarily focused on religion, the Rig-Veda contains references to social matters and to epic battles that the invaders fought and won. Some of the gods may also be deified heroes. The Rig-Veda and other, later Vedas remain part of the living Hindu tradition of India. The Aryans were initially confined to the northwestern part of the Indian subcontinent but gradually spread across the north Indian plains to the Ganges River basin. By approximately 500 b.c.e. the entire northern part of the subcontinent had become part of the Aryan homeland, and Aryans dominated the earlier population. See also Vedic age. Further reading: Bryant, Edwin. The Quest for the Origins of Vedic Culture: The Indo-Aryan Migration Debate. Oxford: Oxford University Press, 2004; Sharma, Ram Sharan. Advent of the Aryans in India. New Delhi: Manohar Publications, 1999. Jiu-Hwa Lo Upshur

Ashoka

(269–232 b.c.e.) ruler and statesman
Ashoka (Asoka) was the third ruler of the Mauryan Empire. Under his long rule the empire that he inherited reached its zenith territorially and culturally; soon after his death the Mauryan Empire split up and ended. He is remembered as a great ruler in world history and the greatest ruler in India. Chandragupta Maurya founded the Mauryan dynasty in 326 b.c.e. Both he and his son Bindusara were successful warriors, unifying northern India and part of modern Afghanistan for the first time in history. Ashoka was not Bindusara's eldest son, and there is a gap of time between his father's death and his succession, due perhaps to war with his brothers. Ashoka continued to expand the empire by conquering southward. One war, against Kalinga in the southeast, was particularly bloody and filled him with remorse. As a result he converted to Buddhism (from Vedic Hinduism) and renounced war as an instrument of policy. He became a vegetarian, prohibited the killing of some animals, and discouraged hunting, urging people to go on pilgrimages instead. He also built many shrines in places associated with Buddha's life. However, he honored all religions and holy men. Ashoka's son and daughter became Buddhist missionaries to Ceylon (present-day Sri Lanka); Indian missionaries to the island also brought the people the advanced arts and technology of India. Around 240 b.c.e. he called the Third Buddhist Council at Pataliputra, his capital city, which completed the Buddhist canons and dealt with differences among the monastic orders. A great deal is known about the personality and policy of Ashoka because he ordered many of his edicts, laws, and pronouncements engraved on stone pillars and rock surfaces throughout his empire and ordered his officials to read them to the public periodically as instruction. 
Most of the surviving inscriptions use the Brahmi script, precursor of the Devanagari script in which modern Hindi is written, but some are in other languages, depending on the vernacular of the district. Ten inscribed pillars survive. Different animals associated with Buddhism adorned the capital of each pillar; the one with lions (the roar of a lion, heard far and wide, symbolized the importance of the Buddha's teaching) is the symbol of modern India. Ashoka called the people of the empire his children and said: "At all times, whether I am eating, or in the women's apartments . . . everywhere reporters are posted so that they may inform me of the people's business. . . . For I regard the welfare of the people as my chief duty." Ashoka lightened the laws against criminals, though he did not abolish the death penalty. He also exhorted his people to practice virtue, be honest, obey parents, and be generous to servants. He forbade some amusements as immoral and appointed morality officers to enforce proper conduct among officials and the people, allowing them even to pry into the households of his relatives. Little is known of his last years. It is also unclear who succeeded him; some sources even say that he was deposed around 232 b.c.e. In any case the Mauryan Empire soon fell into chaos and collapsed. History honors Ashoka as a remarkable man and a great king. Present-day India has taken as symbols of the nation the lions and the wheel of Buddha's law that adorned the capital of one of his inscribed pillars. See also Megasthenes. Further reading: Bhandarkar, D. R. Asoka. Calcutta, India: University of Calcutta Press, 1969; Dutt, Romesh Chander. A History of Civilization in Ancient India Based on Sanskrit Literature. New Delhi, India: Cosmo Publishers, 2000; Gokhale, Balkrishna Govind. Asoka Maurya. New York: Twayne Publishers, 1966. Jiu-Hwa Lo Upshur

Assyria

The country of Assyria encompassed the north of Mesopotamia and was made up of city-states that were politically unified after the middle of the second millennium b.c.e. Assyria derived its name from the city-state Ashur (Assur). This city was subject to the Agade king Manishtushu and the Ur III king Amar-Sin. During the Ur III period, Ashur also appears as the name of the city's patron deity. Scholars have suggested that the god derived his name from the city and, indeed, may even represent the religious idealization of the city's political power.
The Old Assyrian period
The Old Assyrian period (c. 2000–1750 b.c.e.) began when the city of Ashur regained its independence. Its royal building inscriptions are the first attested writing in Old Assyrian, an Akkadian dialect distinct from the Old Babylonian then used in southern Mesopotamia. This period also saw the institution of the limmu, whereby each year was named after an Assyrian official, selected by the casting of lots. The sequence of limmu names is not continuous for the second millennium b.c.e. but has been completely preserved for the first millennium b.c.e. A solar eclipse recorded in the limmu lists has been dated astronomically to 763 b.c.e. and thus provides a fixed point for Assyrian chronology and, by means of synchronisms, for much of ancient Near Eastern history. During the Old Assyrian period Ashur engaged extensively in long-distance trade, establishing merchant colonies at Kanesh and other Anatolian cities. Ashur imported tin from Iran and textiles from Babylonia and, in turn, exported them to Kanesh. Due to political upheavals, Kanesh was eventually destroyed, and Assyria's Anatolian trade was disrupted. Before this disaster, moreover, Ashur itself had been incorporated into the growing empire of Eshnunna. Around the end of the 19th century b.c.e., the Amorite Shamshi-Adad I attacked the Eshnunna empire and conquered the cities of Ekallatum, Ashur, and Shekhna (renamed Shubat-Enlil). 
With the defeat of Mari in 1796 b.c.e., Shamshi-Adad could rightfully boast that he "united the land between the Tigris and the Euphrates" in northern Mesopotamia. The Assyrian King List was manipulated so as to include Shamshi-Adad in the line of native rulers, despite his foreign origins. In the new empire Shamshi-Adad reigned as "Great King" in Shubat-Enlil, delegating his elder son, Ishme-Dagan, as "king of Ekallatum" and his younger son, Yasmah-Adad, as "king of Mari." Government officials were frequently interchanged among the three courts. This mobility had the effect of homogenizing administrative practices throughout the kingdom, as well as creating loyalty to the central administration instead of to native territories. Shamshi-Adad's empire, however, did not long survive him. A native ruler, Zimri-Lim, reclaimed Mari, and King Hammurabi of Babylon eventually subjugated northern cities such as Ashur and Nineveh. The four centuries after Ishme-Dagan are referred to as a "dark age," when historical records are scarce. During this time the kingdom of Mitanni was founded. As it expanded its territory in northern Mesopotamia, the city-states once united under Shamshi-Adad became separate political units.
The Middle Assyrian kingdom
The Middle Assyrian kingdom (1363–934 b.c.e.) began when Ashur-uballit I threw off the Mitannian yoke. Whereas former rulers had identified themselves with the city of Ashur, Ashur-uballit was the first to claim the title "king of the land of Assyria," implying that the region had been consolidated as a single territorial state under his reign. In his correspondence with the pharaoh, Ashur-uballit claimed to be a "Great King," on equal footing with the important rulers of Egypt, Babylonia, and Hatti. Mitanni remained in the unenviable position of warfare on two fronts: the Hittites from the northwest and Ashur-uballit's successors from the east. Adad-nirari I annexed much of Mitanni, extending Assyria's western frontier to just short of Carchemish. 
Shalmaneser I turned Mitannian territory into the Assyrian province of "Hanigalbat," governed by an Assyrian official. His reign also witnessed the first seeds of Assyria's policy of deportation: Conquered peoples were relocated away from their homeland in order to crush rebellious tendencies as well as to exploit new agricultural land for the empire. Tukulti-Ninurta I conquered Babylon and deposed the Kassite king, Kashtiliash IV. He appointed a series of puppet kings to Babylon's throne, but a local rebellion soon returned control to the Kassites. This Assyrian monarch also set a precedent by founding a new capital and naming it after himself ("Kar-Tukulti-Ninurta"). Tukulti-Ninurta was eventually assassinated by one of his sons, and the rapid succession of the next three rulers suggests violent contention for the throne. Stability returned to Assyria with the accession of Ashur-resha-ishi I. Around this time the increased use of iron for armor and weapons greatly influenced the methods of Assyrian warfare. His son, Tiglath-pileser I, achieved great victories in the Syrian region and even campaigned as far as the Mediterranean. He was the first to record his military campaigns in chronological order, thus giving rise to the new genre of "Assyrian annals." To the south, conflict between Assyria and Babylonia was temporarily halted by the advent of a common enemy: the Aramaeans. These were a nomadic Semitic people of northern Syria who ravaged Mesopotamia in times of famine. Under this invasion Assyria lost territory and may have been reduced to the districts of Ashur, Nineveh, Arbela, and Kilizi.
Neo-Assyrian Kingdom
The Neo-Assyrian kingdom (934–609 b.c.e.) began with Ashur-dan II, who resumed regular military campaigns abroad after more than a century of neglect. He and his successors focused their attacks on the Aramaeans to recover areas formerly occupied by the Middle Assyrian empire. 
Adad-nirari II set the precedent for the "show of strength" campaign, an official procession displaying Assyria's military power, which marched around the empire and collected tribute from the surrounding kingdoms. This monarch also installed an effective network of supply depots to provision the Assyrian army en route to distant campaigns. Ashurnasirpal II has been considered the ideal Assyrian monarch, who personally led his army in a campaign every year of his reign. He subjected Nairi and Urartu to the north, controlled the regions of Bit-Zamani and Bit-Adini to the west, and campaigned all the way to the Mediterranean. Shalmaneser III continued his father's tradition of military aggression. From his reign to Sennacherib's (840–700 b.c.e.), the annual campaigns were so regular that they served as a secondary means of dating (i.e., the "Eponym Chronicle"). At Qarqar on the Orontes River in 853 b.c.e., Shalmaneser fought against a coalition led by Damascus, which included "[King] Ahab, the Israelite." Under Ashurnasirpal II and Shalmaneser III military strategy was honed to great effectiveness: When enemies refused to pay regular tribute, a few vulnerable cities would be taken and their inhabitants tortured by rape, mutilation, beheading, flaying, or impalement upon stakes. This "ideology of terror" was designed to discourage armed insurrection, lest Assyria exhaust its resources. As a last resort, however, a foreign state would be annexed as an Assyrian province. The strategy of forced deportations was employed with reasonable success. For the next century Assyria experienced a decline due to weakness in its central government, as well as the military dominance of its northern neighbor, Urartu. Tiglath-pileser III (biblical "Pul"), however, restored prestige to the monarchy by curtailing the power of local governors. Instead of levying troops annually, he built up a standing professional army. 
Tiglath-pileser defeated the Urartians and invaded their land up to Lake Van. In the west an anti-Assyrian coalition was crushed, and the long-recalcitrant Damascus was annexed. He also adopted a new policy toward Babylonia. The Assyrian monarchs had traditionally restrained their efforts to control Babylonia, in deference to the latter's antiquity as the ancestral origin of Assyria's own culture and religion. In 729 b.c.e., however, Tiglath-pileser established a precedent by deposing the Babylonian king and uniting Assyria and Babylonia in a dual monarchy. Hebrew tradition credits Shalmaneser V with the fall of Samaria in 722 b.c.e., the very last year of his reign. Two years later, however, Sargon II still had to crush a coalition led by Yaubidi of Hamath, who had fomented rebellion in Arpad, Damascus, and Samaria. The victory was depicted on relief sculptures in the newly founded royal city, Dur-Sharrukin (modern Khorsabad). After a prolonged struggle, including a defeat by the Elamites at Der (720 b.c.e.), Sargon eventually wrested the Babylonian throne from Merodach-baladan II. In 705 b.c.e., however, Sargon's body was lost in battle, prompting speculation about divine displeasure. Sargon's successor, Sennacherib, eventually decided to move the capital to Nineveh. During his 701 b.c.e. campaign in Palestine, Sennacherib became the first Assyrian monarch to attack Judah. He also attempted various methods of controlling Babylonia. When direct rule failed, Sennacherib installed a pro-Assyrian native as puppet king. Thereafter, he delegated the control of Babylonia to his son, who was later kidnapped by the Elamites. Finally, in 689 b.c.e. he razed Babylon to the ground. Sennacherib was assassinated by two of his sons, a crime later avenged by another son, Esarhaddon. The latter was successful in his overtures to achieve reconciliation with Babylon. 
Esarhaddon may have overstretched Assyria's limits, however, when he invaded Egypt and conquered Memphis in 671 b.c.e. At his death Esarhaddon divided the empire between two sons: Ashurbanipal in Assyria and Shamash-shuma-ukin in Babylonia. Egypt proved troublesome to hold, and Ashurbanipal eventually lost it to Psammetichus I. Moreover, civil war broke out between Assyria and Babylonia. The Assyrians conquered Babylon by 648 b.c.e. and invaded Elam, which had been Babylon's ally. Although successful, the civil war had taken its toll on Assyrian forces. Also, a crippled Elam was no longer a buffer between Assyria and the expanding state of Media. In 614 b.c.e. the Medes conquered the city of Ashur. Two years later, in coalition with the Babylonians and Scythians, they overthrew Nineveh. The defeated Assyrian forces fled to Haran, but the allied armies pursued them there and effectively ended the Neo-Assyrian kingdom in 609 b.c.e. See also Babylon, early period; Egypt, culture and religion; Fertile Crescent; Israel and Judah. Further reading: Grayson, A. Kirk. "Mesopotamia, History of: History and Culture of Assyria." In The Anchor Bible Dictionary, vol. 4, edited by David N. Freedman, 733–755. New York: Doubleday, 1992; Oates, David. Studies in the Ancient History of Northern Iraq. London: Oxford University Press, 1968; Oppenheim, A. Leo. Ancient Mesopotamia. Rev. by Erica Reiner. Chicago: University of Chicago Press, 1977; Saggs, H. W. F. The Might That Was Assyria. London: Sidgwick and Jackson, 1984. John Zhu-En Wee

Athanasius

(c. 300–373 c.e.) theologian and bishop
Probably first a deacon (311–328 c.e.), ordained by the bishop Alexander, and Alexander's personal secretary at the Council of Nicaea (325 c.e.), Athanasius was elected bishop of Alexandria in 328 c.e. His tenure was marked by conflict with the Meletian Church in Egypt and with pro-Arian bishops within and outside his jurisdiction. Alexander had not enforced the canons of Nicaea with respect to the Meletian bishops in Egypt, and Athanasius met with strong resistance when he insisted on the Council of Nicaea's decisions. The Meletians made common cause with Arianism, whose strength in the East was supported by the pro-Arian Constantine the Great. Athanasius was dismissed from his see by a synod of bishops at Tyre in 335, and Emperor Constantine exiled him to Trier. After Constantine's death (May 22, 337) the pro-Orthodox emperor Constantine II reinstated Athanasius. Athanasius's main opponents were now the Arians, in part because of the support they enjoyed among the conservative anti-Nicene bishops of the East as well as in the imperial courts of some of the emperors. Indeed, Athanasius's periods of exile correspond to periods when pro-Arian emperors or caesars of the East were exercising their religious policy. Athanasius was exiled again in 339 because of the resentment of the eastern bishops, led by Eusebius of Nicomedia, at Constantine II's rejection of the decision of the Synod of Tyre, and because these bishops were supported by the emperor of the East, Constantius II. Following official recognition by Pope Julius I of Rome and by the Council of Sardica (343), which had been convoked by Constans, the emperor of the West, Constans himself exerted pressure on Constantius II, and Athanasius was reinstated in 346. Constantius II became sole emperor after the assassination of Constans in 350 and was then free to enforce his pro-Arian policy. Synods and letters denouncing Nicaea and its strongest supporter led Athanasius to flee from arrest. 
From 356 to 361 he hid among the monks of Egypt, although he remained in control of the pro-Nicene clergy through an intelligence network. Emperor Julian the Apostate recalled him in 361, but because of his popularity and success in unifying the pro-Nicene parties in Egypt he was forced to leave Alexandria in 363; the death of Julian that same year permitted his return. The pro-Arian emperor Valens (364–378) exiled Athanasius in 365 but in 366 sought his support against the Goths, and he was reinstated on February 1. He remained bishop until his death in 373. Athanasius’s theology must be reconstructed from his works, which were composed for specific occasions, such as sermons, or for specific problems, in commentaries, apologia, and polemical tracts, particularly against the Arians. Athanasius described the qualities of God in apophatic terms (such as inconceivable and uncreated) and rejected anthropomorphism, reflecting the Alexandrine tradition and its debt to Platonist philosophy. God is the source of all creation by his will. God created and governs the world through his Logos, with whom he is united from before all time. The Logos became united with humanity through the incarnation into an individual body. This incarnation was real, but Christ did not possess the human weaknesses (such as fear and passion). The incarnation was the union of the Logos with a human body; the Logos did not assume a human soul. Athanasius attempted to solve the problem of the human soul in the incarnated Logos by including this human soul and human “psychic” qualities in his definition of the human body. See also Greek Church; Latin Church; Neoplatonism; Philo. Further reading: Anatolios, Khaled. Athanasius: The Coherence of His Thought. New York: Routledge, 1998; Drake, H. A. Constantine and the Bishops: The Politics of Intolerance. Baltimore, MD: Johns Hopkins University Press, 2000. Robert R. Phenix, Jr.

Athenian predemocracy

Ancient Athens underwent a series of governments and reforms before it became the well-known democratic city-state that epitomized the ideals and the culture of ancient Greece. During the Archaic Period, lasting roughly from 800 to 480 b.c.e., Athens was a city-state governed by a king, known as a basileus. Due to Athens’s geographic position on a beautiful harbor surrounded by agriculturally rich lands, the city was able to resist invasion and to maintain and expand its influence. As Athens’s trade and influence expanded, the king’s powers diminished. The Areopagus, a council of Athenian nobles, slowly usurped the king’s power. The council, named for the hill upon which it met, was filled with nobles who gained wealth and influence from controlling the city’s wine and olive oil markets. With their increased wealth they were able to exert more influence over Athens and the king. Over time Athens became a de facto oligarchy, consisting of the Areopagus and nine elected rulers, known as archons, who were selected by the Areopagus. The archons tended to all matters of state but always had to receive approval for their decisions and actions from the Areopagus. Upon the end of their term archons became members of the Areopagus. Since rule was controlled by the wealthy, the Athenian government ineffectively addressed the issues facing commoners. Since members of the Areopagus dominated olive oil and wine production, everyday wheat farmers were unable to break into these markets. Eventually, wheat prices dropped as Athens began to trade for cheaper wheat, leaving Athenian farmers in debt and often in partial slavery. With the city-state ripe for reform, prominent Athenians and members of the Areopagus agreed to appoint a dictator in order to reform the government and the economy. Together, they selected Solon, a prominent Athenian lawmaker, poet, and former archon. 
Solon voided outstanding debts, freed many Athenians from slavery, banned loans secured by the borrower’s person, and promoted wine and olive production by common farmers. In the constitution that he created Solon established a four-tier class structure. The top two tiers, based on wealth, were able to serve on the Areopagus, while the third class was able to serve, if selected, on an elected council of 400 citizens. This council effectively acted as a check upon the Areopagus. The lowest class was permitted to assemble and to elect some local leaders. Judicial courts were reformed, and trial by jury was introduced. As soon as the constitution was finalized Solon gave control of the government back to the Areopagus. Although the overwhelming majority of Athenians praised his governmental reforms, Solon failed to improve the economy. Peisistratus, a military general, took control and began reforming not just the economy but also religion and culture. He kept Solon’s constitution in place, provided that his own supporters were chosen for office. Upon Peisistratus’s death his son, Hippias, was unable to maintain control and was overthrown by Sparta, which placed its own supporters in Athenian posts. The Spartans selected Isagoras to lead Athens, but he disenfranchised too many Athenians, provoking rebellion. Opposed by Cleisthenes, Isagoras was eventually forced to flee. Cleisthenes enfranchised all free men in Athens and the surrounding areas and reformed the government, allowing all male citizens to participate and to vote for a council made up of elected male citizens over the age of 30. In order to ensure that ambitious Athenians were controlled, the council was allowed to “ostracize” citizens by majority vote, banishing them from Athens for 10 years. With these reforms Cleisthenes effectively engineered Athens’s transition to democracy. See also Greek city-states. Further reading: Hooker, Richard. “Ancient Greece: Athens.” Available online. 
URL: http://www.wsu.edu (September 2005); Sinclair, R. K. Democracy and Participation in Athens. Cambridge: Cambridge University Press, 1991. Arthur Holst

Augustine of Hippo

(354–430 c.e.) bishop and theologian Born in 354 c.e. to a pagan father and a Christian mother, (St.) Monica, in Tagaste in North Africa, Augustine received a classical education in rhetoric on the path to a career in law. During his studies at Carthage in his 19th year, he read Cicero’s Hortensius and was immediately converted to the pursuit of wisdom and truth for its own sake. In this early period at Carthage he also became involved with the ideas of Mani and Manichaeanism, which taught that good and evil are primarily ontological realities, responsible for the unequal, tension-filled cosmos in which we live. However, the inability of their leaders to solve Augustine’s problems eventually led the young teacher to distance himself from the group. Leaving the unruly students of Carthage in 383, Augustine attempted to teach at Rome only to abandon the capital in favor of a court position in Milan the following year. This step brought him into contact with the bishop of Milan, Ambrose, whose preaching was instrumental—along with the writings of the philosophers of Neoplatonism—in convincing Augustine of the truth of Christianity. He could not commit himself to the moral obligations of baptism, however, because of his inability to live a life of continence. His struggle for chastity is movingly told in his autobiographical work Confessions: Hearing of the heroic virtue of some contemporaries who abandoned everything to become monks, Augustine felt the same high call to absolute surrender to God but was held back by his attachment to the flesh. However, in a moment of powerful grace that came from reading Romans 13:12–14, he was able to reject his sinful life and to choose a permanent life of chastity as a servant of God. This decision led him first to receive baptism at Ambrose’s hands (Easter 387 c.e.) and then to return to North Africa to establish a monastery in his native town of Tagaste. 
In 391 he was ordained a priest for the town of Hippo, followed by his consecration as bishop in 395. In his 35 years as bishop Augustine wrote numerous sermons, letters, and treatises that exhibit his penetrating grasp of the doctrines of the Catholic faith, his clear articulation of difficult problems, his charitable defense of the truth before adversaries and heretics, and his saintly life. Augustine’s theology was largely shaped by three heresies that he combated during his episcopacy: Manicheanism, Donatism, and Pelagianism. As a former Manichee himself, he was intent on challenging their dualistic notion of god: He argued that there is only one God, who is good and who created a good world. Evil is not a being opposed to God but a privation of the good, and therefore has no existence of itself. Physical evil is a physical imperfection whose causes are to be found in the material world. Moral evil is the result of a wrong use of free will. In fighting Donatism, Augustine dealt with an ingrained church division that held that the clerics of the church had themselves to be holy in order to perform validly the sacraments through which holiness was passed to the congregation. In rebutting the Donatists, Augustine laid the foundation for sacramental theology for centuries to come. He insisted that the church on earth is made up of saints and sinners who struggle in the midst of temptations and trials to live a more perfect life. The church’s holiness comes not from the holiness of her members but from Christ who is the head of the church. Christ imparts his holiness to the church through the sacraments, which are performed by the bishops and priests as ministers of Christ. In the sacraments Christ is the main agent, and the ministers are his hands and feet on earth, bringing the graces of the head to the members. Augustine’s last battle was in defense of grace. Pelagius, a British monk, believed that the vast majority of people were spiritually lazy. 
What they needed was to exert more willpower to overcome their vices and evil habits and to do good works. Pelagius denied that humans inherit original sin of their ancestor Adam, the legal guilt inherent in the sin, or its effects on the soul, namely a weakening of the will with an inclination toward sin. He believed that human nature, essentially good, is capable of good and holy acts on its own. In his thought grace is only given by God as an aid to enlighten the mind in its discernment of good and evil. For Augustine, whose own conversion was due to an immense grace of God, the attribution of goodness to the human will was tantamount to blasphemy. God and only God was holy. If humanity could accomplish any good at all, it was because God’s grace—won through the merits of Jesus (Christ) of Nazareth— was freely given to aid the will in choosing good. Grace strengthens the will by attracting it through innate love to what is truly good. Thus Christ’s redemption not only remits the sins of one’s past but continually graces the life of the believer in all his or her moral choices. In the midst of this long controversy (c. 415–430) Augustine also developed a theology of the fall of Adam, of original sin, and of predestination. Augustine is probably best known for his Confessions, his autobiography up to the time of his return to North Africa, and for the City of God, undertaken as his response to both the pagans and the Christians after the sacking of Rome in 410, the former because they attributed it wrongly to divine retribution and the latter because their faith was shaken by the horrific event. See also Christian Dualism (Gnosticism); Christianity, early; Nicaea, Council of; Rome: decline and fall. Further reading: Augustine of Hippo. Confessions. New York: Knopf, 1998; Brown, P. St. Augustine of Hippo: A Biography. Berkeley: University of California Press, 2000. Gertrude Gillette

Aurelius, Marcus

(121–180 c.e.) Roman emperor Marcus Aurelius was the only Roman philosopher king, author of Meditations and last of the “good” emperors. The Pax Romana began its slow collapse during his reign. Marcus Aurelius Antoninus Augustus was born on April 26 in 121 c.e. His father, praetor Marcus Annius Verus, died when Aurelius was only three months old, and his mother, Lucilla, inherited great family wealth. Emperor Hadrian felt great empathy toward Aurelius and became his mentor. He made Aurelius a priest of the Salian order in 128. By age 12 Aurelius began to practice Stoicism and became extremely ascetic, scarcely sleeping and eating. Hadrian controlled his education, having Rome’s brightest citizens tutor Aurelius. He studied rhetoric and literature under M. Cornelius Fronto, who taught him Latin and remained a mentor for life. In 136 Aurelius met Apollonius the Stoic. Hadrian adopted Aurelius in 138, and he was given the title caesar in 139. Realizing his death was approaching, Hadrian arranged for the future emperor Antoninus Pius (86–161 c.e.) to adopt Aurelius along with Lucius Verus (130–169 c.e.), who became Aurelius’s adoptive brother, making them joint heirs to succession. Aurelius was betrothed in 135 to Annia Galeria Faustina, the younger daughter of Antoninus Pius and Annia Galeria Faustina the Elder. They married in 141 and had 14 children in 28 years of marriage. Only five of their children, one son, the weak and unstable Commodus (161–192), and four daughters would survive to adulthood. By 147 Aurelius gained the power of tribunicia potestas, and he shared these powers with Pius. Aurelius was admitted to the Senate and held consulships in 140, 145, and 161 c.e., a rare honor for a private citizen. Marcus Aurelius and Lucius Verus became co-emperors on March 7, 161. As co-emperors, Verus conducted battles in the east while Aurelius concentrated on fighting the ever-increasing threat from the German tribes in the north. 
Aurelius spent the majority of his reign fighting against the encroachment of the formidable German tribes that opposed Roman rule. Aurelius fought the Marcomanni and the Quadi, who settled in northern Italy, and the Parthians, who moved into the east of the Roman Empire. Marcus Aurelius instituted positive reform in various elements of Roman society, including changes to Roman civil law. Upon the advice of the revered jurist Quintus Cervidius Scaevola he abolished inhumane criminal laws and severe sentencing. In family law he alleviated the absolute patriarchy fathers held over their children. Aurelius granted women equal property rights and the right to receive property on behalf of children. He created the equivalent of modern-day trust companies enabled to distribute parental/family legacies at the age of majority. Realizing the value of children in Roman society, Aurelius endowed orphanages and hospitals. In the military he allowed promotion only through merit. During the numerous economic crises of his reign Aurelius refused to raise taxes and used his own wealth many times to cover the financial stress caused by continuous warfare. He also debased the silver coinage several times. Returning legions serving under the command of Verus (who died in 169) brought plague to Rome from the East. Excessive and repeated flooding destroyed the granaries, leading to starvation. Avidius Cassius (130–175), believing Aurelius was dead, unsuccessfully attempted to seize the throne in 175. He had little support once people realized Aurelius was still alive. His own men murdered him. Realizing the tragedy of Cassius’s error, Aurelius would allow no harm to come to Cassius’s family. The troops that Cassius had commanded once again brought plague back from the East. During his campaigns Aurelius wrote his 12 books of Meditations in Greek, detailing his reflections on life. His wife Faustina died in 175 at age 45. 
By 177 he allowed the self-indulgent Commodus full participation in his government. Aurelius died on March 17, 180 c.e., in Vindobona, present-day Vienna, at age 58. See also Antonine emperors; Rome: decline and fall. Further reading: Birley, Anthony. Marcus Aurelius: A Biography. New Haven, CT: Yale University Press, 1987; Farquharson, Arthur S. L. Marcus Aurelius: His Life and His World. Oxford: B. Blackwell, 1951; Grant, Michael. The Roman Emperors: A Biographical Guide to the Rulers of Imperial Rome, 31 BC–AD 476. New York: Charles Scribner’s Sons, 1985; Long, George. The Meditations of Marcus Aurelius Antoninus. New York: Avon, 1993. Annette Richardson

Axial Age and cyclical theories

The Axial Age is known as a pivotal period in history that dates from 800 to 200 b.c.e. Coined in the 20th century by the philosopher Karl Jaspers (1883–1969), the Axial Age refers to the period of history when the following major figures, among others, emerged: Confucius; Laozi; Gautama Buddha; Zarathustra; the Jewish prophets Elijah, Isaiah, and Jeremiah; the Greek thinkers Parmenides, Heraclitus, Plato, Socrates, and Archimedes; as well as the Greek tragedians. What the aforementioned individuals all have in common are their respective articulations of what have been called transcendental visions—articulations that differed greatly from the cosmological understandings of their time. The various prophets, philosophers, and sages began to ask a rather common set of ultimate questions regarding the nature and origin of the cosmos and all its various components, including themselves and their respective communities. Their inquiries and experiences

The Expanding World 600 CE to 1450

Abbasid dynasty

The Abbasids defeated the Umayyads to claim the caliphate and leadership of the Muslim world in 750. The Abbasids based their legitimacy as rulers on their descent from the prophet Muhammad’s extended family, not, as with some Shi’i, directly through the line of Ali and his sons. The Abbasids attempted to reunify Muslims under the banner of the Prophet’s family. Many Abbasid supporters came from Khurasan in eastern Iran. Following the Arab conquest of the Sassanid Empire, a large number of Arab settlers had moved into Khurasan and had integrated with the local population. Consequently, many Abbasids spoke Persian but were of Arab ethnicity. THE NEW CAPITAL OF BAGHDAD The first Abbasid caliph, Abu al-Abbas (r. 749–754), took the title of al-Saffah. His brother and successor, Abu Jafar, adopted the name al-Mansur (Rendered Victorious) and moved the caliphate to his new capital, Baghdad, on the Tigris River. Under the Abbasids the center of power for the Muslim world shifted eastward with an increase of Persian and, subsequently, Turkish influences. Persian influences were especially notable in new social customs and the lifestyle of the court, but Arabic remained the language of government and religion. Thus, while non-Arabs became more prominent in government, the Arabization, especially in language, of the empire increased. Mansur’s new capital, built between 762 and 766, was originally a circular fortress, and it became the center of Arab-Islamic civilization during what has been called the golden age of Islam (763–809). With its easy access to major trade routes, river transport, and agricultural goods (especially grains and dates) from the Fertile Crescent, Baghdad prospered. Agricultural productivity was expanded with an efficient canal system in Iraq. Commerce flourished with trade along well-established routes from India to Spain and trans-Saharan routes. A banking and bookkeeping system with letters of credit facilitated trade. 
The production of textiles, papermaking, metalwork, ceramics, armaments, soap, and inlaid wood goods was encouraged. An extensive postal system and network of government spies were also established. HARUN AL-RASHID AND THE ABBASID ZENITH The zenith of Abbasid power came under the caliphate of Harun al-Rashid (r. 786–809). Harun al-Rashid, his wife Zubaida, and mother Khaizuran were powerful political figures. Zubaida and Khaizuran were wealthy and influential women and both controlled vast estates. They also played key roles in determining succession to the caliphate. Like the Umayyads, the Abbasids never solved the problem of succession, and their government was weakened and ultimately, in part, destroyed because of rivalries over succession. Under Harun al-Rashid the Barmakid family exerted considerable political power as viziers (ministers to the ruler). The Barmakids were originally from Khurasan and had begun serving the court as tutors to Harun al-Rashid. The Barmakids served as competent and powerful officials until their fall from favor in 803, by which time a number of bureaucrats and court officials had achieved positions of considerable authority. The wealth of the Abbasid court attracted foreign envoys and visitors who marveled over the lavish lifestyles of court officials and the magnificence of Baghdad. Timurlane destroyed most of the greatest Abbasid monuments in the capital, and Baghdad never really recovered from the destruction inflicted by him. Under the Abbasids, provinces initially enjoyed a fair amount of autonomy; however, a more centralized system of finances and judiciary was implemented. Local governors were appointed for Khurasan, and soldiers from Khurasan made up a large part of the court bodyguard and army. In spite of their power and wealth the Abbasids twice failed to take Constantinople. 
The Abbasids also had to grapple with ongoing struggles between those who wanted a government based on religion and those who favored secular government. CIVIL WAR OVER ACCESSION AND THE END OF THE ABBASIDS Harun al-Rashid’s death incited a civil war over accession that lasted from 809 to 833. During the war, Baghdad was besieged for one year and was defended by the common people of the city, not the elite. Their exploits were commemorated in a body of poetry that survives to the present day. The attackers finally won, and the new caliph Mutasim (r. 833–842) moved the capital to Samarra, north of Baghdad, in 833. During the ninth century the Abbasid army came to rely more and more on Turkish soldiers, some of whom were slaves while others were free men. A military caste separate from the rest of the population gradually developed. In Khurasan, the Tahirids did not establish an independent dynasty but moved the province in the direction of a separate Iranian government. As various members of the Abbasid family fought one another over the caliphate, rulers in Egypt (the Tulunids), provincial governors, and tribal leaders took advantage of the growing disarray and sometimes anarchy within the central government at Samarra to extend their own power. The Zanj rebellion around Basra in southern Iraq in 869 was a major threat to Abbasid authority. The Zanj were African slaves who had been used as plantation workers in southern Iraq, the only instance of large-scale slave labor for agriculture in the Islamic world. Other non-slave workers joined the rebellion led by Ali ibn Muhammad. Ali ibn Muhammad was killed fighting in 883, and the able Abbasid military commander Abu Ahmad al-Muwaffaq, whose brother served as caliph, finally succeeded in crushing the rebellion. Under Caliph al-Muqtadir (r. 908–932) the capital was returned to Baghdad, where it remained until the collapse of the Abbasid dynasty. 
By the 10th century any aspirant to the caliphate needed the assistance of the military to obtain the throne. The army became the arbiter of power, and the caliphs were mere ciphers. A series of inept rulers led to widespread rebellions and declining revenues, while the costs of maintaining the increasingly Turkish army remained high. By the time the dynasty finally collapsed, it was virtually bankrupt. In 945 a Shi’i Persian, Ahmad ibn Buya, took Baghdad and established the Buyid dynasty, a federation of political units ruled by various family members. A remnant of the Abbasid family, carrying the title of caliph, moved to Cairo, where it was welcomed as an exile with no authority over either religious or political life. See also Islam: art and architecture in the golden age; Islam: music and literature in the golden age; Islam: science and technology in the golden age; Shi’ism; Umayyad dynasty. Further reading: Abbott, Nabia. Two Queens of Baghdad: Mother and Wife of Harun al-Rashid. Chicago, IL: University of Chicago Press, 1946; Lassner, Jacob. The Shaping of Abbasid Rule. Princeton, NJ: Princeton University Press, 1980; Shaban, M. A. The Abbasid Revolution. Cambridge: Cambridge University Press, 1970; Egger, Vernon O. A History of the Muslim World to 1405: The Making of a Civilization. Upper Saddle River, NJ: Pearson Prentice Hall, 2004. Janice J. Terry

Abelard, Peter and Heloise

Peter Abelard (1079–1142) was an abbot in the monastery of Saint-Gildas in the province of Brittany, France. He was born in Nantes, moved to Paris at the age of 15, and attended the University of Paris. He became a prolific writer, composing philosophical essays, letters, an autobiography, hymns, and poetry. He is best known for his intellectual work in the area of nominalism, the antithesis of realism and basis of modern empiricism. His book Sic et Non posed a number of theological and philosophical questions to its readers. In ethics, he began two works: “Dialogue between a Philosopher, a Jew and a Christian” and “Know Yourself.” Neither work was completed. His rebellious nature frequently angered people, particularly those in positions of authority. Often his independent thinking gave rise to conflicts, especially when he demonstrated mastery of a subject being taught by a mentor. On one occasion he challenged his former teacher, William of Champeaux, regarding realism and logically proved that nominalism, also known as conceptualism, explained what realism could not prove. At a time when education was not yet public, professors had no permanent place to teach. They would post an announcement that advertised where and when they would teach a particular subject and wait for students to arrive. In this way they established a following. Abelard was quite brilliant at age 25 and set up his own school despite limited teaching experience. He founded his school uncomfortably close to his former teachers’, provoking their anger. He lived a life of extremes, gaining the admiration, respect, and awe of those who studied under him, but often receiving the wrath of those whom he defied. He was accused of heresy on many occasions and at one point was forced to leave his monastery because he aggravated his peers so intensely. On two occasions he was excommunicated from the church. 
Heloise (1101–64) was the highly intelligent, beautiful, and charming niece of Fulbert, a prominent canon of Notre-Dame. Fulbert doted on her and demanded that she have only the best education, which took her to Paris near the monastery. Abelard heard of Heloise and requested that he be allowed to tutor her in her home. Permission was granted, and he moved in. There he found an eager pupil, 22 years his junior, and they soon became involved in a physical as well as scholarly relationship. When Heloise became pregnant, they rejoiced in their child (whom they later named Astrolabe) and made plans to marry. Heloise was fiercely independent and would not be forced into a marriage where she had no rights. But in her collected letters she mentions that she did not want to bring shame on Abelard by being a burden to him. In order to hide their relationship, and Heloise’s imminent delivery, Abelard took her to his sister’s house, where she stayed until she gave birth to their son. They secretly married in Paris, with only Heloise’s uncle and a few of their friends in attendance. Right after the marriage, Heloise took refuge in the Argenteuil convent to allay any gossip regarding her relationship with Abelard. Unaware that both Heloise and Abelard had planned this provisional measure, Fulbert thought that Abelard had abandoned Heloise and forced her into a nunnery. He planned to ambush and restrain Abelard and cut off his genitalia. In a series of maneuvers he arranged to pay one person to put a sleeping powder in Abelard’s evening meal and his servant to allow a gate to remain open. Fulbert sent word that he was looking for a Jewish physician to perform the sordid mutilation. After he had assembled his kinsmen and associates, they sought out Abelard and performed the horrible act. After the surgical alteration, Abelard took vows to become a monk at the monastery of Saint-Denis and persuaded Heloise to take vows to become a nun in a convent in Argenteuil. 
Although their physical relationship could not continue, they remained in contact throughout their lives. Ironically, Abelard, who had previously considered himself a ravening wolf to whom a tender lamb had been entrusted, wrote that the alteration had been a positive rather than a harmful event. He wrote, “…divine grace cleansed me rather than deprived me…” and that it circumcised him in mind as in body to make him more fit to approach the holy altar and that “no contagion of carnal pollutions might ever again call me thence.” Abelard and Heloise have been resurrected in a variety of artistic genres since their plight was first told in the 12th century. In 1606 William Shakespeare wrote the play Abélard and Elois, a Tragedie, although it was never completed. Josephine Bonaparte, upon hearing the tragic story, made arrangements for the two to be buried together in Père Lachaise Cemetery in Paris. Their modest sepulcher can be found on the map at the entrance to the cemetery. In 1819 Jean Vignaud (1775–1826) painted Abélard and Heloïse Surprised by the Abbot Fulbert (Les Amours d’Héloïse et d’Abeilard), which is now at the Joslyn Art Museum in Omaha, Nebraska. The extent to which artists have chosen Abelard and Heloise to create operas, plays, and movies is testament to the universality and poignancy of their story. Further reading: Brower, Jeffrey E., and Kevin Guilfoy, eds. The Cambridge Companion to Abelard. Cambridge: Cambridge University Press, 2004; Moncrieff, C. K. Scott, trans. The Letters of Abélard and Héloïse. New York: Alfred A. Knopf, 1942. Lana Thompson

A’isha

(d. 678) wife of the prophet Muhammad A’isha bint Abu Bakr was the daughter of Abu Bakr, one of the first converts to Islam and a close personal friend of the prophet Muhammad. According to the custom of the time, the family arranged A’isha’s engagement to the prophet Muhammad when she was only nine years old. Because A’isha played an important role in the personal disputes that evolved over the leadership of the fledgling Muslim community after the Prophet’s death, accounts about her life vary widely between the majority orthodox Sunni Muslims and Shi’i Muslims. Sunni accounts argue that the marriage was only consummated after A’isha was older, while more negative Shi’i narratives accept the tradition that she was only nine. However, historical accounts are unanimous in describing the union as a close and loving one. A’isha was thought to have been the Prophet’s favorite wife. A’isha played an active role in the political and even military life of the Islamic community in Medina. She was seen as a rival to Ali, the Prophet’s son-in-law by marriage to his daughter Fatima. Ali’s followers, or Shi’i, viewed Ali and his descendants as the rightful heirs to the leadership of Islamic society. On the other hand, the Sunni, the overwhelming majority of Muslims worldwide, believed that any devout believer could assume leadership of the community. While the Prophet was still alive, Ali accused A’isha of adultery after she left the Bedu (Bedouin) encampment in search of a lost necklace and failed to find the group when she returned. She was rescued and returned to camp by a man named Safwan. A’isha’s rivals, including Ali, took this opportunity to urge the Prophet to divorce her. The Prophet took A’isha’s side and subsequently received a revelation that adultery had to be proven by eyewitnesses. According to Ibn Ishaq’s Life of Muhammad (Sirat Rasul Allah), the oldest existing biography, the Prophet died in A’isha’s arms in 632 c.e. 
A’isha’s father, Abu Bakr, was then chosen as the first caliph, or leader of the community. Although Ali’s supporters felt he should have been the rightful heir, they reluctantly went along with the majority. When Ali’s supporters were believed to have been involved in the assassination of the third caliph, Uthman, in 656 c.e. and proclaimed Ali the fourth caliph, A’isha, astride a camel, led an armed force in a pitched battle against him. A’isha lost what became known as the Battle of the Camel and was forced to retire to Medina, where she died in 678 c.e. See also Caliphs, first four; Shi’ism. Further reading: Spellberg, D. A. Politics, Gender, and the Islamic Past: the Legacy of A’isha bint Abi Bakr. New York: Columbia University Press, 1994; Walther, Wiebke. Women in Islam. Rev. ed. Princeton, NJ: Marcus Wiener, 1993. Janice J. Terry

Albigensian Crusade

The matter of heresy in the Catholic Church threatened the unity of Christendom precisely at the time that the pope was calling for an all-out war to reclaim the Holy Lands from the Muslims. Pope Innocent III conceived of the plan to wipe out the Albigensian heresy in the south of France in the early decades of the 13th century. He would call for a crusade. At first the plan seemed ingenious: The pope would grant to fighters the spiritual benefits of a crusade, but the time of service would be brief (40 days) and close to home in comparison to earlier wars in the Holy Land. His ultimate goal was to unify Europe under papal authority so that he could marshal its resources into the Byzantine Empire, Muslim Spain, and, most important, the Holy Land. However, the twists and turns in the politics of the Albigensian Crusade (1208–29) ultimately drained resources from the wars abroad and strengthened the anti-Roman forces in France. In the next centuries the blunder of the Albigensian Crusade would be apparent in the schism of Avignon, where a French pope would oppose a Roman pope. Innocent at first supported the work of preaching and persuasion to win back the Albigensians, a loose network of sectarians and heretics of southern France. A variety of church investigators, from Bernard of Clairvaux to the pope, readily admitted that Catholic clergy serving the Albigensian natives stood in grave need of reform. But when peaceful measures did not make speedy enough progress, Innocent lost patience and turned to war. His decision came in 1208 when the papal delegate was murdered in Toulouse. Innocent held Count Raymond of Toulouse accountable both for his death and for the protection of the heretics in southern France and summoned the rest of France to take up arms. Some 20,000 knights and 200,000 foot soldiers responded. Their leader was the crusader veteran Simon de Montfort. 
Raymond lost no time in making peace with the papal forces, but Simon could never conquer the whole area of the Albigensians. Resistance was too entrenched, and Simon could only count on French troops for 40 days at a time, the terms of service that the church allowed for this crusade. Also, Simon was an outsider and extremely unpopular because of his brutality in war. In 1213 Innocent seemed to recognize the folly of the crusade and called it off. The king of Aragon, a warrior renowned for his battlefield skills against Muslims in Spain, took up the cause of Raymond. In effect, the Albigensian conflict became a tug of war between Spain and France. Although the pope now supported Raymond, the French nobles supported Simon. In the political melee that followed, another crusade was summoned. Though it was nominally against heresy, it was really against Raymond and his Spanish allies. On the battlefield the French-backed forces defeated the Spanish-backed forces. Simon’s shocking brutality led to his excommunication by Innocent. He died in battle in Toulouse in 1218. His nemesis Raymond died in 1222. The Albigensians rebounded throughout these latter years, leading many Catholic and French officials to threaten yet another crusade. Raymond’s son, however, was able to negotiate the Treaty of Meaux (1229), ceding the territory to Capetian France and institutionalizing Catholic influence everywhere. The church meanwhile found a new weapon to combat latent heresy: the Inquisition. See also Avignonese papacy; Crusades; heresies, pre-Reformation. Further reading: Madden, Thomas F. The New Concise History of the Crusades. Lanham, MD: Rowman & Littlefield, 2006; Trevor-Roper, Hugh. The Rise of Christian Europe: History of European Civilization Library. Norwich, England: Thames and Hudson, 1965. Mark F. Whitters

Alcuin

(c. 735–804) scholar Alcuin of York was an educator, poet, theologian, liturgical reformer, and an important adviser and friend of Charlemagne (c. 742–814 c.e.). He was a major contributor to the Carolingian Renaissance, a ninth-century c.e. intellectual revival within Charlemagne’s domains that shaped the subsequent history of education, religion, and politics in the Middle Ages. Alcuin was born in Northumbria, England, around 735 c.e. and educated at the cathedral school at York under its master, Aelbert. In 778 c.e. Alcuin became the librarian and master of the cathedral school at York, where his talent for teaching soon attracted students from other lands. Three years later, while in Parma (Italy), Alcuin met Charlemagne, who invited him to join his court. Excepting two journeys to his native England (in 786 and 790–793 c.e.), Alcuin lived and worked in the Frankish court from 782 c.e. until he retired in 796 c.e. to the abbey of St. Martin at Tours, where he was abbot until his death in 804 c.e. Although Alcuin never advanced beyond the clerical office of deacon, by the late 780s c.e. his aptitude as a teacher and his influence on royal administrative texts distinguished him among the clerics and scholars of the Carolingian court. One of Alcuin’s most significant (and original) contributions to medieval education lies in his mastery of the seven liberal arts and his composition of textbooks on grammar, rhetoric, and dialectic (the traditional arts of the trivium). Alcuin’s literary output also includes commentaries on biblical books, a major work on the Trinity, and three treatises against the Adoptionism of his contemporaries Felix of Urgel and Elipandus of Toledo. 
Adoptionism was the heretical belief that Christ was not the eternal Son of God by nature but rather merely by adoption. Alcuin also composed a number of poems and “lives” of saints. Alcuin contributed to the Carolingian Renaissance most directly as a liturgical reformer and editor of sacred texts. The various reforms that Alcuin introduced into liturgical books (books used in formal worship services) in the Frankish Empire culminated in his edition of a lectionary (a book containing the extracts from Scripture appointed to be read throughout the year), and particularly in his revision of what is known as the Gregorian Sacramentary (the book, traditionally ascribed to Pope Gregory I, used by the celebrant at Mass in the Western Church until the 13th century c.e., that contained the standard prayers for use throughout the year). In addition to revising liturgical texts, Alcuin edited Jerome’s Vulgate in response to Charlemagne’s request for a standardized Latin text of the Bible. His edition of the Vulgate was presented to Charlemagne on Christmas Day, 800 c.e., the very day on which the Frankish king became emperor. As abbot of St. Martin’s, Alcuin supervised the production of several pandects, or complete editions of the Bible. Alcuin’s preference for the Vulgate likely contributed to its final acceptance as the authoritative text of Scripture in the medieval West. Alcuin died at Tours on May 19, 804 c.e., and his feast day continues to be celebrated on May 19. See also Frankish tribe. Further reading: Gaskoin, C. J. B. Alcuin: His Life and Work. New York: Russell & Russell, 1966; Wallach, Luitpold. Alcuin and Charlemagne. Ithaca, NY: Cornell University Press, 1959. Franklin T. Harkins

Alfred the Great

(849–899) king of England Alfred the Great was the fifth son of King Ethelwulf (839–855) of the West Saxons (Wessex) and Osburga, daughter of the powerful Saxon earl Oslac. When Alfred became king of Wessex in 871, his small realm was the last independent Saxon kingdom in England. A massive Viking force from Denmark, known as the “Great Army,” had landed in East Anglia in 865 and had quickly overrun the Saxon kingdoms of Northumbria, East Anglia, and, eventually, Mercia. During his older brother Ethelred’s reign (866–871), Alfred had helped fight off an initial invasion of the Great Army into Wessex, but when his older brother died and Alfred inherited the throne, he was forced to gain peace by buying the Vikings off. In 878 the Great Army returned, led by the Danish chieftain Guthrum. Alfred’s fortunes were considerably augmented at this point by the fact that nearly half of the Vikings in the Great Army had settled down in Northumbria to farm and hence took no part in this new attack. Even so, Alfred and his men were hard pressed to survive. Driven from his royal stronghold at Chippenham in Wiltshire in early 878, he retreated to the marshes around Somerset, where he managed to regroup his forces. In May of that year he inflicted a solid defeat on the Vikings at the Battle of Edington and quickly followed this up with another victory by forcing Guthrum and his men to surrender their stronghold at Chippenham. By the Treaty of Wedmore (878), which brought hostilities to an end, the Danes withdrew north of the Thames River to East Mercia and East Anglia; together with Northumbria, these lands would constitute the independent Viking territories in England known as the Danelaw. Significantly, through this settlement Alfred gained control over West Mercia and Kent, Saxon lands that he had not previously controlled. 
In addition to acknowledging a stable demarcation between Alfred’s kingdom and Viking lands, Guthrum also agreed to convert to Christianity and, shortly thereafter, was baptized. The significance of this cannot be overstated, because it made the eventual assimilation of the Danes into Saxon, Christian society possible. With this latest Viking invasion having been thwarted, Alfred took steps to ensure the future safety of his people. Across his kingdom he created a series of fortified market places called burhs, which, in addition to aiding the economy of the realm, provided strong points of defense against Viking raids. These were strategically situated so that no burh was more than one day’s march (approximately 20 miles) from another. Alfred also reorganized his army so that at any one time, only part of the fyrd, or levy, was out in the field or defending the burhs, while the men in the other half would remain home tending their own and their absent kinsmen’s farms and livestock. This enabled Alfred to extend the time of service for which each half of the fyrd could be deployed, because it removed problems of supply and also relieved men from worrying about their families and farms back home. These measures proved immensely effective, not only allowing Alfred to successfully defend Wessex, but even enabling him to go on the offensive against the Vikings, so that by 879 much of Mercia had been cleared of Vikings, and in 885–886 he captured London. After the Danes launched a massive seaborne invasion against England in 892, the Anglo-Saxon Chronicle tells us that Alfred also created a new navy, composed of large, fast ships, in order to prevent any such subsequent overseas invasions from being successful. Having dealt with the Vikings, in the second half of his reign Alfred took steps to improve the administration of his realm as well as increase the level of learning and culture among his people. 
In doing so he showed himself to be a competent administrator and possessed of an inquiring and capable mind. He established an Anglo-Saxon law code, by combining the laws and practices of Wessex, Mercia, and Kent, and he kept a tight rein on justice throughout his lands. Like others of his time, the king had a deep respect for the wisdom and learning of the past, and he worked hard to make a variety of works available to his contemporaries for their religious, moral, and cultural edification. He took an active role in improving the spiritual and pastoral qualities of bishops and clerics throughout his realm by personally translating from Latin into the Anglo-Saxon language Pope Gregory the Great’s late sixth-century work titled Pastoral Care. He showed a similar interest in philosophical and moral issues by rendering Boethius’s early sixth-century treatise The Consolation of Philosophy into his native tongue, while sprinkling throughout his translation numerous personal observations. Alfred further engaged his passion for ethics, history, and theology by translating from Latin into Anglo-Saxon the work of the fifth-century Spanish prelate Paulus Orosius known as the Universal History. This latter work undertook to explain all history as the unfolding of God’s divine plan. To help foster a sense of pride and awareness of Anglo-Saxon history, Alfred rendered (rather loosely) the Venerable Bede’s eighth-century work Ecclesiastical History of the English People. To this same end he ordered the compilation of the Anglo-Saxon Chronicle that was continued from his reign until the middle of the 12th century. Around 888 Bishop Asser of Sherborne wrote his Life of King Alfred, celebrating the king as a vigorous and brave warrior, a just ruler, and a man of letters and intellect as well. The political, military, and cultural accomplishments of King Alfred the Great are significant, especially when viewed within the larger context of late ninth-century European history. 
As much of the Carolingian dynasty fell into the chaos of feudalism because of the raids of Vikings, Muslims, and Magyars and the infighting among Charlemagne’s heirs, Alfred’s victories over the Vikings, and his subsequent expansion into Mercia and Kent, began a process that would result in his successors uniting all of England under the House of Wessex and in a fusion of Anglo-Saxon and Viking culture. Thus he is credited with establishing the English monarchy and alone among all English rulers bears the title “the Great.” See also Anglo-Saxon culture; Charlemagne; Vikings: Norway, Sweden, and Denmark. Further reading: Abels, Richard. Alfred the Great: War, Kingship, and Culture in Anglo-Saxon England. London and New York: Longman, 1998; Smyth, Alfred. King Alfred the Great. Oxford: Oxford University Press, 1995. Ronald K. Delph

Ali ibn Abu Talib

(c. 598–661) founder of Shi’ism Ali ibn Abu Talib was the second convert to Islam. The son of Muhammad’s uncle Abu Talib, Ali married his cousin Fatima, the daughter of the prophet Muhammad and Khadija. Ali and Fatima had two sons, Hasan and Husayn, who both played key roles in the history of Islamic society. Ali also fought courageously in the battles between the small Muslim community based in Medina and the Meccan forces prior to the Prophet’s triumphal return to Mecca. Because of his familial relationship with Muhammad, many of Ali’s supporters thought he should be Muhammad’s successor. Although the Prophet had not named a successor, some of Ali’s allies claimed that Muhammad had secretly chosen Ali to rule the Islamic community after his death. However, after some debate the Muslim majority chose Abu Bakr to be the new leader, or caliph. Many members of the powerful Umayyad clan opposed Ali, and he had also feuded with A’isha, the Prophet’s favorite wife. Thus when the next two caliphs were chosen, Ali was again passed over as leader of the Islamic community. In 656 mutinous soldiers loyal to Ali assassinated the third caliph, Uthman, a member of the Umayyad family, and declared Ali the fourth caliph. But Mu’awiya, the powerful Umayyad governor of Syria, publicly criticized Ali for not pursuing Uthman’s assassins. A’isha sided with the Umayyads and raised forces against Ali. But she was defeated at the Battle of the Camel and forced to return home. Feeling endangered in Mecca—an Umayyad stronghold—Ali and his allies moved to Kufa, in present-day Iraq. Ali’s followers were known as Shi’i, or the party of Ali. This split was to become a major and lasting rift within the Muslim community. Unlike the schism between Catholics and Protestants in Christianity, the division among Muslims was not over matters of theology but over who should rule the community. The majority, orthodox Sunnis, believed that any devout and righteous Muslim could rule. 
The Shi’i argued that the line of leadership should follow through Fatima and Ali and their progeny as the Prophet’s closest blood relatives. The Syrians never accepted Ali’s leadership, and the two sides clashed at the protracted Battle of Siffin, near the Euphrates River in 657. When neither side conclusively won, the famed Muslim military commander Amr ibn al-‘As negotiated a compromise that left Mu’awiya and Ali as rival claimants to the caliphate. The Kharijites (a small group of radicals who rejected city life and who believed that God should select the most devout Muslim to be leader) were outraged at Amr’s diplomacy, Mu’awiya’s elitism and wealth, and Ali’s indecisiveness. According to tradition, they devised a plot to kill all three during Friday prayers. The attacks on Amr and Mu’awiya failed, but a Kharijite succeeded in stabbing Ali to death in the mosque at Kufa in 661. Ali’s tomb in Najaf, south of present-day Baghdad, remains a major site of Shi’i pilgrimage to the present day. After Ali’s death, his eldest son, Hasan, agreed to forgo his claim to the caliphate and retired peacefully to Medina, leaving Mu’awiya the acknowledged caliph. Ali’s descendants as well as Muhammad’s other descendants are known as sayyids, lords, or sherifs, nobles, titles of respect used by both Sunni and Shi’i Muslims. Within the various Shi’i sects Ali is venerated as the first imam and the first righteously guided caliph. See also Muhammad, the prophet; A’isha; Shi’ism; Caliphs, first four; Umayyad dynasty. Further reading: Kennedy, Hugh. The Prophet and the Age of the Caliphates: The Islamic Near East From the Sixth to the Eleventh Century. London: Longman, 1986; Madelung, Wilferd. The Succession to Muhammad: A Study of the Early Caliphate. New York: Cambridge University Press, 1997; Tabataba’i, ‘Allamah Sayyid Muhammad Husayn. Shi’ite Islam. Translated by Sayyid Hossein Nasr. Albany: State University of New York Press, 1975; Veccia Vaglieri, L. 
“Ali ibn Abi Talib,” Encyclopaedia of Islam. New ed., Vol. I, Leiden: Brill, 1960. Janice J. Terry

Almoravid Empire

North Africa’s Berber tribes began converting to Islam with the commencement of the Arab conquests during the second half of the seventh century under the al-Rashidun and Umayyad Caliphates. Although Berber Muslims were active participants in the expansion of the Islamic state north from Morocco into Iberia, they remained subservient to Arab commanders appointed by the reigning caliph in the Middle East. Around 1050, from the Sahara in Mauritania, the first major political-military movement dominated by Berbers, the Almoravids, began to emerge. This revolutionary movement was founded and led by Abdullah ibn Yasin al-Gazuli, a fundamentalist Sunni preacher of the Maliki legal school who had been trained at Dar al-Murabitun, a desert religious school in the Sahara. Abdullah had begun his career as a preacher by teaching orthodox Sunni Islam to the Berber Lamatunah tribes in the Sahara, who had converted to Islam but remained ignorant of its intricacies. The origins of the Almoravid movement lay in the foundation by Abdullah of a small, militant sect that abided by a strict interpretation of Maliki Islamic law. To join Abdullah’s movement, new members were flogged for past sins, and infractions of Islamic law were severely punished. The community was guided by the religious legal opinions of Abdullah and later by the legal rulings of Maliki jurists, who were paid for their services. In 1056 the Almoravids, who had developed into a strong and fanatical military movement, began to advance northward into Morocco, where they subjugated other Berber tribes and preached a strict version of Sunni Islam. Three years later, during a fierce war against the Barghawata Berber tribe, Abdullah was killed. After the death of Abdullah, leadership of the Almoravid movement passed to two cousins, Yusuf ibn Tashfin and Abu Bakr. 
In 1062 the city of Marrakesh was founded in southern Morocco, where it would serve as the Almoravid capital, followed in 1069 by the establishment of Fez. Under Yusuf and Abu Bakr, the Almoravid Empire expanded eastward into Algeria by the early 1080s. By 1075 Almoravid forces had expanded into the West African kingdom of Ghana, and the conquest of Morocco was completed by 1084. While the Almoravids continued to expand their realm in North Africa, Christian states in Iberia began to chip away at the Iberian Muslim states. Under the leadership of Alfonso VI, the king of Castile, Spanish Christian forces forced the Islamic city-states in the south, including Seville and Granada, to pay him tribute. In the late 1070s Iberian Muslims sent messengers to the Almoravids, requesting support against the Christians. However, it was not until 1086 that Yusuf crossed the Mediterranean into Iberia, where he defeated Alfonso VI’s army at Sagrajas. Between 1090 and 1092 Yusuf established Almoravid authority over the Muslim states in southern Iberia, forming a strong line of defense against further Christian expansion. Although the Almoravid leadership did not favor the secular arts, such as nonreligious poetry and music, other forms of art and architecture continued to receive government support. Christian and Jewish communities residing in the south were persecuted, and the cooperation and intellectual collaboration that had once existed between Iberia’s Muslims, Christians, and Jews ended. In 1106 Yusuf died of old age and was succeeded as Almoravid caliph by Ali ibn Yusuf. At the time of Yusuf’s death, the Almoravid Empire was at the height of its power, stretching across Morocco south to Ghana, north into Iberia, and east into Algeria. During his reign and that of his successor Ali, Maliki jurists served as paid participants in the government, and the influence of a strict version of Sunni Islam was increased. 
Although the Almoravids officially recognized the authority of the Abbasid Caliphate in Baghdad, Iraq, they ruled independently and without interference from Iraq. They also maintained generally cordial relations with the neighboring Fatimid Caliphate centered in Egypt. Opposition to the Almoravid Empire had already taken root in North Africa by the time of Yusuf’s death. The Almoravid caliph Ali’s use of Christian mercenaries and foreign Turkish slave-soldiers raised the ire of a militant fundamentalist Berber movement, the Almohads, led by Muhammad ibn Tumart, a member of the Hargha tribe of Morocco’s Atlas Mountains. The Almohads opposed the influence of the Almoravids’ Maliki jurists, who Ibn Tumart argued had corrupted Sunni Islamic orthodoxy. In 1100 Ibn Tumart returned to his native mountain village after spending years in Iberia and then further east studying Islamic theology, legal thought, and philosophy. He founded a mosque and school where he began to preach his interpretation of Sunni Islam. Ibn Tumart ordered that the call to prayer and the sermons during Friday congregational prayers be delivered in Berber instead of Arabic, and it is reported that he wrote several religious treatises in Berber as well. The growing influence of the Almohads would continue and would come to threaten the authority and existence of the Almoravid Empire, which was further weakened in 1144 with the death of Caliph Ali. It was during the reign of Ali that Almoravid power began to disintegrate, but it was under his successors that the empire would finally collapse. Faced with growing opposition in Iberia, the Almoravids were defeated in battle by Spanish, French, and Portuguese armies between 1138 and 1147, losing control of the cities of Zaragoza and Lisbon. 
In Morocco, the Almoravid heartland, the increasing influence of the Almohads continued to loom, even after the death of Ibn Tumart in 1133. The successor to the Almohad throne, Abd al-Mu’min, supervised the final destruction of the Almoravid Empire, which finally collapsed in 1147 after the fall of its capital city of Marrakesh. See also Abbasid dynasty; Christian states of Spain; Fatimid dynasty; Muslim Spain; Reconquest of Spain. Further reading: Brett, Michael, and Elizabeth Fentress. The Berbers. Cambridge: Blackwell Publishers, 1997; Constable, Olivia Remie. Medieval Iberia: Readings from Christian, Muslim, and Jewish Sources. Philadelphia: University of Pennsylvania Press, 1997; DeCosta, Miriam. “Historical and Literary Views of Yusuf, African Conqueror of Spain.” The Journal of Negro History (October 1975); Fletcher, Richard. Moorish Spain. Berkeley: University of California Press, 1993; Hodgson, Marshall G. S. The Venture of Islam. Chicago, IL: University of Chicago Press, 1974; Kennedy, Hugh. Muslim Spain and Portugal: A Political History of al-Andalus. New York: Longman Publishers, 1997; Norris, H. T. “New Evidence on the Life of Abdullah B. Yasin and the Origins of the Almoravid Movement.” The Journal of African History 12, no. 2 (1971); O’Callaghan, Joseph. A History of Medieval Spain. Ithaca, NY: Cornell University Press, 1983; Reilly, Bernard F. The Medieval Spains. London: Cambridge University Press, 1993; Von Grunebaum, G. E. Classical Islam: A History 600–1258. New York: Barnes and Noble Books, 1970. Christopher Anzalone

Andes: pre-Inca civilizations

Building on the economic, political, cultural, and ideological-religious developments that shaped Andean prehistory from the Lithic Period to the mid-Early Intermediate Period (see Volume I), the eight centuries between 600 and 1400 c.e. saw the continuing expansion and contraction of kingdoms, states, and empires across large swaths of the Andean highlands and adjacent coastal lowlands. The three most prominent imperial states were the Huari, the Tiwanaku, and, later, the Chimor. These empires, in turn, laid the groundwork for the explosive expansion of the Inca Empire in the 15th century (see Volume III). The Tiwanaku culture and polity, whose capital city of the same name was located some 15 kilometers southeast of Lake Titicaca, traced its origins to humble beginnings around 400 b.c.e., with the establishment of clusters of residential compounds along a small river draining into the giant lake. For the next eight centuries, the nascent Tiwanaku polity competed with numerous adjacent settlements for control over the rich and highly prized land in the Lake Titicaca basin, until the mid-300s c.e., when it came to dominate the entire basin and its hinterlands. Lake Titicaca and its surrounding basin represent a singular feature in the mostly vertical Andean highland environment. The largest freshwater lake in South America (covering some 3,200 square miles and stretching for some 122 miles at its longest) and the highest commercially navigable lake in the world (at an elevation of 12,500 feet), Lake Titicaca tends to moderate temperature extremes throughout the basin while providing an ample supply of freshwater and a host of other material resources, especially reeds, fish, birds, and game. The basin itself covered some 22,000 square miles, significant portions of which were relatively flat and arable when modified with raised fields. 
All of these features rendered the zone unusually productive and highly coveted—not altogether unlike the Basin of Mexico—permitting it to support one of the highest population densities in all the pre-Columbian Americas. Archaeologists divide Tiwanaku’s growth into five distinct phases extending over a period of some 1,400 years, until the polity’s collapse around 1000 c.e. Phases I and II saw the settlement’s gradual expansion on the southern fringes of the lake. Phase III (c. 100–375 c.e.) saw extensive construction within the capital city. By Phase IV (c. 375–600 or 700), Tiwanaku had emerged as a true empire, dominating the entire Titicaca Basin and extending its imperial and administrative reach into windswept puna (high plains), throughout large parts of the surrounding altiplano, and south as far as northern Chile. Phase V (c. 600/700–1000) was a period of gradual decline, until the capital city itself was abandoned by around 1000. The empire’s economic foundations were agropastoral, combining intensive and extensive agriculture with highland pastoralism. The dominant feature of the capital city, a structure called the Akapana, consisted of an enormous stone platform measuring some 200 meters on a side and rising some 15 meters high. Evidently the ritual and ceremonial center of the city and empire, the flat summit of the Akapana held a sunken court with elaborate terraces and retaining walls in a style reminiscent of Chiripa and other Titicaca sites. A nearby structure, called Kalasasaya, prominently displayed the famous Gateway of the Sun, chiseled from a single block of stone and featuring the so-called Gateway God, which some scholars interpret as a solar deity. A host of other buildings, walls, compounds, enclosures, and platforms graced the sprawling urban center, which housed an estimated 20,000 to 30,000 inhabitants. 
Like other Andean cities, Tiwanaku had no markets; its goods and services were exchanged through complex webs of kinship networks and state-administered redistribution. Covering a much larger territory than Tiwanaku was the Huari Empire, with its capital city Huari on the summit of Cerro Baul some 25 kilometers north of the present-day city of Ayacucho in the Central Highlands. The Huari state emerged toward the beginning of the Middle Horizon (c. 600 c.e.). At its height, around 750 c.e., the empire spanned more than 900 miles along the highlands and adjacent coastal plains, touching the northernmost fringe of the Tiwanaku Empire to the south and extending to the Sechura Desert in the north. The capital city, densely packed with walls and enclosures, covered around four square kilometers and is estimated to have housed some 20,000 to 30,000 people. The Huari elite ruled their vast empire through a series of administrative colonies or nodes that exercised political domination in the zones under Huari control. The Huari Empire is perhaps best known for its extensive agricultural terracing and irrigation projects that spanned large parts of the highlands. Requiring enormous expenditures of labor, the Huari terraces, canals, and related reclamation projects transformed millions of hectares of steep arid hillsides into land suitable for cultivation. Scholars hypothesize that the extensive terracing and irrigation works undertaken by the Huari state help to explain the empire’s survival through the periodic El Niño–induced droughts and floods that comprise a persistent feature of the highland and coastal environments, and that proved catastrophic for the Moche polity during the same period. 
In order to acquire the vast amounts of labor necessary for the construction of such terraces, irrigation works, and other infrastructure, both the Huari and Tiwanaku Empires compelled subject communities to contribute substantial quantities of labor to the state—a kind of labor tax required of all subject peoples. Indeed, Andean polities were predicated on stark social inequalities and the division of society into two broad classes: elites and commoners. Public works such as terraces, canals, roads, and urban monumental architecture were built by commoners from ayllus and communities compelled to devote specified quantities of time annually to such endeavors. The state and its agents reciprocated by ensuring military security, food security, and other benefits, a reciprocity rooted, at bottom, in a fundamentally unequal relationship between the sociopolitically dominant and dominated. With the demise of both the Tiwanaku and the Huari Empires by the end of the Middle Horizon, the Andes entered a period of political decentralization and reassertion of local and regional autonomies. An important exception unfolded along the North Coast and its adjacent highland, where the powerful Chimor Empire emerged around 900 c.e. With its capital at Chan Chan near the mouth of the Moche River, at its height in the Late Intermediate Period the Chimor Empire spanned nearly 1,000 kilometers from the Gulf of Guayaquil in contemporary Ecuador to the Chillon River valley on the Central Coast. Like the Inca Empire that supplanted them in the mid-1400s, Chimor’s rulers deployed a combination of conquest and alliance-building to bring large areas of both coast and highland under their dominion. The capital city of Chan Chan was a huge urban complex, housing upwards of 35,000 people and covering at least 20 square kilometers, while its civic core encompassed at least six square kilometers and housed some 6,000 rulers and nobility. 
During the Late Horizon, the young and powerful Inca Empire swept down from its highland capital at Cuzco to bring Chimor, and the rest of highland and coastal Peru, under its dominion (see Volume III). Further reading: Silverman, H., ed. Andean Archaeology. Malden, MA: Blackwell, 2004; Moseley, Michael E. The Incas and Their Ancestors. Rev. ed. London: Thames & Hudson, 2001; Kolata, A. L., ed. Tiwanaku and Its Hinterland: Archaeology and Paleoecology of an Andean Civilization. Washington, D.C.: Smithsonian, 1996–2003; Stanish, C. Ancient Titicaca: The Evolution of Complex Society in Southern Peru and Northern Bolivia. Berkeley: University of California Press, 2003. M. J. Schroeder

Anglo-Norman culture

The Anglo-Norman culture resulted from the fusion of the culture that William the Conqueror brought over when he killed the last English king, Harold Godwineson, at the Battle of Hastings in October 1066, with the culture that already existed in England. In the 11th and 12th centuries the Normans not only conquered England but also established a kingdom in Sicily. English culture had developed relatively independently of continental Europe since the coming of the Angles and Saxons in the fifth century, who in turn had been influenced by the native British culture. British culture was a mixture of the Roman culture, which had come with the Roman conquest under Emperor Claudius (41–54), with that of the original Celtic inhabitants. The English culture at the time of the Norman Conquest of 1066 was dominated by the warrior ethos that the Angles and Saxons had brought with them mainly from what is now Germany. Classics of this period were the poem “The Battle of Maldon” and the better-known saga of Beowulf. Seamus Heaney describes this militaristic society when he writes of how “the ‘Finnsburg episode’ envelops us in a society that is at once honor-bound and blood-stained, presided over by the laws of the blood-feud . . . the import of the Finnsburg passage is central to the historical and imaginative world of the poem as a whole.” The Anglo-Saxon tongue began to lose out to Norman French, which also carried the influence of Scandinavia, where the Normans had originally come from before settling in France in the 10th century. It was the rising Anglo-Norman culture that created a hero out of King Arthur. Drawing on earlier writings, authors like Geoffrey of Monmouth wrote the History of the Kings of Britain between 1136 and 1138. Arthur was a native British chieftain who had fought the Angles and Saxons, thus giving them little cause to celebrate him. 
But in seeking to give legitimacy to the Norman kings, writers like Geoffrey sought to trace the monarchy back to its earliest days and thus found inspiration in the earlier accounts of Arthur. According to Helen Hill Miller in The Realms of Arthur, “the Anglo-Norman kings . . . needed an independent source for their British sovereignty: as dukes of Normandy they were subject to the heirs of Charlemagne,” the kings of France. Geoffrey used accounts written by the monks Nennius in the ninth century and Gildas, who may have lived in the time of the historical Arthur, in the sixth and seventh centuries. Miller continues, “by January 1139, a copy from his rather heavy Latin into Anglo-Norman verse was promptly undertaken at the request of the wife of an Anglo-Norman baron in Lincolnshire. By 1155, a further translation, likewise in verse, had been completed by Maistre de Wace of Caen, a Jerseyman who spent most of his life in France.” Geoffrey wrote during the reign of Henry I (1100–35), perhaps the first Norman king to see himself as English first and Norman second. Writing at the same time on Arthurian topics were Walter Map and Maistre [Master] Wace, who wrote the Roman de Brut and Roman de Rou. Other writers applied themselves to building up the Anglo-Norman civilization. William of Malmesbury wrote Acts of the English Kings and On the Antiquity of the Church of Glastonbury. William, like Geoffrey, consciously fused the Normans with the Celtic past, because Glastonbury was the holiest site in Celtic Britain. Tradition had it that Joseph of Arimathea, who had given his tomb for Christ to be buried in after the Crucifixion, founded a small church at Glastonbury. The pious at the time also believed that Joseph, who traditionally in England had been seen as a merchant for English tin, had even brought the young Jesus (Christ) of Nazareth to visit Glastonbury. 
The church served as another institution in building a rising new culture in England, as memories of the conquest of 1066 dimmed with the passage of time. Symbolic of this was the building of churches in the Romanesque style of architecture, which the Normans had mainly brought with them from France. The institution of the church was put to use by Henry I. The Cistercian order of monks arrived in England in 1128 and began the development of advanced agriculture and sheep raising. In order to cement the church as an instrument of royal development, the king named the great prelates who ruled the church, to assure their support for his reign. William had begun this policy after the conquest. Along with the great bishoprics like York and Canterbury, monastic orders also flourished under Anglo-Norman rule and would be a central part of both English culture and economy until the monastic system was destroyed during the reign of Henry VIII (1509–47). Using Normandy as a model, Henry I and the kings who followed him freely granted charters to towns, enabling the establishment of a town life that would be one of the hallmarks of England during the Middle Ages. London, where William built his White Tower, gained the commercial ascendancy in England that it still enjoys today. Towns, the estates of the great feudal lords, and the church establishments were the pillars that formed the foundation of the Anglo-Norman culture that arose after the conquest of 1066. Feudalism, the system of lords holding their lands at the will of the king, really came to England with William, who granted land holdings to those Breton, French, and Norman warriors who had come with him to fight the Saxon king Harold in October 1066. By the end of Henry I’s reign in 1135, only some 70 years after the conquest, the fusion between the old and the new was complete, and the Anglo-Norman culture flourished in England. 
See also Norman Conquest of England; Norman and Plantagenet Kingdom of England; Norman Kingdoms of Italy and Sicily. Further reading: Heaney, Seamus, trans. Beowulf. New York: Norton, 2000; Miller, Helen Hill. The Realms of Arthur. New York: Scribner, 1969. John F. Murphy, Jr.

Anglo-Saxon culture

The Anglo-Saxons were Germanic barbarians who invaded Britain and took over large parts of the island in the centuries following the withdrawal of the Roman Empire. They were initially less gentrified than other post-Roman barbarian groups such as the Franks or Ostrogoths because they had less contact with Mediterranean civilization. The Anglo-Saxons were originally pagan in religion. The main group, from northwestern Germany and Denmark, was divided into Angles, Saxons, and Jutes. German tribal affiliations were loose, and the original invaders included people from other Germanic groups as well. Although some of the early Anglo-Saxon invaders had Celtic-influenced names, such as Cerdic, the founder of the house of Wessex, the Anglo-Saxons had a pronounced awareness of themselves as different from the peoples already inhabiting Britain. Their takeover led to the integration of Britain into a Germanic world. Unlike other groups such as the Franks, they did not adopt the language of the conquered Celtic and Roman peoples but continued speaking a Germanic dialect. The early Anglo-Saxons highly valued courage and skill in battle, as reflected in the most significant surviving Anglo-Saxon poem, Beowulf. Their pagan religion was marked by a strong sense of fatalism and doom, but also by belief in the power of humans to manipulate supernatural forces through spells and charms. They shared a pantheon with other Germanic peoples, and many Anglo-Saxon royal houses boasted descent from Woden, chief of the gods. Their religion was not oriented to an afterlife, although they may have believed in one. The Anglo-Saxons strongly valued familial ties—the kinless man was an object of pity. If an Anglo-Saxon was killed, it was the duty of his or her family to attain vengeance or a monetary payment, weregild, from the killer. Anglo-Saxon kinship practices differed from those of the Christian British, adding to the difficulty of the assimilation of the two groups. 
For example, British Christians were horrified by the fact that the Anglo-Saxons allowed a man to marry his stepmother on his father’s death. Anglo-Saxons also had relatively easy divorce customs. The cultural differences between the Britons and the Anglo-Saxons were particularly strong in the field of religion, as British Christians despised Anglo-Saxon paganism. The Anglo-Saxons reciprocated this dislike and did not assimilate as did continental Germanic groups. The extent to which the Anglo-Saxons simply displaced the British, as opposed to the British assimilating to Anglo-Saxon culture, remains a topic of debate among historians and archeologists of post-Roman Britain. The conversion of the Anglo-Saxons to Christianity owed more to missionary efforts from Ireland and Rome than it did to the indigenous British Church. Paganism held out longest among the common people and in the extreme south, in Sussex and the Isle of Wight. Some Anglo-Saxons were not converted until the middle of the eighth century. Some peculiar relics of paganism held out for centuries. For example, Christian Anglo-Saxon kings continued to trace their descent from Woden long after conversion. The church waged a constant struggle against such surviving pagan Anglo-Saxon customs as men marrying their widowed stepmothers. Reconciling Irish and Roman influences was also a challenge, fought out largely on the question of the different Irish and Roman methods of calculating the date of Easter. Not until the Synod of Whitby in 664 did the Anglo-Saxon church firmly commit to the Roman obedience. Conversion led to the opening of Anglo-Saxon England, until then a rather isolated culture, to a variety of foreign influences, particularly emanating from France and the Mediterranean. 
The leader of the missionary effort sent by Rome to Kent to begin the conversion, Augustine, was an Italian, and the most important archbishop of Canterbury in the following decades, Theodore, was a Greek from Cilicia in Asia Minor. Pilgrimages were also important in exposing Anglo-Saxons to more developed cultures. The first recorded visit of an Anglo-Saxon to Rome occurred in 653 and was followed by thousands of others over the centuries. Since pilgrims needed to travel through France to get to Italy and other Mediterranean pilgrimage sites, pilgrimage also strengthened ties between Gaul and Britain. Anglo-Saxon churchmen found out about innovations or practices in other places, such as glass windows in churches, and came back to England eager to try them out. Despite these influences, Anglo-Saxon Christianity also drew from Germanic culture. Like other Germanic peoples the Anglo-Saxons tended to view the Bible and the life of Christ through the lens of the heroic epic. Christ was portrayed as an epic hero, as in one of the greatest Anglo-Saxon religious poems, The Dream of the Rood. The Dream of the Rood recounts the Crucifixion from the seldom-used point of view of the cross itself, and represents Christ as a young hero and the leader of a group of followers resembling a Germanic war band. Another remarkable example of the blending of Germanic and Christian traditions is the longest surviving Anglo-Saxon poem, the epic Beowulf. Telling of a pagan hero in a pagan society, the epic is written from an explicitly Christian point of view and incorporates influences from the ancient Roman epic, Virgil’s Aeneid. As the Anglo-Saxon Church moved away from dependence on outside forces, Irish or Roman, in the seventh and eighth centuries, the Christian Anglo-Saxon kingdoms produced their own saints, mostly from the upper classes. Anglo-Saxon saints such as Cuthbert (d. 687), a monk and hermit particularly popular in the north of England, attracted growing cults. 
The highest point of Anglo-Saxon Christian culture was the Northumbrian Renaissance, an astonishing flowering of culture and thought in a poor borderland society. Northumbria was a kingdom in the north of the area of Anglo-Saxon settlement, an economically backward and primitive society even compared to the rest of early medieval Europe. It was also a place where Continental and Irish learning met. The Northumbrian Renaissance was based in monasteries, and its most important representative was the monk Bede, a historian, chronographer, and hagiographer. Bede’s Ecclesiastical History of the English People is the most important source for early Anglo-Saxon history. Another Northumbrian was Caedmon, the first Anglo-Saxon Christian religious poet whose works survive. Northumbria also displayed a rich body of Christian art, incorporating Anglo-Saxon and Celtic artistic influences, and some from foreign countries as far away as the Byzantine Empire. An enormous amount of monastic labor went into the production of manuscripts. Despite the importance of the Northumbrian Renaissance, Northumbria was not the only place where Christian culture reached a high point. Another area was the West Country, where the Anglo-Saxon kingdom of Wessex encroached on the British territories of Devon and Cornwall. Curiously, Kent, still headquarters of the archbishop of Canterbury, who claimed primacy over all the “English,” became a cultural backwater after the death of Archbishop Theodore in 690. The influence of Anglo-Saxon Christianity and the Northumbrian Renaissance spread to continental Europe. Anglo-Saxons, in alliance with the papacy, were concerned to spread the Christian message to culturally related peoples in Germany. The principal embodiment of this effort was the missionary Wynfrith, also known as St. Boniface (680–754), who was born in Wessex. His religious efforts began with assisting a Northumbrian missionary in an unsuccessful mission to the Frisians. 
He then went to Rome to receive authority from the pope. Boniface made many missionary journeys into Germany, where he became known for converting large numbers of Germans, and for a physical, confrontational missionary style that included chopping down the sacred trees that were a feature of Germanic paganism. Many English people followed Boniface to Germany, where they exerted a strong influence on the development of German Christianity. Boniface was also responsible for a reorganization of the Frankish Church to bring it more firmly under papal control. On another journey to Frisia, angry pagans killed him. Anglo-Saxons, along with other people from the British Isles, were also prominent in the circle of learned men at the court of Charlemagne. The leading scholar at Charlemagne’s court, Alcuin of York, was a Northumbrian. This high point of Anglo-Saxon Christian culture was terminated by the series of Viking raids and invasions beginning in the late eighth century. Unlike Christian Anglo-Saxon warriors, who usually respected monasteries, the pagan Vikings saw them as rich repositories of treasure, and monastic life virtually disappeared from the areas under Scandinavian control. By the ninth century the leader of the English resurgence, King Alfred the Great of Wessex, lamented the passing of the golden age of English Christianity, claiming that there was hardly anyone in England who could understand the Latin of the mass book. Alfred, an unusually learned king who had visited the European continent, made various attempts to restore English monasticism and learned culture. He gathered in his court scholars from throughout the British Isles and the continent, as well as writing his own translations, such as that of Boethius’s Consolation of Philosophy. Alfred also sponsored the translation of Bede’s Ecclesiastical History and other works from Latin into Anglo-Saxon. 
The period also saw the beginnings of the Anglo-Saxon Chronicle, a record of current events kept in Anglo-Saxon, eventually at monasteries. Like the political unification of England by Alfred’s descendants, the creation of this body of Anglo-Saxon literature contributed to the creation of a common Anglo-Saxon or English identity. There was very little parallel for this elsewhere in Christian Europe at the time, when learned writing was almost entirely restricted to Latin. Alfred’s patronage of men of letters was also important for the creation of his personal legend. The unification of England did not end the Scandinavian impact on English culture, which revived with the conquest of England by the Danish king Canute in the 11th century. Canute, a Christian, respected the church and English institutions, and his reign was not destructive as the early Viking conquests had been. Scandinavian influence was particularly marked on the English language. Since it was already similar to the Scandinavian tongues, Anglo-Saxon or Old English adopted loanwords much more easily than did Celtic languages such as Irish. Since it was necessary to use English as a means of communication between people speaking different Germanic tongues, many complex features of the language were lost or simplified. English would make less use of gender and case endings than other Germanic or European languages. Although Alfred had hoped to revive English monasticism, the true re-creation of monastic communities would only occur in the 940s, with royal patronage and under the leadership of Dunstan, a man of royal descent who became archbishop of Canterbury and a saint. The English monastic revival was associated with the revival of Benedictine monasticism on the Continent, and the new monasteries followed the Rule of St. Benedict. Monasteries dominated the church in the united Anglo-Saxon kingdom, with most bishops coming from monastic backgrounds and often serving as royal advisors. 
The church generally prospered under the English kings—large cathedrals were built or rebuilt after the damage of the Scandinavian invasions. The copying and illumination of manuscripts was also revived, and reached a high degree of artistic excellence in Winchester. Continental influences preceded the Norman Conquest of England in 1066. The penultimate Anglo-Saxon king, Edward the Confessor, who had spent many years in France, built Westminster Abbey in a Norman Romanesque style. Although Anglo-Saxon culture was displaced from its position of supremacy after the Norman Conquest of 1066, it did not disappear. At least one version of the Anglo-Saxon Chronicle continued to be compiled for nearly a century, and Anglo-Saxon poetry continued to be composed. See also Anglo-Saxon kingdoms; Frankish tribe; Irish monastic scholarship. Further reading: Blair, Peter Hunter. Northumbria in the Days of Bede. London: Gollancz, 1976; Crossley-Holland, Kevin, ed. The Anglo-Saxon World: An Anthology. Oxford: Oxford University Press, 1984; Godden, Malcolm, and Michael Lapidge, eds. The Cambridge Companion to Old English Literature. Cambridge: Cambridge University Press, 1991; Smyth, Alfred P. King Alfred the Great. New York: Oxford University Press, 1995; Stenton, F. M. Anglo-Saxon England. Oxford: Oxford University Press, 2001; Whitelock, Dorothy. The Beginnings of English Society. Harmondsworth: Penguin Books, 1952; Wilson, David Raoul. Anglo-Saxon Paganism. London: Routledge, 1992. William E. Burns

Anglo-Saxon kingdoms

Following the decline of Roman power in Britain, political power rapidly decentralized, and several small kingdoms emerged to fill the political vacuum. These kingdoms, called the Anglo-Saxon kingdoms, competed among themselves and with Danish invaders for power from the late sixth through the ninth centuries. Eventually they melded into one large kingdom that governed most of England until the Norman Conquest of England in 1066. Throughout the fourth and fifth centuries a number of Germanic peoples invaded England. Some came with military objectives in mind, but many others came as settlers, seeking peaceful colonization. These people came from several tribes, but the most famous were the Angles and Saxons, many of whom came as raiders and mercenaries seeking employment in Roman Britain’s undermanned military outposts. Beyond this, details of the invasion are unclear. The invaders stamped out all vestiges of Roman culture, but the complex transition to Anglo-Saxon England occurred gradually. How many small kingdoms existed during the sixth and seventh centuries is unknown, but as larger kingdoms eliminated rivals, the number shrank. This consolidation of power has led historians to identify the movement toward a territorial state as one of the main themes in Anglo-Saxon history. As post-Roman chaos subsided, Anglo-Saxon England painstakingly settled into seven or eight major kingdoms and several smaller ones. The kingdoms centered on the Thames, the Wash, and the Humber, the main entry points for the migrations. Factors influencing the shapes and formation of the kingdoms also included geography, defensibility, and the degree of resistance the inhabitants offered the invaders. Four kingdoms developed around the Thames estuary. In the southeast, Kent arose with unique artistic, legal, and agrarian traditions, influenced by Jutish and, possibly, Frankish culture. 
West and northwest of Kent three kingdoms associated with the Saxon invasions developed: Essex (East Saxons), Wessex (West Saxons), and Sussex (South Saxons). Settlers who entered via the Wash founded East Anglia, forming groups called the North Folk and South Folk, whose territories became Norfolk and Suffolk. Those who entered by the Humber formed Mercia, which dominated the Midlands, and Northumbria, north of the Humber River, which grew from the unification of the smaller kingdoms of Deira and Bernicia. These kingdoms are traditionally called the Heptarchy, a misleading term that implies seven essentially equal states. In fact, at times many more than seven kingdoms existed, and the seven main kingdoms were rarely political equals. Developments in the institution of kingship were vital in the political growth of the early kingdoms. Germanic peoples had a tradition of kingship, and as Roman institutions declined, they looked to their own heritage to replace Roman customs. The practical appeal of kingship is clear. It offered strong personal leadership and the kind of governing that led to success during the Anglo-Saxon invasion and settlement. The post-Roman political situation demanded similar leadership. Christian tradition held up biblical kings as examples of good leadership, and as Anglo-Saxons converted, this bolstered Germanic notions about the institution. By the mid-seventh century, royal houses had emerged, and a claim of royal lineage became necessary for a king to rule unchallenged. The bloodline was important, but other Western notions about kingship had not yet taken hold. The successor had to be both from the right line and the fittest to rule. How closely related to the previous king he had to be was debatable, and the right of the eldest son to succeed, the right to pass the succession through the female line, the rights of a minor or female to inherit, and the right of a king to choose his successor were not guaranteed. 
In case of a disputed succession, kingdoms were divided or shared, which was risky but preferable to feud or civil war. Toward the end of the seventh century a group of leaders emerged known as Bretwaldas. The first Bretwaldas were kings whose actions gained them fame and reputation and who had the political and military power to reach beyond their borders and collect tribute from neighbors. According to Bede, by around 600 one king customarily received this title from his royal colleagues, giving him preeminence within the group. The position shifted from one dynasty to the next, with changing political and military successes. At first the title was largely honorary, and it is unclear whether other kings listened to the Bretwalda’s demands, but as time passed the authority of the Bretwalda grew. In the late sixth and early seventh centuries the eastern kingdoms had the political edge, but strong rivalries existed and power shifted frequently. England’s population and prosperity grew in the seventh century, and much of England converted to Christianity. A common language, common social institutions, and, eventually, a common religion counterbalanced the political and military turbulence but did not stop it. As the seventh century progressed more powerful kingdoms absorbed smaller kingdoms, and by the end of the century Northumbria, Mercia, and Wessex dominated the island. Northumbria dominated affairs in the seventh century; Mercia led the way in the eighth century; and Wessex emerged to dominate the events of the ninth century. After King Oswy’s (642–670) defeat of Mercia in 654, Northumbria exercised lordship over the other kingdoms. Although unable to control Mercia after about 658, Northumbria nevertheless remained preeminent through its great moral authority. For example, Northumbrian support ensured the Synod of Whitby’s (664) success in promoting Roman Christian traditions over the Celtic Church throughout England. 
However, internal dissent and external defeats steadily drained Northumbrian political and military power. Unrest, violence, and political coups throughout the eighth century doomed Northumbrian culture, culminating with the Viking sack of Lindisfarne in 793. Mercia began its rise under King Penda (628–654), and its political domination culminated under kings Ethelbald (716–757) and Offa (757–796). Many factors contributed to Mercia’s success. It held prosperous agricultural territory in the Trent Valley. The people in the east Midlands, the Middle Angles of the Fens, Lindsey, and around the Wash accepted the dynasty, as did settlers in the Severn Valley and along the borders of modern Chester, Shropshire, Herefordshire, and Gloucestershire. London and East Anglia fell to Mercia as well. After the death of King Ine in 725, no effective resistance to Mercia remained in Wessex. Eventually it subdued Kent and threatened Canterbury. This success led Ethelbald to claim he was king of all Britain. The actions of Mercian rulers bolstered the concept of kingship. Offa summoned papal legates, held church councils, created a new archbishopric in Lichfield, and asked the church to anoint his son Ecgfrith, all of which shows a practical desire to cooperate with and benefit from relations with the church, but it also did much to strengthen his position and the theory of monarchy throughout the land. Mercia retained power in the Midlands throughout the reigns of Cenwulf (796–821) and Ceowulf (821–823), but its popularity faded in the south and southeast following the harsh tactics used in building a defensive system within England. Kent and East Anglia resented Mercian overlordship and led the way in uprisings in the early ninth century. The primary beneficiary of these uprisings was the kingdom of Wessex. 
The rise of Wessex began with King Egbert (802–839), who defeated the Mercians in 825, winning control of Kent, Sussex, and Essex, and continued with the arrival of the Vikings. The Vikings had been making raids on England since the 780s, but in the mid-ninth century their attacks changed from raids to campaigns of conquest. In the 850s they stayed in England between campaign seasons, and by 865 thousands of Danes undertook a conquest that ended with their control over nearly all of England except Wessex.

ALFRED THE GREAT

Alfred the Great (r. 871–899) came to power just after the Danish onslaught started. He was a talented king: warrior, able administrator, patron of the arts, and a good political leader. But it was a desperate moment in Anglo-Saxon history. Danes controlled the most fertile parts of north and east England. The south held out, but it seemed only a matter of time until it too fell. To buy time while he mustered his army, Alfred made a truce with the Danes in 872. He then reformed his army, fortified towns, and built a navy to meet the Viking threat. To control his kingdom Alfred depended upon his royal court, made up of bishops, earls, king’s reeves, and some important thanes. Councils, called Witenagemots, or Witans, discussed issues such as raising military forces, building fortresses, and finances. The king made the decisions, but he relied on the Witan for advice, support, and help making decisions known. Ealdormen, noblemen of great status who managed the shires or districts of Wessex for the king, played especially important roles. In 876 the Danish leader Guthrum renewed the attack on Wessex, and by winter 878 Alfred had retreated to the Isle of Athelney. In the spring he took the fight back to the Danes, defeating Guthrum and forcing him to promise to cease his attacks on Wessex and convert to Christianity. Following this, Alfred repeatedly beat back Danish attacks and gradually regained lost territory. 
Around 886, Alfred and Guthrum created a boundary running northwest from London to Chester, along an old Roman road known as Watling Street, that became the Anglo-Saxon–Danish border. The cultural influences of the Danish side, the Danelaw, affected England for centuries. The boundary also freed a large section of Mercia from Danish control, and Alfred installed a new ealdorman to control the area and married his daughter to him, uniting the kingdoms and setting the groundwork for a united England. Clashes with Danes continued, but the most severe crises had passed by Alfred’s death in 899. Under his heirs, resistance to Vikings and pagan forces came to be associated with the royal house of Wessex. From 899 to 1016 Alfred’s descendants held the throne. They continued developing royal institutions and expanded their power base. In the late 10th century new Viking attacks coupled with internal divisions among noblemen led to the overthrow of Ethelred “the Unready” (978–1016). The Witan installed a Dane as king of England. Canute (1016–35) successfully managed Denmark, Norway, and Anglo-Saxon England and became a powerful political figure in Europe. While Canute ruled with a Scandinavian touch, creating nobles called “earls,” most Anglo-Saxon governing institutions functioned unchanged. He brought together Anglo-Saxon and Danish nobles and won the loyalty of the Witan. When Canute died his sons ruled briefly, but each died without an heir, and the Witan selected Edward, the son of Ethelred, to be king. Edward the Confessor (1042–66) was more Norman than Anglo-Saxon, having lived in Normandy from age 12 to 36. He installed Norman nobles as advisers, a move deeply resented by the Anglo-Saxon nobles. The earls owed everything to Canute and were very loyal to him, but they owed Edward nothing. Resenting Norman influences at court, they soon began acting with autonomy. 
Edward’s father-in-law, Godwin, earl of Wessex, and his son, Harold, led the opposition. Their lands made them more powerful than the king, and as their power grew, Edward became a figurehead. When Edward died childless in 1066, the Witan chose Harold as king, but he faced challenges from Norway and Normandy. William of Normandy proved too much for Harold at the Battle of Hastings, bringing an end to the Anglo-Saxon kingdom. On the eve of the Norman Conquest, Anglo-Saxon England was prosperous and well governed by 11th-century standards. It had a thriving church, an effective military, and a healthy, growing economy. The English-Danish division caused diversity in legal and social traditions, but the kingdom possessed great unity for its time. Continental kingdoms rarely knew unity and experienced almost constant internal warfare. By comparison, Anglo-Saxon England had evolved quickly from the days of the Heptarchy, through the rise of Wessex and the unifying onslaught of the Danes, to become the stable kingdom of 1066 that was so attractive to those who claimed it upon Edward the Confessor’s death. See also Anglo-Saxon culture; Norman and Plantagenet kings of England. Further reading: Campbell, James. The Anglo-Saxon State. New York: Hambledon and London, 2000; Fisher, D. J. V. The Anglo-Saxon Age, c. 400–1042. London: Longman, 1973; Hollister, C. Warren. The Making of England, 55 b.c. to 1399. Lexington, MA: D.C. Heath and Company, 1983; Kirby, D. P. The Earliest English Kings. New York: Routledge, 2000; Loyn, H. R. The Governance of Anglo-Saxon England, 500–1087. London: Edward Arnold, 1984; Sawyer, P. H. From Roman Britain to Norman England. New York: Routledge, 1998; Stenton, F. M. Anglo-Saxon England. London: Oxford University Press, 1947. Kevin D. Hill

An Lushan (An Lu-Shan) Rebellion

The An Lushan Rebellion (755–763 c.e.) occurred at the midpoint of the Tang (T'ang) dynasty, 618–907, and marked a significant turning point in the fortunes of the regime. The rebellion marked the Tang's irreversible decline after one and a half centuries of good governance, economic prosperity, and military success. An Lushan's (703–757) beginnings were humble. He was half Sogdian and half Turk, of the Khitan tribe, and was born beyond the Great Wall of China in present-day Manchuria. At an early age he was sold to a Chinese officer of the northern garrison and rose to the rank of general and commander of a region on the northeastern frontier of the Tang empire. By the mid-eighth century c.e. most of the frontier garrisons were under foreign (non–Han Chinese) generals. An was introduced to the court of the aging Emperor Xuanzong (Hsuan-tsung, also known as Minghuang, or "Brilliant Emperor") and rapidly ingratiated himself with the emperor's young favorite concubine, the Lady Yang (known by her title Yang Guifei, or Kuei-fei; Guifei means "Exalted Consort"), who adopted him as her son. Gross and fat, An became a frequent presence at court events, entertaining the emperor and his harem with his clowning and uncouth behavior. He was rewarded with the title of prince and given command of the empire's best troops. Because General An was protected by Lady Yang and her brother, who was chief minister of the empire, reports of his treacherous intentions were not only unheeded by the emperor but the men who reported them were punished. In 755 c.e. General An rose in rebellion. At the head of 150,000 troops, among them tribal units (he commanded a total of 200,000 troops), he marched from his base near modern Beijing toward the heartland of the empire. His success was immediate. The eastern capital, Luoyang (Loyang), fell.
With the main capital, Chang'an (Ch'ang-an), poorly defended by unseasoned troops, Xuanzong and his court beat a hasty retreat, heading for refuge in the southwestern province of Sichuan (Szechwan). En route the dispirited troops escorting the emperor mutinied. They blamed Lady Yang and her brother the chief minister for their plight, killed him, and forced the emperor to hand Yang over to them; she was strangled. These humiliations led to the abdication of Xuanzong and the ascension of his son and crown prince as Emperor Suzong (Su-tsung). The doting, aged emperor's love for his favorite, his neglect of his duties and indulgence in a sybaritic life with Lady Yang, the disastrous consequences of their love, their flight, and her tragic death have inspired many poems by famous Tang poets and were the subject of many paintings. An Lushan's troops entered Chang'an unopposed, and he proclaimed himself emperor, but his rebellion made little progress after that. He soon became blind and was murdered by his son in 757 c.e. The son, too, was murdered by one of his generals, and soon the rebellion degenerated into chaotic civil war as some of An's early supporters defected and other rebel bands rose as opportunities offered. The new emperor rallied loyal troops, who outnumbered the rebels but were scattered in different garrisons. He also obtained help from former vassals and allies, most notably from a Turkic tribe called the Uighurs and others in Central Asia, and even some Arab troops sent by the Islamic caliph. Some of the help came at a high price: for example, the Uighur khan who twice helped to recapture Luoyang was repaid with permission for his men to rampage through and loot the city, including the palaces and Buddhist temples, at a cost of thousands of lives. Peace was finally restored in 763; however, the empire would never recover its previous prestige and prosperity. The following are some important results of the rebellion:

1. Growing importance of the army and military leaders. The army expanded to over 750,000 men. The military would remain a significant force, and regional commanders would become powerful and able to resist central control.
2. Restructuring of provincial administrations, which became semiautonomous through the remainder of the dynasty. This is especially significant in the decreasing amounts of revenue that local authorities would turn over to the central government, further curtailing its authority.
3. Ending of the land registration and distribution system in effect since the beginning of the dynasty, which had ensured economic equity for the cultivators, maintained local infrastructure projects, and provided men for military service.
4. Accelerating of the large-scale shift of population from war-ravaged areas in the Yellow River valley in northern China to southern provinces in the Huai and Yangzi (Yangtze) valleys, whose productivity became crucial to the economy of the empire.
5. Grievous loss of territory in the border regions because troops were withdrawn to defend the core of the empire. Central Asia was lost to Chinese control, and Gansu (Kansu) and Ningxia (Ninghsia) Provinces, both crucial links to the western regions, were lost to the rising Tibetan state.

Nothing about the An Lushan Rebellion was inevitable. However, it caused enormous disruption to the Tang Empire and acted as a powerful catalyst for the changes that characterized the Chinese world. Although the dynasty survived until 907 c.e., it never regained the prestige and power it had enjoyed before the rebellion. See also Uighur Empire. Further reading: Pulleyblank, Edwin G. The Background of the Rebellion of An Lu-shan. London: Oxford University Press, 1955; Wright, Arthur F., and Denis Twitchett, eds. Perspectives on the T'ang. New Haven, CT: Yale University Press, 1975. Jiu-Hwa Lo Upshur

Anselm

(c. 1033–1109) philosopher and theologian
Anselm was a philosophical theologian and archbishop of Canterbury who is often dubbed the Father of Scholasticism. Scholasticism is the system of education that characterized schools and universities during the High Middle Ages (12th–14th century) and that aimed principally at reconciling and ordering the numerous and divergent components of an ever-growing body of knowledge with dialectic (logic or reason). Anselm is best known for making several major contributions to early Scholastic theology, namely, his distinctive method of "faith seeking understanding," his ontological argument for the existence of God, and his classic formulation of atonement theory. Anselm was born into a wealthy family in Aosta in northern Italy. After his mother's death in 1056 he left home, crossed the Alps into France, and in 1059 entered the Benedictine abbey school at Bec in Normandy, where Lanfranc taught him. In 1063 Anselm succeeded Lanfranc as prior and was consecrated abbot in 1078. Toward the end of his priorate Anselm produced two significant works: the Monologion (Monologue on Reasons for the Faith; 1076) and the Proslogion (Address [to God], first titled Faith Seeking Understanding; 1077–78). Although both works are intensely contemplative, Anselm proposes philosophical or rational proofs for God's existence. In both works he begins with the first article of the Christian faith—namely, that God exists—and then seeks to understand it by reason (without further recourse to scriptural or traditional authorities). The basic argument of the Proslogion, later called the "ontological argument," runs thus: God is that being than which nothing greater can be thought. Yet "that than which nothing greater can be thought" cannot exist only in human thought or understanding. Rather, by definition, "that than which nothing greater can be thought" must also exist in reality. Hence God necessarily exists in reality.
In the centuries after his death, Anselm's method of "faith seeking understanding" (fides quaerens intellectum) became the basic model of inquiry into the divine and remains the classic definition of theology. During his abbacy at Bec (1078–93), Anselm produced the treatises On Grammar, On Truth, On Free Will, and On the Fall of the Devil. As archbishop of Canterbury (1093–1109), Anselm composed several apologetic works, including his greatest theological treatise, Why God Became Man (or Why the God-Man; 1097–98), On the Virginal Conception and Original Sin (1099–1100), and On the Sacraments of the Church (1106–07). In Why God Became Man, Anselm presents "necessary reasons" for the Incarnation. He argues that God had to become human in order for humankind to be saved because the first sin offended God's honor infinitely, yet the guilty party (humanity) is finite. Even if they gave their entire lives to God, humans could not thereby pay the penalty for sin because even prior to sin they owed everything to their Creator. Although humans are obliged to make satisfaction, then, only God (who is not a creature and therefore owes nothing) is actually able to do so—hence the God-man. Anselm's treatise, which rejected the widely held ransom theory, made the most significant contribution to atonement theology in the Middle Ages. During Lent in 1109 Anselm became seriously ill and died on Wednesday of Holy Week, April 21, 1109. His cult became firmly established in the Late Middle Ages, and his feast day continues to be celebrated on April 21. In 1720 Pope Clement XI declared Anselm a Doctor of the Church. Further reading: Southern, R. W. Saint Anselm: A Portrait in a Landscape. Cambridge: Cambridge University Press, 1990; Davies, Brian, and Brian Leftow, eds. The Cambridge Companion to Anselm. Cambridge: Cambridge University Press, 2004. Franklin T. Harkins

anti-Jewish pogroms

Jewry suffered a reversal of fate during the High Middle Ages that can only be compared to the destruction of Jerusalem 1,000 years before and the oppression by the Nazis 1,000 years after. The turning point in the Middle Ages can be located in the pogroms carried out in May 1096 by gangs and mobs en route to the First Crusade. These events signaled that the stability that Jews had enjoyed under Western Christendom during the first millennium was about to end. There were telltale signs in the century before the First Crusade that things were about to change. Jews were accused of colluding with the Muslims in the destruction of the Church of the Holy Sepulcher in Jerusalem, undertaken in fact by the mad Caliph Hakim in 1009. For another thing, a pre-crusade campaign to cast out the Saracens from Spain in 1063 revealed that Jews did not take up the fight alongside the Christian soldiers. In fact, Jews had prospered and integrated well under the Umayyads of Spain. When Pope Urban II issued the summons to fight for the Holy Land, the first to respond in France and Germany were paupers and peasants who had been stirred up by monks and preachers. The church hierarchy did not effectively counter a populist piety that held that the killing of Jews expiated sins and atoned for the crucifixion of Christ. Mobs also felt that Jews were legitimate targets because they lived within Christendom and constituted an immediate threat, whereas the Muslims were far away. The first pogroms broke out in Rouen in French Lorraine. Jews were forced into baptism or slaughtered. Though warnings were sent out from France to beware the onslaught of the mobs, the German Jews dismissed them and trusted in their fellow countrymen. When Peter the Hermit and Walter the Penniless led their forces there, their brutal intentions were quickly made known.
Though many bishops and priests tried to protect them, it is estimated that up to 10,000 Jews who lived in settlements around the Rhine and Danube Rivers perished. Cities affected included Treves, Meuss, Ratisbon, and Prague. The more disciplined crusader armies took anti-Semitism with them into the Holy Land and, when they finally arrived, burned Jews in their synagogues. Later crusades did not witness the same degree of bloodshed against Jews in Europe. Nonetheless, the earlier massacres unleashed bitterness and tension between the two religious groups, especially evident among the intellectuals and hierarchy, for the next few centuries. When the Second Crusade was proclaimed, Pope Eugenius III (1145–53) suggested that Jewish moneylenders cancel the debts of Christian crusaders. The influential abbot Peter of Cluny wrote to Louis VII of France proposing that European Jews finance the war effort. A French monk named Radulph traveled around Germany—without his monastery's approval—preaching that the Jews were the enemies of God. At the risk of his life, the saintly and respected Bernard of Clairvaux confronted and condemned Radulph but still urged that Jews not collect interest on crusaders' debts. Since Jews could not count on the protection of the church, they were forced to accept a special legal status in the eyes of the civil government. This new identity meant that Jews were now quarantined in ghettos, bound to wear badges or unique clothing, and even kept from reading the Talmud. By the end of the Middle Ages, western European Jewry was in ruins, and Jews fled eastward to Poland and Russia. See also Crusades; Umayyad dynasty. Further reading: Flannery, Edward H. The Anguish of the Jews. Mahwah, NJ: Paulist Press, 1985; Lewis, Bernard. Cultures in Conflict: Christians, Muslims, and Jews in the Age of Discovery. New York: Oxford University Press, 1995; Yuval, Israel Jacob. Two Nations in Your Womb: Perceptions of Jews and Christians in Late Antiquity and the Middle Ages.
Translated by Jonathan Chipman and Barbara Harshav. Berkeley: University of California Press, 2006. Mark F. Whitters

Aquinas, Thomas

(1225–1274) philosopher and theologian
St. Thomas Aquinas was born at Roccasecca, Italy, to Count Landulf and Countess Theodora. From early on, Thomas was diligent in his studies and had a meditative mindset. He received his education from the monastery of Monte Cassino and the University of Naples. Thomas entered the Dominican Order and then studied in Paris from 1245 under the well-known philosopher Albertus Magnus (1195–1280). He spent 10 years visiting Italy, France, and Germany. In 1248 he lectured on the Bible at a college in Cologne, Germany. He was in Paris from 1252 c.e., eventually becoming a professor of theology and writing books. He was awarded the degree of doctor in theology in 1257. Between 1259 and 1268 he lectured as professor in the Dominican convents of Rome and Naples. Thomas also worked at the papal court as an adviser. He was a well-known figure by the time he came to Paris in 1269. His intellectual inquiries about the relationship between philosophy and theology made Thomas a controversial figure. His Scholasticism made him an avid reader of works by Christian theologians and Greek thinkers and of Jewish and Islamic philosophy. Thomas wrote his first book as a commentary on the Sentences, a seminal book on theology by Peter Lombard (1095–1161). Aristotle (384–322 b.c.e.) influenced him greatly, and his commentary on the Sentences contains about 2,000 references to Aristotle. Critics also associate Thomas with the doctrine of Averroës (1126–98), which distinguished between the knowledge of philosophy and that of religion. The Dominicans sent Thomas to Naples in 1272 to organize a studium generale (a house of studies). The pope had asked him to attend the Council of Lyon on May 1, 1274, and to bring his book Contra errores Graecorum (Against the errors of the Greeks). In spite of his deteriorating health, he started the journey in January. He died on the way on March 7, 1274, at the Cistercian abbey of Fossanova.
In Christian theology the 13th century was an important time, as two schools of thought were locked in controversy. The Averroists separated philosophical truths from faith; they did not believe in divine revelations and believed that reason was paramount. The Augustinians gave faith the predominant position. For Thomas both reason and faith were important. Each complemented the other, and their relationship was not one of conflict. He believed that the truths of philosophy and religion were gifts from God. The moderate realism of Thomas postulated that both the medium of thought and that of the senses led to knowledge of the intelligible world, or the universal. Thomas was a sharp thinker, combining philosophical truths with theological postulations. His natural law accommodated the divine law. He synthesized Christian theology with the philosophy of Aristotle, the Stoics, and Ibn Rushd. Thomas was a prolific writer, penning 60 works. His manuscripts were preserved in the libraries of Europe, and multiple copies came out after the invention of printing. The first published work of Thomas was the Secunda Secundae (1467). The Summa Theologica, one of his best-known works, was also printed. It brought out great debate between the rational inquiry of Thomas and the Catholic doctrines. He defended the Christian faith in Summa de veritate catholicae fidei contra gentiles (Treatise on the truth of the Catholic faith against unbelievers). In the Quaestiones disputatae (Disputed questions), he gave his opinion on various topics. The theory that there was only one soul for all persons was refuted brilliantly in De unitate intellectus contra Averroistas. He argued in the Opusculum contra errores Graecorum that the Holy Ghost proceeded from the Father and the Son. His deep knowledge of the fathers of the church is displayed in the Catena Aurea. Pope John XXII canonized Thomas Aquinas on July 18, 1323.
In 1567 he was made a Doctor of the Church. The Summa Theologica became the standard textbook in theology in the syllabus of universities all over Europe. There was renewed interest in his writings after the papal encyclical of 1879. Leo XIII, in his Providentissimus Deus (November 1893), took the principles behind his criticism of the sacred books from Thomas. St. Thomas Aquinas was the "Christian Aristotle" who wielded immense influence on future popes, universities, and academia. He combined the best of faith and reason in a careful synthesis. Further reading: Aertsen, Jan. Nature and Creature: Thomas Aquinas' Way of Thought. Leiden: E. J. Brill, 1988; Bourke, Vernon J. Aquinas' Search for Wisdom. Milwaukee, WI: Bruce, 1965; Dyson, R. W. Aquinas: Political Writings. Cambridge: Cambridge University Press, 2002; McInerny, Ralph. Aquinas. Cambridge: Polity Press, 2004; ———. Thomas Aquinas' Selected Writings. London: Penguin Classics, 1998; Stump, Eleonore. Aquinas. London: Routledge, 2003; Wippel, John. The Metaphysical Thought of Thomas Aquinas: From Finite Being to Uncreated Being. Washington, D.C.: Catholic University of America Press, 2000. Patit Paban Mishra

Aquitaine, Eleanor of

(1122–1204) duchess of Aquitaine, queen of France and England
Eleanor of Aquitaine was born in 1122 to William X, duke of Aquitaine and count of Poitou, and Aenor, daughter of the viscountess of Châtellerault. At the death of her younger brother, Eleanor became the wealthy heiress of Aquitaine. Groomed by her father, she frequently accompanied him on trips throughout his lands as he administered justice and faced down rebellious vassals. On his deathbed in April 1137, William entrusted her to his feudal lord, the Capetian monarch Louis VI, to arrange her marriage, which he did to his 17-year-old son and heir, Prince Louis. When Louis VI died in August 1137 the young prince became King Louis VII of France and Eleanor his queen. The two were ill-matched. Louis, as a second son, had originally been groomed for a career in the church. Eleanor had been raised in one of the most sophisticated households in all of Europe. Her grandfather William IX is credited with creating the literary genre of courtly love and had welcomed minstrels, poets, and troubadours to his court. Eleanor was frequently able to convince Louis to intervene in affairs that concerned her own interests, to the detriment of the crown. All of this might not have mattered had Eleanor been able to provide Louis with a male heir who would have inherited the lands of both his parents. Unfortunately Eleanor bore Louis only two daughters, Marie and Alix. The breaking point in their marriage occurred during the Second Crusade, which both Eleanor and Louis agreed to undertake in 1146 in response to the preaching of St. Bernard of Clairvaux. Their goal was to rescue the crusader state of Edessa, which had fallen to the Muslims.
Presumably Eleanor's offer of a thousand knights from Aquitaine and Poitou had helped to assuage misgivings about allowing her and numerous other noblewomen to accompany Louis and his warriors on their journey. In March 1148 the French army arrived at Antioch, just to the southwest of the county of Edessa. Here Louis and Eleanor were greeted by the queen's uncle, Raymond of Poitiers, ruler of the principality. Rumors began to circulate about an affair between Eleanor and her uncle. When Louis rejected Raymond's strategically sound plan of taking back Edessa in favor of marching on Jerusalem, Eleanor exploded against the king and demanded that their marriage be annulled. Although Louis wrenched her away from Antioch and forced her to march southward on Jerusalem, their marriage was over. The two boarded separate ships and sailed for home in 1149–50. In 1152 their marriage was annulled on grounds of consanguinity, and Eleanor regained control of her lands. Later in 1152 she married the 18-year-old count of Anjou, Henry Plantagenet, whose extensive landholdings in France also included the duchy of Normandy and the counties of Maine and Touraine. Their marriage created a formidable counterweight to the authority and power of Louis VII of France. Moreover, in 1154 Henry made good his claim to the English throne through his mother, Matilda. In December 1154 Eleanor was crowned queen of England, consort to Henry II (1154–89) of the house of Plantagenet. Over the next 13 years Eleanor bore Henry five sons and three daughters, two of whom, Richard (1189–99) and John (1199–1216), would rule England. Initially Eleanor played a substantial role in administering their combined lands in France while Henry secured England, but as his power and authority grew, he had less use for his independent-minded queen.
Disenchanted with Henry and perturbed by his numerous affairs, Eleanor left England with her two sons Richard and Geoffrey for Poitiers in 1168. Here over the next several years she established a flourishing court that became a cultural center for troubadours and poets singing of courtly love. Meanwhile Richard and Geoffrey increasingly chafed at their father's unwillingness to give them real authority in ruling the lands that they nominally held. They joined their older brother Henry in revolting against Henry II in 1173, with Eleanor's backing. Henry crushed this revolt, and for her part in it, he placed Eleanor under close house arrest in England for the next 16 years. When Henry died in 1189 Eleanor resumed her active role in political and familial affairs. In 1189 her favorite son, Richard, became king of England, and when he departed on the Third Crusade in December of that year, he left Eleanor as regent in England. On his return from the crusade in 1192 Richard fell into the hands of his enemy the German emperor Henry VI (1190–97), and Eleanor took charge of raising his ransom and negotiating his release. When Richard died in 1199 she supported her youngest son, John, as his successor, undertaking a diplomatic mission to the court of Castile and coming to his aid when war broke out between him and Philip II Augustus, king of France, in 1201. She died in March 1204 at the age of 83 and is buried alongside Richard and Henry in the nunnery at Fontevrault in Anjou. Further reading: Kelly, Amy. Eleanor of Aquitaine and the Four Kings. Cambridge, MA: Harvard University Press, 1971; Swabey, Fiona. Eleanor of Aquitaine, Courtly Love, and the Troubadours. Westport, CT: Greenwood Press, 2004; Duby, Georges. Women of the Twelfth Century, Vol. 1. Translated by Jean Birrell. Chicago: University of Chicago Press, 1997. Ronald K. Delph

Ashikaga Shogunate

This shogunate saw the Ashikaga family dominate Japanese society, ruling for much of the period from their headquarters in the Muromachi district of Kyoto. As a result the shogunate, or bakufu ("tent government," in effect a military dictatorship), with military power controlled by the seii tai shogun, or shogun ("general who subdues barbarians"), is called either the Ashikaga or the Muromachi Shogunate. The Ashikaga Shogunate lasted from 1336 until, officially, 1588, although the last of the family was ousted from Kyoto in 1573, and it did not have much military power after the 1520s. The period when the Ashikaga family dominated Japanese politics reached its peak when Ashikaga Yoshimasa (1436–90) held the hereditary title of shogun from 1449 to 1473, although the last years of his shogunate were dominated by a succession of crises leading to the Onin War (1467–77). Yoshimasa's period as shogun—or, strictly speaking, the time after his abdication—represented an important period for the development of Japanese fine arts. The Ashikaga was a warrior family that had been prominent in Japanese society since the 12th century, when Yoshiyasu (d. 1157) took as his family name that of the family's residence in Ashikaga. They traced their ancestry back further to Minamoto Yoshiie (1039–1106), also known as Hachiman Taro Yoshiie, the grandfather of Yoshiyasu. From the Seiwa Genji branch of the famous Minamoto family, Yoshiie was one of the great warriors of the Later Three Years' War that raged from 1083 to 1087. Yoshiyasu's son took an active part in the Taira-Minamoto war of 1180–85, and six generations later Ashikaga Takauji became the first shogun, from 1338 to 1358. This came about after Emperor Go-Daigo (r. 1318–39) was exiled to the Oki Islands after being accused of plotting against the Kamakura Shogunate that controlled the army. The emperor rallied some loyal forces with the aim of ending the dominance of the Kamakura family.
ASHIKAGA TAKAUJI
The emperor put his troops under the command of Ashikaga Takauji and sent them to the central provinces. The choice of Takauji was interesting, as he had taken part in plots against the shogunate in 1324 and again seven years later. Put in charge of an army to defeat the enemies of the shogun, Takauji changed sides and decided to support the emperor. He took Kyoto and ousted the shogun, ushering in what became known as the Kemmu Restoration. However, rivalries quickly broke out between Takauji and another warlord, Nitta Yoshisada. By this time the prestige of the throne was suffering after major administrative failures had clearly resulted in Go-Daigo being unable to protect his supporters. Takauji led his men to Kyoto, which he captured in July 1336, forcing Emperor Go-Daigo to flee to Yoshino in the south. In 1338 Takauji established what became known as the Ashikaga (or Muromachi) Shogunate, based in Kyoto. Takauji controlled the army, and his brother Ashikaga Tadayoshi controlled the bureaucracy, with additional responsibility for the judiciary. The shogunate initially resulted in a split in the imperial family, with the Kyoto wing supporting it and Go-Daigo and his faction ruling from the southern court at Yoshino. This continued until 1392, when the policy of alternate succession to the throne was reintroduced. After a short period of stability, there was an attempt at an insurrection by Ashikaga Tadayoshi, who seized Kyoto in 1351. Takauji was able to drive him out, and Tadayoshi fled to Kamakura. Takauji established a "reconciliation," during which Tadayoshi suddenly died, probably from poisoning. This left Takauji in control of the north, but he died in 1358 and was succeeded by his son Yoshiakira (1330–67), who was shogun until his death in 1367. There was then a short period with no shogun.
ASHIKAGA YOSHIMITSU
When Ashikaga Yoshimitsu (1358–1408) became shogun in 1369, a position he held until 1395, he was able to develop a system by which families loyal to him held much regional power, and the office of military governor was rotated among the Hosokawa, Hatakeyama, and Shiba families. Yoshimitsu may have been planning to start a new dynasty. This theory comes from the fact that he was no longer administering territory in the name of the sovereign. Certainly he did try to break the power of the court nobility, occasionally having them publicly perform menial tasks. When he went on long pilgrimages, he took so many nobles with him that the procession, to many onlookers, seemed to resemble an imperial parade. Yoshimitsu was able to build a rapport with Emperor Go-Kogon. His main achievement, involving considerable diplomatic skill, was to end the division between the Northern and Southern Courts by persuading the southern emperor to return to Kyoto in 1392, ending the schism created during his grandfather's shogunate. Yoshimitsu also had to deal with two rebellions—the Meitoku Rebellion of Yamana Ujikiyo in 1391–92 and the Oei Rebellion of 1400 led by Ouchi Yoshihiro (1356–1400). Ouchi Yoshihiro had relied on support from pirates who had attacked Korea and also, occasionally, parts of China, but his rebellion came about when he did not want to contribute to the building of a new villa for the shogun. He had long harbored resentment against the Ashikaga family, and in some ways the villa was merely an excuse for war. However, very quickly Ouchi Yoshihiro was betrayed by people he thought would support him, and after he was killed in battle, the rebellion ended quickly.
In order to ensure an easy succession, Yoshimitsu abdicated the office of shogun to his son Ashikaga Yoshimochi (1386–1428), who was shogun from 1395 to 1423, while he himself remained in Kyoto, where he made vast sums of money monopolizing the import of copper needed for the Japanese currency and negotiating a trade agreement with China in 1401. He also created a minor controversy by sending a letter to the Ming emperor of China, which he signed with the title "king of Japan." In his latter years Yoshimitsu became a prominent patron of the arts, supporting painters, calligraphers, potters, landscape gardeners, and flower arrangers. Many of the artists that Yoshimitsu encouraged became interested in Chinese designs and were influenced by their Chinese contemporaries—this became known as the karayo style. Ashikaga Yoshimitsu had a villa in Kyoto that combined Japanese and Chinese architecture; it is now known as the Golden Pavilion. The system of control established by Yoshimitsu continued under Ashikaga Yoshimochi and his son Yoshikazu (1407–25), who was shogun from 1423 to 1425. However, it was also a period when the Kanto region of Japan started to move out of the control of the shogunate. Yoshikazu's uncle Yoshinori (1394–1441) succeeded him, taking over as shogun in 1428. Yoshinori had been a Buddhist monk from childhood and had risen to become leader of the Tendai sect, giving up the life of a monk when his nephew died. Because of his background, he was determined to establish a better system of justice for the poorer people and overhauled the judiciary. He also strengthened the shogun's control of the military, making new appointments of people loyal to the Ashikaga family. Many nobles disliked him because he was seen as aloof and arrogant, and in 1441 a general from Honshu, Akamatsu Mitsusuke, assassinated Yoshinori.
In what became known as the Kakitsu incident, Akamatsu Mitsusuke was hunted down by supporters of the shogunate and was forced to commit suicide. Yoshinori's oldest son, Yoshikatsu (1434–43), succeeded him and was shogun for only two years. With his death, there was no shogun from 1443 to 1449, when Yoshikatsu's 13-year-old brother, Ashikaga Yoshimasa, became shogun.
THE ONIN WAR
Ashikaga Yoshimasa was born on January 20, 1436, at Kyoto, and when he became shogun, the shogunate was declining in importance amid widespread food shortages, with people dying of starvation. Yoshimasa was not that interested in politics and devoted most of his life to being a patron of the arts. He despaired of the political situation, and being without any children, when he was 29 years old he named his younger brother, Yoshimi (1439–91), as his successor and prepared for a lavish retirement. However, in 1465 he and his wife, Hino Tomiko, had a son. His wife was adamant that the boy should be the next shogun, and a conflict between supporters of the two sides—that of Yoshimasa's wife and that of his brother—started in 1467. Known as the Onin War, most of the fighting took place around Kyoto, where many historical buildings and temples were destroyed and vast tracts of land were devastated. More important, the war showed the relative military impotence of the shogun and the power of the military governors, and it quickly changed from being a dynastic squabble to being a proxy war. It became a conflict between two great warlords in western Japan: Yamana Mochitoyo, who supported the wife and infant son, and his son-in-law, Hosokawa Katsumoto, who supported Yoshimi. Both died during the war, and there was no attempt by either side to end the conflict until, finally, exhausted by 10 years of fighting, the two sides stopped in 1477. By this time Yoshimasa, anxious to avoid a difficult succession, had stood down as shogun in 1473 in favor of his son.
His son, Yoshihisa, was shogun from 1474 until his own death in 1489, whereupon, to heal the wounds of the Onin War, Yoshimasa named his brother’s son as the next shogun. Yoshimi’s son, Yoshitane (1466–1523), was shogun from 1490 until 1493. In retirement, Yoshimasa moved to the Higashiyama (Eastern Hills) section of Kyoto, where he built a villa that later became the Ginkakuji (Silver Pavilion). There he developed the Japanese tea ceremony into a complicated series of ritualized steps and was a patron to many artists, potters, and actors. This flowering of the arts became known as the Higashiyama period. Yoshimasa died on January 27, 1490. From the shogunate of Yoshitane onward, the family rapidly lost its political power. Yoshitane’s cousin Yoshizumi (1480–1511) was shogun from 1495 to 1508 and was succeeded, after a long interregnum, by his son Yoshiharu (1511–50), who became shogun in 1522, aged 11, and remained in that position until 1547. His son Yoshiteru (1536–65) succeeded him, ruling from 1547 to 1565, and, after his murder, was succeeded by a cousin, Yoshihide (1540–68), who was shogun for less than a year. Yoshiteru’s brother Yoshiaki (1537–97) then became the 15th and last shogun of the Ashikaga family. He had been abbot of a Buddhist monastery at Nara, and when he became shogun he renounced his life as a monk and tried to rally his family’s supporters against a sustained attack by Oda Nobunaga. In early 1573 Nobunaga attacked Kyoto and burned down much of the city. In another attack in August of the same year, he was finally able to drive Yoshiaki from Kyoto. Yoshiaki went into exile and in 1588 formally abdicated as shogun, allowing Toyotomi Hideyoshi to take over. He then returned to his life as a Buddhist priest.
In at least its last 50 years, and arguably for longer, the shogunate had become ineffective, and warlords had once again emerged, often financing their operations not only by pillaging parts of Japan itself but also by piratical raids on outlying parts of Japan and Korea. The Ashikaga Shogunate remains a controversial period of Japanese history. During the 1930s Takauji was heavily criticized in school textbooks for his disrespect to Emperor Go-Daigo. Many historians now recognize him as the man who brought some degree of stability to the country. The attitude toward Yoshimasa has also changed. Because he concentrated so heavily on the arts, he neglected running the country, and he is now recognized as heading an inept administration that presided over great suffering in much of Japan. His rule led to a period of great instability that only came to an end when Tokugawa Ieyasu became shogun in 1603. Further reading: Keene, Donald. Yoshimasa and the Silver Pavilion: The Creation of the Soul of Japan. New York: Columbia University Press, 2003; Perkins, Dorothy. Samurai of Japan: A Chronology from Their Origin in the Heian Era (794–1185) to the Modern Era. Darby, PA: Diane Publishing Company, 1998; Sansom, George. A History of Japan 1334–1615. London: The Cresset Press, 1961. Justin Corfield

Athos, Mount

Christian monasticism began in the eastern Christian world when St. Antony of Egypt, who exemplified the solitary form of monastic life, entered the Egyptian desert in the late third century c.e. Soon afterwards, Pachomius of Egypt and the Desert Fathers developed the communal life. From here, early monasticism spread to Palestine, Syria, and the West. Monasticism’s birthplace was vastly affected by the Islamic conquests of the seventh and eighth centuries and declined in its wider historical significance. The heart of (Chalcedonian) Orthodox monasticism is Mt. Athos in northern Greece, on a rugged peninsula extending 35 miles into the Aegean Sea. It is the easternmost of three such “fingers” that stretch out from the Chalkidike Peninsula. The name of this promontory is derived from its highest peak, the nearly 7,000-foot Mount Athos. The Orthodox refer to this region as the “Holy Mountain” because of its spiritual significance over the past millennium. In the eighth and ninth centuries monks journeyed to Mount Athos to find refuge during the controversy of Iconoclasm when the state forbade icon veneration. By the later ninth century c.e. the area was already becoming known for its reputation for holiness. In 963 c.e. the monk Athanasios of Trebizond created the first communal monastery there, the Great Lavra. Several Byzantine emperors supported Athanasios, endowing the monastery with wealth, privileges, and land. Other monasteries quickly followed. In less than 40 years, there were almost 50 monasteries, with the hegoumenos (abbot or presiding father) of the Great Lavra holding the preeminent position. Mount Athos sprouted communal monasteries as well as sketes, small groups of monks who lived separately from a general community but came together for worship and feast days. Mount Athos was also home to many anchorites, or hermits. 
Monastic life, in all its variety, blossomed on Athos, but it did so with strict gender separation, for in 1045 c.e. the emperor banned all females and even female animals. Women were, and still are, excluded both as members and as visitors. Patronage continued from Byzantine emperors as well as from Slavic rulers in Serbia, Bulgaria, and Russia. Mt. Athos became a truly international community where monks from all over the Orthodox world mingled together: Italians, Greeks, Georgians (Iveron Monastery), Russians (Panteleimon), Serbs (Chilandar), Bulgarians (Zographou), and Orthodox Armenians. Theological ideas quickly passed, via Mount Athos, from one part of the Orthodox world to another. Such was the case in the 14th century, when the controversy over Hesychasm (the “Jesus Prayer”) led to its defense by the Athonite Gregory Palamas and its spread throughout Orthodoxy. Its accumulated wealth made the peninsula attractive to invaders. In the 13th century Athos fell into the hands of western European crusaders and, after the 14th century, to the Ottoman Turks, who, after exacting tribute and depriving the monasteries of their estates outside the peninsula, respected the autonomy of the region. While Mount Athos was the heart of Orthodox monasticism, it was not the only center of monastic life; many other areas, Meteora in central Greece, for example, were well known. Monasteries (ranging in size from a few monks to hundreds) sprouted up wherever there were Orthodox communities. So, not surprisingly, when the town of Mystras, located west of ancient Sparta, became an important Byzantine cultural and political center in the 13th–15th centuries c.e., monasteries (like the Brontocheion) appeared as well. Unlike Athos, however, this region lost its wider importance after the Ottoman conquest of 1460 c.e. See also Ottoman Empire. Further reading: Cavarnos, Constantine. The Holy Mountain. Belmont, MA: Institute for Byzantine & Modern Greek Studies, Incorporated, 1973; Harper, R. 
Journey from Paradise. Beauport, Québec: Editions du Beffroi, 1987. Matthew Herbst

Averroës

(1126–1198) religious philosopher Abu Al-Walid Muhammad Ibn Ahmad Ibn Rushd, Ibn Rushd for short, or Averroës, as he is known to the West, was born in Córdoba (Qurtuba), Spain, in 1126 to a family of distinguished Andalusian scholar-jurists. Ibn Rushd was to become a famous philosopher, theologian, physician, and royal consultant. He was a scholar of the natural sciences, namely biology, astronomy, medicine, and physics, as well as of the Qur’anic sciences. His grandfather, after whom he was named, was a renowned chief justice (qadi) in Córdoba and an authority on Malikite jurisprudence, having written two famous books on the subject. At the same time, he was the imam of the Great Mosque of Córdoba. Ibn Rushd’s father was also a judge. Having grown up in a family of scholars, Ibn Rushd received an excellent education in Córdoba in linguistics, Islamic jurisprudence, and theology. He became very knowledgeable in these subjects, as is evident from his many writings. He was especially competent in the subject of khilaf, which dealt with controversies in the legal sciences. Ibn Rushd had profound knowledge of Aristotelian philosophy, possibly introduced to the subject by one of his teachers or one of the leading scholars in Córdoba. He was educated in medicine and in 1169 completed a major work known as the al-Kulliyat fi ‘l tibb, translated as General Medicine. Ibn Rushd’s writings were at one time so widely celebrated that it was claimed that medieval Islamic philosophy was an earlier version of the European Enlightenment. In 1153 Ibn Rushd moved to Marrakech, where he met the Almohad ruler Abu Ya’qub Yusuf, who was very impressed with the young Ibn Rushd’s intellect and deep knowledge of philosophy. It is interesting to note that Ibn Rushd was initially reluctant to reveal the extent of his knowledge to the prince, because at the time strict Muslim leaders frowned on philosophy, which was considered anti-Islamic.
Ibn Rushd had to fight against this prevalent belief by asserting that philosophy could be compatible with religion, if both were properly understood. He had nothing to fear with regard to the Almohad prince, who admired his wide knowledge. In fact, the ruler consulted Ibn Rushd on philosophical matters from then on and became his patron. It was also at Abu Ya’qub’s prompting that Ibn Rushd summarized the works of Aristotle in a clear manner. During this time he also produced detailed commentaries on Aristotelian philosophy, such that he became popularly known as the Commentator on Aristotle. In Marrakech, Ibn Rushd remained active in other areas besides writing and philosophy. He also made astronomical observations. In 1182 he was appointed chief physician in Marrakech. He then became the chief justice in Córdoba. In 1195 Ibn Rushd fell out of favor with the new Almohad prince during the latter years of his reign. His works were considered contrary to religion, and the caliph passed edicts forbidding their study. He was banished to Lucena near Córdoba but later returned to Marrakech. He died soon after, in December 1198. See also Islamic law. Further reading: Bello, Iysa A. The Medieval Islamic Controversy between Philosophy and Orthodoxy. New York: E. J. Brill, 1989; Davidson, Herbert A. Alfarabi, Avicenna, and Averroës, on Intellect: Their Cosmologies, Theories of the Active Intellect, and Theories of Human Intellect. Oxford: Oxford University Press, 1992; Leaman, Oliver. Averroës and His Philosophy. Oxford: Oxford University Press, 1988; Urvoy, Dominique. Ibn Rushd (Averroës). New York: Routledge, 1991. Nurfadzilah Yahaya

Avignonese papacy

The Avignonese papacy (1304–78) and the Great Schism (1378–1417) are regarded as two of the most dramatic events in the history of Christianity, events that further undermined and diminished the prestige of the papacy and the authority of the Western Latin Church. The first episode refers to the pontificate of eight popes who, from the beginning of the 14th century until 1378, ruled the Christian world from the French town of Avignon under the sway of the French crown; because of its forced nature, the Avignonese papacy is also called the Avignonese Captivity, or Exile. Historians attribute the cause of the Avignonese Exile of the papacy to the earlier conflict between Pope Boniface VIII and the young French king Philip IV the Fair in the preceding century, when the king and the pope were struggling to proclaim their rule over Europe. At the center of the conflict stood new military taxes the king levied on French monasteries, subsidies he required to fight his wars with the English. Boniface rejected the king’s claims to finance his army at the expense of the church in the bull Clericis laicos of 1296 and later paid for his stubbornness with his own life, literally terrified to death by the king’s chancellor William of Nogaret. Boniface’s direct successor, Benedict XI (1303–04), did not live long enough to pacify the spirits, supposedly having been poisoned by an unidentified monk; a new pope, the old and gravely ill Bertrand de Got, who assumed the name Clement V, led the papacy into exile.
Residing in France at the time of his election, weakened by what was likely cancer, and discouraged by the fate of his predecessor, Clement V capitulated to Philip’s demand that he be crowned at Lyon. He established the tradition of the Avignonese papacy, never setting foot in the ancient city of Rome. Clement’s Avignon successors (seven popes, among whom the most famous were John XXII and Benedict XII) all remained loyal to the French rulers, playing whenever necessary against the German emperor and the English, which outwardly might have seemed an ordinary state of affairs had it not been for the direct influence the French kings exercised in the curia. Throughout the 14th century the Avignonese papacy continuously showed signs of the decline of papal authority, which was becoming increasingly undermined by the feudal monarchy. In 1312 the papacy surrendered to the will of Philip IV and dismissed the Order of the Templars, famous for its wealth; thousands of its members were accused of heresy, witchcraft, and sodomy, and all its treasures were confiscated by the crown. The fiscal oppression of the curia (chiefly through control over the sale of benefices and indulgences but also over tithes and annates) grew more pronounced during the Avignonese papacy, despite the heavy French presence in the College of Cardinals (seven out of eight Avignonese popes and almost all of the important cardinals were Frenchmen by the middle of the 14th century). In due course the popes built themselves a fortified palace behind the walls of Avignon and lived there surrounded by luxury in the midst of magnificent artificial gardens. The luxurious lifestyle of the popes was the subject of constant complaints and gossip. Contemporaries, including such important thinkers as Petrarch, Marsilius of Padua, and Catherine of Siena, relentlessly criticized the Avignonese popes.
The image of the papacy during those years changed sharply, as it lost its unconditional spiritual authority and its control over the brethren. Petrarch called the Avignonese papacy “the Babylonian Captivity of the Church” and the Avignon popes “wolves in shepherds’ clothing.” The Avignonese papacy was detested by most social sectors, from peasants who suffered the ever-increasing taxation to intellectuals and theologians who wrote against the moral and spiritual degradation of the Holy See. In the following centuries the Avignonese papacy was described as totally deprived of spirituality. Subservience to a secular ruler, nepotism, and the rapacity of the “puppet-popes” seriously undermined the reputation of the papacy in the eyes of Europe, marking at the same time the end of the reign of the Church Universal and the beginning of a new epoch, in which ultimate power belonged to the national ruler. The Avignon church underwent a complete makeover. Despite the criticisms, almost all the Avignon popes undertook serious attempts at reform. They created a sophisticated and effective administration that surpassed anything previously known in the European states. The popes’ involvement in secular politics also grew during these years, despite the forced capitulation to France. Both developments effectively turned the church into a modern, secularized, and politicized organization. The last years of the popes’ stay at Avignon were also marked by their recurring attempts to strengthen their position in Italy. They tried, quite unsuccessfully, to turn the outcome of the revolt of Cola di Rienzo in 1347 to their favor, but even after this failure the popes continued to maintain close economic and political relations with Italy. Their final success and return to Rome is owed to the activity of Cardinal Albornoz and Pope Urban V, who gave a constitution to the Papal States.
Taking advantage of the difficulties France was experiencing during the Hundred Years’ War (1337–1453), Pope Gregory XI (1370–78) transferred the papal residence back to Rome in 1377, dying just a few months after this historic reunion of the church with its ancient capital. The move, however, came too late to save the papacy from disaster: Its return was blackened by the shadow of the Great Schism. Soon after Gregory XI died, the Roman people, fearing that a new pope might leave them for France once again, gathered under the walls of the conclave, demanding the election of an Italian to the Holy See. The cardinals, the majority of whom were Frenchmen, chose the archbishop of Bari, a Neapolitan, Bartholomew Prignano, as the next pope. He accepted the office, taking the name Urban VI. No doubt Prignano, who had previously held the position of vice chancellor of the curia, seemed an excellent choice to the cardinals. They were confident they could control the “little archbishop” (as they nicknamed their candidate), who would be grateful for this unexpected promotion. Later the cardinals would announce that they had elected Prignano under threats and for fear of the reaction of the angry mob raging in the streets surrounding the palace during the election. From the very start the pontificate of the new pope was stained with a most bitter struggle with the cardinals and the non-Italian members of the curia. The harsh reform measures of the new pontiff, who was irritated at the slightest pretext and physically assaulted cardinals on several occasions (publicly denouncing their lifestyle of pomposity and splendor as sinful), caused the French party to flee from Rome.
Urban soon found himself at daggers drawn with everyone around him, managing to deprive the Holy See of a number of its most loyal supporters, such as Joanna, queen of Naples; her husband, Duke Otto of Brunswick; and the powerful duke of Fondi, not to mention the king of France. On August 9, 1378, under the pretext that Urban’s appointment had been forced, the conclave of the fugitive cardinals issued a lengthy document, entitled Declaratio, in which they declared the election invalid and the Holy See vacant. At the same sitting they unanimously voted for the Gallic cardinal Robert of Geneva, who assumed the office under the name of Clement VII (1378–94), thus becoming an “anti-pope.” The following 40 years were characterized by almost constant warfare between pope and anti-pope, in which the Papal States were the chief battleground. The schism left no one sitting on the fence. Having an unparalleled impact on political allegiances, it reshaped European geopolitics, changing cultural boundaries and paving the way for the upcoming Reformation. With every passing year the split went deeper. On the side of the “French” pope Clement VII fought such powerful allies as the king of France, the kings of Naples and Scotland, and half of the rulers of Germany; Urban was supported by England, Portugal, and Hungary. The legal pope continued to be tactless and inconsiderate toward his allies, and gradually his authority grew weak. Appointing new cardinals to replace the rebels was not a sufficient measure to keep discipline among his supporters; constantly suspecting treachery, Urban did not hesitate to send several cardinals to be executed for “disobedience” to his will. Isolated and defeated in most of his battles, Urban locked himself up in his castle, mainly to hide from the French king, who had announced a huge prize for the pope’s head.
In 1389 Urban VI came back to Rome, where he died; according to one source he was surrounded by followers, according to another he was poisoned by enemies. Soon after Urban’s funeral it became clear that even the disappearance of one of the ruling pontiffs would not save the situation: The “Italian” party immediately appointed a successor. With this precedent set, the schism continued. Clement VII was succeeded by Benedict XIII (from 1394); Urban VI by Boniface IX (1389–1404), Innocent VII (1404–06), and Gregory XII (from 1406). The conflict deepened when the Council of Pisa in 1409 deposed both Benedict XIII and Gregory XII, selecting a new pope, Alexander V (1409–10). The deposed popes refused to recognize the decision of the council, and the Holy See became occupied by three popes at once. This development was very favorable to the heretical movements that sprang up in large numbers all across Europe, preaching noninstitutional evangelism and unbalancing the old feudal system. Secular lords and princes who supported the establishment and the unity of the church were greatly concerned, despite the fact that the decrease in papal authority contributed to the consolidation of power in the hands of secular rulers. The schism continued well into the 15th century until, finally, the Council of Constance (1414–18) put an end to it, deposing three popes at once: John XXIII (successor of Alexander V), Gregory XII, and Benedict XIII, and selecting, to the great relief of everyone involved, a single pontiff, Martin V (1417–31). Further reading: Housley, Norman. Avignon Papacy and the Crusades. New York: Clarendon Press, 1986; Smith, John H. The Great Schism. London: Hamish Hamilton, 1970; Ullman, Walter. The Origins of the Great Schism. London: Burns Oats and Washbourne Ltd., 1948; Workman, Herbert B. Church of the West in the Middle Ages. London: Kelly, 1900–12. Victoria Duroff

al-Azhar

The Fatimids established al-Azhar, one of the oldest universities in the world, in Cairo in 970. Built around a large mosque with an open courtyard surrounded by columned walkways where classes were taught, al-Azhar quickly became one of the premier educational centers in the entire Islamic world, attracting students from Asia, Africa, and, in subsequent centuries, the Western Hemisphere. Originally, the university focused on the tenets of the Isma’ili sect of Islam followed by the Fatimid rulers, but over the following centuries the university became a center for orthodox Sunni belief. By the 1600s the Shaykh al-Azhar, leader of al-Azhar, was chosen from among the shaykhs of the university. Generations of legal scholars and judges were educated in theology and Islamic law at al-Azhar. In the 15th century c.e., the Mamluk sultan Qaitbey financed the construction of an inner gate and an elaborate minaret overlooking the courtyard. Subsequent sultans added further buildings and ornamentation to the sprawling complex, including living quarters for students, libraries, and the mosque. After the 1952 c.e. revolution in Egypt, Gamal Abdel Nasser modernized the university and instituted major reforms, including the creation of a College of Islamic Women and the addition of colleges of medicine and engineering. See also Fatimid dynasty; Islam; Isma’ilis. Further reading: Dodge, Bayard. Al-Azhar: A Millennium of Muslim Learning. Washington, D.C.: Middle East Institute, 1961; Eccel, A. Chris. Egypt, Islam, and Social Change: Al-Azhar in Conflict and Accommodation. Berlin: Klaus Schwarz Verlag, 1984. Janice J. Terry

The First Global Age 1450 to 1750

Abahai Khan

(1592–1643) Manchu military and political leader Abahai (also named Hung Taiji) was the eighth son of Nurhaci, a Jurchen tribal chieftain who founded the Manchu state in what is today northeastern China. Elected by the Hosoi Beile, or council of clan princes and nobles, in 1626 to be his father’s successor, Abahai built upon his father’s foundations for a Manchu state during the last years of China’s Ming dynasty. In 1644, his son was proclaimed emperor of the Qing (Ch’ing) dynasty, assuming leadership of China as the Ming dynasty collapsed. The Jurchen tribal people who lived in Manchuria, a frontier region of the Chinese Ming Empire, did not recognize the right of firstborn sons to succeed their fathers. Because of this, all the ruler’s sons were eligible to succeed him in an election by their fellow tribal leaders. Abahai was elected and continued his father’s unfinished work. He expanded the powerful Banner Army, which consisted of Manchu, Mongol, and Han Chinese units, and used it to consolidate control of the Liaoyang area in southern Manchuria. Next he used his military forces to subjugate Korea, forcing its government to transfer its vassal relationship from the Ming dynasty to him. Abahai then conquered the Amur region of northern Manchuria and the Mongols of eastern Mongolia. His next move was to set up a civil administration in the capital city of Shenyang in 1631. The six ministries and other institutions he implemented were copied from the Ming government, and he staffed them with many Han Chinese administrators. In 1635, he gave his people a new name, Manchu (from Jurchen), and changed his dynastic name from Hou Jin (Hou Chin, adopted by Nurhaci, which means “Later Jin,” after the Jin dynasty that ruled northern China 1115–1234). By this act, he dissociated his dynasty from the Jin, who had conquered northern China after much bloodshed.
Instead he adopted the dynastic name Qing (or Ch’ing, which means “pure”), and he assumed the title emperor rather than khan, which had been his father’s title, because of its nomadic associations. In 1640, Abahai attacked Jinzhou (Chinchow) at the southern tip of Manchuria, defeating a Ming force. This victory brought the Manchus to the key eastern pass of the Great Wall, Shanhaiguan (Shanhaikuan, or Mountain and Sea Pass). However, this formidable fortress was defended by a strong Ming army, and Abahai was not ready to challenge it. He died in 1643 before he could do so. Abahai continued his father Nurhaci’s work of building up Manchu power, transforming the Manchus from a frontier tribal vassal of the Ming Empire into its rival. Under his rule, a collaborative relationship developed among the Manchus, the Mongols, and the Han, or ethnic, Chinese. The adoption of the Chinese model of a bureaucratic administration and its inclusion of Han Chinese would characterize the Qing dynasty and account for its success in conquering and ruling China. Further reading: Crossley, Pamela K. The Manchus. Cambridge, MA: Blackwell Publishers, 1997; Elliott, Mark C. The Manchu Way: The Eight Banners and Ethnic Identity in Late Imperial China. Stanford, CA: Stanford University Press, 2001; Michael, Franz. The Origin of Manchu Rule in China: Frontier and Bureaucracy as Interacting Forces in the Chinese Empire. Baltimore, MD: Johns Hopkins University Press, 1942. Jiu-Hwa Lo Upshur


Abbas the Great of Persia

(1571–1629) Safavid Persian ruler Shah Abbas the Great reigned from 1588 to 1629, during the zenith of Safavid glory and power. He effectively unified all of historic Persia and centralized the state and its bureaucracy. Using loyal slave soldiers (ghulam) recruited among Caucasians, Abbas successfully destroyed the influence of the Qizilbash princes and extended Crown-owned land taken from defeated local rulers. With English advisers, he moved to reform the army into a successful fighting force. In the Ottoman-Safavid wars, Abbas was generally successful. He conquered northwestern Persia and in 1623 took Baghdad and then Basra in southern present-day Iraq from the Ottomans. His forces seized Hormuz in the Persian Gulf in 1622, thereby extending Safavid power along this important seafaring trade route. By the time Abbas came to power, the majority of the people in Safavid Persia, who had previously been Sunni Muslims, had become Shi’i. Qom and Mashad, sites holy in Shi’i tradition, were enlarged into centers for pilgrimages, and the veneration of Shi’i imams became widespread. The martyrdom of Husayn, Ali’s son, was annually commemorated in massive passion plays and ceremonies; pilgrimages to Kerbala, in present-day Iraq, where Husayn had been killed, became a major event for devout Shi’i. However, unlike many of his predecessors, Abbas encouraged religious tolerance. He encouraged foreign traders, especially Christian Armenians, who were known as skilled silk producers, to move to Iran. Although the sale of silk became a royal monopoly, Abbas provided Armenians financial inducements, including interest-free loans for building houses and businesses, to move to the outskirts of Isfahan. Abbas made Isfahan his new capital and turned it into a center for Safavid arts, culture, and commerce. Under Abbas, Isfahan grew to a population of more than one-half million and became a major trading center.
He sent envoys to Venice, the Iberian Peninsula, and eastern Europe to encourage trade in luxury textiles and other goods; he also provided tax incentives to foreign traders. By 1617, the English East India Company had established trading posts along the Persian Gulf, and Bandar Abbas became a major port. Along northern routes, the Safavids also enjoyed a lively trade with Russia. As befitted 16th- and 17th-century monarchs, Abbas presided over a lavish court. He was the patron of numerous court poets and painters, even allowing portraits of himself and members of his court to be painted. Like Suleiman I the Magnificent of the rival Ottoman Empire, Abbas, who had killed or blinded several of his sons, left no able successor. After his death, the Safavid Empire entered a century-long period of decline. It is a tribute to Abbas’s abilities as an administrator and leader that the empire survived as long as it did. Further reading: Monshi, Eskandar Beg. History of Shah ‘Abbas the Great: Ideology, Imitation, and Legitimacy, Safavid Chronicles. Roger M. Savory, trans. Salt Lake City: University of Utah Press, 2000. Janice J. Terry

absolutism, European

Royal absolutism is a controversial concept among historians. There has been considerable debate about both the proper definition of the term and its applicability to the actual workings of European states in the early modern period. Scholars have suggested that elements of absolutism appeared at one time or another in France, Russia, Spain, Austria, the German states, and other smaller entities, and that even England (after 1707, Britain) displayed some traits common to absolute monarchy. At a most basic level, the term royal absolutism suggests a system of state administration centered on and dominated by a monarch, as opposed to some other level of society or some other office or institution, and usually without legal or constitutional restraints. It can be differentiated from the older medieval form of monarchy by its increasing independence from, or suppression of, the feudal apparatus that linked each person in a hierarchy of mutual obligation between higher and lower. An absolute monarch controlled the state directly, rather than being forced to rely on the cooperation of the nobility through a lord-vassal relationship. Medieval monarchs usually had to contend with multiple challenges to their authority. These challenges included rival claimants to the throne, powerful nobles who could raise armies and funds independent of the sovereign, councils or parliaments that insisted on being heard, merchants and financiers who were more interested in profit than in paying taxes or serving political interests, towns that claimed immunity from certain controls, and frequent peasant uprisings. Religious institutions, which were often wealthy and had great influence over the population, could also be tenacious in defending their independence from temporal authority. In essence, the idea of an absolute ruler was developed as one solution to these problems.
Rather than living in constant fear of their antagonists, or being forced to share power with them, an absolute monarch could create and maintain a powerful kingdom and rule it effectively.
JAMES II
One of the problems with the study of royal absolutism in history is that too often the term absolute was used in a pejorative sense by those who opposed a particular ruler. This was true of both internal and external conflicts. In the 1680s, for example, the groups in England who opposed the policies of James II accused him of attempting to establish an absolute monarchy that would disregard Parliament, reimpose Catholicism, and generally strip his subjects of their rights and liberties. The English would also apply this label to Louis XIV in the late 17th and early 18th centuries, when England fought two wars against France. Even the term absolutism to describe a particular style of government was not coined until after the French Revolution, with the explicit purpose of discrediting the ancien régime. The concept of a powerful ruler in a centralized state was not always viewed in a negative light, especially among some intellectuals of the 16th through 18th centuries. Three thinkers closely associated with the development of absolutism as a political theory are Jean Bodin (1530–96), Thomas Hobbes (1588–1679), and Jacques-Bénigne Bossuet (1627–1704). Each was deeply influenced by the political circumstances of his time. Bodin and Hobbes were examining the nature of authority when it had clearly broken down; Bossuet was justifying a system developed in reaction to such crises, but which itself was subject to challenge. Although their ideas were not necessarily representative of the opinions of their contemporaries, or of the realities of statecraft in early modern Europe, each work was widely known and read in its time and afterward. Bodin’s Six Books of the Commonwealth first appeared in 1576, in the midst of the French Wars of Religion.
Bodin undertook a sweeping study of various forms of government, taking care to distinguish between what he called royal monarchy, despotic monarchy, and tyranny. Despots generally violated the property rights of their subjects; tyrants were arbitrary and purely selfish. Royal monarchy meant that a ruler, although entirely sovereign, would always seek to rule in the best interests of his subjects. There were no formal constitutional checks on power, but a paternal sense of duty to the welfare of the kingdom would guide the ruler’s actions.

Parliaments

The other limit on royal power evident in Bodin’s own time was the legislative or consultative body, such as the Estates General and parlements of France. All such legislative bodies claimed some rights and privileges from the sovereign. The political history of France and England after Bodin’s time demonstrated that although rulers of those countries could circumvent Parliament and the Estates for extended periods of time, this eventually led to resistance and revolution. Hobbes also lived in a turbulent age. Many of Hobbes’s most important political works, including De Cive, Leviathan (both published in 1651), and Behemoth (1681), were heavily influenced by the events surrounding the English Civil War, which ended with the execution of King Charles I. In Leviathan, his best-known work, Hobbes drew a lengthy analogy between a commonwealth and the human anatomy, in which the king is represented as the head and the rest of society as the body. He proceeded to set out his view of human nature unconstrained by government or communal moral standards. In such a situation, he argued, there could be no guarantee of life or possessions except by violence.
Human beings needed government to remove them from this state of nature, and the best government was the one that reduced violence and uncertainty the most. This required people to surrender a portion of their individual liberty (either by making a covenant between themselves or by being conquered) to a single authority, which would be charged with the protection of their lives, property, and other retained rights. This authority could take one of three forms: monarchy, aristocracy, or democracy. He argued that of these, monarchy was theoretically preferable, since it was least likely to degenerate into factional struggles and civil war. This monarchy, he continued, should not be elective (as in the Holy Roman Empire) or limited (as claimed in England), or else it was not a true monarchy, since the ultimate source of sovereignty lay with others.

Enlightened Self-Interest

Like Bodin, Hobbes argued that a true monarch would be restrained from acting in an arbitrary and wicked manner through reason and enlightened self-interest. Because the monarch was the embodiment of sovereignty, his or her private interest would be aligned with the public good. A wise ruler would seek counsel from those best equipped to provide it, but would always reserve the personal right to choose and implement the best policy. Anticipating critics who would point to historical examples of rulers who did not concern themselves with the common good or the most reasonable policies, Hobbes repeatedly stated that whatever problems could be caused by the corruption of a single sovereign would simply be multiplied in an oligarchy or a democracy. Bossuet’s Politics Drawn from the Very Words of Holy Scripture (1709) was an exploration of the nature of kingly power as demonstrated in the Bible and in history.
For a number of years Bossuet had served as the tutor to the Dauphin, the son and heir of Louis XIV, and he was thus highly interested in and knowledgeable about the workings of the French monarchy. He proposed that the power of the king is “paternal,” “absolute,” and “subject to reason,” but he also added a “sacred” quality. The principle that temporal authority originates with God is found in many parts of the Bible, and most medieval European sovereigns were considered to be God’s anointed. The doctrine of divine right kingship was invoked by 16th and 17th century rulers such as James VI and I of Scotland and England to justify their actions and to condemn resistance or questioning of their authority. In France, the sacred quality of kingship had an added dimension: since the king was placed on the throne by God, resistance to his power was illegitimate and sinful; those who opposed the political or religious policies of the king, such as the Huguenots, should not be tolerated at all. The Russian czar Ivan IV (reigned 1533–84) provides an early example of an attempt to centralize authority in the person of the ruler and circumvent existing institutions and controls. Ivan began his reign as the grand duke of Muscovy, but by 1547 he assumed the title of czar (emperor) of Russia. In 1565, frustrated with the problems still facing his fragmented domains, Ivan created a separate administration under his personal control, the Oprichnina. Originally this was confined geographically to certain towns and parts of the countryside, but over time it grew in both size and scope. Ivan IV’s reign illustrates two different concepts often associated with absolutism. The first is reform of the state, which included the creation of a standing army and a centralized bureaucracy responsible directly to the ruler, as well as a systematic overhaul of laws and institutions dating from feudal times. 
The second, despotic and arbitrary rule, was one of the primary reasons that many philosophers and statesmen feared and opposed anything resembling royal absolutism. The one ruler who is most often associated with absolutism is Louis XIV of France (reigned 1643–1715). While it is true that the Sun King had a more powerful state apparatus at his disposal than his predecessors, and showed more vigor in running France than his immediate successors, he was not primarily responsible for creating the system he led. France had been divided by internal political and religious wars in the 16th century, although the appearance of a strong ruler, Henry IV, began the process of healing the rifts and stabilizing the government—at least until Henry was assassinated in 1610. His successor, Louis XIII, was not as assertive, and by the 1620s he had effectively delegated much of his authority to Cardinal Richelieu. Louis XIV may have consciously portrayed himself as an absolute ruler, but the daily reality of managing his kingdom was something quite different. He did not rid himself of all obstacles to his authority, but through a combination of compromise and assertiveness he was able to reduce the resistance of such bodies as the nobility, the parlements, and the church. Louis XIV was only partially successful in establishing himself as the unquestioned master of his kingdom, and even less so in his attempt to act as the “arbiter of Europe.” In fact, scholars such as Nicholas Henshall argue that the lingering image of Louis XIV as an absolute monarch owes more to the perpetuation of a myth by English polemicists than to his actual behavior. After the Glorious Revolution in 1688, Henshall says, absolutism came to be defined by the English as everything that their constitutional monarchy was not: French, Catholic, and despotic.
This was a simplistic definition that ignored the continuing importance of the monarch in British politics and the real constraints on the power of the French king. Even with all of the centralization and modernization associated with absolutism in this period, most states still remained a patchwork of different jurisdictions under the nominal control of a single crown. Spain, France, the Austrian empire, and Russia all had ancient internal divisions that no monarch could simply erase, no matter how much he or she might want to. See also Louis XI; Vasa dynasty. Further reading: Anderson, Perry. Lineages of the Absolutist State. London: NLB, 1974; Bodin, Jean. On Sovereignty: Four chapters from The Six Books of the Commonwealth. Cambridge: Cambridge University Press, 1992; Bossuet, Jacques-Bénigne. Politics Drawn from the Very Words of Holy Scripture. Cambridge: Cambridge University Press, 1990; Franklin, Julian H. Jean Bodin and the Rise of Absolutist Theory. Cambridge: Cambridge University Press, 1973; Henshall, Nicholas. The Myth of Absolutism: Change and Continuity in Early Modern European Monarchy. London and New York: Longman, 1992; Hobbes, Thomas. Leviathan, Parts I and II. Peterborough, Ontario: Broadview, 2005; Krieger, Leonard. An Essay on the Theory of Enlightened Despotism. Chicago: University of Chicago Press, 1975; Miller, John, ed. Absolutism in Seventeenth-Century Europe. London: Macmillan, 1990; Riasanovsky, Nicholas, and Mark D. Steinberg. A History of Russia, Seventh Edition. New York: Oxford University Press, 2005. Christopher Tait

Africa, Portuguese inEdit

The Portuguese were the first to make significant inroads into Africa during the age of discovery, yet they were the last to decolonize their African possessions. This pattern held to a large extent across the various communities of Africa in which Portuguese socioeconomic and political activities operated. The Portuguese empire in Africa was the earliest and longest lived of the colonial empires, lasting from 1415 until 1974, with serious activity beginning in 1450. The Portuguese first attempted to establish a presence in Africa in 1415, when Portuguese soldiers captured Ceuta on the North African coast. Three years later, a group of Moors attempted to retake it. A better-armed Portuguese army defeated the Moors, although this did not result in effective political control. In 1419, two captains in the employ of Prince Henry (Henrique) the Navigator, João Gonzalez Zarco and Tristão Vaz Teixeira, were driven by a storm to Madeira. A Portuguese expedition to Tangier followed in 1436, undertaken by King Edward (Duarte) to establish Portuguese political control over the area. However, Edward’s army was defeated, and Prince Ferdinand, the king’s youngest brother, was surrendered as a hostage. Tangier was later captured by the Portuguese in 1471. The coast of West Africa also attracted the attention of the Portuguese. The Senegal River was reached in 1445, and Cape Verde was passed in the same year. In 1446, Álvaro Fernandes was close to Sierra Leone. By 1450, the Portuguese had made tremendous progress in the exploration of the Gulf of Guinea. Under João II, exploration had reached the site of the fortress of São Jorge da Mina (Elmina), which was established for the protection of the Guinea trade. The Portuguese reached the ancient kingdom of Benin and the coastal part of the present-day Niger Delta region of Nigeria before 1480.
Oba (King) Esigie, who reigned in the last quarter of the 15th century, is said to have interacted and traded with the Portuguese. The famous Portuguese explorer Diogo Cão sighted the Congo in 1482 and reached Cape Cross in 1486. The Portuguese thus found themselves in contact with one of the largest states in Africa. The leading kingdom in the area was the Kongo Kingdom built by the Bakongo, a Bantu people whose king, the Mani-Kongo, had his capital at Mbanza-Kongo, modern San Salvador in northern Angola. Other leading states in the area included Ngoyo and Loango on the Atlantic coast. When the Portuguese arrived on the east coast of Africa at the end of the 15th century, the region was already witnessing some remarkable prosperity occasioned by the combined efforts of African and Arab traders who had established urbanized Islamic communities in the area. These included the coast of Mozambique, Kilwa, Brava, and Mombassa. From East Africa the Portuguese explorer Pêro da Covilhã reached Ethiopia in 1490. The big island of Madagascar was discovered in 1500 by a Portuguese fleet under the command of Diogo Dias. The island was called Ilha de São Lourenço by the Portuguese. Other Portuguese might have visited previously, as evidenced by a stone tower bearing Portuguese coats of arms and a holy cross. Mauritius was discovered in 1507. By 1550, Portuguese dominance in both the Indian and Atlantic Oceans had been confirmed. Their position was further strengthened by the Treaty of Tordesillas of June 7, 1494, with Spain, leading to the emergence of a large empire. Some African communities were part of this sprawling Portuguese empire.

Commercial Aims

The desire to establish Christianity and Portuguese civilization was not a strong motivator; the aims of the Portuguese were essentially commercial. In the East African region, the Portuguese wanted to supplant the preexisting network of Arab seaborne trade.
Consequently, Portuguese bases were established at Sofala, Kilwa, and other areas such as the offshore islands of Mozambique, Zanzibar, Pemba, Mombassa, and the island of Lamu. Vasco da Gama took the first step in this direction on his second voyage to India in 1502. He called at Kilwa and forced the sultan to pay a yearly tribute to the king of Portugal. This was typical of Portugal’s dealings with the coast: unless tribute was paid, the town was destroyed. If it was paid, the local ruler was usually left in peace, provided he carried out the wishes of the Portuguese. After Kilwa, Zanzibar was the next place to suffer from the Portuguese. In 1503, a Portuguese commander, Ruy Lourenço Ravasco, showed the power of guns by killing about 4,000 men aboard canoes. The men were carrying commodities that were of interest to Ravasco. Available evidence shows that the local men in no way provoked the Portuguese official. Sofala was another center of attraction for the Portuguese. The town was important because it gave the Portuguese control of the gold supply of the interior of East Africa. The town offered only minor resistance to the Portuguese incursion. Consequently, a fort was built there to protect the Portuguese colony that now replaced the old Arab settlement in the area. Kilwa shared the fate that befell Sofala. As in the case of Sofala, the Portuguese met little resistance there. A Portuguese fleet commanded by d’Almeida captured the town. From there the Portuguese sailed on to Mombassa, where they met strong resistance. Indeed the city was like a thorn in the side of the Portuguese. The island was consequently named “the island of war.” However, the resistance of the people of Mombassa collapsed and the city was set on fire. Outside the coast the Portuguese were interested in the gold region of the Zambezi.
The Portuguese embarked upon such a massive exploitation of the mineral that within a few years of their activities and occupation, the region had withered to an unattractive settlement. This exploitation sometimes provoked crises and revolts among the local people. The first serious revolt to succeed was in 1631, when Mombassa rebelled. It was in an effort to contain uprisings by the local people that the Portuguese in 1593 established and garrisoned the great and famous Fort Jesus at Mombassa. Still, the safety and security of the Portuguese merchants were never guaranteed in the face of Arab threats. Already a part of the Indian Ocean community was slipping out of the grip of the Portuguese. In 1622, they were ejected from the Persian Gulf, and by the mid-17th century the seafarers of the maritime state of Oman were regularly making incursions and conducting raids as far south as Zanzibar. By the middle of the 18th century, the maritime trade of the East African coast was more or less out of the control of the Portuguese, and the region had gradually resumed the pre-Portuguese commercial activities that made the area an attraction for many traders. The appearance of the British and the Dutch East India Companies was another threat to Portuguese commercial interests in East Africa. Elsewhere in Africa the Portuguese experimented with the plantation system in São Tomé, from where they introduced it to Brazil. Following this development a new era of Portuguese exploitation of Africa started. This was in the area of the slave trade, which lasted for more than two centuries. During the 16th century, the Portuguese concentrated their slave-trading attention on the Kongo Kingdom.
During the reign (1507–43) of the Christian king Afonso (Nzinga Mbemba), the Portuguese had already started to export young Kongolese across the Atlantic in large numbers. Although King Afonso disliked the slave trade, he paid in slaves for European goods and services, which he regarded as essential to his kingdom. Such services included those provided by missionaries, masons, carpenters, and other artisans. King Afonso died frustrated, his desire to see the Portuguese technologically transform his kingdom unfulfilled. Instead the slave trade continued unabated. A turning point in Portuguese exploitation of West Central Africa came in 1575, when Paulo Dias de Novais was sent as a conquistador to Africa. From his base at Loanda, south of the Kongo frontier, several wars were waged against the so-called recalcitrant king of Ndongo, the Ngola. Sometimes the Portuguese made alliances with the predatory Jaga group, encouraging them to wage wars against Ndongo and parts of the Kongo Kingdom. The situation was so chaotic that early 17th-century Mani-Kongos had to send petitions to the Holy See through the missionaries urging it to intervene in the matter, but nothing substantial came of it. Not even the Portuguese Crown could help the situation. Matters came to a head in 1660, when the Bakongo went to war with the Portuguese. The Portuguese defeated them. Further raids weakened the kingdom. In fact many of the provinces began to break away. By 1750 the once powerful Kongo state had become a shadow of its former self. The high demand for slaves in the Portuguese colony of Brazil put pressure on Ndongo, known as Angola to the Portuguese. The state was the largest supplier of slaves to the colony of Brazil in the whole of Africa south of the equator. The demand was so great that the Portuguese often incited the local communities to wage war on one another in the interest of obtaining slave labor for Brazil.
The Portuguese also tried their hands at commodities other than slaves, such as pepper from the Benin kingdom (in present-day Nigeria) and gold from the Gold Coast. However, by 1642 the Dutch had permanently ousted the Portuguese from the Gold Coast. This development encouraged both the English and the French to join in the competition against the Portuguese. By the 18th century, it was the traders of these countries who were most active in the trade of the Gulf of Guinea, while the Portuguese continued their slave-trading activities. Meanwhile, before the other European powers joined in international trade, the Portuguese had experimented with all sorts of goods. In the 1470s, for example, the Portuguese were able to procure cotton cloth, beads, and other items from the Benin kingdom, which they exchanged for gold on the Gold Coast. The Portuguese also participated in the trade in cowries in the Kongo and its offshore islands. They were also very active in the trade in salt along the Angolan coast. The Portuguese dominated trade in this era because they were better organized than the Africans and technologically superior. This showed in the way the Portuguese dislodged the Arab traders along the East African coast who had been established in the area long before the advent of the Portuguese in Africa. See also voyages of discovery. Further reading: Duffy, J. Portuguese Africa. Cambridge, MA: Harvard University Press, 1959; Oliver, R., and J. D. Fage. A Short History of Africa. London: Penguin Books, 1975; Rodney, Walter. How Europe Underdeveloped Africa. London: Bogle L’Ouverture Publications, 1976. Omon Merry Osiki

Akan states of West AfricaEdit

The Akan people of West Africa are descendants of the residents of the early Akan states and continue to live in the area east of the Mende people that makes up present-day Ghana and the Ivory Coast. It is believed that the Akan people have been present in West Africa since the first century. However, it was not until the 15th century that the world outside Africa became aware of the Akan states. Most of the early information on the Akan came from the Portuguese, who developed the West African gold trade. When the Portuguese first appeared in West Africa, the area controlled by the Akan states stretched from the equatorial forest southward to the Ofin and Pra Rivers. This area roughly corresponds to what later became the states of Ashanti and Adansi. While locals called the early Akan settlements Akyerekyere, Europeans identified the people as belonging to two separate groups, the Akany and Twifu (or Twifo). While a number of scholars suggest that members of the Akan states were of Dyula ancestry, others disagree. It is true that a number of Dyula settlements existed in the Akan states, but the most prevalent view is that the Akan states grew in strength to rival Dyula rather than evolving from it. Further arguments that support the belief that the Akan states were separate from Dyula center on cultural differences. Two customs that were distinctly Akan in nature and that had no counterpart in Dyula culture were the annual yam festivals and the tradition of matrilineal inheritance. Subsequent studies of the Akan people have led scholars to believe that the southern branch of the Akan, the Fante, traveled in earlier times from the Volta Gap to the coastlands of Accra, where they intermarried with existing inhabitants. As the area expanded, several powerful Akan states emerged. The oldest of these is thought to be Bono, which was also called Brong. Asante, which later came to be known as Ashanti, proved to be the most powerful Akan state.
Others included Akwamu, Denkyira, Akyem, and Fante.

Europe and the Akan States

When the Portuguese established their presence in West Africa in 1471, they discovered that the Akan people were not living in towns, as was typical in Africa during this period. Instead, the Akan were occupying small kingdoms ruled by kings and queens in the savanna north of the existing gold belt. Within each kingdom, families descended from seven or eight particular clans, identified by matrilineal lineage, lived in villages where they were ruled by their own chieftains. In addition to the chieftains, each family and clan had its own leader. All of the families, clans, and villages worshipped gods that they had individually deified. The various lineages also had their own symbols, which were used to identify matrilineal ancestry. Once it became clear that the gold trade would develop into a significant economic undertaking, the Akan states realized that it was in their best interest to control the route to and from the Gold Coast. As a result, the Akan states took on a prominent role in developing West Africa. Early on, the Akan depended on three significant areas to establish their presence in the gold trade. The first of these was Bona, which was located close to the Lobi gold mine. The others were Banda, which controlled passage to the main gold-trading route through the Volta Gap, and Bono, where Bono-Mansa, the capital of the early Akan states, was located. Over the following decades, the gold trade with Portugal exploded, reaching its peak in 1560, with West African gold providing one-fourth of all revenue for Portugal. From the earliest days, the Akan had been heavily involved in agriculture, developing a farming belt along the outer environs of the equatorial forest where they grew yams and oil-producing palms.
Other agricultural activities included the production of plantains, bananas, and rice, as well as collecting kola nuts, raising livestock, hunting, fishing, and making salt. The density of the soil in and around the forest limited the type of produce that could be grown, and increasing populations soon exhausted the soil. As a result, the Akan people entered the equatorial forests, where they cleared enough land to support the needs of the people. In the 17th century, agricultural production and the growth of the trade along the Gold Coast led to permanent settlements in the equatorial forest. Rates of urbanization and increasing sophistication among the Akan states subsequently led to the emergence of more complex political and social structures. Strong leadership among the people of the Akan states allowed them to retain their own cultures in the midst of the expanding European presence, while winning the respect of the Europeans in the process.

Slavery in the Akan States

In the past, attempts by some Akan leaders to dominate the entire region had resulted in tribal wars. As a result, victorious tribes had begun selling members of conquered tribes at local European slave markets. The more vulnerable tribes, such as the Ewe who lived in the lower Volta area, were continually subjected to being enslaved. Additionally, certain Africans were born into lineage slavery and were forced from their earliest years to serve the dominant African groups. The Akan states also bought slaves from the Portuguese. Most of these came from Benin, where the government regularly sold off its captives. After 1516, when the government of Benin reduced its military activity, most of the slaves that the Akan states purchased from Portugal came from the Niger Delta and the Igbo region. The Akan states retained some slaves for local use, while others were placed on slave ships bound for markets along the Atlantic slave-trading route.
Domestically, the Akan states used slaves in royal households and in transporting goods to market. Additionally, large numbers of slaves were put to work in construction, in mines, and on farms. A smaller number of slaves were employed as artisans in various crafts. The Akan states also designated some slaves to be trained to use flintlock muskets as part of citizen armies employed in the Akan quest to crush neighboring states and expand the existing Akan empire. Along with slaves, the Akan states also commandeered the services of immigrants and migrants for various tasks. In general, both slaves and forced laborers were allowed limited freedom because their numbers prevented total control over the population.

Rivalry among Akan States

As individual states became more powerful, competition arose among the Akan states, with Denkyira and Akwamu emerging as the most powerful. By the middle of the 17th century, Denkyira had won the right to control most of the western gold-bearing area and had begun forging an empire leading northward to the established European trading routes that led to Banda and Bono. During the 1670s, Denkyira seized control of the entire area around the western Gold Coast and beyond. On the eastern coast, Akwamu had begun to do the same. From 1677 to 1681, Akwamu worked on its campaign to win control of Accra, which had been under Denkyira control since 1629. Ultimately, Akwamu annexed Accra, in addition to the surrounding areas of the eastern territory. This expansion provided them with direct control of the trading forts operated by the English, Dutch, and Danish along the eastern Gold Coast. Thus, by 1702, Akwamu had also gained control of the east coast slave-exporting businesses. Despite their enormous strength, greed ultimately destroyed both Denkyira and Akwamu.
Asante, which had originally been a dependency of Denkyira’s, emerged as a major contender in the ongoing power struggle of the late 17th and early 18th centuries, giving birth to the powerful Ashanti state. Ashanti was formed from the various Akan states that had gathered together in the north-central section of the equatorial forest. The combined strength of these states enabled them to dominate the trading route from the western and central Sudan. Within the state of Ashanti, the various kings agreed to accept the supremacy of one king, to be based in the capital city of Kumasi. The first Ashanti king was Osei Tutu (c. 1680–1717). In 1698, Osei Tutu declared war on Denkyira, using arms from Akwamu. In 1701, Ashanti finally succeeded in overwhelming Denkyira, thereby gaining essential territory for its southward expansion. Three decades later, Akyem, an important Ashanti ally, defeated Akwamu. After the downfall of Denkyira and Akwamu, Ashanti became the most powerful influence in the area now known as Ghana, continuing to rule until the end of the 19th century, when the British conquered the area.

Ashanti Development and Expansion

Over the course of the 18th century, Ashanti strengthened its hold on the central forest region and began reaching outward to expand its territory. Each captive area was forced to pay tribute to Ashanti. Areas such as Dagomba in the northeastern area of the equatorial forest paid their tribute in slaves, which had in turn been taken captive from more remote areas of Africa. Ashanti then traded those slaves for firearms, smelted iron, and copper. Between the 15th and 19th centuries, some 4 million slaves had been taken for this purpose from south of the equator in an area that extended from Cameroon to Kunene.
Until the pope banned the sale and trade of European firearms to Ashanti, out of fear that radical Muslims would lay hold of the guns and use them against Christian traders, the Portuguese regularly traded weapons to Ashanti in exchange for slaves. By 1820, the Ashanti Empire controlled some 250,000 square kilometers organized into three distinct regions. The first was composed of the six metropolitan chiefdoms that had furnished the military power for King Osei Tutu. The bulk of the people of Akan descent lived in the second region. The third was composed of dependencies, such as Gonja and Dagomba, which were required to pay a tribute of 1,000 slaves each year. Since the strength of the Ashanti state was always dependent on the force of its military rather than on a sense of nationalism, it became impossible to maintain a hold on the tributary states that made up two-thirds of the Ashanti Empire. This weakness made Ashanti more vulnerable when the British declared war on the state in the 19th century. Today, the remaining Akan people belong to either eastern or western Akan groups. The five groups of eastern Akan, which all speak Twi, include Asante, Akuapem, Akyem, Denkyira, and Gomua. Sehwi-speaking western Akan is made up of Anyi, Ahanta, Baule, Sanwi (Afema), Nzima, and Aowin. Despite the fact that each subgroup has its own dialect, the groups are able to communicate with one another. While the Akan people continue to practice the tradition of matrilineal descent, some changes have been instituted to make inheritance laws more equitable. See also Africa, Portuguese in; Dutch East India Company (Indonesia/Batavia); Ewuare the Great; slave trade, Africa and the. Further reading: Fage, J. D. A History of West Africa: An Introductory Survey. New York: Cambridge University Press, 1969; ———. A History of Africa. London: Hutchinson, 1988; Fyle, Magbaily C. Introduction to the History of African Civilization.
Lanham: University of Maryland Press, 1999; Iliffe, John. Africans: The History of a Continent. New York: Cambridge University Press, 1995; Newman, James L. The Peopling of Africa: A Geographic Interpretation. New Haven, CT: Yale University Press, 1995; Oliver, Roland. The African Experience: From Olduvai Gorge to the Twenty-First Century. Boulder, CO: Westview Press, 2000. Elizabeth Purdy

AkbarEdit

(1542–1605) emperor of India Jalal ud-din Akbar was born in India in 1542 to Humayun, then a fugitive ruler. Akbar succeeded to a very shaky throne at age 13 but went on to enjoy a long and successful reign, becoming the greatest ruler of the Mughal (Moghul) Empire founded by his grandfather Babur and his followers, Muslims from Central Asia. Akbar spent much of his difficult childhood on the run and consequently never learned to read or write. However, he was a brilliant man with an inquisitive mind and a phenomenal memory who had others read to him throughout his life. Akbar’s leadership highlighted his diverse achievements. He was a good general who expanded his empire after personally leading troops to defeat the powerful Hindu Rajput warriors. He then married a Rajput princess, daughter of the ruler of Amber; she would become the mother of his heir. His lenient treatment of the defeated Rajputs, whom he kept as his vassals, foreshadowed his policy toward other Hindu subjects. In 1572, he conquered Gujarat, thereby gaining access to the sea. When he encountered the Portuguese, he grew to admire their ships, arms, and European merchandise. In 1573, he signed a treaty with the Portuguese viceroy ensuring safe passage for Indian Muslims crossing the Indian Ocean on pilgrimages to Mecca. Later he added Bengal, Baluchistan, Afghanistan, Kashmir, and part of the Deccan region to his empire. Like his grandfather Babur, Akbar was a builder. In Delhi, the tomb he built for his father was constructed of red sandstone and adorned with white marble, a precursor of the mature Indo-Islamic style of the Taj Mahal. He also built a fort at Agra from red sandstone. Above all, he was noted for building a new palace city at Fatehpur Sikri near Agra, close to the retreat of a Muslim holy man who was his mentor. It remained his headquarters until 1585, when he moved away and the palaces were never occupied again.
Akbar’s national policies aimed at uniting his subjects. The centerpiece was religious tolerance, partly the result of his disillusionment with Sunni Islam’s rigidity and intolerance and partly a means to conciliate his Hindu subjects. Thus he abolished the poll tax on non-Muslims and the special tax on Hindu pilgrims. He hosted religious debates of Hindu, Muslim, Parsi (Zoroastrian), and Christian (Jesuit) scholars at Fatehpur Sikri and concluded that no religion held the exclusive truth. Attracted by mysticism, he also took up Sufi Islam and Hindu yogi practices. Akbar eventually established a new religion called Din-i Ilahi, or Divine Faith, in 1582. With Akbar himself as spiritual guide, Din-i Ilahi drew mainly from Hinduism, Jainism, and Zoroastrianism. Orthodox Muslims were offended and accused him of heresy. He ruled as an autocrat served by ranked officials who were given salaries. Some 70 percent of his officials were foreigners, mostly Afghans and Persians, and Persian was the official language of his empire. The rest were Indians, both Muslim and Hindu. The employment of some Hindus in government service was an improvement in the status of Hindus over that under previous Muslim dynasties. He abolished tolls, made roads safe, and kept dues low to encourage commerce. Akbar was a patron of the arts, and culture flourished during his reign, enormously impressing the Europeans who visited India at the time. His last years were saddened by the deaths of two sons from drinking and drugs and by the revolt of his eldest son and heir, Selim (Salim). Similar troubles also plagued his successors, who faced revolts by their sons and civil wars among them. See also Jahangir. Further reading: Gascoigne, Bamber. The Great Moghuls. New York: Harper and Row, 1971; Richards, J. F. The Mughal Empire. Cambridge: Cambridge University Press, 1993; Schimmel, Annemarie. The Empire of the Great Mughals: History, Art and Culture. Chicago: The University of Chicago Press, 2004.
Jiu-Hwa Lo Upshur

Alawi dynasty in Morocco

The Alawi dynasty of Morocco, also known as the Filalis or Filalians, first appeared in Morocco sometime in the 13th century. Its members claimed they could trace their lineage directly to the prophet Muhammad (571–632). The dynasty’s name was derived from the name of its ancestor, Mawlay Ali al-Sharif of Marrakesh. Mawlay Rashid (r. 1666–72), the first Alawite ruler of Morocco, is considered the founding father of the dynasty. The name Alawi is also used in Morocco in a more general sense to identify all descendants of Ali, the cousin and son-in-law of the prophet Muhammad. At the time the Alawi surfaced in Morocco, sultan kings with absolute power had ruled Morocco for almost four centuries. In the 16th century, Morocco’s sultan kings had been forced to make decisions about foreign trade. While the rulers wanted the gunpowder and arms that trading with Europe could bring, they were hesitant to trade with the continent that Moroccans knew as the “land of infidels.” Weapons were particularly important for Morocco at that time because the country was facing Iberian expansion along the Atlantic and Mediterranean coasts. Members of the Alawi dynasty were also cognizant of the possibility of becoming a target of European colonialism. The rulers not only wanted to protect Morocco from foreign invaders but were also determined to maintain the purity of their Muslim society. In the past, they had accomplished this goal by banning foreign travel and restricting contact with all foreigners. Yet the likelihood of continuing such practices was diminishing, since foreign trade had become an essential economic activity. In 1666, Mawlay Rashid of the Alawi dynasty seized power after the death of Ahmad al-Mansur of the Sa’did dynasty. Rashid came to power by outmaneuvering Ahmad al-Mansur’s three sons. Rashid also killed his own brother, Mawlay Mohammad, who challenged him for the right to rule Morocco.
Once in power, Rashid appointed the ulema (a group of learned religious men) and noted scholars as his advisers, and he celebrated his victory by holding elaborate ceremonies that combined elements of Moroccan politics, religion, and culture. These rituals were designed to introduce the Moroccans to their new leader and to demonstrate the right of the Alawi to rule Morocco because of the dynasty’s strong connection with the past. Rashid had great respect for scholarship; he built the Madrasa Cherratin in Fez and an additional college in Marrakesh. He also reformed the monetary system and ensured that wells were dug in the eastern deserts. In 1672, Mawlay Isma’il succeeded his brother as the ruler of Morocco after Rashid was killed in a riding accident. Isma’il became known as the greatest sovereign of the early Alawi period. He established a form of government that survived until the 20th century. Isma’il also reached out to the French, with whom he formed an alliance against the Spanish. The partnership resulted in a steady supply of weaponry into Morocco and in a number of construction projects for new palaces, roads, and forts. To finance these projects, Isma’il levied heavy taxes and demanded ransoms for imprisoned Europeans. In the 17th century, Alawi nationalists launched a jihad (holy war) designed to strip local Christians of all land located on the Atlantic and Mediterranean coasts of Morocco. The Alawi dynasty continued to rule Morocco from the mid-17th century until 1912, when the country became a protectorate, with Spain controlling northern Morocco and France ruling the southern part of the country. In 1956, Morocco reestablished its independence, and the Alawi monarchy again rose to power under the rule of King Mohammed V. Since that time, the Alawi dynasty has continued to rule Morocco. In the 21st century, Moroccan members of the Alawi dynasty continue to practice close adherence to Sunni Islam.
Moroccan scholars have scientifically documented the Alawi claim to be directly descended from the prophet Muhammad. As a result, the Alawi dynasty continues to hold wide legitimacy in contemporary Morocco. The Alawi are credited with bringing economic prosperity to the country by growing the economy, establishing foreign trade links, and improving the overall standard of living. A Syrian branch of the Alawi dynasty, which practices the Shi’i school of thought, follows the teachings of Muhammad ibn Nusayr. More liberal than the Moroccan Alawi, the Syrians celebrate both Muslim and Christian festivals. Further reading: Bourgia, Rahma, and Susan Gilson Miller, eds. In The Shadow of the Sultan: Culture, Power, and Politics in Morocco. Cambridge: Cambridge University Press, 1999; Cohen, Mark I., and Lorna Hahn. Morocco: Old Land, New Nation. New York: Praeger, 1966; Ogot, B. A., ed. General History of Africa. Volume Five: Africa from the Sixteenth to the Eighteenth Centuries. Berkeley: University of California Press, 1981. Elizabeth Purdy

Albuquerque, Afonso de

(1453–1515) Portuguese explorer One of the great sea captains in Portuguese history, Afonso de Albuquerque captured the cities of Goa, Malacca, and Hormuz and founded the Portuguese empire in Asia. He was born in Alhandra, near Lisbon. Both his paternal grandfather and great-grandfather had been confidential secretaries to King João I and King Edward (Duarte), and his maternal grandfather had been an admiral in the Portuguese navy. He grew up at the court of his godfather King Afonso V, and when he was 20 he sailed in the Portuguese fleet to Venice and was involved in the defeat of the Turks at the Battle of Taranto. He then spent 10 years in the Portuguese army in Morocco gaining military experience. Albuquerque was present when the Portuguese under King Afonso V captured Arzila and Tangier in 1471, and Afonso’s son, King João II, made him a bodyguard and then his master of the horse. He returned to Morocco in 1489 and fought at the siege of Graciosa. When João’s cousin Manuel I became king in 1495, Albuquerque returned again to Morocco. It was during this time that Albuquerque became interested in Asia. The possibility of opening up a trade route was tantalizing to Albuquerque, and in 1503 he sailed with his cousin Francisco to Cochin on the southwest coast of India, where they built the first Portuguese fortress in Asia. King Manuel appointed Dom Francisco de Almeida as the first viceroy of India with the aim of increasing trade and establishing a permanent presence on the Indian subcontinent. In April 1506, Albuquerque set out on his second (and final) voyage, one that would last nine years. He was skilled in military tactics, seafaring, and handling men, and he was incredibly ambitious. However, he was only in charge of five of the fleet’s 16 ships. Overall command was given to Tristão da Cunha, who led the expedition up the east coast of Africa and around Madagascar.
They built a fort at Socotra to prevent Arab traders from passing through the mouth of the Red Sea and to ensure a Portuguese trade monopoly with India. In August 1507, Albuquerque was given permission by Tristão da Cunha to take six ships and 400 men. They headed straight for the Arabian and Persian coasts and, heavily armed, sacked five towns in five weeks. Albuquerque then decided to attack the town of Hormuz (Ormuz), located on an island between the Persian Gulf and the Gulf of Oman. Taking it would cripple Turkish trade with the Middle East, as it was the terminus for caravan routes from Egypt, Persia, Turkestan, and India. Even though Hormuz had a population of between 60,000 and 100,000, Albuquerque was able to capture the town and force it to pay him an annual tribute. Albuquerque, appointed to succeed Almeida, found Almeida reluctant to hand over the office. Almeida was keen to avenge the death of his son, who had been killed by an Egyptian fleet. He jailed Albuquerque and then led the Portuguese into a naval battle off the island of Diu in February 1509. In October 1509 the marshal of Portugal, Fernando Coutinho, on a tour of inspection, ordered the release of Albuquerque and demanded that Almeida hand over his office. Albuquerque then set out to create the Portuguese empire in Asia. In January 1510 he attacked the port of Calicut but was unable to capture it. Two months later he attacked and took the town of Goa. After being there for two months he was forced out, but he retook Goa in November 1510. Albuquerque then made for Malacca (now Melaka), the richest port on the Malay Peninsula and the center where traders from the Indonesian archipelago brought their spices. It had a population of 100,000 and was well armed. In July 1511, with 15 ships, three galleys, and 800 European and 200 Indian soldiers, Albuquerque attacked Malacca and after a day took the city, which his men looted.
They loaded their treasure into the Flor do Mar, and the ship was so overloaded that it sank off the coast of Sumatra; the wreck has never been found. Back in Goa, Albuquerque fought off attackers and then took a group of Portuguese and Indians to try to take the port of Aden. They failed and returned to India. In February 1515, he again sailed from Goa, taking 26 ships to Hormuz. However, he was taken ill in September and sailed back toward Goa. On the way he heard that his success had made him many enemies in Lisbon and that he had been replaced by an enemy, Lopo Soares. Albuquerque died on December 15, 1515, at sea off the coast of Goa. See also Africa, Portuguese in; Goa, colonization of; Malacca, Portuguese and Dutch colonization of. Further reading: Boxer, C. R. The Portuguese Seaborne Empire 1415–1825. Harmondsworth, Middlesex: Penguin Books, 1973; Diffie, Bailey W., and George D. Winius. Foundations of the Portuguese Empire 1415–1580. Minneapolis: University of Minnesota Press, 1977; Subrahmanyam, Sanjay. The Portuguese Empire in Asia 1500–1700. London: Longman, 1993; Villiers, J., and T. F. Earle. Albuquerque: Caesar of the East. Warminster, UK: Aris & Phillips, 1990. Justin Corfield

Almagro, Diego de

(c. 1475–1538) explorer and political leader A leading figure in the conquest of Peru, Diego de Almagro launched a rebellion against the Pizarro brothers around Cuzco that convulsed the newly conquered Andean territories in civil war (1537–38) and led to his own death by garroting at the hands of Hernando Pizarro. Almagro’s mestizo son, also named Diego de Almagro (Almagro the Younger), nominally headed the Almagrist faction that murdered Francisco Pizarro in 1541, but he, too, was captured and executed, in 1542. The name Almagro thus has come to be associated with internecine conflicts among Spaniards during the most tumultuous years of the conquest of the New World. Both men held substantial encomiendas in Panama, and in 1524 Diego de Almagro and Francisco Pizarro formed a partnership for exploration and conquest along the Pacific coast of South America. After two exploratory expeditions (1524 and 1526–28), Pizarro returned to Spain in mid-1528 and in Toledo received sanction for conquest from King Charles. The seeds of later dissension were sown in this Toledo agreement, as Pizarro was named governor and captain-general of Peru, while Almagro was given the much lesser title of commandant of Tumbez, an Incan city they had encountered in the Gulf of Guayaquil and the anticipated site of a new bishopric. During the third expedition, which resulted in Pizarro’s capture of the Inca Atahualpa at Cajamarca in November 1532, Almagro stayed behind in Panama, where he had taken ill. He rejoined Pizarro in April 1533 at Cajamarca, bringing some 150 Spanish reinforcements. Almagro’s men received a much smaller share of Atahualpa’s ransom than did Pizarro’s, sharpening the factionalism between the two leaders and their followers.
After their combined forces had taken and ransacked Cuzco, Pizarro sent Almagro and Sebastián de Benalcázar north to defeat the last substantial Inca military force and to prevent rival conquistador Pedro de Alvarado from seizing Quito first. They succeeded. Alvarado returned to Guatemala with a handsome bribe to ensure his departure; Almagro returned to Cuzco; and Pizarro went to the coast to found the new capital city of Lima. About this time, in early 1535, news arrived that King Charles had divided Peru, with Pizarro awarded the northern portion and Almagro the southern. With the actual document not yet in hand, rumors flourished among partisans of both camps that their leader had been awarded Cuzco. Open civil war was avoided by Francisco Pizarro, who persuaded his old comrade Almagro to head an expedition south into Chile. Almagro’s Chilean campaign (July 1535–April 1537) turned out to be a disaster, with no treasure but much hardship, many cruelties against the natives, and much native resistance. Upon his return to Cuzco in April 1537, Almagro was determined to wrest the city from Hernando and Gonzalo Pizarro. His forces took the city and held it for a year. A bitter civil war ensued between the two factions and their Indian allies. Hernando Pizarro was released, Gonzalo escaped, and both joined forces with Francisco on the coast. Marching inland, the forces of the Pizarro brothers roundly defeated the Almagrist faction in the Battle of Las Salinas, just outside Cuzco, on April 26, 1538. In July 1538, in Cuzco, Hernando Pizarro had Almagro garroted. Almagrist feeling against the Pizarros still ran high, however, culminating in the faction’s murder of Francisco Pizarro in Lima in June 1541. Diego de Almagro the Younger, a figurehead, ruled Lima for the next year, until the new viceroy, Vaca de Castro, definitively crushed the Almagrist faction on September 16, 1542, in the Battle of Chupas, just outside the city of Huamanga, and had its young mestizo leader executed.
Thus ended the bitter civil war between the Pizarrist and Almagrist factions in Peru. The conflict was emblematic of intra-Spanish divisions in the conquest of the Americas, in its violence and factionalism comparable to the civil wars between the conquistadores of Central America a few years earlier. See also Peru, Viceroyalty of; voyages of discovery. Further reading: Lagasca, Pizarro. From Panama to Peru: The Conquest of Peru by the Pizarros, the Rebellion of Gonzalo Pizarro and the Pacification by La Gasca; an Epitome of the Original Signed Documents to and from the Conquistadors, Francisco, Gonzalo, Pedro, and Hernando Pizarro, Diego de Almagro, and Pacificator Las Gasca, Together with the Original Signed Ms. Royal Decrees. London: Maggs Bros., 1925; Hemming, John. The Conquest of the Incas. New York: Harcourt Brace Jovanovich, 1970. Michael J. Schroeder

Altan Khan

(c. 1507–c. 1582) Mongol tribal leader, warrior Altan Khan led a federation of Mongol tribes that occupied the region called Chahar in today’s Inner Mongolian region of China. His people were formidable because of their proximity to Ming China’s capital Beijing (Peking), their wealth among Mongol tribes because of trade, and their prestige as the legitimate successors of Genghis Khan. Under his grandfather Dayan Khan, also known as Batu Mongke (c. 1464–c. 1532), and then under Altan Khan himself, the Mongols came close to unity and thus were able to threaten China. Altan Khan also forged a close religious alliance with the Yellow Hat Sect of Tibetan Buddhism. After their ouster from China in 1368 by the Ming Dynasty (1368–1644), the Mongols broke into five groups that fought among themselves and as a result did not realize their military potential. Altan Khan was important because he united the Chahar Mongols and began launching annual raids against Ming lands along the northern frontier, even threatening Beijing in 1550. In one raid in 1542, he reputedly took 200,000 prisoners and 2 million head of cattle. Despite winning favorable trading rights with the Ming, the Mongols continued to raid Ming outposts for two more decades, until 1570, when Altan Khan’s grandson defected to the Ming governor Wang Chonggu (Wang Chung-ku) at Datong (Tatung). A new Ming emperor was ready to reverse the hostile relations between China and the Chahar Mongols. Thus he treated the Mongol defector as a guest, assured Altan Khan of the young man’s safety, and began negotiations that culminated in a settlement in 1571. It provided for the establishment of many trading points along the Great Wall of China and a Chinese title for Altan Khan as the Prince Shunyi (meaning “compliant and righteous prince”). Altan Khan also played an important role in the religion of the Mongols.
Tibetan Buddhism had won increasing numbers of converts among the Mongols since Kubilai Khan’s acceptance of that faith in the late 13th century. In 1577, the head of the Yellow Hat Sect in Tibet visited Mongolia. Altan Khan used the occasion to declare Tibetan Buddhism the official religion of all Mongols and conferred on that cleric the title Dalai Lama, which means “lama of infinite wisdom” in Mongol. The title was conferred retroactively on that lama’s two predecessors and is carried by his successors to the present. In return, the Dalai Lama conferred on Altan Khan the title king of religion. Thus began the close relationship between the Mongols and the Yellow Hat Sect of Tibetan Buddhism. In 1589 Altan Khan’s great-grandson was proclaimed the reincarnation of the third Dalai Lama, becoming his successor as the fourth Dalai Lama; he was the only non-Tibetan to hold that title. The Mongol-Tibetan axis that resulted has persisted to the present and plays an important role in the politics of Inner Asia. Significantly, the conversion changed the Mongols from ferocious warriors to pious lamas and laymen, effectively ending their dreams of future conquest. Altan Khan’s early raids struck fear into the Chinese over the revival of Mongol militarism, but his conversion and that of his followers to Tibetan Buddhism ended that threat. Further reading: Grousset, Rene. The Empire of the Steppes: A History of Central Asia. Naomi Walford, trans. New Brunswick, NJ: Rutgers University Press, 1994; Jagchid, Sechin, and Van Jay Symons. Peace, War, and Trade along the Great Wall: Nomadic-Chinese Interaction through Two Millennia. Bloomington: Indiana University Press, 1989. Jiu-Hwa Lo Upshur

Alvarado, Pedro de

(1485?–1541) Spanish conquistador Renowned as one of the most powerful, fearless, and ruthless of all the Spanish conquistadores, Pedro de Alvarado was a key actor in the conquest of Mexico and the conquest of Central America, and a minor player in the conquest of Peru. His flowing blond hair, imposing demeanor, and skill in battle reportedly prompted the Aztecs to nickname him Tonatiuh, meaning “the daytime Sun” (an exceptionally high compliment in their solar-centric culture), while the Indians of Guatemala are said to have considered him so handsome and cruel that they made masks of him that became part of their culture and folklore. According to the Spanish priest Bartolomé de Las Casas, Alvarado was responsible for the deaths of 4 to 5 million Indians in Guatemala between 1524 and 1540. Born in Badajoz, Spain, around 1485, Alvarado arrived in Hispaniola in 1510 and participated in the exploratory expedition of Juan de Grijalva along the Mexican gulf coast in 1518. He then served as the chief lieutenant of Hernán Cortés in the conquest of Mexico. It was his impetuous slaughter of the celebrants in Tenochtitlán in mid-May 1520, during Cortés’s absence, that led to the catastrophic noche triste and nearly spelled the doom of the Spanish expedition. After the subjugation of Tenochtitlán, Alvarado was sent by Cortés in 1523 to conquer the kingdoms and polities of Central America. For the next 11 years, Governor and Captain-General Alvarado headed the Spanish and Indian army that crushed the indigenous polities of Guatemala, a protracted process. Tales of his atrocities are abundant, and his own letters on these events have been translated and published. In 1534–35, Alvarado headed to the northern Andes around Quito to participate in the subjugation of indigenous polities there.
Running afoul of rival conquistadores Sebastián de Benalcázar and Diego de Almagro, Alvarado abandoned his Andean venture and headed back to Spain (1536–39), where he further solidified his power base. Returning to Mexico, in June 1541, he received fatal wounds when he fell from a horse and was crushed during the Mixtón War at Nochistlán in Guadalajara. Further reading: Gibson, Charles, ed. The Black Legend: Anti-Spanish Attitudes in the Old World and the New. New York: Knopf, 1971; Kelly, John E. Pedro de Alvarado, Conquistador. Princeton, NJ: Princeton University Press, 1932; Mackie, Sedley J., ed. and trans. An Account of the Conquest of Guatemala in 1524 by Pedro de Alvarado. New York: The Cortés Society, 1924; Thomas, Hugh. Conquest: Montezuma, Cortés, and the Fall of Old Mexico. New York: Simon & Schuster, 1993. Michael J. Schroeder

Anabaptism

Anabaptism refers to a series of Reformation-era movements that were part of what is commonly called the radical Reformation. The word Anabaptism comes from the Greek and means “to rebaptize.” Anabaptist interpretation of the Bible led adherents to hold that their original baptism as infants was invalid, because only as an adult could one choose to be a part of God’s select people. Thus, members who had been baptized as infants were often rebaptized. Most modern-day Baptists, while holding similar beliefs, only indirectly trace their roots to the Anabaptists.
Beginnings
The more radical reformers were not united as a group, mostly because they tended toward extreme views and had little patience for the views of others. There were several key figures in the period from 1521 to 1535, which began with the Zwickau prophets and ended with Jan Bockelson and the Münster Commune. Although Anabaptists claim dissident roots that go back to the time of Constantine, the first visible signs during the Reformation came in December 1521 in Wittenberg, Germany, home of Martin Luther. Luther was hidden at the Wartburg Castle when three men, Nicolas Storch, Thomas Dreschel, and Mark Thomas Stübner, arrived in Wittenberg from Zwickau, a city with a history of radical Christian movements. These so-called Zwickau prophets at first simply took refuge in Wittenberg, which by that time had a reputation as a safe haven for those dissenting from Roman Catholicism. Eventually their efforts to convince others of their beliefs caused enough consternation that Luther came out of hiding in 1522 to interview the men, leading to their eventual expulsion from Wittenberg. The men from Zwickau were connected to a former resident of the city, Thomas Müntzer, a key figure in the Peasants’ War of 1524–25. Not long after the war, a separate group began in Switzerland under the leadership of Conrad Grebel.
Grebel, at first a follower and friend of Ulrich Zwingli, eventually disagreed with Zwingli regarding the role of the church and state. Grebel, like many other Anabaptists, saw Christians as separate from the society around them, and he resisted any entanglement between Christians and the government. The period 1524–35 was a time of strong conflict between Anabaptists and other Christians. Many Anabaptists were caught up in end-times expectations. The first and most violent conflict was the involvement of Müntzer in the Peasants’ War. Müntzer was convinced that God was coming to judge and condemn the unrighteous, and that the lowly and meek would soon inherit the earth by conquering the unrighteous rulers and nobles (an aberration of the Christian teaching that at the end of time, God would judge the unrighteous). This eventually led to armed conflict that was put down in April 1525. For his part in it, Müntzer was tortured and killed. In January 1525, Zwingli and Grebel held a disputation in Zürich to debate baptism, with Zwingli prevailing. Grebel left Zürich, and by October he was imprisoned for his beliefs. He escaped in March 1526 and died of the plague that summer. In 1527, a group of Anabaptists whose followers were called the Swiss Brethren met in Schleitheim, Switzerland, and adopted the Schleitheim Confession. In it, seven articles described the basic theology of the Anabaptist movement: adult baptism, the “ban” (expulsion from the church of unfaithful believers), a definition of the Lord’s Supper, separation from the world, a definition of the office of the pastor, refusal to take part in military service, and refusal to swear an oath. The author, Michael Sattler, was subsequently put to death for his beliefs, and many of his fellow participants were eventually killed. Later that year, a different group of Anabaptists connected with Zwickau, led by Hans Hut, Hans Denck, and Melchior Hoffman, met in Augsburg, Germany.
This so-called Martyrs’ Synod (of the 60 attendees, only two were alive five years later) emphasized the imminent return of Christ (which some expected in 1528), along with a communal sharing of goods.
Heretics
In the coming years, many Anabaptists were executed as heretics for their beliefs. Both their view on baptism and their refusal to bear arms were grounds for punishment. Some were drowned as a mockery of their view of baptism (which the Anabaptists defined as full immersion). Many fled to nearby Moravia, where a substantial community was established under the leadership of Jacob Hutter. Hutter was captured and burned at the stake in Austria in 1536 for refusing to renounce his faith. The culmination of the extreme wing of Anabaptism was the rise of the Münster Commune in 1534–35. Followers of Melchior Hoffman made their way to this German city and in a series of bizarre episodes took over the city, forcibly converting townspeople to Anabaptism and eventually instituting polygamy and the “Kingdom of Münster” until the city was conquered in 1535. After 1536, there were fewer violent episodes, though Anabaptists were persecuted by Roman Catholics, Lutherans, and Reformed alike. Anabaptists found new leaders, most notably Menno Simons, a former Catholic priest who became an Anabaptist in 1536 in the Netherlands. His followers were called Mennonites. The followers in Moravia, called Hutterites (after Jacob Hutter), were led by Peter Riedeman. By 1600, there were over 15,000 Hutterites in Moravia. The Amish were a group of Mennonites who, under the leadership of Jacob Amman in 1693, separated from the other Mennonite churches in Switzerland. Many migrated to Pennsylvania in the early 1700s. While some Baptist denominations can trace their origins to Anabaptist influence, most trace their origins to the English Reformation and the Puritan movement of the later 1500s and early 1600s.
While both Baptists and Anabaptists practice adult or “believer’s” baptism, Baptists do not place the same emphasis on nonviolence or separation from the world. Today, the largest grouping of Anabaptists is the Mennonites, with around 1,250,000 followers throughout the world. The Amish number around 120,000 and are located primarily in the United States, with a small number in Canada. The Hutterites number around 10,000 and are located in the United States and Canada. All of these groups share the foundational beliefs and characteristics of the Anabaptists: remaining separate from the world around them, not serving in the military, and refusing to take oaths. The Amish and Hutterites still practice a strong communal approach to possessions. See also Calvin, John; Counter-Reformation (Catholic Reformation) in Europe; justification by faith; Melancthon, Philip. Further reading: Elton, G. R. Reformation Europe 1517–1559. Oxford: Blackwell Publishers, 1999; Estep, William Roscoe. The Anabaptist Story: An Introduction to Sixteenth-Century Anabaptism. Grand Rapids, MI: William B. Eerdmans Publishing Company, 1996; Klaassen, Walter, ed. Anabaptism in Outline. Kitchener, ON: Herald Press, 1981; Leichty, Daniel, ed. Early Anabaptist Spirituality: Selected Writings. New York: Paulist Press, 1994; Weaver, J. Denny. Becoming Anabaptist: The Origin and Significance of Sixteenth-Century Anabaptism. Scottdale, PA: Herald Press, 1987. Bruce D. Franson

Andean religionEdit

Because of the diversified nature of Andean tribes and the Inca Empire, a complex system of religious beliefs and rituals developed. It is difficult to conduct a comprehensive examination that includes all of the different religions in the Andean region. A closer look at the Moche, Chinchorro, and Inca societies and religions provides insight into the basics of religious belief and practice in this region. The Inca, Chinchorro, and Moche cultures developed a complex system of religious beliefs as a result of the sedentary or semisedentary nature of their societies. Historians believe that after 7500 b.c.e., the indigenous inhabitants of the Andean regions began experimenting with certain plants in order to determine the conditions in which they could best flourish. This experimentation with agriculture was crucial, as it allowed for an expanding population that developed the craft specialization, political hierarchy, and complex religious beliefs that later characterized a number of indigenous tribes in Andean societies and the Inca Empire. The rulers of the Inca Empire and the Moche depicted themselves as possessing supernatural powers to help justify their ability to rule society. This depiction is evidenced by an archaeological examination of the Moche tomb at Sipán, which discovered that the skeletons in this tomb were clothed in regalia similar to that worn by the mythical individuals imprinted on Moche artwork. The desire of the Inca rulers to depict themselves with supernatural powers is illustrated in various myths. The Inca incorporated the gods of the tribes they conquered into their religion, as is illustrated by the Inca devotion to the gods Pachacamac and Viracocha. In fact, the gods of conquered tribes were sometimes among the more popular and powerful deities in the Inca pantheon; Viracocha, for example, was believed to be one of the more powerful Inca gods because he had the ability to give life.
Besides sharing gods with conquered tribes to unite their empire, the Inca also used children from various tribes as human sacrifices.

Human Sacrifices

Human sacrifices were used by a number of indigenous tribes in the Andes for both religious and political purposes, as becomes clear when examining the Inca Empire and, to a lesser extent, the excavations at Tiwanaku. Excavations at Tiwanaku have uncovered evidence that human sacrifices were practiced in this region in the seventh century c.e., but it is difficult to determine whether religious and/or political reasons motivated these sacrifices. Human sacrifices were used by the Inca to maintain social bonds among the various tribes in the Inca Empire, as children from these tribes were either taken or presented to the Inca for this particular purpose. The families to which these children belonged were given a position of power in the Inca Empire, or goods, in return for giving up their children. Recent discoveries of three children of varying ages who were sacrificed in the mountains of Argentina during the late 15th or early 16th century illustrate that the Inca believed that children were not only offerings to their gods but also ambassadors between the Inca and their deities. The tomb at Cerro Llullaillaco, which is 22,110 feet above sea level, held the remains of three children: a male and a female, both approximately eight years old, and another female approximately 14 years old. The goods that the Inca deposited near the three children provided archaeologists significant information regarding Inca religion. Archaeologists believe that the three llama statuettes positioned near one of the sacrificed children, two of which were made of spondylus (mollusk shell) while the other was constructed of silver, were offerings to Inca deities to seek divine assistance in guaranteeing that Inca herds remained fertile.
Archaeologists also hypothesize that the two male statues, one constructed of spondylus and the other of gold, were depictions of either Inca gods or Inca nobles. Archaeologists are also able to hypothesize about the clothing that was deposited with the sacrificial victims. The tunic the male was wearing was too large for him, indicating either that it was an offering to the gods or that the boy was expected to grow into it in the afterlife. Two extra pairs of sandals found by the boy also suggest that the Inca believed in life after death. The 14-year-old female victim was also wearing a tunic created for a male, which suggests that this was a present for the gods.

Oracles

Oracles attracted large audiences and thus played a significant role in creating unity among the various tribes of the Andes. Pachacamac was one of the more popular locations used by the local population for divination purposes. Individuals seeking to enter certain parts of this temple were forced to undergo certain rites, such as fasting for 20 days to gain access to the lower sections of the temple. Individuals seeking to enter the upper levels of the temple were forced to fast for one year. A piece of cloth was hung between the idol and the priest who was seeking divine advice for a petition, preventing the priest from viewing the idol. Blood acted as nourishment for the idol, which was fed this substance on a regular basis. Mummification was a practice used by the indigenous tribes of the Andes for several millennia prior to Spanish contact. The Chinchorro, in the area of Chile and Peru, practiced this death ritual at least seven millennia ago. Chinchorro culture did not limit mummification to the elite of society, as archaeological discoveries show that the Chinchorro mummified individuals regardless of gender, age, or class.
The mummification of Chinchorro corpses followed a certain procedure: the skin was stripped off, and reeds and sticks were then attached to the remains to maintain the basic skeletal structure. After this was done, the Chinchorro stuffed the corpses with plants and ash or dirt and then painted them. It is difficult to assess whether the mummification of the Chinchorro corpses influenced other cultures in the Andes region to mummify their ancestors, but mummification was an important aspect of many Andean societies. Certain indigenous tribes used mummification to keep the corpses in their homes so that they could be escorted through the cities during the Festival of the Dead. The Inca practiced ancestor worship, and Inca royalty were mummified and their royal palaces maintained by a group of people known as the panacas. It was the responsibility of the panacas to tend to the royal mummies. By examining this aspect of Inca society, historians can conclude that the royal mummies played an important social role, since they were expected to participate in certain ceremonies and various social engagements.

Dynamics of Religion

The arrival of Christopher Columbus in the Caribbean in 1492 changed the dynamics of religion in the Andean region, as thousands of Spanish friars came after Columbus to convert the indigenous populations to Christianity. The flexibility of the Inca religion is a compelling reason why many of the indigenous people in the Andes converted to Christianity so readily. The Spanish friars employed a variety of tactics to convince the indigenous populations to convert. They petitioned the Spanish Crown to alleviate the labor tribute imposed on the natives because they believed that it needed to be more moderate in order to ensure that Christianity flourished. This issue resulted in a bitter debate between the church and secular interests concerning the treatment of the indigenous populations.
Today Roman Catholicism has a sizable following in the Andes region. Various aspects of the lives of the natives illustrate that the premise of Christianity was accepted in the 16th century. This is evidenced through the artwork of Francisco Tito Yupanqui, whose pieces, such as his 1582 sculpture of Our Lady of Copacabana, show the devotion of some natives to Christianity. Worshippers have attributed many miracles to this sculpture, and the stories of these miracles are among the reasons the image of Our Lady of Copacabana has such a large following; they have also motivated other artists to create similar images throughout Peru. There is no doubt that a great number of indigenous people in the Andes accepted Christianity, but a number of these natives refused to reject their past religions completely. Historians have actively debated the degree to which syncretism (reconciling different religious viewpoints into a single belief system) developed among the indigenous populations in the Andes. There is artistic evidence suggesting that a great deal of syncretism existed in the Andes. For example, within the cathedral in Cuzco, Peru, is a chapel called La Linda that is home to a painting of an Andean wearing a robe with symbols associated with both Jesus Christ and the Inca god Inti. The religions of the Andes are a complex and diversified facet of Andean societies. The Inca, Chinchorro, and Moche left indicators of their complex religious beliefs concerning the afterlife through their respective burial practices. The Moche and the Inca in particular used their religion to reinforce their political hierarchies. Religion was also a way to unite various tribes, as in the cultural sharing between the Inca Empire and the tribes it conquered, or in the use of oracles.
The Spanish conquest of the Inca Empire by the conquistador Francisco Pizarro in the 1530s, and the subsequent subjugation of other Andean tribes by the Spanish, changed the religious dynamics in the Andes. The Catholic Church attempted to convert the indigenous populations to Christianity, but because the natives refused to renounce their existing religious beliefs completely, the result was a blending of indigenous religions and Christianity. See also Atahualpa; Aztecs, human sacrifice and the; Cuzco (Peru); Peru, conquest of. Further reading: Arriaza, Bernardo. “Chinchorro Mummies.” National Geographic (March 1995); Bauer, Brian. “Legitimization of the State in Inca Myth and Ritual.” American Anthropologist (June 1996); Keen, Benjamin. A History of Latin America. Boston, MA: Houghton Mifflin Company, 1996; MacCormack, Sabine. Religion in the Andes: Vision and Imagination in Early Colonial Peru. Princeton, NJ: Princeton University Press, 1991; Taylor, Kenneth, William Taylor, and Sandra Lauderdale Graham, eds. Colonial Latin America: A Documentary History. Woodbridge, CT: Scholarly Resources, 2002. Brian de Ruiter

AnneEdit

(1665–1714) queen of Great Britain The last of the Stuart rulers, Anne was born on February 6, 1665, in London to King James II (r. 1685–88) and Anne Hyde. Although her father converted to Roman Catholicism, Anne’s uncle, King Charles II, gave orders that Anne and her sister, Mary, were to be raised Protestant. In 1683, Anne married Prince George of Denmark, and by all accounts the two were well matched and content in marriage. They were plagued, however, by their inability to raise a family. Of at least 18 pregnancies, 13 ended in miscarriage or stillbirth, and none of the other infants lived to the age of two except William, the only child to survive into childhood; he died in 1700 at the age of 11. Anne entered the line of succession according to the 1689 Bill of Rights and succeeded her brother-in-law, William III (reigned 1689–1702). She took the throne on March 8, 1702, as queen of England, Scotland, and Ireland. Anne was determined to look after the Anglican Church, believing that God had entrusted it to her care. The War of the Spanish Succession (1702–13) erupted over disputed claims to the Spanish throne. This conflict dominated Queen Anne’s reign. France, Spain, and Bavaria were pitted against Britain, the Netherlands, Austria, most of Germany, Savoy, and Portugal. Louis XIV (1638–1715) had repudiated the Partition Treaty of 1698’s solution to the succession problem. He barred British trade with the Spanish Indies and refused British imports as he set about his expansionist agenda. The dominant figure among the allies was General John Churchill, duke of Marlborough (1650–1722), who marched rapidly to Blenheim to defeat the French in 1704. The Treaty of Utrecht of 1713 ended the war, and its provisions were beneficial to Britain’s colonial and commercial interests. Britain’s maritime supremacy was intact. Britain received Gibraltar and Minorca in Europe, along with Newfoundland, Nova Scotia, and the Hudson Bay territory in North America.
It won exclusive rights to supply slaves to the Spanish colonies. France was forced to recognize Protestant succession to the throne of Britain. In 1707, England and Scotland combined under the Act of Union to become the single kingdom of Great Britain, making Anne the first monarch of Great Britain. The union of England and Scotland was mutually advantageous. Scotland accepted free trade, better economic opportunity, and an intact church in exchange for recognition of the Protestant English succession to the throne. England also benefited politically and militarily by having the land and coastline of Scotland as part of its kingdom. The parliamentary party differences between the Tories and the Whigs fully emerged during Anne’s reign. The Whigs were advocates of religious toleration, constitutional government, and the War of the Spanish Succession. The Tories adhered to the Anglican Church and divine right theory and supported the war only at early stages. Marlborough, a Tory, had influence over the queen through his wife, Sarah Jennings (later Sarah Churchill, duchess of Marlborough, 1660–1744). Marlborough switched his loyalty to the Whigs and brought his son-in-law, Charles Spencer Sunderland, in as secretary of state. Anne excluded other Tories from office at the insistence of the Marlboroughs and Sidney Godolphin (lord high treasurer, 1702–10). The Tories passed the Occasional Conformity and Schism Acts in 1711 and 1714, aimed at weakening the Nonconformists. But the Tory desire for putting Prince James Francis Edward Stuart, “The Old Pretender,” on the throne before the queen’s death was not fulfilled. Anne had not produced an heir to her throne, so she arranged for the accession of a distant cousin, the Protestant Hanoverian prince George Louis (King George I, 1714–27). The Whigs were triumphant and enjoyed power for half a century. Queen Anne died on August 1, 1714, in London. She had no surviving children. 
See also British North America; Scottish Reformation; slave trade, Africa and the; Stuart, House of (England). Further reading: Brown, Beatrice Curtis. Anne Stuart, Queen of England. London: G. Bles, 1929; Gregg, Edward. Queen Anne. Yale English Monarchs. New Haven, CT: Yale University Press, 2001; Hodges, Margaret. Lady Queen Anne: A Biography of Queen Anne of England. New York: Farrar, Straus & Giroux, 1969; Hopkinson, M. R. Anne of England. New York: MacMillan, 1934; Lockyer, Roger. Tudor and Stuart Britain. New Delhi: Orient Longmans, 1970; Trevelyan, G. M. England under Queen Anne: Blenheim. London: Fontana Library, 1965. Patit Paban Mishra

Araucanian Indians (southwestern South America) Edit

A symbol of implacable resistance to Spanish domination, the Araucanian Indians of Chile successfully repulsed repeated Spanish efforts to subdue them and were not fully conquered until the late 19th century. Occupying the western slopes of the Andes in the fertile lands between roughly 30 and 43 degrees south latitude, the Araucanians were loosely incorporated into the Inca realm in the late 1400s, though Inca influence was never strong. Sedentary agriculturalists who cultivated corn, beans, and other crops, the Araucanians were less a unified polity than a series of independent chieftaincies sharing the same language and broadly similar social and cultural attributes. The first Spanish incursion into the area, led by Diego de Almagro in 1535–37, met with bitter disappointment. The second, led by Pedro de Valdivia beginning in 1540, was nominally more successful. In 1541, Valdivia founded Santiago and a number of lesser settlements. After returning to Peru in 1547 and helping suppress the rebellion of Gonzalo Pizarro, Valdivia was named governor of Chile. From 1549, he continued his effort to conquer the Araucanians, marching south to the Bío-Bío River and founding the fortress-towns of Concepción (1550) and Valdivia (1552). Dividing the subjugated Indians into encomiendas and heartened by reports of large deposits of gold, Valdivia encouraged miners and prospectors to stream into the district. In 1553, a large force of Araucanians from the province of Tucapel, under the leadership of the chieftains Lautaro and Caupolicán, launched a counterattack that annihilated an entire Spanish expedition, including Governor Valdivia, whom they ate in ritual cannibalism. A general uprising continued for four years. The Araucanians’ exploits were immortalized in the epic poem La Araucana (pub. 1569–89) by the Spanish poet Alonso de Ercilla y Zúñiga. A brutal war followed. In 1598, victorious Araucanians captured and ate Governor Martín García de Loyola.
By 1600, the successors of Lautaro and Caupolicán had destroyed most of the nascent Spanish settlements south of the Bío-Bío. Over the next two centuries, there emerged a complex military and political struggle, as the Spanish settlements slowly grew and groups of Araucanians rose in major uprisings in 1723, 1740, and 1776. Scholars have emphasized the internal transformations in Araucanian culture, politics, and militarism, and the role played by Spanish deserters, as key to their long success in resisting Spanish domination. They were not militarily conquered until 1883, while their cultural influence remains strong in Chile today. See also Andean religion. Further reading: Dillehay, Thomas D. Araucanians: Empire and Resistance in the South Andes. Cambridge, MA: Cambridge University Press, 2007; Padden, Robert Charles, and John E. Kicza, eds. The Indian in Latin American History: Resistance, Resilience, and Acculturation. Wilmington, DE: Scholarly Resources, 1993. Michael J. Schroeder

art and architectureEdit

From the 1390s onward, Renaissance ideas influenced European styles of art and architecture. This was first seen in the architecture of Florence, Italy, with the completion of the Duomo. The building of the cathedral had ended in 1296 without the dome. Work on the dome started in 1419, when the architect Filippo Brunelleschi (1377–1446) created the design and got the city fathers to agree to it; it was completed in 1436. The baptistery, near the cathedral, has magnificent bronze doors depicting the Gates of Paradise, made by Lorenzo Ghiberti (1378–1455) from 1425 until 1452 in a distinctly Renaissance style; these doors, along with the nearby Basilica di San Lorenzo (construction started in 1425), are harmonious examples of Renaissance art and architecture. The splendor of Florence spread to other parts of Italy. One of the largest artistic and architectural achievements was the rebuilding of St. Peter’s Basilica, Rome, beginning in 1506, with Michelangelo as architect of the basilica and painter of the ceiling of the Sistine Chapel from 1508 until 1512. In Venice, work had begun on the Doges’ Palace in the 1340s. Leonardo da Vinci (1452–1519) painted the Mona Lisa and The Last Supper and created other works of art and science. Other artists and architects of the period include Leon Battista Alberti (1404–72), Piero della Francesca (c. 1416–92), Benozzo Gozzoli (c. 1420–97), and Masaccio (Tommasso Guidi, c. 1401–28), with Tintoretto (Jacopo Robusti, 1518–94) flourishing from the 1560s, Giovanni Lorenzo Bernini (1598–1680) from the 1620s, and Canaletto (Giovanni Antonio Canale, 1697–1768) painting the first of his famous Venetian views in 1723. In the Mediterranean, following the defeat of the Turks at Malta in 1565, work began on building the city of Valletta close to the forts that had held out during the siege. The Italian general and architect Gabrio Serbelloni (1509–80) was involved in much of the work there.
In Spain, the architectural style was moving from the Early Gothic to the Late Gothic, with the Church of San Juan de los Reyes in Toledo expressing the Isabelline style that marked the period after the accession of Ferdinand and Isabella, the capture of Granada in 1492, and the voyages of Christopher Columbus to the New World. Philip II’s construction of his new palace, San Lorenzo de El Escorial, in the 1560s represented the emergence of Spain as a major world power, evidenced by the conquest of the Americas and the destruction of the Ottoman navy at the Battle of Lepanto in 1571. The 17th century in Spain saw many of the greatest Spanish artists flourish: El Greco (Domenikos Theotokopoulos, 1541–1614), Bartolomé Esteban Murillo (1617–82), Jusepe Ribera (1591–1656), Diego Rodríguez de Silva y Velázquez (1599–1660), and Francisco de Zurbarán (1598–c. 1664). In France, the Renaissance ushered in the development of new artistic and architectural styles, although the Wars of Religion from 1562 until 1598 caused massive destruction. In terms of military architecture, Marshal Sébastien Le Prestre de Vauban (1633–1707) drew up a new style of fortification, which soon became popular around the world; this style featured low, thick walls, often made of earth with a stone surround and protected by artillery, rather than the tall stone walls of the medieval period. The new Louvre Palace was constructed starting in 1546. In the 1660s Louis Le Vau and, from the 1670s, his successor Jules Hardouin-Mansart (1646–1708) worked on turning the former royal hunting lodge at Versailles into a palace that would be grander than any other in the world. Many of the great châteaux of the Loire Valley also date from this time, with that at Chantilly being exceptional in its size, although much of the present building was rebuilt in the 1870s.
Paintings by Nicolas Poussin (1594–1665) and others frequently refer to classical mythology and biblical themes, and a number of recent writers see “hidden messages” in the works of Poussin. The founding of the French Royal Academy in 1648 by Charles Le Brun opened up French art, paving the way for the open-air scenes of Jean-Antoine Watteau (1684–1721). In Britain, the Tudor style of architecture gradually gave way to the more expansive Elizabethan style, then to the Jacobean and Restoration styles, and, during the 18th century, to the Georgian. Following the end of the Wars of the Roses in 1485, sections of many castles were destroyed or converted. Elegant country houses and “small” palaces were built, with Hampton Court to the southwest of London, Nonsuch Palace in Surrey, and Hatfield House in Hertfordshire all dating from the early Tudor period. A number of the Oxford and Cambridge University colleges are from this date. For more modest buildings, the use of black-painted beams as a feature made the style recognizable around the world. By the late Elizabethan period, increased prosperity was often reflected in architectural flourishes such as brick chimneys. Jacobean England, named after James I, king from 1603 until 1625, saw architects such as Inigo Jones (1573–1652) flourish. During the English Civil Wars in the 1640s, much energy was put into building fortifications, or fortifying old buildings, often with little success. In Restoration England, the most famous of the early modern architects, Sir Christopher Wren (1632–1723), was able to work on the rebuilding of many churches destroyed in the Great Fire of London, with his masterpiece being St. Paul’s Cathedral. Other notable buildings of this period include Guy’s Hospital in London and some of the buildings at Greenwich.
Of the artists, Anthony Van Dyck (1599–1641) painted a number of the important people of Jacobean and civil war England, and Godfrey Kneller (1646/49–1723) painted portraits of most of the major political and society figures of the late 17th and early 18th centuries. By the 1750s, Georgian urban architecture placed terraced houses around squares like London’s Bedford Square. The most well-known Georgian architects were Colin Campbell (d. 1729); Richard Boyle, third earl of Burlington (1694–1753), who designed Chiswick House; and William Kent (1685–1748), who designed Horse Guards, Whitehall, and Holkham Hall, Norfolk. Elsewhere in Europe, there was also a great flourishing of the arts, with Renaissance artists such as Hubert van Eyck (c. 1366–1426) and Jan van Eyck (c. 1390–1441), and later Rembrandt van Rijn (1606–69) and Jan Vermeer (1632–75), famous in Flanders and the Netherlands. In central Europe, one of the most famous artists was the Nuremberg-born Albrecht Dürer (1471–1528). This era also saw the construction of cathedrals and palaces, one of the best examples being the Hofburg in Vienna, Austria, which had the Amalia Wing and the Royal Chapel added in the 16th century, and the Imperial Chancery Wing in the 18th century. Mention should also be made of the Graz-born Johann Bernhard Fischer von Erlach (1656–1723), who developed the Austrian baroque style. Sadly, the Thirty Years’ War (1618–48) led to the destruction of much of the splendor of the Renaissance in many countries. Military architecture was also important in eastern Europe, in Poland, Hungary, Romania, and Russia.
The great castle at Königsberg was reinforced and enlarged, with much work also undertaken in other parts of the Baltic, in Oslo (Norway), Smolensk, and Moscow: the Kremlin Wall was built in 1486, the Archangel Cathedral was built between 1505 and 1508 by the Italian architect Alevisio Novi, and St. Basil’s Cathedral was built between 1555 and 1561, its architect believed to be Postnik Yakovlev. It was also the era of Peter the Great and the founding of St. Petersburg in 1703, which saw the construction of massive new government buildings and churches. On the Mount Athos Peninsula, Stavronikita, the last monastery to be founded there, was built starting in 1542. With the Ottomans capturing Constantinople in 1453, there was a great resurgence of Muslim architecture and art. The most famous architect of this period was Sinan (1489–1588), the son of a stonemason. Sinan worked for Sultan Suleiman the Magnificent (reigned 1520–66) and was involved in the building of 79 mosques, 34 palaces, 33 public baths, 55 schools, and many other buildings. His best-known buildings are the Sehzade Mosque and the Mosque of Suleiman I the Magnificent, both in Istanbul. Mention should also be made of the Mostar Bridge in Bosnia, built in 1566, replacing a former wooden suspension bridge. At Bokhara, Tashkent, and Samarkand, great cities grew up along the Silk Route, with many magnificent mosques and substantial public buildings. The building recognized as the greatest Muslim structure of the period is the Taj Mahal, built between 1631 and 1653. The main architect is unknown, but two European architects, Austin of Bordeaux and Veroneo of Venice, both helped in the design, although the overall concept is, of course, Mughal. In China, the Forbidden City was laid out between 1406 and 1420, with up to a million workmen constructing the central residence for the Ming emperors of China, their court, and their administration.
In 1642, work started on the building of the Potala in Lhasa, Tibet. By the early 18th century, there was extensive trade between China and much of the rest of the world, with the chinoiserie style becoming popular in Europe in particular. In the Americas, much of the early architecture involved the construction of forts, with domestic buildings in the Plymouth style of housing becoming popular in New England, the modern-day states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont in the United States. The early architecture in New York tended to reflect its Dutch origins. The central part of Mount Vernon, a Georgian mansion, had been built by 1740 and was to become the home of George Washington; Williamsburg, dating from the same period, is now a colonial-style tourist site. Many of the cities of South America date from the 16th or early 17th century, with architects and artists working in cities such as Lima, Buenos Aires, and Rio de Janeiro on churches, cathedrals, and public buildings, developing a style that became known as Ibero-American. In North Africa, Moulay Ismail (r. 1672–1727), intent on proving Moroccan greatness, worked on a massive palace at Meknes and moved the capital there from Fez. The palace was said to have rivaled Versailles in its extravagance, with some 25,000 slaves working on it. However, little of it survives. In Timbuktu and other parts of West Africa, many cities were built during this period, with many Dogon mosques built and artisans working on what is now known as “tribal art.” The great stone walls of Great Zimbabwe also date from this time, and there were undoubtedly many skilled architects in sub-Saharan Africa, but with no surviving writing from the period, and most of the buildings made from wood, little is known of the architects involved. Much of the art and architecture in the great cities of the Middle East, such as Damascus and Aleppo, dates from this period.
During the early and mid-18th century, the wealth of Damascus led to a style known as Damascene, with villas constructed in stone around courtyards and the upper floors made from wood. Much of the old city of Cairo, and also of many port cities in North Africa, such as Algiers, Tunis, and Casablanca, dates from this period. Further reading: Clark, Kenneth. Civilisation. London: British Broadcasting Corporation and John Murray, 1971; Fletcher, Bannister. A History of Architecture on the Comparative Method. London: The Athlone Press, 1961; Jacquet, Pierre. History of Architecture. Lausanne: Leisure Arts, 1966; Pevsner, Nikolaus. An Outline of European Architecture. Harmondsworth, UK: Penguin Books, 1968; Richards, J. M. Who’s Who in Architecture from 1400 to the Present. New York: Holt, Rinehart and Winston, 1977. Justin Corfield

Ashanti kingdom in AfricaEdit

The Ashanti kingdom, or Asante, dominated much of the present-day state of Ghana during the period between the late 17th and early 20th centuries. It was ruled by an ethnic group called the Akan, which in turn was composed of up to 38 subgroups, such as the Bekiai, Adansi, Juabin, Kokofu, Kumasi, Mampon, Nsuta, Nkuwanta, Dadussi, Daniassi, Ofinsu, and Adjitai. In the late 1500s, there were at least 30 small states, which corresponded to the subsections of the Akan people. By 1650, these groups had been reduced to nine, and by 1700, they had united. Ultimately the groups formed a confederation headed by the chief of the Kumasi group. The kingdom, formed by its legendary warrior Osei Tutu in 1691, was in fact a confederacy of both Akan and non-Akan people. The king’s symbol was the golden stool; equivalent to the throne, the stool became the symbol of kingship, so that a ruler was said to be enstooled or destooled. The asantehene, or king, had authority once he had been raised three times over the stool. Even after 1901, when Ashanti became a protectorate, and 1957, when it became part of the modern state of Ghana, the stool and the enstooling ceremony of the asantehene remained important ceremonies. The Ashanti kingdom, although originally a confederacy, had three bases of power (administration, communications, and economics) and was located in what is now south-central Ghana. Osei Tutu took over the administration set up by Denkyira, the former hegemon, and added to it. Communities within 50 miles of the capital city of Kumasi were ruled directly by the asantehene. Under Osei Tutu and his successor, Osei Apoko (whose reigns collectively lasted from approximately 1690 to 1750), the state expanded so much that by 1750, it encompassed about 100,000 square miles, with a population of 2 to 3 million.
All of present-day Ghana, with the exception of areas directly on the coast, along with small adjacent areas in the contemporary states of Togo, Ivory Coast, and Burkina Faso, was part of the Ashanti state. In order to accommodate the new extent of the state, the administration divided itself into a metropolitan and a provincial area. The metropolitan area consisted of those towns within a 50-mile radius of Kumasi. The rulers of these towns made up the confederacy. Their only obligation was to pay annual tribute to Kumasi and to supply troops in the event of war. This practice was extended to newer members of the state. All towns elected a governing advisory council composed of powerful members of the community. The towns were considered part of the Kumasi sphere, as they paid taxes that supported a standing army into the early 20th century. After a revolt of a military chieftain in 1748, a palace guard was organized. The rulers of the metropolitan spheres were members of the royal Oyoko clan; they served on the royal council and had autonomy in nonfiscal and military matters. The Council for the Asantehene had gained substantial power; it occasionally destooled an incompetent ruler and formally helped to choose the new asantehene.

Bureaucratic Control

The provincial aspect of administration was subject to increased centralization as the centuries progressed. Outlying Akan districts did not participate in the royal selection process but were forced to pay taxes. By 1800, they were also forced to pay tribute. They were subject to increasing bureaucratic control, such as a state agency that controlled all internal and external trade. The non-Akan areas, controlled until the mid-19th century, also sent thousands of slaves annually to Kumasi. The effectiveness of the Ashanti state relied on communication processes. The complex bureaucracy served as a conduit throughout the state. In addition, both taxes and tribute were used to establish a well-maintained army throughout the century.
Most famous were the talking drums. Since the national language of Ashanti, called Twi, was polytonal, any military commander or administrator could send out messages by matching syllables to the tones of the drum in a fashion similar to Morse code.

Economics

The mainspring of the confederation was economic. It had fertile soil, forests, and mineral resources, most notably gold. The future state of Ashanti had two ecological zones. In the southern forest belt there were forests and fertile soil. Original subsistence crops included yams, onions, and maize and, in the 19th century as farming became commercial, cola nuts and cocoa. In the northern savanna belt, there were yams and Guinea corn. The state was advantageously located for the importation of slaves from both the north and the west. In this period, beginning in the 15th and 16th centuries and lasting until the 1830s when slavery was abolished, the Ashanti used slave labor to plant established crops such as plantains, yams, and rice, as well as new crops such as maize and cassava brought from the Americas. This led to an increase in population and a movement of the Akan peoples to the forest zones. Slave labor was also central to the state’s most important mineral product, gold. Akan enterprise utilized the labor of slaves both in trading with Europeans (Portuguese, Dutch, English) and in the state’s grassland belts, first in clearing new land and then in the development of deep-level mining and placer mining. The slave trade for gold brought more slaves to produce more gold, and slaves were also traded for firearms. The desire to exert control over gold production and the new farming communities in the forest helped facilitate state functions. The desire to control access to labor pushed the Ashanti state in its attempt to control the coast inhabited by the Fanti peoples. The attempt to conquer the Fanti led to disputes and battles with the British, who had taken over the Gold Coast by 1815.
Earlier, the Ashanti had played the Dutch and Portuguese off against the British, but after 1800 hostilities erupted over control of the coast. After initial victories over the British in 1807 and 1824, the Ashanti suffered setbacks and accepted the Prah River as a border. Thereafter peace reigned for over 40 years. In 1872, a long-simmering dispute over the control of El Mina (the great Portuguese and Dutch post) saw a renewal of hostilities. After early Ashanti success, the British occupied Kumasi in 1874 until peace was concluded. In the late 19th century, the state began a rapid decline. Other parts of the state broke away, so that by 1900, the state had dwindled to approximately 25,000 square miles and a quarter of a million people. The British began to interfere in events in Ashanti. In 1896, they deposed the asantehene, and in 1900, a British demand for the golden stool resulted in an uprising that was put down in 1901, after which Ashanti became a protectorate. Incredibly, the golden stool was never surrendered and was restored to the nation after being “accidentally” found in 1921. In 1926, the asantehene was restored to the stool, and in 1935, his ceremonial role in Ashanti was formally restored. During the colonial period, the population increased more than fourfold. The Ashanti peoples engaged in cocoa growing while also actively producing crafts such as weaving, wood carving, ceramics, and pottery making. The bronze and brass artifacts produced by the lost-wax process came to be prominently displayed in museums throughout the globe. Since 1935, the kingdom, now part of Ghana, has been organized into 21 districts. Throughout its golden age, the Ashanti state demonstrated impressive flexibility, often at the expense of neighbors whom it enslaved and whose tribute it exacted. It continued to increase production in the gold mines and to migrate and clear forest for agricultural production.
It used the slave trade to increase its military might and employed diplomacy to cultivate key European allies. After slavery was abolished, it found a new economic outlet in cola nuts and, in the 20th century, the production of cocoa, Ghana’s biggest export. Even in independent Ghana, the Ashanti kingdom still maintains a distinct existence, and the Ashanti people have retained their cultural identity. See also Akan states of West Africa; cacao; Dutch East India Company (Indonesia/Batavia); slave trade, Africa and the. Further reading: Edgerton, Robert B. The Fall of the Asante Empire: The Hundred-Year War for Africa’s Gold Coast. New York: The Free Press, 1995; Fox, Christine. Asante Brass Casting: Lost-Wax Casting of Gold-Weights, Ritual Sculptures with Handmade Equipment. Cambridge: Cambridge University Press, 1988; Freeman, Thomas Birch. Journal of Various Visits to the Kingdoms of Ashanti, Akan, and Dahomey in Western Africa. New York: Frank Cass & Co., 1968; McCaskie, T. C. State and Society in Pre-Colonial Asante. Cambridge: Cambridge University Press, 1995; Rattray, Robert Sutherland. Ashanti. Oxford: Oxford University Press, 1923; Rattray, Robert Sutherland. The Tribes of the Ashanti Hinterland, 2 vols. Oxford: Oxford University Press, 1932; Wilks, Ivor. Asante in the Nineteenth Century. New York: Cambridge University Press, 1975. Norman C. Rothman

Atahualpa

(d. 1533) Incan emperor

The last independent ruler of the vast Inca Empire, Atahualpa Inca was seized by the forces of Francisco Pizarro in Cajamarca, Peru, on November 16, 1532. He was held prisoner pending payment of an enormous ransom, and after the ransom was paid, he was executed for treachery on July 26, 1533. Atahualpa’s name and legacy have come to be associated with Spanish avarice and duplicity in their conquests in the New World. His legacy will also be forever tied to indigenous political factionalism and incomprehension of the larger threat posed by European invasions, and to the persistence of pre-Columbian Andean culture and religiosity long after the Spanish military conquest of Peru was complete. Upon the death of their father, Huayna Capac Inca, in 1525, the brothers Atahualpa Inca and Huascar Inca were granted two separate realms of the Inca Empire: Atahualpa the northern portion centered on Quito, and Huascar the southern portion centered on Cuzco. In keeping with a longstanding Inca and Andean tradition of fraternal conflict, Atahualpa rebelled against his brother and imprisoned him. Pizarro and his men had the fortune of ascending into the Andes just as Atahualpa was returning to Cuzco after the successful conclusion of his northern campaigns. After launching a surprise attack in Cajamarca and massacring upward of 6,000 Incan soldiers, Pizarro took Atahualpa prisoner. To secure his release, Atahualpa pledged to fill a room of approximately 88 cubic meters with precious golden objects, the famous Atahualpa’s ransom. Over the next months, trains of porters carted precious objects from across the empire, including jars, pots, vessels, and huge golden plates pried off the walls of the Sun Temple of Coricancha in Cuzco. On May 3, 1533, Pizarro ordered the vast accumulation of golden objects melted down, a process that took many weeks.
Finally, on July 16, the melted loot was distributed among his men, and 10 days later, Atahualpa was executed. The eight months during which Pizarro held Atahualpa prisoner provided the Spanish with ample opportunity to observe the Inca leader’s customs and habits and the relations between him and his people. Their detailed descriptions offer valuable insights into the profound reverence with which the Inca was regarded, his semidivine status, and the social hierarchies and relations of the Inca realm. While being held prisoner, Atahualpa secretly ordered the assassination of his brother Huascar, an act that provided the Spanish with a ready pretext for executing him. Atahualpa’s execution provoked a fierce debate in Spain regarding the morality of the act, and of the conquest more generally. King Charles wrote to Pizarro of his displeasure, while other prominent Spaniards also condemned the execution. One result was that the Crown decided to treat Atahualpa’s descendants with considerable respect and deference. His sons and other family members were granted privileged status, and Atahualpa’s many descendants ranked among the most socially privileged of Indians in postconquest colonial society. In subsequent decades, he was also transformed into a martyr in the cause of Indian resistance to Spanish domination. See also Andean religion; Peru, conquest of; voyages of discovery. Further reading: Hemming, John. The Conquest of the Incas. New York: Harcourt, Brace, Jovanovich, 1970; Taylor, William B., and Franklin Pease G. Y., eds. Violence, Resistance, and Survival in the Americas: Native Americans and the Legacy of Conquest. Washington, DC: Smithsonian Institution Press, 1994. Michael J. Schroeder

Atlantic islands of Spain and Portugal

In the 15th century, the Atlantic islands of Spain and Portugal were crucial in the formation of a kind of technological and commercial prototype or template for slave-based sugar production that was transferred to the Americas after 1492. The Portuguese began colonizing the Madeira Islands (principally Madeira and Porto Santo, c. 768 square kilometers) in the early 1420s; the nine islands of the Azores (c. 2,300 square kilometers) in the 1430s or 1440s; the 10 principal islands of the Cape Verde Islands (c. 4,000 square kilometers); and, most importantly, São Tomé and Príncipe in the late 1400s. None of these islands was inhabited. This was not true of the seven Canary Islands (c. 7,300 square kilometers), which were inhabited by a group collectively known as the Guanches. In the late 1300s, Castilians, Italians, French, and others launched slave-raiding expeditions on the Canaries. The Spanish formally incorporated the Canaries into their empire in 1496 after the subjugation of the islands’ natives, though nominal Castilian rule dated back to the early 1400s. Together these Atlantic islands provided the aggressively expansive empires of Spain and Portugal with “stepping stones” to the Americas for their nascent sugar and other tropical export industries. Crucibles of empirical, hands-on experiments regarding all aspects of sugar production—from cultivation and harvest, to the importation and control of African slave labor, to the quasi-industrial processes by which cane juice was transformed into granular sugar—the Atlantic islands were crucial in the development of the technological know-how necessary for the explosion of sugar production in the Caribbean and Brazil in the 16th century and after.
By the late 1450s, sugar production on Madeira exceeded 70,000 kilograms, most exported to England and the Mediterranean, deepening markets and solidifying the financial and commercial networks that would later play a crucial role in the development of plantation-based export production in the Americas. The administrative infrastructure that the Portuguese developed to rule Madeira, the Azores, and the Cape Verde Islands, based on hereditary “donatary captaincies,” was likewise transferred wholesale to Brazil during the first half-century of its colonization. Plantation-based sugar production on Madeira in particular, based on both slave and free-wage labor, also whetted the European appetite for this luxury commodity, deepening demand just on the eve of the encounter with the Americas. In addition, both before and after sugar production had become established in the Americas, the Atlantic islands served as important way stations for the African slave trade and for long-distance trade with Asia. See also Africa, Portuguese in; Ferdinand V and Isabella I of Spain; slave trade, Africa and the; sugarcane plantations in the Americas. Further reading: Diffie, Bailey W., and George D. Winius. Foundations of the Portuguese Empire, 1415–1580. Minneapolis: University of Minnesota Press, 1977; Fernández-Armesto, Felipe. Ferdinand and Isabella. New York: Dorset Press, 1991; Mintz, Sidney W. Sweetness and Power: The Place of Sugar in Modern History. New York: Viking, 1985. Michael J. Schroeder

Augsburg, Peace of

The Peace of Augsburg refers to a settlement between Charles V, Holy Roman Emperor, and the Lutheran princes that accorded Lutheran churches legal status in Germany. This settlement resolved the conflict on a state level but did not resolve any of the theological issues of the Reformation. The period between 1546 and 1555 was one of substantial warfare in Europe, characterized mostly by smaller battles, opportunistic in nature, with a few more major conflicts. The main actors up to this time had been Charles V, the emperor; Francis I, king of France; Pope Paul III; and various princes in Germany who had made an association for mutual defense in what was called the Schmalkaldic League (named after the town of Schmalkalden in central Germany). Charles V was frustrated by the religious conflict tearing apart his empire. He pressured the pope to resolve the differences, resulting in the Council of Trent, which began in 1545. Charles V wanted the council to include the Protestant leaders, but this did not happen. At the same time, Charles was maneuvering to gain greater control over the German princes, using military pressure and negotiations. His hope was to break apart the Schmalkaldic League by diplomacy (and intrigue), but if that failed, to drive a wedge through Germany with his armies and break up the league by military means. This was accomplished in a series of battles beginning in late 1546 and concluding on April 23, 1547, with the defeat of the league forces at Mühlberg and the subsequent imprisonment of a key leader, the landgrave Philip of Hesse. Charles’s main ally in the battles was the Elector Maurice of Saxony, an opportunist with Lutheran leanings. While Charles V accomplished his goal of gaining political and military control over Germany, Lutheranism was to prove impossible to eradicate.
In April 1548, in an edict published in Augsburg (called the Augsburg Interim), Charles mandated restoration of the Roman Catholic Mass and other practices, allowing only two concessions to the Lutherans: married clergy and the use of both bread and wine in Communion. Later that year, the Lutheran Philip Melancthon was directed by Charles and Maurice to make certain alterations to the document in the hopes of making it more acceptable to the other Lutheran princes, who had refused to support the Augsburg Interim. This edict was published as the Leipzig Interim. Neither edict succeeded in bringing uniformity of church practice back to Germany. The Interim failed to gain support from the populace of Germany, and Melancthon found himself reproached by his fellow Lutherans for his part in the Leipzig Interim. The only real effect of the Interim was the ability of those who were still Roman Catholics to observe their faith in the Lutheran territories. The balance of power that allowed Charles V to gain control over Germany in 1547–48 soon changed. Charles was forced to give Maurice of Saxony a great deal of control over Germany in exchange for his continuing military support. Charles had negotiated a peace settlement with Francis I, king of France, in 1544, but Francis died in 1547 and was succeeded by his son, Henry II, who would prove to be troublesome for Charles in the coming years. After several years of political maneuvering, Maurice of Saxony formed the League of Torgau in May 1551 with several other German Lutheran princes. In January 1552, Maurice made formal peace with Henry II, who agreed to support the German princes against the emperor. This led to open war from March 1552 through June 1553. At this point, Charles was essentially surrounded. France was assaulting his territories from the west, Maurice from the north, and the Turkish sultan was battling Charles’s brother Ferdinand from the south and east.
Yet no one had the military power to defeat Charles completely, as the lands and armies of Charles’s dominion were still immense, containing Spain, Austria, the Netherlands, and substantial amounts of Italy. Maurice of Saxony died in June 1553 from battle wounds, ending the major battles of that period. An uneasy truce remained until 1555, when the representatives of the Lutheran princes met with representatives of Charles at the Diet of Augsburg, held from February through September 1555. Representatives of the pope were not invited. The various emissaries were able to negotiate both political and religious peace. The Lutheran princes were granted territorial independence. All people in Lutheran territories would follow the religion of their prince. All people in Catholic territories would be required to observe Roman Catholicism. Certain cities that had both significant Catholic and Lutheran populations would allow both churches. People who did not wish to live in one territory because of their faith could freely move to another territory. The Peace of Augsburg was a significant milestone in Western Christianity. It recognized the Lutheran Church as a separate church body, allowing its members rights within the empire. It did not settle any of the theological issues and marked a major fissure in Western Christianity; nor did it address the rights of Reformed or Anabaptist believers. For Reformed believers, recognition would come with the Treaty of Westphalia in 1648. Anabaptist believers would continue to endure persecution for several centuries, causing many to flee into eastern Europe and eventually to America to practice their faith. See also Anabaptism; Church of England; Counter-Reformation (Catholic Reformation) in Europe; Luther, Martin. Further reading: Elton, G. R. Reformation Europe 1517–1559. Oxford: Blackwell Publishers, 1999; Lindsay, Thomas M. A History of the Reformation—the Reformation in Germany from Its Beginning to the Religious Peace of Augsburg.
London: Hesperides Press, 2006. Bruce D. Franson

Augsburg Confession

The Augsburg Confession is a document written in 1530, primarily by the Lutheran Philip Melancthon. It is addressed to the Emperor Charles V and makes a defense of the Lutheran positions on several theological issues. Divided into 28 chapters (or articles), it was designed to appeal to moderate Roman Catholics including, of course, the emperor himself. After the Diet of Worms in 1521, Martin Luther had been declared a heretic by both pope and emperor. Between 1521 and 1530, there were many troubles in Europe that had occupied the emperor, including a war with France and political battles with the pope, which resulted in an invasion of Rome by imperial troops in 1527. Emperor Charles V was hoping for a more united front to face the threat of Muslim invasions in the eastern part of his empire. His hope was to bring about reconciliation between the Lutheran parts of Germany and the Roman Catholics. He gathered all these parties together at the Imperial Diet of Augsburg in 1530. On June 25, 1530, Melancthon and others presented the Augsburg Confession to the emperor. Luther was in a nearby castle but could not be present since he was officially still a heretic and thus an outlaw in the empire. The confession was signed by many of the German princes. Many of the articles in the Augsburg Confession come from the Marburg Colloquy, a 1529 meeting of Lutherans with Ulrich Zwingli and some of his followers, a failed attempt to bring reconciliation between these Protestant parties. The Confession begins with 21 articles or chapters, which describe the basic beliefs of the Lutherans: belief in the Trinity or triune God, the Apostles’ and Nicene Creeds, and other definitions that were mostly agreed to by the Catholics. The second portion of the confession deals with the abuses that the Lutherans saw in the Catholic Church.
Addressed to the emperor, the second portion begins, in translation: “Inasmuch as our churches dissent from the church catholic in no article of faith but only omit some few abuses which are new and have been adopted by the fault of the times although contrary to the intent of the canons, we pray that Your Imperial Majesty will graciously hear both what has been changed and what our reasons for such changes are in order that the people may not be compelled to observe these abuses against their conscience. Your Imperial Majesty should not believe those who disseminate astonishing slanders among the people in order to inflame the hatred of men against us.” The second portion then discusses various theological topics including marriage of priests, confession, and monastic vows. The emperor handed the confession to the Roman Catholic officials and theologians present. Chief among these was Cardinal Lorenzo Campeggio from Rome, who with the other theologians composed a rather forceful rejection of the Lutheran positions. The emperor forced them to tone down the document before presenting what is called the Confutation of the Augsburg Confession to the Lutherans on August 3, 1530. The response by the Lutherans to the confutation was a much longer document, called the Apology to the Augsburg Confession, again written by Melancthon, which deals with the confutation point by point. This was published at the end of April or the beginning of May 1531 and also became an official position of the Lutherans when signed in Smalcald in 1537. This document was also more forceful in rejecting the Catholic position. The result was a stalemate, which led to various battles and conflicts over the following 25 years until the Peace of Augsburg in 1555. Was this really a chance to reconcile Protestant and Catholic Christianity? Many historians think that there was at least a reasonable chance.
Certainly the emperor desired reconciliation. Melancthon was more of a peacemaker than Luther, and if some of the more moderate Catholics had been able to get the emperor’s ear, perhaps the direction of Western European Christianity would have been different. Today, the Augsburg Confession is still a foundational document of Lutheran Christianity. In 1575, a group of Lutherans worked to put together the key documents that defined Lutheranism in order to prevent further division. This book was called the Book of Concord and contained the Augsburg Confession, the Apology to the Augsburg Confession, the Smalcald Articles, and several other statements of Lutheran belief and doctrine. These still are held as accurate statements of Lutheran theology and practice by most Lutherans. Further reading: Tappert, Theodore, ed. The Book of Concord. Philadelphia: Fortress Press, 1959; Hillerbrand, Hans J., ed. Oxford Encyclopedia of the Reformation. Oxford: Oxford University Press, 1996; Lund, Eric. Documents from the History of Lutheranism, 1517–1750. Minneapolis: Augsburg Fortress, Publishers, 2002. Bruce D. Franson

Aurangzeb

(1618–1707) emperor of India

Aurangzeb was the sixth Mughal (Moghul) emperor (r. 1658–1707). He ruled for 49 years as Emperor Alamgir (conqueror of the universe); he was the last great ruler of the Mughal dynasty, but left the empire economically exhausted and widely disaffected. As Shah Jahan aged, his sons openly rebelled against him. The winner was the 44-year-old Aurangzeb, who imprisoned Shah Jahan and killed all three of his brothers. His personal strengths included widespread administrative and military experience, strict frugality in personal life, and devotion to work. He curbed corruption and took measures to improve agriculture. A strict and devout Muslim, he was also a bigot who had no tolerance of other religions and persecuted their followers. Thus began his troubles, which also contributed to the disintegration of the Mughal Empire. He ordered Hindu schools closed, had many Hindu temples destroyed, and ousted many Hindus from government service. Although he could not eliminate all Hindus from government, no Hindu under him rose to high position. The last straw for Hindus was the reinstatement of the poll tax and other harsh taxes on non-Muslims, which had been dropped under his ancestor, Emperor Akbar. Aurangzeb’s religious policy contributed to the growth of revivalist Hinduism, a mixture of religion and what may be termed protonationalism. It began in southern India under Shivaji, who rebelled in 1662, heading the Maratha Confederacy. Long and costly campaigns failed to end the Marathas’ insurgency. In 1683, the Rajputs, powerful Mughal supporters, also revolted, even attracting one of Aurangzeb’s sons to their cause. While his lieutenants led the campaigns against the Marathas and Rajputs, Aurangzeb took personal charge of a drawn-out war in the south, where he had been viceroy under his father. His objective was to subdue the two remaining independent kingdoms of the Deccan, beginning in 1683.
He was militarily successful, with the result that the Mughal Empire under Aurangzeb extended from Kabul in the north to Cape Comorin in the south. However, the wars left the empire financially exhausted and the overtaxed peasants in revolt. Moreover, his total preoccupation with the campaign and absence from the capital had left the administration neglected. Aurangzeb died in 1707 at the age of 89. Because he had ascended the throne after killing his brothers, he trusted no kinsman and kept all power in his own hands. His religious bigotry alienated Hindus, and his focus on subduing rebels and expanding the empire left him unaware of the new shift of power among Europeans in India and the passing of maritime supremacy from the Portuguese to the English. His Muslim generals served him faithfully in his life, but rose to usurp his inept sons’ inheritance after his death. Mughal power soon declined and fell. See also Mughal Empire. Further reading: Allen, John, T. W. Haig, and H. H. Dodwell. The Cambridge Shorter History of India, Part II, Muslim India. Cambridge: Cambridge University Press, 1958; Burn, Richard, ed. The Cambridge History of India, Vol. 4, The Moghul Period. Cambridge: Cambridge University Press, 1937; Richards, John F. The Mughal Empire. Cambridge: Cambridge University Press, 1993; Sarkar, J. A Short History of Aurangzeb. Calcutta: M. C. Sarkar and Sons, 1962. Jiu-Hwa Lo Upshur

Austrian Succession, War of the (1740–1748)

The War of the Austrian Succession was fought primarily between the Austrian Empire and Prussia, although several other European countries were eventually brought into the conflict. There were underlying causes that led to this renewal of European hostilities aside from the question of the Austrian succession. The Treaty of Utrecht, which was signed in 1713 to end the War of the Spanish Succession (1702–13), did not settle the underlying problems between ambitious powers seeking to extend their influence in Europe and the world. Before the War of the Austrian Succession began, antagonism between Britain and Spain was already running high. The British were furious with the Spanish over the limited amount of trade with Spanish colonies in the Americas that the Asiento Privilege, signed in 1713, granted them. British captains attempted to get around this agreement by resorting to smuggling, which resulted in the Captain Jenkins Incident: Captain Jenkins claimed he was captured by the Spanish, who cut off one of his ears, which he kept to show to the British Parliament. The British government declared war on Spain in October 1739 and commenced hostilities against the Spanish fleet in the Caribbean, but was defeated. Despite hostilities between Spain and England, the immediate cause of the War of the Austrian Succession was the death of Charles VI of Austria in 1740, which gave his daughter, Maria Theresa, control over Austria. When Maria Theresa came to the throne, the Austrian military and bureaucracy were in a weakened state. With regard to trade, Austria was a very weak country because its mercantile system was centered predominantly on a rural base, which failed to generate a significant degree of revenue. Austria had fought a bitter war against the Ottoman Empire that drained the treasury, leaving only 90,000 gulden for government spending. This war also angered many Hungarians, since they were responsible for quartering the empire’s soldiers.
This financial burden and discontent were domestic issues with which Maria Theresa was forced to deal when she assumed the throne in 1740. These problems created a great deal of instability in Austria, and many countries hoped to divide up Austrian territory for their own benefit. An anti-Austrian coalition was formed, as neighboring countries were interested in seizing Austrian lands. This is evidenced by the fact that Prussia was interested in acquiring Silesia, France was interested in the Austrian Netherlands, Spain wanted to acquire more territory in Italy, and Piedmont-Sardinia wanted Milan. Frederick the Great, the ambitious king of Prussia, struck quickly against the Austrians by sending troops into Silesia in December 1740. Frederick the Great attempted to turn Prussia into a powerful country through the creation of a strong military and a centralized government that could effectively generate revenue through taxation. The Austrian government faced larger problems as the Bohemian nobles, unhappy with Habsburg rule, revolted, wanting to be placed under the control of the elector of Bavaria. At this point, war enveloped the European continent as the British and Austrian governments sided together to counter the ambitious designs of the French, Prussian, Bavarian, and Spanish governments. Many of the European countries became concerned about the balance of power, since they did not want one country to become too powerful in Europe.

Prussian Invasion of Silesia

With the Prussian invasion of Silesia and the revolt in Bohemia, Maria was forced to ask the Hungarian diet for assistance in 1741. The inability of the Austrians to repel the Prussian invasion forced Maria to assemble the Hungarian diet to acquire further assistance in the war effort.
The diet attempted to assert Hungarian interests over Austrian interests as it demanded the institution of better economic policies, an alteration in the coronation oath, and greater Hungarian control over the region. Maria agreed to negotiate these terms, with the exception of the demand concerning the coronation oath, in order to acquire further Hungarian assistance in the war, but she refused to honor this agreement in its entirety. As the war continued to deteriorate for the Austrians, Maria was forced to approach the diet again. She promised Hungarians greater control over the administration of Hungary, more Hungarian influence over the allocation of tax money, the selection of Hungarians to ecclesiastical offices in Hungary, and the addition of more territory to the Hungarian domains. The members of the diet accepted this proposal and promised to provide the Austrian empress with at least 4 million gulden and a minimum of 60,000 troops. Although Maria considered Hungarian opinion when creating government policies, she failed to implement most of the concessions to which she had agreed. The Hungarians likewise fell short on their promises regarding the number of troops they could offer to the service of the Crown, which helps to explain the poor performance of the Austrian war effort. The Peace of Dresden, signed in 1745 between the Prussian and Austrian governments, confirmed Prussia’s control over Silesia. Although Prussia and Austria had negotiated a peace settlement, the conflict continued among the other European powers. The British became involved in the war out of fear that the expansion of French influence on the European continent would threaten Hanover.
George II, who was king of England and elector of Hanover, led an army that defeated the French forces at Dettingen in June 1743, but the threat posed by an army under Charles Edward Stuart, who was attempting to restore the Stuart dynasty to the throne of England, forced the British to recall a significant portion of their army to England in 1745. The invasion failed, as Charles could not acquire enough support from the English population, forcing him to give up his march on the English capital. The remains of the Stuart army were smashed by the duke of Cumberland at Culloden Moor in April 1746. Despite this success by the English at home, the recall of a major portion of the English army allowed the French to capture the Austrian Netherlands. The war was also fought outside the European continent, as the French and British vied with each other for a stronger position on the Indian subcontinent and in North America. The French launched a successful offensive against the British in India by capturing Madras. The British gained some ground on the French in North America, as a coordinated attack by colonists from New England and the Royal Navy captured the French fortress of Louisburg. The Treaty of Aix-la-Chapelle, signed in 1748, forced England to relinquish control of the fortress of Louisburg in Nova Scotia to the French; in exchange, the French returned the Austrian Netherlands to Austria and Madras to the English. Spain and Piedmont-Sardinia each gained territory, as the Spanish acquired Parma and Piedmont-Sardinia acquired some territory in Milan. The War of the Austrian Succession was an important step in turning Prussia into a strong European power, for the acquisition of Silesia increased the population of Prussia, provided Prussia with an abundant supply of coal and iron, and gave the Prussians a thriving textile industry.
Maria Theresa lost territory, but her husband was acknowledged by the German princes as the Emperor of the Holy Roman Empire. Maria spent the rest of her reign attempting to reacquire Silesia from Frederick the Great as she centralized the Austrian administration and undertook reforms in the Austrian army and economic base to accomplish this goal. See also Stuart, House of (England). Further reading: Merriman, John. A History of Modern Europe, Volume 1: From the Renaissance to the Age of Napoleon. New York: W. W. Norton & Company, 1996; Willcox, William, and Walter Arnstein. The Age of Aristocracy: 1688 to 1830. Boston, MA: D. C. Heath and Company, 1996; Winks, Robin, and Thomas Kaiser. Europe, 1648–1815: From the Old Regime to the Age of Revolution. New York: Oxford University Press, 2004. Brian de Ruiter

Aztecs (Mexica)Edit

Because the Aztec elite continually retold their own history to accord with contemporaneous political and religious concerns, the origins of the Aztec Empire are shrouded in myth and legend. The consensus view among scholars is that the Aztecs, or Mexica, were a Nahua-speaking nomadic hunting and gathering people who began migrating south from their mythical homeland, called Aztlán, located somewhere in Mexico’s northern deserts, beginning in the early 1100s. One in a series of Nahua-speaking ethnic groups that migrated into the more fertile regions of Mexico’s Central Highlands after the fall of the Toltecs during the Postclassic period, the Mexica were considered barbarians and dubbed Chichimeca, or “lineage of the dog,” by the more advanced and sedentary groups already settled in the Basin of Mexico. With its rich diversity of environmental resources, the Basin of Mexico, a region called Anáhuac in Nahuatl, had been a primary locus of sedentary agriculture and the development of advanced civilizations since the Preclassic period. The Aztecs migrated into Anáhuac around the year 1250, where they lived a precarious existence for the next century, learning the sedentary lifeways of their more numerous and powerful neighbors. According to Aztec legend, the site of their capital city was chosen around the year 1325, when one of their holy men saw fulfilled the prophecy of their principal god, Huitzilopochtli: an eagle perched on a cactus, in some versions devouring a snake. The site was a small outcropping of rocks on the western edge of the southern part of Lake Texcoco. On this site the Aztecs began building their capital city, an island linked to the mainland by causeways, which they called Tenochtitlán (Place of the Cactus Fruit). At the time other city-states dominated the Basin of Mexico, most notably Tepaneca, Texcoco, and Tlacopán. The island-city grew rapidly, as did Aztec military and political power. In 1428, under Itzcoatl (c.
1427–40), the Aztecs overthrew their Tepaneca overlords, asserted their independence, and became the “first among equals” in a Triple Alliance with Texcoco and Tlacopán. Bent on imperial expansion, the Mexica polity under Moctezuma I (c. 1440–69) combined wars of conquest with alliance-making to expand their domain, a process continued under the rulers Axayacatl (c. 1469–81), Tizoc (c. 1481–86), Ahuitzotl (c. 1486–1502), and Moctezuma II (c. 1502–20). By the early 1500s, the Aztecs had created an expansive tributary empire that reached far beyond Anáhuac to embrace most of the settled territories to the east (to the Gulf of Mexico) and south (to the edge of the Maya domains), and whose influence was felt as far south as the Maya kingdoms of Guatemala. To the west, various Tarascan polities resisted Aztec efforts to subdue them, while closer to home some polities retained their independence—most notably the Tlaxcalans. Far from unitary or monolithic, the Aztec Empire was shot through with multiple fractures and divisions—of languages, ethnic groups, religions, kingdoms, city-states—largely a consequence of the Mesoamerican political-cultural imperial tradition of leaving intact the ruling dynasties and bureaucratic infrastructure of dominated polities. An estimated 400 polities were subordinate and paid tribute to their Aztec overlords. By this time, Tenochtitlán had become one of the largest and most densely populated cities in the world, covering nearly 14 square kilometers, with intricate systems of canals, footpaths, gardens, walls, paved streets, residential complexes, temples, and pyramids. The city’s population probably reached 250,000 people. The planned city was divided into quarters, corresponding to the four cardinal directions, with a separate fifth quarter, Tlatelolco, serving as the city’s principal marketplace.
At the city’s core lay the sacred precinct, covering perhaps 90,000 square meters, filled with more than 80 imposing structures, dominated by the Great Pyramid (Templo Mayor), some 60 meters high, with its twin temples devoted to Huitzilopochtli (the god of the Sun and war) and Tlaloc (the god of rain).

Aztec Society

Aztec society was extremely hierarchical, with complex gradations of class and status extending from top to bottom, and each individual and family pegged into a specific social category. After the household and nuclear family, the foundational social unit upon which social relations among the Mexica were built was the calpulli, an extended lineage group that corresponded to occupation, place of residence, and local governance—variously translated as “parish,” barrio, and “clan.” The vast majority of the inhabitants of Tenochtitlán and its subordinate polities were maceualli (commoners, plebeians) engaged in agriculture, petty trade, or service. A small minority, at most 10 percent of the populace, constituted the ruling class of top-echelon bureaucrats, dignitaries, warriors, and priests. Merchants, or pochteca, divided into merchant guilds, appear to have constituted a separate social class, as did warriors, priests, and craft workers. The Aztec economy was based on a highly developed combination of agriculture, tribute, and trade, along with intensive exploitation of Lake Texcoco’s abundant lacustrine resources. An ingenious agricultural device, the chinampas (sometimes erroneously called “floating gardens”)—artificial islands built of woven mats of reeds and branches atop which was piled mud and organic matter dredged from the lake bottom—provided abundant maize, legumes, fruits, and vegetables. Trade and commerce occupied a central place in the Aztec economy, and cacao beans were the principal form of money. Religious concerns intruded into every aspect of Aztec daily life.
The notion that the worlds of the sacred and the secular constituted distinct or separate realms did not exist. The Aztec corpus of religious beliefs and practices was dizzyingly complex, their pantheon of gods, deities, sacred beings, and divine entities reaching into the hundreds. The most important deities were Huitzilopochtli (the Aztecs’ most honored deity), Tlaloc, Quetzalcoatl (“Plumed Serpent”), and Tezcatlipoca (“Lord of the Here and Now,” “Smoking Mirror,” “He Whose Slaves We Are”). The latter was considered an especially capricious, devious, and dangerous god, one who derived great pleasure from laying waste to human ambition and pretension. Propitiation of these and many other gods constituted one of humanity’s principal tasks, for without adequate ritual and obeisance, they might well turn on their mortal underlings and wreak havoc on their lives and fortunes. Unlike the Christian God of this same period, Aztec gods, like Mesoamerican deities generally, were not considered exclusive. It was common for groups and polities to adopt new gods, especially those of a dominant or conquered group, by incorporating them into an already well-populated pantheon. Intimately tied to Aztec religion were Aztec conceptions of time. The Aztec solar calendar was divided into 18 “months” of 20 days each, with a five-day “barren” or “hollow” period at the end of each solar year—a time of foreboding and dread. Each month, in turn, was devoted to specific rituals and ceremonies paying homage to a particular god or combination of gods. Thus, for instance, the “Feast of the Flaying of Men” took place on March 5–24 and included mass ritual human sacrifice in honor of Xipe Totec (the god of fertility and martial success), as well as gladiatorial contests and sacrifices, dancing, and feasting.
In addition to the solar calendar was the sacred or divinatory calendar, a pan-Mesoamerican phenomenon, composed of 260 days and divided into 20 units of 13 days each—all associated with particular gods and rituals. An Aztec “century” consisted of 52 solar years. The end of each 52-year cycle was considered a period of great danger, for unless the Sun god Huitzilopochtli was adequately propitiated with human blood, the Sun would cease to rise and the world would come to an end. Closely linked to these temporal cycles, to the propitiation of the gods, and to the expansion of the Aztec Empire generally were conceptions and practices of warfare, which occupied a central place in Aztec political culture and cosmology. By the Postclassic period, Mesoamerica as a whole had developed a highly elaborate series of beliefs and practices concerning warfare. In general, its principal purpose was not to occupy territory or kill enemy combatants, though the latter in particular was not uncommon, but to subdue competing polities and capture enemy soldiers on the battlefield. These captives would be sacrificed to the gods, in order to ensure the good harvests, the well-being of the empire, and the continuation of the world. Thus, the so-called Flowery Wars (“flower” being a metaphor for human blood) between the Aztecs and as-yet unconquered kingdoms such as Tlaxcala were conceived and undertaken principally as ritual events whose principal purpose was to capture victims for later sacrifice. The accumulation of animosities that resulted from these ritual battles, along with these cultural beliefs concerning warfare and divine intervention in human affairs generally, proved crucial in the later conquest of Mexico. Further reading: Brundage, Burr Cartwright. The Fifth Sun: Aztec Gods, Aztec World. Austin: University of Texas Press, 1979; Clendinnen, Inga. Aztecs. New York: Cambridge University Press, 1991; Soustelle, Jacques. Daily Life of the Aztecs. New York: Macmillan, 1961. 
Michael J. Schroeder

Aztecs, human sacrifice and theEdit

Although some maintain that the notion that the Aztecs (Mexica) practiced human sacrifice is a myth that originated with the Spanish conquistadores to justify and legitimate their conquests, in fact, abundant evidence demonstrates that the Aztec state, like many other pre-Columbian Mesoamerican and Andean polities, regularly practiced ritual human sacrifice. The evidence also shows that the Aztecs institutionalized this practice, elevating it to a high art form, the state’s most important public spectacle, and a key state function essential to the well-being of the cosmos. This evidence includes scores of Spanish and native accounts composed during and after the conquest of Mexico, along with abundant archaeological and textual artifacts that predate the Spanish invasion. The religious and cultural beliefs inspiring Aztec ritual human sacrifice had deep roots in Mesoamerican society and culture. Many pre-Columbian polities in the Americas are known to have ritually sacrificed human beings to their gods. These included many Maya kingdoms and city-states, Monte Albán and subsequent Zapotec polities, Teotihuacán, the Toltecs, and others. Such practices were rooted in a pan-Mesoamerican corpus of beliefs concerning the spiritual power of human blood, and the everyday intervention of the gods in human affairs. States transformed these broad cultural understandings into state ideologies and spectacles. Ruling groups portrayed public offerings of human blood as payment of a debt owed to the gods. By propitiating the gods with the most valuable substance in the universe—human blood—states terrorized foes and depicted themselves securing a larger social and cosmic good. Public and private bloodletting rituals in the service of the gods were common across Mesoamerica, and ritual human sacrifice was the most extreme form of bloodletting.
The Aztecs took the practice to an extreme, sacrificing people on diverse occasions in propitiation of many divine beings. Of the 18 ceremonial events that occurred during each of the 18 months of the Aztec solar year, eight included ritual human sacrifice. These included the ceremony of Quecholli (“Precious Feather,” October 31–November 9), in which priests ritually slew and sacrificed captives dressed as deer, and the ceremony of Atl Caualo (“Ceasing of Water,” February 13–March 4), in which infants and children were publicly marched in groups before being sacrificed. The gruesome sacrifice involved four priests holding the victim down on top of a large stone for another priest to cut open in order to remove the heart. By ritual preparation and transformation, the victim was depicted as becoming the god to whom he or she would be sacrificed. There were many variations on these general themes. The most frequently propitiated divine entity was Huitzilopochtli, the god of the Sun and war, particularly at the end of each 52-year Aztec century. Without such offerings, the state claimed, the Sun would cease to rise and the universe would come to an end. After the Aztec Triple Alliance of 1428 joined together Tenochtitlán, Texcoco, and Tlacopán, the practice of human sacrifice was institutionalized at the highest levels of the Aztec state. Major events such as victory in war, inauguration of a new ruler, or dedication of an important public structure became occasions for large-scale human sacrifice. The most extensive such instance occurred in 1487 with the dedication of the Temple of Huitzilopochtli in Tenochtitlán, in which an estimated 20,000 people were ritually sacrificed over four days. The Aztecs also initiated prearranged wars with neighboring polities—ritualized battles called the “Flowery Wars”—in large part to secure sacrificial victims. In its meteoric rise to domination, the Aztec state made such practices integral to state ideology and imperial ambitions.
Ritual human sacrifice displayed the Aztec state’s awesome political and religious power, terrorized its enemies, worked as a cohesive ideological force among its subjects, and generated animosities against its rule among subordinate states that the Spanish later exploited in the conquest of Mexico. See also Cortés, Hernán. Further reading: Gruzinski, Serge, and Paul G. Bahn, trans. The Aztecs. London: Harry N Abrams Inc., 1992; Bierhorst, John, trans. History and Mythology of the Aztecs: The Codex Chimalpopoca. Tucson: University of Arizona Press, 1998. Michael J. Schroeder

Age of Revolution and Empire 1750 to 1900 Edit

abolition of slavery in the AmericasEdit

The history of chattel slavery in the Americas, from its beginnings in 1492 until its final demise in Brazil in 1888, has spawned a vast literature. So, too, has the process by which the institution of chattel slavery was formally and legally abolished. A highly contentious, nonlinear, and uneven process that unfolded in different ways and followed distinct time lines in various parts of the Americas, abolition must be distinguished from manumission, in which slave owners granted freedom to individual slaves, which is not examined here. Especially since the 1960s, historians have examined many different aspects of abolition in the Americas, including the intellectual and moral impulses impelling it; the history of diverse social movements devoted to compelling colonial, state, and national governments to implement it; and the role of various individuals and groups—including merchants, planters, bureaucrats, and colonial, national, and imperial governments, and slaves themselves—in retarding or accelerating the process. The first formal abolition of slavery in the Western Hemisphere came not from a national government but from state legislatures in New England and the Mid-Atlantic states of the not-yet-independent United States of America. In 1777 the Vermont state assembly became the first governmental entity in the Americas to abolish slavery within its jurisdiction. In 1780 the Pennsylvania state assembly passed a law requiring all blacks henceforth born in the state to become free upon reaching age 28. State laws mandating the end of chattel slavery, each stipulating different time lines and provisions, were passed in Massachusetts and New Hampshire (1783), Rhode Island and Connecticut (1784), New York (1799), and New Jersey (1804). Significantly, actual abolition sometimes lagged for decades following passage of such laws—as in New Jersey, where legal slavery persisted until ratification of the Thirteenth Amendment to the Constitution in 1865.
Because slavery did not comprise an important component of any of these states’ economies, organized opposition to abolition was limited, and abolition itself carried few economic costs to slaveholders. As individual states were passing laws for gradual emancipation, the Northwest Ordinance of 1787 banned slavery in the Northwest Territories, setting the stage for the sectional conflict between North and South that ultimately led to the American Civil War. Far more consequential for the eventual abolition of slavery in the Western Hemisphere was the Act for the Abolition of the Slave Trade passed by the British parliament in 1807, and put into effect in 1808, outlawing the transatlantic slave trade. The law also authorized the British navy to suppress the slave trade among all slave traffickers, making Britain, in effect, the policeman of the high seas. The U.S. government passed less sweeping legislation in 1808 banning further import of slaves. Three years later, the British parliament made participation in the slave trade a felony. Scholarly debates have swirled regarding the origins of and inspiration behind these laws. Some historians have emphasized the rise of a religion- and Enlightenment-inspired antislavery and humanitarian impulse among Quakers, evangelical Methodists, Unitarians, and others in providing the impetus behind the British abolition of the slave trade. An expansive literature pays special attention to leading abolitionists like William Wilberforce and to the many antislavery societies, writers, and publications that blossomed in the late 1700s and early 1800s. Other scholars have stressed the growing commitment to the ideology of free wage labor on the part of Britain’s leading capitalists.
This interpretive school has located Britain’s intensifying opposition to slavery within the broader context of a rapidly developing global capitalist economy and a powerful domestic labor movement that used the symbol of slavery to portray the workers’ plight and denounce capitalism. Ironically, while the 1807 law made Britain the first nation to outlaw the transatlantic slave trade, from the mid-1600s leading British economic interests had also been one of the main motors behind, and beneficiaries of, the slave trade. While the 1807 law presaged the eventual demise of African slavery in the Americas, it did not abolish slavery, or call for the abolition of slavery, or free a single slave. Nor did the law prohibit individual nations or colonies from slave trafficking within their borders. In nations and colonies with large slave populations—including Brazil, the United States, and throughout the Caribbean Basin—chattel slavery could, in theory, continue indefinitely by “natural population increases” among slaves (population increases resulting from births over deaths and excluding external influxes). The outlawing of the Atlantic trade prompted slaveholders across the Americas to implement policies intended to increase slave populations, such as forced impregnation and rape of slave women. Local slave markets reflected these changes, as prices of female slaves of childbearing years rose substantially in many areas. The 1807 law provoked fierce resistance in British colonies such as Jamaica, Antigua, and Trinidad, whose colonial assemblies at first rejected, then grudgingly accepted, the imperial mandate.

[Illustration: Exeter Hall filled with a large crowd for the Anti-Slavery Society meeting, London, England, in 1841. Abolitionist movements gained strength in the 19th century and successfully abolished slavery in most of the Western Hemisphere by the end of the century.]
Similar patterns unfolded elsewhere, as imperial laws intended to place limits on slavery and the slave trade met stiff resistance by slave owners in the colonies. Overall, such laws originated in national governments’ responses to mounting domestic and international opposition to chattel slavery and the actions of slaves themselves and their many forms of resistance to the fact and terms of their enslavement. A survey of the British, French, and Spanish colonial empires highlights these broad patterns.

GREAT BRITAIN

In Britain the 1807 and 1811 laws were followed by the amelioration laws of 1823, meant to improve the living conditions of slaves. Far more consequential was the Abolition of Slavery Act of 1833, which went into effect on August 1, 1834. The 1833 law abolished slavery throughout the empire, while stipulating a period of apprenticeship in which slaves over the age of six would continue working for four years for their former masters. A major slave rebellion in Jamaica in December 1831 (the “Christmas revolt”) played a major role in prompting Parliament to pass the 1833 law—an illustration of the role played by slaves in advancing their own emancipation. In 1838, over the vociferous objections of slaveholders, Parliament proclaimed complete emancipation. Upper and Lower Canada followed the same trajectory as British colonies elsewhere in the Americas, with final emancipation coming in 1838. For the next 27 years Canada would serve as a refuge for escaped slaves from the United States, especially after the U.S. Fugitive Slave Law of 1850 made no state in the Union immune from slave-catchers and bounty hunters. In France, with the convening of the Estates General in 1789, the Société des Amis des Noirs (Society of the Friends of the Blacks) called for the abolition of the slave trade and emancipation of slaves within the colonies.
The call was rejected after a powerful coalition of white colonists successfully prevented debate on the topic. With the eruption of the Haitian Revolution from 1791, the French assembly relinquished its jurisdiction over the question. Three years later, in 1794, the Convention outlawed slavery throughout the empire and granted rights of citizenship to all adult males. In 1801, Haitian rebel leader Toussaint Louverture, whose forces had just gained control of all of Hispaniola, promulgated a constitution that prohibited slavery in perpetuity throughout the island. The following year, in 1802, Toussaint was captured and transported to France, and Napoleon I reinstituted slavery throughout the French colonies. After France’s defeat in the Napoleonic Wars, in 1817 the French constitutional monarchy passed a law abolishing the slave trade by 1826. A few months after the overthrow of the monarchy and establishment of the Second Republic, and under the leadership of prominent abolitionist Victor Schoelcher, on April 27, 1848, France abolished slavery throughout the empire.

SPAIN

In Spain the first effort to abolish slavery came soon after the overthrow of King Ferdinand VII and during the tumult of the Napoleonic occupation, when in 1811 the Cortes (parliament) abolished slavery throughout the empire. The law was largely ignored. In 1820, following a major revolt against a restored constitutional monarchy, the Cortes abolished the slave trade while leaving slavery itself intact—though after the independence of Latin America in the early 1820s, Spain’s American empire had been reduced to one major colony: Cuba. Abolitionist sentiment within Cuba mounted through the first half of the century, despite the colonial government’s success in crushing organized antislavery agitation. In 1865, in the wake of the U.S. Civil War, the Spanish Abolitionist Society was founded, its considerable influence rooted in mounting opposition to the constitutional monarchy.
In 1868 a liberal revolution triumphed in Spain, its leaders advancing as one of their principal aims the abolition of slavery in Cuba. In July 1870 the Cortes passed the Moret Law, which emancipated children born to slaves after 1868 and slaves age 60 and older. Envisioned as a form of gradual abolition, the law’s provisions were undermined by both planters and slaves. Planters sought to delay the law’s implementation and subvert its provisions, while slaves pushed its boundaries in the effort to secure their freedom. The Ten Years’ War on the eastern half of the island complicated the situation even further. Finally, on October 7, 1886, the Spanish government eliminated various legal categories of quasi slavery and abolished slavery throughout the island. A brief summary of other European nations’ abolition laws once again highlights the partial and uneven nature of the process of emancipation. Sweden abolished the slave trade in 1813 and slavery in its colonies in 1843. In 1814 the Netherlands outlawed the slave trade and, nearly half a century later in 1863, abolished slavery in its Caribbean colonies. In 1819 Portugal outlawed the slave trade north of the equator and in 1858 abolished slavery in its colonies while providing for a 20-year period of apprenticeship similar to the British model. Denmark abolished slavery in its colonies in 1848, the same year as France. Turning to the independent nation-states of the Americas, most of the newly independent nation-states of Latin America abolished slavery in the first three decades after independence.
In 1821 Gran Colombia (comprising most of present-day Colombia, Venezuela, and Ecuador, and parts of Bolivia and Peru) became the first Latin American nation to adopt a law calling for gradual emancipation, though final abolition did not come for more than three decades (Ecuador in 1851, Colombia in 1852, Venezuela in 1854), and these final abolitions were followed by prolonged periods of apprenticeship that closely resembled slavery. Chile abolished slavery in 1823; Mexico in 1829; Uruguay in 1842; Argentina in 1843; and Peru in 1854. In 1850 Brazil outlawed the transatlantic slave trade, prompting a brisk internal trade in slaves that lasted until the final abolition of slavery in 1888.

UNITED STATES

In the United States, in the aftermath of state laws abolishing or limiting slavery from the 1770s to the early 1800s, abolitionist and antislavery agitation mounted. The U.S. Constitution took an ambiguous stance toward slavery, neither prohibiting it nor precluding the possibility of its abolition, while making unconstitutional any law banning the importation of slaves before 1808. After the Louisiana Purchase in 1803, controversies over the expansion of slavery into the territories sharpened the sectional conflict between North and South that dominated U.S. politics through much of the 19th century, culminating in the Civil War. Such controversies brought the nation to the brink of civil war in 1820 (forestalled by the Missouri Compromise) and again in 1850 (forestalled by the Compromise of 1850). In the 1830s the rise to prominence of vocal abolitionists like William Lloyd Garrison and Wendell Phillips sharpened the sectional conflict even further. In 1861, following the election of Abraham Lincoln as president, southern slaveholding states formed the Confederate States of America and announced their secession from the Union, inaugurating the Civil War.
Less than two years later Lincoln issued the Emancipation Proclamation, which, despite its title and symbolic significance, freed no slaves. The final abolition of slavery came in December 1865 with the ratification of the Thirteenth Amendment to the Constitution.

BRAZIL

Brazil, the last nation in the Western Hemisphere to abolish slavery, offers an instructive contrast to the U.S. experience. Earlier generations of historians emphasized two key differences: Brazil did not have a comparable sectional conflict, and Brazil abolished slavery without recourse to civil war. More recent scholarship has blurred these distinctions, with greater attention to Brazil’s major regional differences and to the role played by the specter of violence and civil strife in accelerating the process of emancipation. The British prohibition of the transatlantic slave trade from 1808 did not diminish the number of slaves imported into Brazil, as the government and slave traders ignored the law. An 1831 treaty between Brazil and Great Britain banning the importation of slaves also had little practical effect, as the Brazilian government did little to enforce its provisions. Over the next 20 years, an estimated half a million slaves poured into the country. In 1850, in response to tremendous British pressure, Brazil passed a law putting teeth into the prohibition, after which the transatlantic slave trade diminished markedly. The 1850 law prompted two major shifts. Planters began creating conditions under which natural population increases would permit perpetuation of slavery, including improved nutrition and living conditions, enhanced surveillance and control, and forced reproduction. Slave trafficking within the country also increased dramatically, with major flows from the Northeast to the booming coffee-based states of the South. By the 1860s, however, the Atlantic world’s mounting moral opprobrium toward slavery, combined with the carnage of the U.S.
Civil War, made clear to many Brazilians that abolition was inevitable and that a gradualist approach to the problem was preferable to civil war. What eventually emerged from these debates was the Rio Branco Law of September 28, 1871. Dubbed the Law of the Free Womb, the law called for all children born of slaves to be free, following a period of semibondage until they reached age 21. Many, however, including prominent abolitionists in the Chamber of Deputies such as Joaquim Nabuco, Jeronymo Sodré, and Rui Barbosa, saw the law as fatally flawed, permitting slavery's survival well into the 20th century. In the late 1870s abolitionist pressures intensified, as did urban violence, plantation uprisings, and civil strife. Slaves especially pushed the boundaries of the law, insisting on their own emancipation. Finally, on May 13, 1888, the Brazilian parliament passed a law consisting of the following two provisions: "Article 1. From the date of this law slavery is declared abolished in Brazil. Article 2. All contrary provisions are revoked." After 396 years, legal slavery in the Americas had ended. The process by which chattel slavery was abolished in the Americas followed a number of distinct trajectories, as various groups of actors in conflict and alliance propelled and forestalled the outcomes. Nowhere was abolition inevitable; everywhere its achievement resulted from the determined actions of many different individuals and groups. In all cases, the actions of slaves were integral to the process, a fact to which a large and growing body of scholarship amply attests. See also slave revolts in the Americas; slave trade in Africa; Wesley, John (1703–1791) and Charles (1707–1788). Further reading: Holt, Thomas C. The Problem of Freedom: Race, Labor, and Politics in Jamaica and Britain, 1832–1938. Baltimore, MD: The Johns Hopkins University Press, 1992; Scott, Rebecca J. 
Slave Emancipation in Cuba: The Transition to Free Labor, 1860–1899. Princeton, NJ: Princeton University Press, 1983; Toplin, Robert Brent. The Abolition of Slavery in Brazil. New York: Atheneum, 1975. Michael J. Schroeder

AbyssiniaEdit

See ethiopia/abyssinia.

Acadian deportationEdit

In 1755, during the early days of the Seven Years' War/French and Indian War between France and Britain, thousands of French farming families living in Nova Scotia were forcibly deported by British troops. The dislocation of the Acadians, as these French colonists were called, became an almost mythical example of the injustice and brutality of 18th-century warfare. Although several thousand Acadians would eventually return to their homeland, thousands more, often separated from their families, ended up as far away as the West Indies and Louisiana, where the refugees became known as Cajuns. Although the French were first to exploit the fur, fishing, and farming potential of the New World, France had trouble persuading its citizens to live in the wilderness at the mouth of Canada's St. Lawrence River. Meanwhile, British colonies, especially those of New England, soon overtook French colonial holdings in both population and hunger for land and wealth. Along what became the Canadian border, French and British colonists frequently trespassed on each other's claims, regularly enlisting the help of friendly Native tribes. In 1713 the Treaty of Utrecht ending the War of the Spanish Succession redrew the political map of Europe and gave Britain control of Hudson Bay and Newfoundland. In addition, fertile lands occupied by the Acadians for several generations were no longer part of New France but became British territory. At first, British authorities assured the Acadians that their farms would be safe and their beliefs respected. But Britain also demanded that its new colonists swear loyalty oaths and give up any notion of fighting for France in future conflicts. Most Acadians declined to take the oath, considering themselves French neutrals. As tensions in Europe between Britain and France escalated and played out in their respective colonies, neutrality, hard to achieve under the best of circumstances, became untenable for both sides. 
By the spring of 1755 the British believed that 300 Acadians had taken up arms in support of France. In July Acadian leaders were summoned to Halifax and ordered to take loyalty oaths immediately. A month later the British rounded up their recalcitrant French subjects and put them on ships for deportation. Historians disagree on the magnitude and brutality of this mass deportation. The number of Acadians affected has been estimated at between 6,000 and 18,000 people. Many families were separated, and many had trouble finding a place to relocate. Some believe family separations and dislocations were unintentional results of mistakes and confusion; others have likened British actions to modern-day ethnic cleansing. In 1847 American poet Henry Wadsworth Longfellow made the Acadian expulsion the subject of one of his extremely popular epics. Evangeline, A Tale of Acadie told of young French-Canadian lovers torn apart by war and politics. A sensational success, the poem kept alive remembrance of British misdeeds, both among French Canadians, now subjects of British Canada, and the Cajuns of Louisiana who traced their heritage back to Acadia. Further reading: Faragher, John Mack. A Great and Noble Scheme: The Tragic Story of the Expulsion of the French Acadians from their American Homeland. New York: W.W. Norton, 2005; Plank, Geoffrey G. An Unsettled Conquest: The British Campaign against the Peoples of Acadia. Philadelphia, PA: University of Pennsylvania Press, 2001. Marsha E. Ackermann

Adams, John, and familyEdit

(1750–1827) American diplomats and intellectuals Descendants of Puritans who settled near Boston in 1638, members of the Adams family distinguished themselves over two centuries as political leaders and thinkers. Second cousins Samuel Adams and John Adams played crucial roles in the founding of the United States. John's wife, Abigail Smith Adams, was an early advocate for women's expanded public roles. Their son, John Quincy, was the first son of a president also to be elected president and dedicated his later years to ending slavery. Into the early 20th century, the Adamses excelled in diplomacy and history. A Harvard-educated brewer and Boston tax collector, Samuel Adams was a leading Son of Liberty who fought new taxes and restrictions imposed by Britain on its American colonies after the Seven Years'/French and Indian War ended in 1763. He organized the 1773 Boston Tea Party, in which tea worth £100,000 was dumped into the harbor to protest British policies. His younger cousin John, a Harvard-educated lawyer, successfully defended the British soldiers who killed five Americans in a 1770 encounter dubbed the Boston Massacre by people like Samuel, who deemed it a "bloody butchery." Wary of mob enthusiasms but convinced of the rightness of American liberty, John Adams soon surpassed his cousin's importance in the looming American Revolution. Both were delegates to the First Continental Congress; John drafted plans for a new national government and soon was helping Thomas Jefferson revise and refine his draft of the Declaration of Independence. After the Continental victory at Saratoga in 1777, John endured long intervals of painful separation from his family as he pursued financial and military support for the new nation in European capitals, working uneasily with senior diplomat Benjamin Franklin and helping negotiate the treaty ending the Revolution. 
In 1784 Abigail joined her husband in Europe; his diplomatic service culminated with his appointment as the first American ambassador to Britain. In 1789 Adams was selected as George Washington's vice president. As such, he had little to do, sidelined in part by the dramatic political and personal clashes of Washington's cabinet secretaries Jefferson and Alexander Hamilton. Adams won the presidency by just three votes over Jefferson in 1796; his tenure in office would prove mostly disastrous. A combination of personality traits and crises would erode Adams's reputation, ending his administration after a single term. Partisanship unleashed by earlier battles over the Constitution brought forth viciously competitive political parties. Soon Adams, a Federalist, would find himself at odds with his own vice president, Jefferson, once a dear friend but now a rival. The two men had already split over the French Revolution, whose growing violence was to Adams a horrifying breakdown of order and a direct threat to American independence. Although Adams avoided a costly war with France, his popularity plummeted amid partisan rancor. In 1798 a Federalist-dominated Congress passed and Adams signed the Alien and Sedition Acts. Targeting Republican publishers and other political critics, these acts clearly violated the First Amendment. Charles Francis Adams would later call these acts the fatal error that doomed his grandfather's Federalist Party. Adams and Jefferson resumed their correspondence, but these old friends and enemies would truly reunite only in death. Both died on July 4, 1826, the 50th anniversary of the Declaration to which both contributed mightily. By the time his father died, John Quincy Adams, his parents' eldest son, was in the second year of his own presidency. 
It was a tormented four years after years of public distinction. Trained in diplomacy at his father's side as a teenager in Europe, John Quincy returned to attend Harvard and take up law, although attracted by literature and teaching. In 1803 John Quincy went to the U.S. Senate as a Federalist but often supported President Jefferson, losing his seat as a result. As James Madison's ambassador to Russia and lead negotiator of the War of 1812's Peace of Ghent, John Quincy found his own political fame. He authored the Monroe Doctrine while serving James Monroe as secretary of state. Becoming president seemed the obvious next step. But U.S. politics were changing as voting rights expanded. Being notable, a man of wealth or distinguished family, no longer assured electoral success. In 1824's four-way race, John Quincy became president only after a "corrupt bargain" steered votes from war hero Andrew Jackson to the former president's son. John Quincy's single term was almost devoid of accomplishment and dogged by family difficulties. His postpresidential career would be as difficult but more fulfilling. In 1830 the former president was elected to the House of Representatives, a freshman member at age 64, serving his Plymouth, Massachusetts, district until suffering a stroke on the House floor in 1848. For nine years, he fought a gag rule that prevented slavery opponents from conveying their views to Congress. In 1841 his nine-hour speech to the Supreme Court won freedom for 33 Africans who had commandeered the Spanish slave ship Amistad. The Adamses were hard on their sons. Just as John Quincy was John's only son of three to make his father proud, Charles Francis Adams was the only one of John Quincy's three sons to gain distinction. Charles Francis became his family's financier and historian, publishing important family writings, including Abigail's letters. 
Entering Massachusetts politics in 1840, he was the new Free-Soil Party's vice presidential choice in 1848 as the U.S. victory in the Mexican-American War roiled sectional politics. Soon he joined the emerging Republican Party. Appointed minister to Britain by Abraham Lincoln, Charles Francis was instrumental in keeping Britain from backing the Confederacy during the Civil War. It was left to a fourth generation, especially brothers Henry and Brooks, to try to understand America through the lens of the Adams legacy. Henry, Harvard lecturer and historian, was early drawn to medievalism. In The Education of Henry Adams, his third-person autobiography, he tried to make sense of how medieval Europe could have given birth to early 20th-century America. Brooks, a more "erratic genius," predicted inevitable decay as capitalist civilizations faltered and more energetic nations emerged. Some believe he was describing his own family. The Adams family did not disappear with Brooks's death. But with the transfer of the old family homestead in Braintree/Quincy, Massachusetts, to the National Park Service in 1946, the Adamses became the "property" of the nation so many of them had served. See also political parties in the United States. Further reading: Contosta, David R. Henry Adams and the American Experiment. Boston: Little, Brown, 1980; McCullough, David. John Adams. New York: Simon and Schuster, 2001; Nagel, Paul C. Descent from Glory: Four Generations of the John Adams Family. New York: Oxford University Press, 1983. Marsha E. Ackermann

Afghani, Jamal al-Din al- Edit

(1838–1897) Pan-Islamic leader Jamal al-Din al-Afghani, often referred to as the founder of pan-Islam, was born in Iran. He attended madrasas (religious schools) in Iran and as a young man traveled to India, where he observed firsthand the discrimination against Muslims under the ruling British government. After making the hajj (pilgrimage) to Mecca, al-Afghani moved on to Karbala and Najaf, the main centers of Shi'i pilgrimage in Iraq. During the 1860s al-Afghani lived in Afghanistan before moving to Istanbul, where the ruling Sunni Muslim Ottoman elite did not accord him the respect and honor he felt he deserved. In 1871 al-Afghani moved to Egypt, where he lectured on the need for unity and reform in Muslim society. His popular lectures attracted a following among young Egyptians, and he became the mentor to a future generation of Muslim reformers that included Muhammad Abduh and others. Al-Afghani's popularity, calls for political reform, and opposition to British influence in Egypt attracted the attention of the ruling authorities, and the khedive (viceroy) expelled him from Egypt. He then returned to India, where he resumed teaching and writing on what he referred to as the Virtuous City, a society based on Islamic tenets and governed by honest, devout Muslim rulers. Al-Afghani argued that only a unified Muslim world could confront the Western imperial powers, particularly the British, on an equal basis. He traveled to London and Paris, where he debated the role of science in Islam with Ernest Renan, the noted French philosopher. He spent two years in Russia before returning to Iran, where he vigorously opposed Nasir al-Din Shah (the Qajar ruler). In Iran, as in Egypt, al-Afghani also spoke out against British influence, calling for a constitutional, parliamentary government. 
Al-Afghani’s opposition to the monarchy forced him to leave Iran for Turkey, where he continued to write and lecture about the need for basic constitutional reforms throughout the Muslim world. Al-Afghani carried on this work until his death in 1897. See also Arab reformers and nationalists; Ismail, Khedive. Further reading: Keddie, Nikki R. An Islamic Response to Imperialism: Political and Religious Writing of Sayyid Jamal al-Din “al-Afghani.” Berkeley, CA: University of California Press, 1968; ———. Sayyid Jamal al-Din “al-Afghani”: A Political Biography. Berkeley, CA: University of California Press, 1972. Janice J. Terry

Afghan Wars, First and SecondEdit

The two Afghan wars were caused by the growing rivalry for control of Central Asia between the Russian Empire and the British Empire. Because Afghanistan was the largest organized state in the Central Asian region, it became the main focus for both countries in what the British poet Rudyard Kipling would call the "Great Game." The Great Game actually began during the Napoleonic Wars. In 1810, while the British duke of Wellington was fighting the French in Spain, Captain Charles Christie and Lieutenant Henry Pottinger of the 5th Bombay Native Infantry Regiment left the village of Nushki in Baluchistan for their role in the game. On April 18 Christie reached Herat, while Pottinger pursued his own mission in Persia. Finally, on June 30, 1810, the two agents were reunited in Isfahan, Persia, with both missions accomplished. Over the next 25 years other British agents would follow Christie and Pottinger on great treks into Central Asia. Afghanistan was seen as the vital buffer state against the advance of the Russians and, while the British did not always desire to add Afghanistan to their empire, they always hoped that the ruler of the Afghans, the amir, would lend his support to them instead of the Russians. The British concerns were realized in December 1837 when a Cossack leader arrived carrying a letter from Czar Nicholas I of the Romanov dynasty for the Afghan amir, Dost Mohammed. At the same time, Kabul was visited by a British officer named Alexander Burnes, who had served with the Bombay army. By this time, Persia was allied to Russia. George Eden, Lord Auckland, and his chief secretary, Henry Macnaghten, suspected that Dost Mohammed had sided with the Russians. Having ascended the throne in June of 1837, Queen Victoria was now presented with the first serious crisis of her reign. Ultimately, nothing would suit Auckland and Macnaghten other than a regime change in Kabul. 
In February 1839 the British Army of the Indus, under the command of Sir John Keane of the Bombay Army, began its march for Kabul. In the beginning, Auckland's expectations that Dost Mohammed's rule could not survive appeared to be justified. In July 1839 the fortress of Ghazni fell before a furious British assault, and Dost Mohammed's forces melted away. Meanwhile, the Afghans faced a combined Sikh-British expedition coming up from Peshawar. In August 1839 Shah Shuja was crowned again as amir in Kabul, and Dost Mohammed sued for peace. Macnaghten lacked the temperament to deal with the tribesmen and, in 1841, slashed the subsidies that had earned their loyalty to Shah Shuja. As young officers pursued affairs with Afghan women that gave serious cultural offense, relations worsened further. The British commander, Major-General William Elphinstone, lacked both the ability and the courage to face the mounting crisis. By the end of November all Macnaghten and Elphinstone could think of was retreat. On December 11 Macnaghten met with Dost Mohammed's son Akbar Khan to finalize a British withdrawal. At a second meeting, on December 23, Macnaghten was taken by surprise and killed. Elphinstone continued planning for the retreat from Kabul, which began on January 6, 1842. The British and Indian troops were harassed and sometimes attacked by the Afghans along every foot of their retreat. On January 13 the last European finally reached safety at the British post of Jalalabad. Shah Shuja himself had been assassinated. In February 1842 Edward Law, Lord Ellenborough, replaced the unlucky Auckland as the area's governor-general, and plans were made to avenge their fallen countrymen. A punitive force commanded by Major-General George Pollock of the Bengal army entered Afghanistan again. Despite fierce resistance from Akbar Khan's forces, Pollock reentered Kabul in September 1842. 
Having made their point, the British evacuated Kabul again in December 1842 and this time reached British territory safely. The British permitted Dost Mohammed to take back the throne, but the overall aim of the war had been achieved: Afghanistan remained in the British camp and the Russian plans were thwarted. During the next 40 years the British and Russian Empires continued their seemingly inexorable advance toward one another through Central Asia. During the Sikh Wars, the British defeated the once independent realm of the Sikhs in the Punjab, firmly adding it to their growing Indian Empire. Although British rule was shaken during the Indian Mutiny of 1857–58, the attention of the British was still focused on the ambitions of the Russians to the north and west. With the assumption of direct British rule in the aftermath of the mutiny, real decision-making shifted decisively from the British governors-general in India to London. The Great Game was definitely on again, if it ever had stopped. In 1877 the Russians went to war with Turkey, and although the Congress of Berlin in 1878 promised peace, the stage was set for another confrontation over Afghanistan. Those who supported the aggressive Forward Policy against Russia, including Robert Bulwer-Lytton, the viceroy, demanded that action be taken against Afghanistan. On November 3, 1878, the British envoy Sir Neville Chamberlain appeared at the Khyber Pass to demand passage for his delegation to enter Kabul. Afghan border troops turned him back. On November 21 the British crossed the border into Afghanistan, 39 years after the first British invasion. As before, the Afghans were in no position to withstand the determined advance. In Kabul, Sher Ali relinquished his throne to his son Yakub Khan. After a winter of guerrilla war, Yakub Khan realized that making peace with the British was the best policy. 
In May 1879 Yakub Khan accepted a permanent British resident (who would actually serve as the real power in the country) in Kabul, Sir Louis Cavagnari. In July 1879 Cavagnari made his entrance into the Afghan capital. In September mutinous Afghan troops killed Cavagnari. Although he had requested aid from Yakub Khan, the request was ignored, leaving the impression that the troops attacked the British with at least the unspoken agreement of the amir. When news of the massacre reached India, Major-General Frederick Roberts was given command of the Kabul Field Force in order to lead a quick British response and stabilize the situation in Afghanistan before the Russians might be tempted to take advantage of the British defeat. Yakub Khan's troops made a stand at the Shutargardan Pass, but a determined British push cleared them away. Yakub Khan, chagrined at Roberts's determination, decided to make peace. However, the danger was far from past, and on October 5, 1879, Roberts was forced to fight another engagement with the Afghans. The British now faced hostility from a different quarter. A Muslim holy man, Mushk-i-Alam, preached a jihad, an Islamic holy war, against the British. This put the British force at Kandahar in peril. Once the news reached him, Roberts gathered a relief column to rescue the hard-pressed garrison at Kandahar. Within two weeks Roberts set out with a force of 10,000 men. On August 31, 1880, after a march of 21 days, Roberts broke Ayub Khan's siege of Kandahar. The next day Roberts decisively defeated him in open battle. With the relief of Kandahar the Second Afghan War came to a close. Ayub Khan and Yakub Khan were both tainted by their treachery in British eyes, and Abdul Rahman, their cousin, became the amir in Kabul. Twice in 40 years the British had asserted their primacy in Kabul and won another round in the Great Game against the Russians. See also Anglo-Russian rivalry. Further reading: Barthorp, Michael. 
Afghan Wars and the North-West Frontier, 1839–1947. London: Cassell, 2002; McCauley, Martin. Afghanistan and Central Asia: A Modern History. London: Pearson, 2002; Meyer, Karl E., and Shareen Blair Brysac. Tournament of Shadows: The Great Game and the Race for Empire in Central Asia. Washington, DC: Counterpoint, 1999. John F. Murphy, Jr.

Africa, exploration ofEdit

Systematic exploration of Africa by Europeans began with James Bruce, who was born at Kinnaird in Scotland in 1730. After a century of bloody internal war, Scottish energy turned to intellectual and scientific studies, including exploration. Bruce arrived in Algiers in 1762 as the British consul, and in 1768 he was in Cairo, where he conceived the great dream of his life: to find the source of the Nile River. Unlike others, Bruce believed the source of the Nile lay in Ethiopia, holding the misconception that the Blue Nile, not the White, was the main point of origin of the great river; as later explorers would determine, the White and Blue Niles are two distinct rivers. Bruce, with self-confidence and determination, was the prototype of the African explorer. In November 1770 he reached Ethiopia's Lake Tana, the source of the Blue Nile. After months of adventure and war, he returned to Cairo in January 1773 before going on to London and then to his native Scotland. In 1790 he published the record of his journeys, Travels to Discover the Source of the Nile. Four years later, Bruce, who had survived disasters and dangers, died at home from a fall on a flight of steps. The next great explorer of Africa was another Scotsman, Mungo Park, born in Selkirkshire in 1771. In 1789 he went to Edinburgh to study to become a surgeon. Park's extraordinary abilities caught the attention of Joseph Banks, perhaps the greatest botanist of his day. After Park completed his studies, Banks helped him secure the position of surgeon on the British East India Company's merchant ship Worcester. When he returned, he brought descriptions of eight new species of fish. Meanwhile, French and British colonial rivalry was beginning to engulf Africa. Impressed by Park's presentation of the new species, Banks recommended Park as a scientist for the Association for Promoting the Discovery of the Interior Parts of Africa, an expedition-sponsoring association. 
He got the position, and the expedition set sail on May 22, 1795. The party located the Niger River on July 22, 1796, and Park's record of the journey was published in 1799 as Travels in the Interior Districts of Africa. In January 1805 Park set sail in the troopship HMS Crescent and landed at the island of Gorée two months later. Disregarding sickness and bandits, which took a steady toll of his party, Park reached the Niger on August 19. Park wrote his last letter to his wife, Allison, on November 20, 1805. It appears the Scotsman was killed in a skirmish with tribesmen at Bussa Falls on the Niger in 1805. The Napoleonic conquest of Egypt guaranteed continued British interest in Africa because it brought the continent into the heart of the conflict. One of Napoleon's generals, Louis-Charles-Antoine Desaix, unwittingly became one of the first European explorers of the Nile as he pursued the defeated Mamluks into Upper Egypt. The British used the Napoleonic Wars to stake their claim on South Africa as well. In 1806, at the southern extremity of the continent, the British seized the Dutch colony at what would become Cape Town, since the Netherlands were then allied with the French. The great anchorage of Table Bay made the site vital to communications with the crown jewel of the growing British Empire, India. It became the southern British gateway to the interior of Africa, then undergoing the imperial conquests of the Zulu king Shaka Zulu. From Cape Town came the British penetration of the southern half of Africa that continued to the end of the 19th century. CAPE TOWN In November 1810 the new British colony of Cape Town led to the first British journey into the unknown Bantu lands to the north. William Burchell was born in 1782, the son of a professional nurseryman. Like Joseph Banks and Mungo Park before him, an interest in botany led to his interest in exploration. It took Burchell several months to gather together an expedition. 
His goal was the Kalahari Desert and Angola, which the Portuguese had first visited in the 15th century in their long trek down the west coast of Africa. Upon reaching the desert, the terrible heat and lack of water finally forced Burchell to abandon his quest for Angola, and in August he turned back. It would take him and his party two and a half years to return to Cape Town, having traversed some of the most forbidding terrain in Africa. In April 1815 he returned to Cape Town with an immense scientific treasure from his years of exploration. He returned to England, and from 1822 to 1824 Burchell devoted himself to writing his two-volume Travels in the Interior of Southern Africa. Thus, by the end of the Napoleonic Wars in 1815, much of the coastal area of Africa had been explored, and intrepid adventurers had begun to enter the uncharted heart of the continent. For the rest of the century, the lure of the African interior would be irresistible. While governments may have had their own agendas, the great majority of explorers traveled neither for imperial glory nor monetary gain, but for the sheer adventure of finding out what lay beyond the next river or mountain range. Still, as in the era of Mungo Park, one of the greatest challenges to exploration was the ancient city of Timbuktu; this and the source of the Nile formed two of the Holy Grails for generations of explorers. In May 1825 Alexander Gordon Laing landed in Tripoli, determined to find his way to Timbuktu. Finally, after a year of incredible hardship in the desert, on August 13, 1826, he arrived at Timbuktu. Although the city disappointed him, Laing was impressed by the Mosque of Sankore, built by the great Muslim West African ruler Mansa Musa. Although Laing had achieved his goal, his exploration ended in tragedy. On September 21, 1826, Laing was told he was not safe and left the city, walking into a trap set by Sheikh Ahmadu El Abeyd, who had promised him protection. 
On September 22 El Abeyd demanded that Laing accept Islam, but the Scotsman refused. He was killed and his head cut off. ZANZIBAR The chapter in the history of African exploration concerning Richard Burton and John Hanning Speke is the most tragic of all. In 1856 Richard Burton, perhaps the greatest British adventurer of his generation, was commissioned by the Royal Geographical Society to find the source of the Nile. He decided to take with him a companion from an earlier expedition, John Hanning Speke. Burton was already an accomplished traveler, proficient in Arabic, and able to carry off pretending to be a Muslim. On December 19, 1856, Burton and Speke arrived at Zanzibar from Bombay, where Burton held a commission in the army of the East India Company. Both men took ample time in Zanzibar preparing for their expedition. They set off on their quest after years of travels and squabbles. Burton was convinced that Lake Tanganyika was the source of the White Nile, whereas Speke believed it was Lake Ukerewe, which he renamed Lake Victoria. The rivalry that began in their prior expedition came to a head, and when Burton stopped to rest in Aden, Speke went on to England, promising to wait for his return to reveal the results of their journeys. He broke that promise, and by the time Burton arrived in England on May 21, 1859, Speke had convinced the Royal Geographical Society that Lake Victoria was the source. This accomplishment earned him another commission by the society, and he did not invite Burton to join him on his return to Africa to verify the claim. Instead, Speke chose an army companion, James Augustus Grant. They arrived in Zanzibar from England in August 1860. They retraced the route that Speke had taken with Burton. After several months in Uganda, Speke and Grant continued their trip. Because Grant had a severely infected leg, Speke tended to forge ahead on his own. 
On July 21, 1862, Speke found himself on the Nile, and on July 28 he came to Ripon Falls, where the White Nile flows out of Lake Victoria. It was during Speke's second trip that he and Grant met two of the period's most colorful explorers, Samuel Baker and his redoubtable wife, Florence. They met Speke at Gondokoro on the White Nile, whose source the Bakers were pursuing. A question remained about another lake, known as the Luta N'zige. Speke believed that the White Nile flowed into it from Lake Victoria and then out of Luta N'zige. Speke suggested to Baker that he take up the investigation, and Baker was pleased to do so. On February 26 Speke and Grant resumed their journey down the Nile to Khartoum, and from there to Cairo and England. LAKE ALBERT The Bakers continued with their exploration, and on January 31, 1864, they struck out on the final march toward Luta N'zige. On March 15, 1864, they found the lake, which they renamed Lake Albert. Samuel explored the surrounding area and saw that the Nile flowed through it. He and Florence returned to England in October, and Samuel was given a gold medal by the Royal Geographical Society. The following August he was knighted. Meanwhile, Speke returned to England without any convincing evidence that his theory was correct. The British Association for the Advancement of Science set up a meeting between Burton and Speke to make their cases. At a preliminary meeting Burton triumphed over Speke. On September 15, one day before the final confrontation, Speke was shot dead while hunting. Many claimed he had shot himself by accident, but others felt he had taken his own life. Throughout this entire period the name David Livingstone seemed to dominate. Livingstone was a Scotsman born on March 19, 1813. He first visited Africa as a missionary, having gained a degree in medicine at the age of 25 at the University of Glasgow. 
Livingstone soon realized that the exploration of this virtually unknown continent was closer to his heart than laboring at a missionary station and devoted himself to exploration, often with his wife. On June 1, 1849, with two companions, Oswell and Murray, he traveled to find Lake Ngami, and on August 1 Livingstone and his party sailed down the entire lake. Then began Livingstone’s exploration of the Zambezi River. A national hero back home, Livingstone recounted his travels in his best-selling Missionary Travels and Researches in South Africa. From 1858 to 1864 he was in Africa on a second expedition to explore eastern and central Africa. He returned to Africa in 1866 to look for the sources of the Nile. Striking out from Mikindani on the east coast, the expedition was forced south, and some of his followers deserted him, concocting the story that he had been killed and making headline news. Livingstone, however, pressed on, reaching Lakes Mweru, Bangweulu, and Tanganyika. Moving on to the Congo River, he went farther than any European before him. It was on this exploration that rumors reached England and North America that the great explorer was near death. In 1869 the New York Herald hired Henry Morton Stanley to find Dr. Livingstone. On November 10, 1871, Stanley found Livingstone at his camp at Ujiji on Lake Tanganyika. Upon Livingstone’s death in 1873, his body was returned to England for burial in Westminster Abbey. Stanley decided to pick up where Livingstone, Burton, and Speke had left off, and he set off on his own expedition. The most important result of the journey was the realization that Speke’s theory had been right—Lake Victoria was the source for the White Nile. He followed the Congo River and caught the attention of King Leopold II of Belgium, who wished to develop the Congo River basin. In 1879 Stanley set off for Africa in the service of Leopold. 
The exploration of Africa led to a rivalry among the countries that had sponsored the explorers. At the same time that Stanley had been exploring the Congo for Belgium, so had Pierre Savorgnan de Brazza for France. To prevent an African rivalry from endangering the peace of Europe, Chancellor Otto von Bismarck of Germany chaired a Conference of Berlin from November 1884 to February 1885 to gain the Great Powers’ agreement to a peaceful partition of Africa. The map of Africa was filling in as the end of the century approached. The areas not yet mapped quickened the heartbeats of explorers from all over the world. Kenya was the next area of interest. On January 2, 1887, the Hungarian explorer Count Teleki von Szek arrived in Zanzibar with Ludwig von Hohnel. Their goal was to explore for their patron, Crown Prince Rudolph of Austria-Hungary, another of the lakes that still tantalized African explorers, known in the local language as Basso Narok, or Black Water. Teleki was the first to climb Mount Kenya before discovering two more lakes, today known as Turkana and Stefanie. On October 26, 1888, after close to two years, they returned to Mombasa and the voyage home. In 1914, World War I changed the map of Africa forever. Still, in honor of the explorer who had the purest heart, in spite of the era of decolonization after World War II and the years of unrest that followed, the statue of Dr. David Livingstone still stands overlooking Victoria Falls today. See also Cook, James; slave trade in Africa. Further reading: Dugard, Martin. Into Africa: The Epic Adventures of Stanley and Livingstone. New York: Broadway Books, 2003; Kryza, Frank T. The Race for Timbuktu. New York: Harper, 2006; Livingstone, David. The Life and African Explorations of David Livingstone. New York: Cooper Square, 2002; Novaresio, Paolo. The Explorers. Vercelli, Italy: White Star, 2004; Shipman, Pat. To the Heart of the Nile. New York: HarperPerennial, 2004. John F. Murphy, Jr.

Africa, imperialism and the partition of

Imperialism, or the extension of one nation-state’s domination or control over territory outside its own boundaries, peaked in the 19th century as European powers extended their holdings around the world. The huge African continent (three times the size of the continental United States) was particularly vulnerable to European conquest. The partition of Africa was a fast-moving event. In 1875 less than one-tenth of Africa was under European control; by 1895 only one-tenth was independent. Between 1871 and 1900 Britain added 4.25 million square miles and 66 million people to its empire. British holdings were so far-flung that many boasted that the “sun never set on the British Empire.” During the same time frame, France added over 3.5 million square miles of territory and 26 million people to its empire. Controlling the sparsely populated Sahara, the French did not rule over as many people as the British. By 1912 only Liberia and Ethiopia in Africa remained independent states, and Liberia was really a protectorate of U.S.-owned rubber companies, particularly the Firestone Company. By the end of the 19th century, the map of Africa resembled a patchwork quilt of different colonial empires. France controlled much of North Africa, West Africa, and French Equatorial Africa (unified in 1910). The British held large sections of West Africa, the Nile Valley, and much of East and southern Africa. The Spanish ruled small parts of Morocco and coastal areas along the Atlantic Ocean. The Portuguese held Angola and Mozambique, and Belgium ruled the vast territories of the Congo. The Italians had secured Libya and parts of Somalia in East Africa. Germany had taken South-West Africa (present-day Namibia), Tanganyika (present-day Tanzania), and Cameroon. Britain had the largest empire and the French the second largest, followed by Spain, Portugal, and Belgium. 
Germany and Italy, among the last European nations to unify, came late to the scramble for Africa and had to content themselves with less desirable and lucrative territories. There were many different motivations for 19th-century imperialism. Economics was a major motivating factor. Western industrial powers wanted new markets for their manufactured goods as well as cheap labor; they also needed raw materials. J. A. Hobson and Vladimir Lenin both attributed imperial expansion to new economic forces in industrial nations. Lenin went so far as to write that imperialism was an inevitable result of capitalism. As the vast mineral resources of Africa were exploited by European imperial powers, many Africans became laborers in mines or workers on agricultural plantations owned by Europeans. The harsh treatment or punishment of workers in the rubber plantations of the Belgian Congo resulted in millions of deaths. However, economics was not the only motivation for imperial takeovers. In some instances, for example the French takeover of landlocked Chad in northern Africa, imperial powers actually expended more to administer the territory than was gained from raw materials, labor, or markets. Nationalism fueled imperialism as nations competed for bragging rights over having the largest empire. Nations also wanted control over strategic waterways such as the Suez Canal, ports, and naval bases. Christian missionaries traveled to Africa in hopes of gaining converts. When they were opposed or even attacked by Africans who resented the cultural incursions and denial of traditional religions, Western missionaries often called on their governments to provide military and political protection. Hence it was said that “the flag followed the Bible.” The finding of the Scottish missionary David Livingstone by Henry Stanley, an American of Welsh birth, was widely popularized in the Western press. Livingstone was not actually lost, but had merely lost contact with the Western world. 
Explorers, adventurers, and entrepreneurs such as Cecil Rhodes in Rhodesia and King Leopold II of Belgium, who owned all of the Congo as his personal estate, also supported imperial takeovers of territories. Richard Burton, Samuel and Florence Baker, and John Speke all became famous for their exploration of the Nile Valley in attempts to find the source of that great river. Their books and public lectures about their exploits fueled Western imaginations and interest in Africa.

CULTURAL IMPERIALISM

Cultural imperialism was another important aspect of 19th-century imperialism. Most Westerners believed they lived in the best possible world and that they had a monopoly on technological advances. In their imperial holdings, European powers often built ports, transportation, communication systems, and schools, as well as improving health care, thereby bringing the benefits of modern science to less developed areas. Social Darwinists argued that Western civilization was the strongest and best and that it was the duty of the West to bring the benefits of its civilization to “lesser” peoples and cultures. Western ethnocentrism contributed to the idea of the “white man’s burden,” a term popularized by the poet Rudyard Kipling. Racism also played a role in Western justifications for imperial conquests. European nations devised a number of different approaches to avoid armed conflict with one another in the scramble for African territory. Sometimes nations declared a protectorate over a given African territory and exercised full political and military control over it. At other times they negotiated through diplomatic channels or held international conferences. At the Berlin Conference of 1884–85, 14 nations decided on the borders of the Congo that was under Belgian rule, and Portugal got Angola. The term spheres of influence, whereby a nation declared a monopoly over a territory to deter rival imperial powers from taking it, was first used at the Berlin Conference. 
However, disputes sometimes led European nations to the brink of war. Britain and France both had plans to build a north-south railway and east-west railway across Africa; although neither railway was ever completed, the two nations almost went to war during the Fashoda crisis over control of the Sudan, where the railways would have intersected. Britain was also eager to control the headwaters of the Nile to protect its interests in Egypt, which was dependent on the Nile waters for its existence. Following diplomatic negotiations the dispute was resolved in favor of the British, and the Sudan became part of the British Empire. War did break out between the British and Boers over control of South Africa in 1899. By 1902 the British had emerged victorious, and South Africa was added to their empire. In West Africa, European powers carved out long narrow states running north to south in order that each would have access to maritime trade routes and a port city. Since most Europeans knew little or nothing about the local geography or demographics of the region, these new states often separated similar ethnic groups or put traditional enemies together under one administration. The difficulties posed by these differences continue to plague present-day West African nations such as Nigeria.

FRENCH AND BRITISH RULE

The French and British adopted very different approaches to governance in their empires. The French believed in their “civilizing mission” and sought to assimilate the peoples of their empire by implanting French culture and language. The British adopted a policy of “indirect rule.” They made no attempt to assimilate the peoples of their empire and educated only a small number of Africans to become civil servants. A relatively small number of British soldiers and bureaucrats ruled Ghana and Nigeria in West Africa. In East Africa, the British brought in Indians to take jobs as government clerks and in commerce. 
Otherwise, the British tried to avoid interfering with local rulers or ways of life. Although the British and French policies were radically different, both were based on the belief in the superiority of Western civilization. European colonists also settled in areas where the climate was favorable and the land was suitable for agriculture. Substantial numbers of French colons settled in the coastal areas of North Africa, especially in Algeria and Tunisia, while Italians settled in Tunisia and Libya. British settlers moved into what they named Rhodesia and Kenya. In Kenya, British farmers and ranchers moved into the highlands, supplanting Kenyan farmers and taking much of the best land. The Boers, Dutch farmers, fought the Zulus for control of rich agricultural land in South Africa. The Boers took part in a mass migration, or Great Trek, into the interior of South Africa from 1835–41 and established two independent republics, the Orange Free State and the Transvaal. Dutch farmers clashed with the British for control of South Africa in the Boer War. In Mozambique and Angola, Portuguese settlers (prazeros) established large feudal estates (prazos). Throughout Africa, European colonists held privileged positions politically, culturally, and economically. They opposed extending rights to native African populations. A few groups, such as the Igbos in Nigeria and the Baganda in Uganda, allied with the British and received favored positions in the colonial administrations. However, most Africans resisted European takeovers. Muslim leaders, such as Abdul Kader in Algeria and the Mahdi in Sudan, mounted long and effective armed opposition to French and British domination. But both were ultimately defeated by superior Western military strength. The Ashante in Ghana and the Hereros in South-West Africa fought against European domination but were crushed in bloody confrontations. 
The Zulus led by Shaka Zulu used guerrilla warfare tactics to halt the expansion of the Boers into their territories, but after initial defeats the Boers triumphed. The Boers then used the hit-and-run tactics they had learned from the Zulus in their war against the British. The British defeated the Matabele and Mashona tribes in northern and southern Rhodesia. In the 20th century, a new generation of nationalist African leaders adopted a wide variety of political and economic means to oppose the occupation of their lands by European nations and settlers. See also Congo Free State; Social Darwinism and Herbert Spencer (1820–1903); South Africa, Boers and Bantu in. Further reading: Hobsbawm, Eric. The Age of Empire, 1875–1914. London: Weidenfeld and Nicolson, 1987, 1996; Nederveen, Jan. White on Black: Images of Africa and Blacks in Western Popular Culture. New Haven, CT: Yale University Press, 1991; Pakenham, Thomas. The Scramble for Africa: White Man’s Conquest of the Dark Continent from 1876 to 1912. London: Weidenfeld and Nicolson, 1990; Robinson, Ronald, John Gallagher, and Alice Denny. Africa and the Victorians. New York: St. Martin’s Press, 1961. Janice J. Terry

Africa, Portuguese colonies in

Before the 1880s most African societies were independent of European rule. With particular reference to Africa south of the Sahara, colonial rule was confined to coastal patches and the Cape region, the latter being home to Anglo-Boer political rivalry. As regards the Portuguese, their colonial interest was restricted to their colonies of Angola, Mozambique, and the tiny area of Portuguese Guinea. Interestingly, Portuguese rule in these areas was not strong. The reason was that trade, not political administration, dominated the purpose of their encounter with Africans during this period. It was because of this that no major political responsibility was taken by Portugal, unlike the other European powers, with regard to colonies in Africa, creating the unique nature of Portuguese enterprise or activities in Africa between 1750 and 1900. The establishment of colonies and colonial rule, as well as the strategies employed by the Portuguese to keep their holdings in Africa, have an interesting history, despite their dwindling fortunes during this period, occasioned by economic, political, and strategic factors.

PORTUGUESE ENTERPRISE

Between 1750 and 1900 the Portuguese did not achieve much as far as their attempt to establish colonial rule in Africa was concerned. But if colonialism is taken to mean the occupation and control of one nation by another, then some of the attempts made by Portugal to establish political control over some parts of Africa can be highlighted as examples. It is important to stress that the driving force behind Portuguese enterprise in Africa, and elsewhere in the world, was trade and economic exploitation of their colonies, and it is this more than anything that drove Portuguese desire for political control of these areas. 
Indeed, Portugal, like many of the other colonial powers, had always treated its colonies like private estates of the motherland, where resources had to be repatriated for the development of the latter. No real political administration and structure were put in place in the colonies. In the case of East Africa, the area was more or less a stopping place for the Portuguese on their way to Asia. The chief result of their rule in this region was that it contributed greatly to crippling the old Arab settlements that were once the pride of the East African coast. Portugal viewed its East African possessions with mixed feelings. While the area did not give them the wealth they had expected, they nevertheless wanted to contain Arab influence in the area and deal directly with the indigenous Africans. It was for this that the Portuguese attacked communities in the area and established a presence in Mombasa, Sofala, Kilwa, Mozambique, and Pemba. There were many obstacles as far as its East African project was concerned. First, many of the Portuguese settlers in East Africa died from tropical diseases. Many others were killed in the continual fighting on the coast. Second, due in large part to disease and fighting, Portugal never had a population large enough to carry out its colonial plans in East Africa. Most of its personnel were kept busy in Brazil and their empire in the Indian Ocean. Third, competition from the British and the Dutch East India Company helped to weaken the Portuguese hold on the western shores of the Indian Ocean. Then there were numerous revolts from the Arab leaders of the region. For instance, in 1698 Sultan bin Seif, the sultan of Oman, and his son, Imam Seif bin Sultan, captured Fort Jesus, which had been the military and strategic base of Portuguese holdings in East Africa. Indeed, in 1699 the Portuguese were driven out of Kilwa and Pemba, thus marking the end of Portuguese colonial interest in East Africa north of Mozambique. 
Earlier, in 1622, a revolt against the Portuguese led by a former Portuguese mission pupil, Sultan Yusuf, had helped to precipitate the disintegration of Portuguese military strength in Mombasa. As a consequence of these issues, Portuguese holdings in East Africa were far from a successful colonial rule. By 1750 Portuguese interests in East Africa were replaced by a new socio-political order led by the leaders of Oman.

AFRICAN INTERIOR

In the interior of Africa, the Portuguese did not achieve anything substantial as far as colonial rule was concerned. The Mwenemutapa (known to the Portuguese as Monomotapa) did not provide fertile soil for the establishment of Portuguese colonization. The Portuguese, for their part, were more interested in what they would get instead of what they would give. Besides, the area was already experiencing decline owing to the emergence of several dynasties in the region. This situation was not helped by contact with the Portuguese. Elsewhere, in Guinea there was Portuguese influence, but it was not enough to be described as colonial rule. By 1750 Portuguese colonies in Africa were limited to Angola, Mozambique, and Guinea, but colonial rule was more pronounced in the first two colonies. The Portuguese also held important islands in the Atlantic off the coast of Africa. During this period Portuguese colonies, especially Angola, remained the supply base for the Brazilian slave trade. The Portuguese sought to create a highly polished elite conditioned by their culture. This aspiration did not materialize. Indeed, the Angolan colony, which was an example of Portuguese colonial interest in Africa, was a mere shambles, in which the criminal classes of Portugal were busy milking the people for their own benefit. To this end, Angola, like Mozambique, could be described as a trading preserve from which the interior could be reached.

WEB OF MISERY

Politically, Portuguese colonies lacked effective administration. 
The historian Richard Hammond has painted the picture in a sympathetic way when he argued that Portugal could not effectively control its colonies. He was merely echoing the voice of a Portuguese official, Oliveira Martins, who wrote that Portuguese colonies were a web of misery and disgrace and that the colonies, with the exception of Angola, be leased to those “who can do what we most decidedly cannot.” The reason why Portuguese colonies were so painted is not hard to understand. A. F. Nogueira, a Portuguese official, said, “Our colonies oblige us to incur expenses we cannot afford: For us to conserve, out of mere ostentation, mere display, mere prejudice . . . colonies that serve no useful purpose and will always bring us into discredit, is the height of absurdity and barbarity besides.” In 1895 the minister of marine and colonies, the naval officer Ferreira de Almeida, argued in favor of selling some of the colonies and using the proceeds to develop those colonies that would be retained. It is obvious from the issues Portugal contended with in Africa that the intent was to have a large space on the map of the world, but that Portugal was never ready to administer them practically. This notwithstanding, it is safe to say that the Portuguese implemented the policy of assimilation in governing their colonies. The aim was to make Africans in the colonies citizens of Portugal. Those who passed through the process of assimilation were called assimilados. It is important to note that the number of assimilados ceased to grow after the unsuccessful effort of the liberal Bandeira government to make all Africans citizens of Portugal. It is not clear whether the Portuguese were sincere in their efforts to assimilate Africans in their colonies. It appears that the policy was a mere proclamation that did not have the necessary political backing. Indeed, the idea of equality was a farce. 
The government did not provide the necessary infrastructure such as schools, finances, or other social institutions upon which such equality, demanded by true assimilation, could be built. The process of education in Portuguese territories in Africa was far from satisfactory. The aim of Portuguese education was essentially to create an African elite that would reason in the way of the Portuguese. However, the Portuguese officials were not committed to the cause of educating Africans at the expense of Portugal. Consequently, most schools were controlled by the Catholic Church, as a reflection of the relationship between church and state. This meant that the state was dodging its responsibility to provide education for the people of its African colonies. Historian Walter Rodney has criticized the type of education in Portuguese colonies in Africa. He believed that the schools were nothing but agencies for the spread of the Portuguese language. He argued further that “at the end of 500 years of shouldering the white man’s burden of civilizing ‘African Natives,’ the Portuguese had not managed to train a single African doctor in Mozambique, and the life expectancy in eastern Angola was less than 30 years . . . As for Guinea-Bissau, some insight into the situation there is provided by the admission of the Portuguese themselves that Guinea-Bissau was more neglected than Angola and Mozambique.” Later in the 20th century, the Portuguese encouraged state financing of education in the colonies and ensured that a few handpicked Africans were allowed to study in Portugal. Sometimes, provisions were made for the employment of such assimilados in the colonial administration. This development notwithstanding, Portuguese colonies in Africa did a poor job in education.

SLAVE TRADE

Another important aspect of Portuguese colonial rule in Africa is its attitude toward labor and the recruitment of it. 
For a long time the slave trade provided an avenue for the recruitment of labor in Portuguese territories. However, in 1836, slave trafficking was abolished in Portugal’s colonies, although it continued in practice under the name of contract labor. Under this new practice, every year the Portuguese shipped thousands of people from Angola to coffee and cocoa plantations on the island of São Tomé as forced laborers. Mozambique also offered an avenue for migration of labor to work in mines in British-controlled Rhodesia. Sometimes, the migrants were happier working in the mines than being forced to work at home. All the same, the Portuguese controlled the recruitment of this labor to Rhodesia, taking revenue from each worker that they allowed to leave. This was another way to generate revenue. The historian Basil Davidson has commented that a distinguishing feature of Portuguese colonies was the presence of large systems of forced labor put in place to exploit and oppress the indigenous people. There were reasons for this development. First, in the case of Angola, the increasing prosperity of the cocoa industry and the attendant increase in the demand for labor made forced labor a desirable alternative. Second, toward the end of the 18th century, the supply of labor was affected by the spread of sleeping sickness in the interior. Consequently, the Portuguese had to rely on forced labor for its supply. The colonies were subjected to a great deal of economic exploitation. From the start, Portuguese enterprises in Africa were dictated by the desire to procure slaves. Indeed, slaves constituted almost the sole export of the colonies. This continued up to the end of the 19th century. In Angola, the Portuguese established their rule of ruthless exploitation for the purpose of procuring large numbers of slaves for the Brazilian market. The exploitation of Angola for slaves came to be known as the era of the pombeiros. 
The pombeiros, half-caste Portuguese, were notorious for their activities, which consisted of stirring up local conflicts in order to capture slaves for sale at the coast. The pombeiros were the masters of the interior whom the slave dealers relied on for procurement.

INTELLECTUAL REACTION

In 1901 a decree was issued by the government in Lisbon to put a stop to recruitment of labor by violent means. In Luanda, some pamphlets were published to denounce the practice of forced labor. This was an intellectual reaction to the phenomenon of forced labor. In practical terms, it did not have any substantial effect on the practice. There was a violent reaction to the phenomenon of forced labor, starting with the Bailundo Revolt of 1902. In 1903 fresh regulations were issued to tackle the issue of forced labor, but they achieved little or no success. Portugal’s objection to forced labor was not born of concern for Africans; such a stance was taken whenever the authority felt that certain individuals were gaining too much local power. Indeed, the official view, embodied in a law of 1899, was that forced labor was an essential part of the civilizing process, provided it was done decently and in order. The Portuguese attitude to race was one of superiority on their part and inferiority on the part of Africans. No colonial power was entirely free from racial prejudice. Segregation, whether pronounced or not, was often used as a means of preserving the racial purity of European settlers in Africa. In the case of the Portuguese, the authority was interested in ensuring the racial purity of Portuguese agrarian settlers in Angola. However, the conditions in the colonies did not favor or encourage Europeans to settle in large numbers. Consequently, white populations could be maintained only by settling convicts and by miscegenation. Because of this, racial mixing in Portuguese colonies was accepted—it was necessary to maintain the population. 
Portugal’s colonial history provides a particularly illuminating case of Europe’s impact on the racial and ethnic character of Africa as far as racial-demographic engineering was concerned. No substantial infrastructure development can be ascribed to Portuguese colonial enterprise in Africa. Even though the Portuguese treated their colonies as the “private estate of the motherland,” no major policies and programs were put in place to address infrastructural development. For instance, even though Angola produced excellent cotton, none of it was actually processed in Angola. Additionally, communication was poor. The Portuguese settlements were isolated from one another. For instance, when Lourenço Marques was engulfed in crises in 1842 and the governor was killed in a raid organized by the indigenous people, it took the authorities in Mozambique a year to hear of the happening by way of Rio de Janeiro. But Portugal was lucky to benefit from development initiated by other countries. In 1879 the Eastern Telegraph Company’s cable, en route to Cape Town, established “anchor points” in Mozambique and Lourenço Marques. In 1886 the telegraph line reached Luanda en route to the Cape. This provided the first major link between Portugal and its overseas colonies. Furthermore, in 1880 Portugal and the Transvaal concluded a revised version of their existing territorial treaty of 1869, in which they agreed to build a railroad from Lourenço Marques to Pretoria. British control of the Transvaal stalled the progress of the work. Portugal on its own did not make efforts to connect its colonies in Africa in a manner that would make sense with regard to Africa’s needs and development. Lastly, bureaucracy was not effective as far as Portuguese colonial rule in Africa was concerned. There was no regular cadre of trained civilian recruits on which to draw. The effect of this was that there was an almost complete absence of the routine competence that a good administration needs. 
This affected the coordination of Portuguese colonial activities in Africa.

CONCLUSION

Between 1750 and 1900 the Portuguese presence in Africa was one of economic exploitation much more than actual colonial rule. In fact, the Portuguese had no major administrative systems in place in their African colonies. Instead, the primary motive for the creation of the colonies was economic, initially the slave trade and later other lucrative commodities. The Portuguese colonies lacked basic infrastructure and lagged behind European colonies in Africa. See also Brazil, independence to republic in; British East India Company; Omani empire; prazeros. Further reading: Davidson, Basil. Modern Africa: A Social and Political History, 3rd ed. London: Longman, 1994; Hammond, Richard J. “Uneconomic Imperialism: Portugal in Africa before 1910.” In Gann, L. H., and Peter Duignan, eds., Colonialism in Africa, 1870–1960, Vol. 1. Cambridge: Cambridge University Press, 1969; Marsh, Z. A., and G. W. Kingsnorth. An Introduction to the History of East Africa, 3rd ed. Cambridge: Cambridge University Press, 1965. Omon Merry Osiki

Aigun and Beijing, Treaties of

The Russian Empire made important gains at the expense of China between 1858–60. The Qing (Ch’ing) dynasty’s easy defeat by Great Britain in the first Anglo-Chinese Opium War had made its glaring weakness apparent to the world. Russian leaders, including Czar Nicholas I, feared British dominance in East Asia and resolved to expand into Chinese territory first. In 1847 Nicholas appointed Nikolai Muraviev, an energetic proponent of Russian imperialism, governor of Eastern Siberia. Muraviev built up a large Russian force that included Cossack units and a naval squadron in the Far East, and set up forts and settlements along the Amur River valley in areas that the Treaty of Nerchinsk (1689) between Russia and China had recognized as Chinese territory. The small and ill-equipped Chinese frontier garrison in the region was no match for the Russians when Muraviev demanded in May 1858 that China recognize Russian sovereignty on the land north of the Amur riverbank. With more than 20,000 troops and naval support, he was able to force the Chinese representative to agree to the Treaty of Aigun, named after the frontier town where the meeting took place. Under its terms, China ceded to Russia 185,000 square miles of land from the left bank of the Amur River down to the Ussuri River and agreed that the territory between the Ussuri and the Pacific Ocean would be held in common pending a future settlement. The Chinese government was furious with the terms and refused to ratify the treaty but was helpless because of the ongoing Taiping Rebellion and other uprisings, as well as a war with Great Britain and France, known as the Second Anglo-Chinese Opium War. Events played into Russian hands in 1860, because resumed warfare between China and Britain and France had led to the capture of the capital city, Beijing (Peking), by British and French forces. 
The incompetent Qing emperor Xianfeng (Hsien-feng) and his court fled to Rehe (Jehol) Province to the north, leaving his younger brother Prince Gong (Kung) in charge. Russia was represented in Beijing at this juncture by the wily ambassador Nikolai Ignatiev, who had recently arrived to secure Chinese ratification of the Treaty of Aigun. Ignatiev offered to mediate between the two opposing sides; by deception, maneuvering, and ingratiating himself with both parties he scored a great victory for Russia in the supplementary Treaty of Beijing in November 1860. It affirmed Russian gains under the Treaty of Aigun and secured exclusive Russian ownership of the land east of the Ussuri River to the Pacific Ocean, down to Korea’s border, an additional 133,000 square miles, including the port of Vladivostok (meaning “ruler of the East” in Russian). In addition, Russia received the same extraterritorial rights and the same right to trade in the treaty ports that Britain and France had won by war. China also opened two additional cities, located along land routes in Mongolia and Xinjiang (Sinkiang), for trade with Russia. Through astute diplomacy and by taking advantage of the weak and declining Qing dynasty, Russia was able to score huge territorial gains from China between 1858 and 1860 without firing a shot.

See also Romanov dynasty.

Further reading: Quested, R. K. I. The Expansion of Russia in East Asia, 1857–1860. Kuala Lumpur and Singapore: University of Malaya Press, 1968; Schwartz, Harry. Tsars, Mandarins and Commissars: A History of Chinese-Russian Relations, rev. ed. Garden City, NY: Anchor Press, 1973; Tien-fong Cheng. A History of Sino-Russian Relations. Washington, DC: Public Affairs Press, 1957.

Jiu-Hwa Lo Upshur

Alaska purchaseEdit

Alaska was purchased by the United States from czarist Russia in 1867. It had been occupied by Russia since the 18th century and exploited by Russian fur and fishing interests. However, by the 1860s the region was viewed by the Russian government as a strategic liability and an economic burden. Suspicious of British intentions in the Pacific, and concerned with consolidating its position in eastern Siberia, the Russian government offered to sell Alaska to the United States. Baron Edouard de Stoeckl, Russia’s minister to the United States, entered into negotiations with President Andrew Johnson’s secretary of state, William H. Seward, in March 1867. Seward was a zealous expansionist. Throughout his tenure as secretary of state, which had begun during the administration of Abraham Lincoln, Seward was avid in his desire to advance American security and extend American power to the Caribbean and the Pacific. The American Civil War and the lack of political and public support for expansion in the war’s aftermath stymied his desires. He did succeed, however, in acquiring Midway Island in the Pacific and in gaining transit rights for American citizens across Nicaragua. Seward and Stoeckl drafted a treaty that agreed upon a price of $7,200,000 for Alaska. For approximately two cents an acre, Seward had obtained an area of nearly 600,000 square miles. However, he encountered difficulty in obtaining congressional approval for the transaction. Senator Charles Sumner overcame his initial opposition and sided with Seward. He gave a persuasive, chauvinistic three-hour speech on the Senate floor that utilized expansionist themes familiar to many 19th-century Americans. He spoke of Alaska’s value for future commercial expansion in the Pacific, cited its annexation as one more step in the occupation of all of North America by the United States, and associated its acquisition with the spread of American republicanism. The Senate ratified the treaty in April 1867. Despite the formal transfer of Alaska in October of that year, the House, in the midst of impeachment proceedings against Johnson, refused to appropriate the money required by the treaty. It was not until July 1868 that the appropriation was finally approved. The purchase was repeatedly ridiculed. Alaska was referred to as a frozen wilderness, “Seward’s Ice Box,” and “Seward’s Folly.” The subsequent discovery of gold in 1898 brought about a new appreciation for the area’s intrinsic value. Alaska’s rich fishing grounds, its vital location during World War II, the discovery of oil and natural gas fields, and the recognition of its natural beauty as a source for tourism have allayed further criticism of its purchase. Its increasing population qualified it to become the 49th state in 1959.

[Photograph: The Alaska Range in the south-central region of Alaska.]

See also Hawaii; Louisiana Purchase; Manifest Destiny.

Further reading: Holbo, Paul S. Tarnished Expansion: The Alaska Scandal, the Press, and Congress, 1867–1871. Knoxville: University of Tennessee Press, 1983; Jensen, Ronald J. The Alaska Purchase and Russian-American Relations. Seattle: University of Washington Press, 1975; Paolino, Ernest N. The Foundations of the American Empire: William H. Seward and U.S. Foreign Policy. Ithaca, NY: Cornell University Press, 1973.

Louis B. Gimelli

Alexander IEdit

(1777–1825) Russian czar

Alexander I was the czar of Russia from 1801 to 1825, a reign during which he instituted widespread reforms but later reversed many of them. As a child, he was raised by his grandmother Catherine the Great in a liberal and intellectual environment. She died when he was a teenager in 1796, and his father died five years later, most likely with Alexander’s complicity, as part of a conspiracy to put him on the throne. Alexander was deeply committed to reform and sought to bring Russia up to speed with the rest of Enlightenment-era Europe. Attempts at drawing up a constitution that could find support failed, and his early legal code was never adopted. In many cases, Alexander called for reform but micromanaged its adoption, making it impossible for the reform to take place. Other reforms were simply poorly conceived, lacked a practical transition from the status quo, or were unworkable in light of the existing bureaucracy. His European contemporaries saw him as enigmatic and inconsistent. When Russia acquired Poland, Alexander approved its constitution, which provided many of the same things he wanted for his own country. Reform efforts dwindled after 1810 because of the Napoleonic wars that consumed Europe. Alexander was intimidated by Napoleon I, and perhaps by the scale of the wars themselves. He believed that at stake in the wars in Europe were the rights of humanity and the fate of nations, and that only a confederation of European states devoted to the preservation of peace could prevent the dangers of dictators and world conquerors. Napoleon claimed Russia had nothing to fear from France and that the distance between the two nations made them allies. Any ambitions this may have stirred in Alexander were crushed by the summer of 1812, when Napoleon invaded Russia. The results startled everyone: as Napoleon advanced on Moscow, Alexander ordered the city evacuated and burned.
Anything that could help the invading French army was destroyed; more than three-quarters of the city was lost. Napoleon began his long retreat, and by the end of the campaign the French forces of nearly 700,000 had been reduced to fewer than 25,000. It was a turning point for both men: Napoleon would ultimately lose, and Alexander would ultimately abandon his quest for reform. He initiated few new programs, failed to see older programs through, and by the end of his reign had reversed many of his early reforms rather than repair them. Alexander died of a sudden illness in 1825, on a voyage in the south. The circumstances of his death inspired rumors claiming that he had been poisoned, or that he had not died at all and a soldier had been buried in his place.

Further reading: Gribble, Francis. Emperor and Mystic: The Life of Alexander I of Russia. New York: Kessinger, 2007; Martin, Alexander M. Romantics, Reformers, Reactionaries: Russian Conservative Thought in the Reign of Alexander I. DeKalb: Northern Illinois University Press, 1997; Troubetzkoy, Alexis S. Imperial Legend: The Disappearance of Czar Alexander I. New York: Arcade, 2002.

Bill Kte’pi

Algeria under French ruleEdit

France first occupied Algeria in 1830. During the Napoleonic era, France had bought Algerian wheat on credit. After the fall of Napoleon I Bonaparte, the newly reestablished French monarchy refused to pay these debts. The dey of Algiers, Husain, sought payment, and during a quarrel with the French consul Duval he allegedly hit the consul in the face with his flyswatter. Duval reported the insult to Paris, and the French government sought revenge. King Charles X, who wanted to gain new markets and raw materials and to deflect attention from an unstable domestic political situation, used the supposed insult as an excuse to attack Algeria. As a result, a French fleet carrying over 30,000 men landed at Algiers in the summer of 1830, and Dey Husain was forced by General de Bourmont to sign an act of capitulation. The French pledged to maintain Islam and the customs of the people but also confiscated booty worth over 50 million francs. The French government then debated what to do with the territory. France could keep the dey in power, destroy the forts and leave, or install an Arab prince to rule. The government also debated supporting the return of Ottoman rule, putting the Knights of Malta in power, inviting other European powers to establish some form of joint rule, or keeping the territory as part of the French empire. By 1834 the French had decided on a policy of conquest and annexation of the Algerian territory. A French governor-general was appointed, and all Ottoman Turks were out of Algeria by 1837. The French government held that there was no such thing as an Algerian nation and that Algeria was to become an integral part of France. Although assimilation of the predominantly Muslim and Arabic-speaking Algerian population into French society was ostensibly the policy of successive French regimes, the overwhelming majority of Algerians were never accepted as equals.
Algeria became a French department, and the French educational system, with French as the primary language, was instituted. In 1865 the French government under Napoleon III declared that Algerian Muslims and Jews could join the French military and civil service but could become French citizens only if they gave up their religious laws. The overwhelming majority of the Muslim population refused to do so, and Algerian Muslims gradually became third-class citizens in their own country, behind the mainland French and the colons, or French settlers. In 1870 Algerian Jews were granted French citizenship. Through most of the 19th century, the Algerians fought against the French occupation. Led by Emir Abdul Kader, the Algerians were initially successful in their hit-and-run attacks against the French. To gain the offensive, General Thomas-Robert Bugeaud created mobile columns to attack the Algerian fighters deep inside Algerian territory. With their superior armaments, the French put Abdul Kader’s forces on the defensive, and Abdul Kader was forced to surrender in 1847, after which he was sent into exile. In 1871 another revolt, led by Mokrani, broke out in the Kabyle, the mountainous district of northeastern Algeria. A woman named Lalla Fatima also championed the fighters in the Kabyle, but by 1872 the French had crushed the revolt. In retaliation, the French expropriated more than 6.25 million acres of land. Much of the expropriated land was given to French settlers coming from the provinces of Alsace and Lorraine that France had lost to Germany as a result of the Franco-Prussian War of 1870–71. These punitive land expropriations made most Algerians tenant farmers and led to further impoverishment of the indigenous population. By the end of the 19th century there were approximately 200,000 French colons living in Algeria.
Indigenous Algerians were forced to pay special taxes, and limitations were placed on the number of Algerian children who could attend French schools. In addition, the French judicial system was imposed. In reaction to the growing social and political chasm between the colons and the indigenous population, a few Muslim leaders in the cities of Tlemcen and Bone sent a note to the government in 1900 asking for the right to vote. Called the Young Algerians (Jeunes Algériens), these modernizers sought to narrow the gap between the two societies and had much in common with reformers in other parts of the Arab world. Although some liberals in mainland France supported reforms, the colons remained firmly opposed to any legislation that would lessen their favored position.

See also Kader ibn Moheiddin al-Hosseini, Abdul.

Further reading: Danziger, Raphael. Abd Al Qadir and the Algerians: Resistance to the French and Internal Consolidation. New York: Holmes and Meier, 1977; Sullivan, Antony T. Thomas-Robert Bugeaud, France, and Algeria, 1784–1849: Politics, Power, and the Good Society. Hamden, CT: Shoe String Press, 1983.

Janice J. Terry

Alien and Sedition Acts, U.S.Edit

In 1798 four federal laws restricting U.S. citizenship and severely curtailing the freedoms of speech, press, and assembly were adopted by a Federalist Party–dominated Congress and signed by President John Adams. Sparked by mounting tensions between the United States and its former ally, France, these laws purported to be essential to the young nation’s security. In fact, they were mainly used to silence domestic critics as intense partisanship emerged. War certainly seemed a strong possibility as the French seized U.S. ships and sailors, schemed to regain control of Spanish Louisiana, and blatantly demanded bribes in return for diplomatic recognition. As Americans expressed patriotic outrage, those who still viewed France as a key ally and hailed the French Revolution were painted as traitors. Chief among these was Democratic-Republican leader Thomas Jefferson, who was both Adams’s vice president and chief political rival. As these laws were implemented by his Federalist foes, Jefferson would call the years 1798 to 1801 “the reign of witches.” A new naturalization statute and two alien laws created major barriers to what had been an extremely liberal U.S. policy of welcoming and extending citizenship benefits to foreigners. Emerging nativist suspicions focused on French “Jacobins” and the supposedly “wild” Irish. The Alien Acts gave the president broad powers to have noncitizens arrested or deported in both peace- and wartime. Anticipating deportation, French visitors chartered 15 ships to return to Europe. Soon after, Adams would personally prevent French scientist Pierre Samuel du Pont de Nemours, whose son would later found a major American chemical company, from setting foot in the United States. The effects of the Sedition Act would prove even more significant, posing a clear challenge to the First Amendment of the Constitution, adopted just eight years earlier. 
Zealously enforced by Secretary of State Timothy Pickering, this act forbade utterances that might bring the president or Congress “into contempt or disrepute.” It produced 17 known indictments, focusing on Republican newspaper publishers. One of these was Benjamin Franklin Bache, editor of the Philadelphia Aurora and grandson of Benjamin Franklin. Despite violent attacks on his home and person, Bache continued to publish until he died of yellow fever a month before his scheduled trial. Politicians, too, were targeted. Matthew Lyon, an Irish immigrant and Vermont congressman who was one of very few non-Federalist politicians in New England, was convicted for calling the Sedition Law unconstitutional. Conducting his reelection campaign from jail, Lyon won easily and was freed when supporters paid his $1,000 fine. Federalist Jedidiah Peck, a New York assemblyman, was dumped by his party and arrested for petitioning to repeal the Alien and Sedition Acts. He was also handily reelected, as a Republican. Opponents got no help from the Supreme Court, where the ardently Federalist associate justice Samuel Chase personally presided over several sedition trials. The predominantly Republican states of Kentucky and Virginia passed resolutions condemning the laws. It took Jefferson’s narrow victory in the bitter presidential campaign of 1800 to ensure that the acts, already set to expire in March 1801, were not renewed. Jefferson also pardoned those still jailed for sedition. Years later, Charles Francis Adams, diplomat grandson of John Adams, would call the Sedition Act the fatal error that ultimately doomed the Federalist Party to oblivion after the War of 1812.

See also immigration, North America and; newspapers, North American; political parties in the United States.

Further reading: Miller, John C. Crisis in Freedom: The Alien and Sedition Acts. Boston: Little, Brown, 1951; Smith, James Morton. Freedom’s Fetters: The Alien and Sedition Laws and American Civil Liberties. Ithaca, NY: Cornell University Press, 1966.

Marsha E. Ackermann

Aligarh College and movementEdit

Aligarh College, now Aligarh Muslim University, was the first institution of higher learning for Muslims in British India. Many prominent Muslim leaders and scholars have studied at Aligarh, and it served to provide an important focus for the development of Muslim unity and political awareness, particularly during the late 19th and early 20th centuries. The college has its roots in the belief of Sayyid Ahmad Khan that there was no conflict between education in modern empirical science and belief in the Qur’an. Khan desired to educate young Muslims in English, modern science, and the principles of Western government so they could take a leading role in the contemporary world. He was particularly interested in enabling them to compete with Hindus and other religious and ethnic groups for positions of power in British-ruled India. In order to prepare Indian Muslims to accept Western education, Khan first created the Scientific Society of Aligarh in 1864, which translated Western scientific, historical, and philosophical works into Indian languages. Khan visited England in 1870, and his inspiration for Aligarh College was the universities at Oxford and Cambridge. He founded what was then known as the Muhammadan Anglo-Oriental College at Aligarh in 1875; it offered a Western curriculum similar to that of an English public (private) school, and the first principal, Theodore Beck, was British. Aligarh College became the leading center for the education of modern Muslim leadership in India and helped to create an educated Muslim elite that held many political positions and served as a catalyst for change within the British system. The college was particularly important in providing practical experience in politics through campus debating societies and student elections and in encouraging the formation of a collective and unified identity by the Indian Muslim community.
Aligarh College became a full-fledged university in 1920 and was renamed Aligarh Muslim University. The university is located in the city of Aligarh, Uttar Pradesh, in northern India. It currently has about 30,000 students representing many religious and ethnic backgrounds and offers instruction in 80 fields of study, including law, medicine, and engineering.

Further reading: Khan, Abdul Rashid. The All India Muslim Educational Conference: Its Contribution to the Cultural Development of Indian Muslims, 1886–1947. New York: Oxford University Press, 2001; Moin, Mumtaz. The Aligarh Movement: Origin and Early History. Karachi, Pakistan: Salman Academy, 1976; Muhammed, Shan. Successors of Sir Syed Ahmad Khan: Their Role in the Growth of Muslim Political Consciousness in India. Delhi: Idarah-i Adabiyat-i Delli, 1981; Nasr, Seyyed Vali Reza. “Religious Modernism in the Arab World, India and Iran: The Perils and Prospects of a Discourse.” Muslim World 83, no. 1 (1993).

Sarah Boslaugh

American Revolution (1775–1783)Edit

The war that created and established the independence of the United States of America officially broke out between Britain and 13 of its North American colonies at Lexington and Concord, Massachusetts, and ended when the Treaty of Paris was signed. However, historians now maintain that the revolution really began during, or at least in the wake of, the Seven Years’ War, also called the French and Indian War, long before the “shot heard round the world” of April 19, 1775. Serious political and social issues between Britain and its colonies emerged during this earlier conflict. Many colonial American men were not prepared to endure the harsh discipline of the British army or navy during the war and had an extraordinarily narrow and even legalistic perspective on their military obligations. For their part, aristocratic British military officers were unfamiliar with colonial America’s more boisterous political culture and expected colonial militiamen to obey orders without a second thought. These problems of deference and duty grew worse in the 1760s as the British attempted to deal with issues of imperial governance over the huge territory they had won from France. The British struggled to reconcile the goals of their colonial subjects, who hungered for Indian lands between the Mississippi River and the Appalachian Mountains, with the need to foster peace, stability, and the continuation of the fur trade among the Indian tribes in the same region. As the French and Indian War was ending in 1763, an Indian coalition assembled by the Ottawa chief Pontiac besieged British garrisons in and around the Great Lakes, killing or capturing 2,000 colonials and prompting Britain’s Proclamation Line. This poorly conceived and expensive attempt to separate Indian and colonial claims proved hugely unpopular with American expansionists.
The greatest problem that Britain faced, however, was the doubling of its national debt resulting from the Seven Years’ War, as this conflict was known in Europe. Parliament sought to levy taxes on the colonies in order to manage the debt without raising levies on already heavily taxed British subjects. The colonists, mistrustful of parliamentary motives and quite used to being subsidized by the Crown, reacted with alarm to new taxes on items such as sugar, paper, and, later, tea. Each new tax was followed by petitions, protests, and even riots, especially in Boston, where leaders like Samuel Adams rallied opposition against parliamentary power over the colonies, and in Virginia, where Burgess Patrick Henry shocked fellow legislators by seeming to foment rebellion against King George III. Each time resistance to a tax ensued, Parliament repealed it but introduced a new one, spawning more resistance that was often met by British shows of force. When Parliament sent in redcoats after the 1773 Boston Tea Party, the deliberate destruction by colonials of 342 chests of tea subject to the hated tax, and imposed what colonists called the Intolerable Acts, it provoked even more violence between British troops and Americans. Colonial propagandists made the most of these incidents, creating such activist organizations as the committees of correspondence and the Sons and Daughters of Liberty. By 1774, colonists had established the First Continental Congress. Using this body, as well as traditional colonial assemblies and militias, the “Continentals” or “Patriots” soon set up a virtual shadow government that ran the countryside in each colony. The Battles of Lexington and Concord ensued when the royal governor of Massachusetts, Lieutenant General Thomas Gage, sent grenadiers and Royal Marines into the countryside to try to confiscate arms and ammunition being stored by the militias.
The first year of the war entailed a land blockade of Boston by multitudes of militias that eventually coalesced into the beginnings of the Continental army under Lieutenant General George Washington. Bloodying the British at Breed’s Hill and other battles, the Continentals were strong enough to convince British troops to evacuate the city. This triumph gave the Continentals time to organize the army and for the Second Continental Congress to begin debating independence in the wake of British measures. Once the decision for independence was reached and the Declaration of Independence published in July 1776, Washington began to organize for the defense of New York, the most likely British target. The fighting around New York in the late summer and fall of 1776 was the low point of the Revolution for the Americans. Washington committed several amateurish mistakes that cost the army most of its men by December. With his forces reduced tenfold, to roughly 2,000 men, Washington lost control of New York and New Jersey, although victories at Trenton and Princeton rallied the army and the Continental cause. The year 1777 brought additional defeats, especially the loss of the capital city, Philadelphia, to the British. Yet the Americans did not give up. Congress evacuated to York, Pennsylvania, while Washington continued to train his army and learned to use the complementary strengths of the Continental army and various state militias. A key battle came later that year when the Americans blocked British general John Burgoyne’s attempt to conquer the Hudson River valley and sever New England from the rest of the country. Thanks to the “swarming” tactics of the militias and the skilled leadership at Saratoga of Brigadier General Benedict Arnold (later famously a traitor who defected to the British), Burgoyne’s army was forced to surrender. This victory gave U.S. ambassador to France Benjamin Franklin the opportunity that he had been waiting for.
Franklin had already succeeded in getting the French to covertly supply the Continentals with small amounts of arms, munitions, and money. Once France was convinced by the victory at Saratoga that the Americans could win, it decided to declare war on Great Britain and actively aid the Americans. While waiting for this promised aid to materialize, supporters of independence endured a difficult interlude. At Valley Forge in the winter of 1777–78, Continental soldiers were camped just miles from British forces who were comfortably housed in Philadelphia. The Continental army faced hunger, freezing temperatures, and outbreaks of deadly smallpox. Some 3,000 died and another thousand deserted. Nevertheless, Washington continued to train the Continental army for line-of-battle confrontations with the British, with the help of such European military officers as Friedrich von Steuben, a Prussian army veteran. Evidence that this training was making progress was the good showing of the Continental army in combat with British lieutenant general Henry Clinton’s regular forces at the Battle of Monmouth, New Jersey, in 1778, as the British evacuated Philadelphia and withdrew to New York. Yet when the army was led poorly, as it was in battles in the South at Savannah and Charleston by officers like Horatio Gates, the results could be disastrous.

MOBILIZING LOYALISTS

Faced with defeats or stalemates in the North and increased opposition to the war at home and in Parliament, the British cabinet decided to strike at the South in 1779 and 1780 in the hope of mobilizing Loyalists. Loyalists—opponents of American independence, many of whom eventually fled to Britain or Canada—were present in all 13 colonies, though it was not always clear in what numbers. Loyalists tended to be wealthier, Anglican, and, in the South, slaveholders, but, fearing Patriot militias, they were reluctant to show themselves unless British military supremacy was demonstrated in their local areas.
What followed was a brutal military struggle in the South from 1780 to 1782 that epitomized the multiple dimensions of this war. The American Revolution was not just a colonial rebellion against an imperial power. It was the first modern war of national liberation, in which a people mobilized themselves with revolutionary nationalism to establish a republican form of government. Yet estimates are that only about 40 percent of the American population was Continental or “Patriot,” with Loyalists comprising another 20 percent and neutrals, many of them of non-British origin, the remaining 40 percent of the population. The war, therefore, at times deteriorated in all areas of the country into guerrilla fighting between Continentals and Loyalists. Encouraged by British leaders, including the former Virginia royal governor Lord Dunmore, tens of thousands of slaves escaped from bondage to British lines, although many others chose to or were forced to serve in the Continental forces. At times, a wartime decline of law and order led to wide-scale banditry by armed groups who owed loyalty to no one except themselves.

AGGRAVATED BRUTALITIES

The war in the South especially aggravated these tensions and brutalities. When the Americans lost control of the southern coastline and cities, Major General Nathanael Greene took command in the South and proceeded to employ unconventional strategies and tactics to ruin Major General Charles Cornwallis’s army. Greene employed large guerrilla forces under leaders like Francis Marion, the Swamp Fox, as well as local militia and Continental army units, to lure Cornwallis into the southern countryside, fighting when it was advantageous and retreating when it was not.
[Illustration: American, British, and Hessian soldiers fight furiously at the Siege of Yorktown, the climactic battle of the Revolutionary War.]

With subordinate generals like Daniel Morgan at battles like Cowpens and Guilford Courthouse, Greene was able to damage Cornwallis’s army severely. Heading to Yorktown, Virginia, Cornwallis hoped to be evacuated by the British navy to New York. Instead, since the French navy had by now gained temporary control of Chesapeake Bay, he found himself trapped by a French and American force led by Washington and the French lieutenant general the comte de Rochambeau. The victory at Yorktown in October 1781 convinced the British government to begin peace negotiations with the United States. While negotiations went on for 18 months, fighting by both guerrilla and regular units continued, especially in the South. When the war ended in April 1783, the Americans rejoiced at their victory but also had much rebuilding to do. The fighting had taken place entirely on U.S. soil. Both national and state governments were heavily in debt from the war, inflation was rampant, and America’s agricultural economy was so heavily damaged by the British naval blockade that it would not regain 1774 production levels until 1799. Yet the Revolution changed American society and the world permanently. The European system of social deference made way for a new sense of individualism. African-American slaves drank deeply of revolutionary rhetoric and language, and the war began the slow process of abolishing slavery. So, too, did commoners, women and men alike, begin to advocate for revolutionary political rights that most Patriot leaders thought would be reserved for elites.
By creating the first large-scale republic in the world, the American experience would become the model for revolutions and wars of national liberation for the next 200 years, starting with the French and Haitian Revolutions in the late 1700s, continuing with the Latin American and central European revolutions of the 1800s, and extending to the Marxist-Leninist revolutions of the 20th century.

See also abolition of slavery in the Americas; Bolívar, Simón; Greek War of Independence; Toussaint Louverture.

Further reading: Kerber, Linda. Women of the Republic: Intellect & Ideology in Revolutionary America. New York: W. W. Norton, 1986; Shy, John. A People Numerous & Armed: Reflections on the Military Struggle for American Independence. Ann Arbor: University of Michigan Press, 1990; Wood, Gordon. The Radicalism of the American Revolution. New York: Alfred A. Knopf, 1992.

Hal Friedman

American temperance movement

When the first European settlers began arriving in North America in the 17th century, they brought their alcoholic beverages with them and soon found local ways to quench their thirst by using new raw materials like sugarcane. Fermented drinks like cider and beer and distilled ones like rum and whiskey were viewed by virtually all settlers as a gift from God. These beverages protected drinkers from the dangers of tainted water and were perceived as both healthful and energizing. Men, women, and children drank, in varying quantities and strengths, from early morning to bedtime, at work and at play. Drunkenness, however, was frowned on and was punishable in many colonies. Puritan cleric Increase Mather called liquor “a good creature of God . . . but the Drunkard is from the Devil.” As rum became a significant moneymaker for the New World, Americans began distilling and drinking beverages with much higher alcohol content than colonials’ traditional tipples. The introduction of homegrown corn and rye whiskeys also made it harder to keep drunkenness under control. In 1774 on the eve of the American Revolution, a Philadelphia Quaker called distilled liquor a “Mighty Destroyer” that was both unhealthy and immoral. In 1784 famed physician and patriot Benjamin Rush attacked the health and moral deficiencies of “ardent,” or distilled spirits. Drinking these, he wrote, would surely lead to disease and what in modern times is called addiction. Intemperance, Rush further argued, disrupted family and work life and was the enemy of those republican virtues on which the new nation had been founded and depended for its success. Rush’s idea of restricting or even banning what was becoming known as “demon rum” seemed impossible at first but eventually became part of a larger pursuit of moral perfection in 19th-century America. Although hard drinking increased between 1790 and the 1830s, new forces were at work. 
Temperance appealed especially to clergymen, mothers, health advocates, owners of factories, and builders of railroads whose new machines were getting faster and more complicated. It would also strike a chord with native-born Americans fearful of the rising tide of Irish Roman Catholic immigrants and their presumed heavy drinking habits, and, to a lesser extent, Germans bringing their beer-making skills to America. Presbyterian and Methodist religious leaders began agitating against strong drink in 1811. By 1826 a new organization, the American Temperance Society, called for abstinence from whiskey but found no fault with moderate use of nondistilled beverages. That same year, Congregationalist minister Lyman Beecher called for total abstinence from alcohol of any kind. Many agreed; rejecting alcohol entirely became known as teetotaling. For the most part, early temperance efforts were spearheaded by religious and political elites, but there were exceptions. In 1840 six men, possibly while actually drinking in a Baltimore bar, created the Washington Temperance Society, a group that would help drinkers give up their unhealthful and immoral habit. In religious revival-like mass meetings, thousands of men pledged to stop drinking, and a fair number fulfilled their promise. In 1851 Maine became the first state to enact a law prohibiting the manufacture and sale of liquor. By 1855 a dozen states and two Canadian provinces had also adopted Maine laws. Between 1830 and the American Civil War, annual per capita consumption of alcohol by persons aged 15 and over fell from 7.1 gallons to 2.53 gallons. The temperance movement suffered a setback when the impending breakup of the Union and the ensuing Civil War dominated public concern. With the war’s end, the drinking issue revived. 
Founded in 1869 by Civil War veterans, the Prohibition Party fielded its own presidential candidates in eight post–Civil War elections, never winning more than 2.2 percent of the vote, but helping to advance the cause. More successfully, the Anti-Saloon League, founded by a minister in 1893, worked with both major parties to achieve its dry agenda through local-option elections and other techniques, paving the way to 20th-century prohibition. Most important was the 1874 emergence of the Woman’s Christian Temperance Union. For the first time, large numbers of women, not yet able to vote, would play a leadership role in a major public controversy. Focusing on the evils of the neighborhood saloon, WCTU members began holding prayer meetings at places that purveyed alcohol. The exploits of WCTU member Carrie Nation, a Kansan who wielded a hatchet to destroy saloons and smash whiskey bottles, became famous but were not typical of the organization’s strategies or goals. Led by Frances E. Willard, a former women’s college president, the WCTU highlighted home protection against the disastrous effects that predominantly male drinking had on the women and children who depended on those men. The 150,000-member organization also campaigned successfully for antialcohol education in the nation’s public schools and sought drinking bans at federal facilities and on Indian reservations. President Rutherford B. Hayes complied; lemonade was served at White House events. Anti-drinking propaganda, including songs, plays, and heartrending novels such as the famous Ten Nights in a Bar Room, helped spread a message of sobriety that could be assured only by public action. By the time Frances Willard died in 1898, her WCTU, as well as the Prohibition Party and Anti-Saloon League, were closer to their goal than any could have known. 
Persuaded by political considerations and progressivist arguments, all brought into sharp focus by America’s entry into World War I, the nation implemented a far-reaching prohibition on alcohol sale and use in 1920. See also Wesley, John (1703–1791) and Charles (1717–1788); women’s suffrage, rights, and roles. Further reading: Lender, Mark Edward, and James Kirby Martin. Drinking in America: A History. New York: The Free Press, 1987; Murdock, Catherine Gilbert. Domesticating Drink: Women, Men, and Alcohol in America, 1870–1940. Baltimore, MD: Johns Hopkins University Press, 1998. Marsha E. Ackermann

Andean revolts

In what has been called the age of Andean insurrection, there erupted in the Andean highlands of Peru and Bolivia from 1742 to 1782 a spate of revolts, uprisings, and rebellions that rocked the Spanish Empire, threatening its rule across much of the Andes and prompting a host of reforms intended to quell the disturbances and reassert the Crown’s hegemony. Unlike the situation in the viceroyalty of New Spain, where revolts and uprisings were common but generally small-scale and localized, several of the Andean rebellions assumed the character of major regional conflicts, most notably the Great Rebellion led by the second Tupac Amaru from 1780 to 1782 (the first Tupac Amaru had been captured and executed two centuries earlier, in 1572). Taken together, these Andean rebellions reveal the deep fissures of race and class that marked 18th-century colonial Peruvian society; the enduring persistence of preconquest indigenous forms of religiosity, culture, social organization, and political and communal practices; and the intensification of the structural violence and systemic injustices of Spanish colonialism under Bourbon rule. The first major rebellion in 18th-century Peru was led by the Jesuit-educated mestizo Juan Santos Atahualpa, who claimed direct descent from the Inca emperor Atahualpa, captured and executed by the Spaniards in 1533. For more than 10 years, from 1742 to 1752, Juan Santos Atahualpa led a small army of Indians and mestizos in a protracted guerrilla war against the Spanish authorities. Based in the eastern montaña, between the Central Highlands to the west and the vast Amazonian jungles to the east, the army of Juan Santos Atahualpa was never defeated in open battle and the leader himself never captured; in 1752 he and his troops launched an audacious foray into the heart of Spanish-dominated territory before retreating back into the eastern jungles. 
The movement itself, like others of this period, was inspired by a messianic ideology that foretold the end of Spanish domination and the return of Inca rule. A major point of contention among scholars has been the extent to which this movement represented a genuinely highland Indian revolt or whether it is better understood as a frontier movement with only tenuous links to the core highland zones of Spanish domination and control. The preponderance of evidence indicates the movement’s frontier character while also underscoring substantial, if diffuse, highland Indian sympathy in the heartland of the Spanish domain. It is true that highland Indians did not rise up en masse in support of the movement. Yet substantial evidence also shows the movement’s ranks populated by significant numbers of highland Indians and that Spanish authorities perceived the movement as a grave threat to their rule. A series of other, more localized revolts and uprisings marked the decades between the 1750s and the early 1780s. By one count, the 1750s saw 13 such revolts; the 1760s, 16; and the 1770s, 31. The year 1780 saw 22, including the launching of the Great Rebellion by Tupac Amaru II in November 1780; 1781 saw 14. This upsurge in insurrectionary activity has been attributed to a host of interrelated causes, all having to do with the structural oppression and exploitation of Spanish colonial rule—more specifically, the practice of forced mita labor in the Andes; onerous and rising tax rates; the forced sale of goods under the institution of repartimiento; and the quickening pace of reform under the Bourbons, whose economic policies from the mid-1700s intensified the demands for Indian labor. The Great Rebellion, which rocked the entire southern highlands in 1780–82, represented the most serious threat to Spanish domination in the Americas during the colonial period. 
The subject of an expansive scholarly literature, the insurrection launched by Tupac Amaru II sought to expel the reviled Spaniards and in their stead install a divinely inspired neo-Inca state. The depths of the millenarian impulse propelling the movement and the breadth of the popular support the movement garnered constitute powerful evidence for the profundity of the cultural crisis among indigenous and mestizo Andean highland peoples in the late colonial period. The Great Rebellion began on November 4, 1780, with a raid on the Indian town of Tinta in southern Cuzco Province, where rebels captured and executed a local official infamous for his abuses of the repartimiento system. Moving south, the rebels quickly gained control of much of the southern highlands, from Lake Titicaca to Potosí and beyond, suggesting a high degree of advanced preparation and planning. In January 1781 the rebels laid siege to the ancient Inca capital of Cuzco. The siege faltered with the speedy arrival of Spanish reinforcements, and soon after Tupac Amaru II and numerous lieutenants were captured and, in May 1781, executed. The executions failed to staunch highland rebel activity, however, as remnants of Tupac Amaru’s army joined forces with a similar movement led by one Tupac Katari, laying siege to La Paz (Bolivia) from March to October 1781. Tupac Katari also was captured, and in January 1782 the Spaniards negotiated a peace agreement with surviving rebel leaders. Sporadic outbreaks continued through the early 1780s across the southern and central highlands. It is estimated that altogether some 100,000 people died in the Great Rebellion of 1780–82. In response to these crises, the colonial authorities exacted swift retribution while also attempting to address some of the root causes of the violence, reforming the judicial system and selectively easing tax burdens. 
Yet social memories in the Andes are long, and the deep social divisions exposed by these massive upsurges of violence endured. In subsequent decades, the Creole, mestizo, and Indian elites of Peru, Bolivia, and adjacent highland Andean regions emerged as among the most conservative in all of Latin America, the specter of violence from below representing an ever-present danger to their privileges and interests. The deep social and cultural divisions exposed in the age of Andean insurrections remain, for some observers, readily apparent to the present day. Further reading: Godoy, Scarlett O’Phelan. Rebellions and Revolts in Eighteenth Century Peru and Upper Peru. Cologne: Böhlau Verlag, 1985; Stern, Steve J., ed. Resistance, Rebellion, and Consciousness in the Andean Peasant World, 18th to 20th Centuries. Madison: University of Wisconsin Press, 1987. Michael J. Schroeder

Anglo-Chinese Opium Wars

The Anglo-Chinese Opium Wars were two conflicts in which the British and French (in the second war) fought against the Chinese in support of the sale of opium in China. The first of the wars, between Britain and China alone, lasted from 1839 to 1842; the second, from 1856 to 1860, is also known as the Arrow War, or sometimes the Anglo-French War in China. Because the causes of both were disputes over opium, the two wars are known colloquially as the Opium Wars. The sale of opium, produced in British India, to the Chinese had generated massive wealth for the British East India Company and many other British companies and individuals. It reversed the flood of British gold and silver to China to purchase Chinese products and replaced it with a trade balance in Britain’s favor. The massive increase in opium addiction in China beginning in the late 18th century had resulted in major social and economic problems. As a result, the Chinese government appointed an imperial commissioner, Lin Zexu, in Guangzhou (Canton), who seized all the opium held in warehouses operated by British merchants, producing a crisis. As tensions escalated, some drunken British sailors were involved in a fight with some Chinese, killing a Chinese villager. The British refused to hand the men over, exacerbating the crisis. When fighting broke out, the British enjoyed overwhelming superiority, taking Shanghai and then moving upriver, capturing Jingjiang (Chingkiang) and threatening Nanjing (Nanking). The Treaty of Nanking, dictated by Britain, was signed on August 29, 1842. It forced the Chinese to cede Hong Kong and to pay an indemnity in compensation for Britain’s military effort and the destroyed opium. The ports of Guangzhou, Shanghai, Fuzhou, Ningbo, and Xiamen were opened as well. Additionally, British citizens were no longer subject to trial by Chinese courts. These concessions led to other foreign powers demanding similar treatment; these treaties were known as the Unequal Treaties. 
In 1856, using the pretense of Chinese officials lowering the British flag on the ship Arrow, Britain went to war against China. The French joined the battle on the side of Britain, using the murder of a French missionary as a rationale. The two powers moved swiftly against the Chinese, forcing the Treaty of Tientsin on June 26–29, 1858, which opened more ports to Western trade and residence; acknowledged the right of foreigners, including missionaries, to travel to any part of China they wanted; and provided for the British and French to establish permanent legations in Beijing. However, since the treaty also legalized the opium trade, China refused to sign, and the war started anew. On October 18, 1860, the Chinese were forced to sign the Peking Convention, another of the Unequal Treaties. It imposed terms on the Chinese forcing them to accept the Treaty of Tientsin. It was at this time that Lord Elgin ordered the Summer Palace burned down in reprisal for the torture of the British delegation under Sir Harry Smith Parkes; Charles Gordon was among the officers who carried out the burning. The British and the French sent missions to Beijing, where they purchased palaces in the Manchu City to turn into their legations. Gordon later moved to Shanghai, where he raised a force to fight against the Taiping rebels in the war that followed. Further reading: Fay, Peter Ward. The Opium War, 1840–1842. Chapel Hill: University of North Carolina Press, 1975; Gelber, Harry G. Opium, Soldiers and Evangelicals: Britain’s 1840–42 War with China and Its Aftermath. New York: Palgrave Macmillan, 2004; Hurd, Douglas. The Arrow War: An Anglo-Chinese Confusion 1856–60. London: Collins, 1967; Inglis, Brian. The Opium War. London: Hodder & Stoughton, 1976. Justin Corfield

Anglo-French agreement on Siam (1897)

The Anglo-French agreement concerning Siam (later Thailand) was the result of British and French imperialism in Southeast Asia in the 19th century. The British and French were expanding their influence into Burma and Indochina respectively and used Siam as a buffer state between the two expanding empires. Siam was able to use this agreement to ensure some degree of autonomy, as European imperialism was increasing in Asia and Africa in the late 19th and early 20th centuries. The conclusion of the Anglo-French agreement marked an important event in European relations with Siam that had extended as far back as the 16th century. In the 16th century Portugal began attempting to extend trading relations into Southeast Asia. British, Dutch, and French merchants were also interested in the riches of Southeast Asia and sent in merchant fleets in the 17th century. The British East India Company was concerned with acquiring posts in Southeast Asia in order to expand trade with this region. In 1786 the East India Company negotiated an agreement with the Sultan of Kedah that allowed it to occupy Penang. In order to acquire control over Penang, the East India Company had to assure the sultan that it would defend him against hostility from Selangor. In 1826 Captain Henry Burney concluded another agreement with the Siamese government that opened up Southeast Asia to greater British influence, as this agreement prevented the Siamese government from disrupting British trade in the Trengganu and Kelantan regions. The Siamese court negotiated an agreement with the British in 1855, which allowed British subjects to enjoy extraterritorial rights in Siam, allowed a British consul to take up residence in the country, and fixed tariff rates. At the same time, France was also seeking to expand its influence in Southeast Asia. 
In 1862 the French government cited the maltreatment of French missionaries in Vietnam as an excuse to take control of the southern region of the country. This region was important to the French because it exported rice and could produce rubber. In 1867 France sent a naval squadron that forced Siam to relinquish its control over Cambodia, allowing the French to assert their influence over the region. In 1884 France went to war with China over Vietnam, although Vietnamese guerrillas continued to create instability in the region. Britain became concerned that a conflict between the Siamese and French governments would give the French an excuse to occupy the region. During the 1890s the British government also became concerned about Germany and France acquiring influence over the Malay Peninsula. Joseph Chamberlain, British colonial secretary, stated in a letter in 1895 that it would be in the best interests of the British Empire to acquire a sphere of influence in the region between the Malay States and Tenasserim in return for recognition of a French sphere of influence in northern Siam. The result was the Anglo-French agreement, an attempt by the British and French governments to transform Siam into a buffer zone between their two empires to lessen tensions in Southeast Asia. Lord Salisbury (Robert Cecil), the British prime minister, dispatched a message to the British ambassador to France assuring him that the agreement would not result in the end of an independent Siam. The government of Siam responded by appointing Westerners to government positions and reforming the Ministry of Finance. The Siamese government also adopted Western technology in an effort to improve its international position. Following the signing of the Anglo-French agreement, the British and Siamese governments negotiated an accord in 1897. It required the Siamese government to gain the permission of the British government before it could grant concessions to a third country. 
This new agreement strengthened the British position on the Malay Peninsula. The Anglo-French agreement, however, failed to end tensions in Southeast Asia caused by imperial rivalry between Britain and France. Further reading: Blanchard, Wendell. Thailand, Its People, Its Society, Its Culture. New Haven, CT: HRAF Press, 1966; Jeshurun, Chandran. “The British Foreign Office and the Siamese Malay States, 1890–97.” Modern Asian Studies (1971); Nicolson, Harold. “The Origins and Development of the Anglo-French Entente.” International Affairs (October 1954); Pendleton, Robert. Thailand: Aspects of Landscape and Life. New York: Duell, Sloan and Pearce, 1962. Brian de Ruiter

Anglo-Russian rivalry

The Great Game was the name popularized by British poet Rudyard Kipling for the struggle between czarist Russia and the British Empire for influence in Central Asia. The contest could actually be said to have begun as early as the 18th century. That was when Catherine the Great of Russia conquered the last remnants of the Mongol Golden Horde that had first entered Russia in the time of Genghis Khan in the 13th century. In 1784 the last khan of the Crimea surrendered the Khanate of the Crimea to Catherine in exchange for a pension. During the same period, the British East India Company was conquering the entire Indian subcontinent. In 1799, at Seringapatam, Tipu Sultan was defeated and killed by troops of the East India Company. Between 1814 and 1816 Nepal was subdued, and the famed Nepalese Gurkha warriors first entered British service. In 1818 the governor-general, the marquess of Hastings, finally crushed the Maratha Confederacy, firmly establishing British supremacy. The first documented mission of the British to learn Russian intentions dated from 1810, when Alexander I, czar of Russia, was temporarily allied to Napoleon I of France by the Treaty of Tilsit. Britain had been at war with France since 1793, and the idea of huge Russian armies marching south to conquer India caused the British great alarm. Although Napoleon and Alexander I went to war in June 1812, making Britain and Russia allies again as they had been before the Treaty of Tilsit, it did not mean the end of the Great Game. In fact, it was only the beginning. CONSTANT COMBAT The collapse of the Golden Horde had left in its wake many independent khanates, such as those of Bokhara and Khiva. While strong enough to wage bloody wars among themselves, they were no match for the armies of Britain or Russia, which had been in almost constant combat for over two decades. With the defeat of Napoleon in 1815, the wartime alliance against him between Russia and Britain was soon forgotten. 
Instead, both great powers began to focus their imperial goals on Central Asia. The Russians desired to conquer the khanates, and the British desired to keep them as buffer states between the Russian Empire and the British Empire in India. Beginning in 1839 Russia began a systematic conquest of Central Asia that followed the methodical planning of Czar Nicholas I. Concern over the Russian threat to India precipitated the First Afghan War in 1839. By this time, Persia had become an ally of Russia and was using Russian troops in an attack on the city of Herat in Afghanistan, a country Persia had had its own imperial designs on since at least the 18th century. George Eden, Lord Auckland, the governor-general of India since 1835, suspected that Dost Mohammed of Afghanistan’s Barakzai dynasty sided with the Russians. Auckland invaded Afghanistan in 1839. In August, the British army entered Afghanistan’s capital, Kabul, with the former ruler, Shah Shuja, whom Auckland felt to be more pro-British. Although the invasion went successfully, the occupation of Kabul ended in disaster. Auckland’s emissary, Sir William Macnaghten, was killed, and only one man arrived safely back in British territory in January 1842. A second British invasion, mounted as an expression of Britain’s power, reached Kabul and evacuated successfully in December 1842. Although the Afghans were suitably awed by the British ability to recoup their losses so quickly, this war was an unnecessary loss of lives and treasure, since the Russians had abandoned their attempts to bring Afghanistan into their orbit before Auckland began the war. Meanwhile, the British were consolidating their control of India. In 1843 the British under Sir Charles Napier conquered Sind. During the Sikh Wars the British defeated the once independent realm of the Sikhs in the Punjab, firmly adding it to their growing Indian empire. 
Although the Sikh Wars were the most difficult the British ever fought in their conquest of India, the Sikhs ultimately became among the most redoubtable soldiers in England’s Indian army. It could be argued persuasively that this sudden imperial push on the part of the British was to deny control of the Punjab to the Russians. The British entry into the Crimean War was in part due to British alarm over the seemingly unstoppable Russian march into Central Asia. Instead of being able to focus their energy on the khanates of Central Asia, the Russians had to face a British invasion of the Russian Crimea in 1854. The heavy Russian losses suffered in such battles as Inkerman, Balaklava, and the Alma River helped delay further Russian penetration of Central Asia by a decade. IMPERIOUS NECESSITY Then, in December 1864, Czar Alexander II’s foreign minister, Prince A. M. Gorchakov, wrote what would become the definitive expression of Russian imperialism in Central Asia. It contained an ominous note for the British. Like all other expanding powers, Russia faced one great obstacle—“all have been irresistibly forced, less by ambition than by imperious necessity, into this onward [movement] where the greatest difficulty is to know where to stop.” Soon the British understood what Gorchakov’s memorandum meant. Czar Alexander II began a massive campaign of conquest in Central Asia. As with the Crimean War, tensions between England and Russia contributed to a war scare in the Russo-Turkish War of 1877–78. Throughout the 19th century, Russian foreign policy vacillated between seeking empire in Central Asia and desiring to expand into the Balkans. Thus in 1877 the Russians invaded the Ottoman territory in the Balkans, which would ultimately lead to the establishment of an independent, pro-Russian Slavic Bulgaria. 
However, when it seemed that the armies of Alexander II would continue on until they conquered the Turkish capital of Constantinople, British prime minister Benjamin Disraeli threatened to intervene on the side of Turkey. When events seemed to be leading to a general European war, the German chancellor Otto von Bismarck called all the parties to the Congress of Berlin in 1878, which ultimately provided a peaceful solution to the crisis. The Russo-Turkish War had immediate repercussions in Central Asia. A Russian mission arrived in Kabul under General Stolietov, supported by the czar and the czar’s governor-general for the Central Asian provinces, General K. von Kaufman. The same scenario repeated itself as in 1839. With the Congress of Berlin ending a major crisis, the czar had no purpose in creating another crisis in Central Asia, so Stolietov was withdrawn from the Afghan capital. Nevertheless, the British ruler of India, Robert Bulwer-Lytton, Lord Lytton, the viceroy, prepared for a military invasion of Afghanistan. Lytton was a member of what was known as the Forward Policy school, which, believing war with Russia was certain, was determined to fight it as far from India as possible. When the ruler of Afghanistan, Amir Sher Ali, refused to permit a British delegation to enter Afghanistan, Lytton’s army crossed the Afghan frontier on November 21, 1878. After Major-General Frederick Roberts defeated Sher Ali’s effort to stop the British, the Afghans pursued a policy of guerrilla warfare. Sher Ali left the office of amir to his son Yakub Khan, who in May 1879 accepted a British resident, Sir Louis Cavagnari. In a gesture of peace, Cavagnari entered Kabul in July 1879 with only an escort from the Corps of Guides, the elite of the British frontier troops. In September, Afghan troops attacked the residency and killed Cavagnari, most likely acting on orders from Yakub Khan. Retribution soon followed. 
In October 1879 General Roberts consolidated the British position in Kabul and defeated Yakub Khan’s men. A second campaign led to his final victory over Ayub Khan at Kandahar on September 1, 1880. The British could now install Amir Abdur Rahman on the throne, a leader they felt would pursue at least a neutral foreign policy and prevent the Russians from using Afghanistan as a base from which to attack India. Indeed, the British demonstration of force in Afghanistan may have come none too soon, for unlike in the aftermath of the First Afghan War, this time Russia’s expansion into Central Asia rolled on like a juggernaut. Even the great Russian novelist Fyodor Dostoyevsky wrote in 1881, “in Europe we were hangers-on, whereas to Asia we shall go as masters. . . . Our civilizing mission in Asia will bribe our spirit and drive us thither.” In 1885 under the new czar Alexander III, the clash Britain had long awaited took place. A Russian army that had just conquered Merv in Turkestan continued on to occupy the Penjdeh Oasis near Herat—the Afghan buffer for British India had been breached. In Britain, the response was swift. Some £11,000,000 was voted by Parliament for war with Russia, a huge sum in those days. Given such firm British opposition, the Russian force withdrew from Penjdeh. Taking advantage of the Russian withdrawal, Sir Mortimer Durand drew the Durand Line in 1893, which established the eastern frontier of Afghanistan. Two years later, the British had the Wakhan region added to Afghanistan so that Russian territory would not border India. The Great Game in Central Asia would continue with both nations attempting to influence Tibet and China, whose province of Xinjiang (Sinkiang) was China’s closest to Central Asia. However, as the 19th century waned, the British and Russians were both faced by a greater threat in the growing power of the German Empire of Kaiser Wilhelm II. 
Already, the kaiser had made clear his interest in seeking German influence in the lands of the Ottoman Empire, even entering Jerusalem on horseback in 1898. In 1907, in the spirit of cooperation brought about in the face of a mutual danger, Britain and Russia peacefully settled their rivalry in Persia by effectively dividing it into Russian and British spheres of influence. The Great Game had officially come to an end. See also Russo-Turkish War and Near Eastern Crisis. Further reading: Barthorp, Michael. Afghan Wars and the North-West Frontier, 1839–1947. London: Cassell, 2002; McCauley, Martin. Afghanistan and Central Asia: A Modern History. London: Pearson, 2002; O’Ballance, Edgar. Afghan Wars: Battles in a Hostile Land, 1839 to the Present. London: Brassey’s, 2002; Tanner, Stephen. Afghanistan: A Military History from Alexander the Great to the Fall of the Taliban. New York: Da Capo, 2002; Wolpert, Stanley. A New History of India. London: Oxford University Press, 2004. John F. Murphy, Jr.

Arabian Peninsula and British imperialism

During the 19th century, the British extended their economic and political presence throughout the coastal areas of the Arabian Peninsula. With the largest and most powerful navy in the world, the British needed ports to serve as refueling stations and to replenish supplies of fresh foods and water for their sailors. After the Suez Canal provided an easier and faster transportation route between Europe and Asia, the coastal areas of the Arabian Peninsula increased in importance. In 1839 Britain occupied Aden on the southern coast of Yemen, then on the further fringes of the Ottoman Empire, making it a British Crown Colony. After the Suez Canal became a major trade route, Aden became a bustling port city and trading center. Britain and the Ottomans clashed repeatedly over control of northern and southern Yemen. In the late 19th century, the British signed formal treaties with a number of tribes in the regions around the port of Aden; these became known as the Aden Protectorates. The largest of these sultanates, sheikhdoms, emirates, and tribal confederations were the two sultanates of the Hadhramaut. In the early 20th century the British and Ottomans agreed to specific borders demarcating their respective territorial claims. Britain also sought to protect its vast holdings in India and to prevent rival European imperial powers from expanding into Asia by extending its control over neighboring areas both east and west of the Indian subcontinent. Consequently, British foreign service officials in Delhi sought to extend British control along the Persian Gulf. The British secured a number of treaties with the ruling families along the Persian Gulf, which Arabs frequently referred to as the Arabian Gulf. The patron-client relationship between Arab rulers in the Gulf and the British lessened Ottoman control and freed local rulers from Ottoman taxation while increasing their own political power. 
The local economies were dependent on income from pearls and sponges obtained by divers who were paid by a few trading families, who often had ethnic and commercial ties with Persia. Because the area was largely poverty stricken, local sheikhs were also interested in possible economic gains from ties with the British. The first British treaty agreement in the region was with the sheikh of Muscat (part of present-day Oman) in 1798. Successive agreements were signed between the British and the ruling Al Khalifah clan in Bahrain in 1820 and with the Sabah family in Kuwait in 1899. Under the latter, Britain had the right to conduct all foreign relations for Kuwait, and no foreign treaties could be signed nor could foreign agents operate in Kuwait without the approval of Britain. This enabled Britain to ensure that the proposed Berlin to Baghdad railway would not be extended to the Persian Gulf, and it also made Kuwait an unofficial British protectorate. Similar agreements were reached with the Thani clan in Qatar and with a number of local rulers on the Trucial Coast (present-day United Arab Emirates). As a result, acting through its surrogates, Britain was able to control the coastal areas along almost all of the Arabian Peninsula. See also Eastern Question. Further reading: Bidwell, Robin. The Two Yemens. Boulder, CO: Westview Press, 1983; Boxberger, Linda. On the Edge of Empire: Hadhramawt, Emigration, and the Indian Ocean, 1880s–1930s. Albany: State University of New York Press, 2002; Cottrell, Alvin J., et al., eds. The Persian Gulf States: A General Survey. Baltimore: Johns Hopkins University Press, 1980; al-Naqeeb, Khaldoun Hasan. Society and State in the Gulf and Arab Peninsula. London: Routledge, 1990. Janice J. Terry

Arab reformers and nationalistsEdit

During the 19th century a number of Arab intellectuals led the way for reforms and cultural changes in the Arab world. Rifa’a al-Tahtawi from Egypt was one of the first and foremost reformers. A graduate of the esteemed Muslim university al-Azhar, Tahtawi was sent to France to study as part of Muhammad Ali’s modernizing program. He returned to Egypt, where he served as director of the Royal School of Administration and School of Languages, was editor of the Official Gazette, and was director of the Department of Translations. Tahtawi published dozens of his own works as well as translations of French works into Arabic. In A Paris Profile, Tahtawi described his interactions as a Muslim Egyptian with French culture and society. His account was an open-minded and balanced one, offering praise as well as criticism for many aspects of Western civilization. For example, Tahtawi respected French originality in the arts but was offended by public displays of drunkenness. Tahtawi urged the study of the modern world and stressed the need of education for both boys and girls; he believed citizens needed to take an active role in building a civilized society. Khayr al-Din, an Ottoman official from Tunisia, echoed Tahtawi’s emphasis on education while also addressing the problems of authoritarian rule. He advocated limiting the power of the sultan through law and consultation and wrote the first constitution in the Ottoman Empire. The Egyptian writer Muhammad Abduh dealt with the ongoing question of how to become part of the modern world while remaining a Muslim. He was heavily influenced by the pan-Islamic thought of Jamal al-Din al-Afghani. Abduh taught in Lebanon, traveled to Paris, and held several government positions in Egypt. He became mufti of Egypt in 1899, was responsible for religious law, and issued fatwas (legal opinions on disputed points of religious law).
Abduh became one of the most highly respected and revered figures in Egypt, although some conservatives opposed his reforms and open-mindedness, while some more radical nationalists berated him for not being liberal enough. In his publications, including Face to Face with Science and Civilizations and Memoirs, he urged the spiritual revival of the Muslim and Arab world, arguing that Islam was not incompatible with modern science and technology. He also stressed the importance not only of law but of reason in Islamic society. Originally from Syria, Muhammad Rashid Rida was a follower of Abduh. He moved to Egypt and founded the highly respected journal al-Manar. His writings had a wide influence on Islamic thought, and he became one of the foremost spokespersons for what has become known as political Islam. Rida also discussed socialism and Bolshevism and the role religion should play in contemporary political life. The Egyptian Abdullah al-Nadim edited several satirical journals and was a staunch supporter of the Urabi revolt of 1881–82. He also knew Jamal al-Din al-Afghani. Al-Nadim was exiled to Istanbul after his fiery nationalist stance earned him the enmity of the British. Al-Nadim spoke openly about the growth of the nation (watan) and was one of the first modern Egyptian nationalists. In 1899 the magazine Anis al-Jalis was started in Egypt; it carried articles dealing with the role of women in society. A new educated elite emerged as graduates of the many government and other schools that had been established as part of the reforming era of the Tanzimat entered public life. In the Sudan, the British founded Gordon College to educate male youth for government service. Other schools founded by missionaries included the Syrian Protestant College (later the American University of Beirut, AUB), the Jesuit University of St. Joseph in Beirut, and various Russian Orthodox schools scattered throughout Greater Syria.
The Alliance Israélite sponsored schools for Jewish students throughout the Ottoman Empire. Separate mission schools were also established for girls. A spirit of outward-looking, pro-Western thought prevailed, and many of the elites had extensive experience with the Western world. Many were bilingual in French or English. Nineteenth-century Arab intellectuals, many of whom were Christians, fostered a literary renaissance with a revival of interest in the Arabic language. Some sought to modernize Arabic prose and poetic styles. Butrus Bustani was one of the era’s foremost experts in the Arabic language. He also wrote a multivolume encyclopedia with thoughtful entries on science and literature as well as history. Numerous newspapers were published, especially in Cairo and Beirut. Al-Muqtataf, produced in Cairo by Yacoub Sarruf and Faris Nimr, was one of the most famous. In 1875 the Taqla family founded al-Ahram, which became the premier newspaper in the Arab world. Many of these new journals were published in Egypt, where the British afforded greater freedom of the press than existed in the Ottoman-controlled provinces. Nationalism spread around the world in the 19th century, and the Arab provinces were no exception. A generation of Arab nationalists began to talk and write about the relationship of the Arabs within the Ottoman Empire and the role religion should and did play in modern nationalism. These early nationalists did not deny the importance of religion but used nationalism as their point of reference. The first group that dealt with the controversial issue of separation from the Ottoman Empire on the basis of national identity was formed at the Syrian Protestant College in Beirut in 1847. Its members, who met secretly to avoid persecution by the Ottoman intelligence services, included Faris Nimr.
They met under the guise of being a literary society; while the members did discuss literature, they also delved into the important political questions facing the declining Ottoman Empire as well as the emergence of nascent Arab nationalism. Various groups continued to meet at the college from 1847 to 1868, when a Beirut society began. Its members discussed the key political issues of Arab identity. The so-called Darwin affair of 1882 caused a number of the leading figures of the movement to leave the college. In a public address, Dr. Edwin Lewis, a professor at the college, discussed Darwin’s theory of evolution; his positive conclusions about Darwin’s controversial theory roused the enmity of conservative American Christians on campus. They attacked Lewis in print and forced his resignation. Several of the liberal Arab junior faculty, including Nimr and Sarruf, resigned in outrage and moved to Cairo, where they became leading figures among Christian Arab secularists. Abd al-Rahman al-Kawakibi was born in Syria, but after his writings about Arab identity roused the enmity of the authorities, he left Syria for Egypt and became a frequent contributor to al-Manar, the journal edited by Rashid Rida. In his writings, Kawakibi discussed the key role of the Arabs in Islam; he also described the decadence and weaknesses of the Ottoman Empire. He stressed the importance of Arab unity. Another Arab nationalist, Jurji Zaidan, wrote for the journal al-Hilal. Whereas pan-Islamists, such as al-Afghani, believed in the supremacy and integrity of the Islamic legacy, pan-Arabists like Zaidan emphasized its uniquely Arab character and the importance of history, language, and culture over religion. The ideas of these early Arab nationalists would come to fruition with World War I and the collapse of the Ottoman Empire in the early 20th century. Further reading: Abdel-Malek, Anouar, ed. Contemporary Arab Political Thought.
London: Zed Books, 1970; Hourani, Albert. Arabic Thought in the Liberal Age, 1798–1939. London: Oxford University Press, 1962; ———. A History of the Arab Peoples. Cambridge, MA: The Belknap Press of Harvard University Press, 1991; Philipp, Thomas, ed. The Autobiography of Jurji Zaidan. Boulder, CO: Three Continents Press, 1990. Janice J. Terry

art and architecture (1750–1900)Edit

The style of architecture in Britain changed considerably between 1750 and 1900. The Georgian mews and squares that were popular in the 1750s gave way to large suburbs, the ease of railway travel allowing for significant city sprawl. The Georgian style in Britain was very much influenced by the style of Andrea Palladio in 16th-century Italy. The architect Inigo Jones also built in the Palladian style, with some design features coming from classical Rome. Perhaps the best example in England of this neoclassical style is the city of Bath, with its crescents, terraces, and squares. Dublin is another example. Sir Robert Taylor (1714–88) and James Paine (1717–89) also worked in the Palladian tradition. In 1760 there emerged two great architects: Sir William Chambers (1723–96), who designed Somerset House, and Robert Adam (1728–92), who was the architect responsible for Syon House near London, Kenwood in Hampstead, Newby Hall and Harewood House in Yorkshire, and Kedleston Hall in Derbyshire. Chambers, although remaining Palladian at heart, was influenced by the discovery of Baalbek in Lebanon. Adam, by contrast, discarded classical proportions. His work was elaborated on by John Nash (1752–1835), who designed Regent Street, London, and by Sir John Soane, who worked on the Dulwich College Art Gallery. By the end of the 18th century, the influence of India and China led to the construction of buildings that either heavily incorporated Asian themes or were entirely Asian in style. Nash’s Royal Pavilion at Brighton, England, constructed in 1815–22, represents British interest in Mughal Indian architecture. Chinese-style pavilions and towers became common in places such as Kew Gardens and the English Gardens in Munich. Later, the emergence of Victorian architecture saw the classical style being retained for the British Museum (1823) and Birmingham Town Hall (1846). 
However, the design by Sir Charles Barry (1795–1860) for the new Houses of Parliament signaled the Gothic revival, with architects such as Augustus Welby Pugin (1812–52) and others being involved in the work. The Crystal Palace of 1851 was designed by Sir Joseph Paxton (1801–65). Norman Shaw (1831–1912) developed functional architecture for houses, the Bedford Park estate at Turnham Green, London, built in the 1880s, being a good example. Other architects included Charles Voysey (1857–1941), W. R. Lethaby (1857–1931), and Sir Edwin Lutyens (1869–1944). The Industrial Revolution also led to the construction of iconic structures such as Iron Bridge in Shropshire. Sculptors like John Flaxman (1755–1826), using a linear style, were responsible for many statues around London, with commissions for public monuments of national heroes such as Lord Nelson and, later, Queen Victoria. In terms of British art, painters like William Hogarth (1697–1764), Sir Joshua Reynolds (1723–92), John Constable (1776–1837), and Thomas Gainsborough (1727–99) were important from the Georgian era; famous Victorian painters included the Pre-Raphaelites D. G. Rossetti, Holman Hunt, and J. E. Millais. In France during the same period, neoclassical architecture appeared from 1740, remaining popular in Paris until the 19th century. This was, in part, a reaction against the rococo style of prerevolutionary France, with a search for order and the expression of republican values in Greco-Roman forms and more traditional ornamentation. Jacques-Germain Soufflot (1713–80) was the architect of the Panthéon in Paris. Later architects drew parallels between the emerging power of Napoleonic France and that of the classical world; this can be seen in the Arc de Triomphe, La Madeleine, and the National Assembly building. In Paris, the Opera was built by Charles Garnier (1825–98) beginning in 1862.
Georges-Eugène, Baron Haussmann (1809–91) laid the plans for a new Paris, the features of which were open spaces, parks, and wide boulevards. The Eiffel Tower was built in 1889. Even before the French Revolution, paintings by Jacques-Louis David (1748–1825) had a clear republican theme. David was made Napoleon’s official painter, his Coronation of Napoleon being perhaps his most famous work. Jean-Auguste-Dominique Ingres (1780–1867) continued the neoclassical tradition, and the Raft of the Medusa by Théodore Géricault (1791–1824) signaled the arrival of romanticism. Eugène Delacroix drew much on his travels around the Mediterranean, his great work being Liberty Leading the People, commemorating the July Revolution of 1830. It was not long before the emergence of the Barbizon School, with Camille Corot (1796–1875) and Jean-François Millet (1814–75) taking peasant life as their inspiration and providing a basis for such later painters as Vincent Van Gogh (1853–90). Impressionism saw the emergence of painters such as Édouard Manet (1832–83), Claude Monet (1840–1926), Alfred Sisley (1839–99), Camille Pissarro (1830–1903), Berthe Morisot (1841–95), and Pierre-Auguste Renoir (1841–1919). Other important painters of this style included Edgar Degas (1834–1917) and Paul Cézanne (1839–1906), providing an influence for Paul Gauguin (1848–1903), the foremost of the postimpressionists. Vincent Van Gogh from the Netherlands created haunting self-portraits and landscapes of bright color, making his work instantly recognizable. Mention should also be made of Henri Rousseau (1844–1910), who used a naïve style, and Gustave Moreau of the symbolist school. In Italy and Spain, baroque architecture gave way to neoclassicism, with tastes becoming more sober and restrained. In Italy this was exemplified by Giambattista Tiepolo (1696–1770) and his son Giovanni Domenico Tiepolo (1727–1804) and their work on churches and palaces in Venice.
In Spain the reaction against classicism was marked, especially in Catalonia, where Antoni Gaudí (1852–1926) developed a free-form, geometrically based style using a variety of materials and mosaics, with work on his Sagrada Família church in Barcelona starting in 1882. Francisco José de Goya (1746–1828) was the greatest of the Spanish painters in the last part of the 18th and first part of the 19th centuries. He was profoundly affected by the Peninsular War, and his painting El Tres de Mayo, showing the execution by French soldiers of rebels in Madrid, is among his most well known. Other Spanish painters of the 19th century include Ignacio Pinazo (1849–1916), Francisco Domingo (1842–1920), Emilio Sala (1850–1910), Ignacio Zuloaga (1870–1945), and Joaquín Sorolla (1863–1923). In Central Europe, increased wealth led to the construction of many major government buildings. In Austria, rococo design gave way to historicism, with the development of the Ringstrasse in Vienna. This changed with the advent of the Secession movement in 1897. King Ludwig II of Bavaria financed the construction of large numbers of “dream” castles throughout his kingdom. In Russia, the growth of St. Petersburg led to the construction of massive public and private buildings. The Winter Palace, commissioned from Francesco Bartolomeo Rastrelli (1700–71) in 1754 by Empress Elizabeth, is certainly the most well known, with others, including the Yelagin Palace built for Alexander I by the architect Carlo Rossi (1775–1849), also important. The Church of the Resurrection of Christ was built in the late 1880s on the site where Czar Alexander II was killed in 1881. The building of the Trans-Siberian Railroad led to the construction of large numbers of railway stations along the length of the railroad. It was a period when Russians were collecting art from around the world.
In China, with the capital Beijing divided between the Chinese City and the Tartar City, the major change came from the 1860s with the building of foreign legations in former princely palaces in the Tartar City. This followed the Second Opium War, which saw the sacking of the “Old” Summer Palace; work began on the massive enlargement of the “New” Summer Palace in 1888. Building work continued on parts of the Forbidden City, and the Manchu Qing (Ch’ing) emperors also spent much energy in the late 18th century on enlarging the palaces at their summer residence at Chengde (Jehol). The late 19th century saw a massive influx of foreign influence into Shanghai, Tianjin (Tientsin), Weihai (Weihaiwei), Qingdao (Tsingtao), Macau, Hong Kong, Hankou (Hankow), and Guangzhou (Canton). As well as warehouses, bank chambers, office buildings, railway stations, and accommodations, there were also Christian churches for both Chinese and foreign parishioners. Churches were also built around India—especially in Calcutta—with many buildings being erected throughout the Indian subcontinent for the military and traders. Herman Willem Daendels (1762–1818), governor of the Netherlands East Indies, helped redesign the city of Batavia (Jakarta). In Japan, many modern buildings were erected, including the famous Imperial Hotel in Tokyo. Holiday retreats such as Simla in India, Maymyo in Burma, and the Cameron Highlands in Malaya were also built toward the end of the 19th century. Many of these places, as well as earlier temples and landmarks, were the subject of drawings by Thomas and William Daniell. In North America, vast change was reflected in the architecture. From the 1750s, there were small buildings such as Mount Vernon, the residence of George Washington. Thomas Jefferson’s home, Monticello, dates from 1768.
After independence, a large number of government buildings were erected throughout the country, with Pierre-Charles L’Enfant (1754–1825) drawing up the original plans for Washington. The White House was built beginning in 1792 in the Palladian style. The Irish-American architect James Hoban (c. 1762–1831) worked on it after winning the design competition, with skilled stonemasons coming from Edinburgh, Scotland, in 1793. At the same time, there was work on the Capitol, with the chamber of the House of Representatives completed in 1807. Both the White House and the Capitol were sacked by British soldiers in 1814, and it was not until 1857 that the South Wing was added to the Capitol. There were also large numbers of other civic buildings constructed throughout the country. Southern plantation architecture was popular. In addition, around the United States, many towns and cities were being established. Unlike their counterparts in Europe, large numbers of the houses were built from wood, with log cabins constructed by pioneers. There was also the construction of the first skyscrapers, with the Cast Iron Building, designed by James Bogardus (1800–74) in 1848, and the Haughwout Department Store in New York City in 1857. The first steel-girder construction was the Home Insurance Company Building in Chicago, with work by William Le Baron Jenney (1832–1907) and later his protégé, Louis Sullivan. Prominent artists living in the United States painted pioneer scenes and portraits of political and society figures. There were a few new concepts, including the panoramic painting that illustrated some historical event. Painted to show a battle or event unfolding, these pictures could be viewed for a small fee. There was also great interest in landscape painters.
In South America, Buenos Aires, Montevideo, Lima, Santiago, Rio de Janeiro, and other cities received large numbers of migrants, and major public buildings, banking and insurance chambers, office buildings, hotels, and other buildings were erected. In Australia, the 1880s were the period of “Marvelous Melbourne.” As well as the Melbourne Public Library, Melbourne Town Hall, the university, and other major civic projects, many Italianate mansions were built throughout the city. In rural Australia there were many station properties, and in the country towns large numbers of wooden houses. In North Africa, Cairo saw the construction of large numbers of mock-Parisian buildings, with wealth flowing into Egypt through tourism and the opening of the Suez Canal. The British and French built numbers of colonial buildings throughout their empires in Africa, with the Portuguese, Germans, and Belgians also constructing buildings, but on a much smaller scale. In South Africa, Cape architecture became popular not just in Cape Town and nearby areas but also elsewhere in Africa. See also baroque culture in Latin America. Further reading: Colligan, Mimi. Canvas Documentaries: Panoramic Entertainments in Nineteenth-Century Australia and New Zealand. Melbourne: Melbourne University Press, 2002; Fletcher, Bannister. A History of Architecture on the Comparative Method. London: The Athlone Press, 1961; Jacquet, Pierre. History of Architecture. Lausanne: Leisure Arts, 1966; Richards, J. M. Who’s Who in Architecture from 1400 to the Present. New York: Holt, Rinehart and Winston, 1977; Schickel, Richard. The World of Goya 1746–1828. New York: Time-Life International, 1971; Sunderland, John. Painting in Britain 1525 to 1975. London: Phaidon Press, 1976. Justin Corfield

Asian migration to Latin AmericaEdit

There has been a long history of Asian migration to Latin America, with Chinese, Japanese, and Korean populations now in most countries in Central and South America. In addition there are also significant Indian communities in some countries, especially Guyana, and small numbers of Vietnamese. The first links between the two areas may have come during the Ming dynasty in China, when some of the fleet of the Chinese admiral Cheng Ho may have reached the Americas. On many of his voyages members of the crew did not return with the fleet, and if any of his ships did reach the Americas, it seems likely that their crews would represent the first Asians in recent times to settle there. It is also worth mentioning that in 1492, when Christopher Columbus sailed the Atlantic, he expected to reach Asia, and in 1519 Ferdinand Magellan started the voyage that was, after Magellan’s death, to circumnavigate the world, sailing through what became the Straits of Magellan and across the Pacific Ocean, proving that it was possible to make the voyage. However, there was little migration from Asia to the Americas until the early 19th century. Few Chinese ventured overseas during this period, except for those already in Southeast Asia—the Nanyang, as they called it. In 1637 the Japanese government banned travel overseas and requested that its citizens return home; Korea was so isolated that travel was extremely difficult until relatively recent times. It is probable, however, that some Filipinos did settle in Latin America, especially in Peru, the center of Spanish power, as there were close shipping ties between Lima and Manila. In the early 19th century the increased frequency of overseas travel by ship and overpopulation in China saw many Chinese begin to migrate, initially to the favored destinations in Southeast Asia and around the Indian Ocean, and then to the Americas.
The California gold rush certainly saw many Chinese move to California, and others moved in search of employment to Mexico and then to the Caribbean and South America. As a result, Chinese merchants started establishing businesses in cities and large towns along the Pacific coast. Some were farmers growing vegetables; others ran shops, laundries, or restaurants. A few Chinese families settled on the eastern coast of Latin America. A sizeable community was established in British Guiana (now Guyana), many working on plantations. The Chinese in British Guiana form the subject of the novelist Robert Standish’s Mr. On Loong. In addition, mention should be made of the family of Philip Hoalim from Guyana—Hoalim later became involved in politics in Singapore, forming the Malayan Democratic Union, the first political party ever established in Singapore. As well as the Chinese in British Guiana, there was also a much larger Indian community. Known as the East Indians, to differentiate them from the West Indians, many spoke Hindi or Urdu, and there are numbers of Hindu temples and Muslim mosques in the capital of Georgetown. In neighboring Suriname, a former Dutch colony, there are also many East Indians and Chinese. There is even a statue of Mohandas Gandhi in Paramaribo, Suriname’s capital. With its Dutch connections, there are also Indonesians (mainly from Java), many descending from indentured servants who came before the 1940s. Smaller Indian communities in Brazil, Paraguay, and northern Argentina have been instrumental in the introduction and breeding of zebu and Brahman cattle.

CHINESE COMMUNITIES

During the latter half of the 19th century, economic opportunities encouraged many Chinese to migrate to Cuba and Peru, where they worked on sugar plantations, in mining, and on haciendas, as well as running shops in townships. However, Cuba started to restrict the number of Chinese migrants. At the same time, the Mexican government started encouraging migration from China.
Porfirio Díaz, president 1876–80 and again 1884–1911, wanted Chinese coolies as a cheap labor force for building infrastructure in northern Mexico, where many settled. As with the Chinese in Peru, there were gradual changes in the economic status of the migrant communities. Whereas in the 1870s most were manual laborers, by the 1900s many were running businesses. By 1912 there were 35,000 Chinese in Mexico. Some used it as a route to the United States, but many others established businesses, often in poor suburbs. As a result, during periods of instability, especially during the Mexican Revolution, when rioting started, Asians were often the victims of mobs. The Mexican revolutionary hero Pancho Villa was definitely anti-Chinese, calling U.S. citizens chino blanco (“white Chinese”). When he took the town of Torreón on May 25, 1911, his forces and several thousand locals massacred 303 Chinese and five Japanese. When he was eventually defeated by Álvaro Obregón, he is reported to have said, “I would rather have been beaten by a Chinese than by Obregón.” In February 1914 anti-Chinese riots took place in Cananea, and local Chinese took refuge in a U.S.-owned building; in March 1915 many Chinese were attacked and robbed in rioting in Nogales. In spite of these attacks, many Chinese continued to migrate to Mexico, with 6,000 arriving in 1919–20. The Chinese community remains important in Mexico. In Central America, there were small Chinese communities in each country, and most were involved in running small businesses. By the 1930s they had begun to dominate trade in many towns in El Salvador, so much so that the 1939 constitution included protections for indigenous small traders. A new law, passed in March 1969, limited the running of small businesses in the country to people born in Central America, specifically excluding naturalized citizens. However, many Chinese continued to operate with their businesses owned by middlemen.
In Honduras, many small businesses were also owned by Chinese until the 1969 war with El Salvador, which led to fervent nationalism breaking out in the country and moves to reduce the number of Chinese-owned shops. In Central America today there are small numbers of Vietnamese, and there is also a sizable Vietnamese population in Cuba, largely as a result of political ties between the two communist countries. As well as in Peru, there are also significant Chinese communities in Brazil, Argentina, and Chile. Indeed, bilateral ties and trade with China for all three countries have increased in recent years, offering many Chinese in Latin America new opportunities for establishing businesses. Chinese-language gravestones can be seen in cemeteries throughout Latin America, although most seem to be located in foreign cemeteries, such as the British Cemetery at Chacarita in Buenos Aires or its counterparts in Chile. Most Latin American countries now recognize the People’s Republic of China, but a few still extend diplomatic recognition to the Republic of China (Taiwan) as the legitimate government of the whole of China. For these, most ties are with Taiwan. In Paraguay, the Taiwanese government and community play an important role in commercial life in Asunción and have been involved in major projects, such as the refurbishment of the Paraguayan foreign ministry.

JAPANESE AND KOREAN SETTLERS

In Brazil, the largest country in Latin America, there are many people of Chinese and East Indian ancestry and also some migrants from Malaysia involved in rubber cultivation. In the southern part of the country there are also increasing numbers of Japanese—there are said to be over 600,000 Brazilians with Japanese ancestry. A number of the Japanese can trace their origins in Brazil back to 1908, when an agreement with the municipal authorities in São Paulo allowed Japanese to settle in the hinterland.
They established many vegetable farms, and there are Japanese grocery stores, bookshops, and even geisha in São Paulo today. Numbers of Japanese farmers also left Japan during this period, with many settling in Peru, Brazil, and Paraguay, where the government was encouraging foreigners to move to the country and establish colonies. Many were poor Japanese in search of work, but quite a number were well educated. Some of the latter settled in Panama—a few involving themselves in businesses so closely linked to the Panama Canal that spying by them has long been alleged. One of them, Yoshitaro Amano, a Japanese store owner who had lived in Panama City, spied on U.S. ships using the Panama Canal. He later fled Panama and was arrested for spying in Nicaragua, Costa Rica, and then Colombia. Perhaps the most prominent example of the role of the Japanese in Latin America concerns two of the Japanese who left Kumamoto, Japan, moving to Peru in 1934: Naoichi Fujimori and his wife, Mutsue. Four years later their son, Alberto, was born, and the parents applied to the local Japanese consulate to ensure that the child retained Japanese citizenship. He worked as an agricultural engineer and became dean and then rector of his old university, also hosting a television show. In 1990 Fujimori, heading the Cambio 90 (“Change 1990”) party, defeated the author Mario Vargas Llosa in the election for president in a surprise result. Although he was of Japanese descent, Fujimori gained the political nickname “el chino” (“the Chinese man”), with many observers crediting his victory to his ethnicity, which set him apart from the political elite of Spanish descent. Fujimori had campaigned on a platform of “Work, technology, honesty,” but in what became known as Fujishock, he instituted massive economic reforms and invested the office of the president with many new powers.
His wife, Susana Higuchi, also of Japanese descent, accused him in a very public divorce of stealing from donations by Japanese foundations. Reelected in 1995, Fujimori won the 2000 election, but soon afterwards a massive corruption scandal emerged. Fujimori, overseas at the time, then went to Japan, where he resigned. In November 2005 he flew from Japan to Chile and was arrested on his arrival. On September 22, 2007, he was extradited to Peru, where he was jailed awaiting trial. On December 12, 2007, Fujimori was convicted of abuse of authority and sentenced to six years in prison. He faces three other trials on charges including murder, kidnapping, and corruption. Fujimori remains the best-known politician of Asian ancestry to hold high office in Latin America, but he has also become a byword for corruption and political sleaze. Of the Koreans who have settled in Latin America, many run shops and small businesses. There are parts of Buenos Aires and also Rio de Janeiro with large Korean populations. In Uruguay there has been an influx of Koreans, many associated with Rev. Sun Myung Moon. Despite the high-profile involvement of Fujimori in Peruvian politics, most of the Asians in Latin America shun media hype. Although many operate small businesses, importing Chinese merchandise and household consumer products into Latin America or running restaurants, a new generation of highly educated Asians fluent in Spanish is emerging, many of whom were born in Latin America. They are starting to enter the professions of law, accountancy, and banking, many having totally assimilated into the communities in which they live. When Hu Jintao, the general secretary of the Chinese Communist Party, visited Brazil, his first overseas visit after assuming the leadership of the People’s Republic of China, he was greeted by thousands of Brazilians of Chinese ancestry. Further reading: Craib, Raymond B.
“Recovering the Chinese in Mexico.” The American Philatelist (May 1998); Deacon, Richard. A History of the Japanese Secret Service. London: Frederick Muller, 1982; Gruber, Alfred A. “Tracing the Chinese in Mexico through Their Covers.” The American Philatelist (June 1993); Hu-DeHart, Evelyn. “Coolies, Shopkeepers, Pioneers: The Chinese of Mexico and Peru (1849–1930).” Amerasia (1989); Stewart, Watt. Chinese Bondage in Peru. Westport, CT: Greenwood Press, 1970. Justin Corfield

Australia: exploration and settlement

The island continent of Australia was the last to be discovered and explored by Europeans. It was called Terra Australis Incognita, the unknown southern land. The first European to sail into Australian waters was a Dutchman, Abel Tasman, working for the Dutch East India Company, who discovered the western and southern coast of an island he named Van Diemen’s Land (now Tasmania) in 1642. Subsequent Dutch explorers of areas of coastal Australia called it New Holland. In the mid-18th century France and Great Britain also became interested in exploring the unknown land. Between 1768 and 1776 Captain James Cook, an officer of the British Royal Navy, made three great voyages of discovery. His first voyage sailed around New Zealand and then along the eastern coast of Australia. Sir Joseph Banks, a scientist and naturalist who accompanied Cook, recorded the flora and fauna of southeastern coastal Australia, which he named New South Wales, indicating its possibilities for settlement. Fifteen years after Cook’s discovery, British Home Secretary Lord Sydney decided to set up a penal colony at Botany Bay (named by Banks), where Sydney is today. This was to relieve the overflowing British jails resulting from the American Revolution, when the former British colonies would no longer accept British convicts. In January 1788 Captain Arthur Phillip arrived at Sydney Harbor in charge of 11 ships, 717 convicts, and an army detachment named the New South Wales Corps, formed for the purpose of guarding them. Phillip oversaw the settlement through 1792, its most critical years, owing to lack of food and the unsuitability of convicts as pioneers. Although free settlers began arriving in Sydney from 1793, the main purpose of the settlement remained as a repository of convicts. Three governors followed Phillip; the third, William Bligh, earlier of the mutiny on the Bounty, was a man of such fiery temperament that his tenure ended with the Rum Rebellion.
The cause was the illegal liquor traffic run by officers of the New South Wales Corps, the prevalence of drunkenness, and the consequent problems. Bligh’s attempt to rein in the officers resulted in his ouster. Although the leaders of the revolt were punished, the British government recalled Bligh and undertook reforms. The new governor was Colonel Lachlan Macquarie, who came with his own Scottish regiment. The New South Wales Corps was disbanded and replaced by regular British army units that were rotated for tours of duty. Macquarie made extensive reforms, built up the infrastructure, and encouraged exploration into the interior as well as free immigration with land grants. The governors who followed him continued his policies, resulting in accelerated development. Between 1802 and 1803, Matthew Flinders circumnavigated Australia, proving that it was an island continent and that there was no separate island called New Holland. Flinders recommended the name Australia for the continent, which was accepted. In 1829 Great Britain laid claim to the whole continent. In 1813 the first overland expedition penetrated the low mountain range that separated the coastal plains of eastern Australia from the interior. Many explorations into the interior discovered river valleys and great grassy plains suitable for agriculture and pasturage. Waves of settlers followed, encouraged by liberal land grants to free settlers and emancipists (convicts who had served their terms). The natives, known as Aborigines, were hunter-gatherers and no match for the white settlers; they were killed, driven off, or survived on the fringes of white society. Great Britain established several other penal colonies in Australia in addition to the one in Sydney. One, established in 1803 in Tasmania, was used to house the most violent convicts and to preempt a possible French attempt to seize the island; another was on Norfolk Island, off the eastern coast.
In 1824 Brisbane, north of Sydney on the eastern coast, became another penal settlement; it later became the capital of the colony of Queensland. In 1850, convicts were sent to Western Australia at the request of free settlers there because of a severe shortage of labor. Two colonies, Victoria and South Australia, never had penal settlements. END OF THE PENAL SYSTEM As the number of free settlers grew, local opposition to continued transportation gained ground in the Australian colonies. At the same time, the transportation of convicts to remote colonies was questioned in Britain. In 1837 a parliamentary committee investigating the question reported against its continuation, beginning the movement to abolish it. The last convicts were landed in New South Wales in 1840. By then it had received almost 75,000 convicts, with 25,000 still under sentence. Transportation to Tasmania ended in 1853; the island had received 67,000 convicts since 1803. The first move toward representative government came to New South Wales in 1823 with an appointed legislative council. It was enlarged in 1842 to include some elected members, the electorate limited to men, including emancipists, paying certain taxes. In 1850 the British parliament passed the Australian Colonies Government Act, which gave each colony the right to set up its own legislature, determine its franchise and tariffs, and make laws, subject to royal confirmation. The six Australian colonies, which became states at federation, were: New South Wales (capital Sydney), Victoria (Melbourne), Queensland (Brisbane), South Australia (Adelaide), which also administered the Northern Territory, Tasmania (Hobart), and Western Australia (Perth). Each adopted a constitution that, with slight variations, provided for a bicameral legislature of elected members (initially on a restricted male franchise) and a cabinet government on the British model.
By the mid-19th century, the interior of Australia had been crisscrossed; gold and other mineral deposits had been discovered and were being worked; steamships and telegraph connected it with other parts of the world; and railway lines were being built. The foundations of an Australian nation had been laid. See also Australia: self-government to federation. Further reading: Atkinson, Alan. The Europeans in Australia: Vol. I, The Beginning. Melbourne: Oxford University Press, 1997; Clark, C. M. H. A History of Australia, Vol. I. From the Earliest Times to the Age of MacQuarie. Carlton, Victoria: Melbourne University Press, 1962; Inglis, K. S. The Australian Colonists: An Exploration of Social History, 1788–1870. Carlton, Victoria: Melbourne University Press; Reynolds, Henry. Frontiers: Aborigines, Settlers and Land. North Sydney: Allen and Unwin, 1987; Shaw, A. G. L. Convicts and the Colonies: A Study of Penal Transportation from Great Britain and Ireland to Australia and Other Parts of the British Empire. London: Faber, 1966. Jiu-Hwa Lo Upshur

Australia: self-government to federation

Beginning with the establishment of the legislative council for New South Wales in 1823, the Australian colonies had gradually received increasing measures of self-government from the British Colonial Office. In 1850 the British parliament passed the Australian Colonies Government Act, which allowed the colonies to set up their own legislatures, pass laws to determine the franchise and tariff rates, and alter their constitutions, all subject to royal confirmation. In the following years, most of the colonies adopted constitutions with slight variations. All provided for a bicameral legislature of elected members and a cabinet on the British model (except for the most recently settled and most sparsely populated Western Australia, which established responsible government only in 1890). Evidence of their autonomy came when Great Britain accepted a law passed by the legislature of New South Wales in 1851 that forbade the landing of convicts in that state. The last state to stop receiving British convicts was Western Australia, in 1868. There was rapid progress on many fronts during the second half of the 19th century, shown by the founding of public universities in each state and the introduction of compulsory public education. Railway building began in 1850, followed by the arrival of regular steamships that shortened the time of voyages and the opening of telegraphic communications with other parts of the world. Other signs of maturity included the withdrawal of British forces from the continent in 1870, as the colonies established their own militias, and the colonies’ agreement in 1890 to subsidize the British naval squadron stationed in Australian waters. However, the lack of a central government for the continent created problems and confusion.
For example, each of the states built its railways using a different gauge: the standard gauge of 4 feet 8½ inches for New South Wales, the wide gauge of 5 feet 3 inches for Victoria, and a narrow gauge of 3 feet 6 inches for South Australia, Western Australia, and Queensland. Another question that needed a common approach was immigration. Few non-British immigrants had settled in Australia up to 1850. However, in the aftermath of the discovery of gold in Victoria in 1851, peoples of many nationalities flooded to the goldfields. Disputes over taxation resulted in an uprising by German and Irish gold miners in November–December 1854 who proclaimed the Republic of Victoria; it was quickly put down. It was the presence of 33,000 Chinese in the gold rush that led the legislature of Victoria to pass laws in 1855 that levied a heavy poll tax and put other restrictions on the Chinese, which shut down Chinese immigration. New South Wales and South Australia followed with their own laws to restrict Chinese immigration, and these prevailed despite British government pressure against them. Two other issues also affected all the Australian colonies. One involved the importation of laborers from the Solomon and other islands to work in Australia, mostly in the sugarcane fields in Queensland. The condition of these laborers (called Kanakas) approached slavery and needed regulation. Another involved national security over control of the eastern portion of New Guinea (the Netherlands had annexed the western half). Queensland was located nearest to New Guinea and was most anxious to control all of eastern New Guinea. However, due to British reluctance to act promptly, Germany had already claimed the northern half, leaving only the southern part, which became a British colony in 1884. These many issues contributed to the sentiment for forming a federation of all the Australian colonies.
In 1885 the British parliament established a federal council to meet every two years to consult on problems that concerned all the colonies, but it was inadequate because it had no enforcement powers. The first Australian Federal Convention to create a union with more power met in Sydney in 1891. Composed of members of all the colonial legislatures, including those of New Zealand, another British possession, and presided over by Sir Henry Parkes, it failed to win the acceptance of all the states. A second convention met in Hobart (Tasmania) without New Zealand in 1897 and drafted a constitution that won acceptance. The union was called the Commonwealth of Australia, a federation that resembled the United States. The federal government was to control foreign affairs, defense, trade, tariffs, currency, citizenship, post and telegraph, and other common matters. It would be headed by a governor-general who represented the British monarch but would be governed by a prime minister and cabinet that commanded a majority in the lower house of Parliament, called the House of Representatives, whose members represented districts based on population. The upper house, or Senate, had six senators from each state. A supreme court guarded and interpreted the constitution. A new city, whose location would be determined later, would become the federal capital. (A site in New South Wales was later chosen and named Canberra, in the Australian Capital Territory.) After acceptance in a referendum held in all the states, the British parliament passed a bill of ratification. The Commonwealth of Australia came into being on January 1, 1901. After Canada (in 1867), Australia became the second self-governing dominion of the British Commonwealth. See also Australia: exploration and settlement. Further reading: Burgmann, Verity. ‘In Our Time’: Socialism and the Rise of Labor, 1885–1905. North Sydney: Allen and Unwin, 1985; Irving, Helen. To Constitute a Nation: A Cultural History of Australia’s Constitution.
Cambridge: Cambridge University Press, 1997; Kingston, Beverly. The Oxford History of Australia. Vol. 3, 1860–1900: Glad, Confident Morning. Melbourne: Oxford University Press, 1988; Macintyre, Stuart. Winners and Losers: The Pursuit of Social Justice in Australian History. North Sydney: Allen and Unwin, 1985; Trainor, Luke. British Imperialism and Australian Nationalism: Manipulation, Conflict and Compromise in the Late Nineteenth Century. Cambridge: Cambridge University Press, 1994. Jiu-Hwa Lo Upshur

Austro-Hungarian Empire

The Austro-Hungarian Empire came together in 1867 and lasted until 1918, when it was dissolved at the end of World War I. The political entity formed in 1867 was an attempt to tie together the lands controlled by the Habsburg dynasty as a successor to the Austrian Empire created in 1804. From the end of the Napoleonic Wars, the Austrian Empire had been one of the major military and political powers in Europe, with Count (later Prince) Metternich, the leading Austrian politician, helping to influence European politics through the congress system. However, in 1848, the uprisings and revolutions that took place throughout central Europe, many of which were unsuccessful but still shook the ruling classes, forced the Habsburg rulers of Austria to try to come up with another political arrangement that would help hold the dynasty’s lands together. One of the places that caused the Habsburgs the most trouble in 1848 was Hungary, where the liberal revolution was crushed with great difficulty. Although the Austrian Empire stayed together, Metternich was forced out of office, and Austria had to accept a military decline despite being the largest country in Europe after the Russian Empire. This decline was clearly demonstrated by Austria’s defeats in the Austro-Sardinian War of 1859 and the Austro-Prussian War of 1866. Count Belcredi, the Austrian prime minister, felt that the Austrian government should make considerable political concessions to Hungary to ensure the support of the Hungarian nobility and the rising middle class, yet retain Vienna as the center of the new empire. The agreement that the Austrian government eventually decided upon was the Ausgleich (kiegyezés in Hungarian), otherwise known as the Compromise of 1867.
This established the Austro-Hungarian Empire as a union under a dual monarchy: the head of the Habsburg family would be both emperor of Austria and king of Hungary, running a unified administration under which there would be an Austrian, or Cisleithanian, government and a separate Hungarian government. Both would have their own parliaments, each with its own prime minister. Many parts of local administration would be run separately, but there would be a common government working under the monarchy with responsibility for the army, the navy, foreign policy, and customs matters. The administration of education, postal systems, roads, and internal taxation would be split between the Austrian and Hungarian governments, depending on geography. The Compromise also led to Emperor Franz Josef I being crowned king of Hungary, whereby he reaffirmed the historic privileges of Hungary and also confirmed the power of the newly created Hungarian parliament. There were also some regional concessions. These largely involved parts of Austria, officially known as Cisleithania, such as Galicia (formerly part of Poland) and Croatia, maintaining a special status. In Croatia, the Croatian language was raised to a level equal with the Italian language, and in Galicia, the Polish language replaced German as the normal language of government in 1869. This gained support from the Poles but not from the Ukrainian minority. From 1882 Slovenia was to have autonomy, with Slovenian replacing German as the dominant official language and with the Diet of Carniola governing the region from Laibach (modern-day Ljubljana). In Bohemia and Moravia, Czech nationalists wanted the Czech language to be adopted, and there were subsequent concessions made in 1882.
There was another problem with the ethnic Serbs in Vojvodina, where the Hungarians were eager not to allow any part of their kingdom to gain special status. The Austro-Hungarian Empire was controlled by the Austrian and Hungarian hereditary nobility, and this class system was to lead to many problems. The major one was the marriage of Archduke Franz Ferdinand, the nephew of Emperor Franz Josef and heir to the Austro-Hungarian throne, to Sophie Chotek, from a wealthy Czech family. This led to consternation at court, and the marriage was declared to be morganatic; their children could not inherit the throne. The Austrian prime minister until 1893, Count Taaffe, managed to maintain the support of conservatives from the Czech, German, and Polish communities, known as the Iron Ring. However, some radical Czechs agitated for more power, with demonstrations in Czech-dominated Prague leading to the city being placed under martial law in 1893. Franz Josef had offered parliament the chance to choose a prime minister, but the issue of nationalities so divided the legislative body that after two years of indecision, Franz Josef appointed Count Badeni, the Polish governor of Galicia, to the prime ministership. He remained in power for two years, being ejected in 1897 when German speakers opposed his language reforms, which were repealed in 1899. Many of these problems were to become far more evident during World War I, which led to the collapse of the Austro-Hungarian Empire and its fragmentation. Further reading: Mason, John W. The Dissolution of the Austro-Hungarian Empire 1867–1918. London: Longman, 1997; May, Arthur J. The Hapsburg Monarchy. Cambridge, MA: Harvard University Press, 1951; Sked, Alan. The Decline and Fall of the Habsburg Empire 1815–1918. London: Longman, 2001. Justin Corfield

Crisis and Achievement 1900 to 1950

Addams, Jane

(1860–1935) U.S. social reformer and peace activist Born into a prosperous Illinois family, Jane Addams forged important new roles for women in education and social work. As founder of Chicago’s Hull-House, she helped revolutionize social services for the poor and for immigrants. Her work for peace as the United States marched into World War I antagonized some Americans but made her the first U.S. woman to win a Nobel Peace Prize. Addams, a sickly child, was just two when her mother died. Her father encouraged Jane’s desire for a higher education. She became part of the first generation of U.S. women to have significant access to college and was one of many well-educated women who would make their era one that historians labeled “Progressive.” After great success at Rockford Seminary, Addams found herself adrift, searching for some useful purpose for her education. Her father’s sudden death in 1881 compounded her depression. She enrolled in medical school but dropped out within weeks. Two trips to Europe and deepening religious convictions helped put Addams on a path toward achievement and acclaim. In 1887 she visited England’s Toynbee Hall, where reformers were seeking to improve the lives of workers exploited by the Industrial Revolution. Back home a group of Smith College women had just founded the College Settlement Association to assist the millions pouring into U.S. factories and cities. In 1889 Addams and college friend Ellen Starr opened their own settlement house in a former mansion at 335 Halsted Street. These small-town Protestant ladies soon found themselves providing social services to families who were mostly Italian, Catholic, and poor. Initially emphasizing cultural uplift (art, music, and good manners), Hull-House under Addams’s pragmatic supervision refocused on such pressing neighborhood needs as garbage collection and playgrounds. By 1900, of more than 100 settlement houses in U.S.
cities, Hull-House was the most famous, thanks to Addams’s skills in writing, lecturing, public relations, and fundraising. Possibly the best-known U.S. woman of her day, she was acclaimed a motherly saint before she was even 40. Unlike most white progressives, Addams worked with African-American reformers. Her fame peaked in 1910 when she published Twenty Years at Hull-House, her autobiography. An 1896 visit with the Russian author Leo Tolstoy, a theorist of simplicity and nonresistance, followed in 1898 by the Spanish-American War, helped turn Addams’s attention to problems of aggression and war. Writing extensively on war, peace, and pacifism, she became active in U.S. anti-imperialism efforts. With war raging in Europe, Addams sailed to Holland for a women’s peace conference in 1915, just weeks before German U-boats sank the Lusitania, and she later met with both sides in the vicious conflict. When the United States entered World War I in 1917, Addams found herself vilified by some as an unpatriotic defeatist and ridiculed by others as a naive female unable to understand the necessity of warfare. When the Russian Revolution produced a communist regime, “red” and “Bolshevik” were added to the failings listed by Addams’s critics. Addams spent much of the 1920s outside the United States. A long effort by her friends finally paid off when Addams shared the 1931 Nobel Peace Prize with Columbia University’s president. Her life’s work imbued with new relevance by the Great Depression, Addams died of cancer just days after her pioneering achievements were celebrated by admirers, including First Lady Eleanor Roosevelt. Further reading: Brown, Victoria Bissell. The Education of Jane Addams. Philadelphia: University of Pennsylvania Press, 2004; Davis, Allen F. American Heroine: The Life and Legend of Jane Addams. 2d ed. Chicago: Ivan R. Dee, 2000. Marsha E. Ackermann

Afrikaners, South Africa

The first half of the 20th century represented a consolidation of white-dominated rule in South Africa. Yet the century began with a conflict between the British colony and the Afrikaner, or Boer, republics. Afrikaners, who claimed their lineage from the original Dutch settlers of the Cape Colony, had developed an increasingly distinct national identity in conflict with the British and the African peoples of South Africa. Following British victory in the brutal South African War, an increasingly segregated and racialized system took hold in a united South African state, culminating in the birth of apartheid in 1948. What the British called the Second Anglo-Boer War the Afrikaners called the Second War of Freedom. Historians have called it the South African War (1899–1902) to reflect that it was not merely an imperial war between the British and the Boers but a civil war that involved the entire population of South Africa. The British claimed that the war was about the rights of foreigners (Uitlanders) in the Boer republic called the Transvaal; Paul Kruger, the president of the republic, understood the conflict to be about something more: British desire to control the Cape and the mineral wealth of the Transvaal. After the early successes of the Afrikaner war effort, the British drew on the resources of the empire to meet a significant challenge to their imperial dominance. The Boers, led by generals including Jan Christiaan Smuts and Louis Botha, turned increasingly to guerrilla tactics. The British commander, Horatio, Lord Kitchener, responded by burning Boer farms and imprisoning enemy civilians, including Africans, in concentration camps, where thousands died of disease. Africans generally did not fight in the war, but they did provide logistical support and supplies. In Britain, opposition to the war on both financial and humanitarian grounds grew. Finally, the last holdouts surrendered in 1902.
The Treaty of Vereeniging treated the Boers relatively mildly and even granted them political and cultural autonomy. The specter of African rebellion against growing repression in the white-dominated state quickly healed the wounds of the South African War. The Native Affairs Commission (1903–05), appointed by High Commissioner Sir Alfred Milner, suggested a policy of territorial segregation between whites and blacks, making Africans the true victims of the war. In 1910, the British parliament created the self-governing Union of South Africa. It became a Commonwealth nation under the Statute of Westminster in 1931. The Cape government enfranchised adult blacks, but only whites could stand for election in the new Union parliament. The Afrikaner leader Louis Botha, on the ticket of the South African Party, was elected the first prime minister of the Union of South Africa in May 1910. Blacks were denied political or economic power within the official structure of the state and society. Some individuals within the Afrikaner political elite, like J. B. M. Hertzog, remained intensely hostile to the British. During both world wars, South Africans served the empire on the battlefields of Europe, though African troops were relegated to noncombat roles. Military alliance with Britain during both wars revived old debates about white South Africa’s relationship with its “mother country.” Afrikaner nationalists revolted in 1914 after Botha allied South Africa with Britain and even agreed to invade German South-West Africa (now Namibia). During World War II, a coalition between Jan Smuts (Botha’s successor) and Hertzog, called the United Party, broke apart over the same issue. Groups like the Afrikaner Broederbond (Brotherhood) and the Purified National Party, a party that developed after Hertzog allied with Smuts, built a mythology of Afrikaner nationalism centered on the Great Trek.
The most radical Afrikaner nationalists went as far as to openly sympathize with the Nazi Party during World War II. The beginnings of apartheid can be found in the increasing segregation of and discrimination against black South Africans. The Natives’ Land Act (1913) and the Natives’ Trust and Land Act (1936) designated a small percentage of South Africa’s total land area as (segregated) black reserves. The 1923 Natives (Urban Areas) Act limited blacks’ access to white urban areas. While black South Africans were indispensable to whites as laborers, their overwhelming number in relation to the white population was perceived as a threat to the white-dominated state. In 1912, a group of Western-educated Africans formed the South African Native National Congress (later known as the African National Congress, ANC). While African leaders like Pixley Seme and John Dube petitioned eloquently against the color bar of the white-dominated society, their pleas were generally ignored by both the British and white South African governments. Some Africans sought to challenge their social and economic oppression through labor unions and even revolutionary groups like the Communist Party of South Africa. The period after 1945 witnessed a revived rhetoric of human rights and self-determination in the birth of the United Nations (ironically, Jan Smuts was recruited to help draft the preamble of the United Nations Charter). In 1944, Nelson Mandela, Oliver Tambo, and Walter Sisulu founded a Youth League within the African National Congress. While they shared the ANC’s goal of a democratic, racially egalitarian society, they advocated more militant tactics. In the 1948 campaign the National Party, led by D. F. Malan, centered its message on racial purity and white domination. In particular, its agenda was based on a systematic exclusion of and separation from Africans.
With victory, the National Party instituted what would become the bane of humanitarian society for the next four decades: apartheid. Further reading: Beck, Roger. A History of South Africa. Westport, CT: Greenwood Press, 2000; Giliomee, Hermann. The Afrikaners: Biography of a People. Charlottesville: University of Virginia Press, 2003; Lowry, Donal. The South African War Reappraised. New York: Manchester University Press, 2000. Charles V. Reed

Aga Khan

Aga Khan, the title ascribed to the imam of the Nizari Ismaili community, was first bestowed on Aga Hasan Shah by Fateh Ali, the shah of Persia, in 1818. The Ismaili branch of Islam is the second-largest Shi’i community after the Twelvers. The Ismailis and Twelvers both accept the same initial imams from the descendants of the prophet Muhammad. However, a dispute arose over the succession to the sixth imam, Jafar as-Sadiq. The Ismailis accepted the legitimacy of Jafar as-Sadiq’s eldest son, Ismail, as the next rightful imam, while the Twelvers accepted his younger son, Musa al-Kazim. The first Aga Khan was appointed governor of the province of Kirman. He also aided the British during the first Anglo-Afghan War (1839–42) and in the conquest of Sind in India (1842–43). His successor, Ali Shah, also known as Aga Khan II, died in 1885. Upon the death of Aga Khan II, his son, Sultan Muhammad (1877–1957), assumed the title of Aga Khan III. He played an active role in supporting the continuance of British colonial rule over the Indian subcontinent. Aga Khan III was also a founder of the All-India Muslim League, the political party that later demanded that a separate homeland for Muslims be carved out of India, and he served as the league’s president from 1909 to 1914. In the preindependence years of India, Aga Khan III made a number of high-profile visits abroad, including to the imperial conference in London in 1930–31, the Geneva Disarmament Conference in 1932, and the League of Nations in 1932 and in 1934–37. In 1937, he was appointed president of the General Assembly of the League of Nations in recognition of his leadership. Upon his death in 1957, Aga Khan III was succeeded by his grandson, Prince Karim, who assumed the title of Aga Khan IV. Aga Khan IV has been deeply committed to the promotion of Islamic architecture and instituted a series of awards for architectural excellence and artistic innovation in architecture.
Aga Khan IV also donated very generously to various developmental projects in a number of countries with a sizable Ismaili population. Prince Sadruddin Aga Khan was a son of Aga Khan III and an uncle of Aga Khan IV. He had an impressive educational record, with degrees from Harvard University, where he studied at the Centre of Middle Eastern Studies in 1957. Sadruddin Aga Khan worked strenuously for the ideals and programs of UNESCO, particularly for the promotion of cultural heritage sites worldwide, as well as for the UN High Commission for Refugees. In 1965, he was appointed the UN High Commissioner for Refugees and continued in this prestigious position until 1977. He was the founder of the Bellerive Foundation, an international group that funds programs for the alpine environment. In 1978, the prince was made a special adviser and chargé de mission to the secretary-general of the United Nations to promote the cause of universal human rights. Further reading: Aziz, K. K. Aga Khan III: Selected Speeches and Writings. New York: Kegan Paul, 1998; Edwards, Anne. The Throne of Gold: The Lives of the Aga Khans. New York: William Morrow, 1996; Khan, Aga. The Memoirs of Aga Khan: World Thought and Time. New York: Simon and Schuster, 1954. Mohammed Badrul Alam

Aguinaldo y Famy, Emilio

(1869–1964) president of the Philippines

Emilio Aguinaldo was a revolutionary independence leader, general, statesman, and, according to many Filipinos, the first president of the Philippines. He played a major role in the Philippine revolution against Spain and in the Philippine-American War. Aguinaldo’s rise to notability happened early in his life. He was born into a wealthy Chinese-mestizo family that owned extensive lands and that provided benefits not readily available to many Filipinos. The young Aguinaldo overcame a near-death sickness in his youth and briefly attended Letran College in Manila, but left in order to help his family care for their extensive estate. In 1895 he was elected to the position of capitan municipal (municipal captain), or town head, of Cavite El Viejo. Around the same time, Aguinaldo began his revolutionary career and entered the secret Katipunan revolutionary society, whose name is an abbreviated Tagalog term for “The Highest and Most Respectable Society of the Sons of the People.” The Katipunan advocated complete independence from Spain and thus aroused suspicions and opposition from the Spanish authorities. No longer able to evade notice by the ruling Spaniards, Aguinaldo and his fellow revolutionaries fought them, overcame early setbacks, and achieved considerable victories, most notably at the Battle of Binakayan on November 10, 1896, when they defeated Spanish regular troops. Although he won early successes and gained the leadership of his revolutionary group, Aguinaldo was forced by renewed military pressure from the Spanish to sign the Pact of Biacnabato and to accept banishment to Hong Kong in return for financial and political concessions, social reforms, and promises of autonomy of government for the Philippines. In 1898 Aguinaldo returned to the Philippines from exile to continue his revolutionary work and to assist the efforts of the United States to defeat the Spanish during the Spanish-American War.
He believed that his participation and the victory over Spain would be rewarded with a declaration of independence for the Philippines; Aguinaldo instead found that the American forces refused to allow his military to occupy Manila. He refused to allow his troops to be replaced by American forces and withdrew to Malolos, where he and his followers declared independence on June 12, 1898. On January 23, 1899, Aguinaldo was inaugurated as the first president of the Philippines, although U.S. authorities did not recognize his government. The Philippine-American War began on February 4, 1899, after a Filipino crossed over the San Juan Bridge and was shot by an American sentry. Aguinaldo led the resistance to American occupation and rejected the notions of gradual independence advocated by the occupiers and U.S. president William McKinley. Although Aguinaldo’s guerrilla warfare tactics posed many difficulties for the U.S. military, the Americans implemented a “carrot and stick” approach that undercut popular support for the insurgents. The capture of Aguinaldo in Palanan, Isabela, on March 23, 1901, with the help of Filipino trackers broke the revolt, which foundered within the following year. In exchange for his life, Aguinaldo pledged loyalty to the United States and thus acknowledged its sovereignty over the Philippines. Although no longer a revolutionary, Aguinaldo thereafter remained committed to independence and veterans’ rights while staying retired from public life for many years. In 1935, when the Commonwealth of the Philippines was established, he ran for the presidency but lost to Manuel L. Quezon. During World War II the Japanese occupiers forced him to support them and to make anti-American speeches and statements. He was later cleared of wrongdoing when Americans recaptured the Philippines and learned that the Japanese had threatened to kill his family if Aguinaldo did not comply.
After the war he actively promoted nationalistic and democratic causes within his country. He died on February 6, 1964, in Quezon City. Further reading: Achutegui, Pedro S. de, S.J., and Miguel Bernad, S.J. Aguinaldo and the Revolution of 1896: A Documentary History. Quezon City: Ateneo de Manila University Press, 1972; Agoncillo, Teodoro. Malolos: The Crisis of the Republic. Quezon City: University of the Philippines Press, 1960; Aguinaldo, Emilio. My Memoirs. Translated by Luz Colendrino-Bucu. Manila, 1967. Scott Catino

Alessandri, Arturo

(1868–1950) president of Chile

Arturo Fortunato Alessandri Palma was president of Chile from 1920 to 1924, again in 1925, and then from 1932 to 1938. During that time he became known as the Lion of Tarapacá. Known initially for his strident support of the poor of Chile, he was later heavily criticized by many of his former supporters when he became far more conservative. Arturo Alessandri was born on December 20, 1868, at Linares, south of the Chilean capital of Santiago, the son of Pedro Alessandri and Susana Palma. His father’s family originally came to Chile from Italy. He was educated at the Sacred Heart School in Santiago, and then he worked at the National Library of Chile. He used his position there to study for a law degree and in 1893 was admitted to the bar. Politically, Alessandri was connected with the Progressive Club, making him a liberal, and, in fact, he later joined the Liberal Party, becoming secretary of its executive committee in 1890. He was elected to the Chamber of Deputies in 1897 and served six terms in Congress; he later won two terms in the Senate after successfully challenging a prominent local politician for the seat for Tarapacá. During this time he built a major political base by supporting the nitrate workers in northern Chile. He became minister of industry and public works in 1908 and minister of finance in 1913, and was appointed minister of the interior in 1918. In 1920 Alessandri was elected president of Chile, ending a right-wing domination of Chilean politics that had started in the 1830s. Alessandri faced many problems in office, and to raise more government revenue he introduced income tax for the first time in Chilean history. However, Chile was entering a period of economic hardship, and the new tax only partially made up for the shortfall in the economy. The shortfall stemmed from the fall in the price of nitrate, which saw the Chilean peso drop from 27 U.S. cents to 9.
His reform moves were supported by the Liberal Alliance and the Democratic Party, but unemployment rose, and the pay for civil servants and the army fell into arrears. Furthermore, Alessandri’s attempts to spend more on public education, health, and welfare proved unpopular with some sectors of the country. During his time as president from 1920 to 1924, Alessandri had to change his government 16 times until he was finally able to secure a majority in Congress. However, Congress moved against him, and with the Chilean peso plummeting in value and his inability to pay the army, Alessandri offered to resign. In the end a military junta staged a coup d’état on September 15, 1924. Alessandri fled to the U.S. embassy and then into exile in Europe. General Luis Altamirano Talavera headed a military junta to run the country, but when it failed to fulfill the social reform program it had promised, junior officers overthrew it, and Carlos Ibáñez del Campo headed the new junta. He allowed Alessandri to return to Chile on March 20, 1925, the former president having been promised that the constitution would be rewritten to give the executive more powers. When Alessandri returned from exile in 1925, a crowd of 100,000 came to greet him, and several people were trampled to death in the confusion. However, on October 1, 1925, Alessandri was again forced to resign, and Luis Barros Borgoño succeeded him. In the elections that followed, Emiliano Figueroa Larraín became president, but he resigned in May 1927 to allow Ibáñez del Campo to return to power. Ibáñez borrowed U.S. $300 million from the United States and tried to resuscitate the economy. Initially this worked, but Ibáñez was forced from power, and a period of political anarchy followed. Elections were held in 1932, and Alessandri was once again elected president. Alessandri’s new administration was totally different from that of the early 1920s.
He was a strict constitutionalist, and he had also become more conservative and depended on the support of the right wing. His economically conservative policies led to his refusing to give money to the poor, especially those hurt by the fall in the price of nitrate and copper. With the depression hurting Chile, Alessandri tried to reorganize the nitrate industry, doubling the government’s share of profits by raising it to 25 percent. While promoting building and civil engineering projects, Alessandri still wanted to improve the provision of education. The only way of raising the extra money was to have his finance minister, Gustavo Ross Santa María, tighten up the collection of taxes. In early 1937 the Nacista movement began to gain support, and on September 5, 1938, it tried to stage a coup d’état to get Ibáñez del Campo back into power. Alessandri had already alienated most of his former supporters, who then formed the Popular Front. He used the army to arrest Ibáñez del Campo. Alessandri’s term as president ended in 1938, and Pedro Aguirre Cerda succeeded him. Alessandri went to Europe, endorsing Juan Antonio Ríos Morales in the 1942 elections, which Ríos won. Returning to Chile, Alessandri was elected to the Senate in 1944, becoming its speaker the following year. In the 1946 elections he endorsed Gabriel González Videla, who won. By this time Alessandri had once again become more liberal in his views. Alessandri towered over Chilean politics, but his speech was often rough and crude. When the U.S. journalist and writer John Gunther visited him, Alessandri’s office was decorated with autographed photographs of politicians from all over the world, including Hindenburg, Adolf Hitler, and Edward, prince of Wales (later the duke of Windsor). He died on August 24, 1950, in Santiago. Jorge Alessandri Rodríguez, who was president of Chile from 1958 until 1964, was Arturo Alessandri’s older son.
His younger son, Fernando Alessandri Rodríguez, was also active in politics. Further reading: Alexander, Robert Jackson. Arturo Alessandri: A Biography. Ann Arbor, MI: University Microfilms International for Latin American Institute, Rutgers University, 1977; Gunther, John. Inside Latin America. London: Hamish Hamilton, 1942. Justin Corfield

Algeria

Algeria remained part of the French Empire throughout the first half of the 20th century, but nationalist movements for independence became increasingly vocal and determined. Several hundred thousand Algerians fought or worked for the French military during World War I. After the war they expected reforms and changes in French policies of assimilation and favoritism toward the colons, but the colons blocked government reforms announced in 1919. French government policies dating from the 19th century onward had gradually increased the ownership of the best land by the colons and had resulted in the impoverishment of Algerian peasants. By 1950 most Algerians owned small plots of less than 10 acres. To survive, peasants became sharecroppers or seasonal workers or fled to the cities, where they were generally either day laborers or unemployed. The growing economic and social disparity between the colons and the majority Muslim Algerian population contributed to civil unrest and nationalist discontent. In the early 1920s, Algerian workers in Paris, led by Messali al-Hajj, established the Star of North Africa, a leftist social action movement that attracted considerable popular support. In the interwar years, two major approaches toward the relationship with France emerged among Algerians. The first group wanted assimilation and participation as full-fledged French citizens. The second advocated Algerian independence as a separate nation. Ferhat Abbas, a pharmacist by profession, represented the first when he said, “If I had discovered an Algerian nation, I would be a nationalist . . . I have not found it.” Hadj Ben Ahmed Messali championed the second approach, asserting that “Islam is our religion, Algeria our country, Arabic our language.” The French often jailed Messali for his uncompromising nationalist stances.
To minimize Algerian opposition, the French adopted a divide and rule tactic by favoring the Muslim Berber population that lived in the mountainous Kabyle region and encouraging it as a separate entity from the Muslim Arab population. These attempts failed as Berbers played key roles in the nationalist movement and were particularly attracted to Messali’s approach. The Algerian Muslim Congress drew up a list of grievances in 1936 but fell far short of advocating complete independence for Algeria. Many Muslim leaders still hoped that a form of assimilation could be devised whereby Muslims could become French citizens without abrogating Islamic law or tradition. In response to the problem, the Blum-Violette proposals in 1937 provided for the gradual extension of suffrage whereby some 20,000 Algerians would become citizens with more to follow over time. However, the colons adamantly opposed any reforms that widened Algerian participation and lessened their own political and economic power. The weakness and instability of French regimes in Paris prevented the implementation of reform programs that might have ameliorated the differences. When the Vichy French regime came to power during World War II, it instituted Nazi racist policies that imperiled both Muslim Algerians and Algerian Jews, who had been granted French citizenship in the late 19th century. These decrees were abolished when the Allied-supported French committee of national liberation took power in 1943. Encouraged by Allied support, Abbas and his supporters issued the Manifesto of the Algerian People in 1943. The manifesto paid respect to French culture but noted that assimilation had failed and that reforms were needed. Some French were willing to consider reforms, but others felt that the manifesto would lead to independence and flatly rejected it. Abbas then formed the Friends of the Manifesto and of Liberty and called for an autonomous republic in Algeria while counseling patience.
His movement attracted mostly urban middle-class Algerians. The working class, far greater in numbers, supported Messali’s calls for complete independence. The leader of the Free French, Charles de Gaulle, tried to reconcile the differences by proposing that more Algerians could become French citizens without giving up their Qur’anic rights, but this compromise failed to satisfy many Muslims and infuriated the colons. In 1945 the French put Abbas under house arrest, and Messali was exiled. In the spring of 1945 parades in Setif (southwest of Constantine) celebrating the end of World War II in Europe quickly turned into nationalist demonstrations. Violence spread to cities and other areas. In the rioting and French reprisals that quickly followed, hundreds of colons and thousands of Algerians (the figures vary widely, ranging from 1,500 to 80,000) were killed. The Algerian Statute of 1947, which ended assimilation and recognized two separate communities, pleased no one. Under the new law, the French prime minister appointed a governor-general who was assisted by a council of six with the right to apply French law. The Algerian Assembly was to have two houses, one European and one for “natives.” Europeans controlled both houses. Colons were against even this compromise, and Messali responded by demanding complete independence. By this time, the majority of Algerians had concluded that the French were never going to grant full equality and that independence was the only solution. By 1950 many Algerian nationalists had either been arrested by the French, gone into exile, or escaped into the mountains of the Kabyle. The conflict remained unresolved until full-scale war broke out in 1954.
Further reading: Berque, Jacques. French North Africa: The Maghrib Between Two World Wars. Translated by Jean Stewart. London: Faber and Faber, 1962; Brace, Richard, and Joan Brace. Ordeal in Algeria. Princeton, NJ: D. Van Nostrand, 1960; Perkins, Kenneth. Qaids, Captains, and Colons: French Military Administration in the Colonial Maghrib, 1844–1934. New York: Africana, 1981. Janice J. Terry

alliance system

Alliances are a common military and political arrangement among states. Often formed for defensive purposes, they frequently result in the very war they were meant to avoid. When Sparta formed the Peloponnesian League and Athens led the Delian League in the aftermath of the Persian Wars, war followed, and it was long and costly. Likewise, the alliance system that emerged in the years before World War I proved to be a major cause of one of the greatest conflagrations in human history. The roots of the modern alliance system lie in the situation that arose following the victory of Prussia in its war with France in 1870–71. Since the 1860s the Prussian chancellor Otto von Bismarck had waged wars with Denmark and Austria, which led to territorial acquisitions. With the Franco-Prussian War came the unification of Germany, which then took two provinces, Alsace and Lorraine, from France. One of the major consequences of these events was a change in the balance of power, as Germany replaced France as Europe’s greatest power. German diplomats assessed these new conditions. The first point to be noted was that France constituted a threat on Germany’s western border, eager as it was to retrieve the lost territories. Thus, in the 1880s, Bismarck sought to isolate France and prevent it from obtaining an ally that could pose a danger to Germany in the east and thus produce the possibility of a two-front war against Germany in the future. With this in mind, Bismarck devised the Three Emperors’ League in 1873, which tied together the conservative empires of Germany, Russia, and Austria-Hungary. Even after signing the Dual Alliance with Austria-Hungary in 1879, he attempted to contain Russia with the Reinsurance Treaty of 1887. Following Bismarck’s removal from office in 1890, Germany allowed the Reinsurance Treaty to lapse, as it appeared that Russia and Austria-Hungary were incompatible partners.
Russian ambitions in the Balkans, fanned by Pan-Slavism, came into conflict with Austria-Hungary’s need to control these areas for the sake of its own national integrity. Thus, Russia was motivated to sign a treaty with France in 1894 to gain its assistance in the east. This created the possibility of a two-front war for Germany. It should also be noted that both France and Germany found themselves linked to eastern powers whose quarrel did not directly involve their national interests. In these circumstances, it was natural for Britain to be taken into consideration, despite the fact that Britain had a history of maintaining its distance from the continent and eschewing treaties. From the German point of view, there were two positive scenarios. The first would be for Britain to maintain neutrality; the second and best option would be for Britain to become a German partner. At the same time, Russia and France hoped that Britain would become an ally and add British naval strength to their arsenal of weapons. The contest for British support became one of the most important issues around the turn of the century. Germany made critical mistakes in dealing with Britain. In the first place, German leaders seem to have believed that Germany needed to do nothing to woo Britain, for eventually Britain would be forced to side with Germany because of its differences with France and Russia. There was a tradition of war with both, and Britain had important rivalries with France in Africa and with Russia over India and Afghanistan. This turned out to be a serious miscalculation on Germany’s part, since Britain, having been embarrassed by the unexpected difficulty of the Boer War, was anxious to achieve security. What truly alarmed Britain was the German decision to adopt a program to create a high seas fleet. Britain had always depended on its naval supremacy as its most important defense and to secure its communications with the empire.
The idea that Germany would challenge its predominance spurred Britain to embark on its own naval building program, resulting in a naval race. More significantly, it prompted Britain, to the surprise of Germany, to reconsider its isolation and enter into conversations with France in 1904 and Russia in 1907. Both sets of talks concluded with the resolution of colonial differences and the inauguration of military contacts. What had occurred was not an alliance among the three; rather, Britain had established friendly relations with the other two. This relationship became known as the Triple Entente. This outcome, of course, now forced Germany to plan not only for a two-front war but for a war in which Britain might intervene on the side of its opponents. Moreover, it now became clear that Italy, the third member of the Triple Alliance, could not be counted on to support Germany and Austria-Hungary. The result of all of this was the development of the Schlieffen Plan, by which Germany hoped to score a rapid, decisive victory over France before Russia could fully mobilize and Britain could intervene. This plan committed Germany to a timetable that was very hard to alter once a decision was made. Thus, it led to the violation of Belgian neutrality, which assured that Britain would come to Belgium’s assistance. The crisis in the Balkans caused by the assassination of Archduke Franz Ferdinand in 1914 led to a confrontation between Russia and Austria-Hungary over Serbia. Faithful to its treaty commitments, France supported Russia, while Germany backed Austria-Hungary. When German armies entered Belgium, Britain entered the war. The alliance system ensured that a chain reaction would take place as countries arrayed themselves against one another. In many ways it provoked the war it was intended to prevent. Further reading: Reiter, Dan. Crucible of Beliefs: Learning, Alliances, and World Wars. Ithaca, NY: Cornell University Press, 1996; Stokesbury, James L.
A Short History of World War I. New York: Harper, 1981. Marc Schwarz

All-India Muslim League

The All-India Muslim League (AIML) was established on December 30, 1906, during British colonial rule to protect the interests of Muslims. It later became the main vehicle through which the demand for a separate homeland for the Muslims was put forth. The Indian National Congress (INC) was perceived by some Muslims as an essentially Hindu organization where Muslim interests would not be safeguarded. Formed in 1885, the INC did not have any agenda of separate religious identity. Some of its annual sessions were presided over by eminent Muslims such as Badruddin Tyabji (1844–1906) and Rahimtulla M. Sayani (1847–1902). Certain trends emerged in the late 19th century that convinced a sizable group of Muslims to chart a separate course. The rise of communalism in the Muslim community began with a revivalist tendency, with Muslims looking to the history of the Arabs as well as the Delhi sultanate and the Moghul rule of India with pride. Although the conditions of the Muslims were not the same all over the British Empire, there was a general backwardness in commerce and education. The British policy of “divide and rule” encouraged certain sections of the Muslim population to stay away from mainstream politics. The INC, although secular in outlook, was not able to contain the spread of communalism among Hindus and Muslims alike. The rise of Hindu militancy, the cow protection movement, the use of religious symbols, and so on alienated the Muslims. The ideology and political activities of Syed Ahmed Khan (1817–98) provided a backdrop for the separatist tendency among the Muslims. He argued that the interests of Hindus and Muslims were divergent and advocated loyalty to the British Empire. The viceroy Lord Curzon (1899–1905) partitioned the province of Bengal in October 1905, creating a Muslim-majority province in the eastern wing.
The INC’s opposition and the consequent swadeshi (indigenous) movement convinced some Muslim elites that the congress was against the interests of the Muslim community. A pro-partition campaign was begun by the nawab of Dhaka, Khwaja Salimullah Khan (1871–1915), who had been promised a large interest-free loan by Curzon and who stood to be influential in the new province. The nawab began to form associations safeguarding the interests of the Bengali Muslims. He was also thinking in terms of an all-India body. At his Shahbag residence he hosted 2,000 Muslims between December 27 and 30, 1906. Sultan Muhammad Shah, the Aga Khan III (1877–1957), who had led a delegation in October 1906 to Viceroy Lord Minto (1845–1914) to request a separate electorate for the Muslims, was also with Salimullah Khan. Nawab Mohsin-ul-Mulk (1837–1907) of the Aligarh movement was also present in Dhaka. On December 30 the AIML was formed. The chairperson of the Dhaka conclave, Nawab Viqar-ul-Mulk (1841–1917), declared that the league would remain loyal to the British and would work for the interests of the Muslims. The constitution of the league, the Green Book, was drafted by Maulana Muhammad Ali Jouhar (1878–1931). The headquarters of the league was set up in Aligarh (moving to Lucknow in 1910), and the Aga Khan was elected the first president. Thus, a separate all-India platform was created to voice the grievances of the Muslims and contain the growing influence of the Congress Party. The AIML had a membership of 400, and a branch was set up in London two years afterward by Syed Ameer Ali (1849–1928). The league was dominated by the landed aristocracy and civil servants of the United Provinces. In its initial years it passed pious resolutions. The leadership remained loyal to the British Empire, and the Government of India Act of 1909 granted separate electorates to the Muslims.
A sizable number of Muslim intellectuals advocated a course of agitation in light of the annulment of the partition of Bengal in 1911. Two years afterward the league demanded self-government in its constitution. There was also a change in the leadership of the league after the resignation of President Aga Khan in 1913. Mohammad Ali Jinnah (1876–1948), the eminent lawyer from Bombay (now Mumbai), joined the league.

DRIVING OUT THE BRITISH

Hailed as the ambassador of “Hindu-Muslim unity,” Jinnah was an active member of the INC. He still believed in cooperation between the two communities to drive out the British. He became the president of the AIML in 1916 when it met in Lucknow. He was also president between 1920 and 1930 and again from 1937 to 1947. Jinnah was instrumental in the Lucknow Pact of 1916 between the congress and the league, which assigned 30 percent of provincial council seats to Muslims. But there was a gradual parting of the ways between the INC and the AIML. The appearance of Mohandas K. Gandhi (1869–1948) on the Indian scene further increased the distance, as Jinnah did not like Gandhi’s noncooperation movement. A short-lived hope of rapprochement between the two parties arose in the wake of the Simon Commission. The congress accepted the league’s demand for one-third representation in the central legislature. But the Hindu Mahasabha, established in 1915, rejected the demand at the All Parties Conference of 1928. The conference also asked Motilal Nehru (1861–1931) to prepare a constitution for a free India. The Nehru Report spelled out a dominion status for India. The report was opposed by the radical wing of the INC, which was led by Motilal’s son Jawaharlal Nehru (1889–1964). The league also rejected the Nehru Report, as it did not concede all the league’s demands. Jinnah called it a parting of the ways, and relations between the league and the congress began to sour.
The league demanded separate electorates and the reservation of seats for the Muslims. From the 1920s on, the league was not a mass-based party. In 1928, in the presidency of Bombay, it had only 71 members. In Bengal and the Punjab, the two Muslim-majority provinces, the Unionists and the Krishak Praja Party, respectively, were powerful. League membership also did not increase substantially. In 1922 it had a membership of 1,093, and after five years it had increased only to 1,330. Even at the historic 1930 session, when the demand for a separate Muslim state was made by President Muhammad Iqbal (1877–1938), it lacked a quorum, with only 75 members present. After coming back from London, Jinnah again took up the mantle of leadership of the league. The British had agreed to give major power to elected provincial legislatures under the 1935 Government of India Act. The INC was victorious in general constituencies but did not perform well in Muslim constituencies. Many Muslims had subscribed to the INC’s ideal of secularism. It seemed that the two-nation theory, which held that the Hindus and the Muslims formed two different nations, each with a common language, history, and religion, did not appeal to all the Muslims. In 1933 a group of Cambridge students led by Choudhary Rahmat Ali (1897–1951) had coined the term Pakistan (land of the pure), taking letters from Muslim-majority areas: Punjab P, Afghania (North-West Frontier Province) A, Kashmir K, Indus-Sind IS, and Baluchistan TAN. The league did not achieve its dream of a separate homeland for the Muslims until 1947. It had been an elite organization without a mass base, and Jinnah took measures to popularize it. The membership fees were reduced, committees were formed at district and provincial levels, socioeconomic content was put into the party manifesto, and a vigorous anti-congress campaign was launched.
The scenario changed completely for the league when, at the famous Lahore session, the Pakistan Resolution was adopted on March 23, 1940. Jinnah reiterated the two-nation theory, highlighting the social, political, economic, and cultural differences of the two communities. The resolution envisaged an independent Muslim state consisting of Sindh, the Punjab, the North-West Frontier Province, and Bengal. The efforts of Jinnah after the debacle of the 1937 elections paid dividends, as 100,000 members joined the league that year. There was no turning back for the league after the Pakistan Resolution. The league followed a policy of cooperation with the British government and did not support the Quit India movement of August 1942. The league was determined to have a separate Muslim state, whereas the congress was opposed to the idea of partition. Reconciliation was not possible, and talks between Gandhi and Jinnah for a united India in September 1944 failed. After the end of World War II, Great Britain did not have the economic or political resources to hold the British Empire in India. It decided finally to leave India and ordered elections to the central and provincial legislatures. In the elections of December 1945 for the center, the league won all 30 seats reserved for Muslims with 86 percent of the votes. The congress captured all the general seats with 91 percent of the votes. In the provincial elections of February 1946, the league won 440 of the 495 seats reserved for Muslims with 75 percent of the votes. Flush with success, the Muslim members gathered in April for the Delhi convention and demanded a sovereign state and two constitution-making bodies. Jinnah addressed the gathering, saying that Pakistan should be established without delay. It would consist of the Muslim-majority areas of Bengal and Assam in the east and the Punjab, the North-West Frontier Province, Sind, and Baluchistan in the west.
The British government had dispatched a cabinet mission in March to transfer power. The league accepted the plan of the cabinet mission, but the league working committee in July withdrew its earlier acceptance and called for a Direct Action Day on August 16. The league joined the interim government in October but decided not to attend the Constituent Assembly. In January 1947 the Muslim League launched a “direct action” against the non–Muslim League government of Khizr Hayat Tiwana (1900–75) of the Punjab. Partition was inevitable, and the new viceroy, Lord Louis Mountbatten (1900–79), began to talk with leaders from the league as well as the congress to work out a compromise formula. On June 3, 1947, it was announced that India and Pakistan would be granted independence. The Indian Independence Act was passed by the British parliament in July, and the deadline was set for midnight on August 14–15. The demand of the league for a separate state was realized when Pakistan was born on August 14. On August 15 Jinnah was sworn in as the first governor-general of Pakistan, and Liaqat Ali Khan (1895–1951) became the prime minister. The new nation had 60 million Muslims in East Bengal, West Punjab, Sind, the North-West Frontier Province, and Baluchistan. After independence the league did not remain a major political force for long, and dissent resulted in many splinter groups. The Pakistan Muslim League had no connection with the original league. In India the Indian Union Muslim League was set up in March 1948 with a stronghold in the southern province of Kerala. The two-nation theory received a severe jolt when East Pakistan seceded after a liberation struggle against the oppressive regime of the west. A new state, Bangladesh, emerged in December 1971. In the early 21st century more Muslims resided in India (175 million) than in Pakistan (159 million). Further reading: Aziz, K. K. The Making of Pakistan: A Study in Nationalism.
Lahore: Sang-e-Meel Publications, 1993; Hussain, J. A History of the Peoples of Pakistan: Towards Independence. Karachi: Oxford University Press, 1997; Jalal, Ayesha. The Sole Spokesman: Jinnah, the Muslim League and the Demand for Pakistan. New Delhi: Cambridge University Press, 1994; Masselos, Jim. Indian Nationalism: A History. New Delhi: Sterling Publishers, 1985; Pirzada, Syed Sharifuddin, ed. Foundations of Pakistan: All-India Muslim League Documents, 1906–1947. 3 vols. Karachi: Royal Book Company, 1969, 1970, 1990; Ziring, Lawrence. Pakistan in the Twentieth Century: A Political History. Karachi: Oxford University Press, 1997. Patit Paban Mishra

Ambedkar, Bhim Rao

(1891–1956) Indian lawyer and reformer Dr. Bhim Rao Ambedkar was the most important leader of the oppressed untouchable minority in the history of India. He acquired the honorific name Babasaheb. Fighting for his people, he angered Mohandas K. Gandhi, the revered leader of the Indian nationalist movement, as well as many Hindu traditionalists. When India became an independent country, he served in its cabinet and drafted its constitution. Near the end of his life, he became a Buddhist and encouraged other untouchables to do likewise; he had lost hope of justice for his people within Hinduism. In Hinduism most people belonged to four hierarchical castes, but a large minority were excluded from the caste system and were regarded as beneath it. They did jobs that other Hindus rejected as ritually unclean and were not allowed to pray in temples or to draw water from communal wells. Nearly all of them were desperately poor. In English these people often are called untouchables, or pariahs. Gandhi, wishing to improve their status, called them harijans, or children of God. To underscore their miserable condition, untouchables preferred to be called dalits, a name that means oppressed. B. R. Ambedkar was born to an untouchable family as its 14th child. At the time of his birth his father was a soldier. Untouchables were divided into numerous hereditary subgroups, or jatis. Ambedkar belonged to the Mahar jati. Despite the disadvantages of poverty, family responsibilities, and untouchable status, he acquired an advanced education. In 1912 he earned a B.A. degree from Elphinstone College at Bombay University. The ruler of a princely state then financed his education in the United States and Britain. In 1916 Columbia University awarded him a Ph.D. in economics. He continued his studies at the London School of Economics. In 1921 it awarded him a second doctorate. He studied law at Gray’s Inn and in 1923 was called to the bar in Britain.
He also studied briefly at a German university. In India he practiced law, taught, edited newspapers, and entered politics. Although he was elected to the Bombay legislature, his real political career was as the leader of the formerly passive untouchable community. Ambedkar’s nonviolent protests mobilized tens of thousands of dalits for the right to draw water from wells and public tanks and to pray in temples. Although Gandhi saw himself as a friend of the untouchables, he got along poorly with Ambedkar. They quarreled at the Round Table Conferences on India’s future held in London. When Britain decided to grant India extensive political autonomy, its government grappled with the problem of the diversity within the Indian population. In 1932 Britain offered separate electorates to the untouchables, so that this oppressed minority would control the selection of its representatives. The Indian National Congress strongly opposed any separate electorates. Gandhi began a fast to put pressure on Ambedkar to reject the special electorates for his people. Reluctantly, he did so. The Indian National Congress offered Ambedkar concessions in what was known as the Poona Pact. The number of seats reserved for untouchable candidates was increased, but the entire electorate, not just untouchables, would vote on the candidates for these seats. In 1936 Ambedkar organized the Independent Labour Party. In contrast with Gandhi and the Indian National Congress, Ambedkar and his party supported the British government in India during World War II. In 1942 he became a member of the viceroy’s executive council. In the same year he organized a new political party, the Scheduled Castes’ Federation. When India became independent, Ambedkar joined the new government that the Indian National Congress dominated. From 1947 to 1951, he was a member of the cabinet. More important, he chaired the committee that drafted the national constitution and was its principal author.
In the final years of his life, Ambedkar turned to Buddhism, a religion with Indian roots that rejected the Hindu caste system and the concept of untouchability. He formally converted to Buddhism in October 1956. Hundreds of thousands of untouchables joined him in leaving Hinduism for Buddhism. A few weeks after his conversion ceremony, Ambedkar died. Further reading: Jaffrelot, Christophe. Dr. Ambedkar and Untouchability: Fighting the Indian Caste System. New York: Columbia University Press, 2005; Jondhale, Surendra, and Johannes Beltz, eds. Reconstructing the World: B.R. Ambedkar and Buddhism in India. New Delhi: Oxford University Press, 2004; Rodriques, Valerian, ed. The Essential Writings of B.R. Ambedkar. New Delhi: Oxford University Press, 2002. David M. Fahey

Amin, Qasim

(1863–1908) Egyptian author and reformer Qasim Amin was a noted Egyptian intellectual and advocate of reform in the later 19th and early 20th centuries. His father was a Turkish Ottoman official and landowner married to an Egyptian woman. Amin was educated in Cairo and at the School of Law and Administration. He was a follower of the earlier reformer Muhammad Abduh, who sought to resolve the conflict of Islamic practices and tradition with the adoption of Western scientific thought and development. As a highly respected lawyer, Amin was sent on a government educational mission to France, where he spent several years in the 1880s. Amin wrote a number of works on social issues, and in Les Egyptiens he focused on the national rights of Egyptians. He was best known for his works on the status of women. He addressed the issues of polygamy, marriage laws, education for women, seclusion, and veiling in The Liberation of Women, published in 1899. Amin argued that sharia (Islamic law) and Islamic custom did not mandate either the seclusion of women in the home or veiling. Both were commonly practiced among upper and middle classes of the era. Poor peasant families could not afford the luxury of secluding or veiling women, who commonly worked alongside men in the fields. Amin emphasized that sharia granted legal rights to women and that the corruption or decline of morals by outside forces had been responsible for the decline of Islamic societies. He stressed the importance of women in building modern nations and in national struggles and advocated improved education for women. According to Amin, education for women should not be limited to matters of household management but should include subjects that would enable them to participate in life outside the home. Although by contemporary standards Amin’s advocacy of gradual reform was not revolutionary, his book on the status of women aroused massive public debate about the role of women and Islam.
Amin was severely criticized by conservative religious leaders and the palace. He answered his critics in a second book, The New Woman (1900), which was radical for its age. In this second book he dropped a discussion of Islamic law and tradition to justify reforms and instead applied Western thought to augment his arguments. Amin stated that with education and reforms in status, women would ultimately have almost the same rights and status as men. Amin supported the Egyptian nationalist movement, in which both men and women were full participants, in his memoirs, Kalimat. He also stressed the need for scientific knowledge in order for nations to advance. An early Egyptian nationalist, Amin was friendly with Sa’d Zaghlul and Tal’at Harb, both of whom became leaders of the Egyptian nationalist movement. Further reading: Amin, Qasim. The Liberation of Women: A Document in the History of Egyptian Feminism. Cairo: The American University in Cairo Press, 1992; Hourani, Albert. Arabic Thought in the Liberal Age 1798–1939. London: Oxford University Press, 1962. Janice J. Terry

Amritsar massacre

The Amritsar massacre (April 13, 1919) helped many moderate Indian nationalists become fiercely anti-British. The Rowlatt Acts, enacted by the British government, had outraged politically minded Indians. Extending wartime emergency legislation, the Rowlatt Acts gave the British viceroy in India the authority to silence the press, make arrests without a warrant, and imprison without trial. The Indian members of the viceroy’s legislative assembly opposed this legislation, and several of them resigned (including Mohammad Ali Jinnah, later the founder of Pakistan). To protest the Rowlatt Acts, Mohandas K. Gandhi called for a national hartal, a day of prayer and fasting, that on April 6 closed most shops and businesses in the northwestern province of the Punjab. The British administration in the Punjab, headed by Sir Michael O’Dwyer, was notoriously stern, and the province had long seethed with unrest. In Lahore there were large anti-British demonstrations and a railroad strike. On April 10, on O’Dwyer’s order, British officials in Amritsar arrested Dr. Saif-ud-Din Kitchlew, a Muslim lawyer, and Dr. Satyapal, a Hindu who had served as a medical officer in the British army. They were leaders of the Amritsar nationalist movement. In the angry reaction against these arrests, violence broke out resulting in destruction of property and looting in Amritsar. Five British civilians and 10 Indians were killed. A school superintendent, Marcella Sherwood, was trapped by a mob, badly beaten, and left for dead. This mistreatment of a British woman outraged officials. The villain in the story of the Amritsar massacre was Reginald E. H. “Rex” Dyer. Dyer was a colonel who held the temporary rank of brigadier general while commanding an infantry brigade in the Punjab. Born in India, he was competent in several Indian languages, including Hindi and Punjabi. Before the Amritsar massacre, he had not had a reputation of being more racist than other British officers.
In fact, early in 1919 he had resigned from the officers’ club that served his brigade because he objected to the exclusion of Indians who held commissions as officers. He appears to have been lacking in self-confidence while at the same time being stubborn and rash. He did not always obey orders. Unfortunately, he was stationed near Amritsar. Apparently, Dyer acted on his own initiative in moving his brigade to Amritsar on April 11. On the next day he reissued an earlier government order that banned any meetings or gatherings. He did not continue the previous policy of slowly extending British military and police control over one part of the city after another. He preferred to parade large forces through Amritsar as a demonstration of strength and then withdraw them. Despite the proclamations against meetings, thousands of Indians flocked to the Jallianwala Bagh on April 13, most of them in support of the imprisoned Kitchlew and Satyapal. Some arrived after the police had closed a nearby fair held in honor of the Sikh new year. By late afternoon a huge throng was present, a rather quiet crowd and not an angry mob. Estimates vary, but there certainly were more than 10,000 people. The Bagh was a trap for them. Enclosed by the walls of surrounding buildings, it had only a few narrow openings for entrance or exit, some of them locked. Dyer made no attempt to prevent the meeting at the Jallianwala Bagh or to disperse it peacefully. He decided to make an example of those who had violated the British prohibition of large gatherings. For this purpose he assembled a small force of 90 men that included no British soldiers. Instead he chose Baluchis, Gurkhas, and Pathans, “native” soldiers but ones who lacked sympathy for local Indians. He brought with him two armored cars equipped with machine guns. He later said that he did not use them because the entrances to the Bagh were too narrow. Even without the machine guns, the carnage was great.
Without any warning Dyer’s soldiers fired on the crowd for 10 to 15 minutes. There was only one exit available for the thousands. In desperation many of those in the Bagh jumped into a deep well. After his troops had fired 1,650 rounds, Dyer ordered an end to the slaughter because he feared that his men would run out of ammunition and not be able to protect their withdrawal. Nobody knows how many people were killed. An official estimate made by the British authorities says 379. An Indian investigation says 530. The wounded numbered over 1,000. After the facts of the massacre became known, Dyer was dismissed. He returned to Britain, where a special commission of investigation censured him in 1920. Despite the official censure, some in Britain saw Dyer as a hero who took decisive action to prevent a rebellion that might have shaken British rule throughout the subcontinent. For many members of the upper and middle classes and military officers, Dyer was a victim of the government’s need to appease Indian nationalists. Dyer died of natural causes in 1927. An embittered Indian assassinated O’Dwyer in 1940. See also Indian National Congress. Further reading: Collett, Nigel. The Butcher of Amritsar: General Reginald Dyer. London and New York: Hambledon and London, 2005; Draper, Alfred. The Amritsar Massacre: Twilight of the Raj. London: Buchan and Enright, 1985; Sayer, Derek. “British Reaction to the Amritsar Massacre, 1919–1920.” Past & Present 131 (1991). David M. Fahey

analytic philosophy

Since its beginnings in ancient Greece, one of the motivations driving Western philosophy has been the conviction that concepts such as “knowledge,” “mind,” “justice,” and “beauty” are obscure and that it is the business of philosophers to achieve a clearer understanding of their meanings. Analytic philosophy seeks this elevated understanding through a clarification of “ordinary,” that is, nonphilosophical, language that is believed by most analytic philosophers to be vague and obscure, at least in regard to concepts of interest to philosophers. In the early decades of the 20th century, the founders of the analytic tradition, Bertrand Russell and Ludwig Wittgenstein, sought to use newly developed techniques in symbolic logic to produce ideally simple “atomic statements,” the meanings of whose component terms were absolutely clear. These component terms would, they believed, directly match, or, to use Wittgenstein’s term, “picture,” “atomic facts,” thereby yielding absolutely certain truths about “reality.” Russell called this technique “logical atomism.” During the 1920s and 1930s, this methodology, especially as embodied in Wittgenstein’s book, Tractatus Logico-Philosophicus, inspired the short-lived analytic movement known as logical positivism. In this view science represents the standard of what is to count as knowledge, and, positivists claimed, science itself ultimately rests on statements of the sort sought by Russell and Wittgenstein, namely, simple statements the truth or falsehood of which can be verified, in principle, by direct sensory experience. Utterances that cannot be analyzed and verified in this way, for example, those containing religious or ethical terms, were dismissed by logical positivists as meaningless, or at the very least as outside the boundaries of possible knowledge.
Though Russell never lost faith in some form of “logical analysis” as the proper approach to the solution of philosophical problems, over time most philosophers in the analytic tradition, including the logical positivists, came to doubt the feasibility of arriving at absolutely clear and simple statements whose truth could be conclusively verified by basic sensory experiences. Wittgenstein also began to question his own “picture theory” of language. Later in his life he authored a radical critique not only of his and Russell’s earlier work, but of virtually all of previous philosophy and in the process inspired a second movement within the analytic tradition, one that came to be known as ordinary language philosophy. Through the presentation of extensive “reminders” about how concepts actually function in “ordinary” language, the later Wittgenstein sought to wean philosophers away from the perception that our ordinary concepts are obscure and in need of philosophical analysis and clarification. With regard to our familiar concepts, Wittgenstein claimed that “nothing is hidden.” A concept’s meaning, he said, is fully visible in the ways in which it is used in ordinary language. If we remind ourselves of how words such as knowledge, mind, and the rest are used in the push and pull of life, he argued, we can see all there is to see about what they mean. The outcome of this realization should then be that philosophers’ traditional problems are not solved, but dissolved, that is, shown not to have been genuine problems in the first place. In spite of the widespread influence in the mid-20th century of this critique of the need for philosophical analysis, philosophers’ faith in the legitimacy and profound urgency of their ancient puzzles reasserted itself, and it has for the most part prevailed, at least for the foreseeable future.
The vast majority of analytic philosophers are today fully engaged in attempts to “shed light” on concepts of traditional philosophical interest, though without resorting to the kind of rigorous, but discredited, logical analysis envisioned by Russell and Wittgenstein in the early decades of the 20th century. Further reading: Russell, Bertrand. The Problems of Philosophy. New York: Oxford University Press, 1959; Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. London: Routledge & Kegan Paul, 1922; ———. Philosophical Investigations. Malden, MA: Blackwell Publishing, 2001; ———. The Blue and Brown Books: Preliminary Studies for the Philosophical Investigations. New York: Harper Torchbooks, 1958. Michael H. Reed

anarchist movements in Europe and America

Anarchism is a political belief that rejects organized government and asserts that each individual person should govern him- or herself. Anarchists believe that all forms of rulership and government over a people are detrimental to society because they interfere with individual action and responsibility. The term is distinguished from the word anarchy, which means the actual absence of any form of organized government. The origin of anarchism can be traced to the Age of Enlightenment in the 18th century, when movements supporting intellectualism and reason became influential. Some of the effects of the ideas of this age were radical changes in government ideals and values. The ideas of Jean-Jacques Rousseau (1712–78), a Swiss-born philosopher, influenced the inciters of the French Revolution. Some of these groups applied the term anarchist to themselves as a positive label referring to people who were opposed to old and undesirable forms of government. Anarchist ideas can be found in the writings of William Godwin (1756–1836), the father of Frankenstein author Mary Shelley. Godwin attributed the evils of mankind to societal corruption and theorized that it was better to reduce organized government. Godwin felt that humans’ possession of a rational mind would be spoiled should external controls interfere. The person who is most often credited as the father of modern anarchism is Pierre-Joseph Proudhon (1809–65). He was the first to coin the words anarchism and anarchist to refer to his belief system. In 1840 he published his first significant work, What Is Property? He was opposed to both capitalism and communism, though his beliefs and writings put him under the socialist umbrella. Proudhon, when he settled in Paris, found people who had already accepted his ideas. However, the movement soon evolved into several types of anarchism, mainly due to views on economics.
Most of the concepts of anarchist groups are based on the treatment of the industrial worker, as this was a primary concern at the time these groups were founded, and workers were the ones who most commonly formed anarchist groups. The major types of anarchism that have evolved since then are:

Mutualism—Although this started as a set of economic ideas from French and English labor groups, it later became associated with Proudhon. It bases its ideas on Proudhon’s assertion that a product’s true price should be determined by the amount of labor spent to produce it without considering materials. Therefore, mutual reward is achieved when people are paid for their labor no matter what economic conditions apply. However, private ownership of production facilities is maintained.

Collectivist Anarchism—This movement is mostly attributed to Russian anarchist Mikhail Bakunin (1814–76). For collectivist anarchists private ownership of the means of production is opposed, and ownership is collectivized. Workers should be paid according to the time spent on production work.

Anarchist Communism—Also called communist anarchism, this movement suggests that a worker is not necessarily entitled to the products that he or she worked to produce and that mere satisfaction of needs is the payment. Instead of a general government, self-governing communes can be organized that are ruled by actual democracy, based on constituent voting. Joseph Déjacque (1821–64) is considered the first figure of this subgroup, while the most influential is Peter Kropotkin (1842–1921). As in communism, private ownership is opposed.

Anarcho-Syndicalism—This movement promotes the power of trade unions to override capitalism and seeks to abolish the wage system and private ownership. It borrows heavily from collectivist and communist modes of anarchism. Workers’ groups are to have a heavy degree of solidarity and are able to self-govern without external controls. The most prominent anarcho-syndicalist was Rudolf Rocker (1873–1958).

Individualist Anarchism—This is the most common form of anarchism in the United States. Individualist anarchism is influenced mainly by the writings of Henry David Thoreau (1817–62), although his writings are mainly philosophical and do not recommend any kind of action. Other U.S. anarchists, such as Josiah Warren, Lysander Spooner, and Benjamin Tucker, had more explanation of their courses of action. However, another kind of individualist anarchism, egoism, was presented by German philosopher Max Stirner (1806–56) in the mid-1800s.

Other anarchist forms were anarcho-capitalism, which enjoys a strong following in the United States, and anarchism without adjectives, a uniquely named form championed by the most prominent female anarchist in history, Voltairine de Cleyre (1866–1912). Russian writer Leo Tolstoy (1828–1910) promoted a religion-based form of anarchism, Christian anarchism, advocating that since God is the ultimate government there should be no human governments organized. Anarchist ideals had gained a significant following by the 19th century but had lost mass appeal by the turn of the 20th century. In the Russian Revolution and Civil War of 1917, anarchists participated alongside communists but were then turned against by the communist government. This led to the 1921 Kronstadt Rebellion, and anarchists were either jailed or made to leave the country. In the 1930s, anarchists were opposed to the Fascist government of Italy under Benito Mussolini. Anarchists were active also in France and Spain, where the Confederación Nacional del Trabajo, a generally anarchist labor union, participated in the events of the Spanish civil war (1936–39). See also Goldman, Emma. Further reading: Avrich, Paul. The Modern School Movement: Anarchism and Education in the United States.
San Francisco: AK Press, 2005; Berkman, Alexander, Emma Goldman, and Paul Avrich. The ABCs of Anarchism. London: Freedom Press, 2000; Graham, Robert. Anarchism: A Documentary History of Libertarian Ideas. Montreal: Black Rose Books, 2005; Meltzer, Albert. Anarchism: Arguments For and Against. San Francisco: AK Press, 2000. Chino Fernandez

Anglo-Japanese treaty

The Anglo-Japanese treaty was signed between Lord Lansdowne (1845–1927), the British foreign secretary, and Hayashi Tadasu (1850–1913), the minister of Japan, on January 30, 1902, in London to create an alliance scheduled to last five years. Its terms gave Japan an equal partnership with a great power of the Western world. The purpose of this first military agreement was stabilization and peace in northeast Asia. On Japan’s side it was to prevent Russian expansionism in northeast Asia, and on Great Britain’s side it protected British interests and its commerce in China. Japan felt vulnerable due to Russian influence in Manchuria and interest in Korea. The Anglo-Japanese treaty allowed Japan to become a more powerful player in world diplomacy and in negotiations with Russia. It allowed Japan to go to war against Russia in February 1904 and to ask for financial support from Great Britain. The Russo-Japanese War (1904–05) astounded the world because of the success of Japan. It ended the menace of Russia and helped Great Britain to play a greater role in Europe. The revision of the Anglo-Japanese treaty was signed on August 12, 1905, between Lansdowne and Hayashi in Lansdowne’s residence. The new terms included an extension of the area covered by the alliance to include India, British recognition of Japan’s right to control Korea, and Japan’s recognition of Great Britain’s right to safeguard her possessions in India. It also provided that in the event of an unprovoked attack on either party, the other would come to the assistance of its ally. The alliance would remain in force for the following 10 years. The new terms showed Japan had increased its status in international society after winning the war over Russia. The third Anglo-Japanese alliance agreement was negotiated in 1911 after Japan’s annexation of Korea.
Important changes concerned the deletion of the articles related to Korea and India and the extension of the alliance for 10 more years. The second revision accommodated Japan’s annexation of Korea but also, at Britain’s request, excluded the United States from the scope of the alliance. The alliance enabled Japan to participate in World War I as a British ally. With World War I beginning in the summer of 1914 and with political changes in China, Anglo-Japanese relations entered a new era. The new situation in the Far East resulted in a closer relationship between the United States and China. With the outbreak of the Russian Revolution and Civil War in 1917, U.S. participation in the war, and later the publication of President Woodrow Wilson’s Fourteen Points on how to end the war, the groundwork was set for new national relations. These new circumstances brought changes in Anglo-Japanese relations after World War I. Great Britain no longer feared Russian expansion in China and had developed a close relationship with the United States. The United States had also started to view Japan as a competitor in East Asia. The problems of China were also affecting international politics. As a result, the United States decided to call a conference whose aim was to prevent expansion in China. At the Washington Conference (1921–22) Anglo-American cooperation in Asia allowed the United States to force Japan to accede to an end of the Anglo-Japanese alliance. The official termination of the alliance took place on August 17, 1923. Further reading: Brown, Kenneth Douglas. Britain and Japan: A Comparative Economic and Social History Since 1900. Manchester, UK: Manchester University Press, 1998; Nish, Ian Hill. The Anglo-Japanese Alliance: The Diplomacy of the Two Empires, 1894–1907. London: Athlone Press, 1985; Nish, Ian Hill, and Yoichi Kibata, eds. The History of Anglo-Japanese Relations. Vol. 1, The Political-Diplomatic Dimension, 1600–1930. New York: St.
Martin’s Press, 2000; O’Brien, Phillips Payson, ed. The Anglo-Japanese Alliance, 1902–1922. New York: Routledge Curzon, 2004; Samson, Gerald. “British Policy in the Far East.” Foreign Affairs (April 1940). Nathalie Cavasin

anti-Communist encirclement campaigns in China (1930–1934)

In 1923 Sun Yat-sen (d. 1925), leader of the Kuomintang (KMT), or Chinese Nationalist Party, then out of power, made an agreement with Adolf Joffe, Soviet representative in China. It became the basis of an entente between the KMT and the Russian Communist government whereby Russia sent advisers to help Sun and the KMT and allowed Chinese students to go to Russia to study. It also allowed members of the newly formed Chinese Communist Party (CCP) to join the KMT. This entente ushered in what became known as the first united front. In 1926 the KMT launched a campaign called the Northern Expedition, commanded by Sun’s lieutenant Chiang Kai-shek, to oust the warlords and unite China. Its spectacular success led to a power struggle between the Soviet-supported CCP and the anti-CCP faction of the KMT, led by Chiang. Chiang won the showdown, expelling the Soviet advisers, purging the CCP, and then defeating most of the remaining warlords. Between 1928 and 1937 the KMT ruled from China’s new capital, Nanjing (Nanking), under an unstable coalition led by Chiang. Remnant CCP members fled to the mountains in Jiangxi (Kiangsi) province, where they established the Chinese Soviet Republic with its capital at Ruijin (Juichin). Chiang’s new government was too preoccupied with dissident KMT leaders to worry about the CCP between 1928 and 1930, which allowed the CCP to expand to parts of Hunan, Hubei (Hupei), Anhui (Anhwei), and Fujian (Fukien) provinces and increase its army to 120,000 men plus paramilitary units. Between 1930 and 1934 the Nationalist government launched five encirclement and extermination campaigns against the Communists (First Campaign, from fall 1930 to April 1931; Second Campaign, from February to May 1931; Third Campaign, from July to September 1931; Fourth Campaign, from January to April 1933; and Fifth Campaign, from October 1933 to October 1934).
The first four campaigns failed because they were commanded by generals of varying ability and loyalty, because the government simultaneously had to deal with more serious revolts by dissident KMT generals, and because of Japan’s attack on Manchuria and Shanghai (1931–32). Meanwhile, Chiang consolidated his leadership and improved the central government’s army with the help of German military advisers. He personally led the 700,000-strong army in the Fifth Campaign and adopted new strategies that were “70 percent political, 30 percent military.” Militarily, he emphasized good training and improved morale for his officers and soldiers. As they advanced, his men constructed forts and blockhouses that blockaded the Communist-ruled areas. The political aspect of his strategies stressed economic reform, rural reconstruction, and neighborhood organization for security. These measures eliminated many of the abuses that had allowed the Communists to win the loyalty of the people of the contested region. The combination of military success and economic blockade effectively strangled the Communist-controlled land, reducing it to six counties by September 1934. On October 2 the central Chinese Soviet government headed by Mao Zedong (Mao Tse-tung) and its main army under Zhu De (Chu Teh) decided to abandon Ruijin. They broke through the western sector of the blockade, where a general not loyal to Chiang had not completed building the blockhouses. Thus began the Long March.

Further reading: Eastman, Lloyd, ed. The Nationalist Era in China, 1927–1949. Cambridge: Cambridge University Press, 1991; Huang, Philip C. C. Chinese Communists and Rural Society, 1927–1934. Berkeley: University of California Press, 1978; Liu, F. F. A Military History of Modern China, 1924–1949. Princeton, NJ: Princeton University Press, 1956.

Jiu-Hwa Lo Upshur

appeasement eraEdit

In October 1925 British, French, Belgian, and Italian representatives met in Locarno, Switzerland, to settle postwar territory claims in eastern Europe and normalize diplomatic relations with Weimar Germany. Germany also sought to establish guarantees protecting its western borders as established by the Treaty of Versailles that ended World War I. Under the Locarno Pact, Germany, France, and Belgium agreed not to attack each other, while Great Britain and Italy signed as guarantors to the agreement. As such, all parties pledged assistance if Germany, France, or Belgium took any aggressive action against any of them. Additionally, Germany agreed with France, Belgium, Poland, and Czechoslovakia to handle any disputes diplomatically through the League of Nations, while France guaranteed mutual aid to Poland and Czechoslovakia in the event of a German attack. Under the terms of the Treaty of Versailles, Germany was forced to disarm, lost all territorial gains, and had to pay reparations as part of the acceptance of guilt in starting the war. Germans resented the treaty, considering it far too harsh and demeaning. Many blamed the treaty for compromising Germany’s economy, so much so that by 1923 the Weimar Republic could not make the required reparation payments. The situation worsened when the Great Depression hit in the 1930s, heightening the already-bleak socioeconomic pressures of the country. As a result, Germans faced a complete disintegration of their society, as a majority of citizens became disillusioned about the future of the country. Upon his ascension to the chancellorship in January 1933, Adolf Hitler sought changes to the treaty that would allow for German lebensraum (living space). With that in mind, Hitler formally repudiated the Treaty of Versailles in March 1935, using it as both scapegoat and propaganda for the ills of the nation. He set about restructuring the economy and, more importantly, rearming the military in violation of the treaty. 
Industrial production and civic improvements were expanded, the results of which were both positive and negative: The unemployment rate fell with continued arms production and construction projects, while inflation increased due to currency manipulation and deficit spending. The German military (Wehrmacht) reintroduced conscription, which helped to lower the unemployment rate further, and reorganized to include a new navy, the Kriegsmarine, and an air force, the Luftwaffe—both of which were severe violations of Versailles. Hitler made the argument that rearmament was a necessity for Germany’s continued security. At the time, European leaders felt such allowances simply corrected certain wrongs that bitter victors had set in the aftermath of a brutal world war; thus, Germany faced no repercussions other than formal protests. When France and the Soviet Union signed a treaty of alliance in 1936, Hitler’s aims became even more significant. In response to the Franco-Soviet treaty, Hitler pressed for the stationing of German troops in the Rhineland. In accordance with the Treaty of Versailles, the entire Rhineland area was demilitarized to serve as a buffer between Germany and France, Belgium, and Luxembourg. By 1930 Allied forces had completely withdrawn under the terms of the treaty, which also prohibited German forces from entering the area. Further, the Allies could reoccupy the territory if it was unilaterally determined that Germany had violated the treaty in any way. France was not prepared militarily to dispute any claim over the territory without British aid. Great Britain could not provide such support. As a result, both countries had no choice but to allow Germany to retake the region. Thus, a policy of appeasement toward Germany was officially born under British prime minister Stanley Baldwin (1935–37), though it had already begun under his predecessor, Ramsay MacDonald (1929–35).
Guided by the growing pacifist movement, both MacDonald and Baldwin realized that national consensus did not favor military action. In spite of pressure from outspoken critics like Winston Churchill, who recognized the dangers of German rearmament, both were determined to keep the country out of war. Hitler’s ambitions grew greater. When civil war broke out in Spain in 1936, Baldwin, unwilling to assist the Republican government, initiated a pact of nonintervention with 27 countries, including Germany and Italy. Despite being signatories, Hitler and Italy’s Benito Mussolini, in violation of the agreement, sent weapons and troops to support General Francisco Franco and his nationalist forces. By December both countries were fully involved in the Spanish conflict, having agreed two months earlier to an alliance, known as the Axis, to solidify their positions in Europe. Using the war as a test for its armed forces and methods, particularly the Luftwaffe and blitzkrieg tactics, Germany demonstrated how far its remilitarization efforts had advanced. On April 26, 1937, the town of Guernica came to symbolize and foreshadow those advances. German and Italian forces in a joint operation began a bombing campaign against the town. The attack happened so swiftly that it appeared as one continuous assault, with no other intent than the complete destruction of the civilian population. Several thousand refugees had come to the town in the wake of the war; by all estimates the number of dead stood near 1,700, consisting mainly of women, children, and the elderly, with over two-thirds of the town in ruins.

ANSCHLUSS

As the Axis powers continued to lend support in Spain, Hitler forced his native Austria to unify politically (Anschluss) with Germany in March 1938. Despite the Treaty of Versailles’s prohibition of union between Germany and Austria, again the Allies’ response to the Anschluss went no further than formal diplomatic protests.
A month earlier, on February 12, Austrian chancellor Kurt Schuschnigg had met with the führer in Berchtesgaden, Bavaria. Hitler had demanded that the ban on the Austrian Nazi Party be lifted and that its members be allowed to participate in the government, or Austria would face military retaliation from Germany. With little choice, Schuschnigg complied with the demands by appointing two Nazis to his cabinet, Arthur Seyss-Inquart and Edmund Glaise-Horstenau. He also announced a referendum to decide independence or union with Germany—a stall tactic aimed at preserving Austrian autonomy. However, the gradual usurpation of authority by Schuschnigg’s newly appointed ministers and pressure from Germany—in the form of an ultimatum from Hitler that threatened a full invasion—forced Schuschnigg to hand power over to Seyss-Inquart and the Austrian Nazi Party. When Hitler further threatened invasion, Austrian president Wilhelm Miklas reluctantly acquiesced. On March 12 the German Wehrmacht 8th Army entered Vienna to enforce the Anschluss, facing no resistance from the Austrians. Many Austrians gave their support to the Anschluss with relief that they had avoided a potentially brutal conflict with Germany. Others fled the country in fear of the Nazi seizure of power.

Austria was only the beginning. When Neville Chamberlain became prime minister of Great Britain in May 1937 he adhered to the policy of appeasement that his two predecessors had cultivated. He believed that continued consent to changes to the Treaty of Versailles could prevent another war with Germany. To that end, Chamberlain, France’s Édouard Daladier, and Italy’s Benito Mussolini met with Hitler in Munich, Germany, in September 1938 to settle a dispute over the German-speaking Sudetenland, which both Czechoslovakia and Germany claimed.
Hitler claimed that the Czech government was mistreating Sudeten Germans in Czechoslovakia, despite the lack of evidence of such treatment and adamant denials from government officials; the same argument was made for German minorities living in Hungary and Poland. Exploiting ethnic tensions as a pretext to gain a foothold in eastern Europe, Germany demanded the incorporation of the region into Nazi Germany. The Allies urged the Czech government to comply. In what is known as the Munich Pact, the parties agreed on September 29, 1938, without Czech representation, to the transfer of the Sudetenland to German control. Terms of the agreement included the allowance of German settlements in the region, with Germany exacting no further claims on Czech lands. Triumphant that the situation had been resolved and war resoundingly avoided, Chamberlain and Daladier returned to England and France, declaring that the peace had been preserved. Feeling abandoned by its allies, particularly France, Czechoslovakia had no choice but to capitulate to Hitler. As German troops moved into the newly acquired territory, the Czech population fled to central Czechoslovakia. Six months later Germany violated the Munich agreement by invading Czechoslovakia itself. Despite Czechoslovakia’s alliances with France and the Soviet Union, neither came to its aid. Hitler’s main motivation for the invasion was the seizure of Czech industrial facilities. Moreover, Hitler’s intention to invade Poland following the breakdown of negotiations over territorial concessions made it necessary for him to eliminate Czechoslovakia first. Accordingly, on March 15, 1939, German forces entered the Czech capital of Prague, proclaiming the regions of Bohemia and Moravia as German protectorates. Chamberlain and the Allied nations now faced a major international impasse. They had granted concessions to Hitler, with no repercussions when Germany violated the agreements.
If Hitler were to continue that course of action, the Allies would find themselves in a difficult position in regard to other international commitments. In particular, both Great Britain and France had pledged aid to Poland were Germany to invade it. The scenario became a reality when Germany invaded Poland on September 1, 1939. In a final attempt to avert war Great Britain and France lodged formal warnings and diplomatic protests against the invasion, to no avail. As a result, notwithstanding the Soviet-German agreement, both countries were forced to declare war on Germany.

See also World War II.

Further reading: Churchill, Winston S. The Second World War: The Gathering Storm. Boston: Houghton Mifflin Company, 1948; Clements, Peter. “The Making of Enemies: Deteriorating Relationships Between Britain and Germany, 1933–1939.” History Review. March 2000; Kennedy, John F. Why England Slept. New York: Wilfred Funk, Inc., 1940; McDonough, Frank. Neville Chamberlain, Appeasement, and the Road to War. New York: Manchester University Press, 1998; Shirer, William L. The Rise and Fall of the Third Reich. New York: Simon and Schuster, 1960.

Steve Sagarra

Arab-Israeli War (1948)Edit

After World War II Great Britain was no longer able economically, politically, or militarily to control Palestine. The Labour government was elected to power in 1945, and the new foreign minister, Ernest Bevin, attempted to placate mounting Arab opposition to a Jewish state by enforcing limitations on Jewish immigration into Palestine. Even during World War II some Revisionist Zionist groups had begun attacking British officials and forces in attempts to force the British to vacate Palestine. The Irgun, led by Menachem Begin, and LEHI (Stern Gang) both attempted to kill Sir Harold MacMichael, the British high commissioner in Palestine, and in 1944 LEHI killed Lord Moyne, the British minister of state for the Middle East. In 1946 the Irgun bombed the King David Hotel, the British headquarters in Jerusalem, killing over 90 people. The British branded the Irgun a terrorist organization and arrested many of its members. The Irgun retaliated by kidnapping British soldiers; British arms depots were also raided. Although the United States was reluctant to ease its own immigration quotas, it pressured Britain to allow increased Jewish immigration into Palestine. In the aftermath of the Holocaust, the forced return or imprisonment on Cyprus of illegal Jewish immigrants fleeing Europe was an untenable moral and political position. From the Zionist perspective there was no such thing as an “illegal” Jewish immigrant into Palestine, and numerous means of circumventing or evading British border controls were devised to allow the landing of new Jewish immigrants. Some Zionists, including Chaim Weizmann, recognized the potential problem posed by the displacement of Palestinians; Weizmann nonetheless argued that the Jewish need was greater. David Ben-Gurion and others in Palestine continued to claim all of Palestine for the Jewish state. Following the war, the United States issued several public statements favoring the creation of a Jewish state.
In the face of its domestic weakness and reliance on U.S. economic assistance, the British government in 1947 announced that it was turning over the entire problem of Palestine to the newly formed United Nations. The UN then created the UN Special Committee on Palestine (UNSCOP), composed of 11 member states, to investigate the situation and to make recommendations as to what should be done regarding the mounting conflict between Zionist demands for a Jewish state and Palestinian demands for an independent Arab state in Palestine. In 1947 UNSCOP traveled to Palestine, where it was well received by the Zionists and boycotted by the Arab Higher Command of Palestine under the mufti Hajj Amin al-Husseini, an implacable opponent of a Jewish state. From the Palestinian point of view, any Jewish state would result in a loss of territory that was considered part of the Palestinian national homeland. However, by boycotting the negotiations, the Palestinians lost an opportunity to present their side to the general Western public and politicians. UNSCOP submitted minority and majority reports; the minority recommended a binational state, and the majority recommended partition. The proposed partition plan allotted approximately 50 percent of the land for the Jewish state and 50 percent for an Arab state, with Jerusalem and a large area around the city to be under international control. The projected Jewish state included most of the north and coastal areas with the better agricultural land and sea access as well as the Negev desert in the south. Jaffa, totally surrounded by the proposed Jewish state, was to be an Arab port. Although the plan did not include all the territory the Zionists had claimed, Ben-Gurion and the majority Labor Party reluctantly accepted the UN partition scheme and launched an all-out effort to make an independent Jewish state a reality and to obtain recognition from the international community.
At the time there were 1.26 million Palestinian Arabs, or two-thirds of the total population, and 608,000 Jews, or one-third of the population, in Palestine, and Arabs still owned over 80 percent of the total land within Palestine. Consequently, the Palestinians and other Arab states rejected the plan. At the pan-Arab conference in Bludan, Syria, in 1937, the Arabs had already unanimously rejected any partition of Palestine, so the rejection in 1947 came as no surprise to either side. The United States lobbied several nations that were poised to abstain or vote against partition: Members of the UN narrowly voted in favor of the partition plan in November 1947. Violence immediately broke out in Palestine and elsewhere in the Arab world, and in waves of anti-Semitism Jewish quarters and businesses in Cairo, Baghdad, and elsewhere were attacked. The mufti called for a three-day strike in Palestine, during which violence between the two communities escalated. The British withdrew from Palestine in May 1948, and war immediately broke out. By the time of the British withdrawal the Haganah effectively controlled the area allotted to the Jewish state by the partition plan. On May 14, 1948, Ben-Gurion proclaimed the establishment of the independent state of Israel amid widespread rejoicing among Jewish communities. Ben-Gurion became the first Israeli prime minister in a coalition government dominated by the Labor Party, and the Haganah became the Israeli Defense Force (IDF). The new state was immediately recognized by both the United States and the Soviet Union; however, the celebrations were tempered by the certainty of impending war with the surrounding Arab states and the Palestinians. Israeli forces were well organized and trained with a unified chain of command and a plan for securing all the territory allotted to the new state.
With the IDF, the Palmach, or shock troops, the police, and the Irgun and Stern Gang, Israeli forces numbered about 60,000 in addition to 40,000 reservists. The Irgun and Stern Gang were not incorporated in the IDF but on some occasions coordinated efforts with it. Arab forces also numbered about 40,000 and included the Arab Liberation Army, volunteer forces led by Fawzi al-Kawakji, and the Jordanian Arab Legion, commanded by a British officer, Glubb Pasha. The legion was the best trained of the Arab forces. Abd al-Kader al-Husseini commanded Palestinians in Jerusalem; Iraqi and Syrian soldiers also fought in the war. The Arab League supported the Palestinian cause but refused to provide money to the mufti or to recognize the establishment of a Palestinian state in exile. The Palestinian population remained demoralized from their earlier defeat by the British in the Arab Revolt of 1936–39 and had no real unified political or military leadership. Arab armies also suffered from inferior armaments and corrupt leadership, and they had not coordinated their efforts or devised an effective plan for military victory.

PALESTINIAN REFUGEES

By the time the war broke out massive numbers of Palestinians had already become refugees in neighboring Arab countries. Some upper- and middle-class Palestinians had left for jobs and businesses in other Arab countries during the mandate period, and the peasantry, by far the majority of the Palestinian population, was frightened by the mounting violence and impending war. The causes for the mass exodus remain highly controversial, with both sides blaming the other for the refugee problem. Some Palestinians undoubtedly left what was soon to become a war zone in the belief that they would return home after the war was over and the Arabs had been victorious. Attacks by Israeli forces, especially the Irgun, also terrorized the peasants and incited many to flee.
In the spring of 1948 the Irgun and LEHI attacked Deir Yasin, a peaceful village near Jerusalem, killing over 200 Palestinian civilians. The massacre at Deir Yasin spread terror among Palestinian peasants, who feared the same fate might befall their villages. Palestinians left Haifa and the northern area of Tiberias; those from northern Palestine fled into Syria and Lebanon, those in the central area went to the West Bank and across the Jordan River into Jordan, and those in the south crowded into the Gaza Strip along the Mediterranean Sea. By the end of April over 150,000 Palestinians had left, and by May the numbers reached 300,000. The 1948 war is known as the war of independence in Israel and called al-Nakba, or disaster, by the Palestinians.

Military engagements in the war fell into three parts. In the first part, lasting from May to June, Egyptian forces crossed into the Negev in the south on May 15, and the Iraqis subsequently marched through Jordan into Palestine and Israel and at one juncture were within 10 miles of the Mediterranean. According to an earlier secret agreement between the Zionists and King Abdullah of Jordan, Jordanian troops would not move into areas allotted to the Jewish state, in return for which Abdullah was to secure the West Bank. The agreement held during the war, but since there had been no agreement regarding Jerusalem, Jordanian and Israeli forces fought over the city, and the Jews were forced to surrender the Jewish quarter in the old part of the walled city. The Syrians were halted in the north, and there was no Lebanese resistance. The UN sent Count Folke Bernadotte of Sweden, a leading figure in the International Red Cross, to mediate; Bernadotte secured a truce in mid-June that lasted for four weeks, during which time the Israelis secured arms from Czechoslovakia and elsewhere. Great Britain suspended the supply of arms to Iraq, Transjordan, and Egypt.
The truce ended in July, followed by 10 days of fierce fighting during which time the Israeli victory became apparent. Israeli forces took all of northern Palestine and restored communication between Jerusalem and Tel Aviv. A second truce was negotiated in July, when al-Kawakji’s forces had been decisively defeated and Israel held all Galilee; however, the eastern part of Jerusalem, including the Old City, remained under Jordanian control. In the negotiations Bernadotte had angered both sides, and there was fear among Israelis that his final report, due in September, would be favorable to the Arabs. His report supported the partition plan but with the right of Palestinian repatriation; he also recommended that the Negev go to the Arabs, that Galilee be Jewish, the creation of a UN boundary patrol, and that Haifa be a free port. Jerusalem was to remain under UN auspices. The Stern Gang assassinated Bernadotte in September, and the report was never implemented. The U.S. diplomat Ralph Bunche was appointed the new mediator.

In October the Israelis attacked the Egyptian forces in the Negev. A small group of Egyptian soldiers including a young officer, Gamal Abdul Nasser, held out for several months at Falluja but, lacking reinforcements or relief from Egypt, were ultimately forced to surrender. Nasser blamed the corrupt regime of King Faruk for the loss and would lead a successful revolution against the monarchy in 1952. In December Israel moved further into the Negev and northern Sinai but reluctantly withdrew from the Gaza Strip, which was administered by the Egyptian military. The 1948 war resulted in the partition of Jerusalem, with west Jerusalem held by Israel and east Jerusalem by Jordan. Through military victories Israel had increased its territory by about one-third more than the original partition plan had called for. As far as Israel was concerned, the gains were nonnegotiable, and the land was immediately incorporated into the new state.
The mufti attempted to establish a Palestinian state based in Gaza, but he was thwarted by King Abdullah. In December Abdullah announced the unification of the West Bank and east Jerusalem with Jordan; Abdullah’s claim as sovereign of Palestine was supported by handpicked notables, and the Palestinians remained without a state of their own. Peace negotiations were held at Rhodes in early 1949. Because the Arabs refused to recognize Israel, Bunche had to shuttle back and forth between the Arab and Israeli delegations, and the negotiations became known as the Proximity Talks. An armistice was reached with Egypt in February 1949, Lebanon in March, Jordan in April with clauses for the withdrawal of Iraqi forces from Jordanian territory, and Syria in July. No formal armistice was ever reached with Iraq.

SETTING THE STAGE

The losses in the 1948 war left the Arabs humiliated and unforgiving and set the stage for future political upheavals through much of the region. Attempts by the UN to secure a full peace failed; although full-scale fighting ceased, technically the Arabs and Israel remained at war. Nor was the Palestinian refugee issue resolved. Fearing the creation of a possible fifth column within its new borders and a possible Arab majority in the new Jewish state, Israel refused to permit the return of most of the refugees and blamed the Arab governments for having created the problem in the first place. The Arabs blamed Israel. The Palestinians were determined to return to their homes in the future and refused resettlement elsewhere. Arab states were also ill equipped to deal with the influx of refugees; some Arab regimes also used the refugees as pawns in their own struggles with Israel. Only Syria volunteered to discuss granting citizenship to the refugees. Ben-Gurion refused to negotiate unless his preconceived terms were met, and the offer was dropped.
By 1949 there were about 800,000 Palestinian refugees, and the United Nations established an agency that became UNRWA (the UN Relief and Works Agency) to provide minimal assistance of about 16 cents per day for them. As the conflict continued and as successive generations were born in the camps, the number of refugees grew. The issues of repatriation, reparations, or compensation for land and businesses lost remained unresolved into the 21st century. The new Israeli government set about incorporating its territorial gains and assimilated over half a million new Jewish immigrants, many of whom came from Arab states, especially Iraq and Yemen. No peace settlement was reached between the Arabs and Israel, and the conflict continued to fester until full-scale war broke out again in 1956.

See also Hashemite monarchy in Jordan; Zionism.

Further reading: Allon, Yigal. Shield of David: The Story of Israel’s Armed Forces. London: Weidenfeld and Nicolson, 1970; Begin, Menachem. The Revolt: The Story of the Irgun. Rev. ed. New York: Nash Publishing, 1977; Shlaim, Avi. Collusion across the Jordan: King Abdullah, the Zionist Movement, and the Partition of Palestine. New York: Columbia University Press, 1988; Tessler, Mark. A History of the Israeli-Palestinian Conflict. Bloomington: Indiana University Press, 1994.

Janice J. Terry

Arab nationalismEdit

Arab nationalism emerged in the 19th century as the ruling Ottoman Empire continued its long decline. Arabs, who constituted the single largest ethnic group in the empire, were particularly resistant to the program adopted by the ruling Committee of Union and Progress stressing Turkish history, language, and ethnicity after 1908. Arabs were especially opposed to the teaching of the Turkish language as the first language in schools. Both Arab and Turkish nationalists such as the Young Turks grappled with the questions of what to do about the Ottoman Empire and whether separation along nationalist lines or decentralization was preferable. Prior to World War I, when many still hoped that the Ottoman Empire might be reformed, a number of Arab intellectuals and activists formed clubs and published essays dealing with the problems of the empire and offering possible solutions. In 1905 Negib Azoury (d. 1916), a French-educated Syrian Christian, published Le Réveil de la Nation Arabe. Azoury separated religion from government and openly demanded Arab independence from the Ottomans. He envisioned one Arab nation with the full equality of Muslims and Christians; however, Azoury did not include Egypt or North Africa in the projected Arab state. Amin al-Rihani and others emphasized Arabism over either Christianity or Islam. A number of small nationalist clubs and political organizations were also established. Al-Qahtaniyya, formed in 1909, was made up of Arab officers in the Ottoman army who discussed the issues of ethnic and national identity. Many of the same officers joined Al-Ahd (the Pact), led by the Egyptian major Aziz Ali al-Misri. Misri was anti-Turkish and aimed for full Arab independence. In 1911 Al-Fatat (the Youth) had several hundred Christian and Muslim members who called for the decentralization of the empire under some sort of dual monarchy along the lines of the Austro-Hungarian Empire.
An Arab congress met in Paris in 1913 and recommended the decentralization of the Ottoman government and that Arabic be the official language in Arab provinces. All of these groups aimed for the creation of a secular, democratic state. When the Ottomans joined the Central Powers in World War I and declared jihad, or holy war, in the fight against the Allies, most Arab Muslims rejected the call, arguing that both sides of the European conflict were predominantly Christian and that it made no sense to fight on religious grounds. Sherif Husayn of the Hashemite family used the war as an opportunity to gain what he believed to be British support for an independent Arab state after the war in the Sherif Husayn–McMahon correspondence. Sherif Husayn’s son Faysal met with Arab nationalists in Syria to secure their backing for his father’s efforts. Misri and other Arab nationalists supported the Hashemites and in the Damascus Protocol of July 1915 agreed to Anglo-Arab cooperation in the war. Consequently, the Arabs raised the standard of revolt in June 1916 and fought with the British against the Ottomans and Germany for the duration of the war. Misri and another Arab Ottoman officer of Iraqi origin, Jafar Pasha al-Askari, were among the most notable soldiers to join the fight against the Ottomans. In 1916 Ottoman Turkish soldiers commanded by Ahmed Jemal Pasha publicly hanged several known Arab nationalists in downtown Beirut. However, during the war the British made two other conflicting agreements, the Sykes-Picot Agreement and the Balfour Declaration, regarding the future of the Arab world. After the war the Arabs did not receive national independence. The Arab provinces of the old Ottoman Empire, including present-day Iraq, Syria, Lebanon, Jordan, Palestine, and Israel—none of which existed as independent states at the time—were divided up between the British and the French.
Egypt, the Sudan, and North Africa also remained under French, British, or Italian control. When the Arabs failed to achieve self-determination, one Arab nationalist reputedly remarked, “Independence is never given, it is always taken.” In Syria representatives had gathered at the General Syrian Congress in 1919, and in the spring of 1920 they declared Syria’s independence as a constitutional monarchy under Emir Faysal. To enforce their mandate over Lebanon and Syria, French forces attacked the fledgling Syrian army, defeating it at Maysalun Pass, near Damascus. Faysal was forced into exile but was subsequently made king of Iraq by the British. During the interwar years Arab nationalist parties from Morocco to Iraq adopted a wide variety of tactics including economic boycotts, strikes, demonstrations, and negotiations in the struggle against imperial control. When all of these failed, some turned to more violent methods, joining armed paramilitary groups. There were also periodic and often spontaneous revolts and insurrections against the European occupiers from Egypt, to Iraq, to Syria. The Syrian revolt in 1925 was a major grassroots uprising against the French occupation. The revolt failed, and the French retained control of the Syrian mandate. Although the British granted facades of independence to Iraq, Transjordan (later Jordan), and Egypt, most of the other Arab territory remained under direct or indirect Western control until after World War II.

Sati al-Husri, a Syrian, was one of the foremost theoreticians of pan-Arabism. An Ottoman official prior to World War I, Husri supported Sherif Husayn and his son Faysal in the Arab revolt against the Ottoman Turks. In the 1940s Husri was responsible for the Iraqi educational curriculum that emphasized Arab history and culture.
A prolific writer, Husri argued that the Arabs were a single people, including Egyptians and Maghrabis (North Africans), and that their common identity was based on a common language and history. His books included In Defence of Arabism. Husri and other Arab writers recognized the importance of Islam for Christian as well as for Muslim Arabs in their history and culture but foresaw the creation of one unified secular democratic Arab state. After World War II Husri became director general of cultural affairs of the League of Arab States, where he continued to champion pan-Arabism. With the encouragement of the British, the first Arab conference was held in Alexandria, Egypt, in 1944; it resulted in the formation of the League of Arab States, ratified in 1945. The league was headquartered in Cairo, and Egypt often dominated the organization. Member states were usually represented by their foreign ministers at meetings. Abd al-Rahman Azzam, an Egyptian who had fought in the nationalist Libyan war from 1911 to 1912, became the first secretary-general of the league and remained in that position until 1952. Azzam was a tireless champion of the league and of a pan-Arabism that would be all inclusive. As Arab states became independent in the postwar era, all joined the league. The league supported the Palestinian cause and, as part of the struggle against Israel after the Arab losses in the 1948 war, implemented an Arab boycott of Israeli goods. The boycott was administered from Damascus, but individual Arab governments enforced it in a haphazard fashion; it had minimal impact. In 1950 league members signed a Joint Defence and Economic Cooperation Treaty as a cooperative effort to protect members against Israel. Pan-Arabism reached its apogee during the Nasserist era in the 1950s and 1960s, when there were numerous efforts to unify the separate Arab states. See also French mandate in Syria and Lebanon; Hashemite dynasty in Iraq. 
Further reading: al-Askari, Jafar Pasha. A Soldier’s Story: From Ottoman Rule to Independent Iraq. London: Arabian Publishing, 2003; Coury, Ralph M. The Making of an Egyptian Arab Nationalist: The Early Years of Azzam Pasha, 1893–1936. Reading, UK: Ithaca Press, 1998; Malek, Anouar Abdel, ed. Contemporary Arab Political Thought. London: Zed Press, 1970; Provence, Michael. The Great Syrian Revolt and the Rise of Arab Nationalism. Austin: University of Texas Press, 2005.

Janice J. Terry

Armenians in the Ottoman Empire

After the Ottoman sultan Mehmet II captured Constantinople on May 29, 1453, a new policy regarding minorities was initiated. The Ottomans organized each non-Muslim religious minority, mainly Christians and Jews, into a separate national administration, called a millet (pl. milletler). The head of each millet was its highest religious authority residing in the Ottoman Empire. For Christians there were at first three milletler: one for the group of Byzantine (Greek) Orthodox, one for the Armenian Orthodox, and one for the Assyrian Church of the East. By the time of the fall of the Ottoman Empire there were no fewer than eight Christian milletler. The ideology behind this principle of organization was a liminal concept of “clean” versus “defiled.” Expressed in sociological terms, the “clean” Muslim Ottoman Turks did not wish to come into contact with “unclean” Christians. Furthermore, by substituting the Christian idea of “church” with the Islamic idea of an ethnic and religious nation in which the Armenian clergy were also civil and judicial administrators of the Armenian people, the Ottomans sought to destroy the spiritual power of the churches by forcing the bishops and other clergy to become embroiled in secular administration. In the Ottoman system, the civil head of each Christian minority millet was a patriarch. The duty of the patriarch was to administer the internal civil as well as ecclesiastical affairs of his millet. The patriarch’s chief responsibility was the collection of taxes on behalf of the Ottoman government, and the patriarch was the sole representative of his nation to the sultan. The patriarch also was responsible for education, hospitals, family law, and permission to travel within the Ottoman Empire. The millet system offered some advantages for the minority groups themselves. It was illegal to convert Armenians to Islam, although this took place with significant frequency when it behooved the Ottoman government. 
Armenians were also nominally protected from intermarriage, and thus the homogeneity of each millet was largely preserved. For other minorities who were Muslim, principally the Kurds, their fate was worse: As Muslims they were not accorded a distinct national identity.

NATIONAL SELF-CONCEPT

For Armenians the church was the foundation of their national self-concept. Most Armenians were ignorant theologically. While many, especially in the rural areas of eastern Anatolia, were not formally religious, they were strongly pious. The major festivals of the church were celebrated even in the poorest homes. Even the simplest folk understood that the church was fundamental to their national survival, and Armenians supported their church as much as they could. In the last three decades of the 19th century, like many other minorities in the Ottoman Empire, Armenians were faced with a precarious existence. Armenians in eastern Anatolia, who were forbidden to keep firearms, were at the mercy of marauding Kurds and Turks. Although some Armenians loyal to the Ottoman government rose to positions of power in the state, overall they were second-class citizens, faced with corruption both within and outside of their own community and unfairly taxed; despite their industriousness and hard work, they began emigrating to the United States, Canada, South America, and Australia in large numbers. The Russo-Turkish War of 1877–78 marked the beginning of a new and bloody chapter for Armenians in the Ottoman Empire. The wars with Russia brought Armenians in Turkey into close quarters with their brethren in Russia, who enjoyed a much higher standard of living and greater autonomy. As a result the national revival of Armenians advanced much faster in Russian Caucasia than in Turkey. The Great Concert of European powers produced the Treaty of Berlin (July 1878), which blocked Russia’s attempt to force the sultan to improve the lives of Armenians. 
The situation of Armenians in Anatolia became worse in the 1880s, as Kurds and other Muslim minorities attacked Armenians without interference from the Turkish governors. The result was that Armenians formed political organizations to force the Ottomans to deal with these and other problems. By the 1890s Armenian paramilitary organizations emerged with the intention of organizing a defense of Armenians and Armenian interests. The most important of these was the Armenian Revolutionary Federation, which sought greater autonomy for Armenians while ruling out political independence, and the Social Democratic Hnchag (“Clarion”) Party, which sought complete independence for Armenia. In 1894 the matter came to a head when Hnchag leaders sought to stir the international community to action through a planned act of rebellion. The response of the Ottoman government was very much disproportionate to the threat posed by the act: The Kurds and the Turkish military exterminated many villages that did not participate in the rebellion. In the course of 1894–96 in a planned and systematic fashion, Sultan Abdul Hamid II sought to solve the Armenian question through reduction of the number of Armenians through massacres. European powers did not intervene largely out of fear of Russia, and American president Grover Cleveland refused to intervene. The massacres essentially ended the Armenian revolutionary drive for independence and even led to a rejection of revolution by some of its most prominent Armenian supporters. However, after 1904 renewed Armenian guerrilla activity in eastern Anatolia resulted in further punitive massacres similar to those in 1894–96. Further attacks followed in Adana and in Syria in 1909 with the participation of the Young Turks, who had seized power the previous year. The tense situation between Armenian political organizations and the government of the Young Turks continued. 
The problem was compounded by the intervention of Western powers in Turkish governance and their open hostility to the Turkish regime. The start of World War I, which pitted Turkey against many of its former enemies, particularly Russia, resulted in a cataclysm of death for Armenian civilians. The policy of brutally suppressing Armenian cries for safety from murder and pillage under the Ottomans continued. The government of the Ottoman Empire, led by the Young Turks, began a policy of massacre that was concentrated in 1915 but continued until 1922, through the founding years of the new Turkish Republic. Claiming that the Armenians and other Christians were collaborating with the Russian army, the Turks set out to systematically eliminate, or at least to reduce to an insignificant number, the Armenians and other Christians from eastern Anatolia. Along with this violence came the transfer of the wealth of these groups into Kurdish and Turkish hands. Although most of this activity was conducted at the hands of Kurds and prisoners released for the massacres, the Turkish army provided support, and the Turkish government was responsible for sanctioning and in some cases actively planning the removal of Armenians from eastern Anatolia. As many as 1.5 million Armenians, along with hundreds of thousands of Suryani and Assyrian Christians, were killed or died as a result of forced marches southward through the desert or in concentration camps. The Turkish government in the early 21st century vehemently denied that the government of the Young Turks (who also were the founders of the modern Turkish Republic) engaged in a planned and systematic elimination of all Armenians from Anatolia. Instead, the Turkish Republic claimed that most of the casualties were Armenians who fought with the invading Russian army against the Ottomans, and that the number of these battle casualties for Armenians was 600,000. 
Currently, a reassessment of the Turkish participation in the slaughter of the Armenians is occurring among intellectuals and historians in Turkey, and even the government is promoting restoration and cultural expressions of the Armenians and other minorities as it lobbies to join the European Union.

Further reading: Bartov, Omer, and Phyllis Mack, eds. In God’s Name: Genocide and Religion in the Twentieth Century. Studies on War and Genocide 4. Oxford: Berghahn Books, 2001; Mirak, Robert. Torn Between Two Lands: Armenians in America, 1890 to World War I. Harvard Armenian Texts and Studies 7. Cambridge, MA: Harvard University Press, 1983.

Robert R. Phenix, Jr.

art and architecture (1900–1950)

With new styles and the availability of new construction materials, there was a dramatic change in architecture during the first half of the 20th century. Although prefabrication had first been used in London’s Crystal Palace in 1851, it did not become popular until the early 20th century, which saw the rise of functionalism. However, some architects reacted sharply against this, the most well known being perhaps the British architect Edwin Lutyens, who returned to a simplified Georgian classicism with the Viceroy’s House in Delhi, India, and other projects. In Britain Norman Shaw was one of the main domestic architects. The first half of the 20th century saw a massive increase in travel around the world and the publication of heavily illustrated photographic works, art books, and millions of postcards. This led to much use of iconography, with particular cities being identified by specific buildings or structures. Examples include the Empire State Building (1931) in New York, the Harbour Bridge (1932) in Sydney, and the Golden Gate Bridge (1937) in San Francisco. Postcards also became important for artists, whose designs, drawings, and photographs were reproduced and sold around the world, exposing creative people to influences of which previous generations had not known. In terms of art styles, Fauvism, which had flourished in France in the first decade of the 20th century, continued to influence painters, and cubism began to revolutionize the manner in which art and sculpture were produced, driven by artists such as Pablo Picasso, Fernand Léger, and Georges Braque. Expressionism emerged in the 1910s, and Dadaism peaked from 1916 until 1920, introducing an antiwar polemic through the work of Marcel Duchamp, Francis Picabia, and others. From the 1920s surrealism became a cultural movement, reflecting itself in visual artwork. 
In Germany the Bauhaus movement flourished under Walter Gropius during the 1920s and also led to work by Vasily Kandinsky and Josef Albers; the Swiss architect Le Corbusier became famous during the 1920s and 1930s for his introduction of modernism and functionalism; and Buckminster Fuller was celebrated for his geodesic domes. Other notables include Max Ernst, Joan Miró, and Salvador Dalí. The two world wars and several other conflicts also had a dramatic influence on both art and architecture. War artists wanted to record specific events or sought to capture the spirit of an event. At the same time photography emerged as an art form, with Robert Capa’s depiction of the dying republican soldier during the Spanish civil war becoming famous—despite some doubts over whether it had been staged. The film and still photographs showing Adolf Hitler looking at the Eiffel Tower and the soldier flying the Soviet red flag over the Reich Chancellery in Berlin are also famous for what they symbolized. The pile of captured German flags dumped at the foot of Lenin’s mausoleum on June 24, 1945, signified the final destruction of the German war machine in the same way that the haunting photographs and later paintings of the ruins of Hiroshima marked the first use of an atom bomb in war. The massive destruction of many European and Chinese cities during bombing raids and land bombardment destroyed buildings and artwork alike, although a remarkable number of artworks survived, having been moved to safekeeping in time of war. The Basque city of Guernica in northern Spain, bombed in 1937 in what is now seen as a prelude to the World War II bombing raids, led Picasso to produce his famous painting Guernica later that year. In Britain painters such as C. R. W. 
Nevinson (1889–1946) recreated the horror of World War I, as did Paul Nash (1889–1946), while artists in communist countries depicted heroic scenes from battles that became part of their respective countries’ folklore. The main way in which the world wars affected architecture was in terms of the war memorials and war cemeteries that were built. Then there were also the tombs of the unknown soldiers, at the Arc de Triomphe in Paris, Westminster Abbey in London, the Victor Emmanuel Monument in Rome, and in many other capital cities. Although war memorials had been built in previous centuries, the number and the diversity of them after the world wars is important. The building of the Cenotaph in London, the Shrine of Remembrance in Melbourne, the India Gate in New Delhi, the Liberty Memorial in Kansas City, and the National War Memorial in Canada are only the most obvious examples, with small memorials throughout Europe and indeed throughout the world. In Japan the Yasukuni Shrine not only remembers Japan’s war dead but also provokes foreign consternation over the reverence given to the Japanese war criminals also remembered there. It is also impossible not to mention military architecture, with pillboxes and fortifications constructed of such indestructible material that they will outlast ordinary buildings—both in places that were invaded and also as a preventive measure in places that feared attack. The Maginot Line, along the French-German border, was perhaps the most famous defensive structure of the period, with the Pentagon in Washington, D.C., opened in 1943, still the largest-capacity office building in the world. With changes in political arrangements around the world, a number of totally new capitals were constructed, the most well known being Canberra, Australia. In Turkey the move from Constantinople (Istanbul) to Ankara in 1923 represented a major change in Turkish thinking and attitudes to the world. 
While Canberra was built on what had been agricultural land, Ankara was constructed in what had been the city of Angora. In March 1918 Moscow became the capital of Soviet Russia, having last been the Russian capital before Peter the Great moved the court to St. Petersburg in the early 18th century. The period of great turmoil during the 1920s and 1930s also saw a number of countries establish new temporary capitals. Burgos in northern Spain became the nationalist capital during the Spanish civil war, with the inland city of Chungking (modern-day Chongqing) serving as the capital of Nationalist China during the Sino-Japanese War. In France the spa resort of Vichy became the seat of the collaborationist French government from 1940 to 1944. The growth of the urban environment saw a number of suburbs growing up. The British architect and civil planner Sir Ebenezer Howard designed Letchworth Garden City and in the 1920s moved on to found Welwyn Garden City. Political forces of the far right and extreme left also supported designs that reflected their views of the country in question. In Nazi Germany Adolf Hitler’s architect, Albert Speer, designed impressive and grandiose structures that gave rise to the term Albert Speer architecture, describing a building or edifice that makes the onlooker seem small. In the Soviet Union grand architecture and “heroic” paintings were popular, the former impressing observers with the wealth of the country and the latter highlighting important historical scenes. The building of Lenin’s mausoleum in Red Square, Moscow, initially in wood and then in stone, incorporated some of the design of the grave of Cyrus the Great of Persia. The changes in technology during the first half of the 20th century saw the construction of many railway stations around the world, but not on the scale of the edifices built during the late 19th century. The Moscow Metro was opened in 1935 and was part of the attempt to show the Soviet Union as a modern and efficient country. 
The British architect Charles Holden worked extensively on the London Underground. In addition, airports and factories were built, some with impressive art deco buildings, others being functional, with small sheds and huts to cater to air passengers or, in the case of many factories, unimpressive work areas behind the façade. The rise of art deco during the 1920s and 1930s featured not only in architecture but in art, furniture design, and interior decorating. In terms of architecture, the spire of the Chrysler Building in New York (1928–1930), the city hall of Buffalo, New York, and many other civic buildings follow this style. As well as in the United States, art deco was also popular in Italy and its colonies, with Asmara in Italian Eritrea being the best surviving example of an art deco city; the most famous art deco building in Latin America is the Edificio Kavanagh (Kavanagh Building) in Buenos Aires, completed in 1936. The most well-known art deco architects included Albert Anis, who worked at Miami Beach; Ernest Cormier from Quebec, who designed the Supreme Court of Canada; Sir Bannister Fletcher, author of the famous work on architecture; Bruce Goff, whose Boston Avenue Methodist Church in Tulsa is regarded as one of the best examples of art deco in the United States; Raymond Hood, who designed the Tribune Tower in Chicago; Joseph Sunlight; William van Alen, who worked on the Chrysler Building in New York; Wirt C. Rowland from Detroit; and Ralph Walker of Rhode Island. The writer Ayn Rand set her book The Fountainhead (1943), about an idealistic young architect, in the office of the New York architect Ely Jacques Kahn, with some seeing its hero as modeled on Frank Lloyd Wright. In sculpture art deco saw the work of Lee Lawrie, Rene Paul Chambellan, C. Paul Jennewein, Joseph Kiselewski, and Paul Manship; and expressionism, which had first flourished in Germany in the 1900s and early 1920s, led to artwork by the Latvian-born American Mark Rothko, Jackson Pollock, and others. 
The prosperity of the 1910s and 1920s led to the building of many hotels around the world and the enlarging of many others. The Waldorf-Astoria in New York, an art deco building, opened in 1931. In Africa, Treetops in Kenya, and in Asia the Raffles Hotel in Singapore, the E&O Hotel in Penang, and the Strand in Rangoon were all either built during this period or underwent major refurbishment. There were also many holiday resorts emerging from the late 19th-century concept of life in the Tropics with a place to retreat to in the hot summer: Simla in India, Hua Hin in Thailand, the Cameron Highlands in Malaya, Dalat in Vietnam, and Maymyo (Pyin U Lwin) in Burma (Myanmar). This coincided with many civic buildings being constructed: town halls, schools, hospitals, and libraries. The Bund at Shanghai teemed with magnificent stone buildings projecting stability and a feeling of commercial well-being. In time of war some of these structures were actually best able to weather bombing raids, with the Fullerton Building in Singapore being used as a shelter during Japanese bombing raids in early 1942. The new construction techniques led to the building of skyscrapers. One of the earliest was the Flatiron Building in New York City, completed in 1902 and 285 feet tall. In 1913 it was overtaken by the Woolworth Building (792 feet), which in turn was overtaken in 1930 by 40 Wall Street and in 1931 by the Empire State Building, the first building in the world to have more than 100 floors. Further reading: Dube, Wolf-Dieter. Expressionism. London: Thames & Hudson, 1972; Fletcher, Bannister. A History of Architecture on the Comparative Method. London: The Athlone Press, 1961; Jacquet, Pierre. History of Architecture. Lausanne: Leisure Arts, 1966; Lucie-Smith, Edward. Symbolist Art. London: Thames & Hudson, 1972; ———. Lives of the Great 20th-Century Artists. London: Thames & Hudson, 2000; Read, Herbert. 
A Concise History of Modern Painting. London: Thames & Hudson, 1961; Richards, J. M. Who’s Who in Architecture from 1400 to the Present. New York: Holt, Rinehart and Winston, 1977. Justin Corfield

Atatürk, Mustafa Kemal

(1881–1938) Turkish leader and reformer

Mustafa Kemal Atatürk was one of the greatest reformers of the 20th century, and his legacy is present-day Turkey. He built a modern state from the ruins of the Ottoman Empire through massive and progressive domestic reforms. Viewed with godlike status by Turks, he is considered the savior of a country that under his guidance resisted occupation and colonization and embraced democracy and modernization. He was born in 1881 in Salonika (present-day Thessalonica, Greece). His father, Ali Reza, was a low-ranking Ottoman government employee who died when Mustafa was young. His mother, Zubeyde, raised him and his sister, Makbule. Zubeyde was a religious woman and hoped that her son would attend the local religious schools. However, with the help of his uncle he instead attended military school. The military schools, reflecting the Ottoman system, allowed students to rise not according to class status but by ability. Mustafa excelled in his studies. He took the name Kemal, which means perfection. He completed his studies at the War College in Harbiye, Istanbul, in 1905. In Istanbul and elsewhere throughout his postings, Mustafa Kemal was deeply disturbed by the corruption in the Ottoman bureaucracy. He joined several underground organizations that had contacts with exiled Turks in Geneva and Paris. To keep him away from Istanbul, his superior officers, suspicious of Mustafa Kemal, posted him in faraway places such as Damascus and Tripoli, but he was able to remain active in the secret societies, although events unfolding in the Balkans pushed other figures to the forefront. The underground organizations united and formed the Committee of Union and Progress (CUP) and in 1908 started the Young Turk revolution. The subsequent leaders of this movement, Enver Pasha, Talat Pasha, and Cemal Pasha, ruled as a triumvirate and were also suspicious of Mustafa Kemal and preferred to keep him away from the seat of government. 
Mustafa Kemal was critical of the CUP’s lack of ideology and program. The CUP’s only objective in the revolution was to reinstate the 1876 constitution, which had been abolished by the sultan. Mustafa Kemal was also wary of the expansionist and pan-Turkic postrevolution ideology the CUP embraced. Germany cleverly took advantage of the situation and entered into an alliance with the CUP. Mustafa Kemal, although he did not agree with the alliance, gladly learned modern military technology from German military officers who had been sent to train the Ottoman armies.

ALLIED DEFEAT AT GALLIPOLI

The CUP-led Ottoman Empire fared badly in both the Balkan Wars and World War I. The only major victory was at Gallipoli, where Mustafa Kemal soundly defeated the British invasion. In 1915 the British army and navy valiantly fought to open the Dardanelles in a plan created by Winston Churchill. It was essential for the Allies to take Istanbul in order to reopen the Bosphorus Strait. The Allied defeat at Gallipoli prevented that and possibly lengthened the war. Mustafa Kemal was heralded as a hero among the Turks during a war that saw few victories and many defeats for the Ottomans. At the conclusion of the war, the remaining Ottoman territories were divided among the Allied powers. France was given control of southern Turkey (near the Syrian border), Italy was given the Mediterranean region, and Greece was given Thrace and the Aegean coast of Turkey. Istanbul was to be an internationally controlled city (mainly French and British). The Kurds and Armenians were also granted territory under the Treaty of Sèvres. The Turks would have only a small, mountainous territory in central Turkey. Mustafa Kemal was outraged, as were most Turks. Of all the occupying armies, he viewed the Greek army as the most dangerous threat. 
Greek nationalism was at an all-time high, and many wanted to reclaim all of ancestral Greece (which extended well into Asia Minor). This fear was confirmed by the Greek invasion of Smyrna (present-day Izmir) in 1919. In May 1919 Mustafa Kemal secretly traveled to Samsun (on the Black Sea coast) and journeyed to Amasya, where he issued the first resistance proclamation. He then formed a national assembly, where he was elected chairman. Next he organized a resistance army to overthrow foreign occupation and conquest. Under his leadership the Turkish resistance easily drove out the British, French, and Italian troops, who were weary of fighting and did not want another war. The real conflict was with the Greek troops and culminated in horrible atrocities committed by both sides. In September 1922 the Turkish army drove the Greek army into the sea at Izmir as the international community silently observed. In 1923 the Treaty of Lausanne was signed, replacing the Treaty of Sèvres. This treaty set the borders of modern-day Turkey. On October 29, 1923, the Republic of Turkey was proclaimed, with Mustafa Kemal as president and Ismet Inönü as prime minister. Even though the government appeared democratic, Mustafa Kemal had almost absolute power. However, he differed from the rising dictators of the time in several respects. He had no plans or ideology pertaining to expansionism. His primary focus was the modernization and domestic reform of his country. He wanted to make Turkey self-sufficient and independent. He believed that the only way to save his country was to modernize it, by force if necessary. He moved the capital from Istanbul to Ankara, a centrally located city. He then abolished both the sultanate and the caliphate, and his fight against religion became one of his most contested reforms. He believed that Islam’s role in government would prevent the country from modernizing. 
He was not antireligious but was against religious interference in governmental affairs. He closed the religious schools and courts and put religion under state control. He wanted to lessen the religious and ethnic divisions that had been encouraged under the Ottoman system. He wanted the people of Turkey to identify themselves as Turks first. He established political parties and a national assembly based on the parliamentary system. He also implemented the Swiss legal code, which allowed freedom of religion and civil divorce and banned polygamy. Atatürk banned the fez for men and the veil for women and encouraged Western-style dress. He replaced the Muslim calendar with the European calendar and changed the working week to Monday through Friday, leaving Saturday and Sunday as the weekend. He hired expert linguists to transform the Turkish alphabet from Arabic to Latin script based on phonetic sounds, and he introduced the metric system. As surnames did not exist until this time, Mustafa Kemal insisted that each person and family select a surname. He chose Atatürk, which means “father of the Turks.” Some of his most profound reforms, however, were in regard to women. Atatürk argued that no society could be successful while half of the population was hidden away. He encouraged women to wear European clothing and to leave the harems. In 1930 Turkey became one of the first countries to give women the right to vote and hold office. He also adopted several daughters. One of them, Sabiha Gokcen, became the first woman combat pilot in Turkey. These reforms did not come easily and in many cases garnered little support. Many religious and ethnic groups, such as the Sufi dervishes and Kurds, staged rebellions and were ruthlessly put down. Other minority groups suffered or were exiled as a result of the new government. A heavy drinker, Atatürk died of cirrhosis of the liver in November 1938. 
As he had no children he left no heirs and instead bequeathed to his country the democracy that he created, which would survive him to the present day. Although Atatürk forbade many basic concepts of democracy such as free press, trade unions, and freedom of speech, he paved the way for the future addition and implementation of these ideals. Further reading: Lord Kinross (Patrick Balfour). Atatürk: The Rebirth of a Nation. London and New York: William Morrow Company, 1965; Mango, Andrew. Atatürk: The Biography of the Founder of Modern Turkey. New York: The Overlook Press, 1999. Katie Belliel

Aung San

(1915–1947) Burmese nationalist and freedom fighter

Aung San was born on February 13, 1915, at Natmauk in central Burma (Myanmar). Aung was the president of the student union at Rangoon University in 1938. He joined the left-leaning Dobama Asiayone (“We Burmese” Association) and was its general secretary between 1938 and 1940. Aung was also a founding member of Bama-htwet-yat Ghine (Freedom Bloc). At the time of World War II he was very active in the resistance movement against the British. He went to Amoy, China, and met with the Japanese to seek help in forming an army to fight the British. An anti-British unit was formed by the “Thirty Comrades,” who received military training on Hainan Island in Japanese-occupied China. Aung became the commander of the Burma Independence Army (BIA), which was formed on December 26, 1941. Ne Win, the future authoritarian ruler of Burma (1962–88), was one of the comrades. The army was stationed in Bangkok and entered Burma in January 1942 along with the invading Japanese army. The BIA, which had formed a provisional government, became unpopular because of an influx of criminals into the organization. It was replaced by the Burma Defense Army (BDA), with Aung as commander. The BDA, trained by the Japanese, was a conventional army. The name BDA was later changed to Burma National Army (BNA). In the Japanese-sponsored government Aung was minister of war. Aung became disillusioned with the Japanese and discussed with the other resistance leaders their next course of action. The Anti-Fascist Organization came into being in April 1944. Later renamed the Anti-Fascist People’s Freedom League (AFPFL), it was formed with Aung as its president. He openly turned against the Japanese in March 1945 and switched his loyalty to the British, renaming his forces the Patriotic Burmese Forces. The British then formed a new government, and he became its deputy chairman in the executive council, holding the important portfolios of defense and foreign affairs. 
In January 1947 he went to London and negotiated with the British Labour government about granting independence to Burma. The Aung San–Attlee Agreement of January 27, 1947, guaranteed independence within a year. There would be an elected constituent assembly, and until it finalized its work, the country would be governed under the provisions of the Government of India Act of 1935. The British government also would sponsor Burma's admission to the United Nations. On February 12 Aung signed the Panglong Agreement with the leaders of other Burmese nationalist groups, supporting the cause of a united country. Under his guidance the AFPFL won a landslide victory in the April elections to the constituent assembly, securing 196 out of a total of 202 seats. Aung was concerned about his country's future and called a series of meetings in Rangoon (now renamed Yangon) in June 1947. In a speech at a public meeting on July 13 he urged people to remain disciplined. He was assassinated six days later, along with six other councilors, during a meeting of the executive council. Aung San's political rival, U Saw, a former premier, was found guilty of the crime and executed in 1948. On January 4, 1948, Burma became independent from British rule. Aung had become a martyr and a national hero and continued to inspire his people with his dedication and sacrifice. He was criticized by some for his collaboration with the Japanese; others say it was a tactical move to gain independence for his country. He turned against the Japanese at the opportune moment. His wife became a diplomat and later served as ambassador to India. Further reading: Aung San Suu Kyi. Aung San of Burma: A Biographical Portrait. Edinburgh: Kiscadale, 1991; Kin Oung. Who Killed Aung San? Bangkok: White Lotus, 1993; Maung Maung. Aung San of Burma. The Hague: M. Nijhoff, 1962; Naw, Angelene. Aung San and the Struggle for Burmese Independence. Bangkok: Silkworm Books, 2002; Silverstein, Josef, ed.
The Political Legacy of Aung San. Ithaca, NY: Southeast Asia Program, Cornell University, 1993. Patit Paban Mishra

Australia and New Zealand

During the 1880s there were many attempts to establish a "federation" by which the six British colonies of Australia—New South Wales, Queensland, South Australia, Tasmania, Victoria, and Western Australia—would be able to come together under a single government. In 1890 it was finally agreed to call a convention in the following year and draft a federal constitution. Because of the depression of the 1890s, the constitution was not drawn up until 1898, and agreement from all the states was reached only in 1900, when Western Australia held a referendum and agreed to join the Commonwealth of Australia. New Zealand decided not to join with Australia. As a result, on July 14, 1900, the first governor-general of Australia, the representative of the British sovereign, was appointed, and on January 1, 1901, the Commonwealth of Australia was proclaimed in Centennial Park in Sydney, New South Wales. Part of the reason the federation had taken so long to negotiate was the intense rivalry between the states, which had to agree to hand over powers for defense, foreign relations, and foreign trade, and which also had to agree to dismantle tariffs and restrictions on the sale of goods within the commonwealth. There were disagreements over where the new capital was to be, and initially it was in Melbourne. The first opening of the federal parliament took place there on May 9, 1901, with Edmund Barton as the first prime minister. Fittingly, some of the Australian contingents to China, sent in the wake of the Boxer Rebellion, had returned to Sydney a few days before the first parliament was opened. They were rushed down by train to take part in the ceremony. At the time, Australian soldiers, as well as New Zealanders, were also involved in supporting the British in the Boer War. The early soldiers had left as part of state units—after federation, Australian Commonwealth units were dispatched.
After federation it was obvious that Melbourne could not remain Australia's capital, and in 1902 a Capital Sites Enquiry Board started inspecting prospective sites, which had to be within 100 miles of Sydney. Eventually a site was agreed on, and in 1913 Lady Denman, wife of the governor-general, announced "I name the capital of Australia Canberra, with the accent on the Can"—Canberra being the Aboriginal name for the area. The region around it then became the Australian Capital Territory (ACT), designed with a conscious attempt not to repeat the mistakes that had been made in the building of Washington, D.C. The ACT was 100 times larger than the District of Columbia, and all land in it was declared under leasehold to prevent property speculators from taking it over. The U.S. architect Walter Burley Griffin drew up plans for the city after he won first place in a worldwide competition for the appointment. It was not until 1927 that a temporary parliament building was established there. Over the same period, in New Zealand, which was also a self-governing "dominion," Richard "King Dick" Seddon was prime minister of a liberal administration from 1893 until 1906. One of the major issues he faced was the need to encourage the expansion of agriculture by the establishment of more small farms. Both New Zealand and Australia during this period relied heavily on primary industries: farming and mining. Although the Australian economy was diversifying slightly, New Zealand's main products were sheep/lamb/mutton, wool, and butter, most of which were exported to Britain. By 1913 New Zealand had become the largest exporter of dairy products in the world. While the Liberals were in power in New Zealand, the trade union movement was growing in strength in both New Zealand and Australia. In 1899 a state Labour government was formed in Queensland, in northern Australia, and in 1891 the Australian Labour Party was formed.
Seven years later, in 1898, the Trades and Labour Confederation decided to establish a New Zealand Labour Party, although it was not until 1935 that it was able to form a government. In Australia, in contrast, in 1904 Chris Watson formed a minority administration and presided over the first national Labour Party government anywhere in the world, and in 1910 Labour achieved an absolute majority in the Australian parliament. Australia and New Zealand were affected in the early 1910s by a small economic depression. This was followed by the outbreak of World War I, and both countries were keen to support Britain, the "mother country" of many Australians and New Zealanders. Australian and New Zealand soldiers were immediately sent to Egypt, where, as the Australian and New Zealand Army Corps, they became known as Anzacs. In 1915 they were deployed to Gallipoli in a failed attempt to capture the Turkish capital, Constantinople. In Australia and New Zealand this became an important symbolic occasion for both countries, and many still visit Gallipoli each year on April 25. After Gallipoli both Australian and New Zealand soldiers fought in France, with the Australian general Sir John Monash leading his men to victory in November 1918. During the war two attempts to introduce conscription in Australia failed; New Zealand maintained conscription throughout the conflict. At the Versailles Peace Conference after the end of the war, Australia and New Zealand were represented by their respective prime ministers, William Morris "Billy" Hughes and William Ferguson Massey. Both were keen to ensure that the war had achieved something, and Australia was given charge of German New Guinea (which was merged with Papua to form Papua & New Guinea, later Papua New Guinea) and the Solomon Islands, and New Zealand was given Western Samoa. The formation of the League of Nations after the war was treated differently by Australia and New Zealand.
The former decided to play a more active role, but in New Zealand Massey felt that the organization was useless and that New Zealand should rely not on multilateral diplomacy but on the might of the Royal Navy. As a result, in the first 10 years of the League of Nations, New Zealand sent only three delegations to the annual conferences of the International Labour Organisation and did not ratify any of the league's conventions until 1938. This was in spite of New Zealand's election in 1936 to the League Council and a gradual move to support collective security.
THE DEPRESSION
During the 1930s in Australia and New Zealand the worldwide Great Depression brought widespread unemployment, which hit many families very hard. Others, fearing they might become unemployed, stopped spending money, further deflating the economy, and both countries struggled to pay their war debts. Many of those badly hit were former soldiers who had fought in World War I and were now angry about a government that had "let them down." Soup kitchens appeared, beggars were regularly seen in the streets, and children came to school malnourished. Some people turned to extreme political movements, and with the increase in strength of the trade union movement came the formation of pseudo-fascist organizations in Australia—the New Guard—and in New Zealand—the New Zealand Legion. In 1935 a Labour government came to power in New Zealand with Michael Savage as prime minister. When he died in 1940 he was succeeded by Peter Fraser, who remained in office until 1949. In contrast, in Australia for most of the depression Joseph Lyons of the United Australia Party was prime minister, having defeated the Labour Party under James Scullin in 1932. Pointing to the desire of both countries to connect with the wider world, Australian and New Zealand aviators began a series of remarkable pioneer flights.
On September 10–11, 1928, the Australian aviator Charles Kingsford Smith made the first Australia–New Zealand flight. During that trip he met the teenage Jean Batten, who was to become a New Zealand flying legend. She moved to Sydney in the following year to train for a commercial pilot's license. Kingsford Smith was to achieve numerous records for his flying across the Atlantic and Pacific Oceans and the Tasman Sea, as well as his October 1933 solo flight from England to Australia, and Jean Batten was to be the first woman to fly solo from England to Australia and back (1934–35), the first woman to fly the South Atlantic solo, and in 1936 the first person to fly from England to New Zealand. In the arts, Australian painters Hans Heysen, Arthur Streeton, William Dobell, and in the 1940s Sidney Nolan and Russell Drysdale were to gain international prominence, as were New Zealand artists Charles Goldie and Frances Hodgkins. Prominent artistic families, the Lindsays and the Boyds, flourished in Australia. Writers like Frank Clune and Ion Idriess wrote many books describing Australia and Australians—perhaps the most famous book by Idriess was about the quintessential Australian hero Harold Lasseter and the search for gold in central Australia. Other writers such as Miles Franklin, Ernestine Hill, Eleanor Dark, and Henry Handel Richardson dealt with Australia in fiction. Poets such as Dame Mary Gilmore, Banjo Paterson, and Judith Wright are representative of that genre of Australian literature. New Zealand literature is widely known by way of Katherine Mansfield and crime fiction writer Ngaio Marsh. Australian actor Oscar Asche and singer Nellie Melba achieved as much fame overseas as they did in Australia. In the realms of medicine and science, respectively, Australian pathologist Howard Florey and atomic scientist Ernest Rutherford (from Nelson, New Zealand) were to make major contributions to the world.
In Britain, New Zealander Sir Arthur Porritt became surgeon to King George VI, and on the day of the coronation of Queen Elizabeth II in 1953 news was received of the scaling of Mount Everest earlier that day by another New Zealander, Edmund Hillary, the first known ascent of the mountain. In Australia and New Zealand the indigenous populations, the Aboriginals and the Maoris, remained marginalized economically and socially. Gradually, the Maoris in New Zealand began, through their numbers and the fact that they all spoke a common language, to exert some political influence. Maori started to be taught in some schools and by the 21st century was widely taught throughout the country. By contrast, the Aboriginal people in Australia remained geographically on the fringes of cities and towns and were discriminated against in work and housing. Children were taken away from their parents when they were young to be brought up in foster homes or children's homes, where they were alienated from their own culture. They became known as "The Stolen Generation." Although Maoris were always recognized as citizens of New Zealand, it was not until 1962 that Aboriginal Australians gained the right to vote in federal elections. In 1931 the British parliament enacted the Statute of Westminster, by which Britain relinquished powers over the self-governing dominions. However, it was not adopted in Australia until 1942 and was finally adopted in New Zealand in 1947. In 1940 Australia established its own diplomatic posts in foreign countries: in Washington, D.C.; Tokyo; and Ottawa. New Zealand followed in the following year with a minister in Washington, D.C. Representation in commonwealth countries was still by a high commissioner and in other countries by an ambassador.
With the outbreak of World War II in 1939, Australia and New Zealand both immediately declared their support for the United Kingdom, and soldiers from both countries were sent to the Mediterranean, serving in North Africa and in Greece. In December 1941, when the Pacific War began, there was panic in both Australia and New Zealand over a possible Japanese invasion. Australian soldiers were immediately recalled from the Middle East, and some were sent into action in Malaya and Singapore, both of which quickly fell to the Japanese. On February 19, 1942, the Japanese bombed Darwin, causing significant physical damage and showing Australia's vulnerability to attack. Australian soldiers returning from North Africa were reinforced by large numbers of U.S. soldiers. Australian soldiers were then sent into action against the Japanese in New Guinea, where at Kokoda they managed to halt the Japanese advance and gradually drive them back. In contrast, New Zealand soldiers were not recalled and continued to play an important part in the campaigns in the Western Desert and in Italy but a minimal role in the Pacific. U.S. soldiers also came to New Zealand, which at that point was largely defended by World War I veterans and teenagers who had been hastily armed by the frightened government. Australia and New Zealand, seeing their joint vulnerability, concluded the Canberra Pact of 1944, which determined that after the war Australia and New Zealand would dominate the South Pacific and the United States would be excluded. As the Pacific War gradually saw the Japanese pushed back, New Zealand soldiers were recalled from Italy. Some were posted to the Pacific, but the war ended soon after.
After the war both Australia and New Zealand became founding members of the United Nations, and both were led by governments that supported a multilateral approach to political problems. Further reading: Bolton, Geoffrey, and Stuart Macintyre, eds. The Oxford History of Australia. 4 vols. Melbourne: Oxford University Press, 1986; Davison, Graeme, John Hirst, and Stuart Macintyre, eds. The Oxford Companion to Australian History. Melbourne: Oxford University Press, 1998; Dennis, Peter, et al. The Oxford Companion to Australian Military History. Melbourne: Oxford University Press, 1995; McGibbon, Ian, ed. The Oxford Companion to New Zealand Military History. Auckland: Oxford University Press, 2000; Wilde, William, Joy Hooton, and Barry Andrews, eds. The Oxford Companion to Australian Literature. Melbourne: Oxford University Press, 1994. Justin Corfield

The Contemporary World 1950 to the Present

Afghanistan

Afghanistan is a predominantly Muslim, landlocked country bordered by Iran, Pakistan, and the former Soviet republics of Turkmenistan, Uzbekistan, and Tajikistan. It is not a nation-state along European lines—it shares no common language or ethnic heritage. Instead, it consists of a host of different groups, including Pashtuns, Hazaras, Tajiks, and Uzbeks. It also occupies rugged, divided terrain. This diversity has translated into a weak central state prone to interventions from the outside. From the 19th to early 20th centuries Afghanistan was caught between the Russian and British Empires as each expanded into Central Asia. During the second half of the 20th century Afghanistan again found itself a buffer between large empires, in this case between the Soviet Union and the United States. In 1933 Afghanistan's king, Mohammed Zahir Shah, began what would become a 40-year reign, during which he would rule directly only during the final decade. Just before the end of World War II, in which Afghanistan was neutral, one of Zahir Shah's uncles, Shah Mahmud, gained control of the country. In the immediate postwar years Shah Mahmud saw the breakdown of relations with Pakistan and Afghanistan's subsequent movement toward the Soviet Union. Tensions with Pakistan, especially over the border issue, would characterize postwar Afghanistan's history. The 1,300-mile border with Pakistan, the so-called Durand Line, had been established by the British decades earlier and divided the fractious Pashtun tribes, leaving Pashtuns on both sides of the border. The departure of the British in 1947 gave Shah Mahmud and other Pashtuns in Afghanistan hope for Pashtun unification. Mahmud and others called for an independent "Pashtunistan" and encouraged rebellion on the Pakistani side of the border. In 1950, in retaliation, Pakistan halted shipments of petroleum to Afghanistan. Crippled without oil, Afghanistan turned to the Soviets and signed a major trade agreement.
Pakistan, meanwhile, became an important part of the American military alliance. In 1953 Mohammed Daoud, the king's cousin and brother-in-law and a young, Western-educated modernizer, came to power. His vigorous pursuit of Pashtun unification created more tensions with Pakistan and pushed Afghanistan further toward the Soviets. Interested in spreading and consolidating power along its border regions, the Soviet Union was eager to assist. At the same time, though, the United States also tried to win influence in Afghanistan. As part of cold war strategy, the United States wanted to create an alliance of nations along the Soviet Union's border—Afghanistan, Iran, Iraq, Pakistan, and Turkey. Daoud refused to join the resulting Baghdad Pact but accepted U.S. aid. During his 10 years in power, Daoud pursued a cautiously reformist agenda, in which economic development became the chief goal of the state. To help with modernizing projects, Daoud skillfully played the Soviets and the United States off against each other. Afghanistan received $500 million in aid from the United States and $2.5 billion from the Soviets. Daoud used this aid to consolidate his own power. In the early 1960s Daoud, obsessed with Pashtun unification, made payments to tribesmen on both sides of the border and spread propaganda. In 1960 he sent troops across the border. As a result, the two countries severed relations in September 1961, and the border was closed even to nomadic sheepherders. In 1963, as it became clear that an extended showdown with Pakistan would only hurt Afghanistan, King Zahir Shah dismissed Daoud and took direct control of the country. The king ruled from 1963 to 1973. Within two months of taking power he had reached an agreement reestablishing diplomatic and trade relations with Pakistan. He also began an experiment in liberalization called "new democracy." At the center of this was a new constitution, promulgated in 1964.
It barred the royal family—except the king—from politics, created a partyless system of elections, extended full citizenship to all residents of the country, including non-Pashtuns, and created a secular parliament and an independent judiciary. Although Afghans voted in elections in 1965 and 1969, the king held most of the power. After a decade of economic stagnation and political instability, the king was deposed while in Europe in 1973 by Mohammed Daoud. The economy continued to stagnate, and Daoud could maintain stability only through repression. In April 1978 a communist coup forced Daoud from power. In December 1979 the Soviet Union, intending to support the pro-Soviet communist regime and install Soviet favorites in power, invaded Afghanistan with 75,000 to 80,000 troops. The decade-long war that resulted killed approximately 1 million Afghans and forced another 5 to 6 million into exile in Iran and Pakistan. The United States, under Jimmy Carter, responded strongly. It withdrew the Soviet-American Strategic Arms Limitation Treaty (SALT II) from consideration in the U.S. Senate, boycotted the 1980 Moscow Olympics, leveled economic sanctions against the Soviet Union, and increased U.S. aid to Pakistan. The United States committed to protecting the greater Persian Gulf region from outside intervention. The United States also started to funnel millions of dollars of aid through the CIA to rebel groups in Afghanistan. The Soviets pulled out of Afghanistan in 1988 and 1989. By this time Soviet president Mikhail Gorbachev, who had come to power in 1985, had decided that the costs of the Afghan war in both soldiers and finances outweighed the benefits. The Soviets faced a fierce insurgency within Afghanistan and a growing antiwar movement at home, as well as continued international pressure. The last Soviet troops left in February 1989. The communist regime in Afghanistan collapsed in April 1992.
The early 1990s saw a struggle for control between the various forces within Afghanistan. In 1996 the Taliban—an extremist Islamic regime backed by Pakistan—captured power. The Taliban consisted of religious students and ethnic Pashtuns, as well as roughly 80,000 to 100,000 Pakistanis. They espoused an antimodernist plan to create a "pure" Islamic society in Afghanistan, which included repressive treatment of women. The Taliban allowed al-Qaeda, an anti-American Islamic fundamentalist terrorist organization led by the Saudi Osama bin Laden, to establish bases in Afghanistan in return for moral and financial support. In November 2001, after the Taliban rejected international pressure to hand over al-Qaeda leaders, the United States attacked al-Qaeda and the Taliban. Joining forces with the Northern Alliance—minority Tajiks and Uzbeks from the northern part of the country—the United States defeated the Taliban and destroyed the al-Qaeda bases, although it failed in its mission to capture Osama bin Laden or to destroy al-Qaeda or the Taliban completely. The December 2001 Bonn Agreements handed temporary power to Hamid Karzai, a moderate Pashtun from a prominent and traditionalist family. A new constitution, written by the Loya Jirga (national assembly), was ratified in early 2004. In October 2004 an overwhelming popular vote elected Karzai president of the Islamic Republic of Afghanistan. After 2001 the country saw dramatic changes. Hundreds of thousands of refugees returned, pushing the population of Kabul from 1 million to 3 million. In 2005, 5 million girls were attending school; four years earlier fewer than 1 million had been in school. The economy, however, was still weak and dependent upon international aid. Indeed, despite this aid, in 2005 Afghanistan was moving toward becoming a narco-state.
In that year roughly 2.3 million Afghans (out of a population of 29 million) were involved in the production of poppies for opium and heroin. Poppy profits equaled 60 percent of the legal economy. Warfare also continued in isolated pockets of the country as U.S. soldiers tried to mop up remnant Taliban and al-Qaeda forces. See also disarmament, nuclear; Islamist movements. Further reading: Anderson, Jon Lee. "The Man in the Palace." The New Yorker (June 6, 2005); Cullather, Nick. "Damming Afghanistan: Modernization in a Buffer State." Journal of American History 89, no. 2 (September 2002); Rubin, Barnett R. The Fragmentation of Afghanistan: State Formation and Collapse in the International System. New Haven, CT: Yale University Press, 1995. Thomas Robertson

African National Congress (ANC)

Following a decade of political activism for the rights of blacks, Coloreds, and Indians in South Africa, the South African Native National Congress—later renamed the African National Congress (ANC)—was formed on January 8, 1912, in Bloemfontein. It unified the fragmented efforts of various organizations in the struggle against racial discrimination, political disenfranchisement, and economic exploitation of the black majority in South Africa. Over the course of almost 80 years, the ANC used various means to fight the apartheid system, ranging from letters to the British king, negotiations, strikes, and boycotts to armed struggle and nonviolent mass action. Change came only after South African president F. W. de Klerk repealed the discriminatory apartheid laws beginning in 1990. After the ban against the ANC was lifted, the organization became the first ruling party in a free and democratic South Africa in 1994, with Nelson Mandela as its first black president. The ANC began its long battle against the political disenfranchisement and socioeconomic marginalization of blacks in the courts of South Africa. As an economic upswing hit South Africa and intensified the need for a black workforce in the early 1920s, the ANC attempted to include the dwindling rights of workers in its agenda. But the economic depression and new legislation prevented this. New laws enacted by the government systematically stopped the economic rise of a small black bourgeoisie. With the Land Act, the government denied black Africans the right to own land and pushed them into economically dependent positions. The government initiated the foundation of the Native Representative Council, which was meant to represent the Africans but was effectively controlled by the white government. It actually decentralized and weakened the movement to such an extent that some pronounced the ANC dead in the early 1930s.
The repressive legislation introduced by the government of Prime Minister Hertzog in 1935 led to renewed political activism on the part of the ANC. In conjunction with 39 other organizations, including those of Coloreds, Communists, and Trotskyists, the ANC became active in the All Africa Convention (AAC), which fought racial discrimination and economic exploitation. The conservative approach of the ANC lasted until the late 1940s. With Jan Smuts's candidacy in the general election of 1948, there was hope that discrimination would cease and real change would take place. This hope evaporated when Smuts was defeated and even more discriminatory legislation was introduced. With this new legislation racial discrimination was officially legitimized, and the apartheid system was born. Marriages between whites and individuals of color were prohibited (1949), and the Immorality Act (1950) forbade interracial sexual relations. The new legislation required a national roll according to racial classifications in the Population Registration Act (1950), and the Group Areas Act (1950) enacted the demarcation of land use according to race, which secured the most fertile, resource-rich, and beautiful land for the whites and assigned marginalized areas of land to blacks as homelands. When the apartheid laws were introduced in 1948, a conflict between the older and younger generations in the ANC deepened. While the old guard wanted to continue the struggle with the same methods and merely broaden its base, the ANC Youth League envisioned a much more radical change. In 1952 the old guard of the ANC adopted the approach of the youth and joined other organizations in the National Defiance Campaign. In these campaigns the ANC activists deliberately broke the unjust apartheid laws to draw attention to them and have them examined in the courtroom.
On June 26, 1955, the Congress of the People, which consisted of the ANC and other civil rights and antiapartheid organizations, formulated the so-called Freedom Charter at Kliptown. It demanded equal rights for people of all skin colors and no discrimination based on race. In 1956 the government arrested 156 leaders of the ANC and its allies and charged them with high treason, using the Freedom Charter as the basis of its charge. All the accused were eventually acquitted. In early 1960 the ANC began its campaign against the pass laws, which required all blacks to carry their identification cards with them at all times to justify their presence in "white areas." On March 21 about 300 demonstrators marched peacefully against the law. The police first fired tear gas and then aimed directly at demonstrators; 69 people were killed and 180 injured. This incident became known as the Sharpeville Massacre. Internationally, the apartheid regime of South Africa faced increasing opposition in the 1950s and 1960s. The newly independent states in Africa, organized since 1963 in the Organization of African Unity (OAU), used diplomatic and political pressure to help end apartheid. In the United States, the Civil Rights movement drew attention to global issues of segregation and discrimination. The leader of the ANC, Albert Lutuli, led millions of activists in the nonviolent campaigns and believed in the compatibility of the African and European cultures. However, some of the ANC members concluded that nonviolent action was not suitable for South Africa and that more aggressive methods had to be applied. In 1961 the ban on the ANC forced the movement to go underground. The military wing, Umkhonto we Sizwe ("Spear of the Nation"), was formed to commit acts of sabotage. Mandela and nine other leaders of the ANC were arrested and charged in the so-called Rivonia Trial (1963–64) with 221 acts of sabotage initiated to stage a revolution.
Mandela, who was already serving a five-year sentence, was sentenced to life imprisonment in 1964. The rest of the leadership of the ANC was forced into exile. The ANC had the backing of the masses and was able to stage actions of mass resistance against apartheid in the late 1970s and 1980s. It trained its guerrilla force in neighboring countries. In 1973 workers' strikes beginning in Durban spread to other parts of the nation. At the segregated black universities a new movement, similar to the black consciousness movement in the United States, emerged. Strikes and class boycotts erupted at the University of the Western Cape, at Turfloop near Pietersburg, and at the University of Zululand. Resistance against the so-called Bantu education, which ordered that Africans were to be taught in Afrikaans, the language of the white oppressors, exploded in June 1976 in the Soweto Uprising, in which thousands of black students marched to protest the governmental decree. The police shot and killed at least 152 demonstrators. By the end of 1977, the government had killed over 700 young students in similar incidents. In the same year, the government retreated and decided that African schools no longer needed to instruct their students in Afrikaans. During the 1980s the fight against apartheid included all areas of life. The armed wing of the ANC received increasing support for the guerrilla fight within South Africa, and the organization used propaganda to create a mood for resistance. Grassroots organizations emerged all over South Africa and in 1983 created the mass organization called the United Democratic Front (UDF). Finally, on February 2, 1990, new president F. W. de Klerk introduced change to the system. He had held secret conversations with the imprisoned Mandela before assuming the presidency. Once in office, he lifted the ban on the ANC and announced Nelson Mandela's imminent release after 27½ years of imprisonment.
De Klerk not only ended press censorship but also invited former liberation fighters to the negotiating table to help prepare a new multiracial constitution. Both Mandela and de Klerk were honored with the Nobel Peace Prize in Oslo in 1993. Still, in the early 1990s, even after the end of apartheid, the armed struggle in South Africa had not ended: the black organization Inkatha, led by Gatsha Buthelezi, challenged the ANC. In 1994 the ANC became a registered political party and, with over 60 percent of the vote, won the first elections open to individuals of all races. Nelson Mandela became South Africa’s first postapartheid president, and Thabo Mbeki followed him in 1999.

Further reading: Ellis, Stephen. “The ANC in Exile.” African Affairs 90, no. 360 (1991); Feit, Edward. “Generational Conflict and African Nationalism in South Africa, 1949–1959.” The International Journal of African Historical Studies 5, no. 2 (1972); McKinley, Dale T. The ANC and the Liberation Struggle. London and Chicago: Pluto Press, 1997; Nixon, Rob. “Mandela, Messianism, and the Media.” Transition 51 (1991); Official website of the ANC, http://www.anc.org.za/lists/links.html (cited April 2006). Uta Kresse Raina

African Union

The Organization of African Unity (OAU) was formed on May 23, 1963, in Addis Ababa, Ethiopia, by 32 decolonized African nations. Built on the Pan-African dream of Ghana’s president Kwame Nkrumah, the OAU brought opposing groups of African nations together in a single African organization. The founding members envisaged this unity among African states as transcending racial, ethnic, and national differences. The main goal was not only to build an alliance between the African nations but also to provide financial, diplomatic, and economic assistance to those movements still fighting for liberation. OAU members guaranteed each other’s national sovereignty, territorial integrity, and economic independence and aspired to end all forms of colonialism and racism on the continent. The OAU officially endorsed the charter of the United Nations and the Universal Declaration of Human Rights. By the time it was replaced by the African Union (AU) in 2002, the OAU counted 53 of the 54 African nations as its members.

In the context of decolonization and the cold war, the OAU saw itself as an alternative. The alliance, cooperation, and unification of the numerous newly independent African states in the 1960s signified a period of emancipation and empowerment for Africa. Unity drew attention to the fact that solutions to the problems individual member states faced after decolonization were transferable to others, which made problem solving easier. It also decreased the possibility of Africa’s falling back into political or economic dependency on the former European colonizing nations. The OAU wanted to provide newly liberated African nations with a platform of their own; together with the young nations of Asia that had achieved national liberation, they saw themselves as providing a third option beyond those of the superpowers.
While the organization promoted African culture, the agreements of cooperation also covered other major fields such as politics, diplomacy, transport, and communication. Matters of health, sanitation, nutrition, science, defense, and security also became issues of joint concern. The agreement stated that disputes between states would be settled peacefully through negotiation, mediation, conciliation, or arbitration, while the organization condemned all forms of political assassination and any subversive activities of one state against another, and stood united in its battle against apartheid.

The OAU acted as referee in various border conflicts between neighboring African nations. For example, during the Biafran War of 1967–70, an armed conflict between distinct ethnic groups, it helped to prevent the division of Nigeria’s national territory into separate countries. The OAU used its diplomatic power to strongly condemn Israel’s intervention in Egypt in the Six-Day War of 1967, and it used political pressure, diplomacy, and economic boycotts to help end apartheid in South Africa. The democratic nation of South Africa joined the OAU in 1994 as the 53rd member nation.

Addis Ababa, Ethiopia’s capital and the host of the first OAU meeting, became the permanent headquarters of the OAU. The OAU assembly was made up of the heads of the individual African states. The organization employed over 600 staff members, recruited from over 40 of its member states, and had an annual budget in the range of $27–$30 million. In 1997 the OAU established the African Economic Community, which envisioned a common market for the entire continent of Africa. After 39 years of existence, the OAU was criticized broadly for not having done enough for the African people.
In the critics’ view, it should have protected them from their own leaders, who promoted corruption, persecuted political opponents, and created a new class of rich in their respective nations while the masses remained impoverished.

Further reading: El-Ayouty, Yassin, ed. The Organization of African Unity After Thirty Years. Westport, CT, and London: Praeger, 1994; Organization of African Unity. Available online. URL: http://www.un.org/popin/oau/oauhome.htm (cited July 2006); van Walraven, Klaas. Dreams of Power: The Role of the Organization of African Unity in the Politics of Africa. Leiden: Ashgate, 1999. Uta Kresse Raina

AIDS crisis

The AIDS epidemic has been considered one of the most important health emergencies in the contemporary world because of the destabilizing social, economic, and political consequences of its global spread and the unsuccessful attempts to develop a vaccine against it. At the same time, some scientists have argued that the problem in tackling AIDS is not so much insufficient scientific and medical development as the politics of the global response to the disease.

The acronym AIDS stands for acquired immunodeficiency syndrome. From a medical perspective, AIDS is not a single disease but a series of symptoms that occur in a person who has acquired the human immunodeficiency virus (HIV). HIV belongs to the family of retroviruses, first described in the 1970s. The characteristic trait of viruses in that family is that their genetic material is encoded in ribonucleic acid (RNA), which is located in the inner core of the virus and surrounded by an outer membrane made up of fatty material taken from the cells of the infected person. Furthermore, HIV belongs to the group of lentiviruses, which produce latent infections. This means that in the initial stage of HIV infection, the virus remains inactive and asymptomatic, and its genetic material is hidden in the cell for a period of time. In some cases, HIV has remained inactive indefinitely; in most cases, after the inactive period, HIV does progressive damage to the immune and nervous systems.

The first stage of HIV activity in the body of an infected person is called AIDS-related complex (ARC), in which only a partial deficiency of the immune system occurs. The second stage of HIV activity is AIDS, a more advanced immunodeficiency. There are three main transmission modes of HIV: sexual penetrative intercourse, the transfusion of blood or blood-related products, and transmission from infected mother to child during birth or breast-feeding.
Furthermore, three important characteristics of HIV infection have been identified. First, the condition is incurable. Second, a person with HIV is infectious for life, including during the initial (inactive) period of infection. Third, the effect of HIV infection is increased vulnerability to various infections due to the undermined immune system. HIV/AIDS has therefore been linked with a series of other diseases such as pneumonia, various fungal and protozoal infections, lymphoma, and Kaposi’s sarcoma (a rare form of skin tissue cancer).

It is believed that the origins of HIV are linked to an HIV-related virus located in Africa. There are two different types of HIV: HIV-1 and HIV-2 (the latter is present almost exclusively in Africa). The first cases of AIDS were observed in 1977–80 by doctors in the United States, who identified clusters of a previously rare health disorder among members of the gay communities in San Francisco and New York. Because the first AIDS cases were diagnosed in gay communities, the condition was initially termed Gay-Related Immune Deficiency Syndrome (GRID). AIDS-related diseases were later observed also among hemophiliacs and recipients of blood transfusions, prostitutes, intravenous drug users, and infants of drug-using women.

In 1983 the virus causing AIDS was identified by the French researcher Luc Montagnier of the Pasteur Institute in Paris, and the finding was confirmed in 1984 by an American researcher, Robert Gallo of the National Cancer Institute. Also in 1984 the first test for AIDS was developed. The first commonly used tests were the ELISA test and the Western blot test.

From the 1980s onward, HIV epidemiological statistics showed a constant rise in the number of infected persons and those directly affected by AIDS. The major group at risk was identified by the Joint UN Programme on HIV/AIDS (UNAIDS) as sexually active adults and adolescents between the ages of 15 and 50.
According to UNAIDS, in 2005 there were approximately 40.3 million people living with AIDS, and over 150 million directly affected by it. It is also important to place the HIV/AIDS epidemic in a broader demographic context. The statistics of the HIV/AIDS Department of the World Health Organization (WHO) showed that in sub-Saharan Africa, in Asia, and in the former Soviet republics, young women with low incomes living in rural areas constitute a particularly vulnerable social group, with the highest rate of new HIV infections.

Global and national responses to AIDS included various prevention and treatment policies. After 1996 the so-called antiretroviral drugs (ARVs), compounds that treat the virus infection, came into use. Antiretroviral drugs were available in single, double, and triple therapies. One example was highly active antiretroviral therapy (HAART), which had a relatively high cost of between US$10,000 and $20,000 per patient per year. Most of the populations of North America and western and central Europe could gain access to antiretroviral drugs and therapies, which systematically decreased the number of deaths from AIDS-related diseases. As a result, in the Western world living with AIDS was gradually transformed into an endurable and nonfatal condition. The costs of the drugs and treatments, however, made them inaccessible to most of the world.

The 13th World AIDS Conference in Durban in 2000 marked a significant shift of global attention to AIDS treatment. In 2002 the UN set up the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM) in order to spur more generous international funding of AIDS-related programs and to increase the supply of ARVs. GFATM functions as a platform for cooperation among the public sector, the private sector, and civil society.
Between 2003 and 2005 GFATM granted $4.3 billion to various projects in 128 countries, including $1.9 billion specifically for HIV-related projects. Other key donor organizations included the World Bank’s Multi-Country HIV/AIDS Program (MAP), the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR), and the European Union HIV/AIDS Programme. Numerous private foundations, charities, and private-sector support networks also participate in the global struggle against HIV/AIDS. In 2003 UNAIDS and the World Health Organization initiated a campaign known as the “3 by 5” initiative, which aimed at making ARVs available to 3 million people in poor- and middle-income countries by 2005.

In 2003 an HIV vaccine clinical trial proved unsuccessful. The obstacles to developing a vaccine against HIV included the mutability of the virus, uncertainty about what immunological response an effective vaccine should generate, and various practical problems in testing vaccine candidates. The Global HIV Vaccine Enterprise created a forum for public and private organizations, as well as research institutes, to cooperate and generate funding for the development of an HIV vaccine. Important organizations working on an HIV vaccine included the International AIDS Vaccine Initiative in New York.

In the Western world, in particular in the United States, where AIDS was initially linked to marginal social groups, the disease reinforced prejudices and contributed to the stigmatization of those groups and to discrimination in employment, education, residence, and health care. Religious standpoints drew a link between liberal sexual patterns and the spread of AIDS, framing AIDS as an issue of personal morality, guilt, and punishment. In contrast, leftist standpoints framed the AIDS issue as a problem of the protection of civil liberties and nondiscrimination. In spite of contrary medical evidence, it was a widespread public belief in the 1980s that AIDS could be contracted through casual contact.
This raised a number of social and legal controversies in which individual rights to privacy were weighed against the collective right to protection from the spread of the disease.

The main site of the AIDS epidemic remains sub-Saharan Africa, where the virus spread primarily through unprotected heterosexual intercourse, the reuse of medical instruments, and contaminated blood supplies. Experts suggested that the dynamics of the spread of AIDS and its social and geographical distribution in sub-Saharan Africa both reflected and exacerbated systemic characteristics of the region: its migration and mobility patterns, social sexual behaviors, social inequalities and impoverishment, and the breakdown of family structures. A study by the investment bank ING Barings indicated that in South Africa HIV/AIDS policies cost over 15 percent of the country’s GDP. The personal and collective consequences of the AIDS epidemic in Africa were equally disruptive. One of the most serious was the increased number of orphans whose parents died of AIDS-related diseases. It was predicted that by 2010 the number of orphans in Africa would reach 40 million, of whom approximately 50 percent would be orphaned by causes related to HIV/AIDS.

Further reading: Barnett, Tony, and Alan Whiteside. AIDS in the Twenty-First Century: Disease and Globalization. Basingstoke, Hampshire: Palgrave Macmillan, 2002; Fan, Hung, Ross F. Conner, and Luis P. Villarreal. AIDS: Science and Society. Boston: Jones and Bartlett Publishers, 2004; Kopp, Christine. The New Era of AIDS: HIV and Medicine in Times of Transition. Dordrecht: Kluwer Academic Publishers, 2002; Mustafa, Faizan. AIDS, Law and Human Rights. New Delhi: Institute of Objective Studies, 1998; Preda, Alex. AIDS, Rhetoric, and Medical Knowledge. Cambridge: Cambridge University Press, 2005. Magdalena Zolkos

Akihito

(1933– ) emperor of Japan

Akihito became Japan’s 125th reigning emperor in 1989 upon the death of his father, Hirohito. According to Japanese mythology, the emperors, beginning with the legendary Jimmu, descendant of the sun goddess Amaterasu, had ruled over the country since 660 b.c.e. Although the emperors held de jure power, it was the shoguns who ruled for most of Japanese history. With the Meiji Restoration in 1868, Emperor Meiji became the head of state, holding sovereign power. The postwar constitution of 1947 again reduced the role of the emperor to one of symbolism.

Akihito was born on December 23, 1933, the first male child of Emperor Hirohito and Empress Nagako. In keeping with royal tradition, Akihito was separated from his parents at the age of three and brought up by court attendants, tutors, chamberlains, and nurses. In a departure from custom, however, at the age of six Akihito was sent to school along with commoners. During World War II, when the Allied countries, led by the United States, attacked Japan, Akihito was moved to provincial cities far from Tokyo for safety. At the end of the war in 1945, when the U.S. Army occupied Japan, Akihito attended high school and college with the sons of the elite class. A Philadelphia Quaker, Elizabeth Gray Vining, was made Akihito’s personal tutor and taught him Western customs and values. He also briefly studied politics and civics at Gakushuin University in Tokyo.

Akihito was invested as crown prince in 1952, when he was 18. In 1959 he married Shoda Michiko, the first commoner to marry into the imperial family. When his father died on January 7, 1989, at the age of 87, Akihito became emperor and took his assigned role as the symbolic head of state.

Further reading: Keene, Donald. Emperor of Japan. New York: Columbia University Press, 2002; Kinoshita, June, and Nicholas Palevsky. Gateway to Japan. Tokyo: Kodansha International, 1998; Vining, Elizabeth Gray.
Windows for the Crown Prince: Akihito of Japan. New York: Tuttle Publishers, 1990. Mohammed Badrul Alam

Algerian revolution

The Algerian war against French colonialism lasted from 1954 to 1962, when Algeria gained its independence. In 1954 armed attacks occurred at 70 different points scattered throughout the nation. Having just suffered a humiliating defeat by the Vietnamese at Dien Bien Phu, the French army was determined to win in Algeria. The French colons (colonists) in Algeria were likewise determined to keep “Algérie Française.” The tactics adopted by the Algerians resembled those of the Vietnamese, and those of the French resembled the Americans’, with remarkably similar results.

The Front de Libération Nationale (FLN) was an outgrowth of earlier nationalist movements. Ahmad Ben Bella (1916?– ), along with Belkacem Krim, Muhammad Khidr, and Hussein Ait Ahmad, led the movement. Under the FLN, Algeria was divided into six wilayas, or districts, each with an FLN organization and leader acting within a cell system. The top echelon of FLN leaders met periodically to coordinate strategy. The wilayas and the cell system provided flexibility and some degree of security in a war in which the French enjoyed military superiority. As with other revolutions in developing countries, the FLN adopted guerrilla warfare tactics, avoided direct confrontation with French troops, and attacked civilian targets as well as French military sites. With few advanced weapons, the FLN used the so-called bombs-in-baskets approach to inflict maximum damage on the French army and colons. Algerian women were also active in the movement, serving as lookouts, distributing food and arms to fighters, and sometimes participating in the fighting as well.

In 1954 the French had 50,000 soldiers in Algeria; by the war’s end they had over half a million, and they were still not winning.
The French had clear-cut superiority in armaments, including planes and advanced firepower, but the Algerians knew the terrain, had popular support, and were determined to fight, in spite of high costs, until they achieved the goal of independence. The French used air strikes and napalm, burned villages, and mounted pacification projects that rounded up civilians in rural areas and imprisoned them in internment camps. These tactics only increased local support for the FLN. The French army also tortured FLN captives, and when word of the torture reached mainland France many turned against the war.

In an attempt to focus their power in Algeria, the French granted Morocco and Tunisia independence in 1956, but when FLN fighters took refuge in these neighboring countries, the French attacked them there. The war expanded much as the fighting in Vietnam spread into Laos and Cambodia. In 1956 French agents skyjacked the Moroccan plane carrying Ben Bella to a meeting of FLN leaders in Tunis and imprisoned him. One of the first skyjackings, the tactic was condemned by the international community but became more commonplace in subsequent decades.

French forces defeated the FLN in Algiers, but the FLN merely moved its operations elsewhere in the country, forcing French troops to move as well. Then the FLN slowly reconstituted itself in Algiers, and the French were forced to return to fighting in the same city where they had previously declared victory.

In 1958 General Charles de Gaulle came to power in France with the support of the army and the colons, who believed he would win the war in Algeria. De Gaulle traveled to Algeria, where he pointedly did not speak of “Algérie Française.” De Gaulle realized that short of a full-scale, long-term war the French could not win in Algeria. Although he hoped for some sort of alliance between the two nations and access to the petroleum and mineral reserves in the Sahara, by 1960 de Gaulle was speaking of an Algerian Algeria.
He opted for negotiations with the FLN at Evian in 1961. The negotiations dragged on and the war escalated as both sides attempted to improve their positions at the negotiating table by gaining victories on the battlefield. Furious with what they believed to be de Gaulle’s betrayal, dissident army officers led an abortive coup in 1961. The colons organized into the extremist Secret Army Organization (OAS) and attempted to bring the war home to France by trying to assassinate de Gaulle in 1961. The OAS even attempted to bomb the Eiffel Tower, a move that was thwarted by French intelligence services. The war polarized French society between those who opposed the war—including intellectuals such as Jean-Paul Sartre, students, and labor unions—and those, especially in the army, who supported the war effort. In 1962 Algeria became formally independent, and Ben Bella returned as the first premier and later as president. The economy of Algeria was in ruins. As many as a million Algerians had perished in the war and another million had been made homeless. Refusing to live in independent Algeria, the colons left en masse, many moving to Spain rather than to France under de Gaulle. Immediately following independence a form of spontaneous socialism, or autogestion, had evolved as homeless and unemployed Algerians took over abandoned farms and businesses and began to run them and share the profits. Initially Ben Bella supported the autogestion movement, but gradually the FLN-led government took over farms and factories along the Soviet state capitalism model. Ben Bella and his minister of defense, Houari Boumedienne (1925?–1978), championed the formal army rather than the more loosely organized guerrilla fighters and they outmaneuvered or eliminated potential rivals within the FLN leadership. Algeria adopted a neutral position in the cold war and sometimes, as in the 1979 U.S. hostage crisis in Iran, served as a mediator in disputes, as it was respected by both sides. 
Some of the Algerian infrastructure was rebuilt using petroleum revenues, but the economy failed to keep pace with population growth. In 1965 Boumedienne ousted Ben Bella, who then spent a number of years in Algerian prisons; he was not released until after Boumedienne’s death, when Chadli Benjedid became president. Benjedid’s regime was marked by economic stagnation and privatization. As unemployment rose, particularly among the youth born after independence, many young Algerians opposed the authoritarian FLN regime and turned increasingly toward Islamist movements. When the Islamists seemed poised to win the open and fair 1991 elections, the FLN, with the support of France and the United States, cancelled the elections, thereby setting off a bloody civil war that lasted through the 1990s.

Further reading: Alexander, Martin S., ed. France and the Algerian War, 1954–1962. London: Routledge, 2002; Horne, Alistair. A Savage War of Peace: Algeria 1954–1962. New York: The Viking Press, 1977; Ruedy, John. Modern Algeria: The Origins and Development of a Nation. 2d ed. Bloomington: Indiana University Press, 2005; Stora, Benjamin. Algeria, 1830–2000: A Short History. Ithaca, NY: Cornell University Press, 2004. Janice J. Terry

Allende, Salvador

(1908–1973) Chilean politician

Longtime politician, medical doctor, self-proclaimed Marxist, and president of Chile’s Popular Unity (Unidad Popular) government from 1970 to 1973, Salvador Allende occupies a highly controversial place in Chilean history. The country’s only democratically elected Marxist president, Allende instituted a range of reforms that sharpened the polarization of Chilean society and led to a series of economic and political crises. On September 11, 1973, he was overthrown, and died in office, in a coup by a coalition of military officers backed by the country’s leading economic interests and acting in collusion with the U.S. Central Intelligence Agency (CIA). His ousting and death ushered in the period of military dictatorship led by army general Augusto Pinochet (1973–89).

Born in Valparaíso, Chile, on July 26, 1908, to a prominent leftist political family, Allende entered medical school and became active in the movement opposed to the dictatorship of General Carlos Ibáñez (1927–31). Cofounder of the Chilean Socialist Party in 1933, he won a seat in the country’s national legislature in 1937 and became minister of health in 1939. Making his first bid for the presidency in 1952, an election in which the former dictator Ibáñez triumphed, he finished a distant fourth. He ran again for president in 1958 and 1964 as the leader of the Communist-Socialist alliance (Frente de Acción Popular), founded in 1957, losing the elections but gaining a loyal political following that by 1964 comprised 39 percent of the electorate. Calling for socialism in Chile, sympathetic to the Communist regime of Fidel Castro in Cuba, and in the context of the cold war, Allende came to be viewed with deep suspicion by both the Chilean landowning and copper oligarchy and the U.S. government.
In the hotly contested 1970 elections, Allende and his Popular Unity coalition won with a slim plurality of 36.5 percent, defeating Conservative Jorge Alessandri (34.9 percent) and Christian Democrat Radomiro Tomic (27.8 percent). On taking office, Allende instituted a populist strategy of freezing prices and hiking wages, which boosted consumer spending and redistributed income to favor the urban and rural poor. He also followed through on his campaign pledge to pursue a “peaceful road to socialism” by nationalizing some 200 of the country’s largest firms, many U.S.-owned, including banks and insurance companies, public utilities, and the copper, coal, and steel industries. By 1971 opposition to the reforms grew, especially among the military, large landholders, and leading industrialists. By 1972 runaway inflation compounded the political backlash, the result of higher wages, a bloated government bureaucracy, and the growth of an underground economy in response to price controls. As popular discontent mounted and the Popular Unity coalition fractured into groups divided over the pace of change, pro-Allende guerrilla groups launched an armed campaign against conservative elements. From spring 1973 a wave of strikes by copper miners, truck drivers, shopkeepers, and others compounded the regime’s mounting problems. Meanwhile, the U.S. administration of Richard Nixon and the CIA worked to undermine the regime, funding opposition groups and plotting with rightists for Allende’s overthrow. On September 11, 1973, the military assaulted the presidential palace in Santiago. By the end of the day Allende was dead—whether by his own hand or the military’s remaining a matter of dispute. Upwards of 5,000 people were killed in the coup and its aftermath, making it the bloodiest regime change in 20th-century South America. Revered by some, reviled by others, Allende and his short-lived socialist experiment, and the U.S. 
role in assisting the overthrow of a democratically elected president, left an enduring mark on modern Chilean and Latin American history. Further reading: Faundez, Julio. Marxism and Democracy in Chile: From 1932 to the Fall of Allende. New Haven, CT: Yale University Press, 1988; Kaufman, Edy. Crisis in Allende’s Chile: New Perspectives. New York: Praeger, 1988; Loveman, Brian. Chile: The Legacy of Hispanic Capitalism. Oxford: Oxford University Press, 1979. Michael J. Schroeder

Alliance for Progress

Announced by U.S. President John F. Kennedy on March 13, 1961, the Alliance for Progress was a massive U.S. foreign aid program for Latin America, the biggest aimed at the underdeveloped world up to that time. Likened to the Marshall Plan in postwar Europe, its express intent was to promote economic and social development and democratic institutions across the Western Hemisphere; to raise living standards for the poorest of the poor; and to make leftist social revolution an unattractive alternative. “Those who make democracy impossible,” warned President Kennedy in announcing the plan, “will make revolution inevitable.”

Most commonly interpreted in the context of the cold war between the United States and the Soviet Union, as a response to Fidel Castro and the Cuban revolution of 1959, and as the U.S. foreign policy establishment’s effort to thwart the aspirations of leftist revolutionaries, the Alliance for Progress, despite some successes, is widely considered to have failed to meet its lofty goals. Pledging $20 billion in aid over 10 years, the program actually distributed an estimated $4.8 billion; the remainder of the approximately $10 billion overall U.S. contribution from 1961 to 1969 went toward loan repayment and debt service. The program came to an effective end in 1969 under President Richard Nixon, who replaced it with a new agency called Action for Progress. A refurbished version was formulated by President Ronald Reagan in 1981, in his Caribbean Basin Initiative, which suffered many of the same shortcomings as its predecessor.

In August 1961 representatives from the United States and the Latin American countries (save Cuba) met at Punta del Este, Uruguay, to formulate specific objectives and targets for the program and ways to implement them.
The most important of these objectives included raising per capita incomes by an average of 2.5 percent annually; land reform; trade diversification, mainly through export production; industrialization; educational reforms (including the elimination of illiteracy by 1970); and price stability. The program’s theoretical underpinnings owed much to the work of the U.S. economist Walt W. Rostow and his notion of “economic take-off” (articulated in his 1960 book The Stages of Economic Growth). He was a member of the inter-American “board of experts” (dubbed “the nine wise men”) that had final authority over the program’s specific content.

The reasons for the program’s overall failure have been the subject of much debate among scholars. Most agree that deepening U.S. commitments in the Vietnam War diverted attention and resources away from Alliance programs and initiatives. Another frequently cited limitation concerns the difficulties inherent in promoting democratic institutions and land reform in societies dominated by stark divisions of social class and race, entrenched landholding oligarchies, and small groups of privileged economic and political elites. A further criticism concerns the top-down nature of the programs, which relied almost exclusively on active state support and failed to incorporate local community or grassroots organizations into their design and implementation. For these and other reasons, the Alliance for Progress achieved some successes but on the whole failed to achieve the goals articulated by President Kennedy in 1961.

Further reading: Berger, Mark T. Under Northern Eyes: Latin American Studies and U.S. Hegemony in the Americas, 1898–1990. Bloomington: Indiana University Press, 1995; Scheman, Ronald, ed. The Alliance for Progress: A Retrospective. New York: Praeger, 1988; Schoultz, Lars. Beneath the United States: A History of U.S. Policy Toward Latin America. Cambridge, MA: Harvard University Press, 1998. Michael J. Schroeder

American Federation of Labor and Congress of Industrial Organizations (AFL-CIO)

In 1955 the American Federation of Labor (AFL) and the Congress of Industrial Organizations (CIO) joined to create the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO). The 54 national and international federated labor unions within the AFL-CIO are located in the United States, Canada, Mexico, Panama, and U.S. dependencies. Membership in the United States as of 2005 was over 9 million. The major functions of the AFL-CIO are to lobby for the interests of organized labor and to mediate disagreements between member unions. A long-standing campaign of the federation is against the right-to-work laws that ban closed or union shops. A related issue is repeal of the Taft-Hartley Labor Act, which authorized right-to-work laws half a century ago. The AFL-CIO also works against other antilabor legislation and candidates. The first leader of the AFL was Samuel Gompers, who modeled the AFL on the British Trade Union Congress. He was conservative politically and believed that unions should work within the economic system as it was rather than trying to alter it. Gompers was followed by William Green and George Meany. Under their guidance, the AFL grew to over 10 million members by the time of its merger with the Congress of Industrial Organizations in 1955. The union’s early accomplishments were significant. Union men gained higher wages, a shorter workweek and workday, workers’ compensation, laws regulating child labor, and exemption from antitrust laws. The CIO dates only to the 1930s. Green had replaced Gompers as leader of the AFL in 1924, but he maintained Gompers’s business unionism, based on crafts. By then the old crafts approach seemed outdated to some AFL members. The United States had industrialized, and mass production had replaced craftsmanship. Production workers in major industries such as steel, rubber, and automobiles lacked union protections. A strong minority of the AFL wanted the federation to begin organizing industrially.
Within the AFL was a union leader with experience organizing an industry, John L. Lewis of the United Mine Workers (UMW) of America. In 1935 Lewis led the dissidents in the formation of the Committee for Industrial Organization. With sympathetic New Deal Democrats in the White House, the unions had a rare opportunity to organize American labor with the government on their side. The committee organized, winning significant victories in automobiles and steel. The CIO challenged the authority of the AFL, and the AFL revoked the charters of the 10 CIO unions. The CIO became the Congress of Industrial Organizations in 1938. The independent CIO, under Lewis until 1940 and then under Philip Murray until 1952, was more militant than the AFL. It had a Political Action Committee, led by Sidney Hillman of the Amalgamated Clothing Workers Union, that encouraged membership political activism. The CIO attempted a major southern organizing campaign that proved fruitless in the 1940s, and internal discord led to the loss of the International Ladies Garment Workers Union in 1938 and the mine workers in 1942. Still, in 1955, the CIO had 32 affiliated unions with approximately 5 million members. Both unions had internal difficulties in the 1940s. The AFL had member unions dominated by organized crime. The CIO’s radicalism brought into its member unions a number of communists. The CIO expelled 11 supposedly communist-dominated unions in 1949–50. The end of World War II was the end of the close relationship with the federal government that had allowed the AFL to grow during the 1930s. The Republicans in Congress reversed that relationship, covering unions as well as employers under unfair labor practices legislation and prohibiting the closed shop as well as the organization of supervisors and campaign contributions by unions. Union leaders had to swear that they were not communists.
Passed over Truman’s veto, Taft-Hartley was a major blow to unionism. Clearly, the union leaders had reason to worry about the new Republican administration, and repeal of Taft-Hartley was an ongoing desire of the AFL-CIO. Throughout the period of separation, at least some within both unions retained an interest in reuniting the two. After the election of Eisenhower, the two leaderships agreed that the first Republican administration in 20 years would probably be unfavorable to labor. Unity was desirable. George Meany, as head of the AFL, and Walter P. Reuther, as head of the CIO, worked to bring about a merger, which occurred in 1955. The first AFL-CIO convention elected Meany as president. In 1957 the federation enacted anti-racket codes and expelled the Teamsters Union for failure to meet ethical standards. In 1961 the AFL-CIO implemented mandatory arbitration of internal disputes. That failed to prevent a dust-up between Meany and Reuther, who regarded Meany as dictatorial and wanted the AFL-CIO to involve itself in civil rights and social welfare issues. Reuther wanted to be president of the AFL-CIO and felt that Meany had outlived his usefulness. Reuther’s United Automobile Workers (UAW) left the AFL-CIO in 1968. In 1969 the UAW and the Teamsters formed the Alliance for Labor Action (ALA), which sought to organize the unorganized, students, and intellectuals. Reuther died in a plane crash in 1970. Without his strong leadership, the ALA disbanded in December 1971 after proving unsuccessful as an alternative to the AFL-CIO. Meany retired in 1979, and his replacement was Lane Kirkland, the secretary-treasurer. Kirkland inherited a union in decline in an economy turning away from organized labor. He brought the UAW back into the fold in 1981, the Teamsters in 1988, and the UMW in 1989. The tide would not turn, however, and Kirkland retired under pressure in 1995. Thomas R. Donahue, the secretary-treasurer, became interim president but was challenged by John J.
Sweeney of the Service Employees International Union (SEIU), who won the first contested election in AFL-CIO history. Sweeney and United Mine Workers president Richard Trumka represented a new generation of activist union leaders, potentially a force for reversing the decline of organized labor. Under Sweeney the AFL-CIO supported Democratic candidates, including Bill Clinton, and gained a sympathetic ear in the White House. Sweeney proved unable to reverse the decline in unionism due to deindustrialization and the loss of high-paying or skilled jobs in traditional union industries. Critics charged that Sweeney was exhausting the union’s funds without anything substantial to show for it. In 2005 Andrew Stern of the SEIU led an effort to force Sweeney’s retirement. Stern proposed consolidating the AFL-CIO’s member unions into 20 super unions organized by sector of the economy. He also wanted reemphasis on the organization of unrepresented workers. Failing to reform the AFL-CIO or force Sweeney out, the SEIU left the federation and created the Change to Win Federation.

President Gerald Ford (left) meeting with AFL-CIO president George Meany at the White House in 1974.

Further reading: Buhle, Paul. Taking Care of Business: Samuel Gompers, George Meany, Lane Kirkland, and the Tragedy of American Labor. New York: Monthly Review Press, 1999; Goldfield, Michael. The Decline of Organized Labor in the United States. Chicago: University of Chicago Press, 1993; Zieger, Robert H., and Gilbert Gall. American Workers, American Unions. Baltimore: Johns Hopkins University Press, 2002.

John H. Barnhill

American Indian Movement (AIM)

Relations between Native peoples and U.S. federal and state governments soon after World War II swung between paternalism and indifference. Native Americans responded with a new militancy that echoed the Civil Rights movement and, by 1968, produced the American Indian Movement (AIM). “Red power,” expressed in lawsuits, sit-ins, and demonstrations—some of them violent—created greater awareness of Native rights and fostered new economic and educational initiatives. But many Indians remained desperately poor and isolated. In the 1950s federal policies reverted to a pre–New Deal relationship with Native tribes. Indians were once again urged to assimilate, giving up tribal political rights and long-standing land claims. Natives were encouraged to relocate from reservations to urban areas. More than 100 tribes were stripped of their sovereignty and benefits. The federal Bureau of Indian Affairs (BIA), never beloved but still useful to Native groups, lost much of its mission. This again changed dramatically in 1962 when President John F. Kennedy ushered in what became known as the Self-Determination Era. Kennedy was first in a series of presidents of both parties to take Indian cultural and economic claims more seriously. Natives benefited from Great Society programs. President Richard Nixon played a major role as a proponent of the 1975 Indian Self-Determination and Education Assistance Act. By then the American Indian Movement was well under way. In 1969 AIM members occupied Alcatraz, the San Francisco Bay island formerly used as a federal prison. They would remain there, reclaiming Alcatraz as Indian land, for almost two years. In 1971 protesters briefly occupied Mount Rushmore, the South Dakota presidential monument near the 1876 site of a Sioux rout of General George Custer. Not all AIM protests were peaceful. In 1973 a violent clash at Wounded Knee, South Dakota, killed two activists and badly wounded a federal agent.
It ended after 73 days when the Nixon administration promised to review an 1868 treaty. AIM activist Leonard Peltier, who grew up on North Dakota’s Anishinabe Turtle Mountain Reservation, received two life sentences for murdering two federal agents during a 1975 shoot-out on the Pine Ridge Indian Reservation. Human rights groups maintain his innocence. The overall trajectory of U.S.-Native relations was toward greater autonomy and respect. Some “terminated” tribes, like the Menominee of the northern Great Lakes, had their authority restored. A 1971 Alaskan Native Claims Settlement Act and a 2000 restoration of 84,000 acres to Utah’s Ute tribe (accompanied by an official apology) advanced self-determination. During the presidency of George H. W. Bush, almost 90 percent of BIA staff had tribal roots. U.S. courts, dusting off long-ignored treaties, restored many Native rights related to fishing, farming, travel, and sovereignty. In 1979 Florida’s Seminole were the first to use court-affirmed rights to run bingo games. By the mid-1990s more than 100 casinos were operating on reservation lands across the United States. Gaming and other new businesses, including tax-free sales of tobacco and other highly taxed products, enriched many tribes. Some assimilated Natives reaffiliated with their tribes to participate in this new economy. But reliance on the greed of non-Indians proved no solution for fundamental inequities. Approximately 28,000 residents of Pine Ridge, the 3,500-square-mile Oglala Sioux reservation, live with high unemployment and annual family incomes below $4,000. High suicide and infant mortality rates have made life expectancy at Pine Ridge the nation’s shortest.

Further reading: Evans, Sterling, ed. American Indians in American History, 1870–2001: A Companion Reader. Westport, CT: Praeger, 2002; Iverson, Peter. We Are Still Here: American Indians in the Twentieth Century. Wheeling, IL: Harlan Davidson, 1998.

Marsha E. Ackermann

Angola, Republic of

The Republic of Angola is situated in south-central Africa. The country is bounded by the Democratic Republic of the Congo to the northeast, Zambia to the east, Namibia to the south, and the Atlantic Ocean to the west. It has an area of 1,246,700 square kilometers, and its capital city is Luanda. It is divided into 18 provinces, but one of them, Cabinda, is an enclave, separated from the rest of the country by the Democratic Republic of the Congo. The topography varies from arid coastal areas and dry savannas in the interior south to rain forests in the north and a wet interior highland. On the plateau, heavy rainfall causes periodic flooding. Overuse and degradation of water resources have led to inadequate supplies of potable water. Other current environmental issues are deforestation of the tropical rain forest, overuse of pastures, soil erosion, and desertification, which results in a loss of biodiversity. Angola had approximately 12,127,071 inhabitants in 2006. There were around 90 ethnic groups in the country, and although Portuguese was the official language, Bantu and other African languages were spoken by a high percentage of the population. Although Roman Catholicism remained the dominant religion, evangelical and indigenous religions were also very strong. Angola’s socioeconomic conditions rank in the bottom 10 in the world. Health conditions are inadequate because of years of insurgency. There is a high prevalence of HIV, of vector-borne diseases like malaria, and of waterborne diseases. Although the agricultural sector was formerly the mainstay of the economy, it contributed only a small percentage of GDP because of the disruption caused by civil war. The products derived from this sector are bananas, sugarcane, coffee, sisal, corn, cotton, manioc (tapioca), tobacco, vegetables, and plantains. The country also has forest products and fish. Food must be imported in large quantities. Angola is one of Africa’s major oil producers.
The oil industry is the most important sector of the economy, and it constitutes the majority of the country’s exports. Angola has mineral resources: diamonds, iron, uranium, phosphates, feldspar, bauxite, and gold. Yet Angola is classified as one of the world’s poorest countries despite abundant natural resources. The reasons lie in the history of this country, which has suffered a 27-year civil war caused not only by ethnic factors but also by disputes over natural resources. Angola was a Portuguese colony. In the 1960s liberation movements such as the Popular Movement for the Liberation of Angola (MPLA) and the National Liberation Front of Angola (FNLA) began to call for independence. In 1961 the native Angolans rose in a revolt that was repressed. In 1964 a group inside the FNLA broke away and created the National Union for the Total Independence of Angola (UNITA). During the mid-1960s and 1970s there was a series of guerrilla actions, which ended with the negotiation of independence in 1975. But the postindependence period was marked by instability. The MPLA declared itself the government of the country soon after independence, and a civil war broke out between the MPLA, UNITA, and the FNLA, exacerbated by foreign intervention during the cold war. Angola, like many African countries, became involved in the struggle between the superpowers, and many African political leaders resorted to U.S. or Soviet aid. The MPLA government received large amounts of aid from Cuba and the Soviet Union, while the United States supported first the FNLA and then UNITA. In 1976 the FNLA was defeated by Cuban troops, leaving the competition for government control and access to natural resources to the MPLA and UNITA. By the end of the cold war era, in 1991, a cease-fire was signed between the government and UNITA; both agreed to make Angola a multiparty state and called for elections. In 1992 the MPLA was elected to lead the nation, but UNITA disputed the result and charged the MPLA with fraud.
This situation caused tensions, and the war continued until 1994, when negotiations began with the help of South Africa and the United Nations (UN). The war ended in 2002, when Jonas Savimbi, the president of UNITA, was killed in battle. As a result of the civil war, up to 1.5 million lives were lost and 4 million people were displaced. Since the war Angola has been slowly rebuilding, increasing foreign exchange and implementing reforms recommended by the International Monetary Fund.

Further reading: Abbot, Peter, and Manuel Rodrigues. Modern African Wars (2): Angola and Moçambique 1961–1974. Oxford: Osprey Publishing, 1988; Campbell, Horace. Militarism, Warfare, and the Search for Peace in Angola. In The Uncertain Promise of Southern Africa. Bloomington: Indiana University Press, 2001; Central Intelligence Agency (CIA); Klare, Michael T. “The New Geography of Conflict.” Foreign Affairs (May/June 2001); Klare, Michael T. Resource Wars: The New Landscape of Global Conflict. New York: Henry Holt, 2001.

Verónica M. Ziliotto

ANZUS Treaty

The ANZUS Security Treaty binds together Australia, New Zealand, and the United States. ANZUS was signed in San Francisco on September 1, 1951, and took effect on April 28, 1952. It remains in force, although it has increasingly come under attack in both Australia and New Zealand since the 1980s, and New Zealand has essentially withdrawn from the alliance. Beginning in the late 1940s the United States abandoned the isolationist impulse that had directed its foreign policy in previous decades to form and maintain a global network of alliances. U.S. policy makers in the cold war were especially interested in opposing the rise of communism. Following the outbreak of the Korean War in 1950, the United States became concerned with constructing a series of regional security arrangements to guard against communist attacks. For Australia and New Zealand, alliances were a necessity because of their need for protection, particularly from Communist China and the Soviet Union, and because of the problems associated with decolonization in Asia and the Pacific. Both countries were also concerned about the return of Japan to sovereign status, and sought a replacement for Great Britain as a dependable security guarantor. The United States offered exactly what both sought. The ANZUS Treaty stipulates that an armed attack on New Zealand, Australia, or the United States would be dangerous to each signatory’s own peace and safety. Accordingly, each country would act to meet the common danger in accordance with its constitutional processes. In the early and mid-1950s the United States rejected Australian efforts to move toward closer security cooperation, such as cooperative and systematic military planning and the designation of national security units that might fall under the ANZUS name and assignment, similar to the North Atlantic Treaty Organization (NATO) model.
After the ANZUS pact was signed, nonsecurity ties between the three countries grew, paralleling the building of their security relations. Commercial, cultural, and other forms of U.S. influence were largely welcomed during the cold war years. The great disparity of size and power generated irritation within Australia and New Zealand, however, and both countries complained about the way they were treated by the United States, although both developed close military cooperation with the United States. Australia, in particular, became a valuable site for U.S. communication and surveillance facilities and naval ship visits. As the cold war began to wind down in the 1980s, the threat from outside sources lessened. Citizens of the two nations, particularly members of the labor parties, began to question the elaborate security ties with the United States. Citizens of New Zealand and Australia challenged ANZUS as more a method for the United States to enlist support for its military agenda than a means of providing security for them. In 1984 New Zealand banned the entry of U.S. Navy ships into its ports in the belief that the ships were carrying nuclear weapons or were nuclear powered. The United States argued that New Zealand’s action compromised U.S. military operations. Additionally, Americans were offended by the manner in which New Zealand presented its differences with U.S. policy makers. When President Ronald Reagan announced in 1986 that the United States would decline to abide by the provisions of the unratified Strategic Arms Limitation Treaty (SALT) II that restricted nuclear weapons, New Zealand stated that the United States had not been negotiating in good faith. The United States responded by rescinding its ANZUS-based security obligations toward New Zealand in 1986. The future of ANZUS is in doubt. New Zealand has shown no indication that it wants to resume the partnership.
For Australia, the alliance with the United States has continued to be a foundation of its defense policy. See also South East Asia Treaty Organization (SEATO).

Further reading: Albinski, Henry S. ANZUS: The United States and Pacific Security. Lanham, MD: University Press of America, 1987; McIntyre, W. David. Background to the ANZUS Pact: Policy-Making, Strategy, and Diplomacy, 1945–55. New York: St. Martin’s Press, 1995; Young, Thomas-Durell. Australia, New Zealand, and U.S. Security Relations, 1951–1986. Boulder, CO: Westview Press, 1992.

Caryn E. Neumann

appropriate technology

Appropriate technology is an approach of using environmentally conscious, cost-effective, small projects rather than high technology and huge, expensive projects to improve the lives of people around the world. Mohandas K. Gandhi was an early advocate of appropriate technology, arguing that the massive Indian population could not afford the waste and expense involved with many development projects advocated in the West. Gunnar (d. 1987) and Alva Myrdal (d. 1986), an economist and a diplomat from Sweden, also supported the use of appropriate technology in Third World or Global South development projects. In Asian Drama: An Inquiry into the Poverty of Nations and The Challenge of World Poverty: A World Anti-Poverty Outline, Gunnar Myrdal focused on ways to break out of the cycle of poverty whereby low productivity led to low income that in turn contributed to low savings and low capital. A number of countries and individual development experts have successfully utilized appropriate technology. In the poor West African nation of Burkina Faso, young people were given short training courses in administering shots; they then went out to rural centers in the countryside, where they gave shots to children. Thus at low cost the nation’s children were inoculated against the five major childhood diseases. The Egyptian architect Hassan Fathy (d. 1989) attempted to solve the problem of providing low-cost housing by using cheap mud brick that was easily available and aesthetically pleasing. After World War II he built an experimental village, Gourna, in southern Egypt, entirely of mud brick structures; unfortunately the project was mired in bureaucratic and political problems, and Fathy’s approach was adopted only by some artists in Egypt and wealthy Americans in the Southwest.
In 1977 Wangari Muta Maathai of Kenya initiated the Green Belt movement, in which women were mobilized to reforest degraded land; she also fought for the cancellation of African debt and an end to political corruption. Her work for the environment was recognized with the 2004 Nobel Peace Prize. In another small but successful project, pest-resistant grasses were planted around crops to increase productivity, and the grasses were fed to livestock, increasing profits from both crops and livestock. In the field of health care, President Jimmy Carter’s center in Atlanta, Georgia, aimed to eliminate guinea worm disease, which afflicted many poor people, especially in western Africa. The Bill and Melinda Gates Foundation, the richest private philanthropic organization, established programs to raise vaccination rates and eliminate other virulent diseases. In Asia microfinance projects such as the Grameen Bank provided loans for poor women (who had a more reliable rate of repayment than men) as start-up money for small businesses or for the purchase of farm animals such as chickens, goats, and cows that provided much-needed income and protein to supplement meager diets. Until late in the 20th century the World Bank and other aid organizations tended to fund high-tech projects such as dams, factories, or roads. Toward the end of the century agencies shifted their priorities, but politicians preferred larger, more visible projects with investment from the top rather than on the grassroots level. Although advocates of appropriate technology and environmentalists argued that bigger was not always better, that it was not necessary to build the world’s highest skyscraper or biggest dam, nations as diverse as Egypt, Turkey, and China went ahead with the huge Aswān Dam, Atatürk Dam, and Three Gorges Dam, and others continued the construction of environmentally damaging projects. See also Third World/Global South.

Further reading: Fathy, Hassan.
Natural Energy and Vernacular Architecture: Principles and Examples with Reference to Hot Arid Climates. Chicago: The University of Chicago Press for The United Nations University, 1986; Sachs, Jeffrey. The End of Poverty: Economic Possibilities for Our Time. London: Penguin Press, 2005; Tenner, Edward. Why Things Bite Back: Technology and the Revenge of Unintended Consequences. New York: Vintage Books, 1997.

Janice J. Terry

Arab-Israeli-Palestinian peace negotiations

Five major wars and numerous peace negotiations have failed to resolve the ongoing conflict between the Israelis and Palestinians over land and statehood. Israel declared its independence and won the first war against opposing Arab states and the Palestinians in 1948. The 1949 armistice mediated by Ralph Bunche, a U.S. diplomat to the United Nations, ended the hostilities but did not result in an actual peace treaty, and technically a state of war still existed. Although the Arab states refused to recognize Israel, Gamal Abdel Nasser of Egypt supported behind-the-scenes secret negotiations in the early 1950s, but when Israeli prime minister David Ben-Gurion demanded face-to-face negotiations, the diplomatic efforts failed. After the 1956 war, the United Nations, with Egypt’s agreement, placed peacekeeping forces in the Sinai Peninsula (Egyptian territory) at strategic locations along the borders between Israel and Egypt. Their removal at Egypt’s request was the ostensible cause of the 1967 war, in which Israel decisively defeated the surrounding Arab nations and occupied East Jerusalem, the West Bank, the Gaza Strip, the Golan Heights (Syrian territory), and the Sinai Peninsula. Following this major victory, Israel expected that the Arabs would sue for peace and that some border modifications would be made. However, the Arabs refused to negotiate until Israel had withdrawn from all the territory occupied in the 1967 war and some resolution of the Palestinian refugee issue and demands for self-determination had been achieved. Following the 1967 war, the Palestinians concluded that only armed struggle against Israel would achieve their national aspirations, and the Palestine Liberation Organization (PLO) emerged as their sole political and military representative. Israel and its U.S. ally both considered the PLO a terrorist organization and refused to negotiate with it.
Various diplomatic settlements were suggested, but all failed to break the impasse.

shuttle diplomacy

To regain the Sinai and to bring the United States in as a mediator to the dispute, Anwar Sadat of Egypt launched a surprise attack against the Israeli forces occupying Sinai in 1973. Although Israel suffered some initial defeats, its military soon recovered and regained the offensive. With U.S. and UN diplomacy, a cease-fire was declared, and both sides announced they had won the war. The U.S. secretary of state, Henry Kissinger, then embarked on shuttle diplomacy between Egypt, Jordan, Syria, and Israel in an attempt to reach a settlement to the conflict. He envisioned a step-by-step process that the U.S. would control. As a result, various phased withdrawals of Israeli forces from the Sinai were agreed upon and were to be guaranteed by U.S. forces stationed in the peninsula, but the overall cause of the conflict, namely the conflicting claims of Israel and the Palestinians, remained unresolved. Sadat attempted to revive the process by making a dramatic visit to Israel, where he spoke before the Knesset, the Israeli parliament, in 1977. Sadat was the first Arab leader publicly to visit Israel, and his gesture altered the psychological dimensions of the conflict and made it appear that peace between the Arabs and Israel was possible. In 1978 U.S. president Jimmy Carter brought Israeli prime minister Menachem Begin and Sadat together for 13 days of occasionally acrimonious negotiations at Camp David. These negotiations led to the 1979 peace treaty between Egypt and Israel, signed at a well-publicized ceremony hosted by Carter on the White House lawn. The treaty provided for the gradual withdrawal of Israeli forces from the Sinai and full diplomatic recognition between the two states.
Carter anticipated that further negotiations to resolve the differences between Israel and the Palestinians, the cessation of Israeli settlements in the Occupied Territories, and the return of some land for an overall peace settlement would follow. The Arab states and the Palestinians rejected the treaty because it did not resolve most of the basic issues, and Israel continued to build settlements in the territories, further angering the Palestinians. In 1981 Egyptian Islamists who opposed the treaty assassinated Sadat; however, his successor, Hosni Mubarak, maintained the treaty in what has been called a “cold peace” between Egypt and Israel. In 1994 a full peace treaty between Israel and Jordan under King Hussein was signed. Hussein and then Israeli prime minister Yitzhak Rabin, both military officers, had a cordial relationship, and this treaty has also held. During the 1970s the PLO also gained recognition from a number of nations around the world. In spite of Israel’s opposition, Yasir Arafat even addressed the UN General Assembly in New York City. Israel attempted to eliminate the PLO by attacking its power base in Lebanon in 1982. The war seriously damaged the PLO infrastructure but did not destroy the organization, which, with international assent, moved its base of operations to Tunisia. UN peacekeeping forces remained in southern Lebanon along the Israeli border, but a new indigenous Lebanese Islamist movement, Hizbollah, then began attacks on Israeli forces both in Lebanon and Israel. As early as 1974 the PLO hinted at the acceptance of a two-state solution, or the so-called Palestinian ministate comprising East Jerusalem, the West Bank, and the Gaza Strip, occupied by Israel in the 1967 war. The Arab governments also made gestures regarding acceptance of Israel; the Fahd Plan of 1981, sponsored by Saudi Arabia, called for all the states in the region to live in peace.
The Fez Plan of 1982 reiterated the Arab states’ willingness to consider trading land for peace as long as some form of Palestinian self-determination was achieved. These overtures were largely ignored by both Israel and its major ally, the United States, although the United States did have some secret contacts with the PLO. After 1988, when the PLO and Arafat agreed to recognize Israel’s right to exist, to recognize UN Resolution 242, and to renounce terrorism, the United States agreed publicly to negotiate with it as the representative of the Palestinians. The PLO and Arafat were further weakened by their support for Saddam Hussein during the First Gulf War; in retaliation the Gulf States, especially Kuwait, halted financial support for the PLO, and Kuwait ousted tens of thousands of Palestinians, who then generally took refuge in Jordan. With the collapse of the Soviet Union the PLO also lost a key ally. With the end of the cold war, the United States became the major mediator in the long-running dispute. In 1991 U.S. secretary of state James Baker succeeded in bringing all of the parties to the conflict—Jordanians, Syrians, Israelis, and Palestinians—together for the first time for direct negotiations. The Palestinians were represented by a delegation from the Occupied Territories who unofficially represented the PLO. The Israeli prime minister, Yitzhak Shamir of Likud, the hard-line party of the Right, was a reluctant participant, and the negotiations dragged on without appreciable progress until 1993.

direct negotiations

At the same time, in 1993 the new Israeli Labor Party government under Yitzhak Rabin and Shimon Peres agreed to direct negotiations with PLO representatives. These top secret talks were held in Norway, a respected neutral party, and resulted in the first Oslo Accords.
The accords included the Declaration of Principles (DOP) and letters of mutual recognition that were publicly signed in September 1993 on the White House lawn with President Bill Clinton as host. The occasion culminated with a famous handshake between the two old enemies, Israeli prime minister Yitzhak Rabin and Yasir Arafat. Under Oslo I, Israel agreed to withdraw from Jericho and most of the Gaza Strip, and a five-year process of negotiations for further withdrawals was to result in the creation of what the Palestinians believed would be an independent Palestinian state. The PLO was to maintain order in its territories and prevent attacks on Israelis. The territories were then turned over to the Palestinian Authority (PA) under the PLO.

In 1994 a Jewish settler massacred Palestinian worshippers in the Ibrahimi Mosque in Hebron, and Hamas, the main Palestinian Islamist group, retaliated with a car bomb in Israel that killed Israeli civilians. Arafat condemned suicide attacks, but they continued. Meanwhile, the PA was also charged with corruption and inefficiency and lost much popular support among the Palestinians.

Under Oslo II in 1995, Israel began a phased withdrawal from Ramallah, Nablus, and Bethlehem on the West Bank. However, the issues of Israeli settlements, the final status of Jerusalem, and the refugees remained undecided. Militants on both sides opposed these agreements, and in 1995 an Israeli radical assassinated Rabin. Meanwhile, violence in the territories continued. None of these negotiations settled the dispute between Israel and Syria regarding the Golan Heights. The Likud, under Binyamin Netanyahu, won the elections following Rabin’s death, and once again the negotiations stalled. Israel withdrew from Hebron in 1997, one year past the agreed-upon time frame. 
In the Wye Memorandum of 1998 (named after the Wye Plantation in Maryland, where the talks were held) the United States mediated further Israeli withdrawals, and Arafat pledged to combat terrorism and to take steps to ensure further Israeli security. However, Netanyahu’s government collapsed owing to mounting opposition from within his own party, and the withdrawals were delayed. Thus the expected deadline of 1999 passed without the establishment of a viable independent Palestinian state on the 22 percent of historic Palestine proposed for it. In addition, new Jewish settlements continued to be built or enlarged within the territories still held by Israel.

In a popular move within Israel, Prime Minister Ehud Barak withdrew Israeli troops from southern Lebanon in spring 2000. In the summer Barak met with President Clinton and Arafat at Camp David. There Barak presented an offer for a final settlement that involved Israeli withdrawal from much of the West Bank and the Gaza Strip; Israeli control over the airspace, water aquifers, and all of Jerusalem; the denial of the right of return of Palestinian refugees; and the continuation of some of the settlements. Although Clinton pressured Arafat to accept the proposal, Arafat knew he could not give up the right of return and some Palestinian control over East Jerusalem, particularly the holy site of Haram al-Sharif, and survive politically. He rejected the offer but failed or refused to present a counteroffer, and the talks failed. Shortly thereafter a Palestinian uprising, the al-Aqsa Intifada, broke out. As the violence mounted, many Israelis lost confidence in the peace process and in Barak.

A last attempt to revive the process was made at Taba (in the Sinai Peninsula close to the Israeli border) in January 2001. Under the Taba proposals, Israel would retain about 6 percent of the West Bank and reduce the number of settlements, and the Palestinians would receive a state. 
But the two sides could not agree on the status of Jerusalem, the right of return, or the Israeli settlement near Jericho that effectively split the Palestinian West Bank into two parts. The Likud Party under Ariel Sharon won the ensuing Israeli elections, and Sharon became the new prime minister in 2001; he supported the crushing of the al-Aqsa uprising by military means. In 2002 the Arab states adopted the Saudi peace initiative, whereby they would recognize Israel in exchange for the creation of a Palestinian state in the territories.

In 2003 some former Israeli officials and leading PLO members proposed the Geneva Plan. Rather than adopting the step-by-step process that had not succeeded, this plan was a full, comprehensive agreement in which the end game was known. The plan provided for a Palestinian state in most of the West Bank and all of the Gaza Strip and Israeli control over three settlement blocs in the West Bank and around Jerusalem. Palestinians would control the Haram al-Sharif in East Jerusalem, and Jews would control the Wailing Wall. The refugees would receive some compensation and the freedom to return to the Palestinian state. Provisions were made for mediation of disputes, and the Palestinians were to have a security force, not an army. Israel would keep two monitoring posts as an early warning system on the West Bank for no more than 15 years. Sharon rejected the plan, although it received some muted political support within Israel. Arafat did not give full assent to the plan but did not openly reject it. Nor did other states, especially the United States, adopt the plan, and it died for want of support.

Sharon and his successor, Ehud Olmert, adopted a policy of unilateral disengagement whereby Israel made decisions without negotiations or discussions with the Palestinians. 
Israel withdrew from the Gaza Strip and dismantled the settlements there, but it periodically launched military attacks into the territory and retained control over its borders, thereby cutting it off from trade and outside support. The Bush administration’s support for Israel and Sharon lessened the credibility of the United States, among Palestinians and other Arabs, as a neutral mediator in the dispute. After Hamas won the Palestinian elections in 2006, negotiations broke down entirely. Although Hamas suggested implementing a long-term cease-fire, it refused to recognize Israel’s right to exist. Israel considered Hamas, which continued suicide bomb attacks against Israelis within the territories and Israel proper, a terrorist organization and rejected all negotiations with it. As the peace process dragged on, a generation of disillusioned and angry Palestinians grew up under Israeli military occupation. Conversely, many Israelis knew the Palestinians only as suicide bombers or violent opponents.

See also Arab-Israeli War (1967); Arab-Israeli War (1973); Arab-Israeli War (1982).

Further reading: Ben-Ami, Shlomo. Scars of War, Wounds of Peace. New York: Oxford University Press, 2006; Gelvin, James L. The Israel-Palestine Conflict: One Hundred Years of War. Cambridge: Cambridge University Press, 2005; Sher, Gilead. The Israeli-Palestinian Peace Negotiations, 1999–2001. London: Routledge, 2005.

Janice J. Terry

Arab-Israeli War (1956)

The nationalization of the Suez Canal was the ostensible cause of the 1956 Arab-Israeli War. After the United States refused aid for building the Aswān Dam, Gamal Abdel Nasser nationalized the Suez Canal on July 26, the anniversary of the 1952 revolution, to finance construction of the dam, his dream project. Egypt managed to keep the canal running, much to the consternation of France and Britain. In announcing the canal’s nationalization, Nasser had carefully adhered to international law. The United States, especially Secretary of State John Foster Dulles, an expert in international law, opposed the use of force to retake the canal and instead proposed a diplomatic settlement. The oil shipped through the canal was vital to the British and French economies, and it was apparent that the United States, then self-sufficient in oil, did not intend to make up any oil losses to its European allies.

Great Britain and France were determined to take back the canal by force. The British prime minister, Anthony Eden, personally detested Nasser, and his Conservative (Tory) government was reluctant to cede British imperial control. The French were angry over Nasser’s support for the Algerians in the ongoing war there. The Israelis feared Nasser’s growing popularity in the Arab world and wanted him removed from power before he could unify the Arabs and possibly form a united front to attack them. The Israelis secretly approached the French with a proposal for a joint military action against Egypt; the French then brought Great Britain into the plan. Although some British cabinet members opposed joining the alliance, Eden was determined to bring Nasser’s regime down, and the tripartite agreement of the French, British, and Israelis was concluded. According to the plan, Israel was to launch a three-pronged attack across the Sinai Peninsula, quickly take the territory, and stop the offensive before reaching the canal. 
The British and French would then bombard Egyptian airfields and parachute forces along the canal on the pretext that they were there to stop the war between Egypt and Israel. The Israelis launched the attack in October 1956, quickly cut through Egyptian defense lines, and took the Sinai, but then stopped before reaching the banks of the canal. The British and French were late in launching their attack but ultimately took control of the canal.

The war was a clear-cut military victory for Israel, Britain, and France, but Nasser immediately accused the three nations of collusion. Although Eden and the French for years publicly denied any collusion, firsthand accounts by Israeli and other military and political leaders ultimately revealed the secret agreement. With some justification, Nasser argued that the attack proved that Britain and France still had imperialist designs on the Arab world and that Israel was also a threat to its Arab neighbors. Nasser thus turned a military defeat into a political victory and became the most popular man in the Arab world. Contrary to Western and Israeli hopes, Nasser was not overthrown, and he consolidated power after the 1956 war.

The war placed the United States in the awkward position of having to condemn its closest allies in the United Nations. The Soviets gained popularity in the Arab world by supporting Egypt. The war also diverted world attention away from the brutal suppression of the 1956 Hungarian revolt by Soviet forces. In the face of international condemnation, Britain and France were forced to withdraw in December 1956, and the canal reverted to Egyptian control. Subsequently Eden, suffering from ill health brought on in part by the stress of the conflict, stepped down as prime minister. The Israelis were reluctant to withdraw from the strategic area of Sharm al-Sheikh in the south of Sinai and the Gaza Strip. President Eisenhower intervened and threatened to cut off all U.S. 
economic aid if they did not return all the territories to Egypt. Israeli forces finally left in March 1957. However, Israel did gain a unilateral agreement from the United States that the Gulf of Aqaba up to the southern Israeli port of Elath was to be considered an international waterway. Egypt and the Arab states never recognized the legality of Aqaba as an international waterway but for a decade did not challenge Israeli shipping through the gulf. Israel made it clear that any future closure of the waterway would be casus belli, or cause for war, and its threatened closure was one cause of the