Anemia – Pernicious Anemia, Iron Deficiency Anemia, Glucose-6-Phosphate Dehydrogenase (G6PD) Deficiency, Sickle-Cell Disorders, Thalassemia, Porotic Hyperostosis
Anemia, an insufficiency of red blood cells (RBC) and hemoglobin for oxygen-carrying needs, results from a variety of disease processes, some of which have existed since ancient times. It was defined in quantitative terms in the mid-nineteenth century, but before that the evidence of anemia is found in the descriptions of pallor or in the occurrence of diseases that we now know cause anemia. For example, lead poisoning decreases RBC production and was apparently widespread in Rome. Intestinal parasites cause iron deficiency anemia and were known to exist in ancient times. Parasites found in paleopathological specimens include many that can cause intestinal blood loss and anemia.
Congenital abnormalities in RBC metabolism (including glucose-6-phosphate dehydrogenase [G6PD] deficiency and various forms of thalassemia and sickle-cell anemia) were probably also present in ancient times. All of these confer some protection against malaria, and the incidence of the relatively mild, heterozygotic thalassemia minor probably increased in the Mediterranean region after the appearance of falciparum malaria, the most fatal type of the disease.
Iatrogenic anemia was also common throughout most of recorded history, because bleeding was considered therapeutic from Greek and Roman times until the mid-nineteenth century.
Pernicious Anemia
Awareness of this type of anemia appears in the second half of the nineteenth century. Thomas Addison of Guy’s Hospital described a severe, usually fatal form of anemia in 1855. Macrocytes were recognized by George Hayem in 1877; he also noted a greater reduction of hemoglobin than of RBCs in pernicious anemia (PA). In 1880, Paul Ehrlich found large nucleated RBCs with dispersed nuclear chromatin in the peripheral blood; he called them megaloblasts, correctly concluding that they were precursors of Hayem’s giant red cells that had escaped from the marrow.
In 1894, T. R. Fraser of Edinburgh became the first physician reported to have fed liver to patients with PA. Although he achieved a remission in one patient, others could not immediately repeat his success. But in 1918, George H. Whipple bled dogs and then fed them canned salmon and bread. After the dogs became anemic, he needed to remove very little blood to keep the hemoglobin low, although when the basal diet was supplemented, he found that he needed to bleed them more often. It turned out that liver was the most potent supplement, but it was not until 1936 that hematologists realized that the potency of liver was due to its iron content.
George Richards Minot of Harvard University became interested in the dietary history of his patients following the first reported syndromes due to deficiency of micronutrients. He focused attention on liver after Whipple’s observations in dogs; in trying to increase the iron and purines in the diet of patients with PA, he fed them 100–240 grams of liver a day. He observed that the reticulocytes (an index of bone marrow activity) started to rise 4–5 days after the liver diet was begun. In fact, patients showed a consistent rise in RBC count and hemoglobin levels whenever they consumed liver in adequate amounts. Attempts to isolate the active principle in liver showed that extracts were effective, and subsequently cyanocobalamin – vitamin B12 – was identified. It was purified in 1948 and synthesized in 1973.
The possible role of the stomach in PA was pointed out by Austin Flint in 1860, only 5 years after Addison’s description of the ailment appeared. In 1921, P. Levine and W. S. Ladd established that there was a lack of gastric acid in patients with PA even after stimulation. William B. Castle established that gastric juice plus beef muscle were effective in treating PA, although either alone was not. An autoimmune basis for development of PA has been established in recent years.
Iron Deficiency Anemia
Iron deficiency anemia is by far the most common cause of anemia in every part of the world today. It undoubtedly existed in ancient times as well. In this condition, the fingernails develop a double concave curvature, giving them a spoon shape (koilonychia). A Celtic temple of Nodens in Gloucestershire, England, built in Asclepian style after the Romans had left Britain in the fourth century A.D., contains a votive offering of an arm fashioned crudely proximally but with increasing detail distally; it shows characteristic koilonychia.
Pallor, the cardinal sign of anemia, is seen especially in the face, lips, and nails, often imparting a greenish tint to Caucasians – a presenting sign that led to the diagnosis of chlorosis, or the “green sickness,” in the sixteenth century. In the seventeenth century, pallor became associated in the popular mind with purity and femininity, and chlorosis became known as the “virgin’s disease.” Constantius I, father of Constantine the Great, was called Constantius Chlorus because of his pale complexion, and it seems most likely that he had a congenital form of chronic anemia. (He came from an area known today to have a relatively high frequency of thalassemia.)
Preparations containing iron were used therapeutically in Egypt around 1500 B.C. and later in Rome, suggesting the existence of iron deficiency. In 1681, Thomas Sydenham mentioned “the effect of steel upon chlorosis. The pulse gains in strength and frequency, the surface warmth, the face (no longer pale and death like) a fresh ruddy coulour … Next to steel in substance I prefer a syrup … made by steeping iron or steel filings in cold Rhenish wine.” In 1832 P. Blaud described treatment of chlorosis by use of pills of ferrous sulfate and potassium carbonate that “returns to the blood the exciting principle which it has lost, that is to say the coloring substance.”
Children have increased needs for iron during growth, as do females during menstruation, pregnancy, and lactation. Chronic diarrhea, common in the tropics where it is often associated with parasitism, decreases iron absorption, whereas parasitism increases iron losses. Estimates indicate that the needs of pregnant and lactating women in tropical climates are about twice those of women in temperate zones. In the tropics, high-maize/low-iron diets are common, and soils are iron deficient in many areas.
Glucose-6-Phosphate Dehydrogenase (G6PD) Deficiency
Favism, or hemolytic anemia due to ingestion of fava beans, is now known to occur in individuals deficient in G6PD. The Mediterranean type of G6PD deficiency is found in an area extending from the Mediterranean basin to northern India, an area corresponding to Alexander’s empire. Sickness resulting from ingestion of beans was probably recognized in ancient Greece, forming the basis for the myth that Demeter, Greek goddess of the harvest, forbade members of her cult to eat beans. Pythagoras, the philosopher and mathematician of the sixth century B.C. who had a great following among the Greek colonists in southern Italy, also seems to have recognized the disorder, since he, too, forbade his followers to eat beans. It is in that area of southern Italy that the incidence of G6PD deficiency is highest.
In 1956, the basis for many instances of this type of anemia was recognized as a hereditary deficiency of the enzyme G6PD within the red cell. Inheritance of G6PD is now recognized to be a sex-linked characteristic with the gene locus residing on the X chromosome.
It is estimated that currently over 100 million people in the world are affected by this deficiency. Nearly 3 million Americans carry the trait for G6PD deficiency, which is also found among Sephardic and Kurdish Jews, Sardinians, Italians, Greeks, Arabs, and in the Orient among Filipinos, Chinese, Thais, Asiatic Indians, and Punjabis. It has not been found among North American Indians, Peruvians, Japanese, or Alaskan Eskimos.
The first documented report of drug-induced (as opposed to fava-bean-induced) hemolytic anemia appeared in 1926 following the administration of the antimalarial drug pamaquine (Plasmoquine). During World War II, after the world’s primary sources of quinine were captured by the Japanese, about 16,000 drugs were tested for antimalarial effectiveness. In 1944, an Army Medical Research Unit at the University of Chicago studying these potential antimalarial drugs encountered the problem of drug-induced anemia. Research by this group over the next decade elucidated the basic information on G6PD deficiency.
Pamaquine was found to cause hemolysis in 5–10 percent of American blacks (about 10–14 percent of black American males are G6PD-deficient) but only rarely in Caucasians, and the severity of the hemolysis was observed to be dependent on the dose of the drug. Similar sensitivity to the related drug primaquine and many other related drugs was demonstrated, and the term “primaquine sensitivity” came to be used to designate this form of hemolytic anemia. It was subsequently demonstrated that the hemolysis was due to an abnormality in the erythrocytes of susceptible individuals and that it was self-limited even if administration of primaquine was continued. Several biochemical abnormalities of the sensitive red cells, including glutathione instability, were described. In 1956, Paul E. Carson and colleagues reported that G6PD deficiency of red cells was the com-mon denominator in individuals who developed hemolysis after one of these drugs was administered, and the term G6PD deficiency be-came synonymous with primaquine sensitivity. It was soon found that this deficiency was genetically transmitted.
Sickle-Cell Disorders
Sickle-cell disorders have existed in human populations for thousands of years. However, the discovery of human sickle cells and of sickle-cell anemia was first announced in the form of a case report by James Herrick at the Association of American Physicians in 1910. In 1904, Herrick had examined a young black student from Grenada who was anemic; in the blood film he observed elongated, sickle-shaped RBCs.
By 1922, only three cases of this type of anemia had been reported. But in that year Verne R. Mason, a resident physician at Johns Hopkins Hospital, described the first patient recognized to have the disease at that institution. Mason introduced the term “sickle cell anemia,” which became the standard designation.
In 1923, C. G. Guthrie and John Huck performed the first genetic investigation of this disease and developed a technique that became an indispensable tool for the identification of sickle trait in later investigations, population surveys, and genetic studies.
Virgil P. Sydenstricker, of Georgia, recorded many of the clinical and hematologic features of sickle-cell disease. He introduced the term “crisis,” was the first to suggest that the anemia was hemolytic, and reported the first autopsy describing the typical lesions of the illness, including a scarred, atrophic spleen. He was also the first to describe sickle-cell anemia in childhood, noting the peculiar susceptibility of victims to infection, with a high mortality rate.
The first case of sickle-cell anemia to be re-ported from Africa was described in 1925 in a 10-year-old Arab boy in Omdurman, and the first survey of the frequency of sickle-cell trait in the African population was reported in 1944 by R. Winston Evans, a pathologist in the West African Military Hospital. In a study of almost 600 men of Gambia, the Gold Coast, Nigeria, and the Cameroons, he found approximately 20 percent to have the trait, a sickling rate about three times that in the United States.
In East Africa, E. A. Beet found a positive test for sickling in 12.9 percent of patients in the Balovale district of northern Rhodesia. He also reported striking tribal differences in the prevalence of sickle-cell trait. By 1945, H. C. Trowell had concluded that sickle-cell anemia was probably the most common and yet the least frequently diagnosed disease in Africa. He noted that in his own clinic in Uganda no cases had been recognized before 1940, but 21 cases were seen within the first 6 months of 1944 when he began routine testing for sickling.
For many years, it was thought that sickle-cell anemia was rare in Africa in contrast to the greater prevalence observed in the Americas (especially North America), and some thought that interbreeding with white persons brought out the hemolytic aspect of the disease. It was not until the mid-1950s that it was understood that few homozygous sickle-cell cases came to medical attention because of a high infant mortality rate from the disease. This was demonstrated in Léopoldville when J. and C. Lambotte-Legrand found that only two cases of sickle-cell anemia had been reported among adults in the Belgian Congo, although sickling occurred in about 25 percent of the black population. They subsequently followed 300 infants with sickle-cell anemia, finding that 72 died before the end of the first year of life and that 144 had perished by the age of 5.
Subsequent research by others, however, established the fact that sickle-cell anemia patients who did survive to adolescence came from the higher social groups, and that the standard of living, the prevalence of infection and nutritional deficiency, and the level of general health care were the principal factors affecting the mortality rate from sickle-cell anemia in young children. By 1971, as improved health care became available, the course of the disease was altered; at the Sickle Cell-Hemoglobinopathy Clinic of the University of Ghana, it was reported that 50 percent of the patients with sickle-cell anemia survived past age 10.
Geographic distribution of sickle-cell gene frequency was mainly charted by the mid-twentieth century. The prevalence of sickling in black populations of the United States was well established by 1950. Numerous studies performed in Central Africa and South Africa also revealed that, although the frequency of sickling varied, the occurrence of the gene that caused it was confined mostly to black populations.
In Africa, after World War II, surveys established that across a broad belt of tropical Africa, more than 20 percent of some populations were carriers of the sickle-cell trait. Significantly, a high frequency of sickle trait was also found among whites in some areas of Sicily, southern Italy, Greece, Turkey, Arabia, and southern India. Yet, by contrast, sickling was virtually absent in a large segment of the world extending from northern Europe to Australia. These observations led to several hypotheses about where the mutant gene had had its origin and how such high frequencies of a deleterious gene are maintained.
Hermann Lehmann presented evidence that sickling arose in Neolithic times in Arabia, and that the gene was then distributed by migrations eastward to India and westward to Africa. He and others have speculated that the frequency of the gene increased significantly in the hyperendemic malarial areas of Africa and spread northward across the Sahara along ancient trade routes. Because the eastern and western Arabian types of sickle-cell disease are different, spread must have occurred along sea trade routes, accounting for similarities in sickle-cell anemia in eastern Africa, eastern Arabia, and southern India.
Obviously then, there was much interest generated in the cause of the very high frequency of the sickle-cell gene in Africa. In 1946, Beet in Rhodesia noted that only 9.8 percent of sicklers had malaria, whereas 15.3 percent of nonsicklers were affected. P. Brain, of Southern Rhodesia, suggested that RBCs of sicklers might offer a less favorable environment for survival of malarial parasites. In 1954, J. P. Mackey and F. Vivarelli suggested that “the survival value [of the trait] may lie in there being some advantage to the heterozygous sickle cell individual in respect of decreased susceptibility of a proportion of his RBC to parasitization by P. falciparum.”
A relationship between sickle-cell trait and falciparum malaria was reported by A. C. Allison in 1954. He noted that the frequency of heterozygous sickle-cell trait was as high as 40 percent in some African tribes, suggesting some selective advantage; otherwise the gene would be rapidly eliminated, because most homozygotes die without reproducing. He decided that a high spontaneous mutation rate could not account for the high but varying frequencies of the gene and postulated that sickle-cell trait occurs as a true polymorphism, the gene being maintained by selective advantage to the heterozygote. Comparing the distribution of falciparum malaria and sickling, Allison found that high frequencies of the trait were invariably found in hyperendemic malarial areas. He also found that people with sickle-cell trait suffer from malaria not only less frequently but also less severely than other persons, and he concluded that, where malaria is hyperendemic, children with the sickle-cell trait have a survival advantage.
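Allison’s reasoning can be illustrated with the standard one-locus model of heterozygote advantage. The fitness values in this sketch are illustrative assumptions, not figures from the historical record: sickle-cell homozygotes (SS) are taken as nonreproducing, and normal homozygotes (AA) are assigned a 15 percent fitness cost from malaria.

```python
def next_gen(q, s=1.0, t=0.15):
    """Advance the sickle-allele frequency q by one generation of selection.

    Illustrative relative fitnesses (assumptions, not measured values):
      AA (normal, malaria-susceptible) = 1 - t
      AS (sickle trait, protected)     = 1
      SS (sickle-cell anemia)          = 1 - s
    """
    p = 1.0 - q
    w_aa, w_as, w_ss = 1.0 - t, 1.0, 1.0 - s
    w_bar = p * p * w_aa + 2 * p * q * w_as + q * q * w_ss  # mean fitness
    # Frequency of S among survivors: each AS carries one S, each SS two.
    return (p * q * w_as + q * q * w_ss) / w_bar

# Starting from a rare mutant, selection drives q to the stable
# equilibrium q* = t / (s + t) rather than to fixation or loss.
q = 0.01
for _ in range(300):
    q = next_gen(q)
print(round(q, 4))                # 0.1304, i.e. 0.15 / 1.15
print(round(2 * (1 - q) * q, 2))  # carrier (trait) frequency 2pq: 0.23
```

A heavier malaria burden (larger t) raises the equilibrium: with t = 0.4 the model sustains a trait frequency of roughly 40 percent, of the order Allison observed in some tribes.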
Thalassemia
Thalassemia, an inherited form of anemia, results from deficient synthesis of a portion of the globin molecule and is also thought by some to have been maintained by the selective pressure of malaria. A variety of forms exist, classified by the globin chain affected and the site within that chain at which the genetically determined defect occurs. It has been suggested that thalassemia originated in Greece and spread to Italy when it was colonized by Greeks between the eighth and sixth centuries B.C. At present, it is most frequent in areas where ancient Greek immigration was most intense: Sicily, Sardinia, Calabria, Lucania, Apulia, and the mouth of the Po.
Porotic Hyperostosis
Chronic anemia from any cause produces bone changes, which can be recognized in archaeological specimens. These changes, called porotic hyperostosis (or symmetrical hyperostosis), result from an overgrowth of bone marrow tissue, which is apparently a compensatory process. Today, porotic hyperostosis is seen classically in X-rays of patients with congenital hemolytic anemias, as well as in children with chronic iron deficiency anemia. This is especially the case when the iron deficiency occurs in premature infants or is associated with protein malnutrition or rickets.
Porotic hyperostosis has been observed in archaeological specimens from a variety of sites, including areas of Greece, Turkey, Peru, Mexico, the United States, and Canada. In most areas, the findings are considered evidence of iron-deficiency anemia, although thalassemia was apparently responsible in some areas. Around the shores of the Mediterranean, malaria was probably the most frequent cause of chronic anemia at certain times.
Archaeological specimens from the Near East show an incidence of anemia of only 2 percent in early hunters (15,000–8000 B.C.), who consumed large amounts of animal protein and thus took in adequate dietary iron. By contrast, farming populations of 6500–2000 B.C. showed an anemia incidence of 50 percent.
Many New World natives subsisted primarily on corn (maize) and beans, a diet deficient in iron and protein. Moreover, because the food was cooked in water for long periods of time, the diet was also low in ascorbate and folate. Ascorbate helps convert dietary ferric iron to ferrous iron, which is more easily absorbed; therefore, deficiency of this vitamin compounded the problem of deficient dietary iron. A high incidence of iron deficiency has been demonstrated by modern studies of infants and children in populations living on a diet consisting mostly of maize and beans. It is not surprising, then, that in North America porotic hyperostosis was found in 54 percent of skeletons in the canyons of northern Arizona and northern New Mexico, among a population that ate little meat and subsisted mainly on maize. By contrast, plains dwellers in southern Arizona and southern New Mexico, who used more animal foods, had an incidence of only 14.5 percent. Absence of evidence for malaria or hemoglobinopathies in the New World before the arrival of the Europeans argues against these possible causes of porotic hyperostosis.