Million-Year-Old Cannibals Took Advantage of the Easy Calories

Research scientists have discovered that almost one million years ago an ancient human relative, Homo antecessor, ate humans “in preference” to other animals.

Inhabiting hunting plains in what is today Spain some 900,000 years ago, Homo antecessor hunted and ate their own kind, providing scientists with the oldest known evidence of cannibalism. These shocking findings were published in the June 2019 issue of the Journal of Human Evolution, and the paper suggests human flesh was “nutritious” and that humans were “easier targets than other types of large prey.”

Jesús Rodríguez, Ana Mateos and Guillermo Sorrel, scientists at the Centro Nacional de Investigación sobre la Evolución Humana (CENIEH), analyzed the cannibalistic behavior of our million-year-old ancestors from evidence gathered at the Spanish archaeological site Gran Dolina. The bones of seven Homo antecessor individuals were found to bear “human tooth marks” and fractures made to expose the bone marrow.

Homo antecessor, incomplete skull from Gran Dolina (ATD6-15 & ATD6-69), Atapuerca, Spain (replica) (Public Domain)

Those bones, according to the paper, “were mixed with bones representing nine other mammal species; 22 individuals that also had been butchered and eaten.” The CENIEH researchers’ strategy began with examining many pre-existing studies demonstrating how animals’ feeding strategies are adapted to achieve the optimal “cost-benefit balance”. This was the basis upon which their new models were built to study cannibalism in Homo antecessor populations.

With an abundance of prey to hunt and eat, why did humans choose to eat humans?

Attempting to answer this perplexing question, computer models estimated the caloric intake H. antecessor required per day, which yielded “the caloric payoffs of various animals including humans” compared with the energy expended to catch them. The researchers speculated that H. antecessor hunters targeted prey offering the most expected calories for the least effort spent to obtain them.

A Phys.org article quotes Rodríguez as saying:

Our analyses show that Homo antecessor, like any predator, selected its prey following the principle of optimizing the cost-benefit balance, and they also show that, considering only this balance, humans were a “high-ranked” prey type. This means that when compared with other prey, a lot of food could be obtained from humans at low cost.

Earlier studies reported in The Guardian had concluded that cannibalism was unlikely to have been practiced just for the calories, as other animals were more calorific. But the new analysis accounts not only for the calories gained but also for those expended in the chase, making a meal of one’s fellow man an aggregate win of energy.

  • The Prehistoric Feast of the Cannibals of Gough’s Cave
  • Prehistoric Britons Cannibalized Dead Relatives and Created Art with their Bones
  • The Mystery of Herxheim: Was an Entire Village Cannibalized?

Ancient Hunter Greed And Need For Nutrition

The study demonstrates that human bones accounted for “less than 13% of the hunters' caloric requirements” and that they mostly ate “rhinos, deer and horses.” But those animals sometimes required several days to exhaust, at a very high energy cost. I use the word “exhaust” because while movies depict ancient people leaping about mammoths with spears, in reality people worked out that by simply chasing animals and preventing them from eating and drinking, the prey would eventually collapse, making for easy pickings. Compared to days of stalking, human meat was regarded as even easier to obtain, at “low cost.”

The scientists’ paper adheres to academic standards and of course does not actually illustrate what is meant by “low cost”, but without such constraints, in this news article I will attempt to do that for you.

In some cases, perhaps after three days chasing a massive animal, a team of six hunters started to get ratty as the weather turned bad. At 3 am, the hunter with the highest IQ maybe woke up startled, having realized that they might have to spend two or possibly three more days tracking a strong, healthy animal, while the team had food supplies left for only half a day.

He may have nudged his two brothers gently and sneaked to the little brook beside the hunting camp. As two of them washed sleep from their smoke-crusted eyes, the oldest brother, with an air of solemnity, selected three fist-sized stones, blunt and heavy. Returning to the camp, the brothers might have quietly aligned themselves at the heads of their sleeping ‘cousins’ and, having no notion of the concept of ‘relatives’, on the nod of the biggest hunter: snap, crackle and pop! The hunters became the hunted, and food was procured for the ‘sharpest’ three.

Reconstruction of the "Boy of Gran Dolina" cranium (Museu d'Arqueologia de Catalunya, Barcelona)

What became of such victims? Well, after decades of hunting the same area, as they watched animal populations begin to thin out, such scenarios as described above must have happened more frequently, and these cannibalized remains lay in heaps beside hunting camps for almost a million years, until Jesús Rodríguez rolled into town with his microscope.

Herschel Hoffmeyer/Shutterstock These ancient oceanic beasts could live up to 100 years and dominated Earth’s oceans for millions of years.

Megalodon was a massive 110,000-pound prehistoric predecessor of modern-day sharks — and a new study has revealed that newborn megalodons were equally menacing.

Not only were megalodon babies found to be larger than most adult humans at birth, but they also cannibalized each other in the womb.

Published in the journal Historical Biology, the study examined 150 vertebrae from a 15-million-year-old megalodon specimen found in Belgium in the 1860s. This was also the first time researchers were able to determine the size at birth of these beasts, which could grow up to 50 feet long.

Encyclopaedia Britannica, Inc./Patrick O’Neill Riley The size of an average adult megalodon compared to an average adult human. An adult megalodon head can span 15 feet.

According to LiveScience, shark vertebrae grow outward in layers that can be read like tree rings in order to determine age. Consequently, researchers took x-rays of this particular megalodon that showed 46 growth rings, meaning the shark was 46 years old when it died.

Researchers then calculated the shark’s growth patterns by comparing these rings with growth bands in the shark’s spinal cartilage and working backward to the earliest ring, or “band at birth.” It showed that the shark was six and a half feet long at birth.

“To think that a baby megalodon was nearly twice as long as the largest adult sharks we examine is mind-boggling,” said Matthew Bonnan of Stockton University.

“It is quite possible that they represent the largest babies in the shark world,” added Kenshu Shimada, lead author and vertebrate paleontologist at DePaul University in Chicago.

Shimada also posited that megalodons in utero were able to grow so large because they must have been feasting on the unhatched eggs of their siblings. Most sharks hatch from their eggs while inside the mother’s body and are then birthed as live young.

“It’s this big, calorie-dense, nutritious meal that can help those embryos get bigger, faster,” explained Allison Bronson, whose work at Humboldt State University in California focuses on the evolution of fish.

DePaul University/Kenshu Shimada The annual growth bands (left) of the Belgium megalodon’s vertebra with the estimated silhouettes of the shark at birth and death (right) compared to the size of a typical human adult (bottom).

While in-utero cannibalism might seem counterproductive, researchers argue that it might actually have evolutionary benefits for both the mother and offspring. “Oophagy — egg-eating — is a way for a mother to nourish its embryos for an extended period of time,” said Shimada. “The consequence is that, while only a few embryos per mother will survive and develop, each embryo can become quite large at its birth.”

This growth ring examination also showed that megalodons likely took time to grow humongous in the womb as opposed to experiencing a growth spurt in their youth. As a result, the megalodon was born an apex predator that didn’t have to compete for food or fear predators even as a young shark.

“They could pretty much do whatever they wanted, swim wherever they wanted, eat whatever they wanted,” said shark researcher Jack Cooper at Swansea University in Britain.

This study is particularly significant as learning about megalodon reproduction has been no easy task for researchers. That is in part because shark skeletons are largely composed of cartilage and therefore degrade long before they can be studied for their biology.

However, there’s still some debate about Shimada’s study as this one specimen is unlikely to represent the average size of an entire species. For now, though, the findings seem nearly as colossal as the megalodon itself.

7 Fascinating Facts About Elvis Presley

1. Elvis had a twin.
On January 8, 1935, Elvis Aron (later spelled Aaron) Presley was born at his parents’ two-room house in East Tupelo, Mississippi, about 35 minutes after his identical twin brother, Jesse Garon, who was stillborn. The next day, Jesse was buried in an unmarked grave in nearby Priceville Cemetery.

Elvis, who spoke of his twin throughout his life, grew up an only child in a poor family. His father, Vernon, worked a series of odd jobs, and in 1938 was sentenced to three years in prison for forging a $4 check (he spent less than a year behind bars). In 1948, the Presleys moved from Tupelo to Memphis in search of better opportunities. There, Elvis attended Humes High School, where he failed a music class and was considered quiet and an outsider. He graduated in 1953, becoming the first member of his immediate family to earn a high school diploma. After graduation, he worked at a machinist shop and drove a truck before launching his music career with the July 1954 recording of “That’s All Right.”

2. Elvis bought Graceland when he was 22.
In 1957, Elvis shelled out $102,500 for Graceland, the Memphis mansion that served as his home base for two decades. Situated on nearly 14 acres, it was built in 1939 by Dr. Thomas Moore and his wife Ruth on land that once was part of a 500-acre farm dubbed Graceland in honor of the original owner’s daughter, Grace, who was Ruth Moore’s great-aunt. The Moores’ white-columned home also came to be known as Graceland, and when Elvis purchased the place he kept the name.

The entertainer made a number of updates to the property over the years, including the addition of music-themed iron entrance gates, a “jungle room” with an indoor waterfall and a racquetball building. After finding out President Lyndon Johnson enjoyed watching all three network news programs simultaneously, Elvis was inspired to have a wall of built-in TVs installed in his home. In 1982, five years after Elvis was found dead in a bathroom at Graceland, his ex-wife Priscilla Presley opened the estate to the public for tours. Some 600,000 fans now flock there each year. Elvis’ only child, Lisa Marie Presley, inherited Graceland when she turned 25 in 1993 and continues to operate it today.

In 2006, George W. Bush became the first sitting U.S. president to visit Graceland, when he traveled there with Japanese Prime Minister Junichiro Koizumi, a die-hard Elvis fan.

Elvis and Colonel Tom Parker (Credit: GAB Archive/Redferns)

3. Elvis’ controversial manager, Colonel Tom Parker, was a former carnival barker.
Born Andreas Cornelis van Kuijk in the Netherlands in 1909, Elvis’s future manager immigrated illegally to America as a young man, where he reinvented himself as Tom Parker and claimed to be from West Virginia (his true origins weren’t known publicly until the 1980s). He worked as a pitchman for traveling carnivals, followed by stints as dog catcher and pet cemetery founder, among other occupations, then managed the careers of several country music singers. In 1948, Parker finagled the honorary title of colonel from the governor of Louisiana and henceforth insisted on being referred to as the Colonel.

After learning about the up-and-coming Elvis in 1955, Parker negotiated the sale of the singer’s contract with tiny Sun Records to RCA, a major label, and officially took over as his manager in 1956. Under the Colonel’s guidance, Elvis shot to stardom: His first single for RCA, “Heartbreak Hotel,” released in 1956, became the first of his career to sell more than 1 million copies; his debut album, “Elvis Presley,” topped Billboard’s pop album chart; and he made his big-screen debut in 1956’s “Love Me Tender.”

The portly, cigar-chomping Parker controlled Elvis’ career for the next two decades, helping him achieve enormous success while at the same time taking commissions of as much as 50 percent of the entertainer’s earnings and drawing criticism from observers that he was holding Elvis back creatively. Parker outlived his protégé by 20 years, dying in 1997 at age 87 in Las Vegas.

4. Elvis served in the Army after he was already famous.
In December 1957, Elvis, by then a major star, was drafted into the U.S. military. After receiving a short deferment so he could wrap up production on his film “King Creole,” the 23-year-old was inducted into the Army as a private on March 24, 1958, amidst major media coverage. Assigned to the Second Armored Division, he attended basic training at Fort Hood, Texas. That August, while still at Fort Hood, he was granted emergency leave to visit his beloved mother, who was in poor health. Gladys Presley passed away at age 46 on August 14, 1958. The following month, Elvis shipped out for an assignment with the Third Armored Division in Friedberg, West Germany, where he served as a jeep driver and continued to receive stacks of fan mail.

While in Germany, he lived off base with his father and grandmother Minnie Mae Presley. It was also during this time that Elvis met 14-year-old Priscilla Beaulieu, the daughter of a U.S. Air Force captain. (After a lengthy courtship, Elvis and Priscilla married in 1967; the couple divorced in 1973.) Elvis was honorably discharged from active duty in March 1960, having achieved the rank of sergeant. His first post-Army movie, “G.I. Blues,” was released that November. The film’s soundtrack spent 10 weeks at the top of the Billboard album music chart and remained on the chart for a total of 111 weeks, the longest of any album in Elvis’ career.

5. Elvis never performed outside of North America.
An estimated 40 percent of Elvis’ music sales have been outside the United States; however, with the exception of a handful of concerts he gave in Canada in 1957, he never performed on foreign soil. A number of sources have suggested that Elvis’ manager, Colonel Parker, turned down lucrative offers for the singer to perform abroad because Parker was an illegal immigrant and feared he wouldn’t be allowed back into the U.S. if he traveled overseas.

Elvis’ second appearance on “The Ed Sullivan Show,” October 26, 1956.

6. Elvis was burned in effigy after an appearance on “The Ed Sullivan Show.”
In the summer of 1956, Colonel Parker arranged a deal for Elvis to make three appearances on “The Ed Sullivan Show” for a then-whopping fee of $50,000. Although Sullivan previously had said he wouldn’t book the hip-swiveling, lip-curling singer on his family-oriented TV variety show, he relented after competitor Steve Allen featured Elvis on his show in July 1956 and clobbered Sullivan in the ratings. When Elvis made his first appearance on Sullivan’s program on September 9, 1956, 60 million people—more than 80 percent of the TV viewing audience—tuned in. (As it happened, Sullivan, who had been injured in a car accident that August, was unable to host the show.) After the singer made his second appearance in October, crowds in Nashville and St. Louis, outraged by the singer’s sexy performance and concerned that rock music would corrupt America’s teens, burned and hanged Elvis in effigy.

Haast’s Giant Eagle

This New Zealand eagle is the largest known raptor ever to have existed. The species disappeared in the 15th century after reigning over the island for centuries.

Maori folklore regularly mentions a huge bird that hunted and ate human babies. The Maori called the bird Te Hokioi, but scientists believe it to be Haast’s eagle. The island’s main predator, it stood almost a meter high, measured 1.5 meters long, and weighed 14 kg. Its impressive wingspan of 3 meters, and claws and beak twice as long as those of the largest eagles in the contemporary world, made it a formidable predator.

Artist’s rendition of a giant Haast’s eagle attacking New Zealand moa. Image source: Wikipedia

How did it die out? This bird fed on moa, but the Maori hunted the moa to extinction, which caused Haast’s eagle numbers to plummet. There was simply no longer enough large prey to sustain the species.


From the late 19th century through the Great Depression, social and economic forces exerted a harmful impact on the structure of Bengal's income distribution and the ability of its agricultural sector to sustain the populace. These processes included increasing household debt, [20] a rapidly growing population, stagnant agricultural productivity, increased social stratification, and alienation of the peasant class from their landholdings. [21] The interaction of these left clearly defined social and economic groups mired in poverty and indebtedness, unable to cope with economic shocks or maintain their access to food beyond the near term. In 1942 and 1943, in the immediate and central context of the Second World War, the shocks Bengalis faced were numerous, complex and sometimes sudden. [22] Millions were vulnerable to starvation. [20]

The Government of India's Famine Inquiry Commission report (1945) described Bengal as a "land of rice growers and rice eaters". [B] Rice dominated the agricultural output of the province, accounting for nearly 88% of its arable land use [23] and 75% of its crops. [C] Overall, Bengal produced one third of India's rice – more than any other single province. [23] Rice accounted for 75–85% of daily food consumption, [24] with fish being the second major food source, [25] supplemented by small amounts of wheat. [D]

There are three seasonal rice crops in Bengal. By far the most important is the winter crop of aman rice. Sown in May and June and harvested in November and December, it produces about 70% of the total annual crop. [26] Crucially, the (debated) shortfall in rice production in 1942 occurred during the all-important aman harvest. [27]

Rice yield per acre had been stagnant since the beginning of the twentieth century; [28] coupled with a rising population, this created pressures that were a leading factor in the famine. [29] Bengal had a population of about 60 million [30] in an area of 77,442 square miles, according to a 1941 census. [31] [E] Declining mortality rates, induced in part by the pre-1943 success of the British Raj in famine reduction, [32] caused its population to increase by 43% between 1901 and 1941 – from 42.1 million to 60.3 million. Over the same period India's population as a whole increased by 37%. [33] [F] The economy was almost solely agrarian, but agricultural productivity was among the lowest in the world. [34] Agricultural technology was undeveloped, access to credit was limited and expensive, and any potential for government aid was hampered by political and financial constraints. [35] Land quality and fertility had been deteriorating in Bengal and other regions of India, but the loss was especially severe here. Agricultural expansion required deforestation and land reclamation. These activities damaged the natural drainage courses, silting up rivers and the channels that fed them, leaving them and their fertile deltas moribund. [36] The combination of these factors caused stubbornly low agricultural productivity. [37]

Prior to about 1920, the food demands of Bengal's growing population could be met in part by cultivating unused scrub lands. [38] No later than the first quarter of the twentieth century, Bengal began to experience an acute shortage of such land, [39] leading to a chronic and growing shortage of rice. [40] Its inability to keep pace with rapid population growth changed it from a net exporter of foodgrains to a net importer. Imports were a small portion of the total available food crops, however, and did little to alleviate problems of food supply. [41] Bengali doctor and chemist Chunilal Bose, a professor in Calcutta's medical college, estimated in 1930 that both the ingredients and the small total amount of food in the Bengali diet made it among the least nutritious in India and the world, and greatly harmful to the physical health of the populace. [42] Economic historian Cormac Ó Gráda writes, "Bengal's rice output in normal years was barely enough for bare-bones subsistence ... the province's margin over subsistence on the eve of the famine was slender." [43] These conditions left a large proportion of the population continually on the brink of malnutrition or even starvation. [44]

Land-grabbing

Structural changes in the credit market and land transfer rights pushed Bengal into recurring danger of famine and dictated which economic groups would suffer greatest hardship. [45] The Indian system of land tenure, particularly in Bengal, [46] was very complex, with rights unequally divided among three diverse economic and social groups: traditional absentee large landowners, or zamindars; the upper-tier "wealthy peasant" jotedars; and, at the lower socioeconomic level, the ryot (peasant) smallholders and dwarfholders, bargadars (sharecroppers), and agricultural labourers. [47] Zamindar and jotedar landowners were protected by law and custom, [48] but those who cultivated the soil, with small or no landholdings, suffered persistent and increasing losses of land rights and welfare. During the late nineteenth and early twentieth centuries, the power and influence of the landowners fell and that of the jotedars rose. Particularly in less developed regions, jotedars gained power as grain or jute traders and, more importantly, by making loans to sharecroppers, agricultural labourers and ryots. [49] [G] They gained power over their tenants using a combination of debt bondage through the transfer of debts and mortgages, and parcel-by-parcel land-grabbing. [50]

Land-grabbing usually took place via informal credit markets. Many financial entities had disappeared during the Great Depression; peasants with small landholdings generally had to resort to informal local lenders [51] to purchase basic necessities during lean months between harvests. [52] As influential Bengali businessman M. A. Ispahani testified, ". the Bengal cultivator, [even] before the war, had three months of feasting, five months of subsistence diet and four months of starvation". [53] Moreover, if a labourer did not possess goods recoverable as cash, such as seed or cattle for ploughing, he would go into debt. [54] Particularly during poor crops, smallholders fell into cycles of debt, often eventually forfeiting land to creditors. [55]

Small landholders and sharecroppers acquired debts swollen by usurious rates of interest. [56] [H] Any poor harvest exacted a heavy toll; the accumulation of consumer debt, seasonal loans and crisis loans began a cycle of spiralling, perpetual indebtedness. It was then relatively easy for the jotedars to use litigation to force debtors to sell all or part of their landholdings at a low price or forfeit them at auction. Debtors then became landless or land-poor sharecroppers and labourers, usually working the same fields they had once owned. [57] The accumulation of household debt to a single, local, informal creditor bound the debtor almost inescapably to the creditor/landlord; it became nearly impossible to settle the debt after a good harvest and simply walk away. In this way, the jotedars effectively dominated and impoverished the lowest tier of economic classes in several districts of Bengal. [58]

Such exploitation, exacerbated by Muslim inheritance practices that divided land among multiple siblings, [59] widened inequalities in land ownership. [60] At the time, millions of Bengali agriculturalists held little or no land. [I] In absolute terms, the social group which suffered by far the most of every form of impoverishment and death during the Bengal famine of 1943 were the landless agricultural labourers. [61]

Transport

Water provided the main source of transport during rainy seasons, and throughout the year in areas such as the vast delta of the coastal southeastern Sundarbans. River transport was integral to Bengal's economy, an irreplaceable factor in the production and distribution of rice. [62] Roads were generally scarce and in poor condition, [63] and Bengal's extensive railway system was employed largely for military purposes until the very late stages of the crisis. [64]

The development of railways in Bengal in the 1890s disrupted natural drainage and divided the region into innumerable poorly drained "compartments". [65] Rail indirectly brought about excessive silting, which increased flooding and created stagnant water areas, damaging crop production and sometimes contributing to a partial shift away from the productive aman rice cultivar towards less productive cultivars, and also created a more hospitable environment for water-borne diseases such as cholera and malaria. [66]

Soil and water supply

The soil profile in Bengal differs between east and west. The sandy soil of the east, and the lighter sedimentary earth of the Sundarbans, tended to drain more rapidly after the monsoon season than the laterite or heavy clay regions of western Bengal. [67] Soil exhaustion necessitated that large tracts in western and central Bengal be left fallow; eastern Bengal had far fewer uncultivated fields. The annual flooding of these fallow fields created a breeding place for malaria-carrying mosquitoes; [68] malaria epidemics lasted a month longer in the central and western areas with slower drainage. [67]

Rural areas lacked access to safe water supplies. Water came primarily from large earthen tanks, rivers and tube wells. In the dry season, partially drained tanks became a further breeding area for malaria-vector mosquitoes. [69] Tank and river water was susceptible to contamination by cholera; tube wells were much safer. [70] However, as many as one-third of the existing wells in wartime Bengal were in disrepair. [70]

Throughout 1942 and early 1943, military and political events combined with natural disasters and plant disease to place widespread stress on Bengal's economy. [71] While Bengal's food needs rose from increased military presence and an influx of refugees from Burma, [72] its ability to obtain rice and other grains was restricted by inter-provincial trade barriers. [73]

Japanese invasion of Burma

The Japanese campaign for Burma set off an exodus of more than half of the one million Indians from Burma for India. [74] The flow began after the bombing of Rangoon (1941–1942), and for months thereafter desperate people poured across the borders, escaping into India through Bengal and Assam. [75] On 26 April 1942, all Allied forces were ordered to retreat from Burma into India. [76] Military transport and other supplies were dedicated to military use, and unavailable for use by the refugees. [77] By mid May 1942, the monsoon rains became heavy in the Manipur hills, further inhibiting civilian movement. [78]

The number of refugees who successfully reached India totalled at least 500,000; tens of thousands died along the way. In later months, 70 to 80% of these refugees were afflicted with diseases such as dysentery, smallpox, malaria, or cholera, with 30% "desperately so". [79] The influx of refugees created several conditions that may have contributed to the famine. Their arrival created an increased demand for food, [72] clothing and medical aid, further straining the resources of the province. [80] The poor hygienic conditions of their forced journey sparked official fears of a public health risk due to epidemics caused by social disruption. [81] Finally, their distraught state after their struggles [82] bred foreboding, uncertainty, and panic amongst the populace of Bengal; this aggravated panic buying and hoarding that may have contributed to the onset of the famine. [82]

By April 1942, Japanese warships and aircraft had sunk approximately 100,000 tons of merchant shipping in the Bay of Bengal. [83] According to General Archibald Wavell, Commander-in-Chief of the army in India, both the War Office in London and the commander of the British Eastern Fleet acknowledged that the fleet was powerless to mount serious opposition to Japanese naval attacks on Ceylon, southern or eastern India, or on shipping in the Bay of Bengal. [83] For decades, rail transport had been integral to successful efforts by the Raj to forestall famine in India. [84] However, Japanese raids put additional strain on railways, which also endured flooding in the Brahmaputra, a malaria epidemic, and the Quit India movement targeting road and rail communication. [85] Throughout, transportation of civil supplies was compromised by the railways' increased military obligations, and the dismantling of tracks carried out in areas of eastern Bengal in 1942 to hamper a potential Japanese invasion. [86]

The fall of Rangoon in March 1942 cut off the import of Burmese rice into India and Ceylon. [87] Due in part to rises in local populations, prices for rice were already 69% higher in September 1941 than in August 1939. [88] The loss of Burmese imports led to further increased demand on the rice producing regions. [89] This, according to the Famine Commission, was in a market in which the "progress of the war made sellers who could afford to wait reluctant to sell". [89] The loss of imports from Burma provoked an aggressive scramble for rice across India, which sparked a dramatic and unprecedented surge in demand-pull price inflation in Bengal and other rice producing regions of India. Across India and particularly in Bengal, this caused a "derangement" of the rice markets. [90] Particularly in Bengal, the price effect of the loss of Burmese rice was vastly disproportionate to the relatively modest size of the loss in terms of total consumption. [91] Despite this, Bengal continued to export rice to Ceylon [J] for months afterwards, even as the beginning of a food crisis began to become apparent. [K] All this, together with transport problems created by the government's "boat denial" policy, were the direct causes of inter-provincial trade barriers on the movement of foodgrains, [92] and contributed to a series of failed government policies that further exacerbated the food crisis. [93]

1942–1945: Military build-up, inflation, and displacement

The fall of Burma brought Bengal close to the war front; its impact fell more strongly on Bengal than elsewhere in India. [94] Major urban areas, especially Calcutta, drew increasing numbers of workers into military industries and troops from many nations. Unskilled labourers from Bengal and nearby provinces were employed by military contractors, particularly for the construction of American and British airfields. [95] Hundreds of thousands of American, British, Indian, and Chinese troops arrived in the province, [96] straining domestic supplies and leading to scarcities across wide ranges of daily necessities. [97] The general inflationary pressures of a war-time economy caused prices to rise rapidly across the entire spectrum of goods and services. [98] The rise in prices was "not disturbing" until 1941, when it became more alarming. [99] Then in early 1943, the rate of inflation for foodgrains in particular took an unprecedented upward turn. [100]

Nearly the full output of India's cloth, wool, leather and silk industries was sold to the military. [101] In the system that the British Government used to procure goods through the Government of India, industries were left in private ownership rather than facing outright requisitioning of their productive capacity. Firms were required to sell goods to the military on credit and at fixed, low prices. [102] However, firms were left free to charge any price they desired in their domestic market for whatever they had left over. In the case of the textile industries that supplied cloth for the uniforms of the British military, for example, they charged very high prices in domestic markets. [102] By the end of 1942, cloth prices had more than tripled from their pre-war levels; they had more than quadrupled by mid-1943. [103] Many of the goods left over for civilian use were purchased by speculators. [104] As a result, "civilian consumption of cotton goods fell by more than 23% from the peace time level by 1943/44". [105] The hardships felt by the rural population through a severe "cloth famine" were alleviated when military forces began distributing relief supplies between October 1942 and April 1943. [106]

The method of credit financing was tailored to UK wartime needs. Britain agreed to pay for defence expenditures above the amount that India had paid in peacetime (adjusted for inflation). However, its purchases were made entirely on credit accumulated in the Bank of England and not redeemable until after the war. At the same time, the Reserve Bank of India was permitted to treat those credits as assets against which it could print currency up to two and a half times more than the total debt incurred. India's money printing presses then began running overtime, printing the currency that paid for all these massive expenditures. The tremendous rise in nominal money supply coupled with a scarcity of consumption goods spurred monetary inflation, reaching its peak in 1944–45. [107] The accompanying rise in incomes and purchasing power fell disproportionately into the hands of industries in Calcutta (in particular, munitions industries). [108]

Military build-up caused massive displacement of Bengalis from their homes. Farmland purchased for airstrip and camp construction is "estimated to have driven between 30,000 and 36,000 families (about 150,000 to 180,000 persons) off their land", according to the historian Paul Greenough. They were paid for the land, but they had lost their employment. [109] The urgent need for housing for the immense influx of workers and soldiers from 1942 onward created further problems. Military barracks were scattered around Calcutta. [110] The Famine Commission report of 1945 stated that the owners had been paid for these homes, but "there is little doubt that the members of many of these families became famine victims in 1943". [111]

March 1942: Denial policies

Anticipating a Japanese invasion of British India via the eastern border of Bengal, the British military launched a pre-emptive, two-pronged scorched-earth initiative in eastern and coastal Bengal. Its goal was to deny the expected invaders access to food supplies, transport and other resources. [L]

First, a "denial of rice" policy was carried out in three southern districts along the coast of the Bay of Bengal – Bakarganj (or Barisal), Midnapore and Khulna – that were expected to have surpluses of rice. John Herbert, the governor of Bengal, issued an urgent [112] directive in late March 1942 immediately requiring stocks of paddy (unmilled rice) deemed surplus, and other food items, to be removed or destroyed in these districts. [113] Official figures for the amounts impounded were relatively small and would have contributed only modestly to local scarcities. [114] However, evidence that fraudulent, corrupt and coercive practices by the purchasing agents removed far more rice than officially recorded – not only from designated districts but also in unauthorised areas – suggests a greater impact. [115] Far more damaging were the policy's disruptive impact on regional market relationships and its contribution to a sense of public alarm. [116] Disruption of deeply intertwined relationships of trust and trade credit created an immediate freeze in informal lending. This credit freeze greatly restricted the flow of rice into trade. [117]

The second prong, a "boat denial" policy, was designed to deny Bengali transport to any invading Japanese army. It applied to districts readily accessible via the Bay of Bengal and the larger rivers that flow into it. Implemented on 1 May after an initial registration period, [118] the policy authorised the Army to confiscate, relocate or destroy any boats large enough to carry more than ten people, and allowed it to requisition other means of transport such as bicycles, bullock carts, and elephants. [119] Under this policy, the Army confiscated approximately 45,000 rural boats, [120] severely disrupting river-borne movement of labour, supplies and food, and compromising the livelihoods of boatmen and fishermen. [121] Leonard G. Pinnell, a British civil servant who headed the Bengal government's Department of Civil Supplies, told the Famine Commission that the policy "completely broke the economy of the fishing class". [122] Transport was generally unavailable to carry seed and equipment to distant fields or rice to the market hubs. [123] Artisans and other groups who relied on boat transport to carry goods to market were offered no recompense; neither were rice growers nor the network of migratory labourers. [124] The large-scale removal or destruction of rural boats caused a near-complete breakdown of the existing transport and administration infrastructure and market system for movement of rice paddy. [125] No steps were taken to provide for the maintenance or repair of the confiscated boats, [126] and many fishermen were unable to return to their trade. [124] The Army took no steps to distribute food rations to make up for the interruption of supplies. [127]

These policies had important political ramifications. The Indian National Congress, among other groups, staged protests denouncing the denial policies for placing draconian burdens on Bengali peasants; these were part of a nationalist sentiment and outpouring that later peaked in the "Quit India" movement. [128] The policies' wider impact – the extent to which they compounded or even caused the famine to occur one year later – has been the subject of much discussion. [129]

Provincial trade barriers

Many Indian provinces and princely states imposed inter-provincial trade barriers from mid-1942, preventing trade in domestic rice. Anxiety and soaring rice prices, triggered by the fall of Burma, [130] were one underlying reason for the trade barriers. Trade imbalances brought on by price controls were another. [92] The power to restrict inter-provincial trade was given to provincial governments in November 1941 under the Defence of India Act, 1939. [M] Provincial governments began erecting trade barriers that prevented the flow of foodgrains (especially rice) and other goods between provinces. These barriers reflected a desire to see that local populations were well fed, thus forestalling local emergencies. [131]

In January 1942, Punjab banned exports of wheat; [132] [N] this increased the perception of food insecurity and led the enclave of wheat-eaters in Greater Calcutta to increase their demand for rice precisely when an impending rice shortage was feared. [133] The Central Provinces prohibited the export of foodgrains outside the province two months later. [134] Madras banned rice exports in June, [135] followed by export bans in Bengal and its neighbouring provinces of Bihar and Orissa that July. [136]

The Famine Inquiry Commission of 1945 characterised this "critical and potentially most dangerous stage" as a key policy failure. As one deponent to the Commission put it: "Every province, every district, every [administrative division] in the east of India had become a food republic unto itself. The trade machinery for the distribution of food [between provinces] throughout the east of India was slowly strangled, and by the spring of 1943 was dead." [137] Bengal was unable to import domestic rice; this policy helped transform market failures and food shortage into famine and widespread death. [138]

Mid-1942: Prioritised distribution

The loss of Burma reinforced the strategic importance of Calcutta as the hub of heavy industry and the main supplier of armaments and textiles for the entire Asian theatre. [139] To support its wartime mobilisation, the Indian Government categorised the population into socioeconomic groups of "priority" and "non-priority" classes, according to their relative importance to the war effort. [140] Members of the "priority" classes were largely composed of bhadraloks, who were upper-class or bourgeois middle-class, socially mobile, educated, urban, and sympathetic to Western values and modernisation. Protecting their interests was a major concern of both private and public relief efforts. [141] This placed the rural poor in direct competition for scarce basic supplies with workers in public agencies, war-related industries, and in some cases even politically well-connected middle-class agriculturalists. [142]

As food prices rose and the signs of famine became apparent from July 1942, [143] the Bengal Chamber of Commerce (composed mainly of British-owned firms) [16] devised a Foodstuffs Scheme to provide preferential distribution of goods and services to workers in high-priority war industries, to prevent them from leaving their positions. The scheme was approved by the Government of Bengal. [17] Rice was directed away from the starving rural districts to workers in industries considered vital to the military effort – particularly in the area around Greater Calcutta. [144] Workers in prioritised sectors – private and government wartime industries, military and civilian construction, paper and textile mills, engineering firms, the Indian Railways, coal mining, and government workers of various levels [145] – were given significant advantages and benefits. Essential workers received subsidised food, [146] and were frequently paid in part in weekly allotments of rice sufficient to feed their immediate families, further protecting them from inflation. [147] They also benefited from ration cards, a network of "cheap shops" which provided essential supplies at discounted rates, and direct, preferential allocation of supplies such as water, medical care, and antimalarial supplies, as well as free transportation, access to superior housing, regular wages and even "mobile cinema units catering to recreational needs". [146] By December of that year, the total number of individuals covered (workers and their families) was approximately a million. [148] Medical care was directed to the priority groups – particularly the military. Public and private medical staff at all levels were transferred to military duty, while medical supplies were monopolised. [149]

Rural labourers and civilians not members of these groups received severely reduced access to food and medical care, generally available only to those who migrated to selected population centres. [81] Otherwise, according to medical historian Sanjoy Bhattacharya, "vast areas of rural eastern India were denied any lasting state-sponsored distributive schemes". [150] For this reason, the policy of prioritised distribution is sometimes discussed as one cause of the famine. [151]

Civil unrest

The war escalated resentment and fear of the Raj among rural agriculturalists and business and industrial leaders in Greater Calcutta. [152] The unfavourable military situation of the Allies after the fall of Burma led the US and China to urge the UK to enlist India's full cooperation in the war by negotiating a peaceful transfer of political power to an elected Indian body; this goal was also supported by the Labour Party in Britain. Winston Churchill, the British prime minister, responded to the new pressure through the Cripps mission, broaching the post-war possibility of an autonomous political status for India in exchange for its full military support, but negotiations collapsed in early April 1942. [153]

On 8 August 1942, the Indian National Congress launched the Quit India movement as a nationwide display of nonviolent resistance. [154] The British authorities reacted by imprisoning the Congress leaders. [155] Without its leadership, the movement changed its character and took to sabotaging factories, bridges, telegraph and railway lines, and other government property, [155] thereby threatening the British Raj's war enterprise. [155] The British acted forcefully to suppress the movement, taking around 66,000 into custody (of whom just over 19,000 were still convicted under civil law or detained under the Defence of India Act in early 1944). Police fired upon protesters, shooting more than 2,500 Indians, many of whom were killed. [156] In Bengal, the movement was strongest in the Tamluk and Contai subdivisions of Midnapore district, [157] where rural discontent was well-established and deep. [158] [O] In Tamluk, by April 1942 the government had destroyed some 18,000 boats in pursuit of its denial policy, while war-related inflation further alienated the rural population, who became eager volunteers when local Congress recruiters proposed open rebellion. [159]

The violence during the "Quit India" movement was internationally condemned, and hardened some sectors of British opinion against India. [160] The historians Christopher Bayly and Tim Harper believe it reduced the British War Cabinet's willingness to provide famine aid at a time when supplies were also needed for the war effort. [161] In several ways the political and social disorder and distrust that were the effects and after-effects of rebellion and civil unrest placed political, logistical, and infrastructural constraints on the Government of India that contributed to later famine-driven woes. [162]

1942–43: Price chaos

Throughout April 1942, British and Indian refugees fled Burma, many through Bengal, as the cessation of Burmese imports continued to drive up rice prices. In June, the Bengal government established price controls for rice, and on 1 July fixed prices at a level considerably lower than the prevailing market price. The principal result of the fixed low price was to make sellers reluctant to sell; stocks disappeared, either onto the black market or into storage. [163] The government then let it be known that the price control law would not be enforced except in the most egregious cases of war profiteering. [164] This easing of restrictions plus the ban on exports created about four months of relative price stability. [165] In mid-October, though, south-west Bengal was struck by a series of natural disasters that destabilised prices again, [166] causing another rushed scramble for rice, greatly to the benefit of the Calcutta black market. [167] Between December 1942 and March 1943 the government made several attempts to "break the Calcutta market" by bringing in rice supplies from various districts around the province; however, these attempts to drive down prices by increasing supply were unsuccessful. [168]

On 11 March 1943, the provincial government rescinded its price controls, [169] resulting in dramatic rises in the price of rice, due in part to soaring levels of speculation. [170] The period of inflation between March and May 1943 was especially intense; [171] May was the month of the first reports of death by starvation in Bengal. [172] The government attempted to re-establish public confidence by insisting that the crisis was being caused almost solely by speculation and hoarding, [173] but its propaganda failed to dispel the widespread belief that there was a shortage of rice. [174] The provincial government never formally declared a state of famine, even though its Famine Code would have mandated a sizable increase in aid. In the early stages of the famine, the rationale for this was that the provincial government was expecting aid from the Government of India. It then felt its duty lay in maintaining confidence through propaganda that asserted that there was no shortage. After it became clear that aid from central government was not forthcoming, the provincial government felt it simply did not have the amount of food supplies that a declaration of famine would require it to distribute, while distributing more money might make inflation worse. [175]

When inter-provincial trade barriers were abolished on 18 May, prices temporarily fell in Calcutta, but soared in the neighbouring provinces of Bihar and Orissa when traders rushed to purchase stocks. [176] The provincial government's attempts to locate and seize any hoarded stocks failed to find significant hoarding. [177] In Bengal, prices were soon five to six times higher than they had been before April 1942. [178] Free trade was abandoned in July 1943, [179] and price controls were reinstated in August. [169] Despite this, there were unofficial reports of rice being sold in late 1943 at roughly eight to ten times the prices of late 1942. [180] Purchasing agents were sent out by the government to obtain rice, but their attempts largely failed. Prices remained high, and the black market was not brought under control. [181]

October 1942: Natural disasters

Bengal was affected by a series of natural disasters late in 1942. The winter rice crop was afflicted by a severe outbreak of fungal brown spot disease, while, on 16–17 October, a cyclone and three storm surges ravaged croplands, destroyed houses and killed thousands, at the same time dispersing high levels of fungal spores across the region and increasing the spread of the crop disease. [182] The fungus reduced the crop yield even more than the cyclone. [183] After describing the horrific conditions he had witnessed, the mycologist S.Y. Padmanabhan wrote that the outbreak was similar in impact to the potato blight that caused the Irish Great Famine: "Though administrative failures were immediately responsible for this human suffering, the principal cause of the short crop production of 1942 was the [plant] epidemic ... nothing as devastating ... has been recorded in plant pathological literature". [184]

The Bengal cyclone came through the Bay of Bengal, landing on the coastal areas of Midnapore and 24 Parganas. [185] It killed 14,500 people and 190,000 cattle, whilst rice paddy stocks in the hands of cultivators, consumers, and dealers were destroyed. [186] It also created local atmospheric conditions that contributed to an increased incidence of malaria. [187] The three storm surges which followed the cyclone destroyed the seawalls of Midnapore and flooded large areas of Contai and Tamluk. [188] Waves swept an area of 450 square miles (1,200 km²), floods affected 400 square miles (1,000 km²), and wind and torrential rain damaged 3,200 square miles (8,300 km²). For nearly 2.5 million Bengalis, the cumulative damage of the cyclone and storm surges to homes, crops and livelihoods was catastrophic: [189]

Corpses lay scattered over several thousand square miles of devastated land, 7,400 villages were partly or wholly destroyed, and standing flood waters remained for weeks in at least 1,600 villages. Cholera, dysentery and other waterborne diseases flourished. 527,000 houses and 1,900 schools were lost, over 1,000 square miles of the most fertile paddy land in the province was entirely destroyed, and the standing crop over an additional 3,000 square miles was damaged. [190]

The cyclone, floods, plant disease, and warm, humid weather reinforced each other and combined to have a substantial impact on the aman rice crop of 1942. [191] Their impact was felt in other aspects as well, as in some districts the cyclone was responsible for an increased incidence of malaria, with deadly effect. [192]

October 1942: Unreliable crop forecasts

At about the same time, official forecasts of crop yields predicted a significant shortfall. [193] However, crop statistics of the time were scant and unreliable. [194] Administrators and statisticians had known for decades that India's agricultural production statistics were completely inadequate [195] and "not merely guesses, but frequently demonstrably absurd guesses". [196] There was little or no internal bureaucracy for creating and maintaining such reports, and the low-ranking police officers or village officials charged with gathering local statistics were often poorly supplied with maps and other necessary information, poorly educated, and poorly motivated to be accurate. [197] The Bengal Government thus did not act on these predictions, [198] doubting their accuracy and observing that forecasts had predicted a shortfall several times in previous years, while no significant problems had occurred. [199]

Air raids on Calcutta

The Famine Inquiry Commission's 1945 report singled out the first Japanese air raids on Calcutta in December 1942 as a cause. [200] The attacks, largely unchallenged by Allied defences, [201] continued throughout the week, [200] triggering an exodus of thousands from the city. [202] As evacuees travelled to the countryside, food-grain dealers closed their shops. [200] To ensure that workers in the prioritised industries in Calcutta would be fed, [203] the authorities seized rice stocks from wholesale dealers, breaking any trust the rice traders had in the government. [204] "From that moment", the 1945 report stated, "the ordinary trade machinery could not be relied upon to feed Calcutta. The [food security] crisis had begun". [200]

1942–43: Shortfall and carryover

Whether the famine resulted from crop shortfall or failure of land distribution has been much debated. [205] According to Amartya Sen: "The ... [rice paddy] supply for 1943 was only about 5% lower than the average of the preceding five years. It was, in fact, 13% higher than in 1941, and there was, of course, no famine in 1941." [206] The Famine Inquiry Commission report concluded that the overall deficit in rice in Bengal in 1943, taking into account an estimate of the amount of carryover of rice from the previous harvest, [P] was about three weeks' supply. In any circumstances, this was a significant shortfall requiring a considerable amount of food relief, but not a deficit large enough to create widespread deaths by starvation. [207] According to this view, the famine "was not a crisis of food availability, but of the [unequal] distribution of food and income". [208] There has been very considerable debate about the amount of carryover available for use at the onset of the famine. [209]

Several contemporary experts cite evidence of a much larger shortfall. [210] Commission member Wallace Aykroyd argued in 1974 that there had been a 25% shortfall in the harvest of the winter of 1942, [211] while L. G. Pinnell, responsible to the Government of Bengal from August 1942 to April 1943 for managing food supplies, estimated the crop loss at 20%, with disease accounting for more of the loss than the cyclone; other government sources privately admitted the shortfall was 2 million tons. [212] The economist George Blyn argues that with the cyclone and floods of October and the loss of imports from Burma, the 1942 Bengal rice harvest had been reduced by one-third. [213]

1942–1944: Refusal of imports

Beginning as early as December 1942, high-ranking government officials and military officers (including John Herbert, the Governor of Bengal; Viceroy Linlithgow; Leo Amery, the Secretary of State for India; General Claude Auchinleck, Commander-in-Chief of British forces in India; [214] and Admiral Louis Mountbatten, Supreme Commander of South-East Asia [215]) began requesting food imports for India through government and military channels, but for months these requests were either rejected or reduced to a fraction of the original amount by Churchill's War Cabinet. [216] The colony was also not permitted to spend its own sterling reserves, or even use its own ships, to import food. [217] Although Viceroy Linlithgow appealed for imports from mid-December 1942, he did so on the understanding that the military would be given preference over civilians. [Q] The Secretary of State for India, Leo Amery, was on one side of a cycle of requests for food aid and subsequent refusals from the British War Cabinet that continued through 1943 and into 1944. [218] Amery did not mention worsening conditions in the countryside, stressing that Calcutta's industries must be fed or its workers would return to the countryside. Rather than meeting this request, the UK promised a relatively small amount of wheat that was specifically intended for western India (that is, not for Bengal) in exchange for an increase in rice exports from Bengal to Ceylon. [K]

The tone of Linlithgow's warnings to Amery grew increasingly serious over the first half of 1943, as did Amery's requests to the War Cabinet. On 4 August 1943 Amery noted the spread of famine, and specifically stressed the effect upon Calcutta and the potential effect on the morale of European troops. The cabinet again offered only a relatively small amount, explicitly referring to it as a token shipment. [219] The explanation generally offered for the refusals included insufficient shipping, [220] particularly in light of Allied plans to invade Normandy. [221] The Cabinet also refused offers of food shipments from several different nations. [18] When such shipments did begin to increase modestly in late 1943, the transport and storage facilities were understaffed and inadequate. [222] When Viscount Archibald Wavell replaced Linlithgow as Viceroy in the latter half of 1943, he too began a series of exasperated demands to the War Cabinet for very large quantities of grain. [223] His requests were again repeatedly denied, causing him to decry the current crisis as "one of the greatest disasters that has befallen any people under British rule, and [the] damage to our reputation both among Indians and foreigners in India is incalculable". [224] Churchill wrote to Franklin D. Roosevelt at the end of April 1944 asking for aid from the United States in shipping wheat in from Australia, but Roosevelt replied apologetically on 1 June that he was "unable on military grounds to consent to the diversion of shipping". [225]

Experts' disagreement over political issues can be found in differing explanations of the War Cabinet's refusal to allocate funds to import grain. Lizzie Collingham holds that the massive global dislocations of supplies caused by World War II virtually guaranteed that hunger would occur somewhere in the world, yet Churchill's animosity and perhaps racism toward Indians decided the exact location where famine would fall. [226] Similarly, Madhusree Mukerjee makes a stark accusation: "The War Cabinet's shipping assignments made in August 1943, shortly after Amery had pleaded for famine relief, show Australian wheat flour travelling to Ceylon, the Middle East, and Southern Africa – everywhere in the Indian Ocean but to India. Those assignments show a will to punish." [227] In contrast, Mark Tauger strikes a more supportive stance: "In the Indian Ocean alone from January 1942 to May 1943, the Axis powers sank 230 British and Allied merchant ships totalling 873,000 tons, in other words, a substantial boat every other day. British hesitation to allocate shipping concerned not only potential diversion of shipping from other war-related needs but also the prospect of losing the shipping to attacks without actually [bringing help to] India at all." [228]

An estimated 2.1–3 million [A] Bengalis died, out of a population of 60.3 million. However, contemporary mortality statistics were to some degree under-recorded, particularly for the rural areas, where data collection and reporting were rudimentary even in normal times. Thus, many of those who died or migrated went unreported. [229] The principal causes of death also changed as the famine progressed in two waves. [230]

Early on, conditions drifted towards famine at different rates in different Bengal districts. The Government of India dated the beginning of the Bengal food crisis from the air raids on Calcutta in December 1942, [200] blaming the acceleration to full-scale famine by May 1943 on the effects of price decontrol. [231] However, in some districts the food crisis had begun as early as mid-1942. [232] The earliest indications were somewhat obscured, since the rural poor were able to draw upon various survival strategies for a few months. [233] After December 1942 reports from various commissioners and district officers began to cite a "sudden and alarming" inflation, nearly doubling the price of rice; this was followed in January by reports of distress caused by serious food supply problems. [234] In May 1943, six districts – Rangpur, Mymensingh, Bakarganj, Chittagong, Noakhali and Tipperah – were the first to report deaths by starvation. Chittagong and Noakhali, both "boat denial" districts in the Ganges Delta (or Sundarbans Delta) area, were the hardest hit. [172] In this first wave – from May to October 1943 – starvation was the principal cause of excess mortality (that is, deaths attributable to the famine, over and above the normal death rates), filling the emergency hospitals in Calcutta and accounting for the majority of deaths in some districts. [235] According to the Famine Inquiry Commission report, many victims on the streets and in the hospitals were so emaciated that they resembled "living skeletons". [236] While some districts of Bengal were relatively less affected throughout the crisis, [237] no demographic or geographic group was completely immune to increased mortality rates caused by disease – but deaths from starvation were confined to the rural poor. [238]

Deaths by starvation had peaked by November 1943. [239] Disease began its sharp upward turn around October 1943 and overtook starvation as the most common cause of death around December. [240] Disease-related mortality then continued to take its toll through early-to-mid 1944. [235] Among diseases, malaria was the biggest killer. [241] From July 1943 to June 1944, the monthly death toll from malaria averaged 125% above rates from the previous five years, reaching 203% above average in December 1943. [241] Malaria parasites were found in nearly 52% of blood samples examined at Calcutta hospitals during the peak period, November–December 1944. [242] Statistics for malaria deaths are almost certainly inaccurate, since the symptoms often resemble those of other fatal fevers, but there is little doubt that it was the main killer. [243] Other famine-related deaths resulted from dysentery and diarrhoea, typically through consumption of poor-quality food or deterioration of the digestive system caused by malnutrition. [244] Cholera is a waterborne disease associated with social disruption, poor sanitation, contaminated water, crowded living conditions (as in refugee camps), and a wandering population – problems brought on after the October cyclone and flooding and then continuing through the crisis. [245] The epidemic of smallpox largely resulted from a lack of vaccinations and the inability to quarantine patients, caused by general social disruption. [246] According to social demographer Arup Maharatna, statistics for smallpox and cholera are probably more reliable than those for malaria, since their symptoms are more easily recognisable. [247]

The mortality statistics present a confused picture of the distribution of deaths among age and gender groups. Although very young children and the elderly are usually more susceptible to the effects of starvation and disease, overall in Bengal it was adults and older children who suffered the highest proportional mortality rises. [248] However, this picture was inverted in some urban areas, perhaps because the cities attracted large numbers of very young and very old migrants. [249] In general, males suffered generally higher death rates than females, [250] although the rate of female infant death was higher than for males, perhaps reflecting a discriminatory bias. [251] A relatively lower death rate for females of child-bearing age may have reflected a reduction in fertility, brought on by malnutrition, which in turn reduced maternal deaths. [252]

Regional differences in mortality rates were influenced by the effects of migration, [253] and of natural disasters. [254] In general, excess mortality was higher in the east (followed by the west, centre, and north of Bengal, in that order), [255] even though the relative shortfall in the rice crop was worst in the western districts of Bengal. [256] Eastern districts were relatively densely populated, [257] [ failed verification ] were closest to the Burma war zone, and normally ran grain deficits in pre-famine times. [258] These districts also were subject to the boat denial policy, and had a relatively high proportion of jute production instead of rice. [254] Workers in the east were more likely to receive monetary wages than payment in kind with a portion of the harvest, a common practice in the western districts. [259] When prices rose sharply, their wages failed to follow suit; this drop in real wages left them less able to purchase food. [15] The following table, derived from Arup Maharatna (1992), shows trends in excess mortality for 1943–44 as compared to prior non-famine years. Death rate is the total number of deaths in a year from all causes, per 1,000 of mid-year population. [260] All death rates are with respect to the population in 1941. [261] Percentages for 1943–44 are of excess deaths (that is, those attributable to the famine, over and above the normal incidence) [R] as compared to rates from 1937 to 1941.

Cause-specific death rates during pre-famine and famine periods, and relative importance of different causes of death during the famine: Bengal [262]

Cause of death         Pre-famine rate   1943 rate   1943 %    1944 rate   1944 %
Cholera                           0.73        3.60    23.88         0.82     0.99
Smallpox                          0.21        0.37     1.30         2.34    23.69
Fever                             6.14        7.56    11.83         6.22     0.91
Malaria                           6.29       11.46    43.06        12.71    71.41
Dysentery/diarrhoea               0.88        1.58     5.83         1.08     2.27
All other                         5.21        7.20    14.11         5.57     0.74
All causes                       19.46       31.77   100.00        28.75   100.00
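The excess-death percentages in the table can be reproduced from the rates themselves: for each cause, the famine-year rate minus the pre-famine rate, divided by the corresponding excess in the all-causes rate. A minimal Python sketch (values transcribed from the table above; small deviations from the published percentages reflect rounding in the published rates):

```python
# Excess-death percentage: share of each cause in the famine's excess mortality.
# excess % = (famine-year rate - pre-famine rate)
#            / (all-causes famine-year rate - all-causes pre-famine rate) * 100
rates = {
    # cause: (pre-famine, 1943, 1944) deaths per 1,000
    "Cholera":             (0.73, 3.60, 0.82),
    "Smallpox":            (0.21, 0.37, 2.34),
    "Fever":               (6.14, 7.56, 6.22),
    "Malaria":             (6.29, 11.46, 12.71),
    "Dysentery/diarrhoea": (0.88, 1.58, 1.08),
    "All other":           (5.21, 7.20, 5.57),
}
all_causes = {"pre": 19.46, "1943": 31.77, "1944": 28.75}

def excess_pct(cause, year):
    pre, r1943, r1944 = rates[cause]
    rate = r1943 if year == "1943" else r1944
    return 100 * (rate - pre) / (all_causes[year] - all_causes["pre"])

# Malaria dominates both famine years (table: 43.06% in 1943, 71.41% in 1944)
print(round(excess_pct("Malaria", "1943"), 1), round(excess_pct("Malaria", "1944"), 1))
```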

Overall, the table shows the dominance of malaria as the cause of death throughout the famine, accounting for roughly 43% [S] of the excess deaths in 1943 and 71% in 1944. Cholera was a major source of famine-caused deaths in 1943 (24%) but dropped to a negligible percentage (1%) the next year. Smallpox deaths were almost a mirror image: they made up a small percentage of excess deaths in 1943 (1%) but jumped in 1944 (24%). Finally, the sharp jump in the death rate from "All other" causes in 1943 is almost certainly due to deaths from pure starvation, which were negligible in 1944. [263]

Though excess mortality due to malarial deaths peaked in December 1943, rates remained high throughout the following year. [264] Scarce supplies of quinine (the most common malaria medication) were very frequently diverted to the black market. [265] Advanced anti-malarial drugs such as mepacrine (Atabrine) were distributed almost solely to the military and to "priority classes". DDT (then relatively new and considered "miraculous") and pyrethrum were sprayed only around military installations. Paris Green was used as an insecticide in some other areas. [266] This unequal distribution of anti-malarial measures may explain a lower incidence of malarial deaths in population centres, where the greatest cause of death was "all other" (probably migrants dying from starvation). [263]

Deaths from dysentery and diarrhoea peaked in December 1943, the same month as for malaria. [264] Cholera deaths peaked in October 1943 but receded dramatically in the following year, brought under control by a vaccination program overseen by military medical workers. [267] A similar smallpox vaccine campaign started later and was pursued less effectively; [268] smallpox deaths peaked in April 1944. [269] "Starvation" was generally not listed as a cause of death at the time; many deaths by starvation may have been listed under the "all other" category. [270] Here the death rates, rather than percentages, reveal the peak in 1943.

The two waves – starvation and disease – also interacted and amplified one another, increasing the excess mortality. [271] Widespread starvation and malnutrition first compromised immune systems, and reduced resistance to disease led to death by opportunistic infections. [272] Second, the social disruption and dismal conditions caused by a cascading breakdown of social systems brought mass migration, overcrowding, poor sanitation, poor water quality and waste disposal, increased vermin, and unburied dead. All of these factors are closely associated with the increased spread of infectious disease. [240]

Despite the organised and sometimes violent civil unrest immediately before the famine, [O] there was no organised rioting when the famine took hold. [273] However, the crisis overwhelmed the provision of health care and key supplies: food relief and medical rehabilitation were supplied too late, whilst medical facilities across the province were utterly insufficient for the task at hand. [274] A long-standing system of rural patronage, in which peasants relied on large landowners to supply subsistence in times of crisis, collapsed as patrons exhausted their own resources and abandoned the peasants. [275]

Families also disintegrated, with cases of abandonment, child-selling, prostitution, and sexual exploitation. [276] Lines of small children begging stretched for miles outside cities; at night, children could be heard "crying bitterly and coughing terribly ... in the pouring monsoon rain ... stark naked, homeless, motherless, fatherless and friendless. Their sole possession was an empty tin". [277] A schoolteacher in Mahisadal witnessed "children picking and eating undigested grains out of a beggar's diarrheal discharge". [278] Author Freda Bedi wrote that it was "not just the problem of rice and the availability of rice. It was the problem of society in fragments". [279]

Population displacement Edit

The famine fell hardest on the rural poor. As the distress continued, families adopted increasingly desperate means for survival. First, they reduced their food intake and began to sell jewellery, ornaments, and smaller items of personal property. As expenses for food or burials became more urgent, the items sold became larger and less replaceable. Eventually, families disintegrated; men sold their small farms and left home to look for work or to join the army, and women and children became homeless migrants, often travelling to Calcutta or another large city in search of organised relief: [8]

Husbands deserted wives and wives husbands; elderly dependents were left behind in the villages; babies and young children were sometimes abandoned. According to a survey carried out in Calcutta during the latter half of 1943, some breaking up of the family had occurred in about half the destitute population which reached the city. [280]

In Calcutta, evidence of the famine was "... mainly in the form of masses of rural destitutes trekking into the city and dying on the streets". [216] Estimates of the number of the sick who flocked to Calcutta ranged between 100,000 and 150,000. [281] Once they left their rural villages in search of food, their outlook for survival was grim: "Many died by the roadside – witness the skulls and bones which were to be seen there in the months following the famine." [282]

Sanitation and undisposed dead Edit

The disruption of core elements of society brought a catastrophic breakdown of sanitary conditions and hygiene standards. [240] Large-scale migration resulted in the abandonment of the facilities and sale of the utensils necessary for washing clothes or preparation of food. [283] Many people drank contaminated rainwater from streets and open spaces where others had urinated or defecated. [284] Particularly in the early months of the crisis, conditions did not improve for those under medical care:

Conditions in certain famine hospitals at this time ... were indescribably bad ... Visitors were horrified by the state of the wards and patients, the ubiquitous filth, and the lack of adequate care and treatment ... [In hospitals all across Bengal, the] condition of patients was usually appalling, a large proportion suffering from acute emaciation, with 'famine diarrhoea' ... Sanitary conditions in nearly all temporary indoor institutions were very bad to start with ... [285]

The desperate condition of healthcare did not improve appreciably until the army, under Viscount Wavell, took over the provision of relief supplies in October 1943. At that time medical resources [286] were made far more available. [287]

Disposal of corpses soon became a problem for the government and the public, as numbers overwhelmed cremation houses, burial grounds, and those collecting and disposing of the dead. Corpses lay scattered throughout the pavements and streets of Calcutta. In only two days of August 1943, at least 120 corpses were removed from public thoroughfares. [288] In the countryside bodies were often disposed of in rivers and water supplies. [289] As one survivor explained, "We couldn't bury them or anything. No one had the strength to perform rites. People would tie a rope around the necks and drag them over to a ditch." [290] Corpses were also left to rot and putrefy in open spaces. The bodies were picked over by vultures and dragged away by jackals. Sometimes this happened while the victim was still living. [291] The sight of corpses beside canals, ravaged by dogs and jackals, was common; during a seven-mile boat ride in Midnapore in November 1943, a journalist counted at least five hundred such sets of skeletal remains. [292] The weekly newspaper Biplabi commented in November 1943 on the levels of putrefaction, contamination, and vermin infestation:

Bengal is a vast cremation ground, a meeting place for ghosts and evil spirits, a land so overrun by dogs, jackals and vultures that it makes one wonder whether the Bengalis are really alive or have become ghosts from some distant epoch. [293]

By the summer of 1943, many districts of Bengal, especially in the countryside, had taken on the look of "a vast charnel house". [291]

Cloth famine Edit

As a further consequence of the crisis, a "cloth famine" left the poorest in Bengal clothed in scraps or naked through the winter. [294] [295] The British military consumed nearly all the textiles produced in India by purchasing Indian-made boots, parachutes, uniforms, blankets, and other goods at heavily discounted rates. [101] India produced 600,000 miles of cotton fabric during the war, from which it made two million parachutes and 415 million items of military clothing. [101] It exported 177 million yards of cotton in 1938–1939 and 819 million in 1942–1943. [296] The country's production of silk, wool and leather was also used up by the military. [101]

The small proportion of material left over was purchased by speculators for sale to civilians, subject to similarly steep inflation; [101] in May 1943 prices were 425% higher than in August 1939. [296] With the supply of cloth crowded out by commitments to Britain and price levels affected by profiteering, those not among the "priority classes" faced increasingly dire scarcity. Swami Sambudhanand, President of the Ramakrishna Mission in Bombay, stated in July 1943:

The robbing of graveyards for clothes, disrobing of men and women in out of way places for clothes ... and minor riotings here and there have been reported. Stray news has also come that women have committed suicide for want of cloth ... Thousands of men and women ... cannot go out to attend their usual work outside for want of a piece of cloth to wrap round their loins. [103]

Many women "took to staying inside a room all day long, emerging only when it was [their] turn to wear the single fragment of cloth shared with female relatives". [297]

Exploitation of women and children Edit

One of the classic effects of famine is that it intensifies the exploitation of women; the sale of women and girls, for example, tends to increase. [298] The sexual exploitation of poor, rural, lower-caste and tribal women by the jotedars had been difficult to escape even before the crisis. [299] In the wake of the cyclone and later famine, many women lost or sold all their possessions, and lost a male guardian due to abandonment or death. Those who migrated to Calcutta frequently had only begging or prostitution available as strategies for survival; often regular meals were the only payment. [300] Tarakchandra Das suggests that a large proportion of the girls aged 15 and younger who migrated to Calcutta during the famine disappeared into brothels; [301] in late 1943, entire boatloads of girls for sale were reported in ports of East Bengal. [302] Girls were also prostituted to soldiers, with boys acting as pimps. [303] Families sent their young girls to wealthy landowners overnight in exchange for very small amounts of money or rice, [304] or sold them outright into prostitution; girls were sometimes enticed with sweet treats and kidnapped by pimps. Very often, these girls lived in constant fear of injury or death, but the brothels were their sole means of survival, or they were unable to escape. [305] Women who had been sexually exploited could not later expect any social acceptance or a return to their home or family. [306] Bina Agarwal writes that such women became permanent outcastes in a society that highly values female chastity, rejected by both their birth family and husband's family. [307]

An unknown number of children, some tens of thousands, were orphaned. [308] Many others were abandoned, sometimes by the roadside or at orphanages, [309] or sold for as much as two maunds (one maund was roughly equal to 37 kilograms (82 lb)), [310] or as little as one seer (1 kilogram (2.2 lb)) [311] of unhusked rice, or for trifling amounts of cash. Sometimes they were purchased as household servants, where they would "grow up as little better than domestic slaves". [312] They were also purchased by sexual predators. Altogether, according to Greenough, the victimisation and exploitation of these women and children was an immense social cost of the famine. [313]

Aside from the relatively prompt but inadequate provision of humanitarian aid for the cyclone-stricken areas around Midnapore beginning in October 1942, [314] the response of both the Bengal Provincial Government and the Government of India was slow. [315] A "non-trivial" yet "pitifully inadequate" amount of aid began to be distributed from private charitable organisations [316] in the early months of 1943 and increased through time, mainly in Calcutta but to a limited extent in the countryside. [317] In April, more government relief began to flow to the outlying areas, but these efforts were restricted in scope and largely misdirected, [188] with most of the cash and grain supplies flowing to the relatively wealthy landowners and urban middle-class (and typically Hindu) bhadraloks. [318] This initial period of relief included three forms of aid: [319] agricultural loans (cash for the purchase of paddy seed, plough cattle, and maintenance expenses), [320] grain given as gratuitous relief, and "test works" that offered food and perhaps a small amount of money in exchange for strenuous work. The "test" aspect arose because there was an assumption that if relatively large numbers of people took the offer, that indicated that famine conditions were prevalent. [321] Agricultural loans offered no assistance to the large numbers of rural poor who had little or no land. [322] Grain relief was divided between cheap grain shops and the open market, with far more going to the markets. Supplying grain to the markets was intended to lower grain prices, [323] but in practice gave little help to the rural poor, instead placing them into direct purchasing competition with wealthier Bengalis at greatly inflated prices. [324] Thus from the beginning of the crisis until around August 1943, private charity was the principal form of relief available to the very poor. [325]

According to Paul Greenough, the Provincial Government of Bengal delayed its relief efforts primarily because they had no idea how to deal with a provincial rice market crippled by the interaction of man-made shocks, [326] as opposed to the far more familiar case of localised shortage due to natural disaster. Moreover, the urban middle-class were their overriding concern, not the rural poor. They were also expecting the Government of India to rescue Bengal by bringing food in from outside the province (350,000 tons had been promised but not delivered). And finally, they had long stood by a public propaganda campaign declaring "sufficiency" in Bengal's rice supply, and were afraid that speaking of scarcity rather than sufficiency would lead to increased hoarding and speculation. [317]

There was also rampant corruption and nepotism in the distribution of government aid; often as much as half of the goods disappeared into the black market or into the hands of friends or relatives. [327] Despite a long-established and detailed Famine Code that would have triggered a sizable increase in aid, and a statement privately circulated by the government in June 1943 that a state of famine might need to be formally declared, [328] this declaration never happened. [175]

Since government relief efforts were initially limited at best, a large and diverse number of private groups and voluntary workers attempted to meet the alarming needs caused by deprivation. [329] Communists, socialists, wealthy merchants, women's groups, private citizens from distant Karachi and Indian expatriates from as far away as east Africa aided in relief efforts or sent donations of money, food and cloth. [330] Markedly diverse political groups, including pro-war allies of the Raj and anti-war nationalists, each set up separate relief funds or aid groups. [331] Though the efforts of these diverse groups were sometimes marred by Hindu and Muslim communalism, with bitter accusations and counter-accusations of unfair treatment and favouritism, [332] collectively they provided substantial aid. [330]

Grain began to flow to buyers in Calcutta after the inter-provincial trade barriers were abolished in May 1943, [333] but on 17 July a flood of the Damodar River in Midnapore breached major rail lines, severely hampering import by rail. [334] As the depth and scope of the famine became unmistakable, the Provincial Government began setting up gruel kitchens in August 1943; the gruel, which often provided barely a survival-level caloric intake, [335] was sometimes unfit for consumption – mouldy or contaminated with dirt, sand, and gravel. [336] [ failed verification ] Unfamiliar and indigestible grains were often substituted for rice, causing intestinal distress that frequently resulted in death among the weakest. Nevertheless, food distributed from government gruel kitchens immediately became the main source of aid for the rural poor. [337]

The rails had been repaired in August, and pressure from the Government of India brought substantial supplies into Calcutta during September, [338] Linlithgow's final month as Viceroy. However, a second problem emerged: the Civil Supplies Department of Bengal was undermanned and under-equipped to distribute the supplies, and the resulting transportation bottleneck left very large piles of grain accumulating in the open air in several locations, including Calcutta's Botanical Garden. [339] Field Marshal Archibald Wavell replaced Linlithgow that October; within two weeks he had requested military support for the transport and distribution of crucial supplies. This assistance was delivered promptly, including "a full division of ... 15,000 [British] soldiers ... military lorries and the Royal Air Force", and distribution to even the most distant rural areas began on a large scale. [340] In particular, grain was imported from the Punjab, and medical resources [286] were made far more available. [341] Rank-and-file soldiers, who had sometimes disobeyed orders to feed the destitute from their rations, [342] were held in esteem by Bengalis for the efficiency of their work in distributing relief. [343] That December, the "largest [rice] paddy crop ever seen" in Bengal was harvested. According to Greenough, large amounts of land previously used for other crops had been switched to rice production. The price of rice began to fall. [344] Survivors of the famine and epidemics gathered the harvest themselves, [345] though in some villages there were no survivors capable of doing the work. [346] Wavell went on to make several other key policy steps, including promising that aid from other provinces would continue to feed the Bengal countryside, setting up a minimum rations scheme, [344] and (after considerable effort) prevailing upon Great Britain to increase international imports. [242] He has been widely praised for his decisive and effective response to the crisis. [347] All official food relief work ended in December 1943 and January 1944. [348]

The famine's aftermath greatly accelerated pre-existing socioeconomic processes leading to poverty and income inequality, [349] severely disrupted important elements of Bengal's economy and social fabric, and ruined millions of families. [350] The crisis overwhelmed and impoverished large segments of the economy. A key source of impoverishment was the widespread coping strategy of selling assets, including land. In 1943 alone in one village in east Bengal, for example, 54 out of a total of 168 families sold all or part of their landholdings; among these, 39 (or very nearly 3 out of 4) did so as a coping strategy in reaction to the scarcity of food. [351] As the famine wore on across Bengal, nearly 1.6 million families – roughly one-quarter of all landholders – sold or mortgaged their paddy lands in whole or in part. Some did so to profit from skyrocketing prices, but many others were trying to save themselves from crisis-driven distress. A total of 260,000 families sold all their landholdings outright, thus falling from the status of landholders to that of labourers. [352] The table below illustrates that land transfers increased significantly in each of four successive years. When compared to the base period of 1940–41, the 1941–42 increase was 504%, 1942–43 was 665%, 1943–44 was 1,057% and the increase of 1944–45 compared to 1940–41 was 872%:

Land alienation in Bengal, 1940–41 to 1944–45: number of sales of occupancy holdings [353]
1940–41 1941–42 1942–43 1943–44 1944–45
141,000 711,000 938,000 1,491,000 1,230,000
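The quoted percentages correspond to each year's sales expressed as a percentage of the 1940–41 base figure (so 711,000 sales against a base of 141,000 gives 504%). A quick arithmetic check of the table, as a Python sketch:

```python
# Occupancy-holding sales per year, from the land alienation table above
base = 141_000  # 1940-41 base year
sales = {
    "1941-42": 711_000,
    "1942-43": 938_000,
    "1943-44": 1_491_000,
    "1944-45": 1_230_000,
}

# Each year's sales as a percentage of the base year
pct_of_base = {year: round(100 * n / base) for year, n in sales.items()}
print(pct_of_base)  # reproduces the 504%, 665%, 1,057% and 872% quoted in the text
```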

This fall into lower income groups happened across a number of occupations. In absolute numbers, the hardest hit by post-famine impoverishment were women and landless agricultural labourers. In relative terms, those engaged in rural trade, fishing and transport (boatmen and bullock cart drivers) suffered the most. [354] In absolute numbers, agricultural labourers faced the highest rates of destitution and mortality. [355]

The "panicky responses" of the colonial state as it controlled the distribution of medical and food supplies in the wake of the fall of Burma had profound political consequences. "It was soon obvious to the bureaucrats in New Delhi and the provinces, as well as the GHQ (India)," wrote Sanjoy Bhattacharya, "that the disruption caused by these short-term policies – and the political capital being made out of their effects – would necessarily lead to a situation where major constitutional concessions, leading to the dissolution of the Raj, would be unavoidable." [150] Similarly, nationwide opposition to the boat denial policy, as typified by Mahatma Gandhi's vehement editorials, helped strengthen the Indian independence movement. The denial of boats alarmed the public; the resulting dispute was one point that helped to shape the "Quit India" movement of 1942 and harden the War Cabinet's response. An Indian National Congress (INC) resolution sharply decrying the destruction of boats and seizure of homes was considered treasonous by Churchill's War Cabinet, and was instrumental in the later arrest of the INC's top leadership. [356] Public thought in India, shaped by impulses such as media coverage and charity efforts, converged into a set of closely related conclusions: the famine had been a national injustice, preventing any recurrence was a national imperative, and the human tragedy left in its wake was, as Jawaharlal Nehru said, "... the final judgment on British rule in India". [357] According to historian Benjamin R. Siegel:

... at a national level, famine had transformed India's political landscape, underscoring the need for self-rule to Indian citizens far away from its epicenter. Photographs and journalism and the affective bonds of charity tied Indians inextricably to Bengal and made their suffering its own; a provincial [famine] was turned, in the midst of war, into a national case against imperial rule. [358]

Calcutta's two leading English-language newspapers were The Statesman (at the time British-owned) [359] and Amrita Bazar Patrika (edited by independence campaigner Tushar Kanti Ghosh). [360] In the early months of the famine, the government applied pressure on newspapers to "calm public fears about the food supply" [361] and follow the official stance that there was no rice shortage. This effort had some success: The Statesman published editorials asserting that the famine was due solely to speculation and hoarding, while "berating local traders and producers, and praising ministerial efforts". [361] [T] News of the famine was also subject to strict war-time censorship – even use of the word "famine" was prohibited [288] – leading The Statesman later to remark that the UK government "seems virtually to have withheld from the British public knowledge that there was famine in Bengal at all". [362]

Beginning in mid-July 1943 and more so in August, however, these two newspapers began publishing detailed and increasingly critical accounts of the depth and scope of the famine, its impact on society, and the nature of British, Hindu, and Muslim political responses. [363] A turning point in news coverage came in late August 1943, when the editor of The Statesman, Ian Stephens, solicited and published a series of graphic photos of the victims. These made world headlines [364] and marked the beginning of domestic and international consciousness of the famine. [365] The next morning, "in Delhi second-hand copies of the paper were selling at several times the news-stand price," [288] and soon "in Washington the State Department circulated them among policy makers". [366] In Britain, The Guardian called the situation "horrible beyond description". [367] The images had a profound effect and marked "for many, the beginning of the end of colonial rule". [367] Stephens' decision to publish them and to adopt a defiant editorial stance won accolades from many (including the Famine Inquiry Commission), [368] and has been described as "a singular act of journalistic courage without which many more lives would have surely been lost". [288] The publication of the images, along with Stephens' editorials, not only helped to bring the famine to an end by driving the British government to supply adequate relief to the victims, [369] but also inspired Amartya Sen's influential contention that the presence of a free press prevents famines in democratic countries. [370] The photographs also spurred Amrita Bazar Patrika and the Indian Communist Party's organ, People's War, to publish similar images; the latter would make photographer Sunil Janah famous. [371] Women journalists who covered the famine included Freda Bedi reporting for Lahore's The Tribune, [372] and Vasudha Chakravarti and Kalyani Bhattacharjee, who wrote from a nationalist perspective. [373]

The famine has been portrayed in novels, films and art. The novel Ashani Sanket by Bibhutibhushan Bandyopadhyay is a fictional account of a young doctor and his wife in rural Bengal during the famine. It was adapted into a film of the same name (Distant Thunder) by director Satyajit Ray in 1973. The film is listed in The New York Times Guide to the Best 1,000 Movies Ever Made. [374] Also well-known are the novel So Many Hungers! (1947) by Bhabani Bhattacharya and the 1980 film Akaler Shandhaney by Mrinal Sen. Ella Sen's collection of stories based on reality, Darkening Days: Being a Narrative of Famine-Stricken Bengal, recounts horrific events from a woman's point of view. [375]

A contemporary sketchbook of iconic scenes of famine victims, Hungry Bengal: a tour through Midnapur District in November, 1943 by Chittaprosad, was immediately banned by the British and 5,000 copies were seized and destroyed. [376] One copy was hidden by Chittaprosad's family and is now in the possession of the Delhi Art Gallery. [377] Another artist famed for his sketches of the famine was Zainul Abedin. [378]

Controversy about the causes of the famine has continued in the decades since. Attempting to determine culpability, research and analysis have covered complex issues such as the impacts of natural forces, market failures, failed policies or even malfeasance by governmental institutions, and war profiteering or other unscrupulous acts by private business. The questionable accuracy of much of the available contemporary statistical and anecdotal data is a complicating factor, [196] as is the fact that the analyses and their conclusions are political and politicised. [379]

The degree of crop shortfall in late 1942 and its impact in 1943 has dominated the historiography of the famine. [43] [U] The issue reflects a larger debate between two perspectives: one emphasises the importance of food availability decline (FAD) as a cause for famine, and another focuses on the failure of exchange entitlements (FEE). The FAD explanation blames famine on crop failures brought on principally by crises such as drought, flood, or man-made devastation from war. The FEE account agrees that such external factors are in some cases important, but holds that famine is primarily the interaction between pre-existing "structural vulnerability" (such as poverty) and a shock event (such as war or political interference in markets) that disrupts the economic market for food. When these interact, some groups within society can become unable to purchase or acquire food even though sufficient supplies are available. [380]

Both the FAD and the FEE perspectives would agree that Bengal experienced at least some grain shortage in 1943 due to the loss of imports from Burma, damage from the cyclone, and brown-spot infestation. However, the FEE analyses do not consider shortage the main factor, [381] while FAD-oriented scholars such as Peter Bowbrick hold that a sharp drop in the food supply was the pivotal determining factor. [382] S.Y. Padmanabhan and later Mark Tauger, in particular, argue that the impact of brown-spot disease was vastly underestimated, both during the famine and in later analyses. [383] The signs of crop infestation by the fungus are subtle; given the social and administrative conditions at the time, local officials would very likely have overlooked them. [384]

Academic consensus generally follows the FEE account, as formulated by Amartya Sen, [385] in describing the Bengal famine of 1943 as an "entitlements famine". On this view, the prelude to the famine was generalised war-time inflation, and the problem was exacerbated by prioritised distribution and abortive attempts at price control, [386] but the death blow was devastating leaps in the inflation rate due to heavy speculative buying and panic-driven hoarding. [387] This in turn caused a fatal decline in the real wages of landless agricultural workers, [388] transforming what should have been a local shortage into a horrific famine. [389]

More recent analyses often stress political factors. [390] Discussions of the government's role split into two broad camps: those which suggest that the government unwittingly caused or was unable to respond to the crisis, [391] and those which assert that the government wilfully caused or ignored the plight of starving Indians. The former see the problem as a series of avoidable war-time policy failures and "panicky responses" [150] from a government that was spectacularly inept, [392] overwhelmed [393] and in disarray; the latter as a conscious miscarriage of justice by the "ruling colonial elite" [394] who abandoned the poor of Bengal. [395]

Sen does not deny that British mis-government contributed to the crisis, but sees the policy failure as a complete misunderstanding of the cause of the famine. This misunderstanding led to a wholly misguided emphasis on measuring non-existent food shortages rather than addressing the very real and devastating inflation-driven imbalances in exchange entitlements. [396] In stark contrast, although Cormac Ó Gráda notes that the exchange entitlements view of this famine is generally accepted, [397] he lends greater weight to the importance of a crop shortfall than does Sen, and goes on to largely reject Sen's emphasis on hoarding and speculation. [398] He does not stop there but emphasises a "lack of political will" and the pressure of wartime priorities that drove the British government and the provincial government of Bengal to make fateful decisions: the "denial policies", the use of heavy shipping for war supplies rather than food, the refusal to officially declare a state of famine, and the Balkanisation of grain markets through inter-provincial trade barriers. [399] On this view, these policies were designed to serve British military goals at the expense of Indian interests, [400] reflecting the War Cabinet's willingness to "supply the Army's needs and let the Indian people starve if necessary". [401] Far from being accidental, these dislocations were fully recognised beforehand as fatal for identifiable Indian groups whose economic activities did not directly, actively, or adequately advance British military goals. [402] The policies may have met their intended wartime goals, but only at the cost of large-scale dislocations in the domestic economy. The British government, this argument maintains, thus bears moral responsibility for the rural deaths. [403]

Auriol Law-Smith's discussion of contributing causes of the famine also lays blame on the British Government of India, primarily emphasising Viceroy Linlithgow's lack of political will to "infringe provincial autonomy" by using his authority to remove interprovincial barriers, which would have ensured the free movement of life-saving grain. [404]

A related argument, present since the days of the famine [405] but expressed at length by Madhusree Mukerjee, accuses key figures in the British government (particularly Prime Minister Winston Churchill) [406] of genuine antipathy toward Indians and Indian independence, an antipathy arising mainly from a desire to protect imperialist privilege but tinged also with racist undertones. [407] This is attributed to British anger over widespread Bengali nationalist sentiment and the perceived treachery of the violent Quit India uprising. [408] Historian Tirthankar Roy critiques this view and refers to it as "naive". Instead, Roy attributes the delayed response to rivalry and misinformation spread about the famine within the local government, particularly by the Minister of Civil Supplies Huseyn Shaheed Suhrawardy, who maintained there was no food shortage throughout the famine, while noting that there is little evidence of Churchill's views influencing War Cabinet policy. [409]

For its part, the report of the Famine Commission (its members appointed in 1944 by the British Government of India [410] and chaired by Sir John Woodhead, a former Indian Civil Service official in Bengal), [411] absolved the British government from all major blame. [412] It acknowledged some failures in its price controls and transportation efforts [413] and laid additional responsibility at the feet of unavoidable fate, but reserved its broadest and most forceful finger-pointing for local politicians in the (largely Muslim) [414] provincial Government of Bengal: [415] As it stated, "after considering all the circumstances, we cannot avoid the conclusion that it lay in the power of the Government of Bengal, by bold, resolute and well-conceived measures at the right time to have largely prevented the tragedy of the famine as it actually took place". [416] For example, the position of the Famine Inquiry Commission with respect to charges that prioritised distribution aggravated the famine is that the Government of Bengal's lack of control over supplies was the more serious matter. [417] Some sources allege that the Famine Commission deliberately declined to blame the UK or was even designed to do so; [418] however, Bowbrick defends the report's overall accuracy, stating it was undertaken without any preconceptions, and twice describing it as excellent. Meanwhile, he repeatedly and rather forcefully favors its analyses over Sen's. [419] British accusations that Indian officials were responsible began as early as 1943, as an editorial in The Statesman on 5 October noted disapprovingly. [420]

Paul Greenough stands somewhat apart from other analysts by emphasising a pattern of victimization. In his account, Bengal was at base susceptible to famine because of population pressures and market inefficiencies, and these were exacerbated by a dire combination of war, political strife, and natural causes. [421] Above all else, direct blame should be laid on a series of government interventions that disrupted the wholesale rice market. [422] Once the crisis began, morbidity rates were driven by a series of cultural decisions, as dependents were abandoned by their providers at every level of society: male heads of peasant households abandoned weaker family members; landholders abandoned the various forms of patronage that, according to Greenough, had traditionally been maintained; and the government abandoned the rural poor. These abandoned groups had been socially and politically selected for death. [423]

A final line of blaming holds that major industrialists either caused or at least significantly exacerbated the famine through speculation, war profiteering, hoarding, and corruption – "unscrupulous, heartless grain traders forcing up prices based on false rumors". [424] Working from an assumption that the Bengal famine claimed 1.5 million lives, the Famine Inquiry Commission made a "gruesome calculation" that "nearly a thousand rupees [£88 in 1944 equivalent to £3,904 [425] or $1,294 [426] in 2019] of profits were accrued per death". [427] As the Famine Inquiry Commission put it, "a large part of the community lived in plenty while others starved ... corruption was widespread throughout the province and in many classes of society". [428]

The surprise star of 2013: The 'Prancercise' lady

It's been quite a year for Joanna Rohrback. One year ago this Christmas, she posted an exercise video that showed her working off calories by mimicking the movements of a horse.

She called it "Prancercise."

For months, the video sat unnoticed on the Web. In May, however, somebody, somewhere—I don't really know how it all started—discovered the video, and the rest galloped into history.

I interviewed Rohrback that month, back when the video had only 316,000 hits. It's now closing in on 9 million. She took all the ribbing about the video in stride.

"Let them laugh," she said good-naturedly. "Who would pay any attention to a boring, average, everyday video? I am so glad . I have my confidence."

Rohrback has since appeared on network television and even landed her own commercials for pistachios. She also added an "X-rated" version of the original video, which … wasn't.

I contacted Rohrback to reflect on what a year it has been, and I asked her if Prancercise still has legs.

"The future holds a host of possibilities," she told me. "On the horizon some possibilities are a reality show, my own studio to teach classes and, ideally, franchises."

She's so busy that she's having a hard time staying on top of offers, Rohrback said.

"I still am in the process of enlisting a ⟎lebrity agent,' " she said. "My cautious nature and inexperience has me forestalling this necessary asset."

The best part of the year has been meeting creative, innovative people, according to Rohrback. The toughest part has been running her business almost entirely alone.

"This past year has introduced me to the hypercritical life a celebrity experiences, and that's not necessarily easy medicine to swallow," she said. However, she revels in experiencing what she calls "the groundswell of joy and fun I've obviously been able to deliver to masses of people all over the globe … autographs, pictures with me and hugs."

She's also been "doin' some walkin'," as the video says. "Prior to May I hadn't flown by plane or gone very far since 2000," Rohrback said. "Now it's been several trips to L.A., multiple trips to New York, a trip to Chicago, Washington D.C., and Indiana."

For all the laughs, she's dead serious about the benefits of the exercise she created.

"When the dust finally settles, I hope people will begin to really grasp the true essence and depth of my Prancercise program," Rohrback said.

Have they grasped it enough yet to let Joanna Rohrback Prancercise all the way to the bank?

"I'm not rich, and I'm not poor," she told me. "I'm making a living doing what I love and believe in so much, and for that I'm thoroughly thankful."

—By CNBC's Jane Wells Follow her on Twitter: @janewells


The exceptionally well-preserved specimen features the snail's 99-million-year-old soft-body, along with five recent offspring—the youngest of which is still connected to its mother by a trail of mucus.

The shells of four of her offspring, or neonates, are visible in the amber, with the youngest one still attached via a whitish glob of mucus.

'The snails were apparently encased in the tree resin immediately after birth and preserved in that position over millions of years,' evolutionary biologist Adrienne Jochum of the Senckenberg Research Institute and Natural History Museum Frankfurt said in a statement.

'The mother snail must have noticed her impending fate and is stretching her tentacles up in a 'red alert' posture.'

Live births are the exception, rather than the rule, in land snails: Cretatortulosa gignens may have evolved to birth its young alive 'to protect its offspring from predators as long as possible in the tropical forests of the Cretaceous,' Jochum said.

'Just like their modern relatives from the genus Cyclophoroidea, our new discovery probably spent its life inconspicuously on dead and rotting leaves. We assume that the young of this species – compared to egg-laying snails – were smaller and lower in number to increase their chance of survival.'

Live births are rare for land snails, but this species might have evolved the practice 'to protect its offspring from predators as long as possible'

Besides the uniqueness of its maternal state, the specimen is notable for its preservation: land snails are usually captured as fossilized shells or imprints, and the preservation of their 'marshmallow-like' soft bodies is 'a rarity,' the researchers said.

Based on high-resolution photographs and CT scans, the team determined the mother was a newly discovered species of cyclophoroid they dubbed Cretatortulosa gignens, using the Latin word for 'giving birth.'


Amber is the fossilized remains of ancient tree resin that collected seeds, leaves and insects as it was secreted.

In many cases, small creatures like flies, bees, mosquitoes and even snails are attracted to the smell of the resin as it oozes out and are trapped forever.

As the eons march on, the resin is buried under layers upon layers of sediment, and the sustained heat and pressure turn it into the hard golden material known as amber.

Most trees secrete resin but it doesn't always form amber: Exposure to sunlight, rain, bacteria or fungi can prevent the transformation.

But if the conditions are right, the amber can provide a three-dimensional model of prehistoric life far more detailed than a traditional fossil.

Its shell was about a half-inch high.

'Our finding provides remarkable perspectives for interpreting gastropod evolution 80 million years earlier than the fossil record has known up to now,' they wrote.

'It shows that viviparity was already a relevant reproductive strategy in the Cretaceous, probably increasing the offspring's survival chance in a predator-lurking tropical forest.'

This week, scientists in Myanmar announced another unique discovery trapped in amber roughly 99 million years ago—a new species of ancient lizard.

The preserved specimen is in the same genus as the lizard 'Oculudentavis khaungraae,' whose original designation as a 'hummingbird-sized dinosaur' was retracted last year.

The new species, named 'Oculudentavis naga' in honor of the indigenous Naga people, was confirmed as a lizard following CT scans analyzing its skull and partial skeleton.

Major clues included the presence of scales, teeth attached directly to its jawbone — rather than nestled in sockets, as dinosaur teeth were — lizard-like eye structures and shoulder bones, and a hockey stick-shaped skull bone universally shared among scaled reptiles, also known as squamates.

Finding evidence of a pregnancy in a fossil is extremely rare, but in 2011 scientists in Nevada discovered a 246-million-year-old extinct marine reptile with its unborn offspring still in its womb

The creature, a pregnant ichthyosaur christened 'Martina,' was identified as a new species.

Martina's teeth, each about an inch in length, would have helped her tear up prey such as squid or fish in a sea that covered what is now the Western United States.

Martina was found at an excavation site in the Augusta Mountains, 150 miles east of Reno.


Often used in jewelry, amber is fossilized tree resin—the oldest of which dates back more than 300 million years.

In recent years the Hukawng Valley in northern Myanmar, formerly Burma, has yielded numerous finds.

In January 2017, researchers discovered a 100-million-year-old insect preserved in amber which bore a passing resemblance to ET.

Its features, including a triangular head and bulging eyes, were so unique that researchers placed it into a new scientific order, Aethiocarenodea.

The eyes on the side of its head would have given the insect the ability to see at almost 180 degrees simply by turning its head.

In June 2017, researchers revealed a stunning hatchling trapped in amber, which they believe was just a few days old when it fell into a pool of sap oozing from a conifer tree in Myanmar.

The incredible find showed the head, neck, wing, tail and feet of a now extinct bird which lived at the time of the dinosaurs, 100 million years ago, in unprecedented detail.

Researchers nicknamed the young enantiornithine 'Belone,' after the Burmese name for the amber-hued Oriental skylark.

The hatchling belonged to a group of birds known as the 'opposite birds' that lived alongside the ancestors of modern birds.

Archaeologists say they were actually more diverse and successful – until they died out with the dinosaurs 66 million years ago.

They had major differences from today's birds, and their shoulders and feet had grown quite differently to those of modern birds.

In December 2017, experts discovered incredible ancient fossils of a tick grasping a dinosaur feather and another – dubbed 'Dracula's terrible tick' – swollen after gorging on blood.

The first evidence that dinosaurs had bloodsucking parasites living on them was found preserved in 99 million-year-old Burmese amber.

The newly-discovered tick dates from the Cretaceous period of 145 to 66 million years ago.

In 2021, researchers announced they had discovered a new species of land snail from 99 million years ago preserved in amber moments after giving birth.

The gastropod's 'marshmallow-like' soft body of Cretatortulosa gignens was preserved in the sap, as were her five offspring.

The same week, scientists announced the discovery of another new species, an ancient lizard trapped in amber at roughly the same time.

'Oculudentavis naga' was confirmed as a lizard following CT scans analyzing its skull and partial skeleton.


Sugg, from the University of Durham, told The Smithsonian: 'The question was not, "Should you eat human flesh?" but, "What sort of flesh should you eat?"'

He explains how Thomas Willis, a 17th-century pioneer of brain science, brewed a drink for apoplexy, or bleeding, that mingled powdered human skull and chocolate. Meanwhile the moss that grew over a buried skull, called Usnea, was used to cure nosebleeds and possibly epilepsy.

Human fat was thought to cure gout, and German doctors soaked bandages in the fat for wounds.

A clue to our grisly past can be found in our literature, says Noble, from the University of New England in Australia. She found references in everything from John Donne’s 'Love’s Alchemy' to Shakespeare’s 'Othello'.

Sugg also tells how fresh blood was highly valued for its 'effects on vitality'. The German-Swiss physician Paracelsus, in the 16th century, believed blood was good for drinking.

Some followers advocated drinking blood fresh from the body, which does not seem to have caught on, but poor people could pay a small price for a cup of warm blood, served seconds after executions.

A 1592 engraving called Brazil by Hans Staden: While cannibalism has inspired art, it is also a long-established part of humanity

Survivors of the 1972 Andes flight crash had to resort to cannibalism to survive; their ordeal was portrayed in the 1993 film Alive (pictured)

Disaster: An image from the original 1972 plane crash

Sugg said: 'The executioner was considered a big healer in Germanic countries. He was a social leper with almost magical powers.'

Sugg also quotes a French recipe from 1679, which describes how to turn blood into marmalade.

A cannibal for modern days: Hannibal Lecter is thankfully a fictional version - although recent real cases do exist

The other belief at the time was that human remains contained the soul of the body, with young men or virgin women seen as the 'freshest', and highly prized.

Even the great Renaissance man Leonardo da Vinci said: 'We preserve our life with the death of others. In a dead thing insensate life remains which, when it is reunited with the stomachs of the living, regains sensitive and intellectual life.'

Cannibalism is not a new phenomenon - and can be found in many cultures across the world.

According to The Smithsonian, the practice began to die out as science flourished - but it still existed in the 19th century.

Sugg found such examples as an Englishman, in 1847, being advised to mix the skull of a young woman with treacle and feed it to his daughter to cure her epilepsy (which he dutifully carried out, but allegedly it failed).

And in 1908, a last known attempt was made in Germany to swallow blood at the scaffold.

However, it still continues - and in places you might not expect.

On top of the recent customs scandal, Noble cites news reports on the theft of organs of prisoners executed in China, and a body-snatching ring in New York City that stole and sold body parts from the dead to medical companies.

Watch how a soldier who survived an RPG in Iraq lives on after ten years

Posted On April 29, 2020 15:58:45

Victor Medina has an actual video of the moment that changed his life forever. One day, his unit in Iraq was forced to take a detour around its planned patrol route. It was June 29, 2009, and Sgt. 1st Class Medina was the convoy commander that day. After winding through alleyways and small villages around Nasiriyah, his convoy came to a long stretch of open road. That's when an explosively formed projectile (EFP) struck the side of his Humvee.

He was evacuated from the scene and diagnosed with moderate traumatic brain injury, along with the other physical injuries he sustained in the attack. It took him three years of rehabilitation, and his wife Roxana became a caregiver – a role that is only now receiving the attention it deserves.

The footage of the attack in the first 30 seconds of the above video is the moment Sgt. 1st Class Medina was hit by the EFP. There just happened to be a camera rolling on his Humvee in that moment. The TBI affected Medina's balance, his speech, and his ability to walk, among other things.

“It’s referred to as an invisible wound,” Victor says, referring to his traumatic brain injury. “In my case, you can’t see it, but I feel it every day.”

Since 2000, the Department of Defense estimates more than 383,000 service members have suffered from some form of traumatic brain injury. These injuries range in severity from ones caused by day-to-day training activities to more severe injuries like the one suffered by Sgt. 1st Class Medina. An overwhelming number of those come from Army personnel. Of the 225,144 traumatic brain injuries suffered by soldiers, most are mild. But even a moderate injury like Victor’s can require a caregiver for the veteran.

This video is part of a series created by AARP Studios and the Elizabeth Dole Foundation, highlighting veteran caregivers and the vets they care for. AARP wants to let families of wounded veterans know there are resources and support available through AARP’s Military Caregiving Guide, an incredible work designed to start your family off on the right foot. Some of you reading may not even realize you’re a veteran’s caregiver. Like Victor Medina’s wife Roxana, you may think you’re just doing your part, taking care of a sick loved one.

But as Roxana Delgado found, constantly caring for and supporting a veteran with a debilitating injury, while also caring for the rest of a household, supporting it through work and school, and potentially caring for children, can cause a caregiver to burn out before they even recognize it's happening. It took Roxana eight months to realize she was Victor's full-time caregiver – on top of everything else she does. It began to wear on her emotionally and strain their relationship.

But it doesn’t have to be that way.

Roxana Delgado and Victor Medina before his deployment to Iraq in 2009.

With AARP’s Prepare to Care guide, veteran caregivers don’t have to figure out their new lives on their own. The guide has vital checklists, charts, a database of federal resources, including the VA’s Caregiver Program. The rest is up to the caregiver. Roxana Delgado challenged her husband at every turn, and he soon rose to the challenge. He wanted to get his wife’s love back.

Before long, Victor was able to clean the house, make coffee in the morning, and generally alleviate some of the burdens of running their home. After 10 years in recovery, Victor Medina has achieved a remarkable level of independence, and together they started the TBI Warrior Foundation to help others with traumatic brain injuries. Roxana is now a health scientist and an Elizabeth Dole Foundation fellow. AARP Studios and the Elizabeth Dole Foundation are teaming up to tell these deeply personal stories of caregivers like Roxana because veteran caregivers need support and need to know they aren’t alone.

If you or someone you know is caring for a wounded veteran and needs help or emotional support, send them to AARP’s Prepare to Care Guide, tell them about Roxana Delgado and Victor Medina’s TBI Warrior Foundation, and let them know about the Elizabeth Dole Foundation’s Hidden Heroes Campaign.




The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.


Re: ( Score: 2)

Huh. I had no idea that adult human males had only 125 Calories. Talk about very low calorie density with good micronutrient content. That is the ultimate weight loss food. This should be on Dr. Oz, since he likes promoting that kind of stuff regardless of any empirical evidence. Maybe one of his sponsors could sell assault rifles and charcoal barbecues. Just be mindful that not all are created equal. Barbara Hudson is bigger than the typical adult male and likely carries more calories, for example.

Re: ( Score: 2)

Huh. I had no idea that adult human males had only 125 Calories. Talk about very low calorie density with good micronutrient content.

Nice try at pedantry, with the implied "calories vs. Calories". The article says, "enough to meet the 1-day dietary requirements of more than 60 people." Unless you think a person can get by on 2 Calories per day, in which case it doesn't even rise to pedantry.
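Working the numbers behind this exchange: the "60 people for a day" figure quoted from the article only makes sense if a body yields on the order of 125,000 Calories, not 125. A minimal sanity check, assuming a rough 2,000 kcal daily requirement per person (an illustrative figure, not from the article):

```python
# Sanity check on the "1-day dietary requirements of more than 60 people" quote.
# Assumption (illustrative): ~2,000 kcal daily requirement per person.
DAILY_KCAL_PER_PERSON = 2000
PEOPLE_FED_FOR_A_DAY = 60

# Minimum energy the carcass must supply to match the quote:
total_kcal = DAILY_KCAL_PER_PERSON * PEOPLE_FED_FOR_A_DAY
print(total_kcal)  # 120000

# A "125 Calorie" human, by contrast, would give 60 people about
# 2 Calories each -- exactly the absurdity the reply points out.
per_person_if_125 = 125 / PEOPLE_FED_FOR_A_DAY
print(round(per_person_if_125, 1))  # 2.1
```

So the plausible reading of the research is roughly 125,000 Calories per adult male, with the "125" in the earlier comment dropping three orders of magnitude.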

Re: ( Score: 2)

Unless you think a person can get by on 2 Calories per day, in which case it doesn't even rise to pedantry.

For up to two months, or maybe longer if you happen to be the size and shape of a walrus.

Re: ( Score: 2)

so, do we taste like gamy chicken?

No. Small animals taste like chicken. Humans taste similar to pork. My Appalachian in-laws call it "long pig". They say that once you get past the "yuck" factor, it isn't that bad.

Re: ( Score: 2)

It's saltier than pork. More like Dachhase (German slang: 'roof hare', i.e. cat).

But I'm wondering about the authors of the article if they jump from "not due to calories" to a conclusion that it must have been ritualistic. Why not taste?

Re: ( Score: 3)

But I'm wondering about the authors of the article if they jump from "not due to calories" to a conclusion that it must have been ritualistic. Why not taste?

The most plausible explanation is that they were fighting wars over territory with other tribes, and if people are killed in battle, then hey, there is no good reason to let a perfectly good corpse go to waste.

Re: ( Score: 2)

That was my uneducated guess too:)
But seriously, any meat provider would have been welcome. I agree that early humans wouldn't go specifically hunting other humans, but if they ended up clashing and killing each other, then all bets were off, consumption-wise.

Re: ( Score: 2)

Occam's razor says: "Because we're tasty!"

That's the main reason humans prefer any particular food group or (e.g.) why we eat cows instead of horses. Why would cannibals be any different?

Re: ( Score: 2)

Horses being taboo for eating in the US is in large part based on how the continent was settled. Horses were vital, to the point that horse theft also carried the death penalty.
Once cultural taboos are established, it's hard to get rid of them[*].
So it's just unthinkable for most Americans to eat horse meat, even though it's both good tasting and nourishing, and eaten elsewhere.

Black cured horse meat sausage is probably my absolute favorite sandwich meat.

[*]: Another taboo that makes little sense anymore

Re: ( Score: 2)

"The horse is a noble animal. You ever take a peek at a cow or a pig? We do them a favor eating them. Saves them from having to look at themselves in the water trough every day." -- Colonel Sherman T. Potter

Re: ( Score: 2)

Exactly. Humano pibil is actually pretty nice provided they weren't smokers and got at least a bit of exercise on occasion.

Nah, exercise makes for tough meat. Like with kobe beef, the best meat is from specimens that get plenty of beer and massages. Much more tender, and with nice marbling and a fat rind that preserves the taste.

Re: ( Score: 2)

Eating each other for dietary needs would seem self-limiting. Also any society that condones murder doesn't last long.

Cannibals don't eat "each other". They eat their enemies and outsiders. Few societies consider wartime killings to be "murder". Even the Bible (at least the Old Testament) condones killing outsiders.

Re: ( Score: 2)

The submitter should have linked to the original article in Nature that the Science article refers to, and actually has some meat (no pun intended) to it, but IMO refutes a lot of what this says about ritualism, and caloric reasons.

Wow, didn't know the homosapiens were scientist ( Score: 4, Informative)

Yeahhh. I'm sure the Homo sapiens compared the calories between species before hunting.

More seriously though, I find that this study goes wayyyy too far in its analysis of the situation. In my mind, for the decision of which prey to hunt, the quantity of food per individual falls pretty low in the decision process.

What's the season? How easy is it to hunt, and what are the odds of success? How much experience do we have hunting that prey? How far from the colony is the prey? How dangerous is it to hunt that prey? Is there additional benefit to hunting that prey (are they challenging our territory? Can I impress the village if I hunt this?)

Re: ( Score: 2)

Yeahhh. I'm sure the Homo sapiens compared the calories between species before hunting.

It needn't be a complex math problem, sometimes math is built in to evolution. That is, the specimens that didn't choose prey with the correct calorie balance didn't survive.

Re: ( Score: 2)

That's only partially true. There's a lot of wiggle room once you have enough to survive. There are lots of side effects that show up in evolution. If something tastes good then creatures will eat it whether or not there was any evolutionary advantage.

Tigers will try to eat humans, even if there are easier choices; these are attacks of opportunity. And tigers are not less evolved than humans.

Too often I see scientists trying to create an evolutionary explanation for everything whether an explanation is needed or

Re: ( Score: 2)

Tigers will try to eat humans, even if there are easier choices; these are attacks of opportunity. And tigers are not less evolved than humans.

Most tigers will only attack a human if they cannot physically satisfy their needs otherwise. Tigers are typically wary of humans and usually show no preference for human meat. Although humans are relatively easy prey, they are not a desired source of food.

Re: ( Score: 2)

Man-eaters have been a recurrent problem for India. There, some healthy tigers have been known to hunt humans.
During war, tigers may acquire a taste for human flesh from the consumption of corpses which have lain unburied, and go on to attack soldiers; this happened during the Vietnam and Second World Wars.

So some learn to find it tasty, and continue preying on humans by choice. As Homer put it: "Faster Son, He's Got a Taste For Meat Now!"

Re: ( Score: 3)

African lions have been known to prey on humans as a primary food source.

Game warden and former professional elephant hunter George Rushby killed off the pride in the late 1940s. His autobiography "No More The Tusker" details this, and there is a BBC documentary on the man-eating lions of Njombe.

I think for the lions, the explanation was kind of simple -- a pride of lions basically started preying on humans often and long enough that their offspring learned it to be an easy food source to the point that

Re: ( Score: 2)

Yes. And why do you think something happens to taste good to a certain species? Evolution shaped those senses to pick up beneficial foodstuffs preferentially.

I scream, you scream, we all scream for ice cream. Because it's good for us.

Re: ( Score: 2)

But it is good for you - a mix of fats, proteins, and sugars, plus high levels of calcium and magnesium, all frozen so your body knows it hasn't spoiled.

The problem isn't the ice cream. The problem is eating a day's worth of Calories in ice cream in one sitting.

Re: ( Score: 3)

But it is good for you - a mix of fats, proteins, and sugars, plus high levels of calcium and magnesium, all frozen so your body knows it hasn't spoiled.

The problem isn't the ice cream. The problem is eating a day's worth of Calories in ice cream in one sitting.

4 ounces of vanilla ice cream is 137 Calories. Assuming a normal intake of 2000 Calories/day, you'd have to eat almost 1/2 gallon of ice cream in one sitting to do that. Not sure a normal human being could accomplish that on a steady basis, although I've seen a few people at Walmart for whom that might be possible.
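The arithmetic in that comment checks out. A quick sketch using the serving size and calorie figures quoted above (137 Calories per 4 oz serving, 2,000 Calories/day, and 64 fluid ounces to a half US gallon):

```python
# Verify the "almost 1/2 gallon" claim from the figures quoted above.
KCAL_PER_SERVING = 137   # Calories in 4 oz of vanilla ice cream (as quoted)
OZ_PER_SERVING = 4
DAILY_KCAL = 2000        # assumed normal daily intake (as quoted)
HALF_GALLON_OZ = 64      # fluid ounces in half a US gallon

servings_for_a_day = DAILY_KCAL / KCAL_PER_SERVING     # ~14.6 servings
ounces_for_a_day = servings_for_a_day * OZ_PER_SERVING
print(round(ounces_for_a_day, 1))  # 58.4 -- indeed "almost 1/2 gallon"
```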

Re: ( Score: 2)

That, and you don't exactly need a lab to determine that there is more good meat on the bones of that mammoth or buffalo than on Bob over there.

Re: ( Score: 2)

Re: ( Score: 2)

It is a risk/cost/reward analysis, something most animals, including humans, can do quite effectively from instinct alone.
What you enumerate are mostly the risks and costs. However, the calorie content of a mammoth makes the reward really high, and thus high risk is appropriate.

Of course they didn't have a table with the calorie content of each species, but they would have had a good idea of how well each one could feed the tribe.
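The cost/benefit idea being discussed here -- and the one the paper's models formalize -- can be sketched as ranking prey by caloric payoff per unit of hunting effort. All the numbers below are invented for illustration; they are not from the paper:

```python
# A minimal sketch of ranking prey by calories gained per calorie
# spent on the hunt. Figures are made up for illustration only.
prey = {
    "mammoth": {"calories": 3_600_000, "hunt_cost": 120_000},
    "deer":    {"calories":   163_000, "hunt_cost":  15_000},
    "human":   {"calories":   144_000, "hunt_cost":   4_000},
}

def payoff_per_effort(name: str) -> float:
    """Calories gained per calorie spent catching this prey."""
    p = prey[name]
    return p["calories"] / p["hunt_cost"]

# Under these made-up numbers, humans rank first: not because they
# are calorie-rich, but because they are cheap to catch -- the
# "easier targets" point from the paper.
ranked = sorted(prey, key=payoff_per_effort, reverse=True)
print(ranked)
```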

Regular families ( Score: 2)

They were just trying to make ends into meat.

Re: ( Score: 2)

Did you put blood, sweat, and tears into that? Make no bones about it please.

NZ Musket Wars, 1830s, not Ancient ( Score: 2)

You attack your enemy, and if successful take their land. So what do you do with the people? Either make them slaves or eat them. Simple.

This is exactly what had happened in New Zealand for centuries. But then the great chief Hongi Hika realized the potential of muskets. He managed to travel all the way to England, purportedly to help missionaries with a Māori dictionary, but actually to get his hands on the "thousand thousand" muskets he had heard were stored in a place called the Tower of London. In that he

The eaten person no longer eats themselves ( Score: 2)

They need to take into account not just the calories contained in the person eaten, but also the calories that will no longer be consumed by the eaten, who having been themselves consumed no longer consume themselves, and those calories are thus available to the eater.

Re: ( Score: 2)

no longer eats themselves

Our forefathers were flexible fellows.

Funny alternative ( Score: 2)

I'm imagining an alternate history version of this story, in a world where cannibalism is common. The same researchers, studying the same history, trying to figure out how the same practice started long ago, but from a different perspective.

Result: "Ancient Cannibals Didn't Turn to Cannibalism Just To Consume the Spirits of the Vanquished".

Re: ( Score: 2)

The indications are that it was in European history that a powerful phobia of consuming human flesh grew. Taking into account human propaganda to make killing more acceptable, the likely target of that prohibition was the Neanderthals. The whole ice-giant exaggeration, and the legend's more northern European base, tend to indicate that the whole thing was about the purposeful genocidal eradication of Neanderthals and the stories invented to excuse it, plus the inevitable rape, abuse and slavery that likely accompanied it.

Science. ( Score: 2)

. . . replying to questions and comments that were never asked.

Seriously, how do you even get funding for this?

"I need money and time to research whether cannibals just eat people because coconuts are in short supply."

"Sure. I've definitely heard that claim being made. Think I read it in a textbook."

This is straw man research at its best: come up with an arbitrary claim and test it. It is crap published simply because you need to keep publishing to keep your position, and it tak

Close but not quite. ( Score: 4, Funny)

To say it was ritualistic is missing the mark. The truth is that they ate people with "poko" (exceptional traits). Tribes would disseminate lists of exceptional traits members needed to acquire to advance within the hierarchy of their tribes. The quest to become tribe leader has been succinctly described as: "Pokoman: gotta eat 'em all."

Re: ( Score: 1)

Re:Close but not quite. ( Score: 4, Interesting)

A friend of mine is a paleoanthropologist (I hope that's the English word too for "the guy who digs up human bones that have been in the ground a million years and tries to make sense of them"). According to him, "ritualistic" and "religious" have become a sort of in-joke for everything that makes no sense. If they find something and can't really see any purpose the ancient humans had for doing it, it's for "ritual" or "religious" reasons, because that doesn't have to make sense, and it's as good an excuse as any for finding bones and other stuff in odd places, odd settings, or arranged in some particular fashion for no apparent reason.

In other words, whenever you get to hear one of them talk about "ritual" reasons for something, it basically means "we really have no good idea why the heck they did that".

Re: ( Score: 2)

In other words, whenever you get to hear one of them talk about "ritual" reasons for something, it basically means "we really have no good idea why the heck they did that".

Considering the strange and elaborate rituals of current humans, it's a good deduction. I understand that science is all about continually seeking evidence and positing the best explanation when you find more.

Re:Close but not quite. ( Score: 4, Interesting)

True, but in the end it's basically them saying "We have no idea what that shit is about."

And, let's be honest here, that's what religions look like. Imagine there is no written word. We haven't invented writing yet. And you unearth the ruins of a Catholic church. What will you find? Well, if you do it in Europe, chances are good that you will notice that this building was taller than many of the buildings around it, giving you the idea that it was an important building. You will also probably find the altar and notice that this table played some central role in the building. Usually it's not big enough to serve as a table where everyone present could sit and eat, so it wasn't the dinner table for the congregation. You might find some of the wood used for the benches and notice that they were arranged so as to face that altar, and you would probably deduce that some sort of ritual or religious background is likely.

What else will you find? You will probably find the tabernacle (where the hosts are stored), and you might even find that the (usually) richly decorated bowl inside the (usually also quite lavishly decorated) tabernacle contained an edible substance. It is also usually offset to a side (unless you're dealing with Gothic cathedrals, where it can just as well be near or even on the altar), so you would probably deduce that food still played a key role in the rituals held there. Also, food was somehow sacred, because it was stored in such a lavishly decorated box, and it was obviously considered valuable because the box can usually be locked. Usually the ornaments also include angels that appear as guards for the contents, so you would probably conclude that this food was also supernaturally guarded against evil spirits, or that the congregation was supposed to fear the retaliation of supernatural forces should they somehow act "wrongly" towards food.

Your first conclusion would probably be that the cult celebrating there was either one celebrating food or a cult with sacrificing food as a central element. You will find that the food in that special place is of a single kind (usually host wafers), which suggests that the bread was distributed from there rather than everyone bringing something to the celebration and the food of the believers being stored there. So people congregated to eat together. Which will probably puzzle you because, as stated before, the table, the altar, is by no means big enough to allow everyone fitting into the church to sit around it and have a meal.

What else will you find? Well, invariably, you'll find a cross. Actually, you will usually find multiple ones. The cross, as a central element of the faith, will be emblazoned on pretty much every sacred item, sometimes multiple times, so you will easily identify it as the most important symbol of the religion. You will quickly also find out that this isn't just some pretty symbol but something a person gets nailed onto, and that this too is critical to the religion, i.e. that someone is tortured by being nailed to the cross. You will find paintings, both on canvas and on walls, and stained glass, that tell the story of someone being nailed to a cross. This is very obviously a central element of the faith, and you can somehow deduce that the person nailed to the cross is revered and in some way connected to the divine, that the god or gods these people believe in shine upon the crucified.

So you could deduce that people in this religion either wanted to be nailed to crosses to be considered divine, or that they did it to others in an attempt to "save" them for their religion. One could also ponder whether nailing someone to the cross is some kind of fertility rite (remember the food in the special box off to the side), and whether the people of this cult praise and deify the person sacrificing himself in such a manner.

And so on. You see the problem here: it is virtually impossible to accurately identify and follow the ideas of a religion just from the stuff you find in the ground. You can at best make some guesses, but as soon as metaphysical shit gets mixed in, you're usually completely off.