The Time Has Come: Real Meat, Manufactured in a Factory

Meat without animals. It’s not a new notion. In a 1932 essay predicting sundry future trends, Winston Churchill wrote, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”
The basic science to grow meat in a lab has existed for more than 20 years, but no one has come close to making cultured meat anywhere near as delicious or as affordable as the real thing. But sometime in the next few years, someone will succeed in doing just that, tapping into a global market that’s already worth trillions of dollars and expected to double in size in the next three decades. Despite a bevy of well-funded competitors, no one is better positioned than Memphis Meats to get there first.
Operating with a team of just 10 (though it’s expected to grow to 40 in a matter of months), the startup has already cultivated and harvested edible beef, chicken, and duck in its bioreactors, a feat no one else has achieved. Even allowing for the vagaries of regulation—it’s not clear which federal agency will oversee a foodstuff that’s real meat but not from animals—the company expects to have a product in stores by 2021.
“They’re the leader in clean meat. There’s no one else that far along,” says venture capitalist Steve Jurvetson, whose firm led Memphis Meats’ recent $17 million Series A.
Before he met Valeti in 2016, Jurvetson spent almost five years researching lab-grown meat and meat alternatives, believing the market was set to explode. “They’re the only one that convinced me they can get to a price point and a scale that would make a difference in the industry,” he says.
Uma Valeti remembers the first time he really thought about where meat comes from.
A cardiologist turned founder, Valeti grew up in Vijayawada, India, where his father was a veterinarian and his mother taught physics. When he was 12, he attended a neighbor’s birthday party.
In the front yard, people danced and feasted on chicken tandoori and curried goat. Valeti wandered around to the back of the house, where cooks were hard at work decapitating and gutting animal after animal to keep the loaded platters coming. “It was like, birthday, death day,” he says. “It didn’t make sense.”
Valeti remained a carnivore for more than a decade, until after he had moved to the U.S. for his medical residency. But in time, he found himself increasingly disturbed by food-borne illness. He was especially grossed out by the contamination that happens in slaughterhouses when animal feces get mixed in with meat. “I loved eating meat, but I didn’t like the way it was being produced,” he says. “I thought, there has to be a better way.”
In a tiny R&D suite in a nondescript office building in the unglamorous Silicon Valley exurb of San Leandro, a lanky, red-haired molecular biologist named Eric Schulze is fiddling with a microscope, and I’m about to get a look at that better way. Like the specimen he’ll show me, Schulze is something of a hybrid.
Formerly a Food and Drug Administration regulator, he’s now an educator, TV host, and senior scientist at Memphis Meats, the company that Valeti founded in 2016 and whose laboratory he is showing me. Lining one wall are a HEPA-filtered tissue cabinet, to which someone has affixed a “Chicken Crossing” sign, and a meat freezer labeled “Angus.” Along the opposite wall is an incubator dialed to 106 degrees Fahrenheit, the body temperature of Anas platyrhynchos domesticus—the domestic duck.
Schulze plucks a petri dish from the incubator, positions it under the microscope, and then invites me to look into the twin eyepieces. “Do you see those long, skinny things? Those are muscle-forming cells,” he says. “These are from a duck that’s off living its life somewhere.” The cells look like strands of translucent spaghetti, with bright dots—nuclei, Schulze says—sprinkled here and there.
He removes that petri dish and inserts another. In it, scattered among the spaghetti strands, are shorter, fatter tubes, like gummy worms. Those, he explains, are mature muscle cells. Over the next few days, they’ll join together in long chains, end to end, and become multicellular myotubes. These chains will form swirls and whorls until they look like the sky in Van Gogh’s Starry Night. Also, Schulze casually notes, “they’ll start spontaneously contracting.”
Wait. Contracting? As in … flexing?
“This is all living tissue. So, yes,” Schulze says.
The idea of a dish full of duck mince suddenly beginning to twitch and squirm makes me shake my head. What’s making duck bits move if not a brain and nerves? Schulze is used to this reaction. “For the past 12,000 years, we’ve assumed that when I say the word ‘meat,’ you think ‘animal,’ ” he says. “Those two ideas are concatenated. We’ve had to decouple them.”
Going in with Jurvetson was a lineup of household-name investors that includes Bill Gates, Richard Branson, and Jack Welch; their money will be used to build up Memphis Meats’ already formidable trove of intellectual property and to fine-tune the process of combining cells to produce the tastiest steaks and patties, and drive down the cost. The infusion of prestige also boosts competitors. Memphis Meats’ lineup of backers “is enormous, especially for a small company like mine,” says Mike Selden, CEO of lab-grown fish-filet startup Finless Foods. “When investors tell me, ‘Great idea, but we can’t really vet the technology,’ I can say, ‘Richard Branson and Bill Gates think it’s great.’ ”
The business case for clean meat, as the fledgling industry’s progenitors prefer to call it, could hardly be plainer. As emerging middle classes in places like China and India adopt Western-style diets, global consumption of animal protein skyrockets. (Memphis Meats is working on duck because it’s so popular in China, which consumes more of it than the rest of the world combined.) But the U.N.’s Food and Agriculture Organization estimates 90 percent of the world’s fish stocks are now fully exploited or dangerously overfished. More than 25 percent of Earth’s available landmass and fresh water is used for raising livestock. Only one of every 25 calories a cow ingests becomes edible beef. And meat processors often must pay disposal companies to haul away their inedible tonnage—hooves, beaks, fur, cartilage.
But it’s not just the financial opportunity that has the likes of Gates and Branson so excited: Meat is an ongoing environmental and public-health catastrophe. Livestock account for 14.5 percent of greenhouse gas production—more than all transportation combined. As meat demand soars, virgin rainforest gets razed to grow feed, and freshwater sources are diverted from drought-prone regions. Overcrowded pig and poultry farms are reservoirs for global pandemics; animals raised in them are pumped full of anti­biotics, spurring the rise of drug-resistant superbugs.
A subset of affluent consumers is willing to pay higher prices for free-range beef, cage-free eggs, and other animal products marketed as sustainably produced and cruelty-free, but that’s a tiny slice of the market. With the FAO expecting meat consumption to nearly double by 2050, only a radical break with the past will prevent doubling down on practices such as high-density feedlots and vertical chicken farms.
The idea of such a radical break attracted Branson, who stopped eating beef in 2014 out of concern over deforestation and slaughterhouse practices. “I believe that in 30 years or so,” he wrote in a blog post, “we will no longer need to kill any animals and that all meat will either be clean or plant-based.”
Big as it would be if Branson’s prediction comes true, those behind Memphis Meats believe they’re part of something even larger. Already, so-called cellular agriculture produces everything from leather and vaccines to perfume and building materials. Within a few years, proponents say, it could eliminate organ donation, oil drilling, and logging. The possibilities are as broad as life itself. “Human civilization was largely enabled by the domestication of livestock,” says Nicholas Genovese, Valeti’s co-founder. “If we can master producing meat without livestock, it’s really going to be the second domestication.”
Valeti’s meat-without-animals epiphany came soon after his cardiology fellowship at the Mayo Clinic in 2005. In a cutting-edge clinical trial, he used stem cells to repair the damage caused by heart attacks. Stem cells are undifferentiated cells that can become different types of tissue as they mature; injected into a heart that’s been ravaged by a coronary, they can form healthy new muscle to replace what has been lost. If stem cells could be cultivated into heart muscle, he thought, why couldn’t they be manipulated into making a drumstick or a porterhouse? Why not grow just the porterhouse and skip the rest of the cow? And while you’re at it, why not grow a steak with a healthier nutritional profile?
A bit of research showed Valeti that he was far from the first to have the idea—but also convinced him that what hadn’t been feasible was quickly becoming so. Rapid DNA sequencing was making it radically faster and cheaper to, say, program yeast cells to manufacture proteins. Advances in data science made it possible to tease out relationships in huge volumes of experimental data. Meanwhile, the growing high-end market for sustainable and humanely raised foods pointed to a path for a product that was bound to be expensive in its earliest incarnations.
“If I continued as a cardiologist, maybe I would save 2,000 or 3,000 lives over the next 30 years,” Valeti says. “But if I focus on this, I have the potential to save billions of human lives and trillions of animal lives.” His ambitions got a major boost in 2014, when a friend from New Harvest, a nonprofit institute that supports work in cellular agriculture, offered to introduce him to Genovese, a stem cell biologist. Like Valeti, Genovese had become vegetarian. As a high school student, Genovese was a member of his local 4-H Poultry Club, competing to raise the largest chickens. “Everyone would get their baby chicks on the same day. A few months later, there’s a weigh-in, and they give out trophies,” he recalls. “As a teenager, it’s very exciting.” It was also sobering. Those chickens, he says, “looked up to you for their feed, and looked up to you to protect them. You lock them up at night so the foxes don’t get them. But at the end, you send them to their demise.”
He earned degrees in cell biology and tissue engineering and eventually got a job in a lab run by Vladimir Mironov, who was investigating the use of bioprinting—3-D printing using living cells—to generate replacement organs. In 2010, Genovese accepted a three-year fellowship from People for the Ethical Treatment of Animals, the controversy-courting animal welfare nonprofit, to conduct research into cultured meat. The PETA connection also made him a target for protest from local hog farmers, who objected to his presence after he moved to the University of Missouri. After learning about Valeti’s work, Genovese quickly nabbed a position in his new lab at the University of Minnesota.
By 2015, with Genovese on board, Valeti realized it was time to ditch academia. Another New Harvest contact suggested he reach out to IndieBio, the life-sciences-oriented tech accelerator. He did, and within an hour he was on the phone with its director, Ryan Bethencourt.
Bethencourt, a vegan, was well versed in the challenges and promise of cultured meat. He had previously tried to persuade Mark Post, a Dutch researcher who’d produced the first full hamburger patty out of lab-grown beef, to bring his work to IndieBio. (Post demurred but subsequently launched MosaMeat, backed by Google co-founder Sergey Brin.) “I said to Uma, this is an opportunity to become a leader in this space and transform food as we know it,” Bethencourt recalls. IndieBio became the first outside investor in Valeti and Genovese’s startup, initially dubbed Crevi Foods, from the Latin crevi, “I grew.” (The founders quickly realized that it was a bit too clever. “Nobody understood it,” Valeti says.)
In September 2015, the two men moved to the Bay Area and started culturing cow muscle and connective tissue cells. (We think of meat as synonymous with muscle, but much of meat’s flavor and mouthfeel comes from the breakdown of collagen, a component of skin, ligaments, and fascia. It’s necessary to blend different types of cells to make lab meat that tastes like the real thing.)
By January, they had enough to make their first tiny meatball. “I’ll never forget when we first tasted what we had harvested,” says Valeti. “It just immediately brought back all the memories you get when you eat meat.” It had been 20 years since Valeti had eaten meat, but the taste nonetheless confirmed that, however far they still had to go, they’d produced, on the most fundamental level, meat.
The tasting helped validate the very idea of trying to grow meat, an idea that is open to challenge: all the aims of Memphis Meats and its ilk—making food healthier, more humane, and more ecofriendly—could arguably be better served by leading consumers to plant-based alternatives. Such options are getting more sophisticated: Another Silicon Valley startup, Impossible Foods, has raised almost $300 million for a veggie burger that browns like ground beef and even “bleeds” when served rare, thanks to the presence of heme, a component of the blood molecule hemoglobin that is also found in plants. The Impossible burger mimics the taste of a haute fast-food patty, though its consistency is not quite there—the outside caramelizes, but the interior is a tad puddingy. (Gates has put money into Impossible, as well as into its competitor, Beyond Meat.)
But the lab-grown-meat crowd believes plants will never be the whole answer. Meat is simply too complex and culturally ingrained. “Humans evolved over thousands of years eating meat,” says Valeti. A high-tech veggie burger might be able to replace ground chuck, but that’s one narrow application. Lab meat, he says, “because it’s meat, can be cooked any way meat is cooked. People can buy it off the shelf, take it home, and cook it in the ways they’ve known for centuries.”
Those arguments led Hampton Creek, one of the best-known and best-funded plant-based food startups, to expand into clean meat. For its first four years, Hampton Creek focused on using plant proteins to replace eggs in products like mayonnaise and cookie dough. But CEO Josh Tetrick came to appreciate consumers’ attachment to what they know. “A big limiting step to plant-based meat is culture. My family wouldn’t go to Walmart and buy something that says ‘plant-based hamburger,’ ” says Tetrick, who grew up in Alabama.
Tetrick’s pivot toward clean meat happened amid a conflict with the company’s board of directors, which led to all five outside directors resigning. That followed a long series of company stumbles, including an attempted coup by top executives who tried to go behind Tetrick’s back to the board and were promptly shown the door; accusations of a large-scale buyback program to boost sales, which drew scrutiny from the Justice Department; and the loss of one of its biggest distributors—Target.
Skeptics wonder if the company’s surprise June announcement that it will have one or more cultivated-poultry products in stores by the end of 2018 was a diversionary tactic. The timeline seems optimistic. Even if the kinks can be worked out that quickly, there’s no guarantee regulators will sign off in time. Still, Hampton Creek has raised more than $200 million in venture capital and has a team of 60 working on R&D, including top cell biologists from academia and industry.
In September, to punctuate an announcement that it had secured patents around its clean-meat processes, Tetrick tweeted a video of what looks like a burger sizzling in a skillet; a spokesman declined to say whether the video shows the company’s first clean beef. A knowledgeable industry insider says Hampton Creek’s progress and dysfunctions are real. “I think the only thing that will prevent Hampton Creek from being first to market with this is the company exploding,” says the source. (Asked for a response to this statement, Hampton Creek declined to comment.)
For Memphis Meats, with its significant head start and singular focus, the path to success is straightforward. It needs to make its meats more appetizing and much cheaper. One morning this summer, Valeti assembled his full team to talk about how far they had come and how far they still had to go. A few weeks earlier, Memphis Meats had held its first-ever tasting for outsiders, inviting more than 25 people to sample fried chicken and duck à l’orange.
The event was deemed a success.
“They really nailed the texture and mouthfeel,” one guest, sustainable food advocate Emily Byrd, said. But it was expensive. Growing that “poultry” cost about $9,000 per pound. At his company meeting, Valeti revealed that the most recent harvest, in May, had been considerably cheaper, with the meat costing $3,800 per pound. “I want it to keep going down by a thousand dollars a month,” said Valeti. “Our goal is to get to cost parity, and then beat commercial meat.”
That remains a distant goal. But theoretically, cultivating meat should have high startup costs but low operational costs: Given the right conditions, living cells divide on their own. The major factor governing costs is the nutrient-rich medium in which those cells grow. All the companies that have successfully grown meat have relied on fetal bovine serum, which is extracted from cow fetuses, as a key medium component.
But FBS is expensive, and it significantly weakens the claims cultivated-meat companies can make about producing vegan or cruelty-free food. Hampton Creek says it has grown and harvested chicken without FBS, although it has been tight-lipped about its methods. Memphis Meats acknowledges it used FBS to start its cell lines but says, “We have validated a production method that does not require the use of any serum, and we are developing additional methods as we speak.”
Tetrick likens the expense of medium—it’s called “feed” at Memphis Meats—to the need electric-car makers have to develop better batteries. “If we figure out how to surmount that limiting step,” he says, “suddenly all the economics start looking better.”
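To see why the medium dominates the economics, a back-of-the-envelope sketch helps. Every figure below is hypothetical rather than anything disclosed by Memphis Meats or Hampton Creek; the point is only the shape of the calculation.

```python
# Hypothetical cost model for cultured meat. All numbers are illustrative
# assumptions, not figures from Memphis Meats or Hampton Creek.

def cost_per_pound(medium_cost_per_liter, liters_per_pound, other_costs_per_pound):
    """Rough production cost (USD) for one pound of cultured meat."""
    return medium_cost_per_liter * liters_per_pound + other_costs_per_pound

# Assume 25 liters of medium consumed per pound of meat, plus $20/lb of labor,
# energy, and bioreactor overhead, then compare two medium prices.
for label, price_per_liter in [("serum-based medium ($50/L)", 50.0),
                               ("serum-free medium ($2/L)", 2.0)]:
    total = cost_per_pound(price_per_liter, liters_per_pound=25, other_costs_per_pound=20.0)
    print(f"{label}: about ${total:,.0f} per pound")
```

Under these made-up numbers the medium accounts for the overwhelming share of the cost in the serum case, which is the arithmetic behind Tetrick’s battery analogy: cut the cost of the limiting input and the rest of the economics follows.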
Electric cars are an apt metaphor, because whenever clean meat does hit supermarkets, it will almost certainly be pricier than conventional meat. Memphis Meats and its competitors will likely spend a few years courting consumers who buy wild-caught Atlantic salmon and grass-fed sirloin at Whole Foods. “They’re going to have to somehow position it as something worth paying more for,” says Patty Johnson, an analyst who covers the meat industry for Mintel Group.
One possibility, she says: Like Impossible Foods, Memphis Meats could persuade influential chefs to feature its wares on their menus. Another would be genetically engineering nutritional profiles so the company could tout increased health benefits—adding, say, omega-3 fatty acids to beef to make it as healthy as salmon.
Valeti is careful to avoid sounding as if he wants to put Big Meat out of business. He argues that the big meat processors will be keen on clean technology, whether as licensees, customers, investors, or acquirers. (Agribusiness giant Cargill joined Gates and Branson in Memphis Meats’ Series A; Tyson Foods has a venture fund that invests in similar technologies.) Cows and pigs aren’t getting any cheaper to raise or slaughter, but if lab meat follows the course of other early-stage technologies, it should keep getting cheaper for years to come. “It’s not crazy to think you might one day be able to brew meat at $2 per pound, $1 per pound,” says Bethencourt. “At that point, we can replace pretty much all industrial meat. In 20 years, I think people will look at growing and killing an animal as bizarre.”
And while Missouri’s pig farmers may see their doom in a world of meat without animals, companies that buy meat from farmers view it very differently, explains Jurvetson. When an outbreak of avian flu or mad cow strikes, “if you’re in their industry, it’s a very scary world,” he says.
Valeti won’t mince words, either. “The status quo in animal agriculture is not OK. That status quo is going to kill a lot of people.” All the more reason to bring on the second domestication.

Sex and the seedy: Masculine Concupiscence Meeting Its Comeuppance

American media is overflowing with headlines about predatory sexual behavior from the country’s male celebrities and power brokers, broadly justifying the diva Mae West’s quip that if you “give a man a free hand, he’ll try to put it all over you.” Going by the rash of recollections, disclosures and mea culpas, aggressive sexual behaviour from the male of the species has been happening for a long time, and the female of the species, said to be deadlier than the male only in certain contexts and categories, has been passive about it for equally long.
Those days may now be mercifully drawing to a close. Senators and Congressmen, entertainers and sportsmen, media anchors and market mavens, are all being brought to their knees by long-suffering women, many of whom have now found their voice, their courage, and the forums to express their anger. There isn’t a day or a place in this world where patronising patrimony isn’t on display, exemplified recently by the alarm among male politicians in Japan when a female colleague brought her new-born son to work. The dice are loaded against women.
While a few women made light of male advances – “If you wear a short enough skirt, the party will come to you,” said Dorothy Parker, after bemoaning that “men seldom make passes at girls who wear glasses” – the majority were subjected to the unwanted and loathsome burden of masculine concupiscence, not male sense. On their part, many men comfort themselves that they are good for war, not love, an emotion that requires both sense and sensibilities. “God gave men both a penis and a brain, but unfortunately not enough blood supply to run both at the same time,” explained Robin Williams.
Love and sex ought to be a two-way treat. Men tend to read too much into a woman’s language – body language, bawdy language, and indeed any language – including when women are tongue-tied, which they often are when faced with gross male behaviour. Most notably this includes the self-deceptive assumption that no means yes, a trope that has been milked to death in Bollywood movies. No means no. As the comedian Chris Rock once said, “There are only three things women need in life: food, water and compliments.” If they want anything else, they’ll let the men know.
Does ‘character matter’ anymore?
Twenty years ago, America was embroiled in a major political sex scandal. It revolved around a powerful man using his office to prey on at least one young female co-worker and to carry on an adulterous and immoral sexual relationship with her in the workplace and elsewhere.
Twenty years later, we are again embroiled in a major sex scandal. This one is not just political, but also corporate and journalistic. It’s spilling the “open secrets” of Hollywood; New York; Washington, D.C. and beyond out into the open: pay-to-play fornication and adultery, sexual assault, rape and cover-ups, including but not limited to nondisclosure agreements and payments in the millions of dollars.
How did we get here in 2017? It has to do with what we did in 1998. We got here because of what we did then.
In 1998, the sexual culprit wasn’t Harvey Weinstein, Mark Halperin, Al Franken, Kevin Spacey, Charlie Rose or dozens of other men whose actions are currently in the spotlight. It was United States President Bill Clinton.
Clinton indulged in a sexual relationship with one of his interns in 1995-1996, while holding the most powerful office on Earth. Other sexually related accusations abound, and I do not know which accusations, if any, are true. Except for one. One that was so shocking, so shameful and so openly investigated that the whole world found out about it—the affair with the intern.
But how did Americans react in the late 1990s when the president of the United States of America committed the same acts that we seem to be so righteously indignant about today?
“None of our business.”
Politicians, operatives and journalists openly said, “Character doesn’t matter.” The obviously corrupted personal character of the most important leader in the free world didn’t matter, they told us. All that mattered were his policies and “the economy, stupid.”
The opposing political party used the scandal to impeach the president, not for his sexual trysts but for perjury and obstruction of justice. Not a single member of Clinton’s party voted to convict him, and he was acquitted on all charges. America collectively shrugged its shoulders and officially said, A president’s personal character just doesn’t matter.
And here we are.
A producer’s personal character doesn’t matter. An actor’s personal character doesn’t matter. A journalist’s personal character doesn’t matter. A senator’s personal character doesn’t matter. A CEO’s personal character doesn’t matter. And they acted accordingly.
The New York Times, stalwart defender of the Clintons for so long, published an interesting column recently. Michelle Goldberg stated that she now sides with some of President Clinton’s alleged victims in an article titled “I Believe Juanita”—a reference to the woman who accused the former president of rape.
“[A] New York Times columnist declaring in print that she believes the 42nd president of the United States is a rapist ought to make people stop and think,” wrote Jim Geraghty (emphasis added throughout). Geraghty also wondered if the Clinton scandal, and the relaxed reaction of many Americans, might have actually emboldened sexual predators like Harvey Weinstein. “Since we’re seeing a tide of slime from predatory men gradually oozing out of Hollywood studios, television networks and state capitals, it seems fair to ask whether Clinton’s experience left many powerful and abusive men convinced that they could escape serious consequence.”
Remarkably, President Clinton’s popularity actually increased during the sex scandal. One 1998 CNN/USA Today poll revealed he was the most admired man in the country!
A few weeks after President Clinton’s impeachment trial, Paul Weyrich, head of the conservative Free Congress Foundation in Washington, made a startling suggestion. He wrote, “I no longer believe that there is a moral majority. I do not believe that a majority of Americans actually share our values.” Otherwise, he said, President Clinton would have been driven out of office.
This was America in the 1990s. Even after learning all the gross details about its president and his behavior (in the White House, no less), the nation reportedly still admired him more than any other man. Do you know who probably admired President Clinton the most? Men like Harvey Weinstein, Bill Cosby and Kevin Spacey.
“The culture we are living in becomes an ever wider sewer,” wrote Weyrich in 1999. “In truth, I think we are caught up in a cultural collapse of historic proportions, a collapse so great that it simply overwhelms politics. … I don’t have all the answers or even all the questions. But I know that what we have been doing for 30 years hasn’t worked, that while we have been fighting and winning in politics, our culture has decayed into something approaching barbarism.”
Nearly 20 years have passed since the Clinton presidency and America’s collective decision that character doesn’t matter. This is what that looks like.

LEDs: A Source of Light Pollution and a Possible Cancer Risk

People are losing their view of the night sky all across the globe. A sweeping new study analyzing five years of satellite data reveals that nights are growing brighter around the world as cities switch to more energy-efficient LEDs.

The new study shows that global light pollution has increased by about 2 percent per year from 2012 to 2016. This increase is important to document, since a range of health-related issues for wildlife and even humans have been traced to light pollution. “While we know that LEDs save energy in specific projects — for example when a city transitions all of its street lighting from sodium lamps to LED — when we look at our data and we look at the national and the global level, it indicates that these savings are being offset by either new or brighter lights in other places,” study author Christopher Kyba said during a news conference.

The study compares the increase in light pollution to global gross domestic product, showing that, while LEDs were promised as a way to reduce energy consumption around the world, they may not have fulfilled that promise.

Instead, cities are simply installing more LEDs, canceling out any savings they may have had from switching to the new lights in the first place, the authors suggest.

Kyba and his colleagues used data collected by the VIIRS instrument on the Suomi NPP satellite to monitor changes in artificial light over the years.

The scientists analyzed the data to see which parts of the world got darker, brighter, or didn’t change at all.
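The comparison itself is conceptually simple: line up radiance grids from different years and flag which pixels got brighter or darker. The sketch below uses made-up numbers rather than real VIIRS composites, purely to illustrate the kind of per-pixel bookkeeping involved.

```python
import numpy as np

# Toy stand-ins for nighttime radiance grids from two years (arbitrary units).
# Real analyses use calibrated VIIRS monthly composites; these are illustrative only.
radiance_2012 = np.array([[0.5, 1.2, 3.0],
                          [0.0, 2.5, 4.1],
                          [0.3, 0.9, 5.0]])
radiance_2016 = np.array([[0.6, 1.1, 3.4],
                          [0.2, 2.9, 3.8],
                          [0.3, 1.2, 5.9]])

# Classify each pixel as brighter, darker, or unchanged within a small tolerance.
change = radiance_2016 - radiance_2012
tolerance = 0.1
brighter = change > tolerance
darker = change < -tolerance
unchanged = ~(brighter | darker)

print("brighter pixels: ", int(brighter.sum()))
print("darker pixels:   ", int(darker.sum()))
print("unchanged pixels:", int(unchanged.sum()))
```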

Larger city centers actually appeared to decrease in brightness as older lights were replaced with white LEDs, according to Kyba, but brightness increased in the areas surrounding those cities.

“There is a potential for the solid-state lighting revolution to save energy and reduce light pollution, but only if we don’t spend the savings on new light,” Kyba said in a statement.

Artificially bright night skies can create a whole host of health concerns for wildlife and humans.

“Recent findings on ecological light pollution include the disruption of critical ecosystem functions as well. There are several examples. For instance, a Nature study just reported an impact on pollination by nocturnal insects. So many insects have also functions as nocturnal pollinators, and not only that: We have to take care of diurnal pollinators,” Franz Hölker, another author of the study, said.

“In addition, it [light pollution] can impact the seed dispersal by bats, for instance, carbon mineralization by microorganisms.”

A 2009 study showed that some trees have trouble adapting to artificial light as the seasons change, and newborn sea turtles also mistake streetlights for the light of the moon, which they rely on to guide them to the ocean. When these turtles follow streetlights, it leads them away from the sea and to possible death.

The VIIRS data used for the new study has its own blind spots, however.

The satellite instrument can’t see blue light, which is considered one of the worst forms of light pollution. Blue light scatters more in the atmosphere than longer-wavelength light, meaning it spreads farther and wider than colors such as red.
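The size of that effect follows from the usual wavelength dependence of scattering by air molecules, which the study summary does not spell out: assuming ordinary Rayleigh scattering (intensity roughly proportional to 1/wavelength^4), a quick calculation compares representative blue and red wavelengths.

```python
# Why blue scatters more than red: under ordinary Rayleigh scattering by air
# molecules, scattered intensity goes roughly as 1 / wavelength**4.
blue_nm = 450  # representative blue wavelength
red_nm = 650   # representative red wavelength

ratio = (red_nm / blue_nm) ** 4
print(f"Blue (~{blue_nm} nm) scatters about {ratio:.1f}x more than red (~{red_nm} nm).")
```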

Because VIIRS is unable to see that light, the authors of the study suggest that people on the ground actually experience more light pollution than what the satellite can monitor.

Blue light is also linked with serious health concerns.

“Blue light interferes with melatonin production and circadian rhythms,” Fabio Falchi, a light pollution researcher who is not affiliated with the new study, said in an email. “Our dark adapted eye is more sensitive to blue, so we’ll perceive the sky brighter, so more light polluted.”

The new study is the most recent in a wave of research digging into how the night sky is becoming less and less visible to more people on Earth.

An atlas released in 2016 showed that one-third of people on Earth aren’t able to see the clouds of the Milky Way at night.

That study found that Singapore is so bright that human eyes can’t actually adapt for night vision due to the light pollution.

While researchers agree that energy-efficient LEDs are the way to go on the whole, the study’s authors contend that orange or red-tinted LEDs would be far better for mitigating light pollution than white LEDs, which pollute cities with blue light.

“So the real dream is that we have great vision on the streets, everywhere you go you would never really experience inside of a city an uncomfortably dark place, but because the light is used so much more efficiently, you would have very many more stars to see over top of the sky,” Kyba said.

You may be saving money, but you’re likely using more lights.

That is due to the so-called rebound effect: lighting has become cheaper and more energy efficient, so people are using more lights more often.

While other studies have tackled the issue of light pollution, lead researcher Christopher Kyba, who is originally from Alberta and is now a physicist at the GFZ German Research Centre for Geosciences, said there is one key difference in this one: “We’re looking down versus up.”

The researchers in the study, which was published Tuesday in the journal Science Advances, used data from the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi NPP satellite from 2012 to 2016 to study surface light pollution. The radiometer isn’t sensitive to blue light — the colour put out by many LED lights — so Kyba expected to find a decrease in light detected.

“It’s not just a problem of the West … we’re really seeing it all over the world,” says John Barentine of the International Dark-Sky Association.

“What I had expected to see was that in wealthy countries — the United States, countries with a lot of lighting, Italy — cities were going to be changing to LED and that would make them look darker,” Kyba said. “But it turns out the United States stayed flat, and other countries like Germany became brighter. That means that the increase is actually even larger than what we report here.

“Somewhere else, there’s new light.” That new light likely comes from increased use, since it’s cheaper. Over the past four years, the artificially lit surface of Earth at night increased by about two per cent annually, or 9.1 per cent in total.
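Those two figures are linked by simple compounding; a quick check, using only the numbers reported above, shows how the annual rate and the four-year total fit together.

```python
# Compound growth over the 2012-2016 study period, using the figures quoted above.
years = 4
total_growth = 0.091  # 9.1 per cent over four years

# Annual rate implied by the four-year total: roughly 2.2 per cent.
annual_rate = (1 + total_growth) ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate * 100:.1f}% per year")

# Reverse check: compounding ~2.2 per cent per year for four years gives ~9.1 per cent.
print(f"Four-year total at 2.2%/yr: {((1.022) ** years - 1) * 100:.1f}%")
```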

“When something becomes less expensive to produce, we tend to use more of it than less,” Barentine said. “Some of us had this suspicion that maybe all these cost savings of the better energy efficiency of lighting may just be ploughed back into buying more lighting…and that seems to be what’s happening.”

The study also found that the increase in light pollution corresponds to increasing gross domestic product (GDP): developing countries show the fastest growth.

Far-reaching effects

Light pollution has been cited by several studies as a potential risk factor for various health issues, including cancer.

“I am concerned about the fact that we now have scientific evidence that the light at night and especially the white light [that contains blue by definition], is linked with many health issues like breast and prostate cancers and sleep disorders,” said Martin Aubé, who studies light pollution at Quebec’s University of Sherbrooke and was not involved in the study. “Even if we know this, we still increase the amount of light at night.”

Impact of light pollution: Franz Hölker, an ecologist at the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, says “artificial light is an environmental pollutant with ecological and evolutionary implications for many organisms from bacteria to mammals, including us humans.” Light pollution has been shown to threaten 30% of nocturnal vertebrates and 60% of nocturnal invertebrates, and it affects plants and microorganisms as well. It can also disrupt critical ecosystem functions.

The American Medical Association last year said that white LED street lighting is increasingly suspected of affecting human health, estimating it has “five times greater impact on circadian sleep rhythms than conventional street lamps.”

Exposure to the light of white LED bulbs, it turns out, suppresses melatonin 5 times more than exposure to the light of high-pressure sodium bulbs, which give off an orange-yellow light.

“White” light bulbs that emit light at shorter wavelengths are greater suppressors of the body’s production of melatonin than bulbs emitting orange-yellow light, a new international study has revealed.

Melatonin is a compound that adjusts our biological clock and is known for its antioxidant and anti-cancer properties.

The study investigated the influence of different types of bulbs on “light pollution” and the suppression of melatonin, with the researchers recommending several steps to balance the need to save energy against the need to protect public health.

“Just as there are regulations and standards for ‘classic’ pollutants, there should also be regulations and rules for pollution stemming from artificial light at night,” says Prof. Abraham Haim, head of the Center for Interdisciplinary Chronobiological Research at the University of Haifa and the Israeli partner in the research.

The study, by Fabio Falchi, Pierantonio Cinzano, Christopher D. Elvidge, David M. Keith and Abraham Haim, was recently published in the Journal of Environmental Management.

The fact that “white” artificial light (which is actually rich in blue light, emitted at wavelengths between 440 and 500 nanometers) suppresses the production of melatonin in the brain’s pineal gland is already known. Also known is the fact that suppressing the production of melatonin, which is responsible, among other things, for the regulation of our biological clock, causes behavior disruptions and health problems.

In this study, conducted by astronomers, physicists and biologists from ISTIL, the Light Pollution Science and Technology Institute in Italy, the National Geophysical Data Center in Boulder, Colorado, and the University of Haifa, researchers for the first time examined the differences in melatonin suppression among various types of light bulbs, primarily those used for outdoor illumination, such as streetlights, road lighting, mall lighting and the like.

In the first, analytical part of the study, the researchers, relying on various data, calculated the wavelength and energy output of bulbs that are generally used for outdoor lighting. Next, they compared that information with existing research regarding melatonin suppression to determine the melatonin suppression level of each bulb type.

Taking into account the necessity for artificial lighting in cities, as well as the importance of energy-saving bulbs, the research team took as its reference point the level of melatonin suppression caused by a high-pressure sodium (HPS) bulb, which gives off orange-yellow light and is often used for street and road lighting, and compared the data from the other bulbs to that baseline.
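In code, that kind of normalisation amounts to weighting each lamp’s spectrum by a melatonin-response curve and dividing by the HPS result. The sketch below invents its spectra and response curve out of thin air; it mirrors the shape of the comparison described here, not the study’s actual data or method.

```python
import numpy as np

# Illustrative melatonin-suppression comparison, normalised to an HPS lamp.
# The spectra and the response curve are invented placeholders, not the study's data.
wavelengths = np.arange(400, 701, 10)  # nm

def peak(center, width):
    """A simple Gaussian bump used to fake a spectral curve."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

spd_hps = peak(590, 30)                          # orange-yellow high-pressure sodium
spd_led = 0.8 * peak(450, 15) + peak(560, 50)    # blue-pumped white LED
melatonin_response = peak(470, 30)               # response assumed to peak in the blue

def suppression_index(spd):
    # Weight the lamp spectrum by the response curve, normalising by the lamp's
    # total output so the two lamps are compared at equal emission (a simplification).
    return np.sum(spd * melatonin_response) / np.sum(spd)

# With these invented curves the ratio comes out far higher than the study's
# roughly fivefold figure; only the mechanics of the comparison are illustrated.
relative = suppression_index(spd_led) / suppression_index(spd_hps)
print(f"Relative melatonin suppression, white LED vs HPS: {relative:.1f}x")
```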

From this comparison it emerged that the metal halide bulb, which gives off a white light and is used for stadium lighting, among other uses, suppresses melatonin at a rate more than 3 times greater than the HPS bulb, while the light-emitting diode (LED) bulb, which also gives off a white light, suppresses melatonin at a rate more than 5 times higher than the HPS bulb.

“The current migration from the now widely used sodium lamps to white lamps will increase melatonin suppression in humans and animals,” the researchers say.

The researchers make some concrete suggestions that could alter the situation without throwing our world into total darkness, but first and foremost, they assert that it is necessary to understand that artificial light creates “light pollution” that ought to be addressed in the realms of regulation and legislation.

Their first suggestion, of course, is to limit the use of “white” light to those instances where it is absolutely necessary. Another suggestion is to adjust lampposts so that their light is not directed beyond the horizon, which would significantly reduce light pollution. They also advise against “over-lighting,” using only the amount of light needed for a task, and, finally, simply turning off lighting when not in use — “Just like we all turn off the light when we leave the room. This is the first and primary way to save energy,” the researchers say.

“Most Italian regions have legislation to lower the impact of light pollution, but they still lack a regulation on the spectrum emitted by lamps. Unless legislation is updated soon, with the current trend toward sources such as white LEDs, which emit a huge amount of blue light, we will enter a period of elevated negative effects of light at night on human health and environment. Lamp manufacturers cannot claim that they don’t know about the consequences of artificial light at night,” says Dr. Fabio Falchi of ISTIL.

“As a first step in Israel, for example, the Standards Institution of Israel should obligate bulb importers to state clearly on their packaging what wavelengths are produced by each bulb. If wavelength indeed influences melatonin production, this is information that needs to be brought to the public’s attention, so consumers can decide whether to buy this lighting or not,” Prof. Haim says.

What can be done now:

The first cost-effective LEDs for outdoor lighting were blue LEDs coated with phosphor, but these can be replaced with newer versions that have less blue and more red or green, like the PC Amber LED lights or filtered LED lights called FLED.

People can control both the direction and the intensity of current LED lights. Control systems can be added to dim lights by time or by motion rather than keeping lights on all of the time.
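As a concrete, and entirely hypothetical, illustration of such a control rule, a street light could run dimmed through the small hours and come up only when motion is detected:

```python
# Hypothetical adaptive-dimming rule for an LED street light: normal output in the
# evening, deeply dimmed late at night, full brightness briefly when motion is seen.

def brightness(hour, motion_detected):
    """Return the target output as a fraction of full brightness."""
    if motion_detected:
        return 1.0   # bring the light up when someone is nearby
    if hour >= 23 or hour < 5:
        return 0.3   # deep dim during low-traffic hours
    if 5 <= hour < 7 or 18 <= hour < 23:
        return 0.8   # normal evening and early-morning level
    return 0.0       # daytime: off

for hour, motion in [(19, False), (2, False), (2, True), (12, False)]:
    print(f"{hour:02d}:00, motion={motion} -> {brightness(hour, motion):.0%}")
```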

Travis Longcore, an ecologist who studies the effects of artificial light on wildlife, said in the case of the U.S. it was “where the money got ahead of the technology.” He said a lot of municipal street lights were replaced due to the Recovery Act, which gave a substantial amount of money to subsidize the transition to blue LEDs. “The DOE just didn’t listen to scientists who wanted a different standard,” he said. “DOE was really only interested in energy savings.”

Sufi Muslims: Objects of Hate for Extremists

Sufism is a mystical form of Islam, a school of practice that emphasises the inward search for God and shuns materialism. It has produced some of the world’s most beloved literature, like the love poems of the 13th-century Persian poet and mystic Rumi. Its modern-day adherents cherish tolerance and pluralism, qualities that in many religions unsettle extremists.
But Sufism, often known as Islamic mysticism, has come under violent attack in recent years. On Friday, militants stormed a Sufi mosque on the Sinai Peninsula, killing at least 235 people in what officials are calling the worst terrorist attack in Egypt’s modern history. The attack followed several assaults on Sufi shrines in Pakistan over the past year carried out by Sunni extremists. (The vast majority of Sufis are Sunni, though some are Shia.)
What is this form of Islamic belief, and why has it come under assault?
Roots and practices of Sufism
Sufism, known as tasawwuf in the Arabic-speaking world, is a form of Islamic mysticism that emphasises introspection and spiritual closeness with God.
While it is sometimes misunderstood as a sect of Islam, it is actually a broader style of worship that transcends sects, directing followers’ attention inward. Sufi practice focuses on the renunciation of worldly things, purification of the soul and the mystical contemplation of God’s nature. Followers try to get closer to God by following a spiritual path known as a tariqa.
Confusion about Sufism is common, even among Muslims, according to Imam Feisal Abdul Rauf, a Kuwaiti-American Sufi cleric who preached in New York City for many years and founded the Cordoba Initiative, which promotes a moderate image of Islam in the West.
“It is nothing more than the spiritual dimension” of Islam, the cleric, who goes by Imam Feisal, said in a phone interview. “It is Islam, but we focus on meditation, on chanting sessions, which enable the Muslim to have his or her heart open. The myths people have about Sufis are analogous to the myths people have about Muslims.”
For a time, beginning in the 12th century, Sufism was a mainstay of the social order for Islamic civilisation, and since that time it has spread throughout the Muslim world, and to China, West Africa and the United States. As Sufism spread, it adapted elements of local culture and belief, making it a popular practice.
Alexander D Knysh, a professor of Islamic studies at the University of Michigan and expert in modern Sufism, describes it as a “very wide, amorphous movement” practiced within both the Sunni and Shia traditions.
Sufism has shaped literature and art for centuries, and is associated with many of the most resonant pieces of Islam’s “golden age,” lasting from roughly the eighth through 13th centuries, including the poetry of Rumi.
In modern times, the predominant view of Sufi Islam is one of “love, peace, tolerance,” Knysh explained, leading to this style of worship becoming synonymous with peace-loving Islam.
Why extremists have targeted Sufis
While some mainstream Muslims view Sufis as quirky, even eccentric, some fundamentalists and extremists see Sufism as a threat, and its adherents as heretics or apostates.
In February, militants aligned with the Islamic State attacked worshippers at the tomb of a Sufi philosopher in a remote part of southern Pakistan, killing more than 80 people, whom the militants described as polytheists. Sufis praying at the tombs of saints — a practice central to Sufi devotion — have also been attacked in India and the Middle East.
The Islamic State targets Sufis because it believes that only a fundamentalist form of Sunni Islam is valid.
Some fundamentalists see the reverence for saints, which is common in Shia Islam, as a form of idolatry, because in their view it shows devotion to something other than the worship of a singular God. Some consider Sufis to be apostate, because saints were not part of the original practice of Islam at the time of the Prophet Muhammad, who died in 632.
“The opponents of Sufism see the shrines and these living saints as idols,” Knysh explained. “Their existence and their worship violates the main principle of Islam, which is the uniqueness of God and the uniqueness of the object of worship.”
Even though Sunni hard-liners have long viewed Sufis as well as Shias as heretical, terrorist networks like al-Qaida and the Islamic State have debated whether killing them is justified.
The two terrorist groups have clashed over whether to focus on the “far enemy”, powerful Western countries like the United States, or the “near enemy,” repressive governments in the Muslim world. Early in the Iraq War, when the Islamic State’s predecessor organisation targeted Iraq’s Shia majority, in the hopes of promoting sectarian conflict, al-Qaida criticized the Iraqi group’s leader at the time, Abu Musab al-Zarqawi, for doing so.
When a branch of al-Qaida captured northern Mali in 2012, militants used pickaxes and bulldozers to destroy the ancient mausoleums of Sufi saints in Timbuktu. But documents recovered in northern Mali revealed that the militants in Mali had acted without the permission of their leaders, who wrote to express their dismay, arguing that the destruction — while theologically justified — was unwise because it caused the population to turn against them.
Though al-Qaida has also targeted Sufi sites, the Islamic State has set itself apart by calling for brutal attacks against Sufis.
The status of Sufis in Egypt
While no group has yet claimed responsibility for Friday’s attack, it bore some of the hallmarks of previous assaults on Coptic Christians in Egypt. In the fall of 2016, the Islamic State’s local affiliate claimed to have executed a Sufi cleric who was about 100 years old.
The religious objections of fundamentalists to the Sufi style of worship may not be the only factor behind the attacks on Sufis.
Experts say the amicable ties between Sufis and the Egyptian government may also be a factor, giving the attack a political dimension. Egypt’s president, Abdel-Fattah el-Sissi, who took power after the military overthrew a democratically elected Islamist president, Mohammed Morsi, has vowed to do a better job at protecting religious minorities, who were shunned when Morsi’s party, the Muslim Brotherhood, was in power. By killing Sufis, the militants may be trying to undermine el-Sissi’s authority.
Like its counterparts in several other Muslim-majority countries, Egypt’s government supports the Sufis because it sees them as members of a moderate, manageable faction who are unlikely to engage in political activity, because their priorities are oriented inwardly.
Sufi sheikhs generally accept the legitimacy of the state, leading to tensions with Muslims who oppose their governments and are willing to act on their dissatisfaction — with violence if necessary.
“They think the society is moving in the wrong direction and Sufis are aiding and abetting the authorities on this corrupt path,” Knysh said. “In ways, their reasons are very much political. They say, ‘If Sufis support this, we will be against them,’ more or less.”
Imam Feisal said that attacks on Sufi worshippers, besides being a “major sin,” are the result of the politicisation of religion in the region over the past few decades. Egypt, in particular, he said, is a place where that politicization has fueled extremism.
“When religion becomes politicised,” Imam Feisal said, “it is not good.”

Saudi Arabia & the Ghosts of 1979

Events that year had a deep impact on the Subcontinent. Delhi must cheer on the Saudi crown prince’s effort to take on religious extremists. Mohammad bin Salman, the bold crown prince of Saudi Arabia, has been making waves with a muscular foreign policy, an ambitious economic agenda to wean the kingdom away from oil, the will to destroy the domestic political order and plans for social liberalisation. Conventional wisdom warns that pursuing any one of these four would be politically suicidal. But the 32-year-old crown prince, promoted out of turn by his father King Salman, is pressing ahead.
Not all his exertions have succeeded. The Saudi intervention in Yemen has turned out to be prolonged and costly. His attempts to punish Qatar have not brought Doha to its knees. The recent arrest of 200 top royals, officials and business tycoons on charges of corruption has been viewed by many as marking a political coup by Mohammad bin Salman, widely known as MbS.
It will be a while before his economic plans can be implemented and generate real results. His social reforms, such as letting women drive and calls for “moderate Islam”, are undermining the foundation of the modern Saudi state — an alliance between the House of Saud and Wahhabi clerics. Although MbS has been seen abroad as impatient and impetuous, he seems to have considerable support from the younger generation of Saudis that is fed up with social oppression and economic stagnation.
Speaking at an investors conference in Riyadh last month, MbS said, “We are returning to what we were before — a country of moderate Islam that is open to all religions and to the world. We will not spend the next 30 years of our lives dealing with destructive ideas. We will destroy them today.” In an interview to The New York Times last week, MbS said, “Do not write that we are ‘reinterpreting’ Islam — we are ‘restoring’ Islam to its origins — and our biggest tools are the Prophet’s practices and [daily life in] Saudi Arabia before 1979.”
Other Arab leaders in the region, including Abdul Fattah al-Sisi, the president of Egypt, and Mohammed bin Zayed Al Nahyan, the crown prince of Abu Dhabi and deputy supreme commander of the United Arab Emirates armed forces, have been pushing for moderate Islam over the last few years. But coming from Saudi Arabia, which is the centre of the Islamic world, and from its royal family, whose legitimacy rests on the claim to be custodian of the holy sites of Mecca and Medina, the message is significant.
Why is MbS constantly harping on 1979 — and the times before and after? It was indeed a critical year that transformed the Middle East and had powerful consequences for the whole world, especially the Indian Subcontinent. The first among the three pivotal events was the seizure of the grand mosque in Mecca by a group of zealots, who accused the Saudi royalty of abandoning Islam and selling its soul to the West.
From then on, the House of Saud moved rapidly towards conservatism. To counter the extremist flank from the right, it pandered to the Wahhabi clergy at home and promoted extremist groups abroad. But the Sunni extremist flank has become ever more radical and now sees the House of Saud as its most important political target.
The second event was the Islamic revolution in Iran that overthrew Shah Mohammad Reza Pahlavi in Tehran. Claiming to be the true guardian of Islam, Ayatollah Khomeini presented a big political threat to Saudi Arabia’s leadership role in the Islamic world. The Saudi rivalry with the Islamic Republic of Iran for influence in the Islamic world inevitably acquired a sectarian colour (Sunni versus Shia) as well as an ethnic dimension (Arab versus Persian).
The third event was the Soviet occupation of Afghanistan at the end of 1979. As the US-Soviet detente of the 1970s collapsed, Washington stepped in to mobilise a jihad against godless Russian communists with the help of the Saudis and the Pakistan Army. The Russian bear was pushed out of Afghanistan a decade later, but radical political Islam had been legitimised.
If the Middle East paid a huge price for the turmoil generated by 1979, so did the Subcontinent. Before 1979, the Subcontinent was a very different place. It had no dearth of economic and political problems, but violent religious extremism was not one of them. That is an awful legacy of 1979. General Zia-ul-Haq’s imposition of conservative Islam on Pakistani society and his promotion of religious radicalism to achieve political objectives in Afghanistan, India and Bangladesh have radically transformed the Subcontinent’s political dynamic.
But can MbS put the genie back in the bottle? Sceptics will caution against too much hope, for a strong resistance to the new agenda of “moderate Islam” is inevitable. Even among those who think MbS is on the right path, there will be much political disputation on how to exorcise the ghosts of 1979.
Delhi though must cheer on MbS and his effort to take religion back from the extremists. The ideas of religious moderation and social modernisation have been steadily pushed on the defensive in the four decades since 1979. Any effort to reverse 1979, therefore, must be welcomed whole-heartedly in the Subcontinent.

The Commonality of Thought in Riyadh & Tel Aviv

The defeat of the militant Islamic State group marks not the beginning of peace, but simply the ending of one stage of the greater Middle Eastern war, because IS, for all the horrors it perpetrated and all the press it got, was only a sideshow to the main act: the struggle for supremacy in the Middle East.
In this struggle, IS was never a player; it was at best a distraction, a convenient opportunity for various players keen on securing and advancing their strategic interests.
For Israel, the entire Syrian situation was a win-win, as it distracted many of its key opponents (Syria, Iran and Hezbollah, to name just three) from focusing on Israel. Tel Aviv’s view of the war echoed what former Israeli prime minister Menachem Begin reportedly said during the Iran-Iraq war in the ’80s: “I wish luck to both parties. They can go at it, killing each other.”
It was thus in Israel’s interests for the conflict to continue as long as possible because, as one Israeli officer put it: “If al-Assad wins we will have Hezbollah on two borders not one.” Israeli Brig Gen Ram Yavne went further in explaining why Israel doesn’t see IS or even Al Qaeda as a strategic threat, saying: “The axis headed by Iran is more risky than the global jihad one. It is much more knowledgeable, stronger, with a bigger arsenal.”
Curiously, the view from Tel Aviv is very similar to the view from Riyadh, which has looked on with horror as Iran has expanded its influence across the Middle East, and we know that after gravity, the greatest attractive force in the universe is the commonality of interests.
It is that commonality that is now bringing these unlikeliest of allies, Israel and Saudi Arabia, closer together. While rumours of Saudi-Israel contacts have been making the rounds for years, the covert is now increasingly overt.
On Nov 16, the Saudi online newspaper Elaph did the previously unthinkable by publishing an interview with the Israeli military chief, Lt-Gen Gadi Eisenkot. Eisenkot's views on Iran echoed the words that have been coming from Riyadh for some time now. He called Iran the "real and largest threat to the region" and accused Tehran of seeking to take control of the Middle East by "creating a [Shia] crescent from Lebanon to Iran and then from the [Persian] Gulf to the Red Sea". Eisenkot pointed out that Saudi Arabia and Israel had never fought a war and said that there was near-complete agreement between the two countries about the threat posed by Iran. If that wasn't enough, he also made an offer to share intelligence, saying: "We are willing to share information if there is a need. We have many shared interests between us." A few days before that, Israel's communications minister, Ayoub Kara, invited Saudi Arabia's Grand Mufti Abdul Aziz al-Sheikh to visit Israel.
Even earlier, Israeli Prime Minister Netanyahu was one of the few Middle Eastern leaders to exuberantly welcome Lebanese Prime Minister Saad Hariri’s farcical resignation, calling it a wake-up call to the international community to take action against Iranian aggression, which is turning Syria into a second Lebanon.
More recently, in an interview with the New York Times, Crown Prince Mohammad bin Salman echoed Israeli language by calling Iran’s supreme leader Ayatollah Ali Khamenei the “new Hitler of the Middle East”.
For Israel, a formal alliance with Saudi Arabia may take away from the national narrative of a small nation surrounded by enemies, but such narratives can be adjusted. Moreover, Israel does maintain fairly cosy relations with Sisi’s Egypt, a concord that has effectively squeezed Hamas in the Gaza Strip. Similar ties are maintained with Jordan and so this rapprochement will not be unprecedented.
For Saudi Arabia, the risks are somewhat higher. Apart from handing Iran a propaganda victory by allowing it to paint itself as the last bastion of resistance against Israel, the monarchy will open itself up to accusations of betraying the Palestinian cause. And while that cause was always at best a talking point, closer ties with Israel, long demonised in Saudi media, will likely embolden the more radical domestic opponents of the Saudi regime. In isolation it may be possible for Riyadh to manage the blowback, but given the challenges the crown prince is already dealing with, the prognosis does not look good.
Some of the sting could be taken out if Saudi proposals for a Palestinian peace plan were to be taken up by Israel, something that would be welcomed by the White House, which is always looking for a win to take credit for. But how amenable the Israelis would be to this is unknown, given the possible domestic outcry. But then, ideology is for the cannon fodder. At the top of the food chain, realpolitik rules.

Welcome, Plastic Waste

Waste plastics contaminate our food, water and air. Many are calling for a global ban on single-use plastics because throwing them “away” often means into our river systems and then into the world’s oceans.

Take the UK's single-use plastic bottles: it's estimated that 35m are used – and discarded – each day, but only 19m are recycled. The 16m bottles that aren't recycled go to incinerators, landfill or the environment, even though, being PET (polyethylene terephthalate), they are easily reprocessed. Even those bottles that are placed in the recycling stream may be shipped to Asia, in a global market for waste plastics that is itself leaky.

It’s suspected that much of the “recycling” shipped to Asia may be joining local waste in the Great Pacific Garbage Patch. This soupy collection of plastic debris is trapped in place by ocean currents, slowly breaking into ever-smaller pieces, but never breaking down. Covered by bacterial plaques, they are mistaken for food by fish. Ingested, they contaminate the food chain and, potentially, may even be disrupting the biophysical systems that keep our oceans stable, thus contributing to climate change.

So we need to use far less plastic, re-use what we can, and dispose of what we must far more wisely. In facing this challenge, developed countries can learn from innovations in the less-developed world. People, globally, are innovating, creating new processes to use waste plastics and making new objects and art forms.

Filipino plastics

The Philippines, for example, is the world’s third biggest plastic polluter. Waste material, even from municipal collections, makes its way into the river systems and, from there, out into the open Pacific, contributing to its “plastic soup”. But it’s largely what happens on land that determines the load the oceans must bear.

Industrial recycling isn’t accessible to most people in the Philippines. Even if recycling were available, shipping their waste to China, long the centre of the global plastics recycling industry, is no longer a viable option: the industry is now the target of regulation by the Chinese government, eager to clean up its own environment.

Some waste in the Philippines is reprocessed locally. But people living in remote, rural areas have a stark choice: they can bury their plastic waste, burn it, or come up with innovative ways to repurpose the material. Given the unpalatable nature of the first two options, the country now has an innovative, artisanal plastic craft movement.

A few years ago, I worked with craftspeople and artists to put together an exhibition of repurposed plastic waste. The items that really stood out were those things that local people made according to traditional patterns. These were items people used for cultural events that marked their identity as tribal Filipinos. Much of the work of experimentation with the materials was happening in kitchens and workplaces as people shared techniques and tips with each other.

One of our contributing craftspeople, Ikkay, lives in a remote tribal village in Kalinga Province. She makes strands of plastic beads out of bits of waste plastic, using old CD cases and fast food spoons – anything with a bit of gloss. Her beads are replicas of traditional tribal designs. These beads are used in local cultural performances and shipped all over the world for demonstrations of Filipino dance.

Another example of creative re-purposing came from the nearby gold mines. There, mine workers weave the yellow, red and pink plastic wrappers of blasting-cap detonators into traditional basket forms. They find the high-quality wrappers just as good as the rattan they'd originally used. Burly mineworkers walking the roads with dainty pink-and-yellow plastic backpacks have become a frequent sight in and around mining communities.

In both these examples – beads and backpacks – people had figured out innovative ways to re-purpose waste plastic. They took material that would have been garbage and turned it into an item that expressed important cultural values, making something cool, fun, and desirable.

The material itself – waste plastic – has something to do with this. In these Filipino cases, it made tribal displays of beads extra impressive. The makers and wearers of these items were not just subverting ideas about waste, but about social hierarchies and the social power to innovate and set trends. People who admired the plastic craft started to see plastic no longer as just “stuff” on its way to being garbage. Instead, it became imbued with the potential to become something new and different and a way of asserting local ingenuity and identity in a global world.

Flip-flops and whales

These examples from the Philippines are not isolated ones. Around the world, there are many small-scale groups doing very similar kinds of projects with re-purposed plastic waste. Rehash Trash in Phnom Penh, Cambodia, makes baskets out of used plastic bags. Ocean Sole in Kenya works with discarded plastic flip-flops to make art. Other initiatives are networks, like that created by Precious Plastic, a 3D-printing initiative from Amsterdam, which has groups all over the world building 3D printers to create re-purposed plastic items for local markets.

In the UK, people have used fine art to communicate the urgency of the issue to the wider public, whether it’s a giant plastic whale touring the UK or the artist Stuart Haygarth’s collection of plastic waste from UK beaches, hanging in University College Hospital London. But pointing out the problem is only the first step in providing comprehensive solutions. We need to reduce our plastics footprint, seek out items made from recycled materials, and – most importantly – learn, hands on, about new ways to reuse what is now a ubiquitous class of waste materials.

People are meeting the challenges plastic waste poses in creative ways. While the end goal will be to phase out the materials creating the bulk of the pollution, in the meantime we must improve the capture of recycling systems globally and make locally recycled and re-purposed materials more desirable and acceptable. Learning to love plastic – wisely – means taking on the responsibility for our own discarded items.

We could take up the examples of these local innovation workshops and 3D printing groups to get making in communities worldwide. While making new items won’t halt the consumption of plastics altogether, it will divert material from the waste stream while helping people to see its potential. Offering the public a chance to co-create part of the solution should make the inevitable regulatory responses – deposits, disposal taxes and more rigorous waste sorting – more acceptable.

After all, waste plastic is still plastic, and plastic is amazing stuff – you can make it into pretty much anything you can imagine.

The Means of Financing for Terrorists

The Global Terrorism Index 2017 was released this week. The number of terrorism deaths globally has declined for the second consecutive year, reveals the report, which is produced by the Institute for Economics & Peace. But in developed nations deaths have increased, and terrorism has spread to more countries. In part, this reflects the changing dynamics of terrorism in the developed world: a shift from high-intensity, sophisticated attacks to more low-tech, low-cost and lone-actor attacks. This shift in tactics mirrors the evolution of terrorism funding and highlights the need to consider longer-term strategies to inhibit the rise of terrorism.
In 2016, ISIL was the world's wealthiest terrorist group. Its estimated annual revenue, according to the Global Terrorism Index, peaked at US$2 billion in 2015, equivalent to the GDP of some small nations. However, as the loss of its self-proclaimed caliphate has shown, the group's strategy of self-funding from controlled territory left it susceptible to any action that impinged on that territory. During the last year, ISIL's funding structure has caved in following major territorial losses in Iraq and Syria, as half of its funds were sourced from oil smuggling. ISIL was producing up to 75,000 barrels per day and generating revenues of US$1.3 million per day.
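As a rough back-of-envelope check (my own arithmetic, not a figure from the Index), those production and revenue figures imply that the group's smuggled oil was selling well below prevailing world prices:

\[
\frac{\$1{,}300{,}000 \text{ per day}}{75{,}000 \text{ barrels per day}} \approx \$17 \text{ per barrel}
\]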
In response, the 68-member Global Coalition has targeted ISIL’s revenue sources. By early 2017, the coalition had destroyed more than 2,600 oil extraction, refinement and sale sites. Cash storage sites were also targeted, directly hindering ISIL’s ability to pay fighters and provide basic services. The Iraqi government has also shut down banking systems within ISIL-controlled territory to restrict payments to government workers in these areas. With the continuing loss of territory in 2017, it is estimated ISIL’s revenue has fallen from US$81 million per month in 2015 to US$16 million per month in 2017. It is highly likely its revenue will decline further.
This disruption to ISIL’s fundraising has unquestionably helped thwart the activities of the group, especially in its base countries of Iraq and Syria. However, the global threat posed by the group remains, particularly in light of the trend towards more low-cost attacks.
The September 11 attacks were highly sophisticated in their planning and implementation. Al-Qa’ida planned the attacks over many months and trained multiple perpetrators to carry out specific tasks. The attacks required substantial financing with estimates varying between US$400,000 and $500,000.
In stark contrast, the 2004 Madrid train bombings were estimated to have cost US$10,000. The failed 2007 London car bomb attacks carried an estimated price tag of about US$14,000. The foiled commuter train attack in Cologne, Germany, in 2006 was estimated to have cost only US$500. The more recent attacks using vehicles, such as the 2016 Nice truck attack, were similarly inexpensive to conduct.
This shift towards inexpensive attacks reflects a shift in tactics. A study of 40 terrorist cells that plotted or carried out attacks in Western Europe between 1994 and 2013 found that most plots were self-funded. Furthermore, three-quarters of terror attacks in Europe between 1994 and 2013 cost less than US$10,000. This estimate includes all costs associated with the attack such as travel, communication, storage, acquiring of weapons and bomb-making materials.
The United Nations Security Council has long recognised the need to combat the financing of terrorism. Resolution 2178 (2014) and Resolution 2249 (2015) both sought to quell terrorist power. The council has encouraged all member states to ’prevent and suppress the financing of terrorism.’ Yet this shift towards inexpensive and self-funded attacks is cause for concern. Attacks are likely to be funded through legal means, such as an individual’s savings or donations. Their low cost renders them harder to detect during the preparation stage.
This trend highlights the need to pay greater attention to the many complex issues associated with terrorism, beyond the more frequently discussed military and security responses. This includes better understanding the drivers of terrorism recruitment.
A recent analysis of 500 former members of various extremist organisations in Africa found that over half of respondents were motivated to join as they perceived their religion as under attack. Former fighters also cited low levels of trust in government institutions and high levels of animosity towards the police, politicians and the military. Similarly, a study of al-Shabaab members from Kenya found 97% of respondents claimed their religion was under threat and 65% had joined in response to the Kenyan government’s counter-terror strategies.
Recent studies examining the motivating factors behind an individual’s decision to join a terrorist group have also pointed to relative, rather than absolute, deprivation as an explanatory factor. Individuals whose expectations for social mobility and economic welfare have been frustrated are at a greater risk of radicalisation. Thus countries where a highly educated population remains largely unemployed or underemployed may be breeding grounds for extremist ideology.
In the European Union, where most countries are considered well-off in absolute economic terms, there remain large differences in youth unemployment levels when comparing native and foreign-born citizens. A first-generation young immigrant in Belgium is 64% more likely to be unemployed than a young person born in Belgium. Such differences may be due to other factors such as education or language levels, but importantly these differences contribute to a perceived unfairness.
These studies highlight the need for countries and the global community collectively to develop more long-term strategies for dealing with the spread of terrorism, and to ensure that any counter-terrorism measure does not inadvertently increase the risk of terrorism. This imperative is all the more pressing given the rise in the use of low-cost low-tech attacks.

 

Expansion of Knowledge, Including the Humanities: A Must for Economists

"Economists must speak to other disciplines – and let them speak back"
In a 2006 survey, American university professors were asked whether it was better to possess knowledge from numerous fields of study, or from just one. Among professors of psychology, 79% were enthusiastic about interdisciplinary learning, as were 73% of sociologists and 68% of historians. The least enthusiastic? Economists: only 42% of those surveyed agreed with the need to understand the world through a cross-disciplinary lens. As one observer put it bluntly: "Economists literally think they have nothing to learn from anyone else."
In fact, economists would benefit greatly if they broadened their focus. Dealing as it does with human beings, economics has much to learn from the humanities. Not only could its models be more realistic and its predictions more accurate, but economic policies could be more effective and more just.
Whether one considers how to foster economic growth in diverse cultures, the moral questions raised when universities pursue self-interest at the expense of their students, or deeply personal issues concerning health care, marriage, and families, economic insights are necessary but insufficient. If those insights are all we consider, policies flounder and people suffer.
Economics has a hard time in at least three areas: accounting for culture, using narrative explanation, and addressing ethical issues that cannot be reduced to economic categories alone.
People are not organisms that are first made and then dipped in some culture, like Achilles in the river Styx. They are cultural beings from the outset. But, because culture cannot be rendered in mathematical terms, economists typically embrace the idea of a pre-cultural humanness.
To understand people as cultural beings, one must tell stories about them. Human lives do not unfold in a predictable fashion the way Mars orbits the sun. Contingency, idiosyncrasy, and unforeseeable choices play an irreducible role. Life displays what might be called "narrativeness," implying the need for explanation in terms of stories. And the best appreciation of this is to be found in novels, which may be considered not just a literary form, but also a distinct way of understanding the social world. Although the events that novels describe are fictional, the shape, sequence, and ramifications of those events are often the most accurate account we have of how lives unfold.
Finally, economics inevitably involves ethical questions that are not reducible to economics itself – or, for that matter, to any other social science. Economists often smuggle ethical concerns into their models with concepts like “fair” market price. But there are many ways to make these issues overt and open them to argument.
There is no better source of ethical insight than the novels of Tolstoy, Dostoevsky, George Eliot, Jane Austen, Henry James, and the other great realists. Their works distill the complexity of ethical questions that are too important to be safely entrusted to an overarching theory – questions that call for empathy and good judgment, which are developed through experience and cannot be formalized. To be sure, some theories of ethics may recommend empathy, but reading literature and identifying with characters involves extensive practice in placing oneself in others’ shoes. If one has not identified with Anna Karenina, one has not really read Anna Karenina.
When you read a great novel and identify with its characters, you spend countless hours engaging with them – feeling from within what it is like to be someone else. You see the world from the perspective of a different social class, gender, religion, culture, sexual orientation, moral understanding, or other features that define and differentiate human experience. By living a character’s life vicariously, you not only feel what she feels, but also reflect on those feelings, consider the character of the actions to which they lead, and, with practice, acquire the wisdom to appreciate real people in all their complexity.
The point is not to abandon the great achievements of economics, but to create what we call a “humanomics,” which allows each discipline to keep its own distinctive qualities. Rather than fuse economics and the humanities, humanomics creates a dialogue between them.
Such a conversation would actually bring economics back to its illustrious roots in the thought of Adam Smith, who, in The Theory of Moral Sentiments, explicitly denied that human behavior could be adequately described in terms of people’s “rational choice” to maximize their individual utility. After all, people often behave foolishly. More important for Smith, their care for others is an “original passion” that is not reducible to selfish concerns.
Smith's writings on economics and ethics share a deep sense of the limits of reason. Central planning is bound to fail, but so are algebraic models of behavior. One needs a subtle appreciation of particulars, the sort of sensitivity that was dramatized, a half-century after Smith's moral treatise, by Jane Austen and her successors. A great psychologist, Smith knew that we need both cents and sensibility.
Econometric methods and mathematical models teach us much, but only so much. When it comes to human lives, characterized as they are by contingency and narrativeness, stories are an indispensable way of knowing. That is why the quantitative rigor, policy focus, and logic of economics must be supplemented with the empathy, judgment, and wisdom that define the humanities at their best. Economists must speak to other disciplines – and let them speak back.

 

Gates, Windows, & Doors

Not that I tend towards idolatry, not that I don't use and appreciate Windows, and certainly not because a fellow countryman was chosen to head the organization; as a neurologist, I was surprised that Bill Gates donated personal income to the tune of $50 million towards research in Alzheimer's.
That certainly is a personal tug, but it brings to light the profile of a man whose software made computers so simple to use; who until recently was the richest man in the world; and who, as an entrepreneur, has seen his company sustain its aggression in sales, innovation and R&D to stay at the top.
All these achievements are not unknown among other industrialists, from Alfred Nobel to Henry Ford, Edison and Carnegie, who also contributed to social welfare, which makes CSR a rule today to preserve one's manufacturing licence.
Back home, the Birla institutes, the Tata trusts, Goenka, Cipla, Wockhardt and many others turned towards philanthropy. I would make an exceptional mention of Vikram Sarabhai, who led the pharma industry in India through Sarabhai and Sarabhai-Squibb and headed the famous Calico Mills, but beyond that initiated IIM Ahmedabad (still the foremost), the Physical Research Laboratory, the Institute of Plasma Physics, the School of Architecture (one of the best in the country) and the National Institute of Design. Beyond that, he headed India's space programme, and for a while took on the additional responsibility of India's atomic programme after the sudden demise of Dr Homi Bhabha in an air crash. There is much more in his biography, "Vikram Sarabhai: A Life" by Amrita Shah. From what I know, the man was incomparable, focused, humble and approachable. He wrote a book on management, which needs to come up for detailed discussion, perhaps by the Niti Aayog.
I come back to Bill Gates because of the better access we have to him through social media, because he is a living model, and perhaps because of the different kind of maturity needed to analyse greatness with a human flair. He has also penned a short account of his own, "Bill Gates: The Impatient Optimist".
Somewhere in my mind is the theme of people of great talent, and how they sublimate from sheer impatient materialism (necessary to be seen as a success) and later move on to the larger matters of life, where they instantly see the need to contribute, not listlessly, but with a dynamic involvement.
Mr Gates is a living example: his evolution as an inventor and the richest man, and the right type of management structure, keep him aloof as a mentor for Microsoft and allow him to engage in newer interests in life. We remember when he created the Bill and Melinda Gates Foundation to contribute millions to charity.
For a man in search of tangible results, perhaps that did not satisfy him. I can see he has latched on to a more focused program on Alzheimer's.
With increasing longevity, particularly after 80, neurodegenerative diseases such as parkinsonism, dementia and multisystem disorders snatch away the faculties, such that a man exists but does not live in terms of cognition, awareness and the ability to interact with the environment. The burden on society and the expenses are there, but I would consider them secondary.
Alzheimer's, though strictly a biopsy diagnosis, is loosely used for dementias in general, as it is the most common.
The pathological substrates, some of which Mr Gates would be asking his team to resolve, are the "amyloid plaques" and neurofibrillary tangles, forms of protein in nerve cells that over-proliferate and block electrical conduction, much like a tangle of wires; the aim would be to restrain or dissolve them. A technical, networking mind may take a fresh look!
It is the theme and focus that is laudable. As predictions go, it may take ten years to arrive at a breakthrough, unless there is a “Eureka moment” earlier.
What impresses me more, and comes across as a philosophy of life, is the relaxed Mr Gates you see in pictures and videos. The optimism makes him humble, and the "impatience" is gone.
Very few are so blessed to reach such a state of equanimity, to make a soft landing from such heights and resort to calm taxiing, even while approaching a destination where the doors open!
Bill Gates is certainly our example of the day, but such is the trajectory of other chosen ones. The Mahatma started with a freedom struggle but in the end arrived at a hypothesis for mankind: non-violence. Sooner or later that shall permeate most of humanity.
Tolstoy, at the height of his literary fame by his forties, later could see through the game of life, and his ideas inspired the concept of the "Tolstoy Farm".
It took five years after he passed away for the Nobel Committee to be convinced that the inventor of dynamite had set aside provisions for a "Peace Prize".
Mandela was jailed for a bomb blast. Years later, when he took over as President, he did not allow any retaliatory revolution by the blacks but settled for a peaceful democratic state. No whites had to flee due to pent-up animosity.
With so much strife and hatred all around, some wisdom is bound to pour in. When the world's richest are looking to fund research for the quality and longevity of human life, I suppose it is time for political, martial and diplomatic forces to step down to a lower threshold!
Not that we expect any slackness in the high quality of Microsoft products. Somewhere, the life trajectory of the inventor is contributing to it.
It’s a lesson in mind and matter.
For the time being the nation is positively Moody, at least a certain part of it. Look out of the window: the air may be cleaner, even if only for a while! Is it due to a concealed virtue in "Trumpism"?
“Qaid-e-hayaat-o-dard-e-gham, asl mein dono ek hain,
Maut sey pehley aadmi, ghumsey nijaat paaye kyon” Ghalib
(The prison of existence and the pain of sorrow are, in fact, one and the same;
Before death, why should a man expect release from grief?)

Padmavati: A Sufi Tale of Rajput Kings Becomes a Tool of Inane Nationalism

It is necessary to read the ‘Padmavat’ focusing on the time and social order in which it was composed and then analyse the corruption in interpretation it has gone through to finally become an episode of Rajput and Hindu pride.

Every year between the months of February and March, the city of Chittorgarh in Rajasthan comes together in celebration of what is believed to be one of the most critical episodes of the community's history: the jauhar (self-immolation) of Queen Padmavati in defence of her honour and virtue. For the Rajputs, Rani Padmini, or Padmavati, has held a semi-goddess-like position for centuries. Her choice to die rather than be captured by another man has been celebrated with utmost vehemence as the symbol of Rajput valour and integrity. When director Sanjay Leela Bhansali announced his upcoming project, Padmavati, it was the Rajput consciousness of this historical identity that was at stake. So the Rajput group Karni Sena took it upon itself to protest against the film's alleged attempt to distort Rajput history.

The legend of Padmavati first appeared in a sixteenth-century poem called 'Padmavat'. Written in the Avadhi language by the Sufi poet Malik Muhammad Jayasi, 'Padmavat' was a tale of love, heroism and sacrifice, dotted all along with fantastical elements that gave it a larger-than-life imagery. The poem narrates that a princess of unparalleled beauty called Padmini lived in the kingdom of Simhaladvipa, now Sri Lanka. Enamoured of her beauty, King Ratansen of Chittor was engulfed with the passion to acquire her and overcame a large number of adventurous obstacles to make her his queen. Back in the kingdom of Chittor, Ratansen banished a sorcerer, who travelled to Delhi and told its ruler, Alauddin Khalji, of Padmini's beauty. The Khalji ruler marched to Chittor and vanquished Ratansen. But he did not manage to win Padmini, as she, along with the other Rajput women, committed jauhar by consigning herself to the flames.

Padmavati's story is sacrosanct among the Rajputs, who consider her the ideal wife and woman and who see vested within her their legacy of bravery and virtue. Further, this narrative of their past has been learned through oral transmission from one generation to another and through local folk tales that have given it a sacred legitimacy. Ever since the protests against Bhansali's film broke out, an issue of constant debate has been to what extent the legend of Padmavati is historically authentic and to what extent she is a product of fiction. Khalji defeated the Rana of Chittor in 1303 and died in 1316. No one by the name of Padmini or Padmavati existed then, or at any time, in flesh and blood resembling the story. She was born in 1540, 224 years after Khalji's death, in the pages of a book of poetry by Malik Muhammad Jayasi, a resident of Jayas in Awadh, a very long way from Chittor. Jayasi was a Sufi poet and followed the poetic format in which God is the beloved and man is the lover who overcomes hurdles to unite with the beloved. Khalji embodied the many hurdles. There are just two historical facts relevant to the story: Khalji's attack on Chittor and Rana Ratan Singh's defeat.

But then, besides recorded and verifiable historical facts, there is another set of facts too: culturally constructed facts, embodied in popular memory, told, retold and retold yet again. For common people, untrained to distinguish historical fact from cultural memory, these acquire the status of history. Jawaharlal Nehru was particularly sensitive to this blurring in people's minds. As memory does not follow the norm of verifiability, it is subject to quick metamorphoses.

The Padmavati story, like many others, has undergone several mutations. Ramya Sreenivasan has traced the wide circulation and mutation of the story from North India and Rajasthan to Bengal from the 16th to the 20th century in her magnificent book, The Many Lives of a Rajput Queen. To begin with, in Jayasi’s version and its several Urdu and Persian translations between the 16th and 20th centuries, Khalji was courting Padmini with a view to marrying her. In Rajasthan, during the same period, the emphasis changed to the defence of Rajput honour which had come to be invested in Padmini’s body. It was in Bengal in the 19th century that Padmini acquired the persona of a heroic queen committing jauhar in order to save her honour against a lusty Muslim invader. Concealed in it was a vicarious patriotic resistance to colonial dominance which also characterised other literary productions in the region such as Bankim Chandra’s celebrated Anand Math.

It is this memory in Rajasthan that has been turned into a hard, unambiguous historical fact which brooks no disputation. The inversion of a character imagined by a Muslim poet into the defender of Hindu honour can pass quietly unnoticed.

Cultural memory of a community hardly ever distinguishes between historical authenticity and fictional concepts that have, over time, acquired the garb of historicity. In that sense, it becomes increasingly difficult for the community to come to terms with the fact that a part of its historical pride may not have existed at all. While parts of the Padmavati legend have been proven historically, particularly the battle between Ratansen and Alauddin Khalji, the extensive use of fanciful elements in the story makes it imperative for us to approach the authenticity of the narrative carefully. More important, however, is the necessity to read the Padmavat focusing on the time and social order in which it was composed, and then to analyse the corruption in interpretation it has gone through to finally become an episode of Rajput and Hindu pride.

The poet, the poem and its social context

Alauddin Khalji was the Sultan of Delhi between 1296 and 1316. Under his rule, the Khalji empire expanded rapidly to occupy regions in western, central and peninsular India. Khalji's attack on Rajasthan had a particularly destructive impact upon the ruling lineages of the region, resulting in the Delhi Sultan occupying a particularly hated space in Rajput memory. Khalji's rule was also noted for having destroyed the authority of local chiefs, most of whom belonged to the social group of Rajputs. However, we need to note that Amir Khusrao, the Sultan's court poet who accompanied him during the invasion of Chittor, makes no mention of a Rani Padmini in his accounts of the attack.

An illustrated manuscript of Padmavat from c. 1750 CE (Wikimedia Commons)

Padmavati is introduced to us by Malik Muhammad Jayasi, about two centuries after the attack on Chittor took place. Jayasi was from the region of Jais in North India and had been initiated in the Chisti Sufi lineage of Saiyid Ashraf Jahangir Simnani. In the sixteenth century, when Padmavat was written, it was common for the Sufi pirs to provide religious legitimation to the ruling elite in return for the patronage the rulers gave them. “The choice for the story of the siege of Chittor and the role of the Rajput queen Padmavati as the main theme of Padmavat makes the poem particularly relevant in this context. It locates the poet in a literary field defined by the interests of both worldly and religious patrons,” writes historian Thomas de Bruijn in his book ‘Ruby in the dust: history and poetry in Padmāvat by the South Asian Sufi poet Muḥammad Jāyasī’.

From the fifteenth century, new Rajput ruling lineages claimed lineal and political descent from the predecessors who had been destroyed by Khalji. Historian Ramya Sreenivasan notes that it was from this period that the Rajput memory of Alauddin Khalji's invasion began to be actively reshaped, focusing on the valour of the monarch of Chittor who resisted Khalji's attacks. One of the first texts to participate in this celebration of Rajput history was the Kanhadade Prabandh, commissioned by the Chauhan chief of Jalor. The narration of Padmavati by Jayasi needs to be contextualised within this new social order that had emerged in Rajasthan.

Awadh at this time was populated by a large number of Rajput elites. Sreenivasan has located the creation of the Padmavat in the sixteenth-century politics of Awadh, where the rising influence of Sher Shah Suri had led to great anxiety among the Rajput elites. Further, she has also pointed to the historicity of another king named Ratansen, who was the Rana of Chittor in the sixteenth century. Under his reign, nine years before the Padmavat was written, an episode of mass immolation had taken place in Chittor, just before its conquest by Bahadur Shah of Gujarat. It is possible that Jayasi, in his narration of the Padmavat, was transporting contemporary politics to a historical period. "As these Awadh elites were deeply involved in the patronage of Chisti Sufis, it seems all the more justified to position Jayasi's Padmavat in this context," writes Thomas de Bruijn.

The construction of the Padmavat's narrative as a fantastical tale of love needs to be located in the influence that other literary and cultural traditions of sixteenth-century North India had on Sufi literature. Narratives of a king falling in love with a beautiful princess, overcoming all obstacles in the process of acquiring her and attaining a spiritual apogee through their union were an imagery common to Jain, Persian and other folk genres of the period, and Jayasi was bound to be inspired by them.

The interpretation of Padmavat in modern times

The circulation and transmission of the Padmavat has been an ongoing process and its interpretation at various historical stages needs to be located in the political context of the time in which it was being read. The modern interpretation of the text is a result of the twentieth century rendition of it inspired by the nationalist movement of the time. The nationalist struggle inspired scholars to investigate early modern vernacular literature to promote a Sanskritised form of Hindi which was deemed to be a necessary prerequisite to the linguistic unity of independent India. Awadhi and Braj literary traditions were particularly promoted as the predecessors of modern Hindi.

Ramchandra Shukla (Wikimedia Commons)

With respect to the Padmavat, the rendition of Ramchandra Shukla in 1924 was particularly important. Ideologically, Shukla was inclined to represent early modern vernaculars in a Hindu religious context, and the Sufi literature of the period posed a problem for him. While he was unhappy with aspects of the poem as not being ideally Indian, he is known to have been touched by the mysticism in Jayasi's poem, which he believed was similar to Kabir's poetry, and therefore Jayasi was accepted as "Indian". Further, as Thomas de Bruijn notes, "he is also positive about the representation of the behaviour of Padmavati when Ratansen is taken captive by Alauddin, which he interprets as an ideal image for the devotion of the Indian wife, in the manner in which he sees it portrayed in truly 'Indian' poetry."

Shukla and his contemporaries’ interpretation of Padmavati as depicting the Indian and Hindu positivities of the nation is what has stayed on in the way the text is read and remembered till date. The celebration of Rani Padmini’s jauhar in Chittor and the Karni Sena’s relentless protest against Bhansali’s film need to be located in the flawed and motivated interpretation of Padmavat that has seeped into the Rajput cultural memory for decades now.

Sunday Special: Why Do Humans Eat Bitter Foods?

My wife loves the bitter karela, the bitter melon that has nothing melony about it, as if it were ambrosia. And so do many others across the world. Our bodies crave sugar, salt, fat, protein — all forms of replenishment or efficient providers of caloric energy. When something tastes bad, we're meant to take it as a warning sign: danger, don't eat this, it could kill you. And yet two of the five sensations we've universally categorized as tastes are, arguably, bad ones: sour and bitter. Then there's spicy food, whose flavor can be so extreme that it actually qualifies as a form of pain instead of a taste. But the discomfort of eating a superhot chili pepper can also be physiologically compared — for some people, at least — to riding a roller coaster or watching a horror movie. It's a pain we perversely crave, a feeling that's as pleasurable as it is uncomfortable. Super-sour foods, too, can offer a kind of exhilarating rush. There's something addictive about the tangy, mouth-puckering effect: Watch a baby suck on a lemon for the first time, burst into tears and then go back for more; try to resist a bag of Warheads or sour gummies.
Flavor, then, needn't be pleasant for it to be attractive. But the appeal of bitterness is less obvious — unlike spicy and sour foods, the sensation has never been much of a selling point, at least not in America. If the physical impulse when eating sour foods is to suck in your cheeks, bitterness hits hard on the back of your tongue and, in excess, makes you gag. And yet this taste has long been an integral element in the cuisines of many other countries and cultures, which are starting to gain real traction here. The current trend for Middle Eastern food, for example, means more eggplant, even when roasted until caramelized; tahini, made from naturally bitter sesame seeds; and za'atar, an herb and spice mixture heavy on wild thyme and oregano. The movement toward more authentic Mexican food introduced mole, an Oaxacan sauce whose primary ingredients are tomatoes and alliums, charred until earthy and blended, and has made cilantro as common as basil.
As difficult as it is to trace the origin stories of the use of bitter ingredients, it seems safe to assume that it was often the result of necessity: In times of scarcity, you learn to make do with anything edible. Over time, however, eating bitter foods became not only traditional but in some cases even philosophical, revealing of a culture’s resilience: In China, there is a colloquialism that translates literally to “eat bitter,” a metaphor for the ability to endure hardship. Jews eat bitter herbs, usually horseradish, at Passover seders, to remind themselves of the suffering endured by their ancestors. On the Japanese island of Okinawa, a ubiquitous stir-fry of egg, tofu, pork and bitter melon called goya chanpuru is thought to ensure longevity — suffering in service of a long life.
It’s hard not to see the current worldwide health-food craze as being a convoluted translation of this: If beauty is pain, health, you might say, is bitter. Conscientious eaters choose salads overflowing with raw kale or collard greens; frothy, chalky matcha and “golden lattes” tinged with pungent turmeric. These things are nutritious, yes, but there’s also a psychological element: Bitterness equals raw, which in turn equals purity. Adding sweetness to your coffee or chocolate is a corruption of this purity. And, as with my bitter melon experience, eating bitterness can be a brag: How better to prove your connoisseurship than ordering something whose pleasures are either obscure or nonexistent? To order a drink with no trace of sweetness — say, a hoppy India pale ale or a straight shot of the Italian amaro known as Fernet-Branca, dark, viscous, herbal — is to announce one’s fortitude and disdain for instant gratification. Nothing worth doing is easy, and nothing worth consuming goes down easy. In an age of ready pleasures, choosing something difficult and unlikable is an announcement of sophistication. The craze is born, you might say, from having too much enjoyment.
Danny Bowien, the chef behind Mission Chinese, a restaurant with outposts in New York and San Francisco, is passionate about bitterness, which he describes, affectionately, as “challenging.” Bitter melon not only makes multiple appearances on his menus — most prominently in one of his signature dishes, thrice-cooked bacon with rice cakes — but he also seeks it out elsewhere, including at the taxi-stand Punjabi restaurant across the street from his Lower East Side apartment, where he orders an Indian varietal of the fruit braised in a curry with radish or potato. “It takes something out of you, in a way,” he muses. “The first time you have it your body kind of seizes up. I like that it punches you in the face.” What he calls “abrasive” flavors “break up the experience” of eating.
For Bowien, discovering bitter flavors was thrillingly world-expanding. He grew up in Oklahoma, where food was often sugary, but when he was 19, he left for San Francisco, where he had coffee-braised pork shoulder at a New American restaurant, and beef with bitter melon and fermented black beans at a Chinese restaurant in the Mission, both dishes that changed the way he understood flavor. At his restaurants now, he tries to create “food that really leaves an impression on you, and you can do that in many ways — drama, luxury products, really amazing techniques. But there are a lot of ingredients that up until recently have not really been highlighted within what we cook on a daily basis.” This includes, for example, grapefruit rinds, which Bowien has used to garnish scallop sashimi, giving it a mouth-twisting bite.
The Mexican chef Enrique Olvera, too, has been steadily elevating certain ingredients and recipes not only for the American palate, by way of his New York restaurants, Cosme and Atla, but also for his own countrymen at his Mexico City restaurant, Pujol. There, he serves fine-dining dishes that showcase bitter vegetables like the wild greens known as quintoniles and prickly pear cactus, or nopal — and Pujol is especially known for his dark, rich, intensely complex, intensely acrid mole, aged for over 1,200 days.
To Olvera, an affinity for bitterness is evidence of human innovation and diversity. "Mole doesn't taste like tomato with garlic and onions — you probably think of that and you think of Italian pasta sauce," he says. "But the fact that when you char tomatoes and add peppers and cinnamon and mix it together, and then it tastes like mole is magical." For Olvera, bitterness is essential for depth of flavor and harmony of tastes. "In Mexican food," he says, "it's a huge component of every dish, combined with spice, or with sweetness, or even with acidity." His mole teases at your tongue by promising but never quite delivering the relief of sugar. As such, it's perhaps the best literal example of another poignant and enduring metaphor for an otherwise ephemeral human feeling: It's bittersweet. And what's more understandable than that?

The Promise & Peril of Immersive Technologies: Augmented & Virtual Reality

We are at the cusp of a major revolution, from mobile to immersive computing. Last year was seen as the dawn of a third wave of devices employing augmented and virtual reality (AR and VR), which define the two ends of the spectrum of immersive technology that could replace mobile computing.
A range of major products came to market in 2016 from companies including Oculus VR, Sony and Google. Since it bought Oculus for $2.1 billion, Facebook has acquired a further 11 AR/VR companies, underscoring the company’s view that VR and AR will form the next frontier. The large investments and acquisitions by tech giants suggest that these technologies will become increasingly integrated with the platforms on which we consume content.
According to a recent estimate by Goldman Sachs, AR and VR are expected to grow into a $95 billion market by 2025. The strongest demand for the technologies currently comes from industries in the creative economy – specifically gaming, live events, video entertainment and retail – but over time they will find wider applications in industries as diverse as healthcare, education, the military and real estate.
Moving from observation to immersion
AR and VR will offer a completely new creative medium – “an artist’s dream to build worlds, pixel by pixel,” according to Drue Kataoka, an artist and technologist. This promises the replacement of rectilinear devices with technologies that depict worlds in ever-expanding concentric circles, providing a level of immersion and experience that has never been seen before. This could be game-changing: users will no longer view content but will be placed inside ever-expanding virtual worlds and find themselves at the centre, hence the “immersive” nature of the technology.
“We’ve effectively had the same flat screen medium since 1896. VR/AR uniquely provides a sense of presence and immersion, it’s a brand new art form and brand new form of experiencing,” says Eugene Chung, founder and CEO at Penrose Studios.
Reduced production costs in creative activities
“Virtual prototyping” allows us to shorten the time and cost of iteration in product development while also improving the quality of the end product. For example, one design firm passed $50,000 worth of savings to a client in the aeronautics industry by using VR prototyping to abolish two physical prototype cycles and eliminate the time that would have been required for the assembly of custom samples. Wider use of virtual prototyping will allow companies to reduce the number of costly prototypes needed, as well as significantly decreasing the timeline from conceptual design to production and commercialization.
“You can iterate on your city plan, your home and your construction worksite many more times before you actually start to dig or make a change. As a result, we’re going to get better creations,” says Jeffrey Powers, co-founder and CEO of Occipital.
While traditional technologies also allow companies to prototype, with immersive tech, designers are provided a more direct experience by being able to walk, fly and interact with their prototypes, either in a VR or AR environment. The implication is that immersive technologies promise higher accuracy in design and, as a consequence, an end product of higher quality at a potentially cheaper cost than traditional prototyping technologies can provide.
Lower barriers to entry for new creators
Immersive technologies will also empower smaller firms to produce higher quality content at lower cost. The technology already exists to process 360-degree imagery in hours – something that until recently would take days – and is within reach for filmmakers on small budgets. In the same way that smartphones and apps moved mobile photography outside the realm of professionals and enthusiastic hobbyists, we can expect AR and VR to open up new creative avenues for all of us.
As a tool for empathy and cognitive enhancement
Immersive technologies may also allow us to feel closer to global issues, such as humanitarian crises, enabling a form of telepresence that evokes levels of empathy as if one were present. According to Lynette Wallworth, an artist and director, AR and VR “provide a layer of authenticity of experience” not offered by other mediums.
Although there are suggestions that increased consumption of digital media can cause a decline in empathy, many artists working with AR and VR are convinced that the medium will become the “ultimate empathy machine”, fostering a society with informed perspectives of other communities and identities. Gabo Arora, founder and president of Lightshed and a creative director and senior advisor at the United Nations, explains: “You’re discovering a new grammar of storytelling and emotions”. If the optimists are right, we could be well on the way to a more informed and creative world.
There is also promise for VR and AR to provide immersive learning experiences. Beyond immediate gamified learning, VR and AR’s biosensors have been harnessed in therapeutic domains, allowing one man to drive for the first time in his life using just his brain. “[Immersive technologies] are a form of brain augmentation that networks our biological systems to a digital device,” says Tan Le, CEO of Emotiv.
Competition for talent is a limit to growth
Beyond technical challenges – ranging from device size to battery life – one potential barrier to rapid progress in the AR/VR industry is the lack of talent to meet the demand for growth. The industry is at "ground zero", so it is difficult to quantify the gap between the supply of and demand for talent. However, initial informal measures exist, such as US data showing that demand for freelancers with VR expertise grew far faster than that for any other skill in the second quarter of 2017 – a 30-fold, year-on-year increase. Similarly, a recent survey of 200 Canadian companies working on VR projects concluded that VR will face a talent crunch that "could fuel consolidation between companies".
Strategically developing domestic talent
Governments would be wise to develop strategic planning that captures talent in emerging technologies to guarantee their countries are at the leading edge of the next computing frontier. China is one example of an early mover and, according to Di Yi, vice-president of Perfect World Co., “the Chinese government offers substantial support for the VR industry.”
Areas such as Zhongguancun, Beijing, are subsidizing companies by up to $1.45m to further develop the VR industry and position the region as the next global technological hub. Other locales, including Beidouwan VR Village, in Guizhou province, offer grants to support content development and investment. By 2019, the village is expected to produce 1.5 million pieces of VR-related hardware, as well as 500,000 transactions of software content – delivering 3,500 new jobs in the process.
Enticing foreign talent
Governments can also take an active role in investing in immersive content. A 2016 survey of 500 AR/VR professionals reported that almost 50% are using their own personal funds to develop their companies. Fewer than 8% reported ‘other’ sources of funds, including government. Given that VR production costs can run into the hundreds of thousands, forward-looking policy-makers are promoting subsidies to entice VR talent from around the world. The canniest ensure that, in the process of doing so, they protect and develop their own industries at the same time.
In France, for example, the government-backed CNC Fund provides funding for VR/AR producers to co-produce content with local teams, offering grants that cover both development and production. In one recent example, the fund supported around 40% of the €500,000 costs required to produce a VR short film. The CNC also has the authority to award a tax rebate of up to 30% of qualifying expenditures to projects wholly or partly made in France and initiated by non-French companies.
Do these approaches work? While the effect on VR content is still being evaluated, the policy appears to have been effective in supporting ‘traditional’ filmmaking in France. Around a year after implementation, 31 projects from eight different countries had been launched, compared to only four in the previous year. These projects, which included blockbusters such as Inception, were estimated to generate approximately €119 million in direct spending in France and involve 450 days of filming – compared to €7.4 million in spending and 84 days of filming in the year leading up to the policy. Considering the nascent stage of VR, it is too early to tell whether the gains can be transferred to this new industry, but the attractiveness of the mechanism to creators is clear.
Immersive content will be more personalized – but at a cost
Designers of software have an incentive to keep users inside of their websites and apps because their business models increasingly rely on the collection of personal data as a way to personalize content.
Adam Alter, associate professor of marketing at NYU’s Stern School of Business, describes the strategy as a “brute force, big data-driven” approach: “Companies A/B test different features of a product and iterate to the point that is maximally difficult to avoid”. In addition, deep-pocketed companies have “teams of psychologists working according to the latest information to make humans more engaged”.
Indeed, evidence from the last decade shows that, while our overall leisure time is increasing, we are spending more of it using screen-based devices. A key driver of this shift is the practice of taking engagement as the main success metric for digital technology. The more time we spend on a device, the more data there is to collect about our interactions and the more targeted product offerings become – a reinforcing cycle that is likely to speed up as immersive technologies enter the mainstream.
The result is that the content we experience in immersive technology will be increasingly personalized. This could play out in a number of ways, but advertising, in particular, is well-placed to benefit.
Advertising promises to be more targeted
Advertising is already increasingly personalized as our personal data allows better and better targeting. In the context of immersive technologies, the term "gaze-through rate" has been coined to describe the effectiveness of an augmented or virtual advertisement in capturing user attention. Companies such as Retinad offer analytics to track behaviour on VR/AR devices with the aim of increasing both the conversion rate from advertisements and user engagement with content. Thus far, such immersive advertisements have been shown to be up to 30 times more effective in engaging users than mobile advertisements.
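By analogy with the click-through rate used in conventional online advertising, a gaze-through rate would presumably be computed along the following lines (my reading of the coined term, not a definition given by Retinad):

\[
\text{gaze-through rate} = \frac{\text{users whose gaze dwelt on the ad (measured via head or eye tracking)}}{\text{users to whom the ad was shown}}
\]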
Greater engagement may negatively impact well-being and privacy
The drive to capture our attention creates two challenges. First, our well-being is at stake: non-screen activities are more clearly linked to happiness than screen-based ones. One longitudinal study of a major social network found a negative association between increased engagement and individual well-being, suggesting “a possible trade-off between offline and online relationships”. Teenagers in the US who devote just six to nine hours a week to social media are 47% more likely to say they are unhappy than those who use social media even less.
Second, a lack of sovereignty over personal data may push users away from the long-term adoption of new technologies. A report by the World Economic Forum shows that 47% of people across six countries have stopped or avoided using a service because of inadequate user controls, and this figure rises as high as 70% in China. This suggests that user privacy and data controls are a key concern for consumers. Given the enhanced data-tracking capabilities of immersive technologies – from eye movements and facial expressions to haptic data (relating to the sense of touch) – the personal data at risk will become more intimate than ever, making user privacy a far more serious concern.
Regulatory frameworks
The privacy concerns relating to traditional media are already surfacing in immersive content. If developers are unwilling to provide clear and agreeable terms of use, regulators must step in to protect individuals – as some jurisdictions have already begun to do.
The best place from which to draw inspiration for how immersive technologies may be regulated is the regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law necessitate unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of their global annual turnover for breaches. Furthermore, enshrined in the bill is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for an innovative start-up to compete with the biggest players. We may see similar regulatory norms for immersive technologies develop as well.
Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type and IP address. Furthermore, communication with other users in VR environments is being stored and aggregated data is shared with third parties and used to customize products for marketing purposes.
Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.
Managing consumption
Short-term measures also exist to address overuse in the form of stopping mechanisms. Pop-up usage warnings once healthy limits are approached or exceeded are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on content types that their children are exposed to, as well as time limits on usage across apps.
All of these could be transferred to immersive technologies, and are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one could easily imagine an enterprising child might ‘borrow’ an adult’s device after-hours to find a workaround to the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with a service.
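For concreteness, a net promoter score is conventionally derived from 0-10 survey responses as the share of promoters (scores of 9-10) minus the share of detractors (0-6). A minimal sketch, assuming the survey question asks about fulfilment with the service rather than the usual willingness to recommend:

```python
# Minimal sketch of a net promoter score (NPS) calculation.
# Conventional buckets: 9-10 promoters, 7-8 passives, 0-6 detractors.

def net_promoter_score(scores):
    """scores: iterable of integer ratings from 0 to 10."""
    scores = list(scores)
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors -> NPS of 30
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))
```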
The real challenge, however, is to find measures that align business policy with user objectives. As Tristan Harris, founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. Users pay to enjoy advertising-free experiences, and this does not come at a cost to content developers, since they receive revenue in the form of paid subscriptions.
More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

Saturday Special: Religious People & Morality

According to research, religious people tend to believe they are more honest and charitable than atheists.

Why do people distrust atheists?

A recent study found widespread and extreme moral prejudice against atheists around the world. Across all continents, people assumed that those who committed immoral acts, even extreme ones such as serial murder, were more likely to be atheists.

Although this was the first demonstration of such bias at a global scale, its existence is hardly surprising.

Survey data show that Americans are less trusting of atheists than of any other social group. For most politicians, going to church is often the best way to garner votes, and coming out as an unbeliever could well be political suicide. After all, there are no open atheists in the U.S. Congress. The only known religiously unaffiliated representative describes herself as “none,” but still denies being an atheist.

So, where does such extreme prejudice come from? And what is the actual evidence on the relationship between religion and morality?

How does religion relate to morality?

It is true that the world’s major religions are concerned with moral behavior. Many, therefore, might assume that religious commitment is a sign of virtue, or even that morality cannot exist without religion. Both of these assumptions, however, are problematic.

For one thing, the ethical ideals of one religion might seem immoral to members of another. For instance, in the 19th century, Mormons considered polygamy a moral imperative, while Catholics saw it as a mortal sin.

Moreover, religious ideals of moral behavior are often limited to group members and might even be accompanied by outright hatred against other groups. In 1543, for example, Martin Luther, one of the fathers of Protestantism, published a treatise titled “On the Jews and their Lies,” echoing anti-Semitic sentiments that have been common among various religious groups for centuries.

These examples also reveal that religious morality can and does change with the ebb and flow of the surrounding culture. In recent years, several Anglican churches have revised their moral views to allow contraception, the ordination of women and the blessing of same-sex unions.

Discrepancy between beliefs and behavior

In any case, religiosity is only loosely related to theology. That is, the beliefs and behaviors of religious people are not always in accordance with official religious doctrines. Instead, popular religiosity tends to be much more practical and intuitive. This is what religious studies scholars call “theological incorrectness.”

Buddhism, for example, may officially be a religion without gods, but most Buddhists still treat Buddha as a deity. Similarly, the Catholic Church vehemently opposes birth control, but the vast majority of Catholics practice it anyway. In fact, theological incorrectness is the norm rather than the exception among believers.

For this reason, sociologist Mark Chaves called the idea that people behave in accordance with religious beliefs and commandments the “religious congruence fallacy.”

This discrepancy among beliefs, attitudes and behaviors is a much broader phenomenon. After all, communism is an egalitarian ideology, but communists do not behave any less selfishly.

So, what is the actual evidence on the relationship between religion and morality?

Do people practice what they preach?

Social scientific research on the topic offers some intriguing results.

When researchers ask people to report on their own behaviors and attitudes, religious individuals claim to be more altruistic, compassionate, honest, civic and charitable than nonreligious ones. Even among twins, more religious siblings describe themselves as being more generous.

But when we look at actual behavior, these differences are nowhere to be found.

Researchers have now looked at multiple aspects of moral conduct, from charitable giving and cheating in exams to helping strangers in need and cooperating with anonymous others.

In a classical experiment known as the “Good Samaritan Study,” researchers monitored who would stop to help an injured person lying in an alley. They found that religiosity played no role in helping behavior, even when participants were on their way to deliver a talk on the parable of the good Samaritan.

This finding has now been confirmed in numerous laboratory and field studies. Overall, the results are clear: No matter how we define morality, religious people do not behave more morally than atheists, although they often say (and likely believe) that they do.

When and where religion has an impact

On the other hand, religious reminders do have a documented effect on moral behavior.

Studies conducted among American Christians, for example, have found that participants donated more money to charity and even watched less porn on Sundays. However, they compensated on both accounts during the rest of the week. As a result, there were no differences between religious and nonreligious participants on average.

Likewise, a study conducted in Morocco found that whenever the Islamic call to prayer was publicly audible, locals contributed more money to charity. However, these effects were short-lived: Donations increased only within a few minutes of each call, and then dropped again.

Numerous other studies have yielded similar results. In my own work, I found that people became more generous and cooperative when they found themselves in a place of worship.

Interestingly, one’s degree of religiosity does not seem to have a major effect in these experiments. In other words, the positive effects of religion depend on the situation, not the disposition.

Religion and rule of law

Not all beliefs are created equal, though. A recent cross-cultural study showed that those who see their gods as moralizing and punishing are more impartial and cheat less in economic transactions. In other words, if people believe that their gods always know what they are up to and are willing to punish transgressors, they will tend to behave better, and expect that others will too.

Such a belief in an external source of justice, however, is not unique to religion. Trust in the rule of law, in the form of an efficient state, a fair judicial system or a reliable police force, is also a predictor of moral behavior.

And indeed, when the rule of law is strong, religious belief declines, and so does distrust of atheists.

The co-evolution of God and society

Scientific evidence suggests that humans – and even our primate cousins – have innate moral predispositions, which are often expressed in religious philosophies. That is, religion is a reflection rather than the cause of these predispositions.

But the reason religion has been so successful in the course of human history is precisely its ability to capitalize on those moral intuitions.

The historical record shows that supernatural beings have not always been associated with morality. Ancient Greek gods were not interested in people’s ethical conduct. Much like the various local deities worshiped among many modern hunter-gatherers, they cared about receiving rites and offerings but not about whether people lied to one another or cheated on their spouses.

According to psychologist Ara Norenzayan, belief in morally invested gods developed as a solution to the problem of large-scale cooperation.

Early societies were small enough that their members could rely on people’s reputations to decide whom to associate with. But once our ancestors turned to permanent settlements and group size increased, everyday interactions were increasingly taking place between strangers. How were people to know whom to trust?

Religion provided an answer by introducing beliefs about all-knowing, all-powerful gods who punish moral transgressions. As human societies grew larger, so did the occurrence of such beliefs. And in the absence of efficient secular institutions, the fear of God was crucial for establishing and maintaining social order.

In those societies, a sincere belief in a punishing supernatural watcher was the best guarantee of moral behavior, providing a public signal of compliance with social norms.

Today we have other ways of policing morality, but this evolutionary heritage is still with us. Although statistics show that atheists commit fewer crimes than average, the widespread prejudice against them, as highlighted by our study, reflects intuitions that have been forged through centuries and might be hard to overcome.

Asia in the “Second Nuclear Age” – Risk Assessment of a Nuclear Exchange

It is now a truism among foreign and defence policy practitioners that the post-Cold War nuclear build-up in the Indo-Pacific region constitutes the dawn of the “second nuclear age”, argues the Atlantic Council’s report, Asia in the “Second Nuclear Age”.
From the 1990s onward, China’s decision to stir out of its strategic languor and modernize its nuclear arsenal, along with the resolve of India and Pakistan to deploy operational nuclear forces, and, more recently, North Korea’s sprint to develop reliable long-range nuclear capabilities that can credibly threaten the continental United States, has led many to aver that the second nuclear age will rival the worst aspects of the first.
During the first nuclear age, baroque nuclear arms build-ups, technical one-upmanship, forward deployed nuclear forces, and trigger-alert operational postures characterized the competition between the superpowers and their regional allies. The nuclear rivals embraced nuclear war-fighting doctrines, which internalized the notion that nuclear weapons were usable instruments in the pursuit of political ends, and that nuclear wars were winnable.
Two rivalries
There is a sense of déjà vu among nuclear pessimists that nuclear developments in China, India, and Pakistan could produce similar outcomes. When North Korea’s nuclear advances are factored in, the prognoses become even direr. More specifically, the second nuclear age consists of two separate systems of nuclear rivalry, with potentially dangerous spillover effects.
The first rivalry is centred on India, Pakistan, and China, with a geographic footprint that overlays the larger Indo-Pacific region. The second rivalry encompasses the Northeast Pacific, overlaying the Korean peninsula, Japan, and the United States. North Korean developments, and a potential US overreaction to them, threaten China’s historic nuclear minimalism and its own interests as an emerging global power.
Similarly, US suggestions of global retreat, and the retraction of extended deterrence guarantees to its allies in Northeast Asia, could push those allies to acquire independent nuclear arsenals and intensify the second nuclear age.
Splendid first-strike
Until very recently, the threat of a nuclear war was thought most likely in South Asia, where India and Pakistan are involved in a festering low-intensity conflict (LIC) fostered by deep disputes over identity and territory. Specific dangers include Pakistan’s threats to deploy tactical nuclear weapons in a conventional war with India. Likewise, India’s investments in ballistic-missile defences (BMD) and multiple-re-entry vehicle (MRV) technology could, in theory, afford future decision-makers in New Delhi the means to execute splendid first-strike options against Pakistan (a counterforce attack intended to disable the opponent’s nuclear capacity before it can be used).
Prognoses of the nuclear rivalry between India and China are generally less threatening. But, when the latter rivalry is considered in the context of ongoing boundary disputes between New Delhi and Beijing, their self-identification as great powers accounting for nearly 50% of global gross domestic product (GDP) by mid-century, their participation in regional balance-of-power systems, and potential operational brushes between sea-based nuclear forces forward deployed in the Indian Ocean, those concerns invariably overshadow any optimism.
Asia’s nuclear future
In the background of the unfolding gloom of the second nuclear age, the Atlantic Council’s South Asia Center conducted three workshops in India, Pakistan, and China in the fall of 2016, with the objective of drawing academics, policy practitioners, and analysts in each country to discuss the unfolding nuclear dynamics in the region. All three workshops had a common theme: Assessing Nuclear Futures in Asia.
Under this umbrella theme, workshop participants tackled three specific subjects:
The general nature of the strategic competition in Indo-Pacific region;
The philosophical approaches shaping nuclear developments in China, India, and Pakistan;
The hardware and operational characteristics of their nuclear forces.
The report Asia in the “Second Nuclear Age” presents the findings of the three workshops, in separate sections on China, India, and Pakistan.
What stands out in these findings is that regional participants generally reject the nuclear pessimism in Western capitals. The nuclear “sky is falling” argument, they maintain, is simply not supported by the evidence, at least when evidence is embedded in its proper context.
Key Conclusions
• While the first nuclear age was riven by deep ideological conflicts between two contrarian political systems that viewed the victory of the other as an existential threat, the nuclear rivalry between China, India, and Pakistan is nothing like that.
All three states accept the legitimacy of the international system, to the extent that they share goals of market capitalism, state sovereignty, and multilateral institutionalism. Undoubtedly, the three states have different domestic political systems: authoritarian capitalist (China), liberal democracy (India), and praetorian democracy (Pakistan). Yet, none of these nuclear powers views the domestic political system of another as jeopardizing its own existence.
• At least two among the three nuclear powers – China and India – have vast strategic depth, excellent geographical defences, and strong conventional forces. Neither fears a conventional threat to its existence. Leaderships in both countries have a shared belief that nuclear weapons are political weapons whose sole purpose is to deter nuclear use by others. They also share a common institutional legacy of civilian-dominated nuclear decision-making structures, in which the military is only one partner, and a relatively junior one, among a host of others.
All three factors – the structural, the normative, and the institutional – dampen both countries’ drives toward trigger-ready, destabilizing, operational nuclear postures that lean toward splendid first-strike options.
• However, this reassurance does not extend to Pakistan, which – due to the lack of geographic depth and weaker conventional forces against India – has embraced a first-use nuclear doctrine.
Pakistan’s hybrid praetorian system also allows its military near autonomy in nuclear decision-making. This combination of structural and institutional factors has led Pakistan to elect a rapidly expanding nuclear force that, within a decade, could rival the British, French, and Chinese arsenals in size, though not in sophistication.
Evidence also suggests that Pakistan has developed tactical nuclear weapons, although it does not appear to have operationalized tactical nuclear warfare.
• In the nuclear dynamic in the Indo-Pacific region, India and Pakistan are novice developers of nuclear arsenals; the weapons in their inventory are first-generation fission weapons. Likewise, their delivery systems are the first in the cycle of acquisitions. Their hardware acquisitions generate outside concern because of the scope of their ambitions. Both nations plan to deploy a triad capability.
Nonetheless, this ambitious goal and the selection of technologies underline the central lesson of the nuclear revolution, which is force survival (to enable an assured second-strike capability).
China’s goal
• Force survival through secure second-strike capabilities is also China’s goal. It is the only nuclear power among the three that is actually modernizing, i.e., replacing ageing delivery systems with newer and better designs.
Thus far, the evidence suggests that Chinese and Indian explorations of multiple-re-entry vehicle technologies are aimed at reinforcing deterrence through the fielding of more robust second-strike capabilities. This conclusion is also supported by the fact that neither India nor China possesses, or is developing, the ancillary intelligence, surveillance, and reconnaissance (ISR) systems necessary to execute splendid first-strike attacks.
Another technology of concern is missile defence. India’s goals vis-à-vis missile defence are still unclear, and its technical successes with the programme are even less evident. Chinese goals are similarly unclear, and appear to be exploratory means for defeating adversarial attempts to stymie its deterrent capability.
• On a more positive note, neither India nor Pakistan is conducting nuclear tests to develop or improve designs for nuclear warheads. The same holds for China. However, Pakistan is rapidly accumulating fissile material, which could increase to 450 kilograms of plutonium, sufficient for 90 weapons, and more than 2,500 kilograms of highly enriched uranium (HEU), sufficient for 100 simple fission warheads, by 2020.
India’s warheads
India is accumulating approximately 16.6 kilograms of fissile material annually, sufficient for a force of approximately 150-200 warheads, though probably not all of this fissile material is converted into nuclear warheads.
China, however, is no longer producing fissile material. It is only modestly increasing the size of its arsenal, from 264 to 314 warheads. The size of the Chinese, Indian, and Pakistani arsenals will remain a function of the calculations of damage ratios that each believes essential to achieve deterrence. Yet, if current trends remain stable, the size of their arsenals should remain comparable to the French and British nuclear arsenals. The arsenals will be large, but will by no means approach the gargantuan size of the US or Russian nuclear arsenals.
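The per-warhead quantities implied by the figures above are easy to back out, and they land close to commonly cited rules of thumb of very roughly 5 kg of plutonium or 25 kg of HEU for a simple fission weapon. A quick arithmetic check – a sketch only, using the report's projections rather than independent estimates:

```python
# Back-of-the-envelope check of the stockpile figures cited above.
# All inputs are the projections quoted in the text, not independent estimates.

pu_kg, pu_weapons = 450, 90        # Pakistan: projected plutonium stockpile / weapons
heu_kg, heu_weapons = 2500, 100    # Pakistan: projected HEU stockpile / simple fission warheads

print(pu_kg / pu_weapons)     # -> 5.0 kg of plutonium per weapon
print(heu_kg / heu_weapons)   # -> 25.0 kg of HEU per warhead

# China's "modest" growth from 264 to 314 warheads is an increase of roughly 19%.
print((314 - 264) / 264)      # -> 0.189...
```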
• Like other regional nuclear powers during the first nuclear age, China, India, and Pakistan might also decide to forego one or more vulnerable legs of their nuclear triad. At present, however, there are no indicators of this happening.
• The nuclear rivalry in South Asia remains ominous, because Pakistan wages LIC against India via non-state actors, while the latter has devised limited conventional-war options to punish the Pakistani military on Pakistani soil. India has also recently hinted that it could abandon nuclear no first use (NFU) in favour of splendid first-strike options. Simultaneously, however, India is backing away from its purported limited-conventional-war doctrine against Pakistan, on the premise that the LIC does not represent an existential threat to Indian security, and that there are other sophisticated methods for dealing with Pakistan’s aggressions that don’t involve pressing nuclear buttons.
The decline in India’s appetite for limited conventional war against Pakistan, if institutionalized over time, would represent a game changer and significantly reduce the risk of nuclear war in the region.
• The big difference between the first and second nuclear ages is the domestic stability of the nuclear-weapon powers. For the greater part of the first nuclear age, states that wielded nuclear arsenals were stable and boasted strong governing institutions.
In Asia – while China and India represent this continuity of strong state institutions, as well as checks and balances on the military – Pakistan remains internally unstable, and increasingly unable to rein in praetorianism over national security and nuclear policy.

Why Aren’t Most Curable Diseases Being Cured?

Once upon a time, the world suffered. And it suffers today. In 1987, 20 million people across the world were plagued by a debilitating, painful and potentially blinding disease called river blindness. This parasitic infection caused pain, discomfort, severe itching, skin irritation and, ultimately, irreversible blindness, leaving men, women and children across Africa unable to work, care for their families and lead normal lives.
But the recent discovery of a drug called ivermectin was about to change it all. Not only was ivermectin cheap and easily synthesized, but it was also a powerful cure: With only one dose a year, it was possible to completely rid patients of disease and even halt the progression toward blindness. In short, ivermectin was a miracle drug – one whose discovery would lead to Satoshi Omura and William Campbell winning the Nobel Prize in medicine in 2015.
There was no time to be wasted. Recognizing that the populations most at risk of disease were those least able to afford treatment, Merck & Co. pledged to join the fight to end river blindness. Thirty years ago this October, the pharmaceutical company vowed that it would immediately begin distributing the drug free of charge, to any country that requested it, “for as long as needed.” It was the final piece of the puzzle: an effective drug for a tragic and completely preventable disease. And we all lived happily ever after.
Only… we didn’t.
Merck’s generous offer should have been the final chapter of a brief story with an upbeat ending – the eradication of a tragic and preventable disease that had plagued humankind for centuries. But such was not the case: 30 years later, in 2017, river blindness rages on across the world, afflicting as many as 37 million people, 270,000 of whom have been left permanently blind.
Neglected tropical diseases like river blindness stand in stark contrast to those like tuberculosis, which is estimated to affect a third of the world’s population due to the increasing prevalence of highly antibiotic-resistant strains.
In short, tuberculosis has stuck around because medicine has run out of drugs with which to treat it – which is why, as a molecular biologist, I am researching new ways we can finally defeat this stubborn disease.
But this only increases the urgency for river blindness and other widespread diseases for which, unlike tuberculosis, science does have effective cures – and inexpensive ones at that. Even with all the necessary tools, the world has failed to cure the curable.
Turning a blind eye
One-and-a-half billion people across the world suffer from neglected tropical diseases, a group of infectious diseases that prevail in tropical and subtropical countries lacking good health care infrastructure and medical resources. These diseases typically do not kill immediately but instead blind and disable, leading to terrible suffering, creating losses of capital, worker productivity and economic growth.
Thirteen diseases are universally recognized as neglected tropical diseases. At least eight of these diseases, including river blindness, already have inexpensive, safe and effective treatments or interventions.
For less than 50 cents per person, the United States could cure a fifth of the world’s population of these severely debilitating and unnecessary diseases. In spite of this, the United States allocates nearly as little to treating and preventing neglected tropical diseases around the world as it does to drugs for erectile dysfunction.
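The arithmetic behind that claim is simple enough to spell out. A sketch, taking the 1.5 billion people cited above as the affected population and reading “per person” as per person treated:

```python
# Rough arithmetic behind the cost claim above (a sketch, not a programme budget).
affected_people = 1.5e9   # roughly a fifth of the world's population (see above)
cost_per_person = 0.50    # "less than 50 cents per person", read as per person treated

total_cost = affected_people * cost_per_person
print(f"${total_cost / 1e9:.2f} billion")  # -> $0.75 billion, i.e. about $750 million
```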
The forgotten fevers
Consider dracunculiasis, or Guinea worm infection, which occurs when people consume water contaminated with fleas carrying parasitic worms. The worms mature and mate inside the human body, where they can grow to be two to three feet long.
Adult females eventually emerge from painful blisters at the extremities to lay eggs in stagnant water, where offspring will infect water fleas and begin the cycle anew.
No drug exists that can cure Guinea worm, but because of a cohort of mostly privately funded public health efforts, the number of Guinea worm infections worldwide has dropped from 3.5 million in the 1980s to only 25 in 2016.
Funding from the U.S. and other countries could help in the final push to eradication, and some argue that funding from the individual countries themselves could help.
Another example, albeit more grim, is the group of soil-transmitted helminths, or worms. Roundworm, hookworm and whipworm collectively affect over a billion people across the world, all in the poorest areas of the poorest countries. All these worms infect the human intestines and can cause severe iron deficiency, leading to increased mortality in pregnant women, infants and children. Furthermore, hookworm infections in children retard growth and mental development, leading to absences from school and dramatically reduced labor productivity.
However, soil-transmitted helminths can be expelled from the body with a single pill, each of which costs only one penny. What’s more, preventing infection in the first place is completely achievable through increased awareness and sanitation.
The purse strings of nationalism
Without drastic increases in funding and public awareness, the plight of people affected by the neglected tropical diseases is unlikely to budge anytime soon.
The U.S. spends over US$8,000 per person per year on health expenditures, compared to countries in Africa that spend around $10. While this opens the door to a critique on efficiency, it’s far more indicative of the disparities in health resources.
Less than 20 percent of the world’s population lives in some of the most developed and economically high-functioning countries, including the United States – and nearly 90 percent of the world’s total financial resources are devoted to the citizens of these nations. And yet, low-income countries bear the majority of the world’s infectious disease burden. In short, the rest of the world does not suffer the same diseases the United States does, and Americans are doing little to nothing about it.
At first glance, this is not so surprising. As a whole, the world suffers – but how many neglected tropical diseases currently penetrate American borders?
Some experts predict that eliminating or controlling the neglected tropical diseases in sub-Saharan Africa alone, which shoulders over 40 percent of the global burden of neglected tropical diseases, could save the world $52 billion and over 100 million years of life otherwise lost to disease.
Conversely, some global health experts estimate that for every dollar spent on neglected tropical disease control, we get back over $50 in increased economic productivity. By increasing awareness and funding of neglected tropical disease eradication, the United States will be making one of the best global investments possible. The rest of the world has waited long enough.
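Those cited figures can be turned into a couple of crude derived numbers; the calculation below is illustrative only and simply combines the estimates quoted above:

```python
# Illustrative arithmetic from the figures above; both inputs are the cited estimates.
savings_usd = 52e9          # "$52 billion" saved by control in sub-Saharan Africa
life_years_saved = 100e6    # "over 100 million years of life"
return_per_dollar = 50      # "$50 in increased economic productivity" per $1 spent

print(savings_usd / life_years_saved)         # -> 520.0 dollars saved per life-year
print(savings_usd / return_per_dollar / 1e9)  # -> ~1.04, i.e. ~$1 billion of implied spending
```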

The False Narrative of Realpolitik Revealed

In an era of divisive social media and partisan “fake news,” the notion that “actions speak louder than words” is no longer true. As we are rediscovering, words are both powerful and problematic, particularly in the context of geopolitics. The recent United Nations General Assembly meeting in New York offered the latest reminder that in diplomacy, words still matter.
Much attention has been drawn to US President Donald Trump’s remark that the United States “will have no choice but to totally destroy North Korea” should the Democratic People’s Republic of Korea (DPRK) threaten it or its allies. In fact, most military experts agree that a kinetic war on the Korean Peninsula would annihilate the DPRK, and quite possibly South Korea along with it.
But other parts of Trump’s UN speech, especially its passages about national interests and sovereignty, require further reflection. Trump makes no secret of his desire to “put America first,” and he reiterated that pledge at the UN dais. But he also urged other leaders to put their countries first, too. “To overcome the perils of the present and to achieve the promise of the future, we must begin with the wisdom of the past,” he said. “Our success depends on a coalition of strong and independent nations that embrace their sovereignty to promote security, prosperity, and peace for themselves and for the world.”
One could infer, and many have, that such statements signal a revival of US devotion to Realpolitik in world affairs. As the historian John Bew observed in his 2016 history of the term, the pendulum swing was to be expected: “Our foreign policy debates follow cycles, in which policymakers declare themselves more idealistic, or more realistic…”
But Bew’s survey also reminds us that the singular pursuit of national interests – the type of worldview championed by Trump – is not Realpolitik at all if it is uncoupled from a transformative idea or normative purpose. Severing moral concerns from global affairs would only weaken the US and all who emulate it.
The concept of Realpolitik emerged from the mixed outcomes of the European revolutions of 1848, when Germany’s future unification had many possible permutations, but the larger political goal – an international order comprising strong nation-states – was nonetheless clear. But in the wake of Trump’s “America First” doctrine, the challenge for the world today is to discern what the purpose of political realism has become.
One answer was shared at the World Economic Forum’s (WEF) annual meeting in Davos earlier this year. There, Chinese President Xi Jinping offered a robust defense of globalization and emphasized his view that in pursuing national agendas, countries should place objectives “in the broader context” and “refrain from pursuing their own interests at the expense of others.”
If leaders of the world’s two most powerful countries differ fundamentally in their approach to international relations, what are the prospects for strengthening cooperation globally?
History is replete with examples of conflicts stemming from a rising power challenging the influence and interests of an incumbent. During the Peloponnesian War, according to the Greek historian Thucydides, “It was the rise of Athens and the fear that this instilled in Sparta that made war inevitable.” How China and the US avoid what Harvard’s Graham Allison has termed the “Thucydides Trap” is of great concern to the world, as is ensuring that geostrategic disputes elsewhere don’t lead to violence.
As Stanford biologist Robert Sapolsky has argued, behavioral dichotomies that might seem inevitable and crucial one minute can, under the right circumstances, “evaporate in an instant.” For Sapolsky, “contact theory,” which was developed in the 1950s by psychologist Gordon Allport, can foster reconciliation among rivals, and help bridge the “us-them” divide. “Contact,” whether between kids at a summer camp or negotiators around a table, can lead to greater understanding if engagement is lengthy and on neutral territory, outcome-oriented, informal, personal, and avoids anxiety or competition.
Words and narratives affect international affairs in similar ways. Narratives that have emerged in response to – or as a result of – national, regional, and global divisions are often structured by an “us-them” dichotomy. But these national narratives, as appealing as they may be to some, must not be confused with Realpolitik, as they remain bereft of the innovation, inspiration, and idealism needed for transformational change.

The Reality of the European Culture War

When Indians read articles about India – whether written by Indians living abroad in internationally recognized Western newspapers or published in reputable Indian newspapers – they get the impression that Indian secularism is an exceptional cultural value that resonates entirely positively in the West. But this distortion is sustained only to shame India, even though India grants ample rights to religious and linguistic minorities.
Secularism, as perceived and practiced in India, is very different from its counterpart in Western Europe. Indian lawmakers are still in the process of passing a law to stop the practice of triple talaq – a practice not accepted even in some other Islamic countries.
While Western newspapers and governments try to impose a regime of minority rights on formerly colonized countries, the discourse on human rights in European countries – some of them former colonial powers – has swiftly shifted over the last 50 years from being minority-focused to what could be described as majority-focused. The wish and will of the majority is listened to and considered legitimate and just.
So what is acceptable in a European country under the banner of secularization is not the same as what is considered secular and sane in an Indian context. The rights of minorities seem to be an area of emphasis in many articles published on India, and whenever India is mentioned, a slight reference is often made to the state of religious and ethnic minorities.
Yet, if we observe elections, whether in America or in Europe, political parties that focus on the perceptions of the majority are making it a cultural value to address the demands of the majority, and they are being rewarded. Without exception, every country in Western Europe is seeing center-right or right-wing parties do much better than they did a decade ago. The political middle ground is shifting swiftly towards the right, and yet these election results are not characterized as extreme.
Let us take the example of the Austrian People’s Party (ÖVP), which emerged as the largest party, making its leader, Sebastian Kurz, the youngest head of government in Europe. Kurz had a clear-cut agenda of tightening immigration laws and fighting political Islam. On top of that, the far-right FPÖ received a quarter of the votes, making it the third-largest party in Austria.
Take the instance of Brexit in the UK: one of the reasons so many voted for Brexit was that they saw it as a referendum on immigration and a way to express their dissatisfaction with immigration policies that allowed workers from Eastern Europe to migrate to the UK, putting pressure on the British welfare state.
Let us look at the present political quagmire in Germany, where a “Jamaica” coalition government now seems utterly unfeasible. Although the coalition negotiations between Merkel’s Christian Democrats (CDU), its sister party the CSU, the Free Democrats (FDP), and the Green Party seemed promising to start with, they have reached a stalemate. The opposition Social Democrats (SPD), traditionally one of the biggest parties, are now lagging behind with unprecedentedly low backing from voters.
So the present scenario is that most political parties would rather sit in opposition than be a coalition partner to Merkel’s Christian Democrats. The end result will almost inevitably be a new election in Germany within a few months. So how can we sum up these political tendencies?
Without a shadow of doubt, there is a culture war taking place, and voters are demanding a society based on solidarity and a higher quality of welfare for all. The common citizen of any European country enjoys far more health and educational rights than the average citizen of America or Asia. Populism in Europe is about maintaining this broad-based system of universal rights, given to all, which means redistribution of wealth. This is unique to Europe, and Europeans are revolting against the pressures of globalization in order to retain the good old system.
The Social Democrats, who have dominated the European continent for the last five decades, have been backsliding in polls and elections because they formed coalitions with parties that started reducing the welfare benefits in the name of reforms.
The culture war in Europe is not like the one in the USA or elsewhere. People want immigrants who are willing to pay high taxes, live up to the demands of integration, and learn the language in order to contribute to the sustainability of a universal welfare state.
‘Sammenhængskraft’ (cohesion power) is the Scandinavian word for a society based on the willingness to be part of that society. What does this mean? I can illustrate it with the fact that I have several times lost a wallet, a train card worth a lot of money, expensive gloves and so on, and whenever I returned to the place where I had forgotten the item, some honest person had handed the lost property in at the desk of the shop, office or station, even though it could easily have been used or kept by the finder.
This honesty is a byproduct of ‘sammenhængskraft’, which creates trust among citizens. There are islands in Denmark where people do not lock their doors and never have, and where only a single policeman is stationed. Such a society requires an enormous amount of trust and faith in one’s fellow citizens.
The culture war in Europe is not about guns, it is not about the economic ideology of capitalism. It is a simple social urge to create a cohesive society where a poor and unintelligent person also enjoys certain inalienable rights.
In a recent lecture in Copenhagen, Francois Zimeray – the former French Ambassador for Human Rights, who also happens to be a former French Ambassador to Denmark – expressed the view that the Danes are more European than they realize. What is this idea of being European? Danes, like their counterparts elsewhere in Europe, are not preoccupied with building the highest and tallest buildings; they are not interested in creating the richest and the fanciest. European values are about moderation – moderation that comes from creating a world where there is room for all.

The Fourth Industrial Revolution Is About Empowering People, Not Machines

Machines, rather than something to be feared, are the tools that will help us solve the world’s biggest problems. The Fourth Industrial Revolution is now. And, whether you know it or not, it will affect you.
Billions of people and countless machines are connected to each other. Through groundbreaking technology, unprecedented processing power and speed, and massive storage capacity, data is being collected and harnessed like never before.
Automation, machine learning, mobile computing and artificial intelligence — these are no longer futuristic concepts, they are our reality.
To many people, these changes are scary.
Previous industrial revolutions have shown us that if companies and industries don’t adapt with new technology, they struggle. Worse, they fail.
Mindset shift
But I strongly believe that these innovations will make industry – and the world – stronger and better.
The change brought by the Fourth Industrial Revolution is inevitable, not optional.
And the possible rewards are staggering: heightened standards of living; enhanced safety and security; and greatly increased human capacity.
For people, there must be a shift in mindset.
As difficult as it may be, the future of work looks very different from the past. I believe people with grit, creativity and entrepreneurial spirit will embrace this future, rather than cling to the status quo.
People can be better at their jobs with the technology of today—and the technology that is yet to come—rather than fearing that their human skills will be devalued.
Human and machine
I’m reminded of chess.
We have all heard the stories about computers beating even the greatest grandmasters. But the story is more nuanced; humans and computers play differently and each has strengths and weaknesses.
Computers prefer to retreat, but they can store massive amounts of data and are unbiased in their decision-making.
Humans can be more stubborn, but also can read their opponent’s weaknesses, evaluate complex patterns, and make creative and strategic decisions to win.
Even the creators of artificial chess-playing machines acknowledge that the best chess player is actually a team of both human and machine.
The world will always need human brilliance, human ingenuity and human skills.
Software and technology have the potential to empower people to a far greater degree than in the past—unlocking the latent creativity, perception and imagination of human beings at every level of every organization.
Power of data, power of people
This shift will enable workers on the front line, on the road and in the field to make smarter decisions, solve tougher problems and do their jobs better.
This is our mission at Uptake—to combine the power of data and the power of people, across global industries.
Here’s what this looks like:
Railroad locomotives are powered by massive, highly complex electrical engines that cost millions of dollars.
When one breaks down, the railroad loses thousands more for every hour it’s out of service (not to mention, there are a lot of angry travellers or cargo customers to deal with).
After the locomotive is towed in for repairs, technicians normally start by running diagnostic tests. These can take hours, and often require technicians to stand next to roaring engines jotting down numbers based on the diagnostic readings.
That’s the old way – or, at least, it should be.
New solutions
When locomotives operated by our customers roll into the shop for routine services, all diagnostics have already been run.
Our software has forecast when, why and how the machine is likely to break down using predictive analytics — algorithms that analyze massive amounts of data generated by the 250 sensors on each locomotive.
Our systems have examined that data within the context of similar machines, subject-matter experts, industry norms and even weather. If there’s a problem, we detect it and direct the locomotive to a repair facility.
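Uptake’s actual models are proprietary, so the following is only a minimal sketch of the general idea described here: score each locomotive’s latest sensor readings against a fleet baseline and flag outliers for the shop. The sensor names, the z-score approach and the threshold are illustrative assumptions, not Uptake’s method.

```python
# Minimal, hypothetical sketch of sensor-based fault flagging for a locomotive fleet.
# Approach: compare each unit's latest readings to the fleet mean/std (z-score) and
# flag any sensor drifting beyond a threshold. Names and numbers are invented.

from statistics import mean, stdev

FLEET_HISTORY = {                      # baseline readings per sensor, across the fleet
    "oil_temp_c":  [78, 80, 79, 81, 77, 80, 82, 79],
    "vibration_g": [0.9, 1.0, 1.1, 0.95, 1.05, 1.0, 0.98, 1.02],
}
Z_THRESHOLD = 3.0

def flag_anomalies(latest_readings):
    """Return sensors whose latest value deviates more than Z_THRESHOLD from the fleet baseline."""
    flags = []
    for sensor, value in latest_readings.items():
        baseline = FLEET_HISTORY[sensor]
        z = (value - mean(baseline)) / stdev(baseline)
        if abs(z) > Z_THRESHOLD:
            flags.append((sensor, round(z, 1)))
    return flags

# A unit rolling in with an overheating traction motor would be flagged before teardown.
print(flag_anomalies({"oil_temp_c": 96, "vibration_g": 1.03}))  # -> [('oil_temp_c', 10.3)]
```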
A mechanic can then simply pick up an iPad, and learn in a few minutes exactly what is about to break down, as well as the machine’s history and the conditions it’s been operating under.
Virtuous loop
That leaves the mechanics to do what they do best: fix it, using their experience, judgement and skill. And the mechanics’ decisions and actions become data that feeds back into the software, improving the analytics and predictions for the next problem.
So, technology didn’t replace mechanics; it empowered them to do their job.
In the same way that chess masters and computers work best together, the mechanic used human skills that a machine can’t replicate: ingenuity, creativity and experience. And the technology detected a problem that was unknown and unseen to human eyes.
In short, when the mechanic and the technology work together, the work gets done faster, with fewer errors and better results.
Multiply this across all industries: aviation, energy, transportation, smart cities, manufacturing, natural resources, and construction.
The productivity we unleash could be reminiscent of what the world saw at the advent of the first industrial revolution. But the impact of the Fourth Industrial Revolution will run much broader, and deeper, than the first.
We’ll have the knowledge, the talent and the tools to solve some of the world’s biggest problems: hunger, climate change, disease.
Machines will supply us with the insight and the perspective we need to reach those solutions. But they won’t supply the judgement or the ingenuity. People will.

Most People, Rich or Poor, Are Wrong About Poverty

The percentage of people living in extreme poverty around the world has fallen by more than half over the past three decades. But polls show that most people are not only ignorant of this fact, but believe that poverty has increased. This column explores progress towards ending global poverty by 2030, the first of the UN’s Sustainable Development Goals. Poverty figures have fallen around the world since 1990, and there is a broad consensus on the policies needed for further reductions. Eradicating global poverty is achievable, but it is dependent on global and domestic political cooperation.
“Did you know that, in the past 30 years, the percentage of people in the world who live in extreme poverty has decreased by more than half?”
In 2014, 84% of Americans who were asked this question were unaware of such declines in global extreme poverty. In fact, 67% of adult respondents thought global poverty had been on the rise over the past three decades. Unsurprisingly, 68% did not believe it would be possible to end extreme global poverty within the next 25 years (Todd 2014).
A recent study analysing public awareness of the Sustainable Development Goals around the world confirmed that this widespread ignorance is not a US anomaly (Lampert and Papadongonas 2016). A significant majority of respondents from several countries, both developing and developed, are unaware of this achievement (see Figure 1). Interestingly, Chinese citizens appear much better informed about global poverty trends than those of the US or Germany.
But this is just what adults report. On 17 October – the International Day for the Eradication of Poverty – about 150 college students from the Washington metropolitan area were invited to the World Bank and polled, with more encouraging results (World Bank 2016a). Two thirds correctly responded that global extreme poverty has been reduced. Yet, more than half of those didn’t think the decline had been substantial. Even less auspicious, only four out of ten thought that extreme poverty could be ended by 2030 (World Bank 2016a).
It is tempting to immerse ourselves in a discussion of why so many people know so little about this incredible accomplishment, dubbed by the New York Times’ Nicholas Kristof “the best news you don’t know” (2016). Setting speculation aside, we simply express our bewilderment at how widespread these misperceptions of today’s number one global development challenge are, especially in a hyper-connected, social media-addicted world.
The second part of this story is equally fascinating. Will global poverty end by 2030? We cannot really say for sure, and pretending to have a definite answer may seem ludicrous when we can’t even confidently estimate real-time poverty numbers. Indeed, our latest estimates in 2016 refer to global poverty in 2013 – a three-year lag.
However, there are several reasons to be optimistic about ending poverty by 2030.
The new Poverty and Shared Prosperity Report 2016 launched this October by the World Bank explains why (World Bank 2016b). The figure below shows that the number of extremely poor people worldwide – measured by the very low $1.90 a day standard – has fallen by 1.1 billion people over the last two and a half decades, a period in which the global population grew by almost 2 billion (see Figure 2). This is true for all regions in the world without exception, from relatively richer Eastern Europe and Latin America to poorer Sub-Saharan Africa and South Asia. True, each region has reduced its poverty numbers at a different pace. But the numbers leave little room for doubt – extreme poverty has been effectively and dramatically reduced.
The second reason to be optimistic is that we know a good deal about how to reduce extreme poverty. The slashing of extreme poverty worldwide is not a random phenomenon. Leaving ideological views on the role of globalisation in this decline aside, the astonishing reduction in global poverty has been steadfast since the moment we have been able to track it confidently with household surveys, which began around 1990. The unwavering decline has taken place through economic booms but has also weathered global crises, most notoriously the Great Recession in 2007-08. For the record, the only exception was the Asian crisis in the late 1990s, when global poverty actually increased in numbers and rates.
Summarising decades of research on poverty reduction is beyond the scope of this column. Let’s simply say that there is a broad consensus around a set of policies that allow countries to grow inclusively:
– Invest in the human capital of citizens and the infrastructure of countries, to make both people and economies competitive and diversified;
– Protect populations from risks that threaten to reverse hard-won prosperity gains – this includes everything from illness to unemployment, droughts to cyclones.
– While the exact recipe may change depending on a given country’s circumstances, the strategies mentioned above are common ingredients in most successful cases of poverty reduction – strategies eloquently summarised by the World Bank as “grow, invest, and insure” (Gill et al. 2016).
The third reason for optimism is that eliminating extreme poverty may not be as expensive as one may think. The true cost is probably impossible to estimate with precision, but a simple back-of-the-envelope calculation of the total income needed to close the gap to the minimum standard of living ($1.90 per day) for the world’s current extreme poor produces a perplexingly low bill – 0.15% of global GDP, or $150 billion a year. This bulk figure bluntly ignores that the real world involves administrative costs, political will, the need to properly identify and target the poor, and the challenge of sustaining millions out of poverty in the future. But this figure busts the myth that ending poverty is a chimera that troubled economies today cannot afford. Our back-of-the-envelope figure is arguably half of the tax revenue estimated to be lost every year to tax havens. It is also less than half of the money lost in gambling every year in just ten countries around the world (Aziz 2014). We can all do the maths.
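That back-of-the-envelope logic can be reproduced in a few lines. The headcount below is the roughly 770 million extreme poor implied by the 2013 estimates, the average daily shortfall is an assumed round number chosen to show how a figure on the order of $150 billion arises, and global GDP is the level implied by the column’s own ratio of 0.15% to $150 billion:

```python
# Back-of-the-envelope reproduction of the "$150 billion a year" figure (a sketch only).
poor_headcount = 770e6     # approximate number of extreme poor in 2013
avg_gap_per_day = 0.53     # assumed average shortfall below the $1.90/day line, in USD
world_gdp = 100e12         # global GDP implied by the column's own 0.15% ~ $150bn ratio

annual_gap = poor_headcount * avg_gap_per_day * 365
print(f"${annual_gap / 1e9:.0f} billion a year")      # -> ~$149 billion a year
print(f"{annual_gap / world_gdp:.2%} of global GDP")  # -> ~0.15% of global GDP
```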
But as with any prediction about the distant future, one ought to be very cautious. If the world were to reduce poverty at the same pace it currently does, poverty would end well before 2030. Sadly, making such a linear extrapolation is naïve and misleading. We can no longer bank on the rapid reduction of poverty that came from the spectacular performance of China and other populous emerging countries. Simply put, their success means these countries will soon run out of extreme poor – again, let us insist, as measured by the $1.90-a-day line. China and Indonesia’s latest numbers are about 25 million each. India, however, still hosts 217 million poor, and its ability to reduce this number will be instrumental in reaching the goal of ending global poverty. But, more generally, it is in the corners of the world ridden by fragility, conflict, poor governance, undiversified economies, vulnerability to climate change, and a long list of other socioeconomic and political woes that the last mile of poverty reduction must be walked. And trusting it all to strong economic growth in these countries will simply not do the trick. The world continues to show symptoms of a protracted economic pneumonia, with low-income countries now facing more challenging circumstances (even after displaying considerable resilience during the Global Crisis of 2008-09). Since the end of the commodity super-cycle in 2014, growth rates have come down across the developing world and there is little reason to expect a quick turnaround. This is why a better distribution of the benefits of growth becomes key to achieving the goal of eliminating poverty by 2030.
So, we do not know for sure whether poverty will be eliminated by 2030. After all, what do we know for certain these days, anyway? President Obama has set the goal of sending humans to Mars by the 2030s and returning them safely to Earth. Will the world achieve such a feat? Or end poverty by 2030? In the case of poverty, we do know a great deal about how to do it. Concretely, we already know the policies needed to bring the world to the end of extreme poverty, if only global and domestic politics would let them work. And we also know what challenges lie ahead and where. The only thing we seem not to know is that we do know all of this. And as Winston Churchill once warned, “those who fail to learn from history…”

Facts & Myths of the Alleged Russian-Collusion Story

Polls show voters are jumping to the same conclusion as much of Washington: that President Trump “colluded” with Vladimir Putin to steal the presidential election.
But the evidence doesn’t back that up. Instead, such perceptions are driven by a number of key government and media assertions which, on closer inspection, dissolve into illusion:
MYTH:
“We have 17 intelligence agencies that know — with great certitude — that [the DNC hacking] was done by the Russians [to help Trump],” House Intelligence Committee member Rep. Jackie Speier (D-Calif.) recently said, echoing AP, CNN, The New York Times, NBC and CBS, among others.
FACT:
The Obama administration’s Jan. 6 assessment reflected the views of just three intelligence agencies — and one of them, the NSA, which captures Russian signals, expressed only “moderate confidence” in the conclusion. The others, the CIA and FBI, cautioned their judgment “might be wrong.”
The FBI and CIA reached their conclusion based on the forensic analysis of a private contractor who was hired by the DNC to examine its hacked e-mail server. “We didn’t get direct access [to the server],” former FBI Director James Comey testified.
MYTH:
In a quid pro quo with Moscow, the Trump campaign “gutted,” as The Washington Post described it, the GOP’s anti-Russia platform position on Ukraine.
FACT:
The final convention platform actually added tougher language on Russian aggression, including calling for “increasing sanctions against Russia unless and until Ukraine’s sovereignty and territorial integrity are fully restored.”
MYTH:
Much of the notorious “Steele Dossier,” despite being paid for by the Clinton campaign, still “checks out” (New York Times) or has “proven to be accurate” (Washington Post).
FACT:
The parts of the dossier the media are citing as true are merely echoes of their own reporting. Even a recent WaPo analysis of the 35-page document concedes that “many claims involve things that would have been publicly known at the time the report was drafted.”
Some press accounts have treated the dossier’s allegation that Russian officials offered Trump adviser Carter Page billions to end US sanctions as confirmed in September 2016 reporting by Yahoo News’ Michael Isikoff. But Isikoff’s “Western intelligence source” was almost surely the dossier itself. So the media used the dossier to corroborate the dossier. (Page, who has repeatedly denied under oath he met with the Russian officials cited in the dossier, is suing Yahoo News over the Isikoff story.)
What doesn’t check out at all, though, is the dossier’s most serious charge: that Trump officials secretly met with Kremlin officials overseas to hatch the hacking scheme against the Clinton campaign.
The idea that Trump lawyer Michael Cohen traveled to Prague in August 2016 to meet with “Kremlin representatives and associated operators/hackers” to discuss “how to process deniable cash payments to . . . anti-Clinton hackers paid by both Trump team and Kremlin” has been debunked. Cohen denied ever visiting Prague; his passport carries no stamps showing he left or entered the US at the time; Czech authorities found no evidence he visited Prague; and University of Southern California officials confirm he was on campus visiting his son during that time.
MYTH:
Russian interference in the election opens the door to questioning the results of the 2016 election and the legitimacy of the Trump presidency.
FACT:
The Obama intel assessment concluded none of the Russian hacking targets was “involved in vote tallying.”
And several states, including Wisconsin and California, now deny initial government reports that their election systems were ever even “scanned” by Russian cyber actors. Obama himself said any intrusions did not compromise the election results: “The election was not tarnished . . . We have not seen evidence of machines being tampered with.”
MYTH:
Russia launched a social-media campaign that was “pivotal” to Trump’s victory (CNN).
FACT:
Facebook data reveal there was no real strategy behind the social-media ads paid for in rubles. Most never mentioned either candidate. Geographic distribution was broad, targeting even non-battleground states like Maryland and Texas. Virtually all the Michigan and Wisconsin ads ran in 2015, which could hardly have helped Trump in 2016.
Russian Twitter and Facebook bots trolled both the left and the right with agitprop — in what appears to have been a general effort to deepen divisions and sow political chaos in America, not to favor one party or candidate over the other. Judging by the rampant mythology on this issue, that part was successful.

New Ripples That Could Become Wild Waves in the Oceans

Amidst all the excitement and anxiety about the Indo-Pacific quad — which brings together India, United States, Japan and Australia — it is easy to miss the significant prospects for Delhi’s bilateral maritime security cooperation with Paris in the Indian Ocean. Looking beyond the traditional areas of high-technology and defence cooperation, and the more recent focus on global mitigation of climate change, Delhi and Paris appear ready to lend a strong regional dimension to their strategic partnership.
A series of recent high-level consultations — between foreign and defence ministers as well as the national security advisers — have focused on finding ways for India and France to work together, especially in the Western Indian Ocean. These discussions are likely to be turned into concrete decisions by the time French President Emmanuel Macron visits Delhi early next year.
Meanwhile, the debate on “getting France to join the quad” entirely misses the point about the nature of the new grouping. The quad is a flexible mechanism to coordinate the approaches of like-minded states to promote their shared political objectives in the Indo-Pacific. It is a work in progress and will take time to achieve institutional heft and make a real impact. When this quad is eventually up and running, there will certainly be room for its expansion.
Until we get there, there is much that India needs to do in elevating its bilateral security cooperation with the members of the quad as well as other partners in the Indo-Pacific. France is at the top of that list. France has territories in the Western Indian Ocean and South Pacific and has been a maritime power in the region for nearly four centuries. Paris has military bases in the Indian Ocean. It has the lead role in the Indian Ocean Commission that brings together the island states of Mauritius, Seychelles, Madagascar, Comoros and the French territory of Reunion.
As a member of NATO, France does not need the latest quad to do things with America. As it seeks to reclaim some of its historic role in the east, France is already stepping up its security cooperation in the Pacific. It has two quads of its own in the region.
Earlier this year, the naval forces of France, Japan, Britain and the United States conducted naval exercises in the Western Pacific. In a second quad, France coordinates South Pacific defence operations with Australia, New Zealand, and the United States. Paris also has a trilateral arrangement with Australia and New Zealand (FRANZ) to provide disaster relief to the island states of the Pacific. The missing link has been the inadequate political and security cooperation between Delhi and Paris.
This limitation stands in contrast to the general affinity between the French and Indian quests for different degrees of strategic autonomy during the Cold War. In the mid-1990s, France was among the first to propose a coalition of middle powers to promote a multipolar world and limit the dominance of what the French called the American “hyperpower” after the Cold War.
Paris was also the first to argue that ending the atomic blockade against India and integrating Delhi into the global nuclear order were important objectives. If this demanded a revision of the non-proliferation system centred on the NPT, then so be it, France said. This idea was taken forward by US President George W. Bush in the historic civil nuclear initiative with India.
One can recall two earlier efforts — in the early 1980s by President Francois Mitterrand and in the late 1990s under Jacques Chirac — to transform the partnership with India. But the lack of consistent purpose in Delhi led to limited results from the two earlier efforts. Prime Minister Narendra Modi, however, appears determined to realise the full potential of the partnership with France.
The case for a bold re-imagination of the India-France partnership has never been more urgent than it is today. The rise of China, the renewed tensions between Russia and Europe, the uncertainty in the US political trajectory, and the loosening of the old alliances demand more leadership from middle powers like India and France. Nowhere are the possibilities greater than in the maritime domain.
An intensive dialogue with the French on maritime issues under the Narendra Modi government over the last three years has created the basis for sharing intelligence and military facilities, promoting inter-operability between their navies, and the future conduct of joint operations. Once progress is made in the Western Indian Ocean, France could also help boost India’s strategic footprint in the South Pacific.
Although India and France have long shared the Indian Ocean maritime neighbourhood, they have not put it at the heart of their partnership until now. The new regional framing will help develop the much needed depth to the India-France strategic partnership through maritime burden-sharing and reinforcement of each other’s positions in the Indo-Pacific.
If India discards its military isolationism, develops productive defence diplomacy, and embarks upon deeper security cooperation with its partners through bilateral, minilateral and multilateral mechanisms, the “quad talk” might generate a lot less heat than it does today.

Sexual Harassment: A Part of Growing Up

Allegations of sexual harassment in Hollywood, politics and various other sectors have exposed a reality already familiar to most women. Harassment, interruption and intrusion from men are commonly disregarded as an inevitable part of life, unpleasant but expected. They are rarely acknowledged for what they are: a key factor structuring women’s lives.

Comedian Jo Brand’s recent comments on the quiz show Have I Got News For You help explain how it works. Responding to panellist Ian Hislop’s dismissal of some forms of harassment as “not high level crime”, Brand explained the continuum of sexual violence:

I know it’s not high level, but it doesn’t have to be high level for women to feel under siege in somewhere like the House of Commons. And actually, for women, if you’re constantly being harassed, even in a small way, that builds up and it wears you down.

Brand’s comments demonstrate that sexual harassment needs to be understood not as a hierarchy of one-off events but as a process that is cumulative and connected. My own research in the area shows how early this process begins.

Just over a third of the women I spoke with recalled an experience of sexual harassment at 12 or younger, sometimes from known adult men but much more commonly from their male primary school peers. Harassment was even more common in girls’ teenage years, with almost two thirds describing experiences of intrusion during adolescence, experiences much more likely to be perpetrated by adult men.

These early experiences are confusing. Unsure why men are doing this, young women look to the women around them to explain what is happening. And the message they receive is broadly shared across accounts: that sexual harassment is ordinary.

One woman I spoke to said her first experience of harassment was being whistled at by an adult man when she was doing her paper route at 13. She told me: “I remember going home, talking to my mum, being very upset about it and she was like, ‘this is life’.”

Another recalled a similar response from her mother: I remember as a kid men whistling at me and stuff and my mum just laughed it off and said ‘stupid man’.

Another described an experience at 15: I was walking up to my boyfriend’s house and this man exposed himself to me, quite close as well, and I didn’t really know what he was doing. And the police weren’t called because … well, I’m not sure why they weren’t called. My mum said ‘it’s all part of growing up’.

Such responses are a powerful illustration of sexual harassment as a process. The message is that sexual harassment for women is unremarkable, all part of life, an individual problem needing an individual solution. And so women learn to adapt their behaviour and movements, habitually limiting their own freedom in order to prevent, avoid, ignore, and ultimately (like their mothers) dismiss.

It wears you down. As experiences start to accumulate in their own lives, women reinterpret the responses to their childhood experiences as an act of care – a passing down of knowledge from one generation to the next, as one of my interviewees explained:

I think that’s why I probably remember the first time so well, because it was like this horrible thing that happened to me and I have something to say about it, but from then on, slowly over time, it’s become more and more normal, just part of life, your daily routine as my mum said to me. She knew.

Though I was talking to women about public space, successive reports have shown these experiences follow women across all areas of their lives. They follow young women into school, where almost 60% faced some form of sexual harassment in 2014. They follow women to university, where close to 70% experience sexual harassment. And, as recent events have reaffirmed, they follow women into the workplace. Not to mention the levels of sexual violence and assault that many women experience from men in their homes. All of which act to reinforce the very messages that prevent us from speaking. It must be about us because it keeps happening to us. And what happens just to us, doesn’t really matter.

This helps explain why women are coming forward to talk about their experiences now. The revelations about sexual harassment embedded in some of the core institutions around which we operate, from Hollywood to Westminster, have helped challenge the individualisation and dismissal that have typified responses to harassment since childhood. We are getting a different message: that sexual harassment is an individual manifestation of a structural problem. And structural problems have structural solutions. Now we need to shift the focus of our questions from why women are speaking up to why they are only now being heard.

Power Makes Men Oversexed & is a Magnet for Women

Studies prove that powerful men have higher libidos. But not all give in to their caveman instincts like Harvey Weinstein and Kevin Spacey did!
Henry Kissinger captured it when he claimed, “Power is the ultimate aphrodisiac!” What he did not specify is that this holds true both ways, for women as well as for men. A person in a position of power has a higher libido, and also becomes more attractive to others, who find that power a turn-on.
And, while the talk centres more on oversexed powerful men, the same holds true for powerful women. If reel life is a good indicator of the real, then television shows such as House of Cards and The Fall prove this beyond doubt. A powerful woman can be just as ruthless in desiring and grabbing an opportunity for sex and discarding the object of desire thereafter! We hear less of high-powered scandals involving women simply because there are fewer women in the upper echelons. Among the CEOs of Fortune 1000 companies, a mere 54 are women! Then again, women by nature are more cautious and take fewer, more studied risks. Women also do not feel as entitled to power as men do, and so do not settle into it as comfortably.
Men get carried away by power, relating to it more naturally. Power lifts the lid on their instinctive egotistical, arrogant and oversexed behavior. They feel entitled to anything they want, and expect others to do their bidding. Experts in male sexuality aver that when such men see an opportunity, they see no reason not to make the most of it. For these men, unlimited opportunities lead to uncontrolled desires and appetites. They fool themselves to the extent that they genuinely do not believe it when someone does not want them! It is here that they meet their nemesis. Not believing the denial, they force their way through – and engineer their own fall from grace!
This is when a Harvey Weinstein or a Kevin Spacey happens! Uncontrollable impulse and a feeling of invincibility and omnipotence lead to exposure and a mess that takes over their entire existence and leads to self-defeating patterns. This is what happened to Weinstein, with allegations of assault and harassment by 60 women! Kevin Spacey is collecting more allegations of sexual harassment by male colleagues every day! Consider the allegations against Donald Trump, Bill Clinton, Eliot Spitzer, Arnold Schwarzenegger – the list goes on. In India we have had sex scandals involving the likes of N. D. Tiwari, Abhishek Manu Singhvi or Ram Rahim. The more high-profile ones (make no mistake, there are many!) are all ‘handled’ appropriately before they hit the scandal sheets!
Sex is indeed a powerful force, which society has taught both genders to rein in and apply to willing partners. Look at it biologically. Males of all organisms, more than females, are genetically wired to spread their seed to ensure the survival of genes. Humans are no different. Power corrupts both professionally and in romantic relationships and serves to blow the lid off the need for restraint and self-regulation in men.
So then, are all powerful men oversexed and debauched? Not necessarily. Different people need to struggle to varying degrees to control their urges. Some manage it more easily than others and stay rational; others may indulge for a while and finally settle down, while yet others may either not see the need to stop, or not be able to emerge from the debauched existence. Those are the ones who provide grist for the mill…

This world needs more masculinity, not less.

Are you aware that traditional masculinity is lethal and is destroying our nations? At least, this is what many people now genuinely believe. Boston Globe columnist Renée Graham wrote: “Literally and figuratively, TOXIC MASCULINITY IS KILLING US. Mass shootings. Domestic violence. Fatal fraternity hazing. Rape culture. Workplaces and schools turned into cesspools of sexual harassment and assault. This is not consigned to one race, ethnicity, or socioeconomic level. Feral masculinity affirms itself every day through violence and domination.”
“Toxic masculinity” is now so big, it even has its own Wikipedia page. “The concept of toxic masculinity is used in the social sciences to describe traditional norms of behavior among men in contemporary America and European society that are associated with detrimental social and psychological effects. Such ‘toxic’ masculine norms include dominance, devaluation of women, extreme self-reliance, and the suppression of emotions.”
Isn’t this just a little frightening? If you are male and you subscribe to “traditional norms of behavior,” a growing number of people in this world, some in influential positions, consider you toxic. And what does one do with a toxic substance? You eradicate it.
You have no doubt heard about the revolting behavior of Harvey Weinstein, Kevin Spacey, Dustin Hoffman, Roger Ailes, and a host of other powerful men. It seems every day there are new allegations of sexual assault, new apologies, and, very often, resignations and firings. Here in the United Kingdom, politicians are resigning so fast one wonders if we’ll have enough left to run the country.
As despicable as all these accounts are (and some of them, let’s not forget, are alleged), doesn’t it feel as though something much more significant is going on here?
The truth is, these scandals are being hijacked by an army of amoral, anti-religious, anti-tradition individuals and institutions, who are using them to redefine, isolate and even eradicate traditional masculinity.
This sudden quest for sexual justice is happening at the same time as a somewhat related movement is taking off: the crusade to normalize transgenderism. This cause has advanced further in the last 10 months than in the last 2,000 years. The exploitation of these sex scandals to redefine manhood and the dramatic advance of transgender rights are both part of a larger, coordinated attack on true masculinity!
It’s working too. Today there are various programs, including university courses, designed to help men conquer “toxic masculinity.” Teachers, professors and intellectuals are attacking anything that promotes traditional masculinity, including classic literature and Disney movies. Across the planet, famous males are setting the example in purging themselves of characteristics associated with conventional masculinity.
There’s no doubt about it, traditional masculinity—already in retreat from decades of being undermined and assailed—is quickly being eradicated!
If you consider yourself a traditional man, or if you’re in a relationship with a traditional man, then you are a target. If you’re a male who exhibits “dominance”—perhaps you’re a bit of an extrovert, maybe you occasionally raise your voice, perhaps you are capable of giving an order or two—then you are a target. If you are a male who “suppresses his emotions”—who rarely sheds a tear, who rarely talks about his feelings—then you are a target. If you are a male who practices “extreme self-reliance,” then you are a target.
When you think about Harvey Weinstein and these other men, the fundamental problem is not traditional masculinity. The central issue here isn’t the masculine tendency to exert dominance or to treat women differently from men. At its core, this isn’t even about the abuse of power. The fundamental issue here is acute selfishness and vanity, and a desperate lack of self-discipline. That doesn’t justify their behavior, but it does explain it. Fundamentally, the cause of these problems is not traditional masculinity; it’s human nature.
Moreover, understanding the truth about human nature makes the solution to these problems patently obvious. Ultimately, the solution isn’t to wage war on traditional masculinity, it’s to wage war on human nature. And for men, this means developing more masculinity—that is, developing more right, proper, masculinity.
What is real masculinity? First, traditional masculinity as it’s understood today is not real masculinity. To be sure, some aspects of traditional masculinity are rooted in certain teachings and principles, but these principles have generally been misunderstood and misapplied. Moreover, traditional masculinity does not include the essential knowledge revealed in the scriptures about what it means to be a godly man. Traditional masculinity, for all its merits, is not real masculinity.
So what exactly is real masculinity? Ultimately, being masculine means using the power to conquer selfish human nature, and to fulfil the responsibilities that come with being a man. Being male, and particularly a husband and father, comes with various responsibilities. But fundamentally, a man’s job is to love and serve.
Love, honor, provide—are these toxic characteristics? These are the characteristics of true, biblical masculinity. These are traits that produce happiness and joy, that build stable, contented, thriving families and communities and nations.
Can you see it? The problem here isn’t toxic masculinity; it is toxic human nature. And the solution is conquering that human nature! For men who understand this reality, the way to conquer human nature is to embark on the Spirit-led, Spirit-infused crusade to develop real masculinity.

 

Technology & the Future of Work

How many of us can say, with certainty, what jobs we would choose if we were kids today? The pace of technological change in the time I’ve been in work is only a shadow of what we will see over the next 15 to 20 years. This next wave of change will fundamentally reshape all of our careers, my own included.
It’s estimated that some 65% of children entering primary school today will go on to work in roles that don’t currently exist.
We expect the pace of change in the job market to start to accelerate by 2020. Office and administrative functions, along with manufacturing and production roles, will see dramatic declines accounting for over six million roles over the next four years. Conversely, business and financial operations along with computer and mathematical functions will see steep rises.
There is a central driver for many of these transformations, and it is technology.
Artificial intelligence, 3D printing, resource-efficient sustainable production and robotics will factor into the ways we currently make, manage and mend products and deliver services. The latter two have the potential to create jobs in the architectural and engineering sectors, following high demand for advanced automated production systems.
When the World Economic Forum surveyed global HR decision-makers, some 44% pointed to new technologies enabling remote working, co-working space and teleconferencing as the principal driver of change. Concurrently, advances in mobile and cloud technology allowing remote and instant access were singled out as the most important technological driver of change, enabling the rapid spread of internet-based service models.
It’s worth pausing to imagine what such a changed world might look like.
Our future place of work might not be an open-plan office, but a set of interconnected workspaces tied not to one place, but to many. They will be underpinned by virtual conferencing, complete and constant connection, and portability.
Our working day will be fundamentally different. Leveraging big data, like real-time traffic information, could cut journey times, making the school run easier, and the morning commute more manageable. That is, if you have to commute: home-working will no longer be defined as a Friday luxury, but a more efficient way to work enabled by technology, taking the physical strain from megacities and regionalising work locations.
Technology underpinning what futurologists have christened ‘The Fourth Industrial Revolution’ will enable disruptive business models to decentralise our economies as we move from value systems based on ownership to ones enabling access. Personally owned assets, from cars to spare bedrooms, will expand entrepreneurship, diversifying revenue streams. It’s no fluke that within three years of trading, the home-sharing platform Airbnb was offering more rooms than some of the biggest hotel chains.
These disruptive business models will fundamentally reshape how we do business, both individually and as companies. For example, digitally enabling smallholder farmers can allow them to operate as a collective, transferring knowledge and sharing vital learnings with each other from proper crop irrigation technology to water efficiency. Cloud-based analytics hosted on BT’s Expedite platform can assist in radically transforming such supply chains.
Critically, these very technologies might help us unlock the solutions to some of the biggest societal challenges we currently grapple with. The ICT underpinning these technologies, in consort with the transformational power of big data, could support smart systems that will help tackle climate challenges. Connected homes, factories and farms leveraging smart energy management systems could mean dramatically lower energy use, which would contribute to the decarbonisation of our economies.
And yet we must be vigilant: not about technological change itself, which we have the power and the innovation to harness as we see fit, but about access to the connectivity and opportunity it brings.
What will be absolutely decisive is how we equip our children, our students and our colleagues to harness the power of this technology to transform our world for the better. That means ensuring the ICT skills of current school leavers are fit for the future. It means providing incentives for lifelong learning as the pace of technological advancement quickens. And it means reinventing the HR function, equipping it to continually assess and provide for the training needs of employees.
If we get this right the prize is clear. We have the potential to revolutionise the way we live and work and do it in a way that avoids the vicissitudes of previous industrial revolutions, creating new economic opportunities that, even as children, we would not have before imagined.
Lastly, we must use every tool within our armoury to ensure the current and future generations are not left behind in the global digital skills race.

The Lost Voice of Reason

What are the traits of a totalitarian regime? How does it take shape not just in banana republics but also in countries with mature methods of governance? Why does a large section of the elite fall in line and support such a move, considering it to be beneficial to the country in that particular frame of time?

History has a strange way of repeating itself, and questions which were raised 75-100 years ago are relevant again. In this modern world! In the age of social media and instant communication, and at a time when many countries have adopted the Western concept of democracy. There is another difference from the past. Today mature democracies are turning totalitarian with the majority support of their populations. By choice and not by force!

The signs are ominous and worrisome for the world that we may leave for our children. At such times, the silence of the lambs is hard to bear as the wolves take over the reins. The phenomenon evolved with right-wing forces, led by the so-called ‘silent majority’, raising their voices against policies of empathy, love, acceptance, mutual respect and dignity of life in their respective countries. Hate and divisive issues were manufactured where none existed, so that desperate political, religious and social interest groups could divide and rule over people.

Words of a disillusioned writer! Let us discuss the current scenario, and in this article focus specifically on India. Decades back, in a secular and stabilising democracy like India which was focusing on national growth, the Babri Masjid issue was taken as a stepping stone to launch the rightist forces into top gear. With the support of the majority and lame-duck governance, the rightist agenda was nurtured well to become the force that it is now.

Today Golwalkar is more relevant than Ambedkar, even Godse than Gandhi! India seems to have been taken over by self-declared nationalists, cow vigilantes and religion-bashing, hate-spreading groups which seem to rule over the masses and, at times, even the government in power. It is difficult to say anything against their agenda lest you be declared anti-national and have sedition charges or immediate jail thrust upon you. Sometimes one does not even need to wait for that: the mob and the society deliver instant justice.

Words like liberal, secular and accepting are now used as labels of abuse in India, and the people so described are kept at a distance. People love to wear their nationalism on their sleeves.

And yet, at the cost of being subjected to the deepest rebuke even from my own circle of friends, I need to state my thoughts clearly, as we cannot allow India to go down the path of a failed state like Pakistan or a closed cocoon like Saudi Arabia.

I would support any person’s right to eat anything they desire, including beef or pork, and the state should not interfere with what one eats or drinks. And, for the sake of sanity, take action against the perpetrators of violence who take the law into their own hands and are beastly enough to kill or torture humans in the name of protecting animals. The photograph of the body of a man, charged with the murder of another man accused of eating beef, draped in the Indian tricolour is the most horrific abuse of the national flag that we could see in our living history. It has happened, and it has been watched in silent support by the political masters and blessed by their presence.

We have our political battles to fight with Pakistan and with terrorists, not with Muslims. So stop abusing Muslims at various forums, including social media. Stop making fun of Shah Rukh, Aamir or Salman because they at times speak out against the hatred brewing in society. Hindus, Muslims, Sikhs, Christians: all are equal citizens of this country, and no one has to prove their allegiance to the country every time a madman goes berserk or a hostile nation invades our soil. Or even if they speak against the evils of our society!

Questioning the government in power, or calling people from a hostile country human, is not sedition. Stop those who would put any kind of censor on the thoughts of people like Ramya, Kulkarni, Bhatt, Karan Johar or the like, who have as much right to express their thoughts as the ones who differ from them. Stop throwing eggs at their cars, blackening their faces, filing frivolous cases against them in courts of law or hurling abuse at them at every forum. Above all, stop displaying your pseudo-nationalism by shouting at the top of your voice against such people.

Also, even if India is at war with Pakistan, and not just with a section of people branded as terrorists, let the government decide what is good for the state and what is not! Why should the people take the law into their own hands? If the government feels that the relationship with Pakistan needs to be ended, then it is capable of stopping the issue of visas to Pakistani actors, businessmen or common people. Or do we feel that the government is incapable of acting in the right interests of the nation?

Speaking for and against religious thought is the way this great nation has built its richness. A Buddha could enlighten with views which differed from the majority’s and still be allowed to speak. Mahavir, Nanak, Dayanand, Ramakrishna, Raja Ram Mohan, Kabir and Osho all stood against the narrow boundaries of religion created in their times and broadened our horizons with their vision. Why is Hinduism in danger now if anyone speaks against some practices or rituals, and why should a mob threaten people for expressing their views, or the state slap a case on them for hurting religious sentiments? Alas, in today’s India a Jesus would need to be crucified before his message could travel far and wide.

Hinduism is more than a religion; it is a way of life. It sets you free. ‘Aham Brahmasmi’, the idea that one is oneself divine, is accepted; or you could revere a supreme force, ‘Om’, or even thousands of its forms. One is free to choose and live cohesively and yet call oneself Hindu. Hinduism enables choice and is all-accepting. ‘Vasudhaiva Kutumbakam’, the idea that the entire universe is one family, is the foundation stone of Hinduism. One where everyone is welcome and diversity is respected as a way of life!

To all the doomsayers: India has survived for centuries, nay thousands of years, by being secular and accepting, and has been a true melting pot of all thoughts and cultures. We speak a multitude of languages, display various cultures, enjoy diverse traditions and yet are capable of living together. What has worked for ages will continue to work for times to come. Do not vitiate the atmosphere with your divisive, narrow and all-controlling ways.

Totalitarianism is dead. Its champions like Mussolini, Hitler, Stalin, Mao, Gaddafi and Suharto are not just dead but passé. We have no place for it, and any reference to its proponents, past or present, like Kim Jong-un, is detested by the modern world.

Right? Wrong. The truth is that totalitarianism has evolved, is alive and kicking and is in fact gaining in strength in almost all corners of the world. It has a new name and a new camouflage – Totalitarian democracy!

Totalitarian democracy creates a charade of liberty and citizen rights but derives its strength from the ‘will of the majority’, or majoritarianism, a word gaining prominence in today’s world. It is created by the cunning of a leader who knows well and truly how to light the fire by fanning the sentiments of the majority, use that fire to burn to ashes any obstacles in the way, and then rule over a citizenry that starts considering silent cemeteries to be gardens of peace.

Liberty, equality, freedom: farcical democratic values in the eyes of such a maniacal leader, manipulated by stoking the primitive banalities of the majority view to create a regime which is totalitarian in thought and action.

In the words of Herbert Marcuse in his book One-Dimensional Man: “Liberty can be made into a powerful instrument of domination… Free election of masters does not abolish the masters or the slaves.” Totalitarian democracy is just that – a mirage of freedom and equality in which, in the name of enabling the common good, the citizenry gleefully surrenders its rights to its masters. Where the citizens feel that they have done their duty by electing their representatives, and then let their elected shepherd decide which direction the herd will tread. Where the leader has the right to suppress and crush the views of any individual or group without fear of any rebuke or reprisal from the collective.

The state through its own apparatus or its crusaders decides what the common good is for the people as it is supposed to rise above the individual. It is beyond the sensibility of the individual to decipher the truth and the state needs to decide between right and wrong.

Any person or institution that does not adhere to the common goal has no right to differ and serves no useful purpose. As a result they need to be silenced or eliminated.

The state decides what is right for you to drink and eat even if it may not be based on scientific evidence but on moral or social norms and one must follow that. If one dissents then one gets punished for rising against the state. Even lynched by the mob!

The state decides who is an enemy or a friend based on political, religious or market sentiments and the individual must follow. Any differing views may make you liable for the charge of sedition and opposing the state. Even being deported in certain countries!

The state decides what social norms should be followed and what is blasphemous. Any deviation can put your life and limb at risk. I am not talking about the last century or the medieval ages. I am referring to modern, vibrant, liberal democracies which are turning into totalitarian regimes.

The list is endless: the French state fighting the burqini, worn by not even 0.1% of its population, thereby creating Islamophobia; Indian states spending more time and energy on cow protection and liquor bans than on issues of governance; the Saudis lashing atheists; Russia and many African and Asian nations imprisoning LGBT communities.

It is the people who have allowed fierce hate and anger to build in their minds based on colour, religion, creed and region. Donald Trump has merely touched the raw nerve and let the animal inside loose from its leash. Trump is not an aberration but a product of the system.

Chabahar: Base of the Pakistan-Iran-India Tango

The operationalization of Chabahar port is significant because India has demonstrated its intention to play on the regional chessboard, even while it balances its own relations with the US and Iran. The old great game just got a new veneer.
A new churning is taking place in the region, with India announcing its first shipment to Afghanistan, via the Chabahar port in Iran, and Pakistan’s army chief taking a delegation to Iran earlier this week for a series of meetings.
Has India’s Chabahar initiative caused Pakistan to re-engage with Iran? Or, is this a parallel development, addressing bilateral issues and the repercussions of Pakistan’s involvement in the Middle East?
Pakistan’s Iran Predicament
Since Zia ul-Haq’s time, Pakistan’s relationship with Iran has been tense, indifferent and sometimes, even hostile. Zia’s Islamisation strategies were perceived by Shia Tehran as the deepening of Sunnization, creating new stress in the bilateral relationship and emphasising sectarian faultlines inside Pakistan.
High-level visits between Iran and Pakistan became the exception. Afghanistan soon became a much more important neighbour, with the US using Pakistan as a cat’s paw in its own war against the Soviet Union in the 1980s. Meanwhile, Iran-US relations went through the wringer, even as Tehran was bogged down with other issues in the Middle East.
Despite the continuing political tension between Iran and Pakistan, the two countries drew closer together on two other matters. First, Pakistani nuclear scientist A Q Khan drew a willing Iran into his underground network of nuclear linkages, which served both sides well. Second, smuggling across the Pakistan-Iran border, especially along the Makran coast, began to take place.
Enter the Middle East Cold War and the Islamic Military Alliance
But the political divide was exacerbated by Saudi Arabia’s expanding influence on Pakistan. Riyadh’s Islamic Military Alliance is now headed by Pakistan’s former army chief, Gen. Raheel Sharif. Clearly, the Pakistani government isn’t terribly attracted to the idea, especially because its own Shias, estimated at between 30 and 40 million in a total population of about 200 million, make up a sizeable minority. Pakistan’s National Assembly has even discussed Raheel Sharif’s new job and pointed out that there is a need to go slow.
Was Raheel Sharif given the job because he was once the most powerful man in Pakistan, and because Pakistan is the only country in the Islamic world with a proven nuclear weapons capability?
Meanwhile, Tehran’s relations with Saudi Arabia began to deteriorate over the ongoing conflict in Yemen. Riyadh also seemed determined to isolate Qatar, in an attempt to consolidate its leadership of the Muslim Ummah. Its efforts to get the US on board in this regional great game were enormously boosted when Donald Trump identified Iran as the cause of instability at the Arab Islamic American Summit in Riyadh in May 2017, even as King Salman looked on.
Certainly, Pakistan’s participation in this summit would not have gone down well in Tehran.
Chabahar: Trigger, not the Cause
The operationalization of the Chabahar port by India has hit the panic button in Pakistan. As Delhi faltered in its execution of Chabahar in recent years, Pakistan was cynical and even sarcastic; meanwhile, there was the China-supported Gwadar port as well as the Beijing-funded China-Pakistan Economic Corridor, both projects described as a “regional game changer”.
With Chabahar now in the mix, the regional great game has taken a new turn. First, Chabahar is not far from Gwadar: as the crow flies, the distance is only 171 km, while the road route roughly doubles it to 356 km. Second, Chabahar is more than a port; it is the starting point of a trade and transit corridor that could run parallel to the CPEC as it cuts across Iran and into Afghanistan. Third, and most importantly, New Delhi has big plans for Chabahar: to connect it to the International North South Transport Corridor (INSTC) and open it up to the passage of goods into Russia and beyond.
Gen Bajwa’s Visit: Should India be worried?
India’s political will to walk the talk on Chabahar has aggravated the bilateral and regional predicament on Pakistan’s western flank. Islamabad would certainly like to repair its relations with Tehran. Gen. Bajwa’s visit to Iran must be seen in this context: he met the Iranian president, the defence minister, as well as the commander-in-chief of the Islamic Revolutionary Guards.
The army in Pakistan has always been all-powerful, but a trend towards greater consolidation of power can now be clearly seen. Gen. Bajwa’s visit to Iran was preceded by a trip to Kabul, where he met President Ashraf Ghani as well as the top Afghan leadership. Both New Delhi and Tehran, now connected through the Chabahar thread, must be watching closely.
But despite the fanfare of the visit, Gen. Bajwa did not succeed in getting a substantive joint statement with the Iranians. Whatever was made public is mediocre and focused on border security between the two countries: hotline communication, border fencing and patrolling, intelligence sharing and so on. The fact that Pakistan has to talk about establishing hotlines in 2017 shows the level of communication so far!
The powerful director-general of the media wing of Pakistan’s armed forces, ISPR, Maj-Gen Asif Ghafoor, effusively thanked the Iranian Supreme Leader for a “supportive statement” on Kashmir and said, “It is a long pending dispute between India and Pakistan. Regional peace and security remains at stake unless it’s resolved to the aspiration of Kashmiris in line with UN Resolution.”
Predictably, the Pakistani media sought to project this as Iran’s Supreme Leader throwing his “weight behind Pakistan on Kashmir”.
That’s why the operationalization of the Chabahar port in Iran is so significant. India has demonstrated its intention to play on the regional chessboard, even while it balances its own relations with the US and Iran. The old great game just got a new veneer.

 

Germany With More Robots than the US, But No Job Losses & Lessons From History

The rise of the robots, coming first for our jobs, then maybe our lives, is a growing concern in today’s increasingly automated world. On Oct. 10, the World Bank chief said the world is on a “crash course” to automate millions of jobs. But a recent report from Germany paints a less dramatic picture: Europe’s strongest economy and manufacturing powerhouse has quadrupled the number of industrial robots it has installed in the last 20 years without causing human redundancies.
In 1994, Germany installed almost two industrial robots per thousand workers, four times as many as in the US. By 2014, there were 7.6 robots per thousand German workers, compared to 1.6 in the US. In the country’s thriving auto industry, 60–100 additional robots were installed per thousand workers in 2014, compared to 1994.
Researchers from the Universities of Würzburg and Mannheim and the Düsseldorf Heinrich-Heine University examined 20 years of employment data to figure out how much of an effect the growth in the use of industrial robots has had on the German labor market.
They found that despite the significant growth in the use of robots, they hadn’t made any dent in aggregate German employment. “Once industry structures and demographics are taken into account, we find effects close to zero,” the researchers said in the report.

Robots are changing career dynamics

While industrial robots haven’t reduced the total number of jobs in the German economy, the study found that on average one robot replaces two manufacturing jobs. Between 1994 and 2014, roughly 275,000 full-time manufacturing jobs were not created because of robots.
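As a rough illustration of the scale implied by those two figures (treated here as round inputs quoted from the study, not a re-analysis of its data), a short Python sketch:

# Tiny arithmetic sketch of the displacement effect described above.
# Both inputs are the figures quoted from the study; nothing else is assumed.
jobs_not_created = 275_000   # full-time manufacturing jobs not created, 1994-2014
jobs_per_robot = 2           # manufacturing jobs replaced per robot, on average

implied_robots = jobs_not_created / jobs_per_robot
foregone_per_year = jobs_not_created / 20   # spread over the 20-year window

print(f"Implied number of robots behind the effect: ~{implied_robots:,.0f}")
print(f"Average manufacturing jobs foregone per year: ~{foregone_per_year:,.0f}")
# Roughly 137,500 robots and 13,750 foregone jobs a year, under these quoted figures.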
“It’s not like jobs were destroyed, in the sense that a manufacturing robot is installed and then the workers are fired because of the robots—that never really happened in Germany,” study co-author Jens Südekum, from the Düsseldorf Institute for Competition Economics, said. “What happened instead is that in industries where they had more robots, they just created fewer jobs for entrants.”
“Typically around 25% of young labor market workers went into manufacturing and the rest did something else, and now more workers have immediately started in the service sector—so in a sense the robots blocked the entry into manufacturing jobs.”
The study also found that those already in jobs more exposed to automation were significantly more likely to keep them, though some ended up doing different roles than before. The big downside for some medium-skilled workers, who did manual, routine work, was that it meant taking pay cuts.
“This is where these wage results come in, what we find is that no one was really fired because of a robot, but many swallowed wage cuts. And this has mostly affected medium-skilled workers who did manual routine tasks.” Around 75% of manufacturing workers are medium skilled, and the wage cuts have so far been moderate, he says.
But Germany hasn’t got it perfect. One core reason why Germans haven’t been fired in favor of robots is the country’s famously powerful unions and works councils, which are often willing to accept flexible wages on behalf of workers in order to maintain high employment levels.
“Unions of course have a very strong say in wage setting in Germany,” Südekum says. “It’s known that they are more willing than unions in other countries to accept wage cuts to ensure jobs are secured.”
While robots have increased productivity and profits, and not driven people into unemployment (yet), they haven’t been good news for blue collar workers in Germany.
“The robots really fueled inequality, because they benefitted the wages of highly skilled people—like managers and scientists, people with university education—they even gained higher wages because of the robots, but the bulk of medium-skilled production workers suffered.”
Disruptive technologies are dictating a new future for humankind. Almost every day we hear of new advances that blur the lines between the realms of the physical, the digital and the biological. Robots are now in our operating rooms and fast-food restaurants. It’s possible, using 3D imaging and stem cell extraction, to grow human bone from a patient’s own cells. 3D printing is creating a circular economy – rather than the linear model of making things then throwing them away – by altering how we use and recycle raw materials.
This tsunami of technological change is clearly challenging the ways in which we operate as a society. Its scale and pace are profoundly changing how we live and work, and signposting fundamental shifts in all disciplines, economies and industries.
In what we now call the Fourth Industrial Revolution, we will see the confluence of several technologies that are coming of age, including robotics, nanotechnology, virtual reality, 3D printing, the Internet of Things (IoT), artificial intelligence (AI) and advanced biology.
Although at different stages of development and adoption, as these technologies bed in, becoming more widespread and convergent, we will see a radical shift in the way that individuals, companies and societies produce, distribute, consume and re-use goods and services.
Will the robots & new industrial revolution destroy jobs?
These developments are prompting widespread anxiety about what role humans will have in the new world. As the pace of change accelerates, so the alarm levels ratchet up. A University of Oxford study estimated that close to half of US jobs could be lost to automation over the next two decades. In the opposite camp, economists like James Bessen argue that, on the contrary, automation and jobs often go hand in hand.
It’s impossible at this point to predict what the overall impact on employment will be. Disruption will happen; of that we can be certain. But before we swallow all of the bad news, we should take a look at history. Because this tells us that it is more often the nature of work – rather than the opportunity to take part in work – that will be impacted.
How industrial revolutions changed the nature of work
The first industrial revolution took British manufacturing out of people’s homes and into factories, creating the beginnings of organizational hierarchy. People moved from rural areas to industrial ones, change was often violent – the famous “Luddite riots” in early 19th century England are a case in point – and the first labor movements emerged.
The second one was characterized by electrification, large-scale production and the expansion of transportation and communication networks. It led to the birth of the professions – such as engineering, banking and teaching – created the middle classes, and introduced social policies and the role of government.
And as electronics and information technology automated production during the third industrial revolution, many human jobs started to become service-driven. When automated teller machines (ATMs) arrived in the 1970s, they were initially viewed as a disaster for workers in the retail banking industry. Yet branch jobs actually increased over time as branch costs went down, becoming less transactional in nature and more about managing customer relationships.
What can we learn from history?
Each industrial revolution has brought attendant disruption, and the fourth wave will be no different. We must remember this and use what we have learned to manage the change:
Focus on skills. Instead of focusing on the specific jobs that will appear or disappear, we should instead concentrate on the skills that will be needed, then educate, train and reskill the human workforce to leverage the new opportunities afforded by technology. HR departments, educational institutions and governments will be at the forefront of driving this.
Protect the disadvantaged. Experience points repeatedly to the need to protect the disadvantaged and create the time and means for them to adjust. As we have seen this year, it is more important than ever not to let inequalities create social groups who have lost all hope on the altar of progress.
Work together to create new ecosystems. Government will have a crucial role to play, along with business and civil society leaders, in driving the appropriate levels of collaboration, regulation and standards that will be needed to ensure that the fourth industrial revolution translates into economic growth and creates benefits for all.
I am not under any illusions this will be easy. Particularly in democracies, change will be hard and slow. It will require a mix of forward-looking policy-making, agile regulatory frameworks and – above all – effective partnerships that cross our organizational and national boundaries. Politics, rather than technology, will determine the pace of change.
Denmark is rightfully an often quoted example here, where its “flexicurity” system allows for a high degree of labour law flexibility, while offering citizens a safety net of benefits, training and reskilling.
Despite the exponential pace of technological change, we should not forget the all-important role of time. While the changes ahead will be momentous – indeed, revolutionary – they will not land as a big bang. On the contrary, they will likely take place over many decades. We have time, therefore, to adjust; as individuals, as companies and as societies. For sure, this is no reason to wait and see, but rather one to get to work and create the new future.

Create jobs in the age of robots and low growth

The growth economy suffers from a productivity paradox. Corporations compete to reduce the time and effort that goes into production processes, which is generally seen as a sign of efficiency, but in reality has a troubling outcome.
Unless more stuff is produced and consumed, people lose their jobs as the same output can be achieved with fewer workers. This is why so many well-meaning people around the world fear the prospect of low growth, even when they recognise that the current form of industrial development is destroying the social fabric and natural ecosystems of societies.
Such a structural unemployment problem is further compounded by the real prospect of massive automation replacing humans in production chains. A study by Oxford University predicts that robots are likely to displace no less than 50% of jobs in the US and Europe in the next 20 years.
Machines are putting people out of work in emerging economies too, including in China, which has long been the global job creator. According to Martin Ford, bestselling author of The Rise of the Robots, most routine jobs are becoming obsolete. As Ford says: “We must decide, now, whether the future will see broad based prosperity or catastrophic levels of inequality and economic insecurity.”
Unless societies change their approach to growth and development, they will not only end up with a broken planet and conflict-ridden communities: they will also face massive unemployment. There is no way the vertical structures of production dominating the current industrial system will generate enough jobs, let alone good jobs, to satisfy our needs.
The only way out of this predicament is to empower people to become producers and consumers at the same time. Or what I call “prosumers”. They must be capable of making most of the things they need through local systems of co-production and networks of small businesses.
And certain technologies can help develop a new form of post-industrial artisanship. Thanks to open source hardware, small business networks are building computers. With the assistance of 3D printers, artisans are taking on big business in a way that may challenge conventional assumptions about the efficiency of economies of scale.
Changing the focus
Rethinking work is crucial for industrialised economies as well as emerging economies, where job losses are being felt even in the presence of substantial, although diminishing, economic growth.
For instance, Africa is expected to reach a record 2.8 billion people by 2060, becoming the largest continent in the world. Most of these people will be young and thirsty for work. There is no way the continent will create decent employment opportunities by adopting an industrial model that’s already eliminating jobs globally.
A new approach is needed. The wellbeing economy forces us to rethink the nature of work by shifting the focus from the quantity of the production-consumption cycle to the quality of the relations underpinning the economic system. The growth economy pushes for mass consumption through an impersonal relationship between producers and consumers (which can be more efficiently performed by robots). The wellbeing economy must embrace a customised approach to economic exchanges, in which the quality of the human interaction determines the value.
Take a doctor as an example. From the perspective of the growth economy, the best doctor is the one who sees as many patients as possible in the shortest period of time. In theory, this function could even be performed by a robot. By contrast, in the wellbeing economy, the personal attention invested in the doctor-patient relationship becomes the key to value creation.
Intuitively, all of us associate the value of good health care with the personal attention that comes with it. The same holds true for education: any reasonable person would frown upon a school that asked teachers to teach faster and faster to an ever larger number of students. Common sense tells us that value is being lost through the mass consumption of these relationships, even when profits (both for the clinic and the school) may increase.
Rethinking productivity
Productivity is certainly a good thing, but it should not be embraced blindly. Above all, we need to ask what productivity is for.
The economy is nothing other than a system of social relations. If productivity undermines those relations, the economy itself crumbles, even when profits (at least for someone) may go up.
Many professional activities based on the quality of the performance cannot, by their own nature, become more productive. Asking an orchestra to play faster would not increase productivity: it would simply turn a melodic experience into a nightmare. The same applies to painters, dancers, barbers, teachers, nurses and the like.
This is why it still takes the same number of people to play a Mozart opera today as it did when Mozart first composed it. So, what about extending the same intuition to the rest of the economy?
What if the mechanic of the future won’t be a robot churning out new spare parts every second, but rather a qualified artisan who helps us fix, upgrade and upcycle our vehicle for the entire duration of its life? What if the engineer of the future won’t be a remote computer taking care of our household appliances, but a personal advisor helping households produce their own energy, optimise the use of natural resources, from water to vegetable gardens, and make sustainable use of building materials?
A circular economy in which production is optimised and no waste is produced can be achieved in two ways. It can be achieved through robot-dominated production chains or by a horizontal network of qualified artisans and small businesses operating as a collaborative organism. In the first case, a few people will dominate the global economy. In the second case, everyone will be empowered.
A wellbeing economy cannot exist without empowerment, access and equal opportunities for personal and collective fulfilment. Developing countries can now leapfrog to such a new trajectory without being held back by a model of industrialisation that’s increasingly unfit for the 21st century.

 

Fantastically Rising Inequality? Piketty Errs

Inequality is measured by the Gini coefficient, ranging from zero for total equality to 1 for total inequality. Consumption surveys of the National Sample Survey Organisation show a modest Gini increase from 0.3 in 1983 to 0.36 in 2011-12. Businesses that were bound hand and foot during the licence-permit raj were unbound, and soared. So the increased inequality is unsurprising.
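For readers unfamiliar with the measure, here is a minimal sketch of how a Gini coefficient can be computed from a list of household consumption values. The figures are invented purely for illustration; they are not NSSO data, and the function uses the standard mean-absolute-difference formula rather than the survey organisation’s exact methodology.

def gini(values):
    # Gini via the mean absolute difference: G = sum|xi - xj| / (2 * n^2 * mean)
    n = len(values)
    mean = sum(values) / n
    total_abs_diff = sum(abs(a - b) for a in values for b in values)
    return total_abs_diff / (2 * n * n * mean)

equal_village = [100, 100, 100, 100, 100]   # everyone consumes the same
unequal_village = [40, 60, 80, 120, 500]    # one household far ahead of the rest

print(round(gini(equal_village), 2))    # 0.0 -> total equality
print(round(gini(unequal_village), 2))  # about 0.49 -> markedly unequal

On this scale, a shift from 0.30 to 0.36 represents a real but moderate widening of the consumption distribution.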
Many economists believe inequality has risen much faster. Thomas Piketty, guru of inequality, and Lucas Chancel have produced a new paper, “Indian Income Inequality, 1922-2014: from British Raj to Billionaire Raj?” Using a complex mix of data on income tax, national accounts and household surveys, this concludes that the top 1% of income earners in 2014 earned 22% of national income, the highest share since 1922 when income tax was introduced.
Piketty and Chancel note that consumption as reported in NSSO surveys is barely 40% of consumption as shown in GDP data: the ratio has fallen steeply over recent decades. Of the many explanations, Piketty emphasises the thesis that the rich hide their consumption from surveyors for fear of tax consequences. The non-rich have no such compulsion.
Piketty says India has no income surveys over long periods. So he uses heroic but questionable assumptions to construct income estimates, admitting these are “fraught with methodological and conceptual difficulties.” He uses income tax data for the richest 5%, and adjusted NSSO consumption data for others. This yields a picture of fantastically rising inequality.
I won’t bore readers with Piketty’s technical flaws. But consider this. Piketty says the rich lie to NSSO surveyors. So for them he switches to tax data. Ergo, he thinks that the rich tell the truth to the taxman, but lie to surveyors! Laugh aloud, please.
The poor lie to surveyors too. Exit polls for elections vary widely from one another and from the actual outcome. Why? As one voter told pollsters, “Why should I tell you truthfully? What will I get out of it?”
Economist Devesh Kapur once conducted a survey of Dalits, using a local facilitator. A Dalit being surveyed said he was in very bad shape. Then, by coincidence, it came to light there was a marriage proposal between the families of the Dalit and the facilitator. The Dalit immediately declared that he had been lying, and was actually very well off! Clearly the non-rich will lie if it benefits them.
Once, most subsidies were universal. But in recent decades the central government has targeted those below the poverty line, who get much cheaper foodgrains, kerosene, the Antyodaya scheme and Indira Awas Yojana (cheap houses). State governments have additional targeted schemes. So, villagers have strong incentives to understate income to maximise benefits. Piketty is wrong to think only the rich lie to surveyors, not the non-rich.
Indeed, the incentives of the rich to lie have fallen with lower tax rates in recent decades, even as the incentives of the poor have gone the other way. This dents, perhaps destroys, the assumptions and results of Piketty’s fancy economic modelling.
He says the income share of the top 1% fell sharply to 6% under Nehru-Indira socialism from 1950 to the mid-1980s, and then rose steeply, especially in the era of economic liberalisation after 1991. This implies that ordinary folk were treated better in the socialist era and much worse in the liberalisation era. Dead wrong. The poverty ratio did not fall at all between 1947 and 1977, while the population almost doubled. So, the absolute number of poor almost doubled. Piketty ignores this terrible outcome of “garibi hatao” socialism.
By contrast, fast growth induced by economic liberalisation raised 138 million people above the poverty line between 2004 and 2011.
Inegalitarian liberalisation was far better for the poor than egalitarian socialism. Liberalisation provided new opportunities, which mattered more than equality. All rural areas have much lower Ginis (hence more equality) than urban areas, yet all migration is from villages to towns. People vote with their feet for opportunity over equality. Bihar and Assam have the lowest Ginis, but this denotes terrible stagnation, not an egalitarian paradise. Biharis migrate in millions to unequal but richer states. The new opportunities of liberalisation have created 3,000 Dalit millionaires. This form of inequality merits cheers, not criticism.
Piketty fails to consider inequality of opportunity, India’s worst curse. People with skills and global connectivity have soared. Those without have been left behind. India needs quality schools, health centres, broadband and other infrastructure in every village. It needs uncorrupt, skilled and accountable officials. That will finally improve equality of opportunity. Soaking the rich won’t, as proved by the socialist era.

India’s Indigenous Intellectual Traditions & the Pseudo-Intellectual Left

What explains the refusal of India’s leftists to honour what is honourable in Indian history and traditions? India has one of the greatest intellectual traditions in the world and it has nothing to do with modern Indian leftist scholars and writers. It is the tradition of numerous yogis, sages and seers going back to the Vedas, extending through Vedanta, Buddhism and related dharmic traditions to their many exponents today.
Dharmic traditions teach us how to develop the mind in the highest sense of universal consciousness, not simply logic and conceptual thought. India’s great minds, centuries ago, produced the many paths of yoga and the largest variety of exalted spiritual philosophies and psychologies in the world. And their teachings remain alive and vibrant even today, spreading globally.
Yet, India’s dharmic tradition has not just addressed consciousness and spirituality, but has also produced a vast literature on art, science, medicine, mathematics and politics – all the main domains of thought and culture.
At the turn of the twentieth century, Swami Vivekananda transformed world thinking, introducing yoga, meditation and higher states of consciousness at a time before Einstein had discovered the relativity of time and space and the illusory nature of physical reality, something long taught in Vedanta and Buddhism.
Sri Aurobindo unfolded the idea of a higher evolution of consciousness in humanity and produced Savitri, the longest blank verse poem in the English language, revealing transformative yogic secrets that the West had yet to conceive. Yet, many of these great Indic thinkers wrote in Sanskrit or regional languages of India and have not been properly noted, much less studied.
Vedantic teachers like Swami Chinmayananda and Swami Dayananda have guided India in recent decades, commenting on cultural as well as spiritual affairs, using English as their main language of expression, so that the modern audience can easily understand them. Ram Swarup and Sitaram Goel produced excellent critiques of communism and Western religious fundamentalism.
New Yoga teachings have come out from India’s modern gurus, too numerous to mention, and there is now a detailed modern literature on Ayurvedic medicine in English. New books on India’s past have been written by important archaeologists and historians, uncovering the depth and antiquity of India’s many-sided civilisation.
Meanwhile, there is a dynamic new generation of insightful and articulate Indic/dharmic writers with new books and articles, and active in the social media, including Sanjeev Sanyal, Hindol Sengupta, Vamsee Juluri, Tufail Ahmad, and Amish Tripathi. Prime Minister Narendra Modi has been an important part of this intellectual/media awakening of pro-India scholars and writers, who honour the profound traditions of the country going back to ancient times.
The Left’s false claim to intellectual superiority
India’s Left has long claimed that Hindus are not intellectual and are unscientific, mindlessly repeating old racist colonial and missionary propaganda. Yet the Left has not produced any original thinkers, much less sages. It hasn’t even understood India’s own vast culture, which is the saddest commentary on its endeavours. India’s leftist scholars are largely Lord Macaulay’s children, promoting Western thought, disowning India’s older and more extensive cultural heritage.
India’s Left has no understanding of higher states of consciousness, as clearly explained in the dharmic traditions, or any interest in exploring them. It is wedded to gross materialism and physical reality, preferring to write about sex and politics, not anything transcendent. While traditional Hindu thought recognises seven chakras from basic human urges to the highest cosmic consciousness, leftist writers are happy to wallow in the lower one or two, as if they were contributing something exalted to the world.
While modern physics is embracing the idea of cosmic consciousness and great physicists like Oppenheimer have quoted the Bhagavad Gita, India’s Left is firmly caught in the outer world of maya, which it does not question. It has little sense of cosmology and not much vision beyond political propaganda. Yet India’s scientists, such as Subhash Kak and George Sudarshan, honour their own spiritual traditions, and they are not products of the Left.
India’s leftists seldom learn Sanskrit or study the great philosophers, thinkers and poets of the country. While they can quote Shakespeare, they deem it beneath their dignity to honour Kalidas. They cannot examine the Ramayana or Mahabharata except in terms of Marxist or Freudian theories. They may discuss women’s rights but have no experience of India’s powerful traditions of Goddess worship. They are like the children of the old British Raj, for whom anything Indian, particularly Hindu, is primitive superstition to be frowned upon.
Indian immigrants now make up the highest strata of Western society in terms of education and affluence, comprising doctors, engineers, scientists, and software developers, most of whom are respectful of India’s spiritual traditions. They are not products of the Left either.
India’s leftists, meanwhile, take academic posts both in India and the West, from which they can take potshots at their own culture and pretend to be wise while drawing comfortable salaries from the very governments they like to criticise. They would never practise yoga, mantra or meditation, as people in the West are now doing more and more – including many thinkers and innovators. Note the example of Steve Jobs of Apple, who carried Paramahansa Yogananda’s Autobiography of a Yogi with him, and probably never heard of Romila Thapar, Ram Guha or Irfan Habib.
Intellectual arrogance of the Left
The problem is that India’s leftist intellectuals are products of the ego-mind, what is called the rajasic buddhi in yogic thought, which is marked by intellectual arrogance. Without first learning deep meditation, one cannot go beyond the prejudices of the outer intellect and its attachment to name, form and personality. One needs to become silent and receptive within in order to truly know oneself and the universe. This teaching has been clearly articulated since the ancient Upanishads set it out in a series of inspired dialogues and debates more than 3,000 years ago.
It is time for India’s leftist intellectuals to honour their own profound dharmic traditions. Then they might be capable of something more original and transformative than to imitate the superficial views of the western leftists, which is their current state of affairs. It might give them better ethical rules of behaviour to emulate as well.
The role of India’s true intelligentsia should be to sustain India’s cultural unity, spirituality and creativity, for the nation and the world – not trying to replace their own venerable traditions with worn out leftist agendas that have failed everywhere they have been implemented.

Can Game Theory Tell Whether We Are Heading for Nuclear War?

“Are we headed for a nuclear war?” It’s the question hanging over, well, basically everyone these days, as North Korea flaunts new developments in its nuclear weapons program, threatening the United States, and President Trump promises “fire and fury” in response. Those tensions seemed to be easing when Trump wrote on Twitter that “Kim Jong Un of North Korea made a very wise and well reasoned decision,” possibly referencing a seeming pause in Pyongyang’s anti-U.S. threats.
But the episode remains frightening, and it’s all the more so because making predictions about nuclear war is deeply difficult. While history is full of case studies about what causes nation states to launch conventional war, the U.S. bombings of Hiroshima and Nagasaki are (thankfully) the only examples of atomic attacks — and those were with weapons orders of magnitude less powerful than the current nuclear arsenal. That lack of historical precedent makes it hard for analysts to reason about nuclear conflicts and how to stop them.
That’s where game theory comes in. Game theory uses mathematical models to study conflict and cooperation between rational decision-makers.
“Game theory has been used to think about military issues since the beginning of the field in the 1940s,” said Tim Roughgarden, a professor of computer science at Stanford University who focuses on game-theoretic questions. He is the author, most recently, of “Twenty Lectures on Algorithmic Game Theory,” and won the Gödel Prize in 2012 for his work on routing traffic in large-scale communication networks to optimize performance of a congested network.
Let us see how game theory can be used to help us understand war, nation states’ actions, and the current tension between the United States and North Korea.
What is game theory?
Game theory is a field that involves reasoning mathematically about what happens when you have different actors who are strategic, who have different objectives, and what might happen when you have those actors in the same environment.
Has it always been used to think about military strategy?
The field has its roots in the 1920s. In some ways, there were mathematicians back then thinking about it in terms of poker and games of chance. But John von Neumann and Oskar Morgenstern’s book, “Theory of Games and Economic Behavior,” published in 1944, marked the birth of the field as an independent subject. Von Neumann also worked for U.S. think tanks in the context of Cold War strategy in the ’40s and ’50s, so game theory has really been used to think about military issues since the beginning of the field in the 1940s.
How is game theory useful in understanding nuclear war?
One thing game theory is useful for is providing very clean mathematical examples, parables almost, that help you articulate and reason about real-world situations in a logical way. They help you think through who the actors are, what their preferences are, which actions they can take and what possible outcomes could occur.
One of the most famous ones is the “Prisoner’s Dilemma.” The story goes that there are two prisoners accused of a crime. Authorities put them in different rooms and interrogate them separately, so they can’t communicate with each other. The setup supposes that each can do one of two things: remain silent or betray their partner. If both remain silent, they both get a light jail sentence. If both betray the other, they each get more significant sentences. If one remains silent, and one betrays, the traitor goes free and the other gets a particularly stiff sentence. Each prisoner has every incentive to betray their colleague, even though the best outcome for them collectively is to both remain silent.
Nuclear war — to attack or not to attack — has a prisoner’s dilemma-like aspect to it. Each player has the opportunity to screw over the other. But there’s an important caveat: The prisoner’s dilemma assumes there is no future. If you’re just playing once, your dominant strategy would be to betray — or attack. But in a repeated prisoner’s dilemma, where players — say the U.S. and North Korea — are in long-term interactions, they’re reasoning not just about today but about the chances of retaliation tomorrow. In the presence of a credible threat of retaliation, now each country has an incentive to cooperate — to not attack. They act against their own interest in the short-term because it assures them no retaliation.
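The contrast between the one-shot and the repeated game can be written out in a few lines. The payoff numbers below are invented purely for illustration, not estimates of any real scenario, and the opponent is assumed to play a simple “grim trigger” strategy (cooperate until the other side defects once, then defect forever).

# Toy prisoner's-dilemma payoffs (illustrative numbers only).
# Each entry is (row player's payoff, column player's payoff); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint
    ("cooperate", "defect"):    (0, 5),  # the attacker gains, the victim loses badly
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual attack: worse for both than mutual restraint
}
# One-shot game: whatever the other side does, defecting pays more (5 > 3 and 1 > 0),
# so defection is the dominant strategy even though (3, 3) beats (1, 1) for both players.

# Repeated game with a discount factor d on future rounds, facing a grim-trigger opponent:
def payoff_always_cooperate(d, rounds=50):
    return sum(3 * d**t for t in range(rounds))

def payoff_defect_once(d, rounds=50):
    return 5 + sum(1 * d**t for t in range(1, rounds))  # one-off gain, then permanent retaliation

for d in (0.2, 0.9):  # how heavily tomorrow weighs in today's decision
    best = "cooperate" if payoff_always_cooperate(d) > payoff_defect_once(d) else "defect"
    print("discount factor", d, "->", best)

With a low discount factor (the future barely matters) defection still wins; with a high one, the credible threat of retaliation makes continued cooperation the better long-run strategy, which is the logic sketched above.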
This is what happened with the U.S. and the Soviet Union during the Cold War. And this thinking was used as a way to reason about the stockpiling of nuclear weapons. Game theory can be used to provide one mathematical justification for the argument that nuclear buildup makes the world a more peaceful place.
So are the U.S. and North Korea in a repeated prisoner’s dilemma?
To the extent that the scenario today with North Korea is analogous to that with the Soviet Union, the same parables that were helpful for thinking about strategy then could be helpful today as well. But the current situation has not quite reached a prisoner’s dilemma because North Korea doesn’t yet have a symmetric ability to retaliate against us as we do against them. They’re developing that ability so that the situation becomes a prisoner’s dilemma.
How does game theory suggest the U.S. should act?
There are situations where you apply game theory and it gives you crisp, clear prescriptions about what you should do. In principle you could use game theory to decide the best way of bluffing in a game like poker. But in real-world applications it doesn’t always tell you what you should do. It can give you the possible outcomes, but it doesn’t give you a whole lot of advice about how you should guide things to the endgame you want.
Here’s another famous example: Imagine two people want to go out to dinner and there are two restaurants in town—one French, one Italian. The first order of preference for both people is to have dinner with each other, as opposed to eating alone. But one person prefers French and the other prefers Italian. So there are two possible endgames worth striving for — where both go to the French restaurant or both go to the Italian restaurant. But it’s not at all clear how you get there.
Game theory is very helpful, though, when you have a clear belief about what the other side is going to do. This is the notion of a “best response.” If you know that another country will bow to a threat rather than retaliate, then there’s a much stronger case for issuing a threat. If you don’t know how they’ll act, reasoning about a best response becomes much more difficult.
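Continuing the illustration with made-up payoffs, a short script can enumerate best responses and find the pure-strategy equilibria of the dinner game described above. The point it makes is that the theory identifies two acceptable endgames without saying how to reach either, and that a best response is only computable once you commit to a belief about the other player.

# Coordination ("dinner") game with illustrative payoffs: both prefer eating together,
# but player 0 prefers French and player 1 prefers Italian.
PAYOFFS = {
    ("French",  "French"):  (2, 1),
    ("French",  "Italian"): (0, 0),
    ("Italian", "French"):  (0, 0),
    ("Italian", "Italian"): (1, 2),
}
CHOICES = ("French", "Italian")

def best_responses(player, other_choice):
    # Choices that maximise this player's payoff, given a belief about the other's choice.
    def payoff(c):
        pair = (c, other_choice) if player == 0 else (other_choice, c)
        return PAYOFFS[pair][player]
    top = max(payoff(c) for c in CHOICES)
    return [c for c in CHOICES if payoff(c) == top]

# A pure-strategy equilibrium is a pair in which each choice is a best response to the other.
equilibria = [(a, b) for a in CHOICES for b in CHOICES
              if a in best_responses(0, b) and b in best_responses(1, a)]
print(equilibria)  # [('French', 'French'), ('Italian', 'Italian')]: two equilibria, no way to choose between them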
This is one aspect in the situation with North Korea that has everyone a bit on edge: Neither side has a very good understanding of the other. Kim is young, there’s not been much direct interaction between him and the U.S. government and not a lot of confidence in understanding how he might act in different situations. And given the things Trump has been saying, North Korea might not be sure how the U.S. will react either.

 

House is a Feeling

Japanese architect Sou Fujimoto points in a new direction in the way his houses wrestle with space and convention. Sometimes, in order to understand architecture, one need not look out of the window; it might help to look under a potato chip instead. That’s what Sou Fujimoto showed when he lined up 40 found objects at the Chicago Architecture Biennial in 2015. On 5×5 inch ply blocks, he placed everyday objects — from nails to ping pong balls — with tiny human figures around these exhibits. His block of potato chips read: “It should be possible to make architecture like hills… layering of hills is architecture”.
For this Japanese architect, who works more with feelings than brick and mortar, it is no surprise to hear him chant “nature” and “architecture” in one breath. “I was born and brought up in Hokkaido, an island filled with mountains and forests. When I moved to Tokyo to join the university, I could see the contrast between the artificial forest and the natural world. My challenge is to find a relationship between the two,” says the 46-year-old. Fujimoto is in Delhi for India Arch Dialogue, an FCDI initiative, and will be speaking at the India Design 2017 on February 17. This is his first visit to India, where he will be sharing his thoughts on his “visions for tomorrow”.
The youngest architect to design the Serpentine Gallery Pavilion in London in 2013, Fujimoto drew out a nimbus cloud grid of steel poles, which melted into the greenery of Kensington Gardens. With spaces blurring within and without, this apparition in white was an exploration of his idea of nature and architecture. The House NA in Tokyo was another experiment. Done completely in glass in a narrow plot for a young couple, it delinked ideas of privacy and relationships. “I wanted to recreate architecture not by building a box, but layering the volume into smaller spaces, where meaning and function blur. There is a lot of potential and hidden meaning to a space, and if you communicate with that space, it reacts differently,” he says.
His theory of Primitive Future goes back to the time when he began Sou Fujimoto Architects in 2000. “After graduation, I didn’t have any projects. I began wondering about my vision for the future, and I realised if you think only about the future, you don’t have a starting point. For architecture, it’s a good place to think of how people lived centuries ago. For thousands of years, something fundamental hasn’t changed, yet something has, and it’s in these contrasts that architecture lies. Future is not only change, it is also about roots,” he says.
His Final Wooden House in Kumamoto, Japan, is like a baggage tag for his theory, walking the mile between function and form. In this weekend home, he used one-foot thick timber blocks stacked organically, creating nooks and hollows, where the roof becomes the seat, and the floor becomes the ceiling. “It was my attempt to redefine scale and see what happens,” he says, assuring us that it’s not as uncomfortable as it looks.
His most recent project, Mille Arbres (Thousand Trees), in Paris, is a revitalisation scheme of mixed-use development across 55,000 sqm, combining commercial and housing space, where nature and architecture co-exist. He continues to do small houses as well. “I enjoy the diversity and scale of these programmes. If we only do one kind of project, we don’t see the architect of a city as a whole,” says Fujimoto.

Europe on Edge of Bankruptcy!

France ran out of money last Tuesday — and within days, so will the rest of Europe. At first glance, not a lot was going on last Tuesday. Priti Patel was desperately trying to hang onto her job, unsuccessfully as it turned out. Donald Trump was mouthing off on a trip to South Korea. Manchester City were rumoured to be bidding for an incredibly expensive footballer, and somebody somewhere was inevitably moaning on about Brexit. It was all fairly run-of-the-mill stuff.

But for number-crunchers, something interesting happened. It was the day when France ran out of money. As of Nov. 7, all the money the government raises through its taxes – and this being France, there are literally dozens of them – had been spent. The rest of the year is financed completely on tick.

And yet France is far from alone. All the main European countries, the UK included, are running out of tax revenue well before the year is over. That is worrying for three reasons. It is a reminder that spending is still way too high. It tells us that governments have failed to curb deficits. And it is a warning that next time there is a recession governments won’t have any room to respond with a fiscal boost.

There are lots of dry ways of pointing out that governments are spending a lot more money than they raise in tax revenue. Economists and statisticians wheel out debt to GDP ratios and chancellors and finance ministers set targets for deficit reduction. Those, however, usually come in hard-to-follow percentages, or else the billions and billions involved pile up so quickly that most of us simply glaze over. But in France, the Institut Molinari has come up with a very neat way of illustrating the issue in simple terms. It works out the moment in the calendar after which everything the government does has to be financed through borrowing. If you wanted to, you could call it the day the money runs out.
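
The mechanics of that calculation are simple enough to sketch. The figures in the snippet below are invented for illustration and are not the Institut Molinari’s own data; the assumption is simply that tax revenue covers the first revenue-to-spending share of the year’s outlays, and everything after that date is borrowed.

import datetime

def day_money_runs_out(revenue, spending, year=2017):
    # Day after which the year's remaining spending can only be financed by borrowing,
    # assuming revenue pays for the first revenue/spending share of the year.
    days_in_year = 366 if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) else 365
    funded_days = int(revenue / spending * days_in_year)
    return datetime.date(year, 1, 1) + datetime.timedelta(days=funded_days)

print(day_money_runs_out(48, 55))  # a deficit state: the money runs out in mid-November
print(day_money_runs_out(57, 55))  # a small surplus: the date slips into the following January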

So how’s that going? France, perhaps not very surprisingly, turns out to be the country that is out of cash first. A government which last managed to balance the books in 1980 used up all its money with 55 days of the year still left. That was a day earlier than the year before, and four days earlier than back in 2014. France is not only living beyond its means, but it is now doing so at an accelerating pace. And that was despite the fact that taxes and social security charges have gone up. Once those charges are combined, the state is raking in 53 per cent of GDP in revenues – its problem is that it then spends an even more massive 56 per cent of GDP over the same period.

But it was far from alone. Spain ran out of money on Saturday. Over in Romania, the bank account was empty as of yesterday. Next week, Poland will be out of cash, followed by Italy, which will be officially skint on Nov 26. In the UK, our politicians will have officially spent all the income tax, corporation tax, VAT and fuel duty they take from us by Dec 7.

Across Europe as a whole, central governments will be out of money on Dec 6. Only four EU countries manage to make it through Christmas and into the new year still in the black. They are Cyprus, Malta, Germany, and a surprisingly thrifty Sweden, which gets all the way to Jan 20. They are the exception, however. The norm is now for spending to be way ahead of the money collected in taxes.

That is not always a problem, of course. Very few people would argue that we should go back to the days before Keynes, when any kind of deficit, even in the most dire of circumstances, was regarded as a sign that the world was about to end. In a recession, it makes sense for governments to borrow a bit more, and get businesses moving again and people back into work. Nor is there necessarily anything wrong with government borrowing to invest, although a lot of what it “invests” in may not necessarily have the returns that are promised.

But there is a difference between that and huge and persistent deficits. The European economy, helped along by a couple of trillion euros of printed money, is doing OK this year. The EU as a whole is forecast to expand by 2.3 per cent, the fastest pace in a decade. The deficits are not an emergency response to a sudden downturn. They are built into the system. That is worrying – for three reasons.

First, across Europe, governments are living way beyond their means. Those deficits are not coping with a sudden emergency, and they are not paying for investment that will help them grow faster in the future. The most persistent deficits are in social security schemes (and many would be even worse if pension liabilities were properly accounted for). Is that sustainable indefinitely? It takes heroic faith in finance ministries and central banks to believe it is.

Next, even though economies have mostly recovered from the crash, the deficits are still piling up, with no plan for paying them back. If you look at debt-to-GDP ratios, they are spiralling out of control as well. For the EU as a whole, the ratio stands at 89 per cent. Greece is on an alarming 176 per cent of GDP, while Italy is on 137 per cent and France and Spain are just a fraction under 100 per cent. When are they going to start to be reduced? Right now, the answer is simple. Never.

Finally, governments have run out of room for any kind of fiscal boost when there is another recession. The economy will inevitably turn down at some point, and there could be a major crash. When it happens, you’d hope the government could respond with increased spending. But it can’t do that if it is already locked into permanent deficits.

From now until the end of this year, most of Europe will be living on tick. Sure, that is sustainable right now. The markets are benign, and the European Central Bank is still buying government bonds by the cartload. But sooner or later, that debt will catch up with them – and that means Nov. 7 was a far more significant milestone than it may have appeared on the day.

Democracy’s Path in South Asia

The fact that there are only 193 member states of the United Nations gives political scientists a problem. Their research into the way nation-states work is highly constrained by the small number of cases they have to study. It is not easy to build explanatory models on the basis of such a limited sample. It is one reason why the field of comparative politics has produced so few reliable predictions of how nation-states will behave.
The situation of India and Pakistan in 1947 does, however, open up some intriguing possibilities for research. In 1947, the two new countries had much in common. After a shared experience of colonialism, they achieved independence at the same time. And as they tried to build new political systems, both India and Pakistan were governed by a single party that had opposed British rule.
Congress and the Muslim League were faced with identical tasks: writing a new constitution and uniting a population with a low standard of living. Yet the two countries took very different paths. So why has Pakistan’s democratic development been so much more troubled than India’s? What accounts for the different trajectories they took?
Some have cited Mohammad Ali Jinnah’s early death and his assumption of viceregal powers as key factors. They argue that Pakistan either needed someone who would embed a distribution of power and authority or a longer-standing charismatic leader who could overcome the initial challenges of nation-building. To have a strong leader who died early was the worst of both worlds.
Despite his legal background, Jinnah showed little interest in the separation of powers. Nehru, by contrast, was more of a Westernised liberal. But explanations that rely on the personality of individuals have limited utility. And over many years academics such as Ian Talbot, Christophe Jaffrelot, Maya Tudor, Katherine Adeney and Andrew Wyatt have tried to identify deeper, underlying factors behind the different rates of democratic progress in India and Pakistan. These explanations can be split into two categories — those that relate to the pre-Independence period and those that cite factors that emerged after 1947.
Before Partition, India’s Muslim community had good reason to be suspicious of democracy. As the minority, it always feared that any British attempt to introduce elections would result in the Muslims losing. But there was more to it than that. While Congress to a significant degree represented the aspirations of middle-class Indians seeking a stake in their society, the Muslim League was controlled by landowners who wanted to keep the privileges hardwired into their outlook since the Mughal era. Many of those powerful families remain strong to this day. Class politics have played an important role in both countries.
There were also important differences in the way Britain treated different parts of the subcontinent. Many of the places that became Pakistan were colonised primarily for security rather than economic reasons. As a result, they had less experience of representative government in the colonial era.
Other factors kicked in after Independence. For all the talk of equal treatment in the way Partition was handled, Pakistan got the worst deal and had to start creating government structures from scratch. The early accounts of the post-Independence ministries run by men using packing cases for desks show how big the challenge was. India never faced that.
And the early Pakistanis also had to worry about building an army. Because they feared India would try to reverse Partition, there was an urgent need to create a force capable of defending the country. As a result, the military, from the outset, absorbed more than half of all public expenditure and consequently became a disproportionately powerful force in the land.
Some have pointed to Pakistan’s diversity as a factor in its lack of democratic development, on the grounds that it is difficult to build a cohesive, stable nation when there are such sharp regional rivalries. Against that, it can be argued that India has even more diversity than Pakistan. But that perhaps misses the point that the division of Pakistan into two separate geographic entities posed a particularly difficult problem. The fight to keep hold of East Pakistan — and the shock of losing it — significantly distorted Pakistan’s political progress.
One factor Western historians of Pakistan tend not to discuss is the role of Islam in influencing the country’s political development. Since 9/11, there has been increasing attention paid to the idea that Islam is in some way incompatible with democracy.
There is now a vast literature dealing with this issue but, to put it briefly, the significant democratic progress in countries such as Tunisia, Indonesia and others in Southeast Asia undermines the assertion that Islam and democracy are necessarily at odds with each other. And as the case of Pakistan indicates, non-religious historical factors can explain stunted democratic development.

Application of Chomsky’s Principles

Hope has been replaced by resignation and speculation. Corruption is accepted as a norm, and when successfully practised by persons elected to high offices of public trust, it is respected as a symbol of power, privilege and patronage. Everybody loves a winner! Losers are fascinated and seduced by the possibilities of sharing in the patronage. Expectations are minimised. Electoral victories are assured except for the risk of losing to ‘friendly opposition’. Critics can let off steam as long as they don’t disturb the furniture and wake up the people.
In 1776, Adam Smith observed that the principal architects of policy make sure their own interests are very well cared for, however grievous the impact on the people. They follow the “vile maxim” of “all for ourselves and nothing for anyone else”. Almost 250 years later, our democratically elected political businessmen fit this description perfectly. They are bewildered and deeply offended by the injustice of being politically embarrassed just because of massive corruption.
Let us apply Chomsky’s 10 principles to the current situation of several countries:
(i) Reducing democracy in order to control the people. A corporate capitalist economic model is more compatible with feudal attitudes and authoritarian structures than with democratic and participatory processes. A security state more or less wholly concerned with enemies, emergencies and wars on terror tends to subordinate human and political rights to state (elite) interests. The containment of ‘excessive’ democracy must continue in the name of strengthening democracy.

(ii) Shaping the ideology by interpreting religion and patriotism in a way that disguises and serves the interests of the elite instead of the people. The media and the education system are required to play key roles in building and selling appropriate narratives for this purpose.

(iii) Redesigning the economy to equate growth and development with increasing inequality, impoverishment and insecurity which are the inevitable consequences of a predatory concentration of wealth. The small but vibrant middle class helps disguise the depth of mass poverty.

(iv) Shifting the burden of supporting a class-based economy from the rich to the middle and poorer classes. This requires predominantly indirect taxation, tax exemptions, loan write-offs, inflationary financing, debts to pay off debts, disproportionate defence expenditures, building infrastructure without human resource development, pervasive corruption and an undocumented black economy which sustains an impoverished underclass.

(v) Attacking solidarity. Speaking truth to power threatens no one. Speaking truth to each other, however, threatens elite structures and interests with an informed citizenry aware of its power against those who exploit it. Any people’s movement is, accordingly, intolerable and must be co-opted, isolated or neutralised. In particular, education needs to be controlled and limited through conservative, religious and elite supervision.

(vi) Running the regulators. Those institutions that are required to protect consumers and the people against fraud and injustice need to be ‘captured’ so that elite interests are protected against the entitlements and encroachments of the people, including common consumers. The Welfare State must be limited to being a Nanny State to nurture the rich under the guise of a Security State. This requires controlling legislation, undermining the law and influencing, intimidating and ignoring the courts. It also requires a significant percentage of Pakistan’s elected legislators, both provincial and federal, to become dollar millionaires to pay for their electoral expenses — past, present and future — in return for serving elite and corrupt interests in the name of parliamentary democracy.

(vii) Engineering elections. Pakistan can teach the world. Concentration of wealth means concentration of power. This facilitates control over election officials, the costs of vote buying and defraying the expenses of constant electoral theatre over several months in which personalities instead of issues are discussed. Narratives and slogans help to eliminate scrutiny of mandates and candidates.

(viii) Keeping the rabble in line. Organised labour and socialist political thought, with all their shortcomings, are still among the true advocates and guarantors of the people’s interests. They need to be discredited as secular inventions against divine injunctions in support of private property. As long as the people are kept ignorant of their power they can be controlled, deceived, divided and co-opted into various elite vote banks against their own interests.

(ix) Manufacturing consent through the removal of hope for redress and the use of overwhelming narratives and state force against ‘disturbers of the peace’. This is the essence of ‘maintaining law and order’. The assistance of a complicit media, hired intellectuals, the deep state and criminally perverted politics is indispensable.

(x) Marginalising the people by ensuring that their representatives do not represent them, public opinion does not determine public policy, and the public relations industry distracts public attention from what is happening on a daily basis to the overwhelming majority of Pakistanis. Inculcating an other-worldly piety and philosophy among the people can also reconcile them to being victims in this world. In particular, it is important to ensure that reason, enlightened self-interest and a driving mutual compassion never inform the political thinking of the people.

Chomsky is not a cynic. Nor is he a pessimist. He is a sage who knows honest hope requires knowing reality, relentless struggle, and optimism regarding the eventual triumph of the public good over its many enemies.

Strategic Coherence Required in US Policies Toward India & Asia

 

President Donald J. Trump completes his first official trip to Asia, with stops in Japan, South Korea, China, Vietnam, and the Philippines. In his address to the APEC CEO Summit, he outlined his stamp on Asia statecraft, which includes a vision of upholding a “free and open Indo-Pacific.” However, the United States cannot achieve that goal without strong Asian partnerships—including with India.
Though India is not on the president’s Asia itinerary, the nomenclature alone—Indo-Pacific rather than Asia-Pacific—suggests that New Delhi rightly stands to play a central part in the Trump administration’s larger Asia strategy. Along with long-standing allies like Japan, South Korea and Australia, India offers democratic and economic ballast to deal with the rise of China’s power. Sadly, US economic policy appears disconnected from the administration’s broader strategic goal. For the Trump team to succeed in its ambition of building a network of Asian partners which share our values, including India, the White House will need to corral its economic policies to match its strategic pursuits.
It is worth noting that the pursuit of a free and open rules-based order in a larger Indo-Pacific region represents the most purposeful articulation to date from the administration on Asia. Secretary of Defense James Mattis uses the phrase. Last month, Secretary of State Rex W. Tillerson delivered an entire speech about the concept, in which he called the United States and India “two bookends of stability” who can together “foster greater prosperity and security” in this broad region.
The Indo-Pacific idea recognizes that a rising China has become more assertive as well as authoritarian, and it elevates Washington’s ties with New Delhi as an alternative model to all that Beijing represents. By expanding Asia’s geographic net to include the world’s largest democracy, this larger region encompasses a greater balance favoring rule of law, freedom of navigation, open trade, and democracy. We commend this vision and see it as entirely in line with the Barack Obama and George W. Bush approaches to India and Asia.
To elevate India’s role, make it a full partner in our Asian network, and enhance Washington’s relations with New Delhi, the administration should help India gain a seat at the tables from which it is absent. In the security realm, Mattis has this part right, focused on tighter integration with New Delhi in joint exercises, defense technology, and sharing security perspectives in the region. In diplomacy, Tillerson also understands the important role India can play, and he has called for a partnership with India to develop financing that can provide an alternative to the market-rate Chinese Belt and Road infrastructure loans, which have caught smaller countries like Sri Lanka in a potentially insurmountable debt trap.
But to date, the administration has said little about what Washington can do to advance these interests. The “Quad” grouping that adds Australia to the robust trilateral of India, Japan, and the United States appears on the verge of revival, a positive step. In addition to strengthening ties to our traditional Asian allies, the president could start by clearly stating support for cooperative economic institutions like the Asia-Pacific Economic Cooperation forum. He should call explicitly for APEC to offer membership to India. Asia’s third largest economy deserves to have a seat at the table, and it will help India to be more embedded in the premier regime focused on free and open trade in Asia.
To address the urgent need for infrastructure funding in the Asian region—a principal political and economic imperative —the president should also support a capital base expansion for the World Bank, something favored by countries around the world, but which Secretary of the Treasury Steven Mnuchin opposed during the Bank’s annual meeting this year. It’s hard to see where alternative support for development financing, especially financing for Asia’s massive infrastructure needs, will come from if Washington does not enable the World Bank to do more at the low interest rates that can actually help countries develop—and which offer a real alternative to the Belt and Road loans. In fact it is impossible to imagine China improving its Belt and Road loan terms unless it is faced with real competition from the United States, Japan, Europe and India.
In economic dialogues with India, the administration needs to keep its gaze on the strategic and not get buried in the transactional. At a time when China has emerged as the most powerful economic partner to virtually every country in Asia, including South Asia, we must have a stable strategic and non-contentious relationship with India. While we recognize India’s famously difficult stances on trade and market access questions, a narrow focus on the $24 billion trade deficit with India (compared to more than $300 billion with China), should not distract from this larger goal. Of course, we and India need to sort out market access problems and our difficulties with Indian intellectual property rights polices, but these questions are not strategic in nature.
Instead, we should identify realistic steps to expand trade and investment with India—recognizing the time horizon might well be longer than desired—and continue technical assistance to encourage New Delhi’s ongoing reforms, which will be the key to unleashing greater economic growth. The Commerce Department’s technical discussion with India on standards marks a great step in that direction. The trade deficit, a new favorite punching bag for the U.S. Trade Representative, does not.
To meaningfully support a “free and open Indo-Pacific,” the Trump administration will have to be creative in building broad Asian partnerships, especially with its India policy. We need all the allies we can muster. A strong, stable, democratic India committed to a rules-based order will indeed be a “bookend” for the region. Washington will have to alter its economic focus to get there.

 

An Idea Dead & Alive

It is strange, and yet not so strange. While Putin’s Russia ignores the centenary of the Russian Revolution, Xi Jinping’s China wants to keep its message alive. And there is a reason for that. World history is a history of big ideas. Big changes have always needed big ideas capable of appealing to the hearts and minds of the multitudes and energising them into action. Karl Marx, the originator of one such Big Idea — the theory of communism that envisioned a society based on equality and free of exploitation, and a state that would ultimately wither away — said it best. “Material force (violence used by guardians of the old social order) must be overthrown by material force; but theory also becomes a material force as soon as it has gripped the masses.”
A hundred years ago, Marx’s theory gripped the Bolsheviks in Russia, who, led by Vladimir Lenin, acted like a material force to overthrow the Tsarist empire and established the first communist-ruled state. The Russian Revolution influenced the spread of the Big Idea. In 1919, Lenin founded the Communist International or Comintern, a coalition of national communist parties that advocated world communism. Even though Joseph Stalin dissolved Comintern in 1943, the revolution in China in 1949, led by Mao Zedong, marked Marxist theory’s next major success. In the 1980s, communist parties in India used to proudly claim that “one-third of the world is already under socialism; and the rest of the world will follow.”
But where, a century later, is the Russian Revolution? Russia consigned it to history in 1991 when the Soviet Union, a child of the revolution born in 1922, died, and each of the 15 constituent “socialist republics” became independent nations. Russia itself overthrew communist party rule. Mikhail Gorbachev, the last communist ruler, had embarked on a reformist initiative called glasnost (openness) and perestroika (restructuring), but there was nothing left to restructure by the end of his five-year rule. Under Boris Yeltsin, his successor, Russia aggressively dismantled most parts of the communist state and economy. For a few years, it seemed the Russian state had actually “withered away”. Crony capitalism and corruption led to a massive transfer of national wealth to oligarchs. Hyper-inflation made life for most Russians miserable. Russia’s international glory faded. Journalist Artemy Troitsky, writing in Moscow News, has described those chaotic years thus: “If you want to see what a big, truly anarchic country is like — look no further than Yeltsin’s Russia. I called it ‘the land of unlimited impossibilities’ — people were free to do whatever they wanted.”
In came another Vladimir (Putin) in 2000. He has rescued and salvaged the Russian state through his strongman rule. He has attempted to make Russians proud again by reviving nationalism and the orthodox church at home and by militarily resisting the US in Europe and West Asia. According to Putin, the collapse of the Soviet Union was the “greatest geopolitical catastrophe of the 20th century.” However, he has not restored Russia’s continuity with its revolution. There has been no official commemoration of its centenary. Most young Russians I spoke to said they have no emotional connection to Lenin and his revolution.
It looks as if China is more interested than Russia in keeping alive the memory of the first Marxist-Leninist revolution.
Not surprising, because the Communist Party of China (CPC) continues to swear by Marxism-Leninism, although it also extols Mao, Deng Xiaoping and Xi Jinping. Soon after he became party chief in 2012, Xi told his colleagues that the Soviet Union had collapsed “because nobody was man enough to stand up and resist”. As is amply clear from his speech at the 19th CPC congress last month, where he was re-elected, Xi sees himself as someone who would “stand up and resist” any attempt or reform that could possibly lead to the end of the CPC rule. He has audaciously announced that China’s own “Two Centennial Goals” — 100 years of the founding of the CPC in 2021 and, in 2049, of Mao’s revolution that founded the People’s Republic of China — would serve as major milestones in the triumphant march of “socialism with Chinese characteristics in the new era”. With the US and Europe in decline, the world will surely look to follow, or create, new models of equitable development.
This raises the question: Whither Marx’s Big Idea? The short answer — it’s both dead and alive. All history-changing ideas undergo change themselves. China has changed Marx by Sinifying him. In Russia, I met several intellectuals who said, “Not everything about the revolution and the Soviet era was wrong. What was wrong was the horrific use of violence by the communist state against its own people, the brutal suppression of freedom and democracy, and the ubiquitous personality cult of Lenin and Stalin. But we don’t forget that it was also the era when we defeated Hitler, when we made much progress in education and scientific research, and when most citizens shared both limited prosperity and limited poverty, without the kind of disparity we now see in Russia and in many countries. We should learn from our past mistakes and attempt to create a better future.”
As one looks at the vastness of the Black Sea in Yalta, where Stalin, Roosevelt and Churchill — three victors of World War II — met in February 1945 to design a new global order, one is overwhelmed by a sobering reflection: We imperfect humans create, destroy, and strive to re-create our dreams and revolutions… again and again and again.

 

A Secret Revealed: The 1917 Russian Revolution was a German Plot

Communists will celebrate the centenary of the Russian Revolution of 1917 as a triumph of workers and peasants. In fact it owed much to a German diplomatic plot, executed through an armed coup by Lenin.
In World War I, Germany was stuck in the impasse of trench warfare. It faced French-British troops in the west, and Russians in the east. Breaking through trench lines required massive military dominance, which neither side had. So, all advances ended in bloody stalemates. Germany badly needed a deal with Russia in order to shift forces from the eastern to the western front.
The war was socially and economically ruinous for Russia. Mutinies and desertions spread in its army. In March 1917, riots in the capital, St Petersburg, culminated in the abdication of the tsar.
A provisional government, headed by Alexander Kerensky of the Social Revolutionary Party, promised fresh elections. But chaos ruled in much of the country with the tsar’s exit, widespread opposition to the war, and coup plots by both the right and left.
Germany saw opportunity in this. Lenin was in exile in Western Europe, and strongly opposed the war as a clash of imperialists. Germany saw that a Bolshevik coup in Russia could quickly end fighting on the eastern front. So it offered to smuggle Lenin and other Bolshevik exiles from Switzerland across war lines into Finland, from where they could cross into St Petersburg nearby, persuade local Bolsheviks to stage a coup, and then sign a peace treaty.
It was a cynical deal, with Lenin and Germany using one another. Lenin said famously, “Power is lying on the streets of St Petersburg, just waiting to be picked up.” Unlike some communists who sought power through elections, Lenin favoured an armed coup.
With the support of soldiers and workers in the capital’s soviet (local council), the Bolsheviks began gathering weapons to organise a coup. The Germans supported them with 50 million German marks in gold.
Alarmed, Kerensky in July cracked down on the Bolsheviks, and Lenin fled to Finland. But soon after, to counter a threatened rightwing coup, Kerensky sought Bolshevik support, giving them arms and letting Lenin return. This turned out to be political suicide.
The Bolsheviks soon wrested control of the St Petersburg soviet, and Kerensky lost military control of the city.
On November 7, Lenin launched his coup, with pro-Bolshevik soldiers and gangs attacking official installations. The naval ship Aurora, controlled by Bolshevik sailors, threatened to blow up the Winter Palace and other targets unless Kerensky surrendered. Deserted by allies, Kerensky did so, and Lenin came to power.
An election followed on November 25. The Social Revolutionaries won 380 of the 703 seats against the Bolsheviks’ 168. The Bolsheviks won most of the votes of soldiers and city dwellers, while the Social Revolutionaries swept the rural areas. Lenin refused to hand over power, kicked out the Social Revolutionaries, and created a dictatorship of the proletariat.
Soon after, on March 3, 1918, the German plan came to fruition. Lenin sued for peace in the Treaty of Brest-Litovsk, surrendering huge areas of the Russian empire. He called this “sacrificing space to gain time.”
The delighted Germans switched 50 divisions to the western front, gaining military dominance there for the first time. They struck mightily at Allied lines on March 21, breaking through at several points. Some historians say they came very close to winning. But weak logistics prevented the Germans from rolling up the French and British flanks. The trench stalemate returned. But not for long. Armoured tanks, a new Allied invention, proved they could smash through trench lines. In addition, over a million US soldiers joined the battle. This ended the stalemate. Germany began retreating in August, and surrendered in November.
The victorious Allies forced Germany to give up its war gains. So, Lenin got back vast territories signed away at Brest-Litovsk (though some other territories gained independence). This helped him win the civil war raging at home, which ultimately cost 2.5 million lives.
Was Lenin a German agent, as critics claim? Surely not. But without German assistance, he would not have come to power.
Marxist historians claim that the Russian Revolution actually began earlier, in the form of major strikes and agitations from 1905 onwards, and argue that Lenin was the natural heir of these.
However, the Social Revolutionaries, leftwing rivals of the Bolsheviks, could also claim to be the heirs of the early agitations, and could point to their election victory as proof. The clincher was Bolshevik support in the army. Troops, rather than workers or peasants, ensured Bolshevik victory. Indeed, the peasants soon lost all their land to communist collectives.

Modern Piracy Needs Age-old Solution

Pirates are notoriously hard to capture. Their actions occur on the shifting, vast expanse of the open oceans. Perpetrators cannot simply be “arrested” by a conventional police force and, even if they are caught, it’s a challenge to prosecute an offender who by their very nature transcends borders.
There is no single answer to the problem, particularly given pirates’ different guises and motivations. Yet a study of historical anti-piracy operations, both ancient and recent, does reveal one commonality in the repression of piracy: international cooperation.
Pirates terrorised the Caribbean during the infamous “golden age” of piracy in the 1710s and 1720s. The escapades of these pirates – among them Blackbeard, “Black” Bart Roberts, Charles Vane and Anne Bonny – have long since passed into legend. Though their lives were full of adventure, their demise was brutal.
Cooperation was the key to the pirates’ eventual eradication. The 1713 Peace of Utrecht ended over a decade of constant fighting among the colonial powers (Great Britain and Spain in particular). It allowed them to turn their attentions to the blight of piracy. The treaty called on signatories to “cause all pirates and sea-robbers to be apprehended and punished as they deserve, for a terror and example to others”. These words formed the basis for a successful campaign against the Caribbean pirates, with the colonial powers working together to defeat the antagonists.
At a 1717 piracy trial in Boston, the condemned were informed by a preacher that “all nations agree to treat your tribe as the common enemies of mankind, and to extirpate them out of the world”. It seems likely the pirates would have persisted, but for this commonality of purpose.
Getting together to fight the scourge
Justice was swift for those captured: pirates were usually tried and hanged within days of capture under the 1698 Act for the More Effectual Suppression of Piracy, a new anti-piracy law that allowed for trials in the British colonies. The trials were light on procedural protections, but they were effective: as many as 600 pirates were hanged in the British colonies alone between 1716 and 1726.
To get around jurisdictional issues, courts espoused the principle of “universal jurisdiction”, the idea that any state may prosecute any pirate given the severity of the offence. It’s a valuable principle that, today, has become commonly accepted and enshrined in international law.
It was the combination of cooperation and effective law, then, that put an end to piracy in the Caribbean. The same factors were brought to bear on the modern wave of piracy that blighted the Indian Ocean off the coast of Somalia in the early 2000s.
Somali piracy first became a serious problem in 2005 and peaked in 2011, when 237 incidents were reported. The pirates of Somalia have since gone into decline, however, with only a handful of reported pirate attacks in the region since 2013.
International cooperation (pushed and promoted by the UN Security Council) has again proved essential to suppressing the threat, particularly given the regional power vacuum left by Somalia’s lack of governance. The EU-sponsored naval force, Operation Atalanta, proved particularly valuable in disrupting pirate activity, removing some 160 active pirates from the seas since it began operations.
The real success story here, though, is the way in which regional African legal systems were enhanced to deal with the problem, particularly with the assistance of the UN Office on Drugs and Crime. Trials take resources, expertise and dedication that are best obtained from international partners. In Kenya, the Seychelles, Mauritius, Tanzania, and Somalia itself, hundreds of suspects have been processed in purpose-built courtrooms, staffed by trained lawyers, using updated anti-piracy laws. The collapse of piracy in the region (notwithstanding recent isolated incidents) is a tremendous success story for international cooperation and problem sharing. It’s also a case study in building legal capacity, creating a robust platform for dealing with future outbreaks.
Yet piracy is not going away anytime soon. The Gulf of Guinea and the seas of South-East Asia, both areas where valuable maritime trade clashes with lacklustre governance, have superseded East Africa as new “pirate hotspots” where successors to Blackbeard’s brethren continue to put maritime trade to the sword. New approaches are needed, and the root causes have to be addressed. But the core of any successful strategy will always be the same: international cooperation and unity of purpose. The international community must constantly unite against common threats, be they piracy, terrorism, or international crime.