Showing posts with label social capital. Show all posts

Thursday, August 25, 2011

Madison on the Mediterranean: What Lies Ahead for Libya?


[Note: I write this not as a scholar of Libyan history or an expert on the Middle East and North Africa, but as a person with family in Tripoli who follows events there closely.  If I have erred on the facts or the analysis, let me know in the comments.]

Libya, it seems, just went from a civil war to a revolution.  At least that’s what the title cards on Al Jazeera suggest, as “The Libyan Revolution” replaced headlines like “The Crisis in Libya” after opposition forces appeared to take Tripoli over the weekend.  Writing of the Confederacy, the historian Eric Foner once said that an uprising is just a rebellion until you win; only then does it become a revolution.  The Declaration of Independence gives people license to overthrow an unjust authority, but the overthrowers’ authority only becomes accepted and legitimate once they have successfully pulled off the overthrowing.  Otherwise, it is no more than a riot or a rebellion that got snuffed out – not unlike what is happening in Syria, where Bashar al-Assad’s regime has cracked down ruthlessly and relentlessly on dissenters.

Libya stands quite apart from many other participants in the so-called “Arab Spring” – a term that was coined by Western journalists, apparently alluding to the “Prague Spring” of reformism that was so brutally crushed by Soviet intervention in Czechoslovakia back in 1968.  The term always seemed to evoke “Springtime for Hitler” for me, along with a sort of soap-commercial way of describing political change – Get Fresh with the Arab Spring – but for whatever reason the term has stuck.  It stuck so well that you hear people speaking of a “Libyan Summer” – one stickier, uglier, and plainly more violent than its closest parallels in Tunisia and Egypt, though not as vicious as the repression occurring in Syria or Bahrain.


What set Libya apart is that a protest movement rapidly shifted into armed resistance, with a nascent rebel authority emerging in Benghazi and a shambolic military presence taking shape in the east, in the western city of Misrata, and in the mountains west of Tripoli.  Whereas Egyptians protested peacefully in Tahrir Square, and the military establishment felt somewhat (if not entirely) constrained in dealing violently with them, Muammar Qaddafi’s regime responded aggressively right away and the opposition turned to armed resistance, with the result, more or less, that a civil war broke out.

Now that the rebels have swept into Tripoli with less initial resistance than expected, the opposition appears close to gaining control of the country.  The smiling, sneering appearance of Saif al Islam, Qaddafi’s favored son, among crowds of regime loyalists Monday, after he was already reported arrested, shows how foolish it is to rush to judgments, positive or negative, about what is happening in Libya.  The eastern and western halves of the resistance, which grew up mostly apart from each other, could come into conflict even after the regime is definitively beaten; certainly, the Transitional National Council (TNC) has been dominated by people from the eastern city of Benghazi, while those who actually stormed Tripoli were rebels from the western parts of the country like Misrata, which are closer to the capital city.  Divisions could emerge between the Benghazi crowd and everyone else; ethnic conflicts between Berbers and Arabs could erupt; and people who depended on the old regime may find themselves on the outs and seek whatever means they can to destabilize the new government.  All these things are possible, and more – the simple inability to keep the lights on or the water running could prove the undoing of the seemingly triumphant rebels.

The theme to Flashdance is clearly playing in his head

But Libya has certain things going for it.  It is a small country, with a population about the size of metro Atlanta’s in a space the size of Alaska (America’s biggest state).  Though tensions between Arabs and minority Berbers exist, the country is still relatively homogeneous compared to other nations in the region; it lacks the stark sectarian divisions of Iraq or Bahrain.  The leaders of the TNC have so far evinced a commitment to moderate Islam, as well as reconciliation with former Qaddafi collaborators.

If anything good ever came from the Iraq War, it is that people have learned from the neoconservatives’ tragic experiment in “nation-building” (which consisted primarily of dismantling the nation and selling it off for scrap).  Most Libyans realize that liquidating the entire police and army and disempowering anyone who had any ties to the regime is unrealistic; the US tried dissolving the security forces in Iraq and denying anyone with Baathist connections a role in the new government, but this move ostracized huge numbers of people.  In Libya, blacklisting anyone who had anything to do with Qaddafi just would not work, since anyone who held any kind of position of influence or responsibility in the country had to work with him in some way or another.  The rebels have so far shown a considerable openness to figures with ties to the former regime, though the assassination of Abdul Fatah Younis, a very close ally of Qaddafi who resigned to lead the opposition's military forces before his killing under mysterious circumstances, suggests that old scores may be settled and supporters of the dictatorship might not get off with a free pass.


Conservatives at National Review have suddenly lost interest in Arab democracy

In any case, the outcome of this conflict is sure to be rough – as the leader of the TNC said, a revolution is not a “bed of roses.”  But the profile of the opposition movement is promising, at least as far as prospects for an open society are concerned.  The instigators of this revolution are lawyers, doctors, writers, professionals – the liberal bourgeoisie, backed by untold numbers of young, jobless, frustrated working-class and middle-class youths in a country that had 20% unemployment before the revolution, despite having immense mineral wealth and one of the higher per capita GDPs in the world – if not amazing, certainly out of line with the ratio of wealth to population in most Arab and African countries.

The gentleman in the blue cardigan has a two part question

Libya, perhaps, has a better chance of achieving a liberal democratic revolution and public sphere than some of its neighbors.  The country lacks the same deep-set, entrenched, immovable military establishment that is inevitably a giant part of the political landscape in Egypt, even after Mubarak’s humiliating departure – the “deep state,” to borrow a term from Turkish politics.  The opposition forces say they will retain as many members of the old army and police force as possible, barring only those closest to Qaddafi and with the most blood on their hands.  Still, the possibility remains that a hard core of loyalists will continue to make life miserable through bombing and the like.  An Iraq-style insurgency of disenfranchised hardliners could ensue, though many seem to doubt that Qaddafi has the committed ideological supporters to sustain such a campaign of resistance or terror.  My own father, who knows far more about Libyan politics than I do, seems remarkably sanguine about the prospects for a peaceful transition.  He believes that most of the people working for Qaddafi are simply opportunists, lackeys, hangers-on, and sycophants, who lack a deep sectarian or ethnic allegiance to Qaddafi himself.  He is perhaps too optimistic – indeed, those allied with Qaddafi’s family and tribe in his hometown of Sirte may be willing to fight on, for the sake of loyalty or simply revenge – but it is safe to say that a long-smoldering insurgency is, at least, not inevitable.

Libya may have the ingredients for a prosperous, liberal society: a rebel leadership that claims to support religious moderation and political reconciliation; the lack of any one interest with a preponderance of power, whether military, business, feudal landowners, clerics, etc.; a highly literate population; a wealth of expatriates with skills who are ready to come back to the country; and, of course, oil.  What it lacks, fortunately, is a charismatic cleric who could seize the initiative and try to steer the revolution in a more Islamic direction, as occurred in Iran’s revolution of the late 1970s.  Some Islamist groups have participated in the rebellion, and some observers believe they were behind the assassination of Younis.  But Islamist rhetoric and ideology have not been especially conspicuous in Libya’s rebellion; the TNC’s leaders have taken pains to emphasize that, while Libya is a Muslim country, it will not pursue a fundamentalist policy after the revolution.  TNC Chairman Mustafa Abdel Jalil described their intentions in this way:
We are on the threshold of a new era ... of a new stage that we will work to establish the principles that this revolution was based on. Which are: freedom, democracy, justice, equality and transparency, within a moderate Islamic framework.

As many nervous observers in the West have been happy to see, protesters throughout the Arab world have eschewed religious or even ethnic nationalist language in favor of a broad rhetoric of human rights.  Pan-Arabism and Islamism have not been the dominant paradigms driving these rebellions, even if religious conservatives have backed and participated in them.  As one excited young Libyan told al Jazeera on the streets of Tripoli, he and his compatriots had no interest in “Islamists and racists.”  If religious language has suffused the rebellion at times, it is only to the extent that people of faith see a higher power guiding them in the course of dramatic events, not necessarily because of an agenda to impose fundamentalism on society at large.  It is by no means uncommon for people to see their own political struggle in spiritual terms.  In other words, the protesters who appeal to Allah in the streets are more Martin Luther King than Pat Robertson.


To me, one of the most emblematic moments of the remarkable events of recent weeks was an interview al Jazeera conducted as rebels shocked the world by storming Tripoli far faster than most expected on Sunday evening.  The reporter began the interview by saying she would not ask for his name, but she wanted to know what he was experiencing.  Before she could finish her question, he told her he was not afraid to give his name.  She said okay, and he stated his first name.  She went on trying to ask her question, and then he gave his last name, and then he began to spell out his name for the channel’s viewers.  “I am not afraid anymore,” he said.  “It’s over.”  Not only the ability to speak his mind, but the freedom to state who he was and stand by his views, gave him a euphoric feeling.  He wanted to be known, perhaps for the first time.

This is the hope of a new public sphere in a region where outside experts long characterized the people as passive and the politics hopelessly stagnant.  Not long ago Mubarak and Qaddafi both looked likely to pull off the repugnant succession of power to their smooth, Western-educated sons, Gamal and Saif.  Now there is at least an opening for something better, even if remnants of the establishment hold onto power as tenaciously as possible.  The challenges of building a new, open civil society remain daunting after years of stifling authoritarianism.  Countless protesters are still being held in Egypt, even after Mubarak and some of his lackeys lost power.  The military government there claims to be moving toward a new, democratic regime, but it will only let go of as much of its power as it absolutely has to – just like Mubarak and every other venal power-hoarder in the region.

The challenges Libya faces will be different, and the threat of tribal, ethnic, and regional conflict looms especially large.  So does the perennial problem of “petrocracy,” the inefficiency and corruption that haunts so many countries that are blessed with mineral wealth.  To top it all off, actually finding jobs for the dispossessed and frustrated youth who set off protests throughout the region will be no small task in the midst of political and economic upheaval.  But compared to its huge neighbor Egypt, Libya seems to be moving toward a kind of democracy unencumbered by the burdens of a powerful military or Islamist political constituency, and the rebel leaders represent a capable, technocratic, seemingly open-minded lot.

University of Michigan professor Juan Cole has a list of suggestions for how Libyans could best manage the transition and minimize these pitfalls – including a proposal that Libyans avoid letting their national resources be privatized and sold off to corporate interests, as occurred in Iraq under the regency of L. Paul Bremer.  Like me, Cole is more of an optimist about the revolutions and rebellions in the Arab world.  Things could, of course, take a turn for the much, much worse, if, say, the wily Qaddafi had some kind of plan to destabilize the country even after his fall from power, or his loyalists prove to be much more determined than expected.  The old line about making God laugh by telling him your plans is especially true where the Middle East is concerned.  The Libyan people may not create a classic Madisonian democracy or Habermasian public sphere in Tripoli, but there remain many reasons to hope – not the least of which is the shocking fall of the world’s longest “serving” despot at the hands of a motley band of rebels (and, of course, NATO jets).
 
 
Earlier this week, a young man who was among the rebels to storm Qaddafi’s Bab al-Aziziyah compound captured this hope when he spoke to Sky News.  He had just raided Qaddafi’s bedroom and rather comically put on his hat and elephant scepter, looking a bit like a character from an early 90s Brand Nubian video.  His comments reflect the unifying rhetoric common among many members of the opposition, which is conciliatory, not particularly religious in character, and, if anything, nationalist (a “Libyan” identity that transcends tribe, ethnicity, and sect):
Now we should forget all the past.  We should take a better stance, and we should work together as Libyans, the Arabians and the Berbers.  And I am sure Libyans will shock the world, because we would like to do something, since Qaddafi has put us in a bad situation these past years… I wouldn’t have this feeling to have revenge against those people that stood with Qaddafi. I would like to ask them to be with us, to shake our hands, and to start a new beginning, a new life, a new future, a new Libya, as we all Libyans would like to have.

Alex Sayf Cummings

Thursday, January 6, 2011

Looking for the City of Knowledge

For at least the last twenty years, scholars have proposed that the rise of a post-industrial economy led to the reinvention of urban life – the so-called "informational city."  What is the relationship between cities and high-technology industries such as computers, media, and pharmaceuticals, which seem to cluster in metropolitan areas like Silicon Valley?  Numerous historians, geographers, sociologists, planners, and theorists have tried to understand the “geography of innovation,” searching for the factors that turn some places, rather than others, into centers of advanced scientific and technological work. Why, for instance, did Boston become a haven for cutting edge research while Detroit has floundered for decades in the death throes of its manufacturing base, with few zippy, shiny “new economy” enterprises to call its own?


The answer, some have suggested, lies in the conscious effort to develop “nodes” or “agglomerations” of scientific research in the form of research parks, often tied to a neighboring university. Such projects have a Field of Dreams quality about them – if you build it, they will come. “They” are entrepreneurs; venture capitalists; highly trained scientists and engineers with plenty of income to spend and tax; branches of multinational corporations like IBM or Pfizer; and all the other workers who attend to the labs and their inhabitants. “It” is a place where multiple corporations and start-ups can enjoy the network effects of nearness to other companies engaged in the same field, access to the resources of a university (libraries, faculty expertise), and the general ambience of an environment populated by scientists and scholars.

In this view, Detroit failed to reinvent itself because it lacks the cultural and academic strengths that have invigorated Boston (MIT, Harvard), the Research Triangle of North Carolina (Duke, Chapel Hill, NC State), the Bay Area (Stanford), and even Pittsburgh (Carnegie Mellon, Pitt). As Margaret Pugh O'Mara suggests in Cities of Knowledge: Cold War Science and the Search for the Next Silicon Valley (2005), people throughout the world have sought to imitate these successful examples by deliberately setting aside land and money to create a new innovative milieu.

O’Mara brings a broad view to the historical complexities of technology and economic development. While others have focused on statistical data to determine which cities have the greatest agglomerations of high-tech industry, and why cities have accumulated such business over time, O’Mara employs a more historical and qualitative approach, looking first at the origins of American science policy (and funding) in World War II and going on to weigh the successes and failures of particular attempts to build “cities of knowledge” in the San Francisco Bay area, Philadelphia, and Atlanta. This method allows the author to look at the model of a research park par excellence (Stanford Industrial Park), while examining what factors limited the success of similar efforts around the University of Pennsylvania and Georgia Tech.


In the book’s first section, “Intent,” O’Mara examines how the US government got into the business of funding science and higher education in the first place, driven by a desire to maintain American technological and military superiority with the emergence of the Cold War. These chapters provide an eye-opening look at the politics that surrounded the creation of the National Science Foundation, and the reluctance of some conservatives to see the government spend money on and meddle in the affairs of science. Scientists too were ambivalent about putting research “in the hands of bureaucrats.” O’Mara reveals how scientific leaders like Vannevar Bush successfully lobbied for a government agency that would be dominated by the (supposedly) meritocratic elite of academia.

O’Mara also shows how security concerns and planning traditions defined the geographical contours of postwar science. Federal officials believed “dispersal” was wise, so that laboratories and high-tech industries would be scattered across the landscape, reducing their vulnerability to military strikes. A long-standing belief that intellectual contemplation required a serene, verdant environment also supported the suburbanization of science, particularly in the form of a research “park.” A location in leafy suburbia also seemed more likely to win the approval of the nation’s highly sought-after scientists and engineers, whom companies like IBM competed to hire. Perceptions of traffic, crime, and other unpleasantries in the city combined to make the move of research facilities to the suburbs of the South and West practically over-determined.

There are wrinkles to the story, though. The conventional narrative of the Sunbelt at first seems an easy fit for O’Mara’s analysis. Democrats, long dominant in Congress, ensured that federal funds would flow to their friends and constituents in North Carolina, Florida, Texas, California and other states. But the rebirth of Boston as a center of software, biotech and other industries seems like a major exception, especially since O’Mara’s southeastern case study, Atlanta, appears to have been a failure. Despite enjoying the largesse of Pentagon spending, Georgia’s capital never became well-known as a high-tech node (though it certainly established itself as a center of media, business services, and other new-economyish activities).

The shape of jazz to come

Perhaps the next-best example of a successful research park after Stanford, the Research Triangle Park, receives only passing notice. Was the North Carolina region similarly dependent on Cold War spending? Did it share the policies that made Stanford’s park so appealing to entrepreneurs, corporations, and workers, and how did its own racial, cultural, and political dynamics compare to those of San Francisco or Atlanta? One would assume that the Triangle resembled Atlanta more in its political culture, yet the attempt of local political, business, and academic elites to build a center of science and technology achieved a Stanford-like degree of renown.

Stanford offers the textbook example of how to create a magnet for employment in research and technology. The university had abundant land to work with, and was barred by Leland Stanford’s wishes from selling any of the property. Instead of leasing it to farmers, university leaders decided to let research-oriented companies settle there. (It was initially called Stanford Industrial Park, but the name was changed in the 1970s.) With unquestioned control of the physical environment, Stanford could dictate what kind of companies would occupy the land, how large their facilities could be relative to lot size, and numerous other factors that were crucial for fostering a green, dispersed environment. Big lawns and unobtrusive, modernist architecture were the order of the day. Stanford also benefited from its cozy relationship with political and economic powerbrokers in the Bay Area, and from its position near the defense complex that grew up in San Francisco during and after WWII. With the arrival of General Electric, Lockheed and others, the park got off to an auspicious start.

Stanford was able to develop its land on its own terms, with federal funds flowing to its corporate lessees and little political interference with its vision. The park builders of Philadelphia and Atlanta were not so fortunate. The University of Pennsylvania attempted to develop its University City Science Center as part of urban renewal efforts in the poor neighborhoods that surrounded the school, uprooting less favored tenants and aiming to bring in more valuable workers who would contribute to the area’s tax base. Although the Center was developed and persists to this day, it inspired local opposition and left hard feelings in the community, much like the similar campus expansion at Columbia University in the 1960s.

Meanwhile, Georgia Tech’s attempt to reinvent itself as the core of a dispersed technological industry in Atlanta was foiled by crisscrossing political loyalties within the city and the state as a whole, as well as resistance to efforts to expand its campus into areas populated by working class white and black residents. Although it was the alma mater of many big players in city affairs, Tech simply did not have the financial resources, political leverage, or unified vision to succeed in transforming Atlanta into a “city of knowledge.” Moreover, the preference of city leaders for a sprawling, metropolitan view of Atlanta – a segregated and car-dependent archipelago of suburbs – militated against having a concentrated area where high-tech companies coexisted.

This raises the question of what a city of knowledge is at all. Is it the Stanford Research Park itself, the workplace where people spend the better part of their days, or the shopping malls and suburbs where scientists and engineers intermingle with students, managers, and service workers? These agglomerations of research and technology are often referred to by non-urban names – Silicon Valley, the Research Triangle, the Space Coast in Florida. In short, they are places, areas, or regions, but not quite, perhaps, cities. O’Mara suggests that the dense, urban environment around Penn and its complex city politics were a major liability for its own “University City.” Of course, plenty of other colleges have their own University Cities, as the area around my own former college in Charlotte is known.

What makes these Cities of Knowledge, Informational Cities, or University Cities distinctive is their image, their branding. At least one scholar who has looked at data on research parks throughout the United States concluded that cities with parks did not fare all that much better than cities without them, at least in terms of attracting and sustaining scientific industry. The presence of a significant university, sociologist Stephen Appold says, appears to correlate with greater amounts of research activity over time.

In other words, it was not the park that made Stanford great, but the other way around. For many local boosters, from Silicon Valley to the Research Triangle, universities were among the chief selling points in their efforts to recruit companies to their communities. These companies, in turn, considered the appeal of their location to potential employees to be a primary concern. The idea of being near people like themselves – well-paid, well-educated, primarily white and privileged people – was a major plank in the promotion of cities of knowledge. Local authorities wanted these kinds of people, whether in the poorer districts of Philadelphia or the exurbs of Raleigh, and these kinds of people wanted to be around each other.

The climate is here, wish you were beautiful

In my own study of research parks, I have noticed that vague and euphemistic terms like “atmosphere” and “climate” come up often when both employers and employees talk about their decision of where to settle. This language recalls Richard Florida’s circular thesis that places become cool because cool people are there, and cool people want to be around other cool people – cool meaning, of course, people with similar education, cultural interests, and possibly race. It does not really explain why a city like Huntsville, Alabama becomes a center for research and technology – few intellectuals have likely longed for the erudite climate of northern Alabama – but Huntsville’s preeminence seems to be thoroughly the result of defense spending in the “Rocket City,” ever since Wernher von Braun landed there in the midst of the Korean War.

For other places, such as Silicon Valley and the Research Triangle, selling the immaterial qualities of “culture,” “diversity,” and an intellectual “climate” has been key – and the existence of one or more distinguished universities has typically been the basis for making such claims. Academic leaders, businesspeople, and state and local government officials wanted to create spaces that would attract a certain kind of citizen and worker, with a high level of income and education, residents who could enhance the tax base and the marketability of the community itself.

In short, the work of O’Mara and others tentatively suggests that it is not so much the purposely designed concentration of skilled labor and advanced technology in a research park that makes a place prosper as the pre-existing cultural and social capital of its schools. Unfortunately for those who are still hunting the “next Silicon Valley” and hoping to find it in their own backyards, that is the kind of value that can’t be created through a real estate scheme or a good marketing campaign.

Alex Sayf Cummings

Thursday, September 16, 2010

Brother, Can You Spare a Social Capital?

Is there a way to measure the social resources of individuals and groups – and does the effort to do so distract us from the real causes of poverty and inequality?

"If only we were more connected..."

Not long ago, liberals found a new lodestar in their endless search for some kind of political direction. This was in the 1990s, when the Reagan-Gingrich consensus was in full force and the left seemed to be lost in the wilderness of political irrelevance. Bill Clinton and other politicians seized on the theme of community (most prominently championed by the sociologist Amitai Etzioni) as a salve for the problems of crime, broken homes, failing schools, widening inequality, and diminished economic opportunity. If American society seemed to be going to the dogs, then it was a declining sense of community that was to blame.

How these problems were supposed to be solved with Community™ was never exactly clear. It had something to do with family values, the V-chip, school uniforms, and locking up every pot dealer from Santa Barbara to Sheboygan. Like most political shibboleths, it was better used as a banner and a cudgel in campaigns than as a hard-nosed policy prescription. But like the most successful clichés, there was also something about the rhetoric that resonated with the public, which seemed to comport with lived experience in a way.

Then came Robert Putnam. The political scientist’s 2000 book Bowling Alone: The Collapse and Revival of American Community was a smash success, drawing as it did on an impressive array of empirical data to make a simple point: Americans were less involved with their communities than in the past. Whether it was membership in the Elks or the Rotary Club, or simply having your neighbors over for dinner, Putnam argued that Americans were doing much less of it. Like the Greatest Generation mania of recent years, Putnam’s thesis tapped into a deep well of feeling that society used to be healthier, more wholesome, and just plain nicer back in the old days.

For scholars, the key part of Putnam’s analysis has to do with social capital – “that is, social networks and the norms of reciprocity in contemporary postindustrial societies,” as he put it. He has elsewhere described it as “the very fabric of our connections with each other.” The novelty of this idea (though Putnam certainly did not claim to have invented it) was that there was some elusive quality in people and communities that had a genuine value, albeit one that did not directly register in terms of dollars and cents.

In other words, looking at tax returns or home values for a community would not necessarily tell you everything you need to know about the wealth of the people who lived there. As Putnam and Kristin Goss argue, “The single most common finding from a half century’s research on the correlates of life satisfaction in countries around the globe is that happiness is best predicted by the breadth and depth of one’s social connections.” This can have a narrowly material meaning, in the sense that “who you know” has a hard economic value when you’re looking for a job and you have great connections. But it can also refer to an individual, family, or community’s ability to cope with stress and solve problems by relying on reciprocal bonds.

Certainly, it seems reasonable enough that communities where people know their neighbors and can turn to them for help would be happier and more successful than those where people live in isolation from each other – alone as they fret over bills and the evils of the world on the 11 o’clock news. Individuals who can reach out to friends and relatives for advice on how to deal with a broken down car, a financial problem, or how to get their kids into college would seem likely to fare better.

Things get trickier when you consider the real world implications of this idea. Are poor people poor because they lack social capital – that is, they do not have a sufficient network of friends, neighbors, and family to support them? We have heard this argument before in a different guise – the pathology of poverty, particularly among African Americans who migrated to the cities of the North in the mid-twentieth century. According to Daniel Patrick Moynihan, Nick Lemann and others, the breakdown of the traditional family and community networks led to the poverty and violence of Pruitt-Igoe and Cabrini-Green.

Many scholars challenged this interpretation, pointing to devastating job losses in the course of deindustrialization, the racist war on drugs, and the inadequacies of public housing and other services as cause for the despair of inner-city life, particularly in the North. In a broader sense, it is not clear at all that low-income communities have an inferior web of social connections when compared to upper-income nuclear families who live in gated suburbs and send their kids to private schools. If anything, social networks are part of what makes it possible for families to survive in the face of economic and legal conditions that are heavily stacked against them. (The work of Sudhir Venkatesh on networks in poor urban communities is worth considering in this respect.)

The problem may lie in Putnam’s understanding of social capital as a public good. He has treated it as a measurable quantity, an asset that an individual or community can have more or less of – more connectedness, or less. And although Putnam acknowledges that social capital can have negative “externalities” like any other capital, it mostly seems to be a good thing. The more social capital, the better.

One wonders whether the same people who joined the Elks Lodge in 1920 were bolstering their social capital when they attended Klan meetings. Is racist violence a negative externality of Klan capital? When a Yale grad has his pick of cushy jobs in Los Angeles because his dad knows the head of UCLA Law School, is his enjoyment of social capital hurting other job seekers? His competitors also know people, but the capital they have just isn’t worth as much.

In other words, not all social connections are created equal. The work of Pierre Bourdieu comes in handy here, as the French sociologist saw social capital as not just “more of a good thing.” He emphasized that social capital, just like financial wealth, is accumulated over time, and he recognized that one person’s gain in social capital could disadvantage another – as when a wealthy family passes on the benefits of its own web of connections to its progeny, while a person who is a first-generation college student may struggle to negotiate the terrain of schooling and the job market. For Bourdieu, social capital is “the aggregate of the actual or potential resources which are linked to possession of a durable network of more or less institutionalized relationships of mutual acquaintance and recognition… which provides each of its members with the backing of collectively-owned capital.” This capital defines the group but also “reaffirms the limits of the group.”

It is through social connections – the ability to live in certain neighborhoods, belong to certain cultural organizations (like churches), and gain access to certain schools and jobs – that people acquire cultural capital. This kind of capital encompasses the assets that privileged people can deploy to navigate the system, the savoir-faire that includes how to dress, how to speak, how to fill out a college application. As Bourdieu has shown, the acquisition of skills, knowledge, and good taste makes it possible for middle and upper class people to perpetuate their own privilege. In this view, inequality results not from a lack of connection to others but from a lack of access to the networks through which privilege and distinction emanate.

Of all institutions, schools are probably the most important factories of cultural capital. In a post-industrial society where earning a decent living seems impossible without an education – and sometimes very difficult with one – schools are the engines of both opportunity and inequality. Hence, today’s twin obsessions with gaining access to the upper end of the educational system (for the elite and wannabe elite) and fixing the public schools (for the poor and middle class).

Ironically, many in the professional class devote their tireless energies to providing “the best” for their children – the best private schools, the best shot at an Ivy League education – by working 60-70 hours a week and becoming less connected to their families and neighbors. In the process, they impoverish the social connections that Putnam holds dear. On the other end of the spectrum, working-class parents take on two or three jobs just to keep body and soul together, unable to participate in school and community in ways that would give their own children a better chance at success. Either way, the chasm between those who can provide entrée to social privilege and economic opportunity and those who cannot continues to broaden.

Why does this happen? Is it because prestigious colleges and universities can ensure their cultural capital remains much desired (and expensive) by keeping it scarce? This certainly benefits the proud parents of Harvard grads, since the prestige that cost them so dearly continues to pay dividends on the job market. Is it because economic policies (like tax cuts and deregulation) have enriched the wealthy and privileged, who can afford to pay more and more for an elite education – leading to an escalating sort of educational arms race?

Perhaps we would do better to consider the structural economic factors that exacerbate inequality and make family life difficult if we want to understand why Americans are more stressed and pine for better days. The risk in Robert Putnam’s work on social capital is that it diverts our attention from causes to effects. Is it not more comfortable to talk about why people don’t belong to the Elks Lodge anymore, instead of tackling the much thornier problems of Americans working more for less, lacking the time to be parents, and sending their children to broadly unequal schools? Putnam rues the fact that “having friends over” has dropped 35% in the last twenty-five years; this change certainly has something to do with the narcotic of television and the divisions imposed by cars and suburbia, but it also has a whole lot to do with the daily struggle of the poor and the middle class to survive in today’s economy. Why do we not have people over anymore? Are we just a bunch of jerks?

Critics such as Barbara Arneil (in her excellent book Diverse Communities: The Problem with Social Capital) have called Putnam out for his nostalgia, arguing that the rosy past of civic participation was not so great for women and people of color. Certainly, the narrative of declining social capital seems to run parallel to the increasing entrance of women into the workforce since the 1960s, as well as the emergence of civil rights and multiculturalism. Homilies about “togetherness” and “community” seem to mask some kind of yearning for a more harmonious, homogeneous past – and, if this is so, they seem to imply that the inequality and despair of recent decades is somehow tied to the changing politics of gender and race. Conservatives have often decried the hyperindividualism of the generation that demanded all kinds of rights and self-fulfillment, and the social capital thesis provides them with a sophisticated intellectual defense of their views.

In my view, focusing on social connectedness is the wrong approach. Although Putnam found a wide variety of measures to quantify a drop in social capital, it remains possible that people have simply changed their ways of being connected. The rise of online social networking since Bowling Alone was published provides one possible example. More importantly, this analysis shifts the blame for society’s woes from the economic and political changes that have made people’s lives more difficult to their own failure to connect with each other. Deindustrialization left those without white-collar skills and education with few prospects of decent pay, creating a widening gap between low-paid service jobs and professional work; policies such as the drug war and mandatory minimum sentencing have ensnared millions in a losing battle with the criminal justice system; and perceptions of failure have gutted public institutions, as those who can afford to do so flee to charter schools and private education.

All of these factors mean that people from various walks of life struggle harder to keep their heads above water, and have less time for the good things in life that Robert Putnam and his followers wish we would pursue. Perhaps it is a lack of access to cultural capital – education and other markers of class – that most disadvantages many Americans today, rather than any functional measure of how connected or disconnected we are from each other. Or maybe it is human capital, human potential, that is most wasted in a country with the world's highest incarceration rate, where the intelligence and skills of millions are lost on the unemployment line or behind a cash register. In the dynamics of a competitive capitalist society, where one missed paycheck or one arrest can send you down the road to ruin, who has time for the Knights of Columbus?