Thursday, February 24, 2011

Hefner, Hughes, and Rogen: Playboy and the Origins of the 21st Century Hipster

It's no exaggeration to say that having large numbers of single young men and women living independently, while also having enough disposable income to avoid ever messing up their kitchens, is something entirely new in human experience. Yes, at other points in Western history young people have waited well into their 20s to marry, and yes, office girls and bachelor lawyers have been working and finding amusement in cities for more than a century. But their numbers and their money supply were always relatively small. Today's pre-adults are a different matter. They are a major demographic event.

- Kay Hymowitz

At first blush, Kay Hymowitz seems to be talking about the rise of the yuppie and its Brooklyn cousin, the hipster. (Is there a difference anymore?) This pack of pre-adults consists of youngish people, presumably educated, who spend the disposable income they might otherwise drop on diapers or a mortgage to do whatever suits them. In fact, though, she is ranting in the Wall Street Journal about young men who refuse to grow up. Untethered to a wife and children, these strays scrounge around dorm rooms, leaving only to export this way of life to their parents' homes and eventually their own apartments, all while blithely ignoring demands to grow up. The portrayal of the shiftless young man has only grown more prominent in the grand sweep of recent history, from pitched battles over male irresponsibility on Jerry Springer to the rise of Judd Apatow's empire. Writers earn a pretty penny dishing advice to frustrated women while Apatow rolls out one blockbuster after another about arrested development, portraying the childlike man as a lovable hero.

But where did this character come from? When did pop culture begin to promote the childless, free-spirited, self-indulgent adult as a cultural ideal? The dream of remaining young and unsullied by maturity is an old one, running from the lost innocence of creation myths through a long cultural tradition of imps, elves, and hobbits, all the way down to naïve Candide and Peter Pan, the lovable permanent child. But the more recent update is deeply tied to consumerism; its origins are largely male, but it is far from a purely male phenomenon, despite what pundits in the grip of traditional notions about gender, marriage, and parenthood might suggest. The journey of the yuppie/hipster/slacker begins in Chicago, where Hugh Hefner founded Playboy in 1953. Long before the Playboy Mansion moved male fantasy out to the exurbs, Hefner situated his prototypical sophisticate in a hip urban bachelor pad.


The playboy of the 1950s was, in part, a reaction to the so-called domestication of the American male. Shoved into a "gray flannel suit" and boxed in a suburban ranch house world of cribs and Bundt cakes, middle class men yearned for a sense of freedom from the prosaic concerns of child-rearing and breadwinning, especially when their bread was won in a sterile corporate office in the city or a suburban office park. Hugh Hefner's keen insight was to channel the psychic poverty of suburban manhood into a vision of suave, masculine, uninhibited freedom, defined by discerning taste and consumption of the better things in life – sophisticated drinks, jazz, the hip urban bachelor pad. As historian Elizabeth Fraterrigo has argued, the lifestyle promoted in Playboy was a sort of "answer to suburbia." At the same time as the magazine promoted the pursuit of individual desires and redeemed the city as a place to live amid the rush to the suburbs, it also set up a sort of mirror image of male domesticity. Interior decorating might seem like a feminine domain in the suburbs, but Playboy recast the same activity in acceptably masculine terms. The sophisticated bachelor would seek to emulate Hefner by decking out his apartment in the sleek, high-tech, modernist style depicted in the magazine's pages. Gone were colors, soft edges, floral prints, and in their place one found "a neutral palette and striking textural contrasts from its cork tile floor, to the stone fireplace hearth, to an exposed brick wall" – a hallmark of today's hip "loft" living, and a fine place for "a quiet discussion on Picasso, Nietzsche, jazz, sex," as Playboy said in its first issue.


Clearly, Hefner and his writers were laying out the contours of the single or childless life that many educated, affluent people are embracing in the twenty-first century. The possibility of pursuing one’s discerning consumer taste without responsibility to children or a broader community was pioneered as a fantasy for the men who read Playboy, yet it has mutated into a choice available to anyone who wants to live in a city, dine on Burmese-Mexican fusion, and invite a friend or lover over to discuss The Wire over drinks. (Note the exposed brick in the background.) Once the province of men, the ideal of affluent individualism has been at least partly translated into an option open to women through the sitcom alchemy of Seinfeld and Sex and the City.

Yet this Eden of adulthood without domesticity continues to have a peculiarly male dimension to it, which suggests a deeper tie to the original Playboy idea of unfettered selfishness. In a recent diatribe against prolonged adolescence in men, Kay Hymowitz cited Playboy as the proving ground for a "refusal" of responsibility. "The arrival of Playboy in the 1950s seemed like the ultimate protest against male domestication; think of the refusal implied by the magazine's title alone," Hymowitz wrote in the Wall Street Journal. "In his disregard for domestic life, the playboy was prologue for today's pre-adult male. Unlike the playboy with his jazz and art-filled pad, however, our boy rebel is a creature of the animal house." The philosopher-quoting, gin-guzzling bachelor of the 1950s had unwittingly fathered the video-game-addicted schlubs who refuse to grow up in movies like Knocked Up and Pineapple Express.

In search of adulthood

Growing up, it seems, means getting married and having children – along with, presumably, holding down a mortgage and a job. Without this marker of adulthood, the man drifts further from view and only the manchild remains. Curiously, Hymowitz claims that the transition to adulthood was traditionally clearer for women than men, and has only gotten murkier for the guys in recent years. "It's been an almost universal rule of civilization that girls became women simply by reaching physical maturity, but boys had to pass a test," she writes. (Anthropologists would likely quibble with so broad a claim.) "They needed to demonstrate courage, physical prowess, or mastery of the necessary skills." This argument seems deeply wedded to long-standing claims that men in modern society have had to compensate for the loss of their role as guarantors of physical security (fighting, shooting, performing manual labor) by indulging in a wide array of macho substitutes: playing with guns, buying an SUV, an undue obsession with gadgets and lawn care. Such pop psychological arguments have grown old and stale without losing their ability to hold sway over social critics (as well as the marketers and publishers who have so fiendishly exploited perceptions of male inadequacy).


Tuesday, February 22, 2011

Asian Ballers, Trailer Trash and Sissy Rap: The Best of SASA




The following are among the most thought-provoking papers from last week's Southern American Studies Association conference at Georgia State University in Atlanta.

Stanley Thangaraj, "Competing Masculinities: Sport and Ethnic Minorities in Atlanta,” Department of Sociology, Vanderbilt University


In this intriguing paper, Stanley Thangaraj adds to the growing literature on the importance of leisure space in providing a means for urban and national membership. Focusing on the role of Asian-only basketball leagues in and around Atlanta, Georgia, Thangaraj explores the importance of sport in creating alternative masculinities, forging community, and providing a means to citizenship for South Asian American men. Pushing back against the tropes of perpetual foreignness ascribed to American-born South Asian citizens since the Barred Zone Immigration Act of 1917, the Thind ruling of 1923, and the resurgence of anti-South Asian racism in the post-9/11 period, basketball leagues provide a site for community formation. Linking Sikhs, Pakistanis, and Indians of numerous regions, the Asian Ballers League enabled these groups to express a form of "American" masculinity that counters the asexuality and passivity so often ascribed to South Asian Americans, all while participating in a quintessentially American activity. Thangaraj also accounts for class differences between those born after the 1965 Immigration Act and those arriving in the wake of the 1980 Family Reunification Act, with newer South Asian Americans suffering a notable lack of social and economic capital in comparison with their 1965 antecedents. Though such leagues craft a sort of pan-ethnic identity, Thangaraj notes that such developments are not without their pitfalls, as Blacks remain outside the traditional purview of such leagues and are often racialized when present within them. Thangaraj's research offers an intricate and fascinating investigation into how South Asian Americans see themselves, their relationship to each other, and their ties to other ethnic groups, notably Latinos and Blacks. A postdoctoral fellow at Vanderbilt University, Dr. Thangaraj can be contacted at Stanley.i.thangaraj@vanderbilt.edu.



Matt Miller, "Millennium Sissy: The Gay Male Rappers of New Orleans," Institute of Liberal Arts, Emory University


Though segments of the rap community have long been accused of homophobia and misogyny, new developments in the New Orleans rap scene suggest a means to transcend such problematic constructs. Focusing on "sissy" rappers' appropriation of New Orleans "bounce," Matt Miller cleverly illustrates the openly gay and transgressive nature of new regional rap icons like Katey Red and Big Freedia. Adopting a "Butch Queen" rather than "Drag Queen" persona, Katey Red and others exude pride in their sexuality, dismissing men who engage in similar homosexual acts but deny them publicly. Blunt, joyous, and sexual, sissy rappers embrace the supra-localism of New Orleans bounce, celebrating local housing projects, block parties, and other Big Easy idiosyncrasies. Careful not to overplay the freedom such individuals enjoy, Miller points out that the deeply rooted, place-based identities of these rappers, and of bounce itself, have nonetheless enabled them in many ways to transcend the homophobia so ingrained in American culture. Moreover, as Miller suggests, they remain perfectly situated to appeal to wider audiences of white middle class listeners, notably "hipsters" and bohemians looking to challenge traditional domesticities and gender roles. Finally, as a group consisting primarily of working class black men, sissy rappers illustrate the powerful influence of grassroots cultural movements that build support through block parties, local clubs, and community events to change broader perceptions. Miller's work reminds us that agency can emerge from the most unlikely places and from the most discriminated-against groups to create something new, viable, and potentially revolutionary.


See a clip from Miller's documentary about the bounce music scene, Ya Heard Me, here.






Eve Errickson, "The Mobile Home Park: A One Time Design Revolution Reconsidered as Vernacular Architecture and Artifact of Legal Inconvenience," National Trust for Historic Preservation, Washington DC


In this insightful presentation, Eve Errickson traces the mobile home from its origins as a form of luxury travel to its designation as low-income shelter for marginalized populations. Though adapted from upper class ideas of travel, the mobile home came to be seen during the post-war housing crisis as a partial solution to future housing problems. Affordable, mobile, and simple, pre-fabricated mobile homes struck many as a model for planners and municipalities. Yet, plagued by a lack of regulation and often flimsy design, mobile homes spiked in popularity several times from the 1950s through the 1990s even as their value and social capital continued to decline. Still, they remain a consistent presence in southern communities. Errickson noted that the question of why these forms of shelter came to be labeled the purview of "white trash" still needs to be excavated, as mobile home dwellers remain one of the few social groups that people find acceptable to denigrate publicly.


Ryan Reft

Wednesday, February 16, 2011

Pentecostal Captain America and the Racialized Geographies of Muslim Hip Hop in a Toothless Fictive Milieu


The Southern American Studies Association will bring its conference to Georgia State University in Atlanta this week. Scholars will travel from France and Kansas, California and Germany to present at the meeting, although a preponderance of the participants hail from institutions in southeastern states such as Georgia, Mississippi, Tennessee, and South Carolina. Operating under the capacious tent of American Studies, the conference includes contributions from scholars of history, literature, music, popular culture, and sociology, among other disciplines.

We have looked through the program to single out some particularly intriguing panels. (The entire schedule can be viewed here.) The panels selected will hopefully provide a thumbnail sketch of the most interesting work going on in the multiple fields that make up American Studies, for both casual readers and possible attendees. Although any academic conference includes its share of comedy-gold paper titles -- a quick glance at the MLA's most recent program serves up the likes of "Bootleg Paratextuality and Aesthetics: Decay and Distortion in the Borat DVD" and "The War of 'Of' and Other Polyvocal Syntaxes in 'An Ordinary Evening in New Haven'" -- we hope to focus attention on some of the most provocative and promising papers at the conference, whether they deal with Elvis Presley or excrement or spottieottiedopalicious angels. The following is a taste of the freewheeling interests of those who study America and its culture today:

Sunday, February 13, 2011

Making Sense of Mom: The Ideology of 20th Century American Maternalism


In recent weeks, controversy over Yale law professor Amy Chua's Wall Street Journal article and subsequent book Battle Hymn of the Tiger Mother pointed to the continuing importance of motherhood in the American mind. Chua's article ostensibly argued, in a fairly essentialist manner, that strict Chinese mothers were more or less superior to permissive western matriarchs. Needless to say, hackles rose in numerous communities, with Chua suggesting that the Wall Street Journal had published the excerpt without giving her a chance to edit it and had focused on the book's most controversial aspects. Clearly, though ideas about motherhood have changed over the course of the 20th century, it remains a topic worthy of controversy. Chua's "insights" at least turn older racial constructions on their head. If discourse around white middle class motherhood often disparaged non-white mothers, Chua's assault on the stereotypical "American mom" (a figure still largely connected to whiteness) established a new hierarchy in which white middle class practices are seen as deficient.

American motherhood remains one of the tenets of national belief. However, as Rebecca Plant illustrates in Mom: The Transformation of Motherhood in Modern America, the ideology upon which motherhood rests has not remained fixed. Rather, argues Plant, the "Mother Love" or moral motherhood of earlier eras, notably the pre-war period, gave way to a new ideal of maternalism that removed motherhood from the public sphere as a form of sacrifice, resituating it as an aspect of women's personal character in the postwar era. While strains of misogyny undeniably echoed through the discourse rejecting "Mother Love," Plant goes to great lengths to illustrate the complexities of developments that did not stem wholly from the antipathy that some men held toward women. Instead, Plant traces the decline of moral motherhood and the creation of a new maternalism that "both reflected and facilitated white, middle class women's gradual incorporation into the political and economic order as individuals rather than wives and mothers." (2) While Mom focuses predominantly on how this change affected white, middle class women's conceptions and experiences of motherhood, Plant offers significant insights regarding the role race played in the ideology of motherhood. Though by no means equalizing, changes in the ideology of motherhood served as a broad leveler of the ideal for all mothers. By the mid-twentieth century "mother blaming" no longer reinforced the cultural authority of middle class mothers at the expense of poor or nonwhite women. Instead, post-WWI "mother blaming," Plant argues, lowered "the status of mothers across the board." (14)


For Plant, the decline of moral motherhood in the interwar period centered on a rejection of four "long standing precepts" that had come to encapsulate the latter stages of Victorian maternalism, namely "the belief that the mother/homemaker role was a full time, lifelong role, incompatible with the demands of wage earning; the notion that motherhood was not simply a private, family role, but also the foundation of female citizenship; the conviction that mothers should bind their children (especially their boys) to home with 'silver cords' of love in order to ensure their proper moral development; and the assumption that motherhood entailed immense physical suffering and sacrifice." (2) Though acknowledging that aspects of these beliefs persisted, Plant suggests that far fewer Americans subscribed to them by the late 1960s. She argues that moral motherhood began to dissolve in the face of the early 20th century "scientific motherhood" endorsed by Progressive reformers. The 1920s saw an assault on the Victorian model of motherhood that targeted the sentimentality and piety associated with the Victorian figure as manipulative, hypocritical, and insipid. Still, despite the onslaught, moral motherhood remained a potent force well into the 1930s.

Plant suggests that the importance of this insight relates to how historians have interpreted the post-WWI anti-maternalist discourse that grew so prevalent. While "gender conservatism" and anti-feminism played important roles, few historians, Plant argues, have considered the importance of moral motherhood. Moreover, if conservatives have often been assumed to be the chief critics of American motherhood, it was liberals who "espoused progressive views on race, social welfare, and other issues" who arose as its premier critics, seeing in old-line maternalism outdated and outmoded ideals. Social conservatives, to the contrary, emerged as American motherhood's primary defenders, suggesting that liberal critiques "devalued women's domestic roles." (5)

Conservatives and others saw in "Mother Love" or moral motherhood not so much an individual virtue as an institution, or, as Plant summarizes, "a fundamental pillar of the nation's social and political order." (5) Traditionalists saw in this form of motherhood a transformative property for mother and son alike. "Mother Love" allowed wayward women the chance to redeem themselves as virtuous, while their new virtue in turn shaped their sons and daughters (though, as might be expected, sons drew far more attention in this regard).

Using Philip Wylie's Generation of Vipers as a starting point, Plant recenters Wylie's work, encouraging scholars to reconsider previous interpretations that dismissed the text's message as little more than misogynistic babble. Clearly, Generation of Vipers was a problematic source; Plant herself describes the book as part "Menckenesque satire, a hellfire sermon, a primer in Jungian psychology, a work of wartime propaganda, an autobiography, a lurid novel, and a science fiction fantasy." (20) Nonetheless, one particular chapter in Wylie's book evoked the familiar figure of the Victorian matron, instilling in the figure a sinister aura that resonated with the larger public. Wylie's diatribe undermined the idea that women were inherently morally superior beings, an ideal that proved at once limiting and liberating in the late nineteenth and early twentieth centuries. For Plant, Wylie represents the culmination of grievances built up in the interwar period that led to the rejection of the "late Victorian matriarch." Critically, the group most responsive to Wylie's provocations was white, middle class women, who while condemning Wylie for his attacks on American mothers also praised his assault on "momism."


Wylie's attack on "momism" appealed to misogynists and Progressive women alike, as the latter strove to be seen as individuals rather than wives and mothers. Social scientists and psychologists, who believed "momism" to be detrimental for women and children, joined in support. For many, Generation of Vipers served as a stinging critique of "organized womanhood rather than an attack on feminism per se" (25). The work of women's organizations had come to be associated with the coercive power of the state. Wylie and others no longer saw such groups as self-sacrificing and moral, but instead as selfish and clawing. For liberals and others, these women, as embodied by the aforementioned Victorian matriarch, supported a continued sexual inequality and paternalism that rankled many men and women alike. Even Betty Friedan harnessed aspects of Wylie's argument, for as Plant notes, iconic feminists of the period viewed Wylie's critique as a means to expose the "toll of sexual inequality" under which women could marshal political and social power only through the domestic sphere. Though Wylie did not endorse women's pursuit of careers, he did promote an increased presence of women in the workplace. Published in 1942, Generation of Vipers barely preceded the explosion in female employment during WWII and its gradual increase in the decades that followed.

One wonders to what extent the role of women in the 1930s affected Wylie's views. From the late nineteenth century into the early twentieth, women exerted a distinct influence over culture, serving as a sort of "gatekeeper"; as men were distracted by "the business of conquering the continent and developing its resources," women emerged as "agents of civilization." By the 1930s, Wylie portrayed this gatekeeping as "vulgar and prurient." Women, suggested Wylie, trafficked in lowbrow mass culture, a charge best represented by his criticisms of daytime serials and soap operas. While seemingly absurd to the modern reader, debate over the effects of these soap operas proved lively in the 1930s and 1940s.

This raises a question. If women's organizations' connection to government undermined their position in society for some, one might reasonably ask what part an increasingly interventionist federal government played in these developments. Obviously, the New Deal and WWII led to the creation of a more activist government, economically and socially, creating a welfare state that sought to expand purchasing power, which under the Keynesian economic policies of the day functioned to boost the national economy. As Lizabeth Cohen has illustrated, in both the Depression era and WWII, women's consumerism established their moral authority as integral to the general welfare of the nation's citizenry. "Women used their existing organizational strength to advocate for consumers, in the process establishing new authority for themselves as guardians of the public welfare," Cohen observes in her book, A Consumer's Republic: The Politics of Mass Consumption in Postwar America. (34) Wylie's observations and the reactions of women at the time suggest a complicated interplay: the government entrusted women with purchasing decisions while satirists, pundits, and social scientists scorned the cultural productions that many women enjoyed. In some ways, Wylie enjoyed the backlash against such entertainment, as more than a few women chose to identify with what Plant labels "masculinist literature" rather than the overly sentimental radio serials of the day. Yet these appropriations remained imperfect. The rise of a therapeutic culture, in which men like Wylie occupied positions of authority, left many women unable to articulate their objections to the momism critique; thus the championing of masculinist literature operated as a sort of pre-Feminine Mystique act of protest.

Plant tackles such paradoxes. Part of the problem with maternalism in the interwar period rested on the fragmentation of its meaning. Female prohibitionists, militarists, pacifists, and even many fascist nations articulated some form of maternalism to justify their actions. This lack of coherence led some observers to view maternalists as opportunistic at best and "proto-fascists" at worst. Though maternalistic rhetoric never fully faded from public life in the post-WWII era, the interwar period revealed the tensions that undermined the previous foundations of maternalism, namely the protection of America's mothers and children around which many activists and reformers had organized. Nor would American motherhood continue to serve as the ultimate symbol of the nation's values.


Here, Plant offers one of the book's keenest insights, an examination of the Gold Star Mothers' pilgrimages of the early 1930s. This government-funded program paid for mothers of deceased WWI servicemen to travel to Europe and visit the gravesites of their fallen sons. Unfolding in the heart of the Great Depression, the Gold Star Mothers' pilgrimages not only provide a window into white middle class maternalistic rhetoric but also reveal how race intervened. Gold Star Mothers embodied the idea of motherhood as a civic duty, as important to the nation as the military itself. War mothers differed from Progressive maternalists in two key ways. First, war mothers emphasized the sacrifice of rearing sons and giving them up for military duty, while Progressives suggested women contributed civically by raising American citizens. Second, war mothers focused on the "emotional and symbolic aspects" of motherhood, favoring elderly women and leaving behind the practical reforms through which Progressives had sought to improve conditions for poor, working class, and rural moms. However, though the federal government's support of war mothers had been meant to provide a service to less financially capable mothers, by the end of the program it served as a symbol of American prosperity and benevolence. Donna Alvah's Unofficial Ambassadors, which focused on the role of military families in Cold War era policies abroad, points to similar developments in the post-war era. For Alvah, American dependents, mothers and children alike, played an important role in Cold War diplomacy through their interactions with occupied peoples, their access to American goods and technology, and their dissemination of American "ideals." When compared to the economic conditions of occupied peoples in West Germany and Okinawa, the material benefits these families enjoyed abroad served as a symbol of American prosperity and wisdom. The larger point here remains the critical role that women played in advancing U.S. foreign policy interests. Plant's observations and Alvah's conclusions help break down the part played by gender in twentieth century foreign relations.

Undoubtedly, this remains an area worth more consideration, but Plant, Alvah, Laura Briggs (Reproducing Empire), and others provide a growing historiographical map of twentieth century foreign policy and its relation to gender. Considering the work done by Amy Kaplan, Allison Sneider, and others on the role of gender in the policies and debates surrounding nineteenth century manifest destiny and U.S. imperialism, the past fifteen to twenty years have been fruitful.

Still, Plant's Gold Star example also illustrates the complexity of this process when race intervenes. Amazingly, when one considers the prejudices of the day, the Gold Star Program included African American women. Yet Black women found themselves predictably segregated from their white counterparts. Moreover, they enjoyed far better accommodations in France than stateside. The government found itself in an awkward position as it tried to resolve contradictory impulses. Though the mothers served a patriotic function, advertising American "munificence," the U.S. government also felt a need to maintain the "racial construction of the all-American war mother," a construction that frequently, if not always, excluded black women. The government's inability to "treat the black pilgrims as ladies" provoked a negative reaction in the Black press, which Plant points out felt slighted on two levels. "It is impossible to say which outraged the black community more – the violation of the women's rights or the social slight against them, for the gendered construction of citizenship made the two offenses indistinguishable," she writes. (69) Black war mothers who took the opportunity went out of their way to deflect these criticisms by emphasizing that the program did treat them like ladies. This failed to appease large segments of the Black community, who submitted that these mothers had failed. Black mothers were expected to instill and nurture racial pride in their children, and by accepting the Gold Star Program in its segregationist orientation these women had undermined that process. The Chicago Defender listed the names and towns of the first group of pilgrims in 1930 with the headline, "Their Sons Died for Segregation."


White liberals took issue with segregation as well, but for different reasons. Liberals conflated racism with feminine snobbery; thus, they seemed less concerned with institutional racism than with its being small-minded and "undemocratic." The Gold Star Program encountered trouble as the depression deepened and it became clear that WWI veterans themselves had not been adequately compensated, leading some veterans to lash out at war mothers. This combined with a feeling that war mother rhetoric trafficked in fascism and with a popular culture critique that transformed the war mother's role. If soldiers had earlier fought to defend "the American war mother," cultural productions now questioned this formation, portraying war mothers as "self aggrandizing figures" from whom their sons needed defending. Though the interwar shift still held to aspects of the earlier "moral mother" ethos, critiques in popular culture, as already discussed with Wylie, pointed to a shift that deepened following WWII.

As the 1950s progressed, it became clear the once sacred tenets of mother love had wilted under the scrutiny of popular culture, psychology, and social science. The very things that had once made mother love so healthy and natural now made it pathological. Psychologists portrayed motherhood as female "fulfillment," thus rejecting the self-sacrificial ideal that had been central to mother love. Moreover, too much mother love was now seen as narcissistic and sexually perverse. Pundits and others expressed reservations about mothers in ways that would have been unthinkable in previous decades. Mothers were to employ more intensive care in their child's early years, stepping back as the child developed; too intense a relationship between mother and child, experts warned, caused sexual perversions such as homosexuality. As Plant notes, this further eroded the public's connection to the Victorian maternal ideal. "Rather than ushering in a resurrection of the Victorian ideal of Mother Love," she says, "the 1940s and 1950s witnessed its demise in mainstream American culture." (88) Part of this transformation resulted from attempts by experts to exert greater control over child rearing. While commentators and experts certainly impugned mothers, they did so as a means to change American ideals regarding "maternal affect and behavior." If feminists like Betty Friedan pointed to a "feminine mystique" as the reason for mid-century discontent, Plant complicates this idea. The primary frustration for women, she suggests, was a "double bind": women were prevented "from constructing their identities as autonomous individuals, and still prevented from competing on equal terms with men in the workplace and the broader political realm," yet, Plant points out, "they were now also discouraged from constructing their identities as selfless nurturers, whose sacrifices entitled them to certain rewards." (116) Meanwhile, the spread of a new therapeutic ethos promoted by experts and popular culture told women that the truly "feminine" path was to reach fulfillment as individuals through motherhood.

In the book's latter chapters, the discourse surrounding motherhood and the birth process emerges as particularly influential. Experts and others helped to create a discourse around both motherhood and pregnancy itself, and Plant repeatedly illustrates how mothers crafted experiential stories that adhered to many aspects of that discourse. In these sections, Mom combines aspects of Beth Bailey's From Front Porch to Back Seat, Joan Scott's "The Evidence of Experience" (which Plant references in her introduction), and Regina Kunzel's Fallen Women, Problem Girls. This is to say, Plant remains wary of accepting letters and correspondence as uniquely authentic and draws upon psychological and popular culture discourse regarding pregnancy and motherhood to illustrate how women explained their individual experiences within or in relation to these discourses. Like Bailey's work, which examined how dating structures established by etiquette manuals, popular culture, and psychological and dating experts influenced men's and women's identities and relationships, Plant explores how pregnant women themselves described the birthing process. Like the black Gold Star Mothers who attempted to cast the government's treatment of them as "ladylike" in order to deflect criticism, women often constructed their own memories of childbirth in relation to the dominant discourse of the day. Though the "natural birth process" promoted by professionals and experts on one hand granted women more agency, transforming the experience from a "potentially harrowing medical event" in which male doctors exerted nearly total control into a "joyful experience – one they could understand and partially control," it also incorporated a new ideal of motherhood that illustrated a tendency "to turn a woman's performance of pregnancy and labor into a gauge of her mental health or a test that revealed the degree of her adjustment to femininity." (143) Even women who suffered through "natural childbirth" felt compelled to diminish this aspect of their experience out of fear of not measuring up to this new standard of womanhood. Betty Friedan's stance on natural birth represents the complexities of these issues: though supportive of women reclaiming control over childbirth, Friedan viewed the natural birth movement's promotion of a "natural motherhood" warily.


Of course, this leads to a discussion of arguably the most famous feminist of the past fifty years. Perhaps no book that includes a discussion of mid-twentieth century gender could ignore Betty Friedan. Plant explores Friedan's appropriation of Wylie's "momism" critique. Though describing mid-century suburban mothers as akin to concentration camp inmates and employing many of Wylie's arguments, Friedan also explored the "implications of momism" systematically. For Friedan, suburban mothers needed to be "liberated from the confines of domesticity," writes Plant. Friedan's work proved polarizing, as it managed to alienate segments of both homemakers and working mothers without offering a unifying set of grievances. If middle class Progressive era maternalism helped middle class white women paper over ideological differences, postwar anti-maternalism highlighted these conflicts, resulting in what Plant describes as a "profoundly divisive force." For the UCSD professor, responses to Friedan's critique allow historians to think about the ambiguity and ambivalence that anti-maternalism seemed to encapsulate in the years to follow.

What has this meant for the ideology of motherhood today? One might point to anti-Iraq War protester and war mother Cindy Sheehan. Sheehan and others resurrected the Gold Star brand as an anti-war organization under the title "Gold Star Families for Peace," notable not only for its criticism of the war but also for its focus on families rather than mothers. Sheehan's protest undoubtedly gained credibility from her experience as the mother of a son killed in Iraq. However, in today's context, Sheehan endured right wing vilification that impugned her motives, including accusations that she lacked the requisite patriotism for an American mother. In a review of Sheehan's 2005 book Not One More Mother's Child for the Future of Freedom Foundation, Professor Sam Bostaph of the University of Dallas spent less time on Cindy Sheehan the mother, paying far more attention to Cindy Sheehan the "street fighter of anti-war rhetoricians." Bostaph lionized Sheehan's anti-war efforts, pointing out that "her simple eloquence draws crowds. Those who stand with her see a leader and protector. Those who defend the war fear her outspokenness against it and the doubts that she raises among the general public. Consequently, they vilify her." Sheehan's example draws attention to the various transformations in the ideology of motherhood, both for mothers themselves and for observers. It is hard not to see strains of anti-maternalism in right wing attacks on Sheehan, yet as Plant points out, in earlier decades traditionalists would have been exactly the ones to defend a war mother like Sheehan. Undoubtedly, the intersection of war, motherhood, and a highly critical protest movement makes Sheehan not exactly a new model but one that complicates the ideology of motherhood in twenty-first century America.

Peg Mullen provides an even earlier example. The wife of a small town Iowa farmer, Mullen lost her son to the Vietnam War in February of 1970. Like Sheehan, she did not retreat into a depressive shell but rather reacted with what one observer described as an "arid furied Medean grief, one in which anguish is indistinguishable from rage." As a 2009 New York Times obituary stated matter-of-factly, "An angry mother is, of course, most dangerous to her enemies. Peg Mullen started calling the Pentagon relentlessly, demanding more information about the circumstances of her son's death." When the Pentagon sent a check for just over $1,800 for her son's life, the Mullens used it to pay for a newspaper ad publicizing their opposition to the war. When aides to Nixon returned her repeated correspondence, Mullen sent it back with the message, "Send it to the next damn fool." As New York Times correspondent Sara Corbett noted, Peg Mullen's fury "spooled outward into the world." One woman wrote to her declaring that it was time "for mothers to unite." Mullen argued that women should bear their anger as publicly as their sorrow. "I always reminded them that their son belonged to them," she wrote, "not the military." When Nixon visited the state capital, Mullen was there protesting, and for her efforts she was clubbed by local policemen trying to pry from her a sign that read "55,000 Dead, 300,000 Wounded — My Son, Just One." Mullen greeted America's second incursion into Iraq with similar disbelief. When asked about Cindy Sheehan's anti-war efforts in Crawford, Texas, Mullen responded, "I would give my right arm to be there." Eighty-eight at the time, Mullen concluded, "I mean, somebody's got to stop this thing."


Both women entered the public eye through their roles as mothers of fallen sons, yet motherhood seemed to occupy only one aspect of their self-image rather than their whole identity. Themes of sacrifice undergirded much of their protest, granting their opposition greater weight, yet they straddled modern conceptions of motherhood. Each appealed to her fellow mothers in ways that appeared patriotic to left-leaning observers and anathema to those on the right. Returning to the black war mothers of the original Gold Star Program, one wonders how different the reception of Sheehan's or Mullen's protests would have been had they been African American. Whether in today's "war mothers" or in Amy Chua's reflections on Chinese motherhood, the ideology surrounding mothers remains a vibrant and controversial issue, one clearly mediated by race. The intersection of race and class in both domestic and foreign policy debates illustrates both how complex and controversial modes of motherhood can be and how central they remain to our national identity.


Ryan Reft

Wednesday, February 9, 2011

How We Got Here: Stein, Cowie, and Arrighi on the Post-Industrial Economy


In 1974 an autoworker from Michigan named Dewey Burton remarked disconsolately to a reporter, "I wanted to be somebody…It wasn't about the money so much as that I just wanted to have some kind of recognition, you know, to be more tomorrow than I was yesterday, and that's what I was working for." The economic repercussions of the 1973 oil shock hit America's working class especially hard and were perhaps the first major signs that the period Eric Hobsbawm calls the Golden Age of capitalism (1950-70) was coming to a dramatic end. Indeed, aside from the 1930s, the 1970s is the only decade in which Americans ended up poorer than they began. As if history were taking its cue from Schumpeter (or Marx, depending on your taste), the seventies saw the gradual decline of the American manufacturing sector and the ascendancy of the service sector.

For the well-off, the latter meant multiplying opportunities in finance, as the financial sector would soon loom large over the economy. For those less fortunate it meant closing plants, mass lay-offs, stagnant or dropping wages, and a shift to less lucrative precincts of the service sector—waiting tables and cleaning bathrooms. This dramatic decline in fortune for the majority of Americans was compounded by the steady dismantlement of America's already very modest welfare state. This gives credence to the despondent joke of a good friend: "Starbucks is the new social safety net." Unfortunately, gallows humor can do little to help us understand the gravity of the general despair of the 1970s. "It takes so much to just make it that there's no time for dreams and no energy for making them come true—and I'm not sure anymore that it's ever going to get better," Dewey explained. Reflecting on how hard he had worked merely to survive, Dewey summarized his feelings thus: "I realized I was killing myself, and there wasn't going to be any reward for my suicide."[1]

So what happened? Economic historian Judith Stein (Running Steel, Running America) provides a formidable explanation for the demise of American manufacturing in her latest book, The Pivotal Decade: How the US Traded Factories for Finance in the Seventies. But before going into her thesis, it is worth outlining alternative explanations for the dismal economic performance since the seventies.

As is well known, the period following World War II saw unprecedented economic growth. During this "Golden Age," the economy grew at about 4 percent a year, disposable income grew by 15 percent in real terms, the percentage of the population living below the poverty line dropped from 40 to 10 percent, and perhaps most significantly, "The income of the lowest fifth increased 116 percent, while the top fifth grew 85 percent; the middle also gained more than the top."[2] Since about the mid-seventies, wages have been stagnant, household income for the majority of Americans has barely budged despite the rise of dual-income households, inequality has risen dramatically, and economic growth has slowed dramatically (by about half) even taking into account what Joseph Stiglitz called the "Roaring Nineties."

A number of factors contributed to America's economic growth in the Golden Age, but foremost among them was luck. World War II had just decimated America's main industrial competitors in Europe and Asia. This created more than a mere competitive advantage. In many instances, American products were the only ones in town. Moreover, America's late entrance into World War II coupled with its relatively minor role in defeating Nazi Germany assured economic preeminence for some time after the end of the war. [Gerhard Weinberg puts American casualties in the war at 300,000, while recent research puts Soviet casualties at 25 to 27 million. About nine million Germans died.] Robust government policies also played a fundamental role. In effect, a whole host of New Deal programs that laid the foundation for unprecedented growth acted as gigantic subsidies for the middle class. "New Deal programs in communications, banking, housing, and airlines stabilized investment," Stein notes. "To make the telephone accessible to all, the government set tariffs so that richer and urban consumers subsidized poorer and rural customers. [Real America™, if you will] New Dealers removed the risk in the mortgage market so that banks and other institutions could lend to many who otherwise never could have entered the housing market. They did not ignore renters, funding extensive public housing construction. The government promoted airlines by offering mail contracts to sustain the new business. The state funded research in defense and space and was itself the market for the products of that research. The New Deal kit contained many tools—government spending to reduce unemployment but also regulations to promote industries valuable to the nation."[3]

This being said, it is important not to romanticize the 1950s and 1960s. As Thomas Sugrue explains in his classic The Origins of the Urban Crisis, contrary to widespread belief, the "urban crisis" actually has its roots in the 1950s and 1960s when, among many other factors, inadequate and discriminatory housing policy combined with misguided private sector and neighborhood policies created disastrous conditions for African Americans in urban areas. Stein makes careful note of the mixed outcomes of the fifties and sixties. Despite major gains, Americans were a long way from being comfortably affluent. "The median family income for 1968 was $8,632, when it had been $3,931 in 1947. But $8,632 was about a thousand dollars less than what the Bureau of Labor Statistics defined as 'modest but adequate' income for an urban family of four," she states. Likewise, despite reductions in overall poverty, "In 1970, government figures indicated that 30 percent of the nation's working-class families were living in what was actually poverty, with incomes of less than $7,000. Another 30 percent were above a poverty budget but below the intermediate level. Thus, 60 percent were either poor or hovering between poverty and the very modest level of the intermediate budget."[4] Nonetheless, progress was very real.

An explanation for what went wrong depends on whom you ask. A favored explanation on the right puts the blame on rising labor costs due to overzealous unions and overregulation of the economy. While even Marxists agree there is something to this argument, it does not explain why growth rates remain dramatically lower than those of 1950-70 despite the deregulation of nearly every sector of the economy, stagnant wages at home, and access to dirt cheap labor abroad.

Perhaps unsurprisingly, liberals have had an even more difficult time coming up with explanations for the crises of the seventies. Aside from the very real and very detrimental impact of the oil shocks of 1973 and 1978-79, Keynesian liberals like Paul Krugman admit that it is “still somewhat mysterious” why growth slowed so dramatically in the seventies. Although some liberals point to misguided monetary and price and wage control policy (Stein argues that both played important roles), there seems to be no liberal consensus on the 1970s.

Marxist scholars, on the other hand, provide some compelling arguments. David Harvey, for example, argues that falling profit margins pushed industry to tap what Marx called the "reserve army of labor" in the third world. What caused the falling profit margins? Marx would have comforted modern conservatives by agreeing that excessive labor costs typically create profit margin crises. Harvey takes into account several factors, but they boil down to these: labor-saving technology, increased organizational efficiency, increased mobility of capital, and capital's successful efforts to depress wages through various means all left consumers—who are also laborers—in the world's biggest market (the US) with less income to consume capital's products. Since labor (and its wages) is necessary for the consumption of capital's products, the diminishing fortunes of the former led to lower aggregate demand for the latter, which, in the long term, means falling profits. This problem, Harvey notes in his latest book, The Enigma of Capital, was quickly resolved by extending credit to millions of middle and working class people in order to fuel growth through debt-based consumption.

Pivotal Decade provides an argument that complements those of the Marxists. At the heart of Stein's story is the conflict between America's Cold War foreign policy priorities and the wellbeing of its own citizens. Put briefly, the Soviet victory over Nazism and the sheer destitution of Western Europe and East Asia provoked fear in Washington (and London) of fertile ground for communist revolution. Aside from providing massive amounts of economic aid and negotiating a dramatic devaluation of (Western) European currencies to make their exports more competitive, the United States, in an effort to stifle hospitable conditions for communist revolution, opened up its market to European, Japanese, and Korean exports despite those countries' restrictions on foreign (mainly American) imports. (The US also used covert and overt force by helping rig elections and supporting fascists in France, Italy, and Greece, among other places.) European and East Asian exports also received a boost from their own governments in the form of generous subsidies, import quotas, and other forms of protection, which allowed them to develop their infant industries and increase their international competitiveness. Incidentally, this whole affair would have given Alexander Hamilton a nasty migraine.

In effect, the United States government helped dismantle its own economy in order to gain the upper hand in the Cold War. Stein argues that several factors contributed to this act of slow motion self-immolation. Some of this self-harm was unforeseeable, while other policies seemed downright foolhardy. An example of the somewhat more excusable undermining of self-interest is America's quasi-inadvertent role in developing Japan's steel industry. The US had eyed Japanese production during the Korean War both for its proximity to the conflict and as a way to boost Japanese employment and thereby avoid a potentially radicalized mass of unemployed people. It is no wonder that the president of Toyota called the Korean War "Toyota's salvation": "the U.S. military's order of a thousand trucks a month made up for the steep decline of the company's sales."[5] Likewise, US military spending in Japan coincided with the peak years of Japanese growth in 1966-70, which also happened to be the height of the Vietnam War.

Other policies were less excusable. Democratic and Republican administrations alike refused to seriously confront Cold War allies that were restricting American imports while taking advantage of open access to American markets. The results were nothing short of staggering. For example, Stein points out that between 1967 and 1970, Japanese exports to the US increased by 96 percent.[6] The flood of subsidized imports on the American market made it nearly impossible for American producers of goods to compete. This created an incentive for American corporations to expand their business and increase market share by divesting at home and investing abroad. For example, since trade barriers prevented easy access to European markets for American goods, American capital fled to Europe, where capital controls kept that capital from fleeing back out in response to any unruly labor activity; there was no equivalent American policy. The Ford Administration's William Seidman conceded as much, but his only solution to the problem was to cut corporate taxes in order to increase profits at home.

This sort of capital flight was nothing new, of course. It was in fact just a grand-scale example of the destructive consequences of capital's mobility that the US had experienced internally in the late 1940s and early 1950s as manufacturing moved to the South and Midwest while deindustrializing much of the Northeast. As Jefferson Cowie argues in his insightful edited volume, Beyond the Ruins: The Meanings of Deindustrialization, "we must jettison the assumption that fixed capital investment in resource extraction, heavy manufacturing, and value-added production defines the stable standard against which all subsequent changes are to be judged. Rather, we should see this political-economic order and the culture it engendered as temporary and impermanent developments in space and time."[7] Though this provides little comfort to those whose lives are destroyed by capital's mobility, it is a cold truth about capitalism that must be acknowledged.
Economic inequality is a feature, not a bug

Although Stein does not go into much depth on finance, the work of Giovanni Arrighi complements her thesis on the centrality of the oil shocks to the demise of manufacturing and the rise of finance. Stein aptly quotes historian Steven Schneider's conclusion about the 1973 oil shock: "The oil-exporting countries had secured the greatest nonviolent transfer of wealth in human history."[8] Ironically, this new wealth obtained by oil-exporting countries needed somewhere to sit and accumulate interest. Naturally, the capital went to Wall Street. Arrighi states that this surplus of capital helped spur "innovations" within the western financial sector and promoted (initially) low-interest loans to poor countries. In the short term, Stein notes, these loans funded projects throughout the developing world such as shipyards, petrochemical plants, and steel mills that looked to the US market as an ideal destination for their final products. Strangely enough, Stein points out that western (mostly American-based) banks "lobbied for Third World access to American markets because that was the only way to get repayment of their loans. The new lending policies thus created new conflicts between American finance and American manufacturing."[9] The encore oil shock of 1978-79 funneled even more petrodollars to Wall Street, which fueled even more speculation.

Another conclusion shared by Stein and Arrighi concerns the effect the abandonment of fixed exchange rates had on the growth of the financial sector. In 1971, the US ran its first merchandise trade deficit since the late 19th century.[10] The fixed exchange rate system established after World War II gave America's competitors an advantage, since their undervalued currencies made their exports cheaper than American products. Ever the subtle mind, Nixon solved this problem by dismantling the Bretton Woods system and establishing floating exchange rates. In the short term, the results were mixed. Stein notes that although the value of the dollar fell, the Fed's increase of the money supply and the establishment of flexible exchange rates did little to assuage the fears of financial markets about America's trade deficit and overall balance of payments. Sure enough, capital fled to Europe.[11] More importantly, however, flexible exchange rates fueled speculative finance:
The breakdown of the regime of fixed exchange rates added a new momentum to the financial expansion by increasing the risks and uncertainty of the commercial-industrial activities of corporate capital. Under the regime of fixed exchange rates, corporate capital was already engaged in currency trade and speculation. “But for the most part the acknowledged responsibility of the central banks for holding the rates fixed relieved corporate financial managers of the need to worry about day-to-day changes” (Strange 1986: 11). Under the regime of flexible exchange rates, in contrast, corporate capital itself had to deal with day-to-day shifts in exchange rates. The coming and going in corporate bank accounts of money in different currencies forced corporations to engage in forward currency trading in order to protect themselves against shortfalls in their accounts due to changes in the exchange rates of the currencies in which their expected receipts and anticipated payments were quoted. Moreover, fluctuations in exchange rates became a major factor in determining variations in corporate cash flow positions, sales, profits, and assets in different countries and currencies. In order to hedge against these variations, corporations had little choice but to resort to the further geopolitical diversification of their operations.[12]
This distinction between speculative capitalism and the production of actual goods plays an important role in Arrighi's thesis of the late phase of systemic cycles of accumulation throughout capitalist history, reaching back to the 15th century. To be sure, one might ask how one can locate the pre-industrial roots of capitalism. Arrighi uses French historian Fernand Braudel's unique conception of market and capitalist economies. In Braudel's view, there are three distinct levels of capitalism: the first level is based on the basic exchanges of material for subsistence, which can be located before the industrial revolution; the second level is the more familiar system of commodity exchanges based on producers and firms (i.e. "the market"); and the third level is that in which capitalists engage in high finance and other more abstract forms of exchange of capital (rather than goods) to increase profit margins through monopolization. As such, Arrighi borrows Braudel's conclusion that capitalists (of the third level) see a competitive market as a barrier to be overcome. Since capital's goal is to monopolize markets in order to control profit maximization, it is intrinsically opposed to any barrier to its monopolization of the market, including the legal standards for fair competition common throughout liberal capitalist economies.

They taught Bernie Madoff everything he knows

Arrighi identifies four systemic cycles of accumulation, or periods of economic and political hegemony. The Genoese (fifteenth to early seventeenth century), Dutch (late sixteenth through eighteenth century), British (latter half of the eighteenth century through the early twentieth century), and American (late nineteenth century to present) cycles of accumulation all have distinct features but share certain patterns. Each cycle contains epochs of material expansion and epochs of financial expansion, the latter marked by accumulation through financial transactions and exchanges of money (insurance, stocks, and derivatives) rather than through investment in and exchanges of commodities (actual stuff). Borrowing from Braudel, Arrighi observes that "we identify the beginning of financial expansions with the moment when the leading business agencies of the preceding trade expansion switch their energies and resources from commodity to the money trades. And like Braudel, we take the recurrence of this kind of financial expansion as the main expression of a certain unity of capitalist history from the late Middle Ages to our own days."[13] For our purposes, the most important pattern across all the cycles of accumulation identified by Arrighi is the recurring transition from productive investment (material expansion) to speculation (financial expansion). These transitions, according to Arrighi, are marked by a "sudden intensification of inter-capitalist competition."[14] Although Stein's micro-historical approach is drastically different from Arrighi's broad political-economic 600-year history, Stein's narrative provides additional empirical evidence for Arrighi's thesis by detailing America's rising trade deficit during the 1950s and 1960s as the American economy switched its orientation to finance in the 1970s.

All in all, these processes led not only to the undermining of American manufacturing but also to the destruction of the welfare state.

An honest look at the trajectory of American history affirms Cowie's argument that the New Deal era was anomalous. Unlike in Western Europe, where labor movements of the late sixties and early seventies led to profound transformations of economic life for the working and middle classes, the 1970s ended with a shrug and a whimper for American labor. To borrow Paul Krugman's memorable phrase, despite the seemingly linear progress of the working class during the 1950s and 1960s, Americans were in fact "living through the end of an 'interregnum between Gilded Ages.'"[15]

So where does that leave Dewey Burton? The answer lies not in the false dichotomy of Dewey being slapped around and strangled by Adam Smith's invisible hand or rescued by the "moral sentiments" of man that were supposed to save us from ourselves. Rather, the fate of the Deweys of the world depends on whether we can acknowledge and transcend the brutal internal contradictions of capital accumulation. Needless to say, this leaves most of us in a very precarious situation.
Joel Suarez

This post is part of a series on the idea of the post-industrial society and the political economy of the late twentieth century United States.  Previous posts include The Rural Roots of America's Cities of Knowledge, Looking for the City of Knowledge, and FIRE and ICE: The Realities of Twenty-First Century Urban Development.


Monday, February 7, 2011

The Damaging Ruins of a Classical Aesthetic: Titus Andronicus as Critique of Neoclassical Kitsch


Titus Andronicus has often been rejected by critics for its unharmonious juxtaposition of classical references with wanton violence. This combination produced what has historically often been seen as an unstable work, a failure.[1] But if we regard everything in the work as deliberately placed to convey something very specific, Titus becomes a critique of neoclassicism's pillaging of ancient texts for references without regard to their contextual meaning, or, if you like, for reference without reference.

From the start of the play, Titus is an assault on the senses far too hammy to be without sarcasm. "The Most Lamentable Romaine Tragedie of Titus Andronicus" drips with the sort of mawkishness that might be meant seriously by a hack, but could only be sardonic from the pen of a wit of Shakespeare's caliber. From the title onward, the play attacks the brand of over-sentimentality that we today might call kitsch.

Robert C. Solomon, in his article "On Kitsch and Sentimentality," describes kitsch as "the sort of item that would and should embarrass someone with a proper aesthetic education."[2] It is morally degenerate art that imitatively plunders the aesthetic of original art but in doing so rapes and defiles original meaning. In the twenty-first century, we are no strangers to kitsch. We find it in little souvenir replicas of the David statue brought home from a vacation in Italy, in the formulaic romances of pulp fiction and soap operas, in the cut-away living rooms of situation comedies, in the heroic language of political speeches, and even in an evening spent by the fireplace with a cup of cocoa. Kitsch, then, is not mere sentimentality, but empty sentimentality. It is the use of the ghosts of real art to evoke emotion from an audience. As the image presented by kitsch grows more distant from the original artwork (as with new imitations based not on the original but on the previous imitation), its ability to appeal to sentiment lessens.[3] Consequently, it becomes more and more exaggerated as it attempts to hold on to its power over the sentiment.

To understand Titus as a critique of kitsch, we must ask what art is being raped and plundered for imitation. We need not look far to find it: Lavinia stands at the center of the play as the bloodied remains of Greco-Roman art and the classical aesthetic.[4] As if the sight of armless Lavinia is not enough to suggest a ruined work of art, Marcus compares her to "a conduit with three issuing spouts," that is, a fountain.[5] As Lavinia is a distorted imitation of Greco-Roman art (even an imitation of the armless remains of it), the original goal of the classical aesthetic is not discernible in the figure she cuts on stage. The suggestion is that art cannot be removed from its original context and retain its original meaning, nor can it be removed in full from its original context at all – there is always some loss of information. The danger in presenting the remains of classical art to a non-classical public is in the tendency to mistake these remains for the complete artwork. Lavinia embodies neither the beauty nor the meaning of Greek sculpture, and consequently she is, as a symbol, also deficient in the moral[6] of the art she imitates (that moral being undefined for us, as it is inaccessible without its original context). With the rape of Lavinia and the lopping off of her limbs and tongue, the play imitates the story of Philomela and Tereus.[7] Chiron and Demetrius, then, rape Lavinia as the author rapes Ovid for his plotline, so the meaning formerly embodied in each in its completeness is defiled, and whatever lesson or meaning was available through Ovid is not available in its empty imitation. The kitsch mocked by Titus, then, is the classical allusion.

Although the Neoclassical period in literature was not under way until 50-some-odd years after Shakespeare's death,[8] the practice of littering texts with classical allusions was no rarity in Shakespeare's time or in Renaissance literature in general. Consider the popularity of court masques, allusion-heavy entertainment pieces designed to flatter the monarchy and reinforce its legitimacy.[9] Recall Christopher Marlowe's Dido, Queen of Carthage, which appropriated and re-told pieces of the Aeneid. And let's not forget Spenser's Faerie Queene, which borrowed Aristotle's pre-Christian virtues and used them as a basis for Christian morality. All of these, and many others, were within Shakespeare's realm of exposure and influence.

Titus Andronicus, strewn with references to classical literature, is intentionally devoid of classical ideals. The incongruence between its references and its non-existent classical aesthetic is far too extreme to be coincidental. Where an Aristotelian revenge tragedy would have escalated in violence until achieving catharsis, Titus grows increasingly violent only to conclude with Titus's grandson, the beginning of a new generation, choking back tears and wishing he himself were dead.[10] There is no chorus to offer a moral and no expression of hope to give the audience the relief of thinking all the brutality and death in the play happened for a reason. While in Ovid's story the wronged Philomela, once avenged, escapes further retaliation by being turned into a bird and flying away,[11] the final line of Titus Andronicus has Lucius saying of Tamora, "And being dead, let birds on her take pity!"[12] The author has transformed Ovid's birds from a symbol of hope to a morbid reminder that in the wake of the play's deaths there is only decay.

In implying that to remove art from its original context also removes its moral content, the author points a finger at all of literature. The lessons of Ovid, Horace, etc. are remote to the experience of Shakespeare's characters, despite the characters' familiarity with these authors' works. Although Metamorphoses is meaningful for the play, it struggles to be meaningful to the characters. Marcus, upon finding Lavinia dismembered in the forest, immediately wonders whether she was raped, like Philomela, by "some Tereus."[13] He remembers the story, considers the parallel, and then ignores its meaning altogether when he sees that Lavinia, unlike Philomela, has no hands with which to sew a tapestry naming her assailants.[14] As a kitsch replica removed from its original context, the story fails to reference its original meaning. After Titus and Marcus make the connection between Lavinia's rape and the rape of Philomela, Shakespeare,[15] rather than avoiding imitation or choosing a new moral course, has them play the old story out to its end by serving up a human dish of revenge.[16] Instead of offering a lesson (for the characters), the inclusion of the literary past in Titus offers only grounds for repetition in the living present. The implication is that recycling what has already been set to rest in words breathes not life but death into a new written work.

But we must remember that Titus Andronicus was not intended as a written work. In Shakespeare's time, a play existed in performance, and performance is always in the present. Just as Ovid's Philomela would have been, to the Renaissance reader, dusty text without a living image, Shakespeare's Lavinia was a living image but without sound. Ripped from a classical context, the living, breathing Lavinia on stage still fails to communicate the classical ideal to the present. Instead, she communicates the sad knowledge that there is a permanent barrier between the past and present. Lavinia presents a figure too hideously alive and suffering to engage the sentiment on any reasonable level. In her grotesqueness, she is detached from the audience as much as Ovid is detached from her and the other characters in the play. From the time of her rape until the revelation of the names of her assailants, Lavinia is an incomprehensible text from which the other characters must "wrest an alphabet."[17] Art that communicates nothing has no meaning, and in her visual onslaught upon sentimentality (too shocking to be answerable), Lavinia acts as the very definition of kitsch. She is Philomela rendered meaningless, a part with reference to no whole, a breaking down of the hermeneutic circle. Original meaning has been cut away first by her removal from the past to which her figure alludes, and further by the author with the cutting away of her limbs, an amputation which through its own perversity mocks the perversity of amputating the art of the past to fit it, as imitation (more favorably known as allusion or reference), into the art of the present.[18]

The aberration that is Lavinia's image after her rape is so incongruent with reality that it is unintelligible. As a text that offers no possibility of intersubjectivity[19] between it and its reader, as a simply unreadable text, Lavinia's image is incapable of eliciting any rational response. The results are Marcus's disturbing, sexualized blazon when he finds Lavinia "lopped and hewed,"[20] and Titus's almost comic treatment of his daughter's condition, as he puns on "hands"[21] and suggests she "hold [her] stumps to heaven."[22] Grotesque unreadability is forced upon the audience, as well, in the scene where Lavinia chases her young nephew around the stage.[23] The audience knows her reason, but is nonetheless forced to see a certain inappropriate comedy in armless Lavinia's pursuit of her nephew.

If Titus is seen as sophisticated criticism of neoclassicism, Shakespeare appears to have been far ahead of his time in criticizing the literary and art worlds for looting history for artistic treasures and then donning them as a runway model might don the Holy Coat of Trèves, in either mockery or ignorance of any sacred value they may hold. Had Titus Andronicus been consistently read since the nineteenth century as critique rather than as the failure of a young, unseasoned playwright, perhaps we never would have seen the worst of High Modernism in the twentieth. One could argue that as allusion-dropping intensified in twentieth-century literature and poetry so did self-critique, but what about self-awareness prevents an empty symbol from being just that? An introspective kitsch is kitsch nonetheless. But while the tragedy and its characters are mostly lamentably kitsch, the play itself, in its critical perversity, seeks through its exaggeration to tear down the illusion of authenticity, and in this, I would argue, saves itself from the kitsch that it condemns. Still, regardless of whether one chooses to see Titus Andronicus as artful, it is clear that it can be seen as a guidepost leading away from an abyss of over-indulgent neoclassicism.

Cherie Braden