Friday, June 24, 2005

Interrogators Cite Doctors' Aid at Guantánamo Prison Camp - New York Times


June 24, 2005
Interrogators Cite Doctors' Aid at Guantánamo Prison Camp

WASHINGTON, June 23 - Military doctors at Guantánamo Bay, Cuba, have aided interrogators in conducting and refining coercive interrogations of detainees, including providing advice on how to increase stress levels and exploit fears, according to new, detailed accounts given by former interrogators.

The accounts, in interviews with The New York Times, come as mental health professionals are debating whether psychiatrists and psychologists at the prison camp have violated professional ethics codes. The Pentagon and mental health professionals have been examining the ethical issues involved.

The former interrogators said the military doctors' role was to advise them and their fellow interrogators on ways of increasing psychological duress on detainees, sometimes by exploiting their fears, in the hopes of making them more cooperative and willing to provide information. In one example, interrogators were told that a detainee's medical files showed he had a severe phobia of the dark and suggested ways in which that could be manipulated to induce him to cooperate.

In addition, the authors of an article published by The New England Journal of Medicine this week said their interviews with doctors who helped devise and supervise the interrogation regimen at Guantánamo showed that the program was explicitly designed to increase fear and distress among detainees as a means of obtaining intelligence.

The accounts shed light on how interrogations were conducted and raise new questions about the boundaries of medical ethics in the nation's fight against terrorism.

Bryan Whitman, a senior Pentagon spokesman, declined to address the specifics in the accounts. But he suggested that the doctors advising interrogators were not covered by ethics strictures because they were not treating patients but rather were acting as behavioral scientists.

He said that while some health care personnel are responsible for "humane treatment of detainees," some medical professionals "may have other roles," like serving as behavioral scientists assessing the character of interrogation subjects.

The military refused to give The Times permission to interview medical personnel at the isolated Guantánamo camp about their practices, and the medical journal, in an article that criticized the program, did not name the officials interviewed by its authors. The handful of former interrogators who discussed the practices at Guantánamo with The Times did so on condition of anonymity; some said they had welcomed the doctors' help.

Pentagon officials said in interviews that the practices at Guantánamo violated no ethics guidelines, and they disputed the conclusions of the medical journal's article, which was posted on the journal's Web site on Wednesday.

Several ethics experts outside the military said there were serious questions involving the conduct of the doctors, especially those in units known as Behavioral Science Consultation Teams (BSCT), colloquially referred to as "biscuit" teams, which advise interrogators.

"Their purpose was to help us break them," one former interrogator told The Times earlier this year.

The interrogator said in a more recent interview that a biscuit team doctor, having read the medical file of a detainee, suggested that the inmate's longing for his mother could be exploited to persuade him to cooperate.

Dr. Stephen Xenakis, a psychiatrist and former Army brigadier general in the medical corps, said in an interview that "this behavior is not consistent with our medical responsibility or any of the codes that guide our conduct as doctors."

The use of psychologists and psychiatrists in interrogations prompted the Pentagon to issue a policy statement last week that officials said was supposed to ensure that doctors did not participate in unethical behavior.

While the American Psychiatric Association has guidelines that specifically prohibit the kinds of behaviors described by the former interrogators for their members who are medical doctors, the rules for psychologists are less clear.

Dr. Spencer Eth, a professor of psychiatry at New York Medical College and chairman of the ethics committee of the American Psychiatric Association, said in an interview that there was no way that psychiatrists at Guantánamo could ethically counsel interrogators on ways to increase distress on detainees.

But in a statement issued in December, the American Psychological Association said the issue of involvement of its members in "national security endeavors" was new.

Dr. Stephen Behnke, who heads the group's ethics division, said in an interview this week that a committee of 10 members, including some from the military, was meeting in Washington this weekend to discuss the issue.

Dr. Behnke emphasized that the codes did not necessarily allow participation by psychologists in such roles, but rather that the issue had not been dealt with directly before.

"A question has arisen that we in the profession have to address and that is where we are now: is it ethical or is it not ethical?" he said.

Dr. William Winkenwerder Jr., assistant secretary of defense for health matters, said the new Pentagon guidelines made clear that doctors may not engage in unethical conduct. But in a briefing for reporters last week, he declined to say whether the guidelines would prohibit some of the activities described by former interrogators and others. He said the medical personnel "were not driving the interrogations" but were there as consultants.

The guidelines include prohibitions against doctors' participating in abusive treatment, but they all make an exception for "lawful" interrogations. As the military maintains that its interrogations are lawful and that prisoners at Guantánamo are not covered by the Geneva Conventions, those provisions would seem to allow the behavior described by interrogators and the medical journal. The article in the medical journal, by two researchers who interviewed doctors who worked on the biscuit program, says, "Since late 2002, psychiatrists and psychologists have been part of a strategy that employs extreme stress, combined with behavior-shaping rewards, to extract actionable intelligence."

The article was written by Dr. M. Gregg Bloche, who teaches at Georgetown University Law School and is a fellow at the Brookings Institution, and Jonathan H. Marks, a British lawyer who is a fellow in bioethics at Georgetown and Johns Hopkins Universities.

Dr. Bloche said in an interview that the use of health professionals in devising abusive interrogation strategies was unethical and led to their involvement in violations of international law. Dr. Winkenwerder said on Thursday that the article was "an outrageous distortion" of the medical situation at Guantánamo, according to Reuters news agency.

The article also challenges assertions of military authorities that they have generally maintained the confidentiality of medical records.

The Winkenwerder guidelines make it clear that detainees should have no expectation of privacy, and that medical records may be shared with people who are not in a medical provider relationship with the detainee only under strict circumstances.

Dr. Bloche said such an assertion was contrary to what he had discovered in his research. It is also in conflict with accounts of former interrogators who previously told The Times that they were free to examine any detainee's medical files. After April 2003, when Defense Secretary Donald H. Rumsfeld tightened rules on detainee treatment, one interrogator said the records had to be obtained through biscuit team doctors who always obliged.

The former interrogator said the biscuit team doctors usually observed interrogations from behind a one-way mirror, but sometimes were also in the room with the detainee and interrogator.

U.N. Inquiry on Guantánamo

By The New York Times

UNITED NATIONS, June 23 - A four-member team of United Nations human rights experts accused the United States on Thursday of stalling on requests over the past three years to visit detainees at Guantánamo and said it would begin its own investigation without American assistance.

"Such requests were based on information from reliable sources of serious allegations of torture, cruel, inhuman and degrading treatment of detainees, arbitrary detention, violations of their right to health and their due process rights," the four, all independent authorities who serve the United Nations as fact-finders on rights abuses, said in a statement.

Pierre-Richard Prosper, the United States ambassador for war crimes, said the United States had been unable to meet the fact-finders' deadline to answer its request but intended to keep the matter open.

Copyright 2005 The New York Times Company

Thursday, June 23, 2005

The New York Review of Books: Selling Washington

Volume 52, Number 11 · June 23, 2005

Selling Washington
By Elizabeth Drew
Tom DeLay by David Levine

As the criminal investigation of the Washington lobbyist Jack Abramoff was underway this spring, a spokesman for the law firm representing him issued a statement saying that Abramoff was "being singled out by the media for actions that are commonplace in Washington and are totally proper." Abramoff has since said much the same thing. The lawyer was half right. Like many other lobbyists, Abramoff often arranged for private organizations, particularly nonprofit groups, to sponsor pleasant, even luxurious, trips for members of Congress, with lobbyists like himself tagging along and enjoying the unparalleled "access" that such a setting provides; i.e., they get to know congressmen and sell them on legislation. They take over skyboxes at sporting events, inviting members of Congress and their staffs.

But Abramoff has differed from other lobbyists in his flamboyance (he owned two Washington restaurants, at which he entertained), and in the egregiously high fees he charged clients, in particular, Indian tribes in the casino business. The Senate Indian Affairs Committee, headed by John McCain, found last year that Abramoff and an associate, Michael Scanlon, a political consultant and former communications director for House Majority Leader Tom DeLay, received at least $66 million from six tribes over three years. Abramoff also instructed the tribes to make donations to certain members of Congress and conservative causes he was allied with. And he was careless—for example in putting on his credit card charges for DeLay's golfing trip to the St. Andrews golf course in Scotland in 2000, with a stop in London for a bit of semi-serious business to make the trip seem legitimate. It's illegal for a lobbyist to pay for congressional travel, but Abramoff is reported to have paid for three of DeLay's trips abroad. A prominent Republican lobbyist told me that the difference between what Abramoff did and what many other lobbyists do was simply "a matter of degree and blatancy."

Abramoff's behavior is symptomatic of the unprecedented corruption—the intensified buying and selling of influence over legislation and federal policy —that has become endemic in Washington under a Republican Congress and White House. Corruption has always been present in Washington, but in recent years it has become more sophisticated, pervasive, and blatant than ever. A friend of mine who works closely with lobbyists says, "There are no restraints now; business groups and lobbyists are going crazy—they're in every room on Capitol Hill writing the legislation. You can't move on the Hill without giving money."

This remark is only slightly exaggerated. For over ten years, but particularly since George W. Bush took office, powerful Republicans, among them Tom DeLay and Senator Rick Santorum, of Pennsylvania, have been carrying out what they call the "K Street Project," an effort to place more Republicans and get rid of Democrats in the trade associations and major national lobbying organizations that have offices on K Street in downtown Washington (although, of course, some have offices elsewhere).

The Republican purge of K Street is a more thorough, ruthless, vindictive, and effective attack on Democratic lobbyists and other Democrats who represent businesses and other organizations than anything Washington has seen before. The Republicans don't simply want to take care of their friends and former aides by getting them high-paying jobs: they want the lobbyists they helped place in these jobs and other corporate representatives to arrange lavish trips for themselves and their wives; to invite them to watch sports events from skyboxes; and, most important, to provide a steady flow of campaign contributions. The former aides become part of their previous employers' power networks. Republican leaders also want to have like-minded people on K Street who can further their ideological goals by helping to formulate their legislative programs, get them passed, and generally circulate their ideas. When I suggested to Grover Norquist, the influential right-wing leader and the leading enforcer of the K Street Project outside Congress, that numerous Democrats on K Street were not particularly ideological and were happy to serve corporate interests, he replied, "We don't want nonideological people on K Street, we want conservative activist Republicans on K Street."

The K Street Project has become critical to the Republicans' efforts to control all the power centers in Washington: the White House, Congress, the courts—and now, at least, an influential part of the corporate world, the one that raises most of the political money. It's another way for Republicans to try to impose their programs on the country. The Washington Post reported recently that House Majority Whip Roy Blunt, of Missouri, has established "a formal, institutionalized alliance" with K Street lobbyists. They have become an integral part of the legislative process by helping to get bills written and passed—and they are rewarded for their help by the fees paid by their clients. Among the results are legislation that serves powerful private interests all the more openly—as will be seen, the energy bill recently passed by the House is a prime example —and a climate of fear that is new. The conservative commentator David Brooks said on PBS's NewsHour earlier this year, "The biggest threat to the Republican majority is the relationship on K Street with corporate lobbyists and the corruption that is entailed in that." But if the Republicans are running a risk of being seen as overreaching in their takeover of K Street, there are few signs that they are concerned about it.

When the Republicans first announced the K Street Project after they won a majority in Congress in the 1994 election, they warned Washington lobbying and law firms that if they wanted to have appointments with Republican legislators they had better hire more Republicans. This was seen as unprecedentedly heavy-handed, but their deeper purposes weren't yet understood. Since the Democrats had been in power on Capitol Hill for a long time, many of the K Street firms then had more Democrats than Republicans or else they were evenly balanced. But the Democrats had been hired because they were well connected with prominent Democrats on Capitol Hill, not because Democratic Congresses demanded it. Moreover, it makes sense for lobbying firms that want access to members of Congress to hire people with good contacts in the majority party—especially former members or aides of the current leaders. But the bullying tactics of Republicans in the late 1990s were new.

DeLay, Santorum, and their associates organized a systematic campaign, closely monitored by Republicans on Capitol Hill and by Grover Norquist and the Republican National Committee, to put pressure on firms not just to hire Republicans but also to fire Democrats. With the election of Bush, this pressure became stronger. A Republican lobbyist told me, "Having the White House" has made it more possible for DeLay and Santorum "to enforce the K Street Project." Several Democratic lobbyists have been pushed out of their jobs as a result; business associations who hire Democrats for prominent positions have been subject to retribution. They are told that they won't be able to see the people on Capitol Hill they want to see. Sometimes the retribution is more tangible. The Republican lobbyist I spoke to said, "There's a high state of sensitivity to the partisanship of the person you hire for these jobs that did not exist five, six years ago—you hire a Democrat at your peril."

In one instance well known among lobbyists, the Ohio Republican Michael Oxley, chairman of the House Financial Services Committee, put pressure on the Investment Company Institute, a consortium of mutual fund companies, to fire its top lobbyist, a Democrat, and hire a Republican to replace her. According to a Washington Post story on February 15, 2003, six sources, both Democratic and Republican, said that members of Oxley's staff told the institute that a pending congressional investigation of mutual fund companies "might ease up if the mutual fund trade group complies with their wishes." It apparently didn't matter to them that House ethics rules prohibit congressmen or their staff "from bestowing benefits on the basis of the recipient's status as a supporter or contributor, or partisan affiliation." A Republican now holds the top job at the Investment Company Institute.

Last year retribution was taken against the Motion Picture Association of America, which—after first approaching without success a Republican congressman about to retire—hired as its new head Dan Glickman, a former Democratic representative from Kansas and secretary of agriculture in the Clinton administration. Republicans had warned the MPAA not to hire a Democrat for the job. After Glickman was hired, House Republicans removed from a pending bill some $1.5 billion in tax relief for the motion picture industry. Norquist told me, "No other industry is interested in taking a $1.5 billion hit to hire a Clinton friend." After Glickman was selected, the Capitol Hill newspaper Roll Call reported last year, "Santorum has begun discussing what the consequences are for the movie industry." Norquist said publicly that the appointment of Glickman was "a studied insult" and the motion picture industry's "ability to work with the House and the Senate is greatly reduced." Glickman responded by hiring prominent Republicans, including House Speaker Dennis Hastert's former spokesman, for major MPAA jobs.

Norquist's organization, Americans for Tax Reform, keeps watch on other K Street firms and calls attention on its Web site to the ones that are out of line.[1] According to a report in The Washington Post in 2003, an official of the Republican National Committee told a group of Republican lobbyists that thirty-three of the top thirty-six K Street positions had gone to Republicans.

Despite its effectiveness, "the K Street Project is far from complete," according to Norquist, who says, "There should be as many Democrats working on K Street representing corporate America as there are Republicans working in organized labor—and that number is close to zero." He wants the project to include not just the top jobs in K Street firms, but "all of them—including secretaries."

A prominent Democratic Party fund-raiser believes that in 2001, after nineteen years as head of a trade association, he was fired because he was not a Republican. Another Democratic lobbyist told me that one of his major clients was put under pressure to drop him because he was a Democrat. A staff member in DeLay's office called the second of the two men and told him that he was "in DeLay's crosshairs," and warned him that if he attempted to work with any committees on Capitol Hill, he would get nowhere because of his political leanings.

Episodes of this kind have created a new atmosphere of fear in Washington. (Because of that atmosphere, these people as well as several others insisted on talking "on background," to protect themselves against retribution.) The Democratic lobbyist whose client was pressured by Republicans to drop him remarked, "It's a dangerous world out there," a world where, he said, "You'd better watch what you say. People in the Republican party, in the agencies, will say, 'I hear you were badmouthing X.' You know that you're being watched; you know that it's taken into account in your ability to do public policy things—[like] get a meeting with a government agency." Another lobbyist says, "It's scary now. People are afraid to say what they feel. It's had a chilling effect on debate." According to the head of a public policy group who frequently deals with lobbyists and corporations, "They don't have to say it," but he finds them now "intimidated by the atmosphere in this town—you hire Republicans."

Business groups are under heightened pressure to support the administration's policies—even those that are of no particular interest to them. A recent article in Business Week told of business organizations, including the Business Roundtable—an association of CEOs of major corporations—being summoned to meetings with Mike Meece, a special assistant to the President, various cabinet officers concerned with business affairs, and Karl Rove. They anticipated a friendly give-and-take about economic legislation but instead they were told to get behind the President's plan to privatize Social Security. As a result, these organizations have spent millions of dollars promoting Bush's new program, particularly through ads. Business groups have been notably reticent about criticizing administration policies—even ones they deeply dislike, such as the huge budget deficit. In the past, when they differed from administration policies, for example on trade or tax issues, they spoke out. An adviser to business groups says, "They're scared of payback, of not getting their own agenda through."

The connections between those who make policy and those who seek to influence it have become much stronger in recent years because of lobbyists' increasing use of nonprofit groups to sponsor trips that give them access to lawmakers, as with DeLay's trip to Scotland and England. Jack Abramoff arranged for the trips of DeLay and other members of Congress to be officially sponsored by the National Center for Public Policy Research, of which he is a member of the board. According to the congressional ethics rules a lobbyist cannot repay the cost of a free trip for a congressman by reimbursing the nonprofit group that organized the trip. But there's nothing to prevent him from giving large contributions to the organization or encouraging his clients to do so. Abramoff urged the Indian tribes he represents to contribute to the National Center, which paid for DeLay's trips. Owing to a major loophole in the ethics rules, nonprofit groups do not have to disclose their contributors. "It's a real abuse," the Republican lobbyist told me. Such trips are also a way of getting around the ban on gifts of more than $50 to members of Congress.

For the Washington lobbyist, the most-sought-after access is to someone who writes the nation's laws and appropriates federal money. Trips offer the best opportunity for the lobbyist to make an impression on a congressman. Since congressmen can no longer make use of soft money under the McCain-Feingold campaign finance reforms, they are increasingly using golfing weekends and hunting trips for fund-raising. The politicians in effect charge the lobbyists to play golf or hunt with them. (Members of the middle class and the poor have scant opportunity to play golf with members of Congress.)

Many congressional trips have a serious purpose; some members restrict their travel to hazardous places like Iraq and Afghanistan. Such trips can be paid for out of congressional committees' funds—but they are usually less glamorous, harder to explain to the voters since the public pays for them, and they don't include lobbyists. The rules for privately funded trips, for example that they must be "in connection with official duties," have been interpreted quite loosely. Larry Noble, executive director of the Center for Responsive Politics, a nonpartisan group that studies money in politics and its influence on public policy, says, "Even where they touch base with the rules, they don't take them seriously."

According to a study of congressional travel paid for by nonprofit institutions over the past five years, the Aspen Institute, a think tank based in Aspen, Colorado, and Washington, has spent the most on congressional travel; but Aspen is a serious organization that conducts seminars in the US and abroad, and lobbying isn't involved.[2] More interesting is the nonprofit that spends the next highest amount: the Ripon Society, actually the Ripon Educational Fund, an offshoot of the Ripon Society, which was founded in the 1960s by liberal Republicans as a serious organization concerned with public policy. Now that liberal Republicans are virtually extinct, Ripon has become an organization for relatively moderate Republicans.

Like other policy groups that also lobby, Ripon has set up an ostensibly separate "educational" group, or 501(c)(3), to which contributors can make tax-deductible donations. The Ripon Educational Fund sponsors a large annual "Transatlantic Conference," held in such pleasant places as Rome, London, and Budapest, to which it invites between 150 and 200 US citizens. These trips are vaguely described in the filings of the members of Congress who participated in them as a "listening tour" or "fact finding."

The Ripon trips are famous among lobbyists for the opportunities they present for pressing their cases with members of Congress. A Republican lobbyist says that a Ripon Fund excursion has "become the trip to go on, because of the luxury and the access." The Washington Post reported that a Ripon Educational Fund trip to London in 2003 was attended by more than a hundred lobbyists, including representatives from American Express, AOL/Time Warner, and General Motors. They pay the Ripon Fund an annual membership fee of $9,500, and in addition finance their own trips abroad to Fund meetings.

Both the Ripon Society and the Ripon Educational Fund are headed by lobbyists. Former Representative Susan Molinari, of Staten Island, New York, a lobbyist whose clients now include Exxon, the Association of American Railroads, and Freddie Mac, is the chair of the Educational Fund. The president of the society itself is Richard Kessler, whose lobbying firm's clients include drug and cigarette companies. According to The Hill, the other Capitol Hill newspaper, Kessler's firm paid for a trip by five members of Congress to Ireland in August 2003, including four days at Ashford Castle, where the elegant grounds include a golf course. Of the members of Congress who went on Ripon Educational Fund trips, almost all took along their wives, an additional perk that contributes to the holiday atmosphere of the excursions. While lobbyists are prohibited from paying directly for congressional trips, trade associations and private corporations are allowed to do so—not much of an ethical distinction, since practically all of them engage in lobbying.

A recently released Congressional Quarterly study said that the disclosure forms filed by members of Congress "frequently show a direct correlation between a member's legislative interests and the sponsors of his or her trips." For example, Representative Michael Oxley, who is particularly concerned with corporate finance, took several trips underwritten by companies such as MCI. A political observer who closely studied congressional trips concluded that the Republicans are invited so they can be "worked on" to pass pending legislation, while the Democrats are there largely for "maintenance," in case they take power in the future. Moderate, "swing" Democrats who can affect the outcome of legislation come in for special attention.

The McCain-Feingold campaign finance reform bill in 2002 didn't stop powerful companies and members of Congress from buying and selling influence. Representative Barney Frank, a major backer of the reform bill, says, "It works about the same as it did before." But, he adds, because the new law banned large soft money contributions by individuals, corporations, and labor unions to campaigns for federal office, and maintained overall limits on how much a person can contribute to federal elections—doubling them from $2,000 to $4,000 per election cycle—everyone has to work harder to raise the money.[3] Still, congressmen are seldom heard to complain that they can't raise enough money and in fact, according to data compiled by the Center for Responsive Politics,[4] both the political parties and individual candidates are raising more money than ever. Lobbyists still manage to deliver large amounts to legislators by "bundling" smaller contributions.

They contribute most of the money they raise to incumbents who can be depended on to do favors—a major reason (in addition to gerrymandering) why there is serious competition in only 10 percent of House races, and only about five seats change hands in each congressional election. Members of Congress expect to receive contributions from local industries (and their workers)—say, the coal industry in West Virginia—and they back legislation to help them out as a matter of doing constituent work. It's illegal for a firm to compensate employees for their political contributions, but, a Republican lobbyist says, a job applicant is often told that he or she is expected to make contributions, and salaries are adjusted accordingly.

It's virtually impossible to show that a particular campaign contribution resulted in a specific vote—such quid pro quo is illegal. Fred Wertheimer, of the public advocacy group Democracy 21, told me, "The system's designed so that you don't see who gets what for their money. It's designed for me to give money to you and you do something for me in the Congress—without either of us saying a word about it. But if I give money, I know it and the candidate knows it. It's an investment, and down the road you collect on it." While much of the money buys access to a member of Congress, or key staff members, that is only the entry point to making one's case. As John McCain puts it, "You give money, you get an ear." Still, one can sometimes even trace what Larry Noble carefully calls "correlations" between contributions and legislative successes.

The energy bill passed by the House in April is a striking case in point. The oil-and-gas industry, a top contributor of campaign money—80 percent of it to Republicans—benefited from several of its new provisions. A study by the staff of Representative Henry Waxman, Democrat of California, shows that perhaps the most indefensible provision gave a waiver against lawsuits to manufacturers of MTBE, or methyl tertiary-butyl ether, a gasoline additive that's a pollutant and suspected carcinogen. According to Waxman's staff, this waiver is worth billions to energy companies; the major beneficiaries would be Exxon, which, according to the Center for Responsive Politics, contributed $942,717 to candidates in the last election cycle; Valero Energy, $841,375; Lyondell Chemical, $342,775; and Halliburton, $243,946. The bill also exempted from the Safe Drinking Water Act the practice of hydraulic fracturing, which is used to make natural gas wells more productive and can also have an adverse effect on drinking water. Halliburton would benefit from this provision as well.

Another provision provided compensation to oil companies that bought leases, supposedly a speculative venture, on offshore sites where there is a moratorium on drilling. The compensation is worth billions of dollars to the oil industry. The bill also provided for the opening of the Arctic National Wildlife Refuge (ANWR) to oil drilling—an invasion of the refuge that environmental groups have long tried to prevent. (Now that it contains more Republicans, the Senate passed a similar provision as part of its budget bill earlier this year.) The Democrats on the House Energy and Commerce Committee were effectively shut out of the drafting of the energy bill. House Democrat Edward Markey, a member of the Subcommittee on Energy and Air Quality, told me, "The energy companies got everything they wanted. Eight billion dollars in subsidies go to the energy companies, but to say that the conservation measures in it are modest would be a generous description."

An analysis by the Center for Responsive Politics shows that pharmaceutical manufacturers, who received a windfall from the new prescription drug program in the 2003 Medicare bill—including a provision prohibiting the federal government from negotiating with drug companies on prices—contributed more than three times as much to those who voted for the legislation as those who voted against it. A bill passed this year in the Senate and the House to tighten the rules for filing bankruptcy had long been sought by finance, insurance, and real estate interests, and particularly by credit card companies. Taken together, they all contributed $306 million to congressional campaigns, 60 percent of it to Republicans, during 2003 and 2004. The richest interests also spend the largest amounts of money on lobbying. According to a recent study by the Center for Public Integrity,[5] the makers of pharmaceuticals and health products spent the most—$759 million—on lobbying between 1998 and mid-2004, when the last lobbying reports were filed. Next came insurance companies. Oil and gas companies were seventh on the list.

The effects of the new, higher level of corruption on the way the country is governed are profound. Not only is legislation increasingly skewed to benefit the richest interests, but Congress itself has been changed. The head of a public policy strategy group told me, "It's not about governing anymore. The Congress is now a transactional institution. They don't take risks. So when a great moral issue comes up— like war—they can't deal with it." The theory that ours is a system of one-person-one-vote, or even that it's a representative democracy, is challenged by the reality of power and who really wields it. Barney Frank argues that "the political system was supposed to overcome the financial advantage of the capitalists, but as money becomes more and more influential, it doesn't work that way."

Two House Democrats, Rahm Emanuel, of Illinois, and Martin Meehan, of Massachusetts, have introduced legislation to tighten the rules on privately funded travel, strengthen the lobbying disclosure rules, and slow down the revolving door by which former members of Congress take jobs with the trade associations and, after a year, can lobby their former colleagues. Some Republicans are talking about placing more restrictive rules on trips. But the record shows that new regulations can often be evaded.

Perhaps the greatest deterrent to ethical transgression is that members of Congress don't want to read unfavorable stories about themselves. A Republican lobbyist says that the biggest factor in the growth of corruption has been "the expectation that all this goes undetected and unenforced." He added, "If Jack Abramoff goes to jail, that will be a big message to this town." Since the scandal broke over Abramoff's payments on behalf of DeLay, members of Congress have been scrambling to amend their travel reports, in some cases listing previously unreported trips, or filling in missing details. Public outrage can also have an inhibiting effect: after the Republicans changed the ethics rules earlier this year to protect DeLay, the adverse reaction in the press and from constituents was strong enough to make the Republican leadership back down.

But the public can't become outraged about something that isn't brought to its attention. The press tends to pounce on the big scandals but usually fails to cover the more common ones that take place every day. Some of the politicians I talked to hoped that the scandal over DeLay and Abramoff might lead to real changes, including more prosecutions and stricter disclosure requirements. But even they admit that, like so many other scandals, it may simply blow over.

[1] See

[2] See

[3] In the 2004 presidential election such money was paid to so-called "527 groups," which spent $500 million in the 2003–2004 election cycle. This wasn't, as widely thought, the result of a loophole in the McCain-Feingold bill but of the failure of the feckless Federal Election Commission to enforce a section of a 1974 campaign finance law.

[4] See

[5] See


Copyright © 1963-2005 NYREV, Inc. All rights reserved. Nothing in this publication may be reproduced without the permission of the publisher. Illustrations copyright © David Levine unless otherwise noted; unauthorized use is strictly prohibited. Please contact with any questions about this site. The cover date of the next issue of The New York Review of Books will be July 14, 2005.

Wednesday, June 08, 2005

The Ineluctability of American Empire

Talk paper from a conference on Forms of Empire at NDU. Comments welcome. PAB

Paul A. Bové

1354 Royal Oak Drive

Wexford, PA  15090

Friday, April 29, 2005

For delivery at Notre Dame

            On May 3, 2005


The Ineluctability of American Empire


In this note, I want to propose a hypothesis built on the research of William Appleman Williams into what he called “empire as a way of life.”  In fact, I intend to correct an important error in Williams’ analysis by exposing an unexamined presupposition that often misdirects a great deal of scholarship on US history and on US literary and cultural reality.  In 1980, Williams insisted that the American people did not know that the US is an empire, and his work clearly suggested that if intellectuals could bring that fact to public attention and incorporate it within the popular culture, American democracy would choose a non-imperial path for American politics and culture.

Williams’ book appeared in 1980, when the Reagan revolution took hold in US presidential politics.  The book is not a merely belated expression of sixties optimism over participatory democracy or the literalization of Cold War faith in democratic republican forms.  It is also a tactical intervention into a crisis that would see the US reorganize its economic, military, and security policies to adjust for the fall of the Bretton Woods accords.  In the UK, Margaret Thatcher foreshadowed the conservative movement’s transformations of US political rhetoric and practice, and Jimmy Carter’s failures to deal with either material structural weaknesses or the ‘malaise that gripped the American people’ opened the door for a jovial savage politics of developing neo-liberalism and neo-conservatism.

Williams’s 1980 error was a simple one:  he believed in the liberatory ideologeme of ‘democracy’ in the American myth, or at least thought it still a viable figure within the political arsenal of progressive politics, as it had seemed perhaps since Woodrow Wilson and Walt Whitman.  Williams’s book is out of print, a minor fact in a longer story that has seen the neo-conservative intellectuals seize control of the word ‘democracy’ and its heritage to operate an openly forceful US national security policy.  Williams’s book came at the end of a rapidly fading and embattled tendency in US history, which fact accounts in part for the tonal pathos that competes with its utopian ambitions.

What was Williams’s error?  In a word, he did not place enough weight on the structural arrangements the US state had imposed on the country for its imperial purposes.  Without considering this brute fact, progressive historians and cultural critics especially will always imagine they see the possibility of resistance or of real social alternatives to the imperial arrangements of US power.  For at least one hundred years, the state has disposed of the US in such a way that it can draw upon its resources for its own purposes and manage its institutional legitimacy without threat of more than temporary or local opposition.  The powerful if not dominant conservative and reactionary tendencies in US culture and politics rest not upon a short-term manipulation of public opinion to form hegemony, but upon long-standing arrangements of material power that state intellectuals planned for some time.  Everyone can adduce some seeming exception to my claims, from the struggles of the CIO and the Civil Rights movement to the expansion of the university population.  Rather than be drawn into debates over the bite each particular might take out of my claim, let me rest its legitimacy upon the ease with which the American state has recently projected force around the world without regard for its opposition.  Power is a measure of success.  That state power is not absolute matters little in this discussion.

We might examine at least two lines of research.  First, we can look at the arguments about turnout and participation in the recent presidential elections.  This is an important topic because it might trouble Williams’s assumption that Americans would not choose empire if they knew the US were an empire!  Second, and more important, we should look carefully at a persistent tradition among US state intellectuals who believe not only in the manufacture of consent—to invoke Chomsky—but also in the often globally expansionist capacities and requirements of US power and interest.  Given time constraints, I will focus here on the second of these lines of work.

There are at least two premises behind these state intellectuals’ policies:  US survival demands expansion, and security comes only from global domination.  Intentionally I set aside all the ideologemes of American exception and the like to concentrate on some more hardheaded expression of material vision.  In these times, we can reward our reading by considering again the origins of modern US national policy in a vision of geopolitical and historically fated insecurity.  Alfred Thayer Mahan is the most important US intellectual whose work understood these national problems and influenced state affairs.

Mahan’s 1889 classic on sea power in history exists to answer the question of how the US, situated as it is, might become a great power.  Knowing his audience, Mahan set a solid foundation for his book.  “Men,” he wrote, “may be discontented at the lack of political privilege; they will be yet more uneasy if they come to lack bread” (38).  Insecurity exists on just this continuum from poverty to weakness—although how the lack of bread would come about in the US is unclear.  The metonymic anxiety is clear enough.  The solution is also clear and marks the modern American state ambition:  “Only an absolute control of the sea can wholly secure such [commercial] communications, since it is impossible to know at what point an enemy coming from beyond the visible horizon might strike” (39).

In an article for the Guggenheim Foundation, I studied the way in which Paul Wolfowitz theorizes the development of US power as the mechanism for controlling the globe in the 21st century.  As part of that paper, I commented on Wolfowitz’s invocation of his great predecessor, Mahan, whose work, 100 years earlier, established the norm for US state intellectuals’ articulation of US power and its purposes.  Since my purpose here is to suggest something of the iron-cage nature of American empire and to draw attention to the terms in which some of its principal theorists structured America as empire, I want to look at Mahan’s expression of foundational principles and policy.  I know, in advance, that especially populist progressives (the children of Appleman Williams) will revolt against my effort, claiming not only that forces of historical resistance create spaces for freedom and difference against my notion of the iron cage, but also that a careful examination of historical detail shows the necessary compromises inherent in any structuration.  I concede all of this, as I say, in advance, but add that it matters not at all to my analysis.  The brute fact is that for more than 100 years America has been foundationally imperial.  In fact, despite any form of anti-imperial developments, the nation-state has persistently arranged itself best to order the world and its power systems to its own interests.  Even defeats, such as Vietnam, have not resulted in any weakening of the iron cage.  We need to understand this if we are to think the question, ‘can there be a non-imperial America?’

Wolfowitz specifically invokes Mahan’s essay, “A Twentieth Century Outlook,”[1] in his own essay on the twenty-first century.[2]  Wolfowitz found two important points in Mahan:  peace must not be accepted as a good; and state politics must acquire priority over economy as the leading discourse for discussing power, action, and value.

Even more fundamental principles link Mahan and Wolfowitz than their basic claims for state power and sovereign will.  Before addressing these directly, it is worth noticing some pertinent parallels of situation and person.  Each is a secularist.  Each makes strategic use of moral and religious languages and politics.  Each works at a time when the force of liberal economics creates movements that we now know by the term ‘global.’  Also, each contests the power of a seemingly natural ideological ally in the economic sphere—those we might call the free traders, the classic liberals, equally well represented by the Manchester School and the University of Chicago.

For each, though, the most important principle is historical in the deepest sense that civilizational concepts allow.  We are familiar with the habit of victorious powers making history as they need.  In the case of modern American state intellectuals such as these, it is their fundamental conception of the nature of history and of state actors in history that unites them in what we have come to call imperium—when it might be just as well to call it a state system of excess power.

Mahan was a strategic thinker whose work combined two central objects of thought:  first, discovering the universal laws of power and advantage—military or geopolitical; and second, determining how best to dispose those laws in the historically specific and geographically determining realities of a particular moment.  In short, Mahan’s concern was with the creation, development, and operation of great powers, the greatest of which acquire the status of imperia.  Over and again Mahan writes in his treatises, his articles, his lectures, his reports, and even his letters to his friends and family, that a time is coming soon when the American people will accept—consciously and unconsciously—that their appetites and interests require not only action abroad, beyond the continent’s borders, but the installation of a political system that organizes America to become a great power extended wherever its interests roam.

Too often, normal accounts of this expansive impulse restrict themselves to the important and pertinent questions of race and class; they fail to take seriously the consciously articulated purposes of these reflective grand strategists, the effect of their thinking and rhetoric, and the tactical success of their work.  In Mahan’s case, for example—an example that would not be lost on Hannah Arendt—the defining thought is clear, simple, traditional, and, some might conclude, dangerous.  In the same article that Wolfowitz so admires, Mahan quotes Mommsen on Rome in a way that vividly presents how these strategic intellectuals think:

“When the course of history,” says Mommsen, “turns from the miserable monotony of the political selfishness which fought its battles in the Senate house and in the streets of Rome we may be allowed—on the threshold of an event the effects of which still at the present day influence the destinies of the world—to look round us for a moment, and to indicate the point of view under which the conquest of what is now France by the Romans, and their first contact with the inhabitants of Germany and of Great Britain, are to be regarded in connection with the general history of the world.... The fact that the great Celtic people were ruined by the transalpine wars of Caesar was not the most important result of that grand enterprise—far more momentous than the negative was the positive result. It hardly admits of a doubt that if the rule of the Senate had prolonged its semblance of life for some generations longer, the migration of the peoples, as it is called, would have occurred four hundred years sooner than it did, and would have occurred at a time when the Italian civilization had not become naturalized either in Gaul or on the Danube or in Africa and Spain.  Inasmuch as Caesar with sure glance perceived in the German tribes the rival antagonists of the Romano-Greek world, inasmuch as with firm hand he established the new system of aggressive defense down even to its details, and taught men to protect the frontiers of the empire by rivers or artificial ramparts, to colonize the nearest barbarian tribes along the frontier with the view of warding off the more remote, and to recruit the Roman army by enlistment from the enemy’s country, he gained for the Hellenic-Italian culture the interval necessary to civilize the West, just as it had already civilized the East. 
Centuries elapsed before men understood that Alexander had not merely erected an ephemeral kingdom in the East, but had carried Hellenism to Asia; centuries again elapsed before men understood that Caesar had not merely conquered a new province for the Romans, but had laid the foundation for the Romanizing of the regions of the West. It was only a late posterity that perceived the meaning of those expeditions to England and Germany, so inconsiderate in a military point of view, and so barren of immediate result. . . . That there is a bridge connecting the past glory of Hellas and Rome with the prouder fabric of modern history; that western Europe is Romanic, and Germanic Europe classic; that the names of Themistocles and Scipio have to us a very different sound from those of Asoka and Salmanassar; that Homer and Sophocles are not merely like the Vedas and Kalidasa, attractive to the literary botanist, but bloom for us in our own garden—all this is the work of Caesar.”[3]

This vision is familiar to us in the humanities.  What does it mean when those with power act on it?  And how do they so act?  There are two more points I want to make here, in defense of my iron cage theory about America, and Mahan is, as in so much, the perfect textual repository for some exploration.

Mahan answers the ‘how’ with nothing short of a new scientific development.  He invents the science of logistics.  Had we time, we would do two things here.  First, we would explore this term logistics carefully for the philological markers of its importance.  One reason I am comfortable reading Mahan as evidence of the iron cage theory of American empire is that, as a logistician, Mahan understood full well that his grand strategy required Americans coming to be at home with the idea of being a great power and comfortable with living in a home always arranged to that end.  Logistics in this sense derives from logis, which is lodging.  Furthermore, logistic, as a singular form, derives from logos, which indicates the rationalized scientific ambition underlying Mahan’s intense preoccupations with logistics.  What does logistics, which seems to be only a matter of moving materiel around for soldiers, have to do with a conference of American humanists, especially literary or theoretical humanists?  It is part of our function as historical critical intellectuals to know about these matters and bring to bear our unique talents and techniques.  Otherwise, important questions remain unexamined and so can be neither opposed nor displaced.

Logistics, in this case, matters because Mahan, the great naval historian and grand strategic thinker, realized that one of history’s universal laws required political economic organizations structured to allow the state to maximize its access to national productive and ideological resources whenever necessary.  Against peace, such intellectuals organize nations always on the model of war.  Mahan’s extraordinary influence results not just from his monumental research into and agitation for sea-power, especially in the form of offensively capable strike battleships, but also his theory for organizing modern production and modern societies through powerful, moving, flexible vectors of force that always served raisons d’état.

In writing on geopolitical conceptions of power, the contributors to the Danish Foreign Policy Yearbook 2001 recognize Mahan’s central place in global conceptions of modern state power.  Following Mahan, “geopoliticians establish the claim that there exist a close correlation between the overall logical structure of environmental (or spatial) ‘logistics’ and the power control in the world. This is not simply a matter of military logistics but also an issue in regard to the strategic distribution of the world’s resource bases and how this is correlated with the world political system as a differentiated unit.  This issue was later actualized as a topic by Hans Morgenthau.”

George Bush’s national security documents continue this line of state effort, especially by openly collapsing distinctions between national and international, between matters of domestic and security policy.  If nothing else, the Department of Homeland Security is a powerful instrument, along with censorship, pressure groups, disciplined political parties, and the like, for intensifying the geopolitical logistical effort to make the nation seamlessly available to the state.

Normal authorities on Mahan admit his conceptual breakthrough in logistics, a thinking that earned the respect of US, Japanese, and German policy makers along with high intellectuals such as Carl Schmitt and George Kennan.  “Mahan . . . defined logistics as the support of armed forces by the economic and industrial mobilization of a nation.”  Gravity’s Rainbow is not, in this aspect, a story of paranoia, but a history of the US iron cage. 

If logistics is the answer to the question of how state intellectuals imagined the convergence of US cultural and economic production within a permanent state of war preparation, as what we might call a ‘being toward war,’ Mahan provides an equal solution to the puzzle of politics in what was at his time already a globalizing international arena with multiple, albeit unequal, aspiring new and old power centers.  As Joseph Buttigieg emphasizes in speaking with me on these questions, we can find clear echoes of Mahan’s thinking even in such pygmy intellectuals as Robert Kagan, the latest ideologue of American exceptionalism.

I do not, however, intend to soil Mahan with the dirt of such a charge.  His thinking is too cunning and careful for such familiar abuse.  Exceptionalism, I would argue, is a minor element in policy makers’ thinking and decision making.  Mahan explicitly holds to a great power thinking that recognizes the transient nature of status and capacity.  In fact, Mahan’s commitments seem to rest on some sort of bureaucratic proceduralist model, especially for states and nations deciding on the truthfulness of their decisions to apply power. 

In October, 1899, Mahan published another in the long series of magazine pieces he wrote to modify elite judgment about military and strategic matters.  The essay, “The Peace Conference and the Moral Aspect of War,”[4] is a critique of the Hague Peace Conference that had set up restrictions on naval competition as well as boards of arbitration and judgment to negotiate peaceful settlements to conflicts.  Mahan, although a member of a government that supported these efforts, publicly attacked their very idea as unjustifiable morally and politically.  Proleptically, he warned that nation-states cannot and should not seem to yield sovereignty to transnational entities.  Furthermore, beyond the fact that only force can compel a state, Mahan objected that a sovereign state, defined in its highest essence as an ethical state, could not be constrained by law.  Indeed, developing an elaborate analogy between individual conscience and national decision, Mahan concludes that states must act in agreement with their conscience as long as, after following procedure, they are sure of the truthfulness of their moral and political judgment—even if that decision flies in the face of what must be judged others’ erroneous or cowardly or immoral attitudes.

Mahan sets out to persuade by insisting that nations should act sincerely and according to their ethical commitments:  “Nations, like men, have a conscience.”  Under the long secular view of a Mommsen, according to which a state regime’s civilizational efforts can be neither known nor judged for centuries, nation-states’ sincerity requires that they act to enforce right:

The resort to arms by a nation, when right cannot otherwise be enforced, corresponds, or should correspond, precisely to the acts of the individual man which have been cited; for the old conception of an appeal to the Almighty, resembling in principle the medieval ordeal, is at best but a partial view of the truth, seen from one side only. However the result may afterwards be interpreted as indicative of the justice of a cause,—an interpretation always questionable,—a State, when it goes to war, should do so not to test the rightfulness of its claims, but because, being convinced in its conscience of that rightfulness, no other means of overcoming evil remains.  (436)

After Machiavelli, we might be excused for believing that a nation-state’s conscience, comforted by its bureaucratic procedures, fits the most self-interested form of truthfulness rather than the most self-sacrificing.  Mahan himself insists on this obvious truth.  Hoping to undermine the appeal of peace parties in the US and elsewhere, Mahan creates that uniquely American sense of insecurity by arousing anxiety about ubiquitous but unknown threats that not only call for the projection of force but achieve legitimacy based on decisiveness by a state convinced, at whatever point in its procedures is convenient, of the truthfulness of its cause.  States use force to restore right.

Mahan is by no means the start of US intellectuals’ thinking about state power and war, but his lucidity, his place in the institutional history of war planning, his global influence, and his success as a critic of US imperial weaknesses—in 1898 and at other times—give him something of an originary status.  His works appear everywhere on the curricula of US military war colleges and academies.  (As an aside, it’s worth noting that a computer search of the MLA online bibliography turned up only one article dealing with Mahan, by Chris Connery in b2.)  He is a very successful figure, and his concepts and histories have had enormous influence.  In keeping with Henry Adams’s advice that to deal with power one must study success, critics would do well to turn their attention to figures like Mahan and away, for a while at least, from the now all too familiar allegories of subversion, resistance, local difference, and so on.

What do we see if we keep our eyes on this success?  Among other things, we detect a powerful, intelligent, and highly worked out set of arguments for the US to assume a certain form of great power self-definition that prophetically closes the distances between foreign and domestic affairs and assures the nation that its good conscience and its commitment to truthfulness, as it understands it, assure the rightness of its violence and demand something like obedience, or, if you prefer, the subsumption of individual agents as citizens within the fuller agency of the imperium:

Fidelity to conscience implies not only obedience to its dictates, but earnest heart-searching, the use of every means, to ascertain its true command; yet withal, whatever the mistrust of the message, the supremacy of the conscience is not impeached. When it is recognized that its final word is spoken, nothing remains but obedience. Even if mistaken, the moral wrong of acting against conviction works a deeper injury to the man, and to his kind, than can the merely material disasters that may follow upon obedience. Even the material evils of war are less than the moral evil of compliance with wrong.

Sincerity demands obedience.  What then of the nay-sayers who knowingly, like Caesar, cast a glancing eye and see the future, understand the directions of history, and yet find themselves thrown aside for bias or some other affective infelicity and breach of decorum?  Lionel Trilling, in his 1971 Jefferson lecture, published in 1972 under the title Mind in the Modern World, warned that disaster lies in such practices.  Trilling, endlessly worried about the declining power of literature, knew that the loss of literature was the loss of criticism.  Arnoldian to his core, Trilling took literature seriously as a criticism of life as it is.  It is against such instances of alternative ways of being and thinking that Mahan casts the high style of his political philosophy, as an aesthetic performance using the allure of reason and morals to argue for the right to violence in the name of the good and the truthful.  Note well, he argues not in the name of truth—historical, moral, or philosophical truth—but for the legitimacy of the nation-state coming to a decisive sense of its own truthfulness, even if wrong.  For civilization carried by force is a value for which imperialists will seemingly roll the dice.  Once America has become the great power and the die is cast, it seems clear that America cannot be without being an imperium incarnate.

[1] Alfred T. Mahan, “A Twentieth-Century Outlook,” Harper’s Magazine (Sept. 1897), at MoA-Cornell, pp. 521 ff.

[2] “Managing Our Way to a Peaceful Century,” in Managing the International System over the Next Ten Years:  Three Essays; Report to the Trilateral Commission: 50  (New York, Paris, and Tokyo:  The Trilateral Commission, 1997), pp. 43 – 62.

[3] Quoted in Mahan, p. 528.

[4] North American Review, no. DXV (October 1899): 433–47.

Censorship and the Disciplines

This is the talk version of a paper given to the Florida Atlantic Foundation on April 15th. Comments welcome. PAB

©Paul A. Bové

1354 Royal Oak Drive

Wexford, PA  15090



Censorship and the Disciplines

As the case progressed, it became increasingly clear that the agitation against the Jews in France followed an international line.  Thus the [Italian Jesuit Monthly] Civiltà Cattolica declared that Jews must be excluded from the nation everywhere, in France, Germany, Austria, and Italy.  Catholic politicians were among the first to realize that latter day power politics must be based on the interplay of colonial ambitions.  They were therefore the first to link anti-Semitism to imperialism.

--Hannah Arendt, The Origins of Totalitarianism (116)


By its very structure colonialism is separatist and regionalist.  Colonialism is not merely content to note the existence of tribes, it reinforces and differentiates them.

--Frantz Fanon, The Wretched of the Earth (51)

Interdisciplinarity is an historical problem.  At first, it seems to be an epistemological, methodological, or institutional problem, and so we have largely treated it.  How it is an historical problem is itself difficult to say.  Writers and administrators place interdisciplinarity within many frames, each of which is itself specific to an historical context and which, as a gesture, is particular to modernity.[1]

As a problem of knowledge, interdisciplinarity is essentially historical.  It is one way humans mediate their social lives through institutions.  As such, it offers both a synchronic or planar field of examination and the possibility of genealogical accounts of human change.  In other words, interdisciplinarity accords with the decline of Europe, the dominance of corporate and monopolistic power centers, and the nearly impossible cerebral demands of specialized disciplines themselves.

Moreover, in the last twenty-five years or so, academic humanists especially began to stress the word ‘discipline’ as a verb or gerund, emphasizing the shaping as a constraining element of discipline.  Much of this line follows upon a reductive reading of Michel Foucault’s work in Discipline and Punish.  Despite Foucault’s admonition that power produces, and despite frequent verbal acknowledgement of that fact, academic cultural studies types, especially in the US, normalize ‘discipline’ as a repressive device in its very constraint of human knowledge and practice into certain forms.  This story infuses interdisciplinarity with liberatory fantasies of alternative knowledges that exist outside, in the past, or in some forgotten or marginal range of life practice.

At more or less the same time as these humanists invoke the inter- of interdisciplinarity as a possible space of alternative life and knowledge, scientific labs, the state research establishment, and corporate research and publicity units embrace the same space as a point of innovation beyond the limits of any one discipline’s inventive capacity.  As humanists embrace the fuzzy edges of disciplines for their catachrestic and deconstructive effects upon normal orders of knowledge and practice, corporate power—including the universities—intentionally organizes those catachreses in labs and research projects designed to make predictable and regular the innovatory processes.  In one vision, inter- names a space of alternative possibility—much as multi- does in multi-cultural—while for the other, inter- names a solution to capital and the war machine’s needs profitably to invest and control space.

In English, ‘interdisciplinarity’ began as a term associated with the social sciences and the helping professions.  The OED records the first use of the term as recently as 1937, in the December issue of the Journal of Educational Sociology:  “Programs of study submitted should provide . . . for training of an inter-disciplinary nature.”  Succeeding dictionary examples until 1972 also come from the softer social sciences—not from economics, for example.  Along the way, the term entered public discourse, appearing in newspapers and general magazines by 1965, and soon thereafter began to describe academic programs of all sorts that crossed or operated between disciplinary borders.[2]  The OED’s definition of the term—“Of or pertaining to two or more disciplines or branches of learning; contributing to or benefiting from two or more disciplines”—economically catches the complexity and apparent simplicity of the idea’s foundational use.  Something is of interdisciplinary concern if it implicates more than one discipline as, for example, the study of American Civilization at Harvard implicates the literary and historical disciplines.  Each discipline has a claim on the topic and each adds to a sum total of knowledge beyond the other’s reach.  In the end, a new sort of knowledge other than that available to either discipline, to any existing discipline, emerges to validate the inter- and to weaken the claims of the subsidiary practices.  Yet this last idea specifies the OED’s definition:  the new knowledge benefits from the existing disciplines and it might contribute to them, but the resulting new knowledge need not reside in the conceptual or practical space of either.  From this process, of course, one possibility is that a new discipline might emerge.  In this fact, we have a common and, for some, an unfortunate result, while for others the same result is the happy because profitable invention of a new field such as molecular biology or bioengineering or cultural studies.

Many people have applied their minds to this constellation of possibilities.  From deconstruction’s concern with marginalities of knowledge to DARPA’s funding of interdisciplinary developments in weapons and technology, the inter- space has proven a profitable area of capital investment.  A computer search of the Modern Language Association’s Bibliography returns over 225 items listing ‘interdisciplinarity’ as a topic.  Similarly, a Lexis-Nexis search of other academic databases returns more than 275 hits.  With the possible exception of terms such as margin or difference, no term rewards investment as much as ‘inter-.’  Not only do we have interdisciplinary, but intercultural and an entire range of compounds formed by, at, and between the borders that interest us.

The OED once more helps us along.  Let us remember that disciplinarity is itself a modern formation of a noun from an adjective, and so recent that the OED does not list it.  Normally, critics follow this term through its adjectival form, disciplinary, back to its noun origins—discipline and disciple.[3]  If we pause on the adjectival basis of ‘interdisciplinarity,’ however, we catch an element of meaning and use that aligns with the prefix.

Adjectives, we recall, are adjuncts, that is, they provide adjunctive qualities to a substantive.  So, soup becomes hot soup.  Similarly, knowledge becomes ‘disciplinary knowledge’ or practice ‘disciplinary formation.’  In this weak sense, disciplinarity is the result of an adjunction rather than the singular substance of the noun discipline or its plural additive form.  Items in a plural stand by themselves—one foot, two feet.  In the interdisciplinary mode, disciplines cannot and do not stand each alone, and whatever the interdisciplinary is that emerges, it is always in the ambiguous but definite form of an adjunct construction.  Its independence is limited even as it undermines the substantives it modifies.  For the OED, an adjectival noun of this kind is just “that which cannot stand alone; a dependent; an accessory.”  It is a substitute and a dependency, an accessory denying the modified its independence.

Interdisciplinarity is all these things as a game of spatial operators that reduces the historicity of knowledge to the apparent manageability of administration.  In 1992, Francis Fukuyama’s attempt to draw world-historical conclusions from the end of the Cold War declared the end of history and the reduction of all social, political, and human processes to management issues.[4]  The more recently published results of his interest in the interdisciplinary field of ‘biotechnology’ suggest that this new knowledge field might, in fact, achieve what state political economy seems to have ended in the last decade: something like human history.[5]  Fukuyama warns that biotechnical intervention in the human genome might make liberal democracy impossible by making irrelevant all talk of democratic equality resting on something like the universal rights of man.  This profound ethical and political problem presents another managerial dilemma:  how to anticipate, plan for, and control possible outcomes—with a full awareness that prolepsis in the face of organized invention is impossible.  Fukuyama cannot think of any way to deal with this potentially critical problem other than to circumscribe it within management, to reduce its historicity and its potentiality to restart human history.

Perhaps we expect too much from Fukuyama if we think the ‘inter-’ of interdisciplinarity is a hope for history.  ‘Inter-’ does afford us a sense of time, but ‘intervening or happening in the time or period between,’ as the OED puts it—as in the words intersessional or interwar—is a form of spatialization, a cognate of the fundamental meaning, “Situated, placed, or occurring locally, between or among.”  This is why whatever is inter-, whether it be interdisciplinarity or the interregnum of politics, seems eminently manageable.

Only when the very constellation of this emergent ‘inter-’ appears do we see its historicality.  At first, interdisciplinarity simply marks the historical beginnings of the traditional disciplines and the increasing need for their transformation or reorganization.  Institutional and cognitive inertia maintain disciplinary formations as often as their continuing intellectual vitality.  In fact, as we all know, the weaker the discipline—that is, the less innovative, productive, and remunerated the field—the more likely it will slide into some inharmonious relation of interdisciplinarity.  When strong disciplines, such as economics or philosophy, join interdisciplinary efforts, it is as an individual contributor to a second-line project rather than an effort to sustain or revivify the discipline’s own first-line ambitions.  Universities with strong analytic philosophy departments rarely form ‘literature and philosophy’ majors while strong schools of materials engineering must join with chemists to do work in emerging fields like nano-technology.  At this point, it seems as if inter-disciplinarity can be nothing more than a briefly open space, a spot of time, colonizable by the same post-historical forces of knowledge production and management as we expect on the most basic levels of our polity.  That interdisciplinarity cannot satisfy utopian ambition counts against utopian desire within the framework of our moment—no matter how much ‘new knowledge’ or how many ‘new ways of talking’ seem to open in its spaces.  We can make interdisciplinarity a worthy subject of intellectual historical reflection; to do so, however, we must refuse the terms of the ongoing conversation and join a much older thinking buried by the din of our anxious self-congratulations.

Lionel Trilling delivered the first Thomas Jefferson lecture sponsored by the National Endowment for the Humanities in 1972.  Published under the title, Mind in the Modern World, this lecture now seems to be nothing more than a reactionary warning against affirmative action, government involvement in university education, and academic reluctance to defend its own standards.[6]  Indeed, anticipating both the theory wars and the culture wars that raged from the 1960s to the recent past, Trilling speaks on behalf of an Arnoldian critical vision at a time when such humanism had, as Trilling notes, acquired the presumed status of political repression.  Trilling’s willingness to discuss the fate of mind is what should interest us right now.

As Daniel O’Hara’s remarkable book on Trilling’s style makes inescapable, Lionel Trilling never came close to expressing a simple thought directly.[7]  The aesthetic self-formation enacted in and implied by his style simultaneously records Trilling’s awareness of the reasons to doubt mind’s ability to improve the physical and moral world while it also records his faith in human mind nonetheless.  We can apprehend something of what this means from Trilling’s own comments on the ancient Greeks’ inauguration of the wonder for mind that the Renaissance gave us.

In some respects this is a very long line indeed.  It goes back to the philosophers of ancient Greece both in what might be called its aesthetic appreciation of mind, its admiration for the mental faculties almost for their own sake, apart from what practical ends they might achieve, and also in its assumption that mind can play a decisive part in the moral life of the individual person.  In other respects its extent is relatively short, going back only to the Renaissance in its belief that what mind might encompass of knowledge of the physical universe has a direct bearing upon the quality of human existence, and also in its certitude that mind can, and should, be decisive in political life.[8]

We need not agree with all the elements in Trilling’s arsenal of ideological sensitivities to see the importance of his acute description of historical burden.  In this quotation, Trilling provides not a definition but a general understanding of ‘mind’ that demarcates the continuing, fragile, and fluctuating faith of Western society in secular humanism, in human intellect free of mythical or dogmatic presumptions.  If ‘mind’ is another name for secular humanism’s fragile historical existence then its historicality leads us to the time of its development and to its contemporary formations; without considering these together, we betray mind itself.

Trilling’s Arnoldian position, like that of his great successor at Columbia, Edward Said, depends upon the Victorian’s obsessive commitment to criticism.  Trilling was a frightened conservative humanist, who literalized Arnold’s “famous characterization of literature as ‘a criticism of life,’” thereby unduly constricting the work of mind, of criticism, and of ‘literariness.’  Literature’s critical function, Trilling insisted, existed only in the print formations of the Renaissance raised to the central institutional and social place it had seemingly acquired by the 1930s.  Trilling would never have written Benjamin’s essays appreciating, for example, the historical human transplanting of the literary critical into film and other electronic arts.

What matters for us, though, in Trilling’s text, beyond the anxiety and misunderstanding of cultural tendencies?  It is the humanist’s fear that both criticism and the earned accomplishments of humanity’s efforts at social secularization would disappear with the suppression of print’s authority.  More important than Trilling’s fear for literature is his fear for criticism.  Literature matters so much because Trilling cannot imagine any other human formation as essential to the act of criticism.  If literature is a criticism of life, then what is criticism?  Arnold’s answer:  “to see the object as in itself it really is.”[9]  It is essential that we set aside for now all possible epistemological and socio-cultural debates about either what this realist demand might mean or whose reality he envisions.  As important as these matters are within a secular humanistic and historical society, they matter only if such a society, enabled by ‘mind,’ stands as an ideal for achieving justice and equality, truth and beauty.

One might discuss many milestones profitably on the way from the Greeks to the Italian Renaissance in following the unfolding of mind.[10]  Two major figures catch my eye in this context—Vico and Kant. 

Let me explain something of where I am going now.  I will contend that the fundamental concern for all interested in the interdisciplinary must be opposition to religion in the name of that humanism Trilling calls mind.  Along the way, one sees other possible topics for investigation.  Not accidentally, Vico’s disagreement with Descartes develops as a defense of poetry against analysis and reduction—one of the prototypical ideals underlying both Arnold and Trilling’s concern for the literary.  That is, as Vico’s poetry embodies the secular processes of human social creations, analysis stands as its opponent, as the turning away from a study of what humans do in pursuit of a method independent of circumstance and a truth aspiring to apodictic certainty.  In a word, Descartes speaks for ignorance; Vico for learning.  Even between these opponents, there is discipline.

In The Conflict of the Faculties, Immanuel Kant ran the risk of upsetting his prince and censors with a defense of thinking against the authority of faith, especially as it aligns itself with state power and presumed social obligations to stable values and social truths.[11]  We need to remember only that Kant reserved the Higher Faculties—Theology, Law, and Medicine—to the domain of the state, of obedience to right—whereas he characterized Philosophy as the lower faculty and free of state right.  Kant’s brief historical characterization of the university’s functional beginnings resonates still.  While its founders’ identity might be unknown, their intent is clear:  form a ‘public institution’ “to handle the entire content of learning (really, the thinker devoted to it) by mass production, so to speak—by a division of labor.”  Each discipline or branch of learning, each field would be supervised by a professor who functioned as a trustee responsible as a public figure for the authenticity and legitimacy of learning and teaching.  Taken together, the participants in this structure form a community, dealing with all areas of knowledge, and so best known as a university.  The division of labor required divided responsibility and expertise so that separate faculties “would be authorized to perform certain functions”—admissions, examinations, and conferring degrees—or, as Kant says, the right “to create doctors.”[12]

In this story of the university’s founding intent, the state accepts a specific limit upon its right to discipline and censor the professors, the teaching, and the public circulation of produced knowledge.  The higher faculties belong to the state, which relinquishes care for the sciences to secular reason.  Kant insists that the higher faculties are so-called not with reference to “the learned professions” but “with reference to the government.”

A faculty is considered higher only if its teachings—both as to their content and the way they are expounded to the public—interest the government itself, while the faculty whose function is only to look after the interests of science is called lower because it may use its own judgment about what it teaches.  Now the government is interested primarily in means for securing the strongest and most lasting influence on the people, and the subjects which the higher faculties teach are just such means.  Accordingly, the government reserves the right to itself to sanction the teachings of the higher faculties, but those of the lower faculty it leaves up to the scholars’ reason.[13]

Kant chooses to name the lower faculty left to its reason the faculty of philosophy.  We do well to remember Kant’s little story when we worry about state intervention into the university.  Kant took pains to make clear that the university’s largest social function is sanctioned teaching.  He carefully reserved not only reason to a subsidiary element of the university, but truth itself.  Setting aside for the moment later criticisms of the confusions of truth and falsehood in philosophical truth—the sort of stories associated with many from Nietzsche to Derrida—dwell on the fact that Kant recognized that the Higher Faculties had no obligation to the truth or to judgment.  The state arrogated both and created the university to its ends.  “For the government,” Kant writes, “does not teach, but it commands those who, in accepting its offices, have contracted to teach what it wants (whether this be true or not).”[14]

Of course, Kant’s text arrogated to philosophy what he considered the highest qualities of mind and human life—reason, truth, and judgment.  He could do this, however, because the state had not (or had not yet) come to understand its own proprietary relation to science.  Moreover, it did not feel itself threatened by the independent production of secular and secularizing knowledge as long as it could control its distribution and effects.  Censorship along with tenured loyalty assured the containment of reason’s ability, in Arnold’s terms, to see things as they really are—even if such seeing produced Kant’s own critiques.  Either the university itself would be a cauldron of conflict—a Streit or struggle—contained by state power, or the state’s project for the university would prove impossible—if and when the conflict of faculties made the point about the lower faculties’, the secular faculties’, great power.  In essence, Kant’s book begins from the first and ends in the second alternative.  ‘Higher’ derives from power, against which the enlightened ideal of speaking critical truth in public cannot compete for distinction.  Famously, Kant opposes this power’s disciplinary command—Believe!—with the freedom to speak the truth—I believe!  It is an historical error in an as yet ignorant human nature that allows power to dominate truth.  The height of reason grovels within the powerlessness of irrationality and ignorance.  For Kant, explicitly, the battleground for freedom is the political sphere of what he calls “human nature.”[15]

Kantian humanism has had a difficult time in the last century.  Hannah Arendt traced the processes that made it tragically impossible to speak of the universal rights of man.  Intellectuals from Nietzsche to Lyotard and beyond have revealed the violence of its anthropological universality, even though various heroes of decolonization, such as Fanon, have often appealed to those very standards.

Writing in the occupied city of Naples, under the frightening eye of the Inquisition, Vico imagined a realm called secular human life, lived by those peoples not determined by the book.  He declared, in opposition to analysts of all kinds, that the only things humans can know and understand are those things humans have made—primarily themselves and their societies.  He followed on Dante, Scaliger, Machiavelli, and those other especially Italian intellectuals whose remaking of the ancients’ wisdom produced the world of modern humanity.  In Vico’s case, always and literally, the church and state in their combined violence, threatened mind by burning those who would speak or think heretically.

Secular human knowledge has in theory and in its practical initiation always been a form of conflict, of threatened existence, of dramatic tension with state and religion.  Each of these forms has its own discipline, of course, so that these conflicts are and always were conflicts of discipline with life and death stakes.  When the stakes are indifferent, mind is absent—whether God returns as recompense or not.  When Ian Hacking, for example, blithely congratulates himself for the freedom to carry his own discipline anywhere he wants by virtue of wit and curiosity, he recalls the evident root of his untroubled interdisciplinarity as an escape from discipleship.  Disciples can be secular or religious, as Aristotle to Plato or Xavier to Ignatius.  Hacking must not quite mean it when he describes his personal role model, Leibniz, as ‘predisciplinary man.’  Immediately, he tells us what he takes from Leibniz is curiosity and discipline!  Hacking is actually a little clearer on what he wants to say than this.  He means that Leibniz worked across an enormous range of topics that, today, we subsume under different disciplines.  In this simple sense, Leibniz comes before the Kantian division of labor.  But curiosity requires discipline to be productive and Hacking admits to learning it.  So what is the ideal?  It might be Mary Douglas who, Hacking writes, “applies her keen and totally unconventional mind and skills where she is interested.  I shall have to ask her next time I see her, does she think of herself as anything other than a (non-conformist) anthropologist of a particular kind, education and tradition?”  The Vichian and Kantian passionate struggles have become this ‘complacent’ liberalism.

Trilling’s Arnoldian conservatism regretted institutional changes that threatened not only critical dominance but also the academy’s very commitment to mind.  Trilling’s humanism worried over complacency as an intellectual transgression against the history of mind’s efforts.  At the center of Trilling’s commitment is a Kantian concern for critical reason’s relation to truth and so citizenship.  Danger lies in the social transformations of mind’s value into professional or social status:  “By an inevitable inference, the intellectual disciplines in which [professors] give instruction are to be regarded not as of intrinsic value, but, at best, as elements of a rite of social passage and, at worst, as devices of social exclusion.”[16]  Trilling knew his Kant and his contemporaries.  The state’s police authority became the social authority of passage and credentialing.  More important, the professors came to have no passion:  they all profess as if teaching in the higher faculties, without reason, judgment, or freedom.  Passionate politics had made the university into an agency of freedom for Trilling when the upper classes came to realize the power of free knowledge.  They

had somehow got hold of the idea that mind, not in one or another of its specific formal disciplines but in what any one discipline might imply of the essence of mind, was of consequence in statecraft and in the carrying on of national life.  What they would seem suddenly to have identified and wanted to capture for themselves was what nowadays we might call the mystique of mind--its energy, its intentionality, its impulse toward inclusiveness and completeness, its search for coherence with due regard for the integrity of the elements which it brings into relation with each other, its power of looking before and after.[17]

For those who understand, discipline implies the “mystique of mind.”  Disciplines are a capacity of mind’s quality and evidence of its desirability as the ethical, aesthetic possibility of human political life.  That Trilling sketches mind’s political character in terms critics recognize as poetic—intentionality of form; integrity of parts; placement in context; and visionary coherence—is hardly accidental.  Rather than dismiss Trilling as nostalgic or elitist, take seriously his metaphoric transference of the qualities of secular humanism—mind as historicity—from the literary to the political.  This Vichian moment explains why Edward Said, whose politics were overtly quite different from Trilling’s, would nonetheless approvingly cite Trilling as a predecessor.  Common ground is the intellectual critical commitment to the priority of human mind and life at work in purely historical secular terms building societies on the bases of knowledge, experience, judgment, and justice.

Reducing the allure of mind to a management problem, as US universities have done so much recently, corresponds not to the death of literature but to the death of criticism.  If the inter- space were transgressive of limits or creative beyond the boundaries, then it would attract us as mind did Vico, Trilling, Said, and others.  We would feel its allure as the guarantor of both seeing reality and passionately carrying out a criticism of life.  Pathetically, inter-disciplinarity formalizes a social disregard for mind essential to a polity that increasingly censors the chances to intend form as Trilling defines mind’s visionary energies.

Hannah Arendt is the most important thinker of how societies turn against human mind and its historicity, in part, of course, because of mind’s own actions.  The turn against mind, so to speak, is nothing but more human history.  Trilling could not quite think how mind lost its way and so in the best US tradition created a paranoid Manichaeism.  Arendt, however, makes it abundantly clear that American forms of mass democracy destroy Kant’s humanistic ideals more thoroughly than the worst forms of twentieth-century utopianism.  Arendt, in a manner typical of her generation, and like Trilling with his emphasis on mind’s intent, avoids the worst despairing conclusions by stressing that the human capacity to begin, to take a step, alone affords some potential solace that human history can survive the mind’s destitution.  This puts mind in its historical place, of course, but it offers little solace not because the capacity to begin seems so slim but because no beginning in itself offers any possible reason to believe that what might come is in any way better than what has been or now is.  Difference is not betterment.

Arendt’s deeper concern is that American power arrangements aspire to arrest the possibility of beginning.  Arendt concludes specifically that post-European global arrangements, dominated by the US and American forms, will prohibit all political being and organize loneliness and isolation in such a way as to make alternatives seem impossible if not undesirable.  Fact and illusion come under control in a finally managed end of history.  Fukuyama and his allies merely embody the end of Arendt’s tale.

Interdisciplinarity productively contains beginning within the managed administrative forms of the ahistorical.   I believe that Trilling used the term ‘mind’ rather than other terms such as reason because he both continued and modified the position most associated with the Frankfurt School.  He did not deny their kind of claim that reason had abused itself in becoming bureaucratic rationality; rather, he chose a more complexly encoded term than reason to define the critical historical complexity of human intention and formative intelligence.  If you will, reason as rationalization is a threat to mind, but Trilling does not categorize mind as a potential threat to itself.  Reason as rationalization is close to Arendt’s image of American forms, but those forms do not exhaust mind precisely because they threaten it.  They are not an expression of mind as bureaucracy is of reason.

In my retelling of this tale, America appears as hostile to mind—as its censor.  I think there is a clear reading of Trilling and Arendt that would make a convincing case for this notion.  If so, then interdisciplinarity is important as one censoring device, the chief of which must be those American revivals that exist, in fact, to arrest mind.

Kant’s text defines the lower faculty, which is interested in truth and judgment, as a secular faculty that might make even faith responsive to its questions.  Simultaneously, he allows the state the right to act as censor through its powerful management of populations via god, law, and the body.  He opts for a domain outside the censor, and he is not foolish enough to believe it might be the university itself.  Rather, as with Said, Kant asserts the right of the university to contain the utopian space of judgment and truth.  The censor surrounds it as a wasteland does hope.  Kant anticipated recent discussion of ‘bio-power’; his account underlies Foucault’s thinking on power, knowledge, and the body.  Kant, however, importantly begins his discussion with religion:

By public teaching about the first of these [eternal well-being], the government can exercise very great influence to uncover the inmost thoughts and guide the most secret intentions of its subjects.  By teachings regarding the second [civil well-being], it helps to keep their external conduct under the reins of public laws, and by its teachings regarding the third [bodily well-being], it makes sure that it will have a strong and numerous people to serve its purposes.[18]

In this practice, the state acts according to reason.  Trouble starts, as it were, when the censor extends its power into the domain of judgment and truth.  Knowing that truth and judgment have no place in the realms of god, law, and the body, Kant allows that the state’s schema is rational because its censorship paradoxically does not confuse itself with right judgment.

If human nature as secular historicism creates and accepts this politics, it does so because mind survives.  When power extends itself through the domain of dogma into the realm proper to mind, then the intellectual and mind come under threat; indeed, freedom itself comes under threat because, as Arendt concluded, the capacity to be human flickers.

Milton had already argued passionately but futilely that no rationally self-interested state or social order would be so ‘pusillanimous’ as to threaten “that freedom of writing should be restrained by a discipline imitated from the prelates, and learnt by them from the Inquisition.”[19]  Anticipating Kant, Milton warned, “A man may be a heretic in the truth,” if the state censor thinking and coerce assent from a man “without knowing other reason.”  The social result is disaster for humans live in a false relation to each other:  “Truth,” Milton writes, “is compared in scripture to a streaming fountain; if her waters flow not in a perpetual progression, they sicken into a muddy pool of conformity and tradition.”[20]  Of course, Milton warns against a certain sort of theocracy, a social arrangement in which precisely those activities of mind which Trilling says the English came to see as essential to statecraft—a society that would deny the humanistic conception of nature by arresting mind’s intentionality in turgidity.  What would dogma destroy?  “There is not aught more likely to be prohibited than truth itself.”  Those who hope to defend the Gospel “are found the persecutors.”  Milton’s god makes use of “men of rare abilities . . . in the discovery of truth,” which need not lie always in the rigidity of the present’s enshrined past.  Zealots do not see the value of human mind, even in god’s service, and cannot make needed distinctions, to “resolve to stop their mouths because we fear they come with new and dangerous opinions.”[21]

In Trilling’s sense, mind intends as for Arendt it begins.  For Milton, this is how the social universe so dependent on truth must be.  Milton, more radical than Kant, equates licensing books with constraining invention at the grace of god.  Only hypocrisy would bind books to good behavior.

Recall that the censor originally dealt with the population and that in Rome the censor conducted both the census and regulated morals.  In its Latin root, censere means, “to estimate, rate, assess, be of opinion” [OED].  Almost all the Higher Faculties’ functions reside in the term. 

Post-Foucauldian opinion, especially in the English-speaking world, leans toward eliding censere with ‘discipline.’  As we know from Milton, this is a traditional and well-founded claim, since Milton could identify censorship with being ‘bejesuited’[22]—and it is harder to imagine a more rigorous discipline than that of the Jesuits and their founder, Ignatius.  We should not follow this line too far, however, because there was no more critical mind than that of the classically trained Foucault, who began his education in what was formally a Jesuit college.  As David Macey says in The Lives of Michel Foucault, “for the adult Foucault, disciplined devotion to intellectual work was almost an ethic.”[23]

In Milton’s terms, intellectual discipline is not quite enough; to be properly rated as a truth-bearer, god must grace the talented to reveal it.  I ask again that we set aside momentarily ideological predispositions to notice that Milton recognizes the world as the place of historical human life, no matter god’s interventions.  For grace does nothing but align human capacity with the truth’s demands for recognition.  Secularized, this is not far either from Arnold’s view of criticism’s necessity or Trilling’s judgment of mind’s human essentiality.

Perhaps there is no fairer test of Milton than the life of Charles Darwin.  He had no disciplinary home, and yet had remarkable discipline; he destroyed the discipline of dogma, and so provoked the censorious fear of human mind at work.  There is no better tale of secular mind than Darwin’s well-known account of his discoveries.  I do not want to draw special attention to his growing into atheism because of science.  Rather I want to relate two points:  first, that Darwin’s developed habit of mind seems rather close to the general qualities of mind Trilling so admires; and second, that such mind leads toward the materialistic and human and inevitably away from not only god but the authority of censorship.  These two points taken together culminate in the avowed hostility to Darwin that characterizes contemporary American religious hostility to mind and they culminate my attempt to place the discussion of inter-disciplines within a more historical analysis of how we might preserve the university’s founding purpose as a venue for mind.

After recounting how boring and unrewarding he found medical and theological courses, Darwin laments not studying enough mathematics but praises certain friends and professors, especially at Cambridge, who accommodate and stir his thinking.  Not until the voyage on the Beagle, though, does Darwin acquire what Milton might call the grace needed to make use of his exceptional talents:

I have always felt that I owe to the voyage the first real training or education of my mind.  I was led to attend closely to several branches of natural history, and thus my powers of observation were improved, although they were already fairly developed.  The investigation of the geology of all the places visited was far more important, as reasoning here comes into play . . . .  [added to observation] always reasoning and predicting what will be found elsewhere, light soon begins to dawn on the district, and the structure of the whole becomes more or less intelligible.[24]

It takes little explication to see the aesthetic nature of this training and its resultant apprehensions.  While the older Darwin lost his taste for art, he retained it for landscape.  Most important, however, he refused to assent to any idea that the aesthetic emotions themselves might account for the existence of an extra-human intelligence as it impinges on the affects:

The state of mind which grand scenes formerly excited in me, and which was intimately connected with a belief in God, did not essentially differ from that which is often called the sense of sublimity; and however difficult it may be to explain the genesis of this sense, it can hardly be advanced as an argument for the existence of God, any more than the powerful though vague and similar feelings excited by music.[25]

Fundamentalists can respond to this apprehension in various ways.  The most evidently censorious might be the most important in public life—that is, biblical literalists and others who demand that we acknowledge the legitimacy of their private fantasies—but for the intellectual and academic world, the greatest danger lies in the censorship that replaces ‘difficulty’ with final settlements.  What are these?  Final settlements are frames of interpretive reference that seem to the administrative mind to account for all possible events.  While they do not imagine themselves as master narratives, they do present themselves as the inevitable key to all elaborations, which they variously see as after-effects of the fundamentals they alone envision.

‘Difficulty,’ which here I use as another metonym in the long chain I have stretched over our topic, is the life of the mind.  It is not explicable ab ovo and certainly not ever merely recognized.  It is history, and it is history as people live it, not simply as those great minds Milton admires.  Recall that Milton’s Puritanism might play a role in his delimiting grace to the few.  On that topic, great American writers have sometimes, although in the minority, offered different opinions.  Darwin thought of our topic:  difficulty is nothing less than the conflict of faculties, in this case, what we might call the aesthetic and what we might call the humility of intellectual discipline.  ‘Dif-‘ is ‘dis-‘ which is the Greek that means two.  Some this complexity ‘disses’ our faculties—but that is a good thing.  We must ‘dis’ ourselves.  How else will we learn the humility of living minds?





[1] This is what Heidegger refers to as “The Age of the World Picture.”

[2] (Oddly enough, though, ‘interdisciplinarity’ is absent from the online edition of The American Heritage dictionary.)

[3] Cf. Ian Hacking, “The Complacent Disciplinarian,” at the following URL: as of April 1, 2005.  This is a site sponsored by Science Po.

[4] Fukuyama, Francis.  The End of History and The Last Man. New York: Maxwell Macmillan, 1992.

[5] Francis Fukuyama, Our Posthuman Future:  Consequences of the Biotechnology Revolution (New York:  Farrar, Straus and Giroux, 2002).

[6] Lionel Trilling, Mind in the Modern WorldThe 1972 Jefferson Lecture in the Humanities (New York:  Viking Press, 1972).

[7] Daniel T. O’Hara, Lionel Trilling:  The Work of Liberation (Madison:  the University of Wisconsin Press, 1988), pp. 57 – 59.

[8] Trilling, pp. 5 – 6.

[9] Trilling, p. 17.

[10] See, for example, Henry Adams’s treatment of Abelard in Mount Saint Michel and Chartres as well as Paul A. Bové, "Abandoning Knowledge: Disciplines, Discourse and Dogma - Henry Adams's Mont Saint Michel and Chartres," New Literary History 25 (Summer 1994): 601 - 620. 

[11] Immanuel Kant, The Conflict of the Faculties, trans. & introduced by Mary J. Gregor (Lincoln, Nebraska:  University of Nebraska Press, 1992.

[12] Kant, p. 23.

[13] Kant, p. 27.

[14] Kant, p. 27.

[15] Kant, p. 29.

[16] Trilling, p. 25.

[17] Trilling, p. 38 - 39.

[18] Kant, pp. 31 – 33.

[19] John Milton, “Areopagitica,” John Milton:  Complete Poems and Major Prose, ed. Merritt Hughes (New York:  The Odyssey Press, 1957), p. 739.

[20] Milton, p. 739.

[21] Milton, p. 748.

[22] Milton, 748.

[23] David Macey, The Lives of Michel Foucault (New York:  Pantheon Books, 1993), p. 8.

[24] Charles Darwin, The Autobiography of Charles Darwin 1809 - 1882.  Ed. Nora Barlow (New York:  Norton, 1969; reprint of Harcourt edition of 1958), p. 77.

[25] Darwin, Autobiography, pp.  91 – 92.