Of Morality and Failed Business Strategies…

Anti-Statism, Business, Drug Policy, History, Libertarian Theory, Nanny Statism, Police Statism

Some time ago, back in 2013 in fact, Richard Branson published a piece on LinkedIn under the heading of “Big Idea 2013: This Year the Drug War Ends,” wherein he posited, among other things, that if the War on (Some) Drugs were a business strategy, it would long ago have been scrapped. He’s absolutely correct. And he’s also absolutely incorrect.

The War on (Some) Drugs is not a failed business strategy, and it is dangerous to even suggest that it is. Instead, it is a failed moral strategy. If it seems counter-intuitive to you that the government should be in the business of applying moral strategies, you win a prize. The control of what enters one’s body is, at root, the very basis of self-ownership. (Admittedly, the phrase “self-ownership” doesn’t carry quite the right nuance. I don’t “own” me, I “am” me, but anyway…)

The apparent failure of the War on (Some) Drugs speaks just as much to its actual goals as to its legitimate chances for success. In other words, if the goal was to criminalize large portions of an entire generation, then it has been a raging success. However, if the goal was to prevent people from freely consuming that which they know is their right anyway, it had no hope of success in the first place, and that lesson was obvious from alcohol prohibition.

On the more general issue of business strategies, why is it dangerous to draw such a parallel to the War on (Some) Drugs? Such a suggestion–that because the War on (Some) Drugs is failing, we should stop it–is a trap. It is a great example of the argument from effect, a veritable fat, shiny red herring waiting for the obvious “well, people still murder each other…” retort. Let us be clear: murdering someone is an attack on them, which is morally prohibited–dare I say malum in se–anyway. Me putting a substance that you don’t like into my body has nothing to do with you.

Drug prohibition is unarguably malum prohibitum and therefore simply the attempt–misguided and puritanical–to impose the choices of some on the behavior of all. Ergo, it was destined for failure. By the way, this in no way suggests that drugs are good, but then again, neither are Twinkies. Now, if one wants to argue about the possible negative results of drug usage–crime, sickness, whatever–those ostensibly resultant actions, at least those that actually infringe on others, are ALREADY against the law. They are, in fact, malum in se regardless.

If you’re in your own home getting baked or shooting up, and don’t bother anyone else, it should be no one else’s business. I might also argue that most, if not all, of the crime supposedly endemic to illegal drugs occurs in the distribution of said substances, precisely because that distribution is forced outside the law. Make it legal on one day and that crime stops the next day. And, if the lessons of places like Portugal are any indication, with very little, if any, increase in widespread drug usage.


Was Robin Hood a Marxist?

History, Libertarian Theory, Pop Culture

Simon Schama could use a dose of classical-liberal theory. Most of us can be forgiven for knowing Marxist theory better than the liberal tradition — it’s hard not to drink Marxism in with our schooling and culture — but popular historical narrative really does suffer by the omission of the "bourgeois historians" whom Marx himself credits as the precursors of his class theory.

In the BBC TV series A History of Britain, Schama asks about the English Peasants’ Revolt of 1381, "Was this a class war, then?" (A term, he explains parenthetically, that "we’re not supposed to use since the official burial of Marxism.") A pause, while the camera angle changes to a closeup. "Yes," he says plainly. "It was."

"Not surprisingly," writes Schama in the print version of A History of Britain, "it was in the second half of the fourteenth century that the legends of Robin Hood … first became genuinely popular."

But as I write in "Class War in the Time of Robin Hood" in today’s Freeman, Schama is appealing to the wrong class theory if he wants to explain the mindset of the commoners marching on London in the 14th century.

I’m far from the first to offer a libertarian revision of Robin Hood’s politics, but where I focus on the ideology of his earliest historical audience, most other treatments focus on the particulars of the legend.

Some examples:

On the other hand, Ayn Rand seems to have been happy to leave Robin Hood to the socialists:

"It is said," Rand has Ragnar Danneskjöld concede in Atlas Shrugged, that Robin Hood "fought against the looting rulers and returned the loot to those who had been robbed, but that is not the meaning of the legend which has survived.

What do you think: is Robin Hood worth claiming for our tradition?


When Evil Institutions Do Good Things: The FCC’s PTAR Law

Corporatism, History, Pop Culture

In my Freeman article "TV’s Third Golden Age," the summary subtitle that the magazine chose was "Programming quality is inversely proportional to regulatory meddling." I couldn’t have said it better. But does that mean that everything the FCC does makes television worse?

All laws and regulations have unforeseen consequences. That usually means unintended damage, but there’s no law of history that says every unplanned outcome is pernicious.

If you’re an advocate of a free society — one in which all arrangements are voluntary and there is the least coercive interference from governments or other thugs — history will present you with an unending series of conundrums. Whom do you side with in the Protestant Reformation, for example? The Catholic Church banned books and tortured scholars, and its official structure is one of hierarchy and authority. Easy enemy, right? Clear-cut bad guy. But the Church had kept the State in check for centuries — and vice versa, permitting seeds of freedom to root and flourish in the gaps between power centers. The Protestant states, meanwhile, tended to be more authoritarian than the Catholic ones, with Luther and Calvin (not to mention the Anglicans) advocating orthodoxy through force. There’s a reason all those Northern princes embraced the Reformation: they wanted a cozier partnership of church and state.

This is certainly not the history I was taught in my Protestant private schools.

Similarly, most of us were schooled to side with the Union in the Civil War, to see Lincoln as a savior and the Confederacy as pure evil. But as much as the war may have resulted, however accidentally, in emancipating slaves, it also obliterated civil liberties, centralized power, strengthened central banking and fiat currencies and — to borrow from Jeffrey Rogers Hummel’s great book title — enslaved free men.

"Father Abraham," as the pietists called him after his assassination, was a tyrant whose primary goal was always what he actually achieved: central power over an involuntary union. Recasting this guy as an abolitionist hero is one of the many perverse legacies of America’s official history. But it’s a mistake to simply reverse the Establishment’s verdict and claim that the Confederacy was heroic. Plenty of Johnny Rebs were fighting a righteous battle against what they rightly deemed to be foreign invaders, but even if you ignore the little problem of the South’s "peculiar institution," the Confederate government was no more liberal than its Northern rival. "While the Civil War saw the triumph in the North of Republican neo-mercantilism,” writes Hummel, “it saw the emergence in the South of full-blown State socialism.”

Reading history without taking sides may fit some scholarly ideal (actually, it seems to be a journalistic ideal created by the Progressive Movement to pass its views off as the only unbiased ones), but it is not a realistic option. We cannot do value-free history. If we try, we instead hide or repress our biases, which makes them a greater threat to intellectual integrity.

Neither can we say, "a plague on both their houses," and retreat to the realm of pure theory, libertarian or otherwise. We have to live in the real world, and even if we are not activists or revolutionaries, the same intellectual integrity that must reject "neutrality" also requires that we occasionally explore the question of second-best or least-evil options.

I remember several years ago, when my very libertarian boss surprised me by speaking in favor of increased regulation of banking. His point was that the banks were not free-market institutions; they were government-created cartels enjoying a political privilege that protected them from the consequences of the market while they surreptitiously depleted our property and spoiled the price system that drives all progress in the material world. Ideally, he’d want the government out of banking altogether, but in the meantime having them do less damage was better than letting them do more.

It may seem anticlimactic to follow the Reformation, Civil War, and fractional-reserve banking with a little-known FCC rule about TV programming from almost half a century ago, but I’ve been reading television history for a while now (1, 2, 3, 4) as illustrative of larger patterns in political history.

The Prime Time Access Rule (PTAR) was a law instituted in 1970 to limit the amount of network programming allowed during TV’s most-watched evening hours.

According to industry analyst Les Brown, the PTAR was adopted

to break the network monopoly over prime time, to open a new market for independent producers who complained of being at the mercy of three customers, to stimulate the creation of new program forms, and to give the stations the opportunity to do their most significant local programming in the choicest viewing hours. (Les Brown’s Encyclopedia of Television)

If you still accept the official myth that the airwaves are "That most public of possessions given into the trust of the networks," as Harlan Ellison describes them in The Glass Teat, and that the federal government’s job is to manage the radio spectrum in the best interests of that public, then I’m sure you don’t see any problem with PTAR. (You can read my paper "Radio Free Rothbard" [HTML, PDF] for a debunking of this official piety.)

But a libertarian could easily jerk his or her knee in the opposite direction. How dare the central government tell private station owners what they can and can’t air on their own stations, right?

The problem with such an ahistorical take on the issue is that broadcast television was a creature of the state from the beginning. Radio may have had a nascent free-market stage in its development, but television was a state-managed cartel from the word go.

So am I saying that PTAR was a good thing? Is it like the possibly beneficial banking regulations imposed on a cartelized banking system? Should we view CBS versus FCC as the same sort of balance-of-power game that Church and State played before the early modern period of European history?

Maybe, but that’s not why I find PTAR an interesting case for the liberty-minded historian. As is so often the case with laws and regulations, PTAR’s main legacy is in its unintended consequences.

"Despite the best of intentions," writes historian Gary Edgerton in The Columbia History of American Television, "the PTAR failed in almost every respect when it was implemented in the fall of 1971."

[P]ractically no local productions or any programming innovations whatsoever were inspired by the PTAR. In addition, any increase in independently produced programming was mainly restricted to the reworking of previously canceled network series, such as Edward Gaylord’s Hee Haw and Lawrence Welk’s The Lawrence Welk Show.… Rather than locally produced programming, these kinds of first-run syndicated shows dominated the 7 to 8 P.M. time slot.

This renaissance of recently purged rural programming was certainly not the FCC’s goal, but the creation of the first-run-syndication model is one of the great unsung events in media history.

A quick note on terminology: to the extent that I knew the word "syndication" at all when I was growing up, I took it to be a fancy way of saying "reruns." For example, Paramount, the studio that bought the rights to Star Trek after the series was cancelled, sold the right to rerun the program directly to individual TV stations. When a local TV station buys a program directly from the studio instead of through the network system, that’s called syndication. But syndication isn’t limited to reruns. Studios created first-run TV programs for direct sale to local stations as far back as the 1950s, but they were the exception. The dominant syndication model was and is reruns. But two events created a surge of first-run syndication: (1) PTAR, and (2) the rural purge I obliquely alluded to above.

I write about the rural purge here, but I’ll summarize: as the 1960s turned into the 1970s, television network executives did an about-face on their entire approach to programming. In the 1960s, each network tried to win the largest possible viewership by avoiding controversy and appealing to the lowest common denominator in public tastes. This meant ignoring the rift between races, between generations, and between urban and rural sensibilities — what we now call red-state and blue-state values — in the ongoing culture wars. This approach was dubbed LOP (Least Objectionable Program) theory.

Basically, this theory posits that viewers watch TV no matter what, usually choosing the least objectionable show available to them. Furthermore, it assumes a limited number of programming choices for audiences to pick from and implies that networks, advertising agencies, and sponsors care little about quality when producing and distributing shows. (Gary Edgerton, The Columbia History of American Television)

By the end of the decade, however, NBC vice president Paul Klein (who had christened LOP theory just as its tenure was coming to an end) convinced advertisers that they should stop caring so much about total viewership and focus instead on demographics, specifically the Baby Boomers — young, politically radicalized, and increasingly urban TV viewers — who were most likely to spend the most money on the most products. CBS was winning the battle for ratings, but Klein pointed out that its audience was made up of old folks and hicks, whereas NBC was capturing the viewership of the up-and-comers.

Klein may have worked for NBC, but it was CBS who took his message to heart, quite dramatically. In 1970, the network rocked the TV world by cancelling its most reliably popular shows: Petticoat Junction, Green Acres, The Beverly Hillbillies, Mayberry RFD, Hee Haw, Lassie, and The Lawrence Welk Show.

In Television’s Second Gold Age, communications professor Robert J. Thompson writes,

CBS, in an effort to appeal to a younger audience made socially conscious by the turbulent 1960s, had dumped its hit rural comedies in the first years of the 1970s while their aging audiences were still placing them in Nielsen’s top twenty-five. Critics, who for the most part had loathed the likes of Petticoat Junction and Gomer Pyle, loved some of what replaced them.

I loved what replaced them, too: Mary Tyler Moore, All in the Family, M*A*S*H, and the like. "Several members of Congress," Wikipedia informs us, "expressed displeasure at some of the replacement shows, many of which … were not particularly family-friendly." But that was the point: the networks were no longer aiming to please the whole family, just the most reliable consumers.

But despite capitalism’s cartoonish reputation for catering only to the bloated hump of the bell curve, that’s not how the market really works. It is how a cartel works, and the broadcast networks behaved accordingly, both before and after the rural purge. In the 1950s and ’60s, they aimed for the largest possible viewership and to hell with minorities of any sort. The demographic revolution changed the target, but not the tactic: aim for the big soft mass. That’s certainly how the big players would behave in a free market, too, but the telltale sign of freedom in the economy is that the big players aren’t the only players. Fortunes are made in niche markets, too, so long as there aren’t barriers to entering those niches. As I’ve said, TV is descended from radio, and Hoover and his corporatist cronies had arranged it so that there could only be a few big players.

That’s where we come back to the FCC’s Prime Time Access Rule of 1970. PTAR created a hole at the fringe of the prime-time schedule, just as the rural purge was creating a hole in the market. All those fans of Hee Haw and Lawrence Welk didn’t just go away, and they didn’t stop spending their money on advertised products, either. Before PTAR, the multitude of fans of "rural" programming would have had to settle for mid-afternoon reruns of their favorite shows (the way Star Trek fans haunted its late-night reruns around this same time). But the rural fans didn’t have to settle for reruns, and they didn’t have to settle for mid-afternoons or late nights. They could watch new episodes of Hee Haw or Lawrence Welk at 7 PM. In fact, those two shows continued to produce new episodes, and the local stations, which were no longer allowed to buy from the networks for the early evening hours, bought first-run syndicated shows instead. The Lawrence Welk Show, which had started in the early 1950s, continued for another decade, until Welk retired in the early ’80s. And the repeats continue to run on PBS today. Hee Haw, believe it or not, continued to produce original shows for syndication until 1992.

I loved Mary Tyler Moore, and I didn’t care so much for Lawrence Welk, but what I really love is peaceful diversity, which cannot exist in a winner-takes-all competition. The rise of first-run syndication was a profound crack in the winner-takes-all edifice of network programming.

The strategy CBS, NBC, and ABC had gravitated toward for short-term success — namely, targeting specific demographics with their programming — also sowed the seeds of change where the TV industry as a whole would eventually move well beyond its mass market model. Over the next decade, a whole host of technological, industrial, and programming innovations would usher in an era predicated on an entirely new niche-market philosophy that essentially turned the vast majority of broadcasters into narrowcasters. (Gary Edgerton, The Columbia History of American Television)

This idea of "narrowcasting" is the basis of quality in entertainment (and freedom in political economy, but that’s another story).

I’m not out to sing the praises of the FCC for increasing economic competition and cultural diversity — these consequences were entirely unintended — but we do have to recognize PTAR as a pebble in Goliath’s sandal, distracting him for a moment from David’s sling.


privcheck

Articles, History, Political Correctness, The Left

Check Your Privilege

In a recent Freeman article, “Check Your Context,” columnist Sarah Skwire drew my attention to a popular meme on the political left, both online and off: “Check your privilege.”

At its gentlest, this is advice to raise our awareness of those aspects of our personal histories that may lead to complacent assumptions about how the world works, assumptions that may limit the scope of our moral imaginations.

When it is less gentle (which is often), it is a dismissal of the opinions of anyone who is insufficiently poor, or, more likely, insufficiently left-wing. [Read the rest of the article.]



Yes, We Have No Bananas

History, Political Correctness, Science, The Left

In a recent post on my personal blog (“Is mediocrity intelligent?”), I talked about the importance of a diversity of strategies — even apparently “wrong” ones — to the long-term survival of a species. The corollary, of course, is that overinvestment in any single strategy can be catastrophic.
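To make that corollary concrete, here is a minimal Monte Carlo sketch (my own illustration, not anything from the original post). It assumes a population spread across some number of strategies and a series of random “blight” events, each of which wipes out one strategy; the population survives if any strategy remains in use afterward. A monoculture dies the first time a blight lands on its lone strategy, while diversified populations usually keep something alive.

```python
import random

def survival_rate(num_strategies: int, blights: int = 3, trials: int = 10_000) -> float:
    """Estimate the fraction of trials in which a population survives
    a series of random blights. Each blight wipes out one randomly
    chosen strategy; the population survives a trial if at least one
    strategy is still in use after all blights have struck."""
    survived = 0
    for _ in range(trials):
        in_use = set(range(num_strategies))
        for _ in range(blights):
            # A blight strikes one strategy at random, whether or not
            # anyone is still using it.
            in_use.discard(random.randrange(num_strategies))
        if in_use:
            survived += 1
    return survived / trials

# A monoculture (one strategy) never survives; diversification helps.
for k in (1, 2, 5, 10):
    print(f"{k:>2} strategies: {survival_rate(k):6.1%} survival")
```

In these terms, Cavendish-only banana production is the one-strategy row of the table.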

We see this issue at play in modern agribusiness.

As Popular Science informs us,

The 1923 musical hit “Yes! We Have No Bananas” is said to have been written after songwriters Frank Silver and Irving Cohn were denied in an attempt to purchase their favorite fruit by a syntactically colorful, out-of-stock neighborhood grocer.

It seems that an early infestation of Panama disease was already causing shortages in 1923. But the out-of-stock bananas in question were not the Cavendish variety we all eat today; they were Gros Michel (“Big Mike”) bananas, and they were all that American banana lovers ate until the 1950s, when the disease finally finished them off.

I would love to know what a Gros Michel banana tastes like. I’m a big fan of bananas and eat them every day. (Actually, I drink them, blended into smoothies.) But the reason I only know the taste of Cavendish — and the reason you do too, unless you’re old enough to have had some Gros Michel mixed into your pablum — is that Cavendish bananas are resistant to the strain of disease that wiped out our original bananas. We have to assume that the Plan B bananas we now enjoy are only second best as far as flavor goes. They may not even be first best at survival, because the banana industry is searching for a Plan C banana to take the place of the Cavendish once the inevitable crop disease sends it the way of the Gros Michel — something that they predict will happen in the next decade or two. (See Banana: The Fate of the Fruit That Changed the World by Dan Koeppel.)

Why are bananas so vulnerable to these blights? Why aren’t agricultural scientists worried about our other favorite fruits — apples, for example?

