
Historical Background

Background on CyberWork

The First Industrial Revolution

In the middle of the eighteenth century, around the time the Thirteen American Colonies rebelled against England, an unexpected wellspring of inventiveness bubbled up in the British Isles. For the first time in history, average working people began inventing tools of all kinds to make products by machine instead of by hand. Two critical developments led the way: a steam engine to power large machines, and spinning devices that proved mechanical contraptions could spin and weave faster and better than humans. This First Industrial Revolution transformed the daily lives of common people. Although it had no precise start or stop date, scholars generally describe the First Industrial Revolution as spanning the decades between 1760 and 1850.

James Watt invented the first efficient, multi-use steam engine. Born in Scotland to parents of average means, he made a living repairing instruments for the University of Glasgow. This sparked his curiosity about how to make instruments function better. In 1763, the university asked him to repair a crude steam-driven water pump. Intensely curious about how steam might power all kinds of machines, Watt devised an engine that not only pumped water far more efficiently, but could also be harnessed to other mechanical devices, from printing presses to passenger ships. For example, rather than horses pulling wooden carriages, steam engines allowed locomotives to pull iron “carriages.” Watt coined the term “horsepower” to measure exactly how much power each engine generated. In 1776, he received a patent from the British government to protect his exclusive right to sell these unique machines. Because Watt’s engine powered other large mechanical inventions, it became the preeminent symbol and most widely applied tool of the First Industrial Revolution.

James Hargreaves helped transform the textile industry around the same time. An uneducated spinner with thirteen children to support, the Englishman conceived a better way to make thread out of raw cotton when a spinning wheel overturned and he noticed that its spindle rotated more freely if positioned horizontally rather than vertically. In 1764, he began building what he called “spinning jennies” with cranks to turn multiple spindles, all producing cotton thread simultaneously. (“Jenny” was a slang term for machine.) Hargreaves made the contraption for his own use, but in 1770 took out a patent to sell it commercially.

Hargreaves’s invention inspired tinkerers to mechanize still other parts of the textile industry, including weaving. Their inventions put spinners and hand weavers out of business. James Hargreaves became so unpopular that an unemployed mob destroyed his first “jennies” and ran him out of town. Elsewhere, unemployed workers called themselves “Luddites” in honor of a mythical King Ludd who lived in Sherwood Forest (à la Robin Hood). They swore to crush the devices that had replaced them, and in the process became the enduring symbol of opposition to new technology.

Although no single invention started the First Industrial Revolution, the steam engine and spinning jenny sparked a wave of invention that rapidly transformed the British economy. Traditional centers of cotton textile production in India and China declined in the face of competition from British machine-made fabric. In mere decades, mechanized weaving replaced ancient textile workshops and left millions in Britain and elsewhere searching for new footholds in the economy.

In the United States at this time, cotton was a principal crop on Southern plantations manned by enslaved workers. To keep up with British demand, New England inventor Eli Whitney developed the cotton “gin,” a machine that processed raw cotton for spinning far more quickly than people. (Here, too, “gin” was slang derived from the word “engine.”) Whitney’s invention spurred a further expansion of the British textile industry and incentivized American planters to plant more acres in cotton and extend slavery to the farthest horizons. Soon, inexpensive but finely woven fabrics made clothing more affordable and plentiful for everyone. In America, the expansion of cotton also provoked a Civil War over the rights of the states to enslave people.

The new employment opportunities of the First Industrial Revolution attracted huge numbers of migrants from rural communities and even foreign countries to industrializing cities. Machine manufacturing, food processing, and iron smelting forged giant metropolises like London, Chicago, and New York, and urban economies that grew along with them.

Factory life was difficult, however. Laborers worked long hours for minimal pay, often in unsafe, crippling conditions. Even children worked in factories, putting in long shifts alongside adults. Writers like Charles Dickens publicized the harsh conditions of the early industrial era in novels such as Oliver Twist (1837) and David Copperfield (1849). As it did for enslaved Americans, industrialization came at a high price. Yet for the first time in human history, population boomed, life expectancy improved overall, and per capita income edged upward as people earned wages and as mechanization helped farmers grow more food. Immigration and advancing technology stimulated westward expansion. Steam-powered railroads spanned the continent and became a quintessential example of the First Industrial Revolution in America.

The Second Industrial Revolution

Historians roughly date the Second Industrial Revolution from 1870 to 1914, the year the First World War began. The Second Industrial Revolution continued the inventive process with new sources of power. Lightweight engines powered by oil, natural gas, and electricity replaced bulky steam engines. These smaller, more efficient engines had a variety of transportation applications, including automobiles, buses, and airplanes. Communications advanced rapidly, too. Telephones, telegraphs, and radios put millions of people in touch with one another. Everything seemed to speed up. Even plants grew faster with the invention of new chemical fertilizers.

What defined the Second Industrial Revolution most prominently, however, was electricity. In the big cities of the First Industrial Revolution, people had only gas lamps or kerosene lanterns to light their homes, stores, streets, and factories. Many activities had to be done during daylight hours, which limited the length of the workday and also the capacity of the economy.

With the advent of electricity, new products multiplied quickly and could be produced 24 hours a day. Great Britain installed the first public power station in 1881. Thomas Edison patented a commercially viable light bulb in 1879. It was the first bulb cheap and safe enough for home use. Electric streetcars replaced horse-drawn carriages in the major cities of Europe. By 1910, municipalities could power a residential neighborhood from a single power station. Cheap electricity and tiny engine components made other machines feasible, such as washers and dryers, vacuum cleaners, sewing machines, and refrigerators. These inventions reduced the labor involved in running a home and gradually paved the way for women to enter the workforce while still raising children. This increased overall output.

Other innovations of the Second Industrial Revolution were organizational rather than scientific. American carmaker Henry Ford was first to apply the principle of assembly-line production. Workers remained rooted in one spot while components of a new automobile passed by them on an electric conveyor belt. Workers simply performed their one task (welding or assembling) before the conveyor belt passed the new car to yet another worker for the next task. Based on “Scientific Management,” such arrangements led to new fields such as industrial engineering. As factories and businesses grew ever larger, cities did so along with them. Over time, water supply networks, sewage systems, communication systems, public schools, electrical power lines, and modern transportation networks created what’s called an industrial infrastructure.

The Second Industrial Revolution saw the greatest increase in economic growth in the shortest period of time in human history. Living standards improved as the prices of manufactured goods fell dramatically. With continued prosperity came continued population growth. By 1880, the U.S. population had grown so greatly that it took more than seven years to tabulate the U.S. Census results for that year. The government announced a contest for a better way to count data. The winner, a New Yorker named Herman Hollerith, invented a machine that took all the data from the 1880 census and tabulated it in 5.5 hours. The next census, in 1890, became the first “electric census,” using Hollerith’s punch-card based machines. Powered by electricity and informed by scientific principles, these great tabulators took up entire rooms. They paved the way for the next major human innovation and ushered in the modern world.

The Third Industrial Revolution

The Third Industrial Revolution, sometimes called the Digital Revolution, began sometime between 1950 and 1960 with the development of the mainframe computer. Called “main frames” after the large cabinets that housed their central processing units, early computers were used primarily by governments and large corporations to store data and make statistical calculations. Computer technology evolved rapidly, however, from semiconductors and mainframe computing to the first microprocessors in 1971 and the birth of personal computers around 1975. Mainframes that stored information for internal use were eventually replaced by networks of “servers” that store and process information around the planet for millions of miniaturized computers (desktops, laptops, and now phones), all of which are far more powerful than the biggest mainframes of the 1960s and 1970s. The memory and processing “chips” that computers use to analyze and store data increased continuously in capacity. Scientists call this steady improvement “Moore’s Law.” In 1965, Gordon Moore, a scientist and co-founder of the company Intel, predicted that the number of circuits that manufacturers could fit onto a single silicon wafer (a “chip”) would double every two years while falling in price. His estimate proved true.
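To make the arithmetic behind Moore’s prediction concrete, here is a minimal sketch in Python (an illustration written for this guide, not material from the film). The starting figure of roughly 2,300 transistors refers to Intel’s first microprocessor of 1971 and is used only as a baseline for the calculation.

```python
# Illustrative arithmetic for Moore's Law: a quantity that doubles every
# two years grows by a factor of 2 ** (years / 2).
def project_transistors(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count after `years`, doubling every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Roughly 2,300 transistors in 1971 (Intel's first microprocessor), projected
# fifty years forward, lands in the tens of billions -- the scale of modern chips.
print(f"{project_transistors(2_300, 50):,.0f}")   # about 77 billion
```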

By the end of the 1980s the basic elements of the modern digital age were in place. Consumer goods like televisions, which used to be tuned in by antenna (analog), were reengineered and linked to the Internet in ways that allow us to stream movies today. Automobiles, telephones, clocks, and small appliances of all types were similarly reengineered to be more powerful, useful, and efficient. The move from analog to digital technologies in the Third Industrial Revolution revolutionized the communications, transportation, and energy industries. It further globalized supply and production chains by allowing humans to coordinate their activities more rapidly and efficiently. Computers facilitated a process known as “outsourcing.” For example, instead of all the parts of a car being made in one place, components might be manufactured and partially assembled in multiple locations across the planet.

Artificial Intelligence and the Fourth Industrial Revolution

Mathematician John McCarthy, the son of working-class immigrants, coined the term “Artificial Intelligence” (AI) in 1955 to describe the potential for computers to be programmed to “think” about information and respond flexibly to human commands. Previously, every process executed by a computer had to be preplanned. Traditional computer programs were like recipes that instructed the machine exactly what to do at each step. If the machine encountered a situation not anticipated by the programmer, the computer stopped functioning or “crashed.” This was known as a “bug” in the program. All the work a person had put into the process might be wiped from the computer’s memory.
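As a simple illustration of this “recipe” style of programming (a hypothetical Python sketch written for this guide, not code from the film), the program below handles only the cases its programmer anticipated and halts with an error on anything else.

```python
# A traditional program is a fixed recipe: every situation must be anticipated
# in advance by the programmer. The commands below are invented for illustration.
def route_command(command: str) -> str:
    if command == "print report":
        return "sending the report to the printer"
    elif command == "save file":
        return "writing the file to disk"
    else:
        # An unanticipated situation: the program cannot improvise, so it
        # stops with an error -- the kind of failure users experience as a crash.
        raise ValueError(f"unknown command: {command!r}")

print(route_command("save file"))      # works: the programmer planned for this
print(route_command("email report"))   # fails: nobody wrote a step for this case
```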

McCarthy’s vision wasn’t realized for four decades, however. Scientists were stymied by numerous difficulties in achieving machine learning, not the least of which were limited data storage capacities and slow processing speeds. Not until 1997, when IBM’s “Deep Blue” computer defeated chess champion Garry Kasparov, was Artificial Intelligence shown to work.

AI is the next generation of computer software that uses a new type of algorithm called a “neural network.” Instead of following a series of steps written by a programmer, a neural network uses massive amounts of data to teach itself the best solution to a problem. A human defines the goal, such as, “Drive this car quickly and safely to the closest store that sells organic eggs.” The neural network accesses all the available data on the Internet about the car, closest store, fastest route, nearby merchants that carry organic foods, safe driving techniques, recent traffic jams, and so on. It then analyzes that information to find the best way to achieve the goal. The human doesn’t know exactly how the software arrives at the right answer. If the software makes a mistake, the human provides feedback such as, “Those eggs weren’t fresh.” The neural network updates itself and informs other neural networks to watch for “sell by” dates. The human doesn’t have to “debug” the program. It debugs itself based on feedback from the person.
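The toy Python sketch below (invented for this guide; it is not the software described in the film) shows the flavor of learning from feedback: a single artificial “neuron” adjusts its own numbers whenever its answer is marked wrong, instead of following rules a programmer wrote for each case. The “sell by” data and labels are made up for illustration.

```python
# Each example pairs a feature (days left before the "sell by" date) with the
# human feedback label: 1 means the eggs were fresh, 0 means they were not.
examples = [(10, 1), (7, 1), (4, 1), (2, 0), (1, 0), (0, 0)]

weight, bias, learning_rate = 0.0, 0.0, 0.1

for _ in range(100):                       # keep revisiting the examples
    for days_left, was_fresh in examples:
        prediction = 1 if weight * days_left + bias > 0 else 0
        error = was_fresh - prediction     # the feedback: "right" or "wrong"
        weight += learning_rate * error * days_left   # self-adjustment, no re-programming
        bias += learning_rate * error

# After enough rounds of feedback, the model reproduces the human judgments on its own.
print([1 if weight * d + bias > 0 else 0 for d, _ in examples])   # [1, 1, 1, 0, 0, 0]
```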

AI and neural networks are now used in many industries. One example is aviation. Human pilots supervise commercial aircraft and sometimes override computers if they appear to be malfunctioning, but pilots no longer do most of the work. Modern aircraft are generally flown by a computer autopilot that tracks the plane’s position using motion sensors and dead reckoning, corrected as necessary by GPS (the Global Positioning System). Software systems land most commercial aircraft. In a 2015 survey, Boeing pilots reported spending an average of only seven minutes physically manipulating their controls during a typical flight. Airbus pilots, whose planes are even more fully automated, spent half that time.

Like steam, electricity, and computers in previous generations, Artificial Intelligence will have countless applications, many of which have not yet been invented. Incorporated into robots, AI will save humans enormous amounts of time. A common example today is a vacuum robot. Using sensors, the robot scans a room, “learns” the position of chairs, couches, and stairs, and then vacuums around them without being instructed by the user. Scholars debate, and no one knows, exactly how far AI will transform our world. In the film, Andrew Ng, one of the world’s leading experts, asserts that it is “the new electricity.” If that’s true, new applications will increase exponentially. Like earlier inventions, AI is not meant to replace humans but to allow them to be more productive and free up time for other pursuits.

Algorithms

In mathematics and computer science, an “algorithm” is a procedure for solving a problem. The word goes back to the Persian inventor of the branch of mathematics known as algebra. The scholar’s last name, al-Khwarizmi, was translated into Latin as Algorithmi. Applied to Artificial Intelligence, algorithms are strings of instructions to perform calculations, process data, estimate probabilities, and complete specific tasks.

Algorithms are the basis for software programs that allow computers to recognize faces, translate speech, identify patterns in data, and solve problems based on logic. Common examples include Google’s search engine, Amazon’s recommendation engine, Waze’s satellite navigation routing system, and people-finders like Skype and Facebook.
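For a concrete sense of what an algorithm is (an everyday textbook example, not one drawn from the film), the short Python procedure below finds a value in a sorted list by repeatedly halving the range of possibilities; every step is spelled out in advance.

```python
# Binary search: a classic algorithm. It follows a fixed procedure -- keep
# halving the portion of the list that could still contain the target.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        middle = (low + high) // 2
        if items[middle] == target:
            return middle                  # found: report the position
        elif items[middle] < target:
            low = middle + 1               # the target must be in the upper half
        else:
            high = middle - 1              # the target must be in the lower half
    return -1                              # the target is not in the list

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))   # prints 4
```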

Neural Networks

Artificial neural networks are one of the main tools of machine learning. These are software programs based on algorithms that use a “network” of interconnected data points. Essentially, they write their own instructions (computer code). They use information provided by humans to achieve goals set by humans. While inspired by processing patterns in animal nervous systems, neural networks remain significantly different from biological brains, which are far more perceptive, flexible, and creative.

To picture how machine learning works, imagine a factory conveyor belt. After the raw material (data) is placed on the conveyor belt, it travels along a production line. At each stop, the data is examined for a different set of features. For example, if the job of the neural network is to recognize a bottle of water, the first stop might analyze the brightness of the object’s pixels. At the next stop, the network might analyze the distribution of pixels to determine the shape of the object. At the end of the production line, the computer gives its results. Human beings then measure the software’s accuracy. For example, if the neural network has mistaken a bottle of whiskey for a bottle of water, the human programmer marks the answer “wrong.” The machine goes back and reprocesses the data again (and again) until its margin of error approaches zero. After a while, the network carries out its task without further human help.
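The Python sketch below mirrors that production-line picture (it is a toy written for this guide, with made-up weights and features, not a real image recognizer): data passes through two successive “stops,” each combining the incoming features in a different way, and the final stop reports a confidence score.

```python
import math

def stop(inputs, weights, biases):
    """One stop on the line: weigh the incoming features, then squash to the 0-1 range."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Two made-up features of an object in a photo, each scaled to 0-1:
# overall brightness and how elongated the shape is.
object_features = [0.8, 0.9]

# The weights below are invented for illustration; a real network learns them
# from many labeled examples and human feedback, as described above.
first_stop = stop(object_features, weights=[[2.0, -1.0], [-1.5, 2.5]], biases=[0.0, 0.0])
verdict = stop(first_stop, weights=[[1.8, 2.2]], biases=[-2.0])[0]

print(f"confidence this is a water bottle: {verdict:.2f}")
```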

These “deep learning” neural networks are being applied to an increasingly wide variety of fields, including speech and sound recognition, drug design, medical analysis, board games, financial investment, and many other specialties, producing results that are statistically more accurate than those of human experts.

The ability to “think” about data does not mean that AI is “sentient,” however. Neural networks are devised to achieve specific outcomes using large amounts of data. Even when attached to robots, neural networks solve only the problems assigned to them by humans. These programs are “intelligent” only in the sense that they learn from their mistakes. Scientists call this “narrow intelligence,” as opposed to “general intelligence,” which humans possess but no computer is expected to have anytime in the foreseeable future. Human intelligence is self-aware, learns from an example of one, and is informed by emotion. Narrow AI is task-specific and data-driven. Narrow AI can identify a picture of a bottle of water. General intelligence knows what to do with the bottle.

One commonsense way to think about the difference between narrow and general intelligence is that people talk to, but not with, their phones. A user may ask a phone for driving directions, but the phone itself has no curiosity or questions about the information it provides. It has no real mind.

Big Data

“Big data” is a common term for the inputs that neural networks need in order to produce accurate results: extremely large and continuously expanding data sets beyond the ability of any human to process. No one can write fast enough, long enough, or small enough to manage the amount of data that the tiniest computer chip can store, sort, and analyze. Big data collection was made possible in recent years by the improvements in storage and digitization that Gordon Moore predicted in 1965. Neural networks use big data to identify patterns, make recommendations, and perform mechanical tasks that range from prescribing medicine and performing surgery to flying airplanes and vacuuming carpets.

Robotics

Robotics is a branch of technology that designs, builds, and programs machines to carry out specific actions either autonomously or semi-autonomously. In industry, most robots are machines with long arms that perform endlessly repetitive tasks like welding the parts of a new car. Some robots operate under close human supervision, such as those used to perform surgery. Instead of using her or his own hands, which are bigger, shakier, and less precise than mechanical instruments, a surgeon uses a computer to direct the robot’s arms and delicate instruments.

Some manufacturers have created robots like “Pepper” in the film to identify faces. They even identify expressions that the robot categorizes as emotions based on examples that humans have previously fed into the computer’s database. Such robots can greet customers by name and provide information on prices or the aisle locations of specific products. They are programmed to express polite and sympathetic greetings, much like voice-activated “shopping assistants” on phones and home devices. Robots are essentially another kind of labor saving device for tasks previously done by human workers. Some robots use artificial intelligence, but most do not.

Labor Saving Devices

Labor saving devices are all the inventions that make it easier for people to complete particular tasks, reducing the time and number of hands necessary for a job. So, for example, a mechanical thresher makes it possible for one person to harvest fifty acres of corn in a day instead of the job taking several people a week by hand. This increases what economists call “productivity”: the amount of work each individual can accomplish.

People have developed tools to make work easier since the Stone Age, but during the Industrial Revolution the attitude spread that if a job was difficult for a human or an animal, then a mechanical device could and should be devised to reinvent the task entirely. Cars and tractors replaced horses and oxen, and people found ways to automate household tasks, too. Labor saving devices have revolutionized the world many times over, continuously raising human productivity.

Creative Destruction

The Austrian economist Joseph Schumpeter coined the term “creative destruction” in 1942 to describe a painful paradox of free markets. Capitalism offers economic incentives for creativity that have made the world wealthier. But every product that improves upon another destroys the market for the earlier item. Old companies go out of business and employees lose their jobs. Some individuals are worse off not just in the short term, but permanently if unable to adapt. Attempts to soften the impact by preserving old jobs merely short-circuit progress. Historically, creative destruction has kept up with population growth, producing an adequate number of new jobs and raising living standards overall.

Milton Friedman, an economist who won the Nobel Prize in 1976, told the story of asking a construction supervisor in an impoverished country why his men used shovels to build new roads instead of bulldozers. His host replied that bulldozers would take away jobs. Friedman replied that by this logic the supervisor ought to issue teaspoons in order to employ even more men.

For real human beings in real time, Schumpeter’s enduring term captures a painful truth: progress is not easy. Old jobs are destroyed. A compassionate society tries to cushion the transition.

Access, Rule of Law, and Transparency

Scholars believe that advances in technology were due primarily to modern social systems that were friendlier to innovation than the aristocratic systems that preceded them. There are three key ways in which modern societies are comparatively more open and equal than before. The film calls them Access, Rule of Law, and Transparency.

Access is an ideal woven into democracy even when imperfectly achieved. Essentially, it is the conviction that open systems function better than closed ones. Where individuals can freely participate in government and the economy, a society becomes not only freer and more personally fulfilling, but also wealthier and more peaceful. The philosophical justification for access is that “All men are created equal,” as stated by the American Declaration of Independence. It has been the nation’s work since 1776 to put this concept into practice by making opportunity accessible regardless of race, gender, ethnic background, sexual orientation, or physical disability. The process hasn’t been easy or automatic. Human history is long and customs are deeply rooted. But the nation has changed its laws and practices over time to align more closely with its revolutionary ideals. This has widened the pool of people who contribute to innovation.

Rule of Law is the belief that government should not have arbitrary power. Rules should be made by elected representatives and enforced impartially. In the U.S., state and federal legislatures determine the “rules” by which everyone is supposed to play. Courts hold both government and citizens accountable. History shows that the breakdown of “rule of law” leads to individual criminality and government tyranny. Innovation dries up. People don’t want to risk others stealing their ideas, and businesses don’t wish to lose their investments either to corrupt elites or popular mobs. Rule of law is fundamental to social stability.

Transparency is another widely held modern value. At the time of Columbus, navigational maps of the earth’s surface were state secrets. Royal governments felt no obligation to report treaties or internal deliberations to their subjects. Then, around the time of the First Industrial Revolution, the United States and a growing circle of nations increasingly came to accept that transparency in political and economic dealings was more useful than secrecy. Scholars trace the first American references to transparency to the 1778 Articles of Confederation. At the time, it was illegal to publish the proceedings of the British Parliament, but the Articles required the Continental Congress to publish a monthly journal of proceedings. The U.S. Constitution of 1789 made similar requirements of the new federal government. In his 1796 farewell address, George Washington reminded his countrymen that “honesty is always the best policy.”

Transparency serves access and rule of law as both depend upon a free flow of information.

Patent System

The history of American patents spans more than three centuries, beginning even before the Constitution was adopted. In the colonial period, people who invented new products could petition the Royal government, which had the power to grant them an exclusive right to sell their invention. But the British imperial system was aristocratic, making patents hard to obtain and defend. The Crown was also discriminatory towards the colonies. The causes of the Revolution included the limits on what colonists could make. For example, Parliament’s “Hat Act” of 1732 restricted the manufacture, sale, and exportation of colonial hats.

Following Independence, the American founders envisioned intellectual property as one of the forms of property that government should defend. Article One, Section 8 of the U.S. Constitution gave Congress the power “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”

This meant that only the inventor could profit from her or his work for the duration of a patent. For example, in 1917 a Swedish immigrant named Gideon Sundback submitted diagrams and a model to the U.S. Patent Office for a new device to fasten clothing. After that, for a limited period of time, any company wishing to make “zippers” had to pay Sundback’s company for the use of his idea. Once the patent lapsed, any manufacturer could freely copy Sundback’s innovation.

The Patent Office was democracy in motion. Any person could file for a patent regardless of race, gender, national origin, and so on. The talents of the whole country, including immigrants, were put to use.

Pursuit of Happiness

The American Declaration of Independence of 1776 famously stated, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

The final phrase was a play on English philosopher John Locke. Locke had earlier spelled out a “contract theory of government” at the end of the violent seventeenth century, following a long period of dynastic turmoil and civil war. In words that came to symbolize the transformation of British government, Locke asserted in 1688 that kings and queens rule not by divine appointment, but with the consent of the people they serve. If rulers fail to protect their subjects’ “life, liberty, and property,” the people are entitled to replace them.

In his draft of the U.S. Declaration of Independence, Thomas Jefferson substituted the phrase “pursuit of happiness” for the word “property,” but he meant much the same thing as John Locke. Citizens were entitled to improve their lot in life. Government must protect, rather than deprive citizens of, the fruits of their labor. It should devise and enforce laws that advance the people’s physical welfare. If government failed, the people could change the government. In the United States, this led to representative government and “universal suffrage,” meaning the right to vote for all adults.

Population and Longevity

Throughout most of human history, population and economic growth were slow and incremental. It took humankind 120 centuries to reach a population of one billion.

Then, in little more than two centuries, the number of humans on the planet shot from one billion to seven billion. Improvements in farming, from the introduction of fertilizers to mechanical threshing, created more reliable food supplies. Science brought a better understanding of how diseases like cholera and typhus are transmitted, leading to improvements in public hygiene. Industrialization led to wages and savings, stimulating a middle class. People lived longer than ever and fewer of their children died. Average life expectancy was around age thirty in 1800, at the start of the Industrial Revolution. In one hundred years, life expectancy climbed to around age fifty. Today, most people can expect to live close to age eighty unless, ironically, they harm themselves by indulging habits that would have been impossible in earlier generations, such as overeating or abusing “recreational” drugs.

Social Safety Net

Humans have a long history of helping one another out in times of trouble. All major world religions require adherents to practice charity towards the needy. The parable of the Good Samaritan in the Christian Bible is one of countless examples. For this reason, churches, temples, mosques, and other religious bodies were among the first institutions to provide organized assistance to the poor.

Yet industrialization created challenges that exceeded the capacity of private charity. Urbanization combined with creative destruction left big, diverse population groups at risk during economic downturns or at times of exceptional innovation. Toward the end of the nineteenth century, all industrial economies began experimenting with new forms of social service to mitigate hardship. But the Great Depression and World War II proved the critical turning point. These events drove home that failing to respond compassionately risked catastrophe.

In 1920, for the first time, more Americans lived in towns earning wages than on farms growing their own food. When the Depression hit in 1929, unemployment rose to 25%. In the United States, a long period of Republican Party dominance came to an end when voters decided that President Herbert Hoover was unequal to the challenge. Under Franklin Roosevelt, the government devised new programs to help families. In some parts of the world, democratic governments simply crumpled under the weight of economic failure. Germany, Italy, and Japan embraced authoritarian governments that promised better economic performance, and attacked their neighbors to achieve it.

The Great Depression produced a variety of federal and state initiatives, some of which continue to the present. One of the great architects of this system was Frances Perkins, an experienced reformer whom Roosevelt recruited to head the Department of Labor, where she served longer than any other labor secretary in U.S. history. A canny politician, Perkins led campaigns that established a minimum wage and maximum workweek. Most importantly, she chaired the committee that wrote the Social Security Act of 1935, creating a federal pension system and fostering state unemployment insurance. Perkins’ achievements did not end the Great Depression, but they helped democracy weather it. Although her safety net has been amended many times since the 1930s, the basic institutional structure remains the same and is premised on the idea that unemployment is largely cyclical. Creative destruction periodically wipes away old jobs, but innovation brings new jobs at higher pay because individual workers are more productive.

Today, some people worry that AI and robots render this premise moot: the changes that are coming are just too rapid and too fundamental. Policy debates center on what changes need to be made in the social safety net to cope with job loss and stagnant wages, especially among the least educated. A Universal Basic Income is one of many proposals. Economist Milton Friedman was one of the first to propose that government provide a minimal income for all citizens to prevent the worst forms of poverty. President Gerald Ford signed a modified plan called the Earned Income Tax Credit, which today still provides a small supplement to the income of employed individuals who fall below a certain threshold.

As the film shows, experts disagree on the best way to reform the current system, but most believe it is inadequate.

Education and National Prosperity

Even before the invention of income support programs in the 1930s, there was another way that governments strove to provide for their citizens. The American Revolution gave rise to the idea that a free country needed free schools if citizens were to vote intelligently and thrive economically. Horace Mann, who never had more than six weeks of schooling in a year, championed the “Common School Movement.” Head of the Massachusetts Board of Education (founded in 1837), Mann called public schools “the greatest discovery made by man” and “the great equalizer.” There is “nothing so costly as ignorance,” he believed. An uneducated society is unstable. “Jails and prisons are the complement of schools,” he wrote. “So many less as you have of the latter, so many more must you have of the former.”

Grammar schools spread across the U.S. from the 1830s to the 1880s. Reading, writing, and arithmetic proved tools for success in industrializing economies. Unlike in most parts of the world, American towns offered children a no-cost education. Pupils learned from common textbooks such as McGuffey Readers and Noah Webster’s American Spelling Book. Immediately after the Civil War, the federal government established a Department of Education to “promote the cause of education across the country.”

A “high school movement” soon began, decades in advance of universal secondary education in Europe (not established there until after World War II). Schooling per pupil in the U.S. went up by 0.8 years every decade. Each generation was better educated and wealthier than the one before it. In 1900, only six percent of Americans graduated from high school. By the 1950s, roughly 60 percent did. Per capita income increased and economic inequality declined. The nation’s productivity climbed as well. With the help of new tools, educated workers produced more goods per capita than previous generations. Americans achieved the world’s highest income in the same years that they became the world’s best-educated people.

That held true until the 1970s. Since then, improvement between generations has nearly ceased. The percentage of Americans who graduate from high school has held relatively constant at around 75 percent, and math and reading scores have not gone up in fifty years. Instead of exceeding their parents’ accomplishments, many students are now stubbornly defined by them. Educational success is lowest among pupils from low-income backgrounds. As shown by Eric Hanushek, one of the scholars in the film, Horace Mann’s beloved public schools are no longer closing the gap between “haves” and “have-nots.”

This stands in contrast with America’s competitors. The United States, which invented public education, no longer leads as it did in the nineteenth and twentieth centuries. European and Asian pupils have caught up with and now exceed American students. In the most recently published international scores (2015), the U.S. ranked 31st, behind most European and Asian nations.

Decline of Employment Benefits

During much of the last century, middle class Americans relied on employers to provide “benefits” in addition to wages. These included health insurance plans, retirement pensions, short-term disability insurance, paid vacations, and contributions towards unemployment insurance. Today, many employers have reduced these benefits, paying only into the Social Security Fund, which provides a small pension to workers after retirement.

There are two important problems with this development beyond the fact that such pensions don’t cover the full cost of living. Social Security relies heavily on current contributions to support people who retired a long time ago. The size of the elderly population has boomed as longevity improves. This puts an enormous strain on the fund. Some economists predict that Social Security will be bankrupt by 2034. Additionally, many people who work in the informal “gig economy” neither pay into Social Security nor receive a pension. Increasing numbers of individuals, such as drivers for Uber or Lyft, are considered independent contractors. These people receive few if any benefits. They are at real risk in a recession or during another wave of innovation that could eliminate their jobs entirely, such as the advent of driverless cars.
