The pilcrow — a quirky backward “P” symbol — signals the start of a new thought. It once served a role in the margins of medieval manuscripts, but today it lives as an editorial mark in digital documents.
You might have noticed a curious backward “P” with an extra leg (¶) in certain editing tools or word processing systems. This strange little glyph is called a “pilcrow,” and its job is to mark the beginning of a paragraph. While it’s found today only in editing contexts, it had wider use in medieval manuscripts.
Pre-pilcrow reading was like a marathon for the brain: There were no breaks. By the Middle Ages, scribes began to use symbols to break up text, which made reading easier, but these symbols weren’t standardized. Different scribes used dashes, letters, and even personal symbols.
The first attempt at standardization came from the Latin word for “chapter,” capitulum (meaning “little head”). The modern pilcrow symbol evolved from there during the 12th century: The “C” that symbolized these breaks eventually turned into a backwards “P” with extended lines as scribes added ornate embellishments to manuscripts.
In the Middle Ages, the word “paragraph” could describe a distinct section of writing that was smaller than a capitulum (akin to how we use “paragraph” today), or denote the actual mark. It comes from the Greek paragraphein, meaning “write by the side.” “Paragraph” has had many variants over the centuries, including pelagraphe, pelagreffe, and pilcraft, the latter of which evolved into “pilcrow” by the 16th century.
By the late medieval period, the pilcrow symbol was a standard feature of manuscripts, used as a design element. Scribes favored drawing them in red ink, giving them visual prominence. But all good things must come to an end. Scribes often ran out of ink for such embellishments, leaving blank swaths on pages. The pilcrow’s demise was solidified with the advent of the printing press, as the hand-drawn mark slowed production. Soon, the pilcrow was abandoned, and a line break divided paragraphs instead.
Although the pilcrow largely disappeared from printed text, it never entirely vanished. It remains in editing marks in online word processors, or sometimes handwritten by editors to guide writers from one paragraph to the next. The pilcrow is a fun reminder that although many things change, some remain the same.
Rachel is a Washington, D.C.-based freelance writer. When she's not writing, you can find her wandering through a museum, exploring a new city, or advocating the importance of the Oxford comma.
Much like a fruity smoothie full of strawberries and bananas, certain words blend together quite nicely. They’re called “portmanteaus,” and you likely use some of them often.
The word “portmanteau” was created long ago for a large travel trunk capable of opening into two equal parts. It wasn’t until 1871 that the term was repurposed by Lewis Carroll, author of Alice’s Adventures in Wonderland. Carroll’s version of a portmanteau was “a word that blends together the sounds and meanings of two other words” — which is the primary usage today. If you’ve ever asked someone to brunch or checked into a motel, you’ve used one of these mashups. Here are a few of the most common portmanteaus that English speakers use regularly.
Brunch
This portmanteau combines the words “breakfast” and “lunch” in a delectable combination that denotes the meal that occurs in late morning or early afternoon — usually on weekends, sometimes including alcohol. Brunch menus tend to include both traditional breakfast and lunch fare, as the culinary lines are a bit more blurred than at other meals.
The earliest use of “brunch” dates to the late 19th century. An August 1, 1896, edition of the British magazine Punch mentions not only brunch, but also “blunch.” It reads, “The combination-meal, when nearer the usual breakfast hour, is ‘brunch,’ and, when nearer luncheon, is ‘blunch.’” Of course, the portmanteau “blunch” has faded into obscurity, whereas “brunch” has gained in popularity.
Chillax
The portmanteau “chillax” tells someone to tone it down. It blends together the words “chill” and “relax,” which are synonymous commands, and it became a popular slang alternative when it was coined sometime in the mid-1990s. Now that the ’90s are back in style, “chillax” can be revived for a quick way to warn someone to take it easy without coming across as too stern or formal.
Motel
“Motel” is a portmanteau with origins dating back to the 1920s, a time when personal motor vehicles were becoming more popular. It fuses the words “motor” and “hotel” to refer to a type of lodging with a large parking lot, often built alongside major roadways. Nowadays, the word “motel” describes a style of hotel where each room is accessed directly from the parking lot, as opposed to through a communal lobby like in a more traditional hotel.
Biopic
Abraham Lincoln, Elizabeth II, and Mozart have all been the subject of cinematic biopics — a genre that combines “biographical” with “picture.” Biopics are movies that dramatize real events and tell the life stories of famous figures in an entertaining way. One of this portmanteau’s earliest uses came in a 1947 Variety article about Till the Clouds Roll By — a film about the life of composer Jerome Kern.
Smog
The term “smog” is believed to date to 1905, when it was used by scientist H. A. Des Voeux. He coined this term by blending the words “smoke” and “fog” to bring attention to the polluted, hazy skies throughout the British Isles. Today, there are widely considered to be two types of smog. The first is “sulfurous smog,” the type that used to plague London, which is caused by burning fossil fuels. The other type is “photochemical smog,” which is prevalent in areas such as Los Angeles, where there’s a high density of motor vehicle emissions.
Podcast
Everyone seems to have a podcast today, but we may not have had the word for these audio programs if not for Apple’s iPod, as the portmanteau combines “iPod” with “broadcast.” One of the earliest uses, if not the earliest, can be traced back to British author Ben Hammersley, who proposed the term “podcasting” in a 2004 Guardian article about the burgeoning entertainment medium. While Apple no longer produces iPods, the portmanteau has endured.
Spork
It’s a spoon … it’s a fork … no, it’s a spork! The history of this spoonlike fork dates back to before the portmanteau was coined. Dr. Samuel W. Francis filed a patent for a sporklike utensil in 1874, though it had the much more convoluted name of “Combined Knives, Forks and Spoons.” The word “spork” later appeared in a 1909 supplement to the Century Dictionary, suggesting it was coined in the intervening years. But it wasn’t until 1951 that the term was registered for a trademark, when inventor Hyde W. Ballard used “spork” to refer to a “combination spoon and fork made of stainless steel.”
Bennett Kleinman is a New York City-based staff writer for Optimism. He is also a freelance comedy writer, devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
How Do They Decide When To Add New Words to the Dictionary?
Just because a term exists doesn’t mean it’s automatically worthy of being added to the dictionary. Much like with award ceremonies, some words are inevitably snubbed while others achieve lexicographical immortality.
Dictionaries are living documents that are updated with new words each year. Recently, Merriam-Webster added more than 200 terms, from social media slang (“touch grass,” meaning “to participate in normal activities in the real world especially as opposed to online experiences and interactions”) to science jargon (“heat index,” which is “derived from a calculation using air temperature and relative humidity”). But much like how sports halls of fame induct only a select few players, dictionary editors and lexicographers don’t accept every word and slang term that comes across their desk. Their process takes into account several factors, such as the term’s longevity, popularity, and purpose. Here are some basics of how a new word makes it into the dictionary.
Merriam-Webster, one of the preeminent English language dictionaries, explains: “A word gets into a dictionary when it is used by many people who all agree that it means the same thing.” That is to say, your friend group might use a word in a certain way, but that doesn’t mean it should go straight into the dictionary. It takes time for words to spread across society, and they get added only after developing a widespread, collectively understood meaning.
An early step to potentially adding a new word is compiling trusted citations of it being used in articles, books, songs, and more. Researchers scour every available source, and the Oxford English Dictionary even uses crowdsourcing to bring new terms to their attention.
Beyond widespread usage, several additional criteria must be met in order for a term to be considered a worthy dictionary entry. A word should be widely understood across many regions, serve a linguistic purpose that enhances communication, and have been used for a sustained period of time. If the word meets all these criteria, it stands a better chance of making the dictionary corpus (the lexicographical term for the body of the reference book). But if a term is known to only a small group of people, or has been popular for just a few weeks, the odds of dictionary enshrinement are much slimmer.
And even if a potential candidate meets all of the criteria, that doesn’t mean it’ll make the cut. It comes down to a final decision that’s made by an editor of a specific dictionary rather than some larger committee of dictionaries. Of course, some metrics are subjective, which is why some words appear in certain dictionaries and not others. However, if a word is approved, it will show up online, be published in future print editions, and is likely to trickle through to other dictionaries.
Bennett Kleinman is a New York City-based staff writer for Optimism. He is also a freelance comedy writer, devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
Some of the most common words used today actually started as mistakes. The English language is famous for its adaptability, as it borrows words from other languages and turns linguistic accidents into permanent additions. From medieval courtrooms to modern computer science, here are nine fascinating examples of words that became part of our dictionary through memorable misunderstandings.
Algorithm
An algorithm is a well-defined mathematical process with clear steps for arriving at an answer. But the word “algorithm” is a mistranslation of the name of ninth-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, which was Latinized into “Algoritmi.” So, a fundamental mathematics term comes from a mispronounced name.
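The sense has survived unchanged into modern computing: an algorithm is still a finite, well-defined sequence of steps. One textbook instance (chosen here purely as an illustration, not something from al-Khwārizmī’s own work) is Euclid’s method for the greatest common divisor, sketched in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, well-defined sequence of steps.

    Repeatedly replace the pair (a, b) with (b, a % b) until the
    remainder is zero; the last nonzero value is the answer.
    """
    while b:
        a, b = b, a % b
    return a


print(gcd(48, 18))  # → 6
```

Each step is unambiguous and the loop is guaranteed to terminate, which is exactly what makes it an algorithm in the sense described above.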
Sneeze
The Old English word fnesan means “to snort.” But as handwriting styles changed, the elongated “long s” (ſ) was easily mistaken for the letter “f,” and vice versa. Fnesan became snesan — the origin of “sneeze.” Gesundheit!
Tornado
A tornado is a flurry of winds, maybe blowing so loudly that people couldn’t hear the correct word. The etymology of “tornado” is unclear, but it’s close enough to the Spanish words tronada, meaning “thunderstorm,” and tornar, meaning “to turn.” A combination of the two, perhaps due to a conflation of their sounds, created the English word “tornado.”
Culprit
In the Middle Ages, the language of English law was French. This word may have originated from a misinterpretation of a common abbreviation in legal documents, cul.prist. The full phrase was prest d’averrer notre bille, or “we’re ready to prove our indictment.” The abbreviation cul.prist indicated a “guilty” verdict. As English became more common, “culprit” was created by people mistaking the legal formula for a name for the accused.
Pea
The word “pea” is a backformation, a word created from an existing word. The original form of the green legume was “pease,” with the plural “pesen.” However, “pease” was mistaken for the plural, and people quickly began calling the singular “pea.” The mistake stuck, and now “peas” is the plural of “pea.”
Ammunition
Like “culprit,” “ammunition” comes from French, a dominant language of the Middle Ages. The phrase la munition, referring to military supplies, was misheard by English speakers as l’ammunition — and “ammunition” maintains its firing power today.
Sherry
Sherry is a strong, sometimes sweet Spanish wine. The name is commonly believed to be a misinterpretation of the Spanish vino de Xeres.
Sashay
Chassé is a French ballet term, meaning “to move across the floor, jump, and bring your feet together.” But English ears heard it and wrote down “sashay,” meaning “a sassy, dance-like walk.”
Varsity
“Varsity” now refers mainly to a school’s top-level athletic teams, but it comes from “university.” It’s a shortening and misspelling based on an archaic pronunciation.
Jennifer A. Freeman is the Senior Editor of Word Smarts and Word Daily. When she's not searching for a perfect synonym or reaching "Genius" level on Spelling Bee, she's playing with her Welsh Terrier in Greenville, SC.
A scroll through social media comments will reveal instances of “mother” or “she’s mothering” in replies that have seemingly no relation to moms. Don’t worry — we’re here to decipher the Gen Z slang and give you a history lesson on how “mother” became a superlative worthy of a queen.
The origin is not so far off from the dictionary definition of mother: “a woman in relation to her child or children.” In the 1970s, the drag performers Crystal La Beija and Lottie LaBeija founded drag balls, events where performers would walk the runway in various fashion and culture-related categories. The performers formed “houses,” where all the members were under the tutelage of one “mother.” As some of these young LGBTQ+ drag performers had been kicked out of or left their biological families, their newly formed families and mothers provided necessary support. Crystal La Beija was the founding mother of the House of LaBeija, and Pepper LaBeija, the subject of Paris Is Burning, the award-winning documentary film about Harlem drag ball culture, followed her.
While “mother” began in the Harlem ball scene, it soon spread through drag culture. Many drag performers use “mother” to affectionately refer to the real-life folks who inspired them and/or their personas. These are traditionally female performers with a large gay fanbase — Judy Garland, Madonna, Lady Gaga, Rihanna, Beyoncé, and more. In recent years, Emmy-winning TV sensation RuPaul’s Drag Race has used the pageant format of the Harlem drag balls for its runway segments, and the contestants on the show often call host RuPaul “mother” because of her influence in the drag community. RuPaul has embraced the title with a hit dance song, “Call Me Mother.”
But the social media “mothers” aren’t exclusively from drag queens. In recent years, the slang has leapt into the Gen Z lexicon as a way to express affection and respect for all kinds of women. If someone is demonstrating a particular level of fierceness or elegance that looks like it could belong on the runway, you might say, “She mothered so hard with this one,” or give a succinct “mother” as a sign of your appreciation or admiration.
Julia Rittenberg is a culture writer and content strategist driven by a love of good stories. She writes most often about books for Book Riot. She lives in Brooklyn with a ton of vintage tchotchkes that her cat politely does not knock over.
The word for “mom” might sound different depending on where you are, but the feeling is universal. From the cheerful “mami” in Spanish to the sophisticated-sounding “maman” in French, every culture has its own take on what to call the most important person in the room on Mother’s Day. While many of these languages share similarities — many start with “m” or include “ma” and “mama” sounds — each has its own cultural ties and flair.
"Mama" — A Universal Term
Some words transcend language barriers, and “mama” is one of them. “Mama” is a go-to moniker in German, Polish, Indonesian, Filipino, Finnish, Swahili, and Japanese (ママ, “mama”) — and in Italian and Swedish it receives an extra “m” for mamma. It has slight spelling and vowel variations in other languages, as seen in Ukrainian (мама), Greek (mamá), Romanian (mamă), and Mandarin Chinese (妈妈, māma), and in Māori, the language of the Indigenous Polynesian people of New Zealand, it’s māmā (pronounced with a long “a”).
While the origin of “mama” is uncertain, the Oxford English Dictionary says it’s probably a duplicated syllable of /ma/ — a common early vocalization of infants. Because of this, it’s plausible that all similar words for mom — including “momma,” “mammy,” “mum,” and “mom” — are ultimately related by babies’ capacity for language, spanning centuries, oceans, and cultures.
In Some Languages, "Mom" Changes by Region
While many cultures have a word like “mama,” there are plenty of other names to call mom, and in some languages, the version you use depends on where you live. Vietnamese has more than a dozen variations of “mom.” It’s a tonal language, and words can vary based on geography. Generally speaking, you could say mẹ (the most common modern word for “mother”) anywhere in Vietnam and be correct — but other variations depend on regional dialects.
In Northern Vietnam, including Hanoi, mợ, a diminutive of mẹ, is very popular, while in the countryside, you’d be more likely to hear bu or bầm. In Central Vietnam, mạ (derived from the Chinese word 妈妈 or mama) is common. The underdot tone mark indicates a low-dropping tone, but to the south, má receives a high-rising tone mark instead.
Titles for mothers vary by region in Spanish as well. While mamá (“mom”) and madre (“mother”) are universal, there are plenty of regional Spanish words for “mom.” For example, the diminutive suffix “-ita” is added to mamá as a term of endearment, but in Latin America, it’s written as mamita, and in Spain, it’s mamaíta. Similarly, mamacita is the equivalent of English’s “momma” in Mexico and Central America. Mamá is also often shortened to ma or amá, which is especially popular in Mexico.
Another regional slang term for “mom” in Spanish is Argentina’s mamucha. In Mexico and some South American countries, vieja (“old lady”) and viejita (“little old lady”) are used affectionately to refer to one’s own mom. And one of the most endearing slang options for “mom” comes from Mexico, where jefe (meaning “boss”) refers to mom. It’s often seen with a twist, as in jefita (“little boss”) or jefecita (“little boss lady”), using diminutive suffixes to show fondness.
Other Words for "Mom"
With thousands of languages worldwide, it’s no wonder there are so many ways to say “mom.” Some are classic, while others are surprising, quirky, or even confusing, but they all share one similarity. They carry a special sense of warmth and familiarity that transcends language.
In American English, we might favor “mother,” “mom,” “mommy,” and “mama,” but across the globe, there are hundreds of words for this special person.
Rachel is a Washington, D.C.-based freelance writer. When she's not writing, you can find her wandering through a museum, exploring a new city, or advocating the importance of the Oxford comma.
What’s the Difference Between Straight and Curly Quotation Marks?
Straight and curly quotation marks may look largely similar. But when it comes to deciding between the two, there’s a “smart” choice and a “dumb” choice to be made.
Much like the shape of french fries, quotation marks come in different styles: straight (" ") and curly (“ ”) — thankfully with no waffle or shoestring variants to confuse the issue further. Funnily enough, while they look different, they don’t differ in meaning, as straight and curly quotation marks serve an identical grammatical purpose. They do, however, cause some issues in typography and, more recently, computing — and thanks to modern computer programmers, straight quotes came to be called “dumb” and curly quotation marks “smart.” Here’s how they acquired those reputations.
Curly quotation marks were preferred by the earliest printers for their stylistic appeal. They were the standard choice until the typewriter was invented in the 1870s. In an effort to consolidate space on typewriter keyboards, early models included a single straight mark (') key to serve a variety of purposes. This single ambiguous line could act as an apostrophe if typed once or quotation marks if typed twice, and could also be used for measurements in feet or inches (e.g., 5'4"). This choice was more efficient than dedicating multiple keys to open and close curly quotation marks and an apostrophe. It freed up valuable keyboard real estate for other symbols.
In the 1960s, ASCII (American Standard Code for Information Interchange) was designed for early computing systems. The digital system didn’t have the same physical limitations as a typewriter keyboard, but ASCII still adopted straight quotation marks, rather than revert to the curly symbols that many typographers preferred. The smart/dumb distinction developed as personal computer technology evolved. In the 1980s, software developer David Dunham wrote an algorithm he called “Smart Quotes,” which automatically replaced the straight marks ' and " with the appropriate curly ‘ or ’ and “ or ” as the user typed. At that point, the straight versions were still standard on Macintosh computers and, as Dunham described, “you used to have to remember some arcane keyboard combinations to enter curved quotes.”
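Dunham’s original code isn’t reproduced in the sources, but the behavior can be sketched with a simple heuristic: a straight quote becomes an opening curly quote after whitespace or an opening bracket, and a closing one everywhere else. The Python below is an illustrative sketch under that assumption, not Dunham’s algorithm:

```python
def smarten(text: str) -> str:
    """Replace straight quotes with curly ("smart") quotes.

    Heuristic (an assumption, not Dunham's actual rule): a quote
    preceded by whitespace or an opening bracket is opening;
    otherwise it is closing (or an apostrophe).
    """
    out = []
    for i, ch in enumerate(text):
        prev = text[i - 1] if i > 0 else " "   # treat start of text as a boundary
        opening = prev.isspace() or prev in "([{"
        if ch == '"':
            out.append("\u201c" if opening else "\u201d")  # “ or ”
        elif ch == "'":
            out.append("\u2018" if opening else "\u2019")  # ‘ or ’
        else:
            out.append(ch)
    return "".join(out)


print(smarten('She said "hi" and didn\'t wait.'))
# → She said “hi” and didn’t wait.
```

Note the heuristic’s limitation: it would also curl the prime marks in a measurement like 5'4", which is why real smart-quote features layer extra exceptions on top of this basic rule.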
It seems that thanks to Dunham’s algorithm, straight quotation marks got the reputation of being “dumb,” whereas curly ones came to be known as “smart.” While there really is no grammatical difference between the two, straight quotation marks were phased out for the more popular curly quotation marks as word processing technology developed. All this is to say that the difference between straight and curly quotation marks is largely a style choice, though it’s one that many have strong feelings about. Common typography wisdom says that you should never use straight quotation marks, as they’re simply a vestigial relic of early typewriters, and some computer programs don’t render them properly. Instead, curly quotation marks are considered preferable due to their aesthetic appeal and also the fact that they came first. We’re of the opinion that it doesn’t really matter, but you should pick one and be consistent within the piece.
Bennett Kleinman is a New York City-based staff writer for Optimism. He is also a freelance comedy writer, devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
What’s the Difference Between “Comprise” and “Compose”?
If you’ve ever avoided using the terms “comprise” and “compose” because you weren’t quite sure of the difference between them, this one’s for you. Here’s how to accurately describe parts of a whole using these verbs.
English has a way of humbling even the most meticulous grammarians. I’d venture to guess that many of us have confused or swapped “comprise” and “compose” at least once or twice. Not only do they sound similar, but both verbs deal with how parts relate to a whole. By definition, “comprise” means “consist of; be made up of,” while “compose” (when referring to elements) means “constitute or make up (a whole).” Simple, right? Not quite. In practice, the nuance between these two words can bewilder even the most seasoned writers. But there’s a trick for telling them apart, and it all depends on the order of the sentence.
Consider this example: “The U.S. comprises 50 states” vs. “The United States composes 50 states.” The first option is correct, but why? “Comprise” is used when the thing that is the whole is listed before the parts. Conversely, “compose” is used when the parts are listed before the whole. Following this rule, we can flip the previous example around for the correct usage of “compose”: “Fifty states compose the United States.” The parts (the 50 states) come before the whole (the United States).
Here’s another example. Let’s picture a bouquet of flowers. You might say, “The bouquet comprises roses and peonies” because the whole (the bouquet) comprises the parts (roses and peonies). Similarly, you’d say, “Roses and peonies compose the bouquet” because the parts (roses and peonies) compose the whole (the bouquet).
However, knowing the difference between “comprise” and “compose” isn’t the only difficulty here. Another common blunder is the phrase “is comprised of,” as in, “The bouquet is comprised of roses and peonies.” According to grammarians, this use of passive voice is never correct — it would be like saying “is sold of” instead of “sells.” However, you can say, “The bouquet is composed of roses and peonies.” (In this passive form, the previous guidance about the parts being before the whole doesn’t apply.) The mistake with passive voice likely happens because of the confusion between “comprise” and “compose,” but if neither “compose” nor “comprise” fits the bill, you might try “constitute,” a verb that means “to be (a part) of a whole.”
If you’re feeling swept up in the “comprise” vs. “compose” conundrum, you’re not alone, but now you know the key to determining the difference: “Comprise” starts with the whole, and “compose” starts with the parts. And you receive bonus points if you avoid the phrase “is comprised of” altogether. If this is still too tedious, there’s no shame in using tried-and-true stand-ins like “includes,” “consists of,” or “makes up” — they’re clear, correct, and easier to remember.
Rachel is a Washington, D.C.-based freelance writer. When she's not writing, you can find her wandering through a museum, exploring a new city, or advocating the importance of the Oxford comma.
Why Do We Talk About “Stealing Someone’s Thunder”?
The origins of “stealing someone’s thunder” are somewhat scandalous. They can be traced to the theatrical world of 18th-century England, so let’s raise the curtain and examine what happened.
There’s a certain episode in season 7 of Friends in which Monica accuses Rachel of several instances of “stealing her thunder.” Maybe I was etymologically naive, but I thought the writers of the show created this expression, which refers to doing or saying something that another person was planning on, and thus taking away the credit or attention that the other person may have deserved. But with a little bit of research, I found out the saying originated in the theater world of 18th-century England.
The story goes that in 1709, a struggling playwright, John Dennis, staged a London production of Appius and Virginia. For this performance, Dennis created a mechanism that could replicate the sound of thunder. While there are no surviving records of how it worked, reportedly the device was successful in mimicking the booming noise. However, the play was far less successful, and the production was forced to close after just four nights.
Soon after this failed run, the very same theater staged a production of Shakespeare’s Macbeth. Dennis attended opening night, only to discover that his thunder machine had been purloined and used in the show. He reportedly stood up and shouted something to the effect of, “They will not let my play run, but they steal my thunder!” This very literal exclamation supposedly gave rise to the idiom that’s used more generically today.
Nowadays, “stealing someone’s thunder” is applied in a variety of ways. It can refer to literal theft like it originally did, or it can be said as a reaction to some social drama (e.g., a colleague beating you to sharing an idea at work, or a cousin or friend getting engaged at your wedding).
If Dennis were alive today, perhaps he could take solace in knowing he left an indelible mark on the English language, even if he failed to make the same impact theatrically.
Bennett Kleinman is a New York City-based staff writer for Optimism. He is also a freelance comedy writer, devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
Penmanship is a dying art form, but that doesn’t mean handwriting is any less important or useful. Let’s take a look at the history of cursive writing to learn how a skill that was once widespread has since declined.
As today’s communication is dominated by keyboards and touchscreens, the art of penmanship is dying out. For hundreds of years, personal style was expressed through stylish script and signatures. But in recent decades, cursive penmanship has become less prominent, in part because many American grade schools dropped it from the curriculum in the 2010s (though about half the states have added it back in the last few years). Let’s explore the history of cursive writing and script and why mastering penmanship remains relevant in our digital age.
Cursive as a Status Symbol
Good penmanship has long been considered a status symbol, indicating wealth, privilege, and education access. The ancient Romans borrowed aspects of the Etruscan alphabet to create an early written script for transactions and correspondence. However, by the Roman Empire’s fall, penmanship had become a specialized discipline rarely seen outside monastic settings. This is evidenced by the beautiful illuminated manuscripts from monasteries before the Renaissance.
To give a very brief overview of the development of European penmanship styles: In the late eighth century, Charlemagne — the King of the Franks and Holy Roman Emperor — instructed an English monk to standardize the craft of penmanship. This resulted in the Carolingian minuscule, a writing style that included lowercase letters, word separation, and punctuation. In the Middle Ages, the rising cost of parchment led to a denser style of script. When the printing press arrived in the mid-15th century, heavier typefaces came to dominate print, and Italian humanists responded by creating a more elegant hand, known as “italic.”
Handwriting was a status symbol, leading to the emergence of penmanship schools in the New World by the 1700s, while poorer people taught themselves to write by copying from manuals. In addition to indicating education and wealth, penmanship also signified gender, as men and women were expected to flourish their writing differently. “Feminine” writing often appeared more curved and bowed-out than straighter “masculine” writing.
In the mid-1800s, abolitionist Platt Rogers Spencer attempted to democratize American penmanship by developing a cursive writing system that was adopted by many schools and businesses. His Spencerian script can be seen in the original Coca-Cola logo. This idea of teaching a single penmanship style caught on, and in the 19th and 20th centuries, cursive English was standardized in American schools. As cities grew and job prospects, such as secretarial positions, opened outside of fields and factories, strong writing skills were required. In many ways, good penmanship meant improved opportunities.
Handwriting for Memory’s Sake
Why don’t more people born after the 1990s have strong penmanship skills? The answer is simple — computers. While penmanship is still rigorously taught in many European schools, current American schoolchildren spend more time mastering typing and computer skills than practicing neat handwriting. But traditional handwriting offers unique cognitive benefits that typing simply can’t replicate.
Today, we may not need to pass a penmanship test to get a job, but it’s still a valuable skill to cultivate outside of school. Research shows that handwriting notes activates multiple brain regions associated with optimal memory, much more so than digital devices. Taking down information by hand or writing a to-do list on paper will preserve that memory longer than typing it into a laptop or phone.
Jennifer A. Freeman is the Senior Editor of Word Smarts and Word Daily. When she's not searching for a perfect synonym or reaching "Genius" level on Spelling Bee, she's playing with her Welsh Terrier in Greenville, SC.