Are jumping jacks named after a real person, or are they a gym teacher’s invention? Here’s a quick examination of the etymological history behind this common exercise.
There are far too many famous Jacks to count — including (but not limited to) Lemmon, Nicholson, Benny, and even the fictional Jack Bauer from TV’s 24. But what about “jumping jack,” as in the calisthenic exercise? Does this name refer to a real Jack, or does the credit lie elsewhere? Let’s jump into figuring out the phrase’s true etymological history.
According to Merriam-Webster, the earliest use of “jumping jack” dates back to 1883, long before the exercise was invented. At that time, it referred to “a toy figure of a man jointed and made to jump or dance by means of strings.” These wooden toys were quite popular in parts of England, France (where they were known as pantins), and Germany (where they were called Hampelmann) in the 17th through 19th centuries. Similar puppets have been found in Brazil, in North America (among the Hopi people), and in Africa (in the form of Yoruba carved masks). In England, the name “Jack” was likely given to this toy because it was a common way to refer to any male figurine at the time — similar to how we’d refer to any random man as an “average Joe” today.
This dancing puppet toy’s name and movement appear to be the likely inspiration for the actual physical exercise. “Performed from a standing position by jumping to a position with legs spread and arms raised and then to the original position,” the exercise was popularized, at least in part, by General John J. Pershing (nicknamed “Black Jack”). While working as an instructor at West Point from 1897 to 1898, Pershing taught the exercise as a conditioning technique. He likely called this movement a “jumping jack” because it closely resembled the toy’s movement (and possibly because of the connection to his nickname), but we cannot know whether he invented the name and the exercise or learned them from someone else.
One common misattribution involves fitness guru Jack LaLanne, who did help popularize jumping jacks on his nationally syndicated exercise TV show. But given that the show ran from 1951 to 1984, it postdates both the toy and the Pershing story, so LaLanne can’t be the origin of the name.
In Greek mythology, Pan was a fertility deity said to roam freely in the mountains, caves, and forests of Greece. Faunlike in appearance, he was typically represented as mostly human in form but with the horns, legs, and ears of a goat. As the patron of shepherds, Pan concerned himself with flocks and herds of pastoral animals. He was also a free-spirited — and notoriously lusty — god of the wilds, who enjoyed dancing in the moonlight with the nymphs and playing his eponymous panpipe.
As a god of nature and protector of animals, Pan can be seen as a positive force. But he had a dualistic nature, being neither purely good nor evil. Pan could be wild and unpredictable and possessed a peculiar and disconcerting power: the ability to instill a sudden, overwhelming fear in anyone who raised his hircine (goatlike) hackles. According to the ancient myths, he became particularly irritated when anyone interrupted his afternoon naps. If a passing stranger did disturb his slumber, Pan would let out a chilling yell that sent terror coursing through anyone nearby — a type of fear named “panic” in his honor.
Ancient Greeks believed Pan was responsible for a range of strange events, from sudden stampedes of livestock to the inexplicable fear that would grip travelers as they passed through dark woods. One famous story credits Pan with helping the Athenians defeat the Persians at the Battle of Marathon in 490 BCE. According to the accounts of the ancient Greek historian Herodotus, Pan appeared to the runner Pheidippides and promised to aid Athens. During the battle, the Persian army supposedly experienced sudden, unreasonable terror — soldiers became panic-stricken — which ultimately led to its defeat. The grateful Athenians later built a shrine to Pan beneath the Acropolis.
The word “panic” passed through the centuries, and today in English it means what it did millennia ago: “a sudden overpowering fright.” But now we tend to attribute panic to the instinct of “fight or flight” and the release of stress hormones such as adrenaline and cortisol, rather than to a libidinous, flute-playing goat-god who lives in the hills.
Handwriting can be as distinctive as a fingerprint, but there are specific types of script that have developed over the centuries. Do you know the name for what you were taught in school?
The ability to communicate via written language is one of the main behaviors that distinguish humans from other animals. In the ancient world, three main writing systems developed independently — in the Near East, China, and Mesoamerica. These systems evolved from pictography toward syllabaries (symbols representing syllables) and, in some cases, alphabets. The Latin alphabet we use today developed out of the Near East writing system. Along with the alphabet came specific writing styles, each adapted to solve specific problems — whether increasing writing speed, improving legibility, or simply making the written word more beautiful.
Here we take a look back through the history of cursive handwriting and how different methods have emerged over time, from the elegant loops of medieval scribes to the standardized methods taught in modern classrooms.
Roman Cursive
Long before the printing press was invented, ancient Romans relied on handwriting for all written communication, records, and daily scribblings. Apart from the square capitals (capitalis quadrata) used for inscriptions on public monuments, the writing can be divided into two varieties: old and new Roman cursive. The word “cursive” comes from the Latin “currere,” meaning “to run,” signifying the letters ran together. Old Roman cursive, used from approximately the first century BCE to the third century CE, was a majuscule script — one that used capital-like letters, all of a similar height. In the late third century, old Roman cursive was largely replaced by new Roman cursive, which incorporated minuscule letters similar to the lowercase letters used today in the Latin alphabet. New Roman cursive became the dominant form of writing in ancient Rome, leading indirectly to Carolingian minuscule — and eventually to the script commonly used today.
Carolingian Minuscule
Carolingian minuscule emerged during the eighth century, when several monasteries in the Carolingian realms of Northern France and Germany began developing scripts in an attempt to bring some clarity, order, and consistency to the swathe of barely legible cursives that had developed from the late Roman period. The Carolingian ruler Charlemagne, keen on bringing about an intellectual revival, tasked the Anglo-Latin cleric Alcuin with standardizing texts across the empire as part of the broader educational reforms of the Carolingian Renaissance. Based primarily at the scriptorium (a workroom devoted to copying manuscripts) in Tours, France, Alcuin carried out his task with aplomb. The newly standardized script, with its clear, rounded letterforms, uniform heights, and consistent letter spacing, was ideal for copying manuscripts, and soon became the principal script in the empire’s scriptoria. By the end of the ninth century, Carolingian minuscule had emerged as the standard form of handwriting throughout most of Europe. It would influence virtually all subsequent Western scripts.
Gothic Cursive
Gothic scripts emerged in the 12th century to meet the growing demand for legible religious texts. At the same time, the rise of universities saw an increased demand for costly parchment — the dense, angular nature of Gothic cursive conserved space on the page, while also creating a visually striking aesthetic that came to define medieval manuscripts. (Due to its dense, heavy, dark style, Gothic script is also known as blackletter.) Multiple variations existed across Europe, including the rigorous and formal littera textualis, and a rounder style known as rotunda, used in southern Europe. While beautiful and space-efficient, Gothic cursive’s density made it challenging to read, especially for those unfamiliar with the style. Despite this, it remained the dominant script for formal documents, religious texts, and legal records throughout the late Middle Ages.
Italic Script
Italic script emerged during the Italian Renaissance of the 15th and 16th centuries, primarily as a response to the cramped and illegible lettering of medieval Gothic cursive. Humanist scholars sought to revive what they believed were ancient Roman writing styles, but they actually based their new italic script primarily on Carolingian minuscule, which they mistakenly thought was Roman rather than medieval. Italic featured slanted, elegant letters with a flowing rhythm, combining speed with beauty. And unlike Gothic’s angular compression, italic spread letters horizontally with generous spacing, making texts easier to read. Italic’s tilt also made it cost-efficient, as copyists were able to fit more words on fewer pages, further promoting its spread across Europe.
Copperplate Script
Copperplate script, also called English roundhand, dominated formal writing from the 17th through 19th centuries. It first emerged in England, largely due to a need for an efficient commercial cursive style. During this time, metal engraving became more common and accessible, and scribes began working alongside engravers to recreate their work on copper plates for printing — hence the name. The script itself is graceful and highly refined, using elegant letterforms with dramatic contrasts between thick downstrokes and thin upstrokes. Copperplate demanded excellent pen control, and writing masters created elaborate manuals displaying examples of the script. Copperplate soon became the standard for formal documents, invitations, certificates, and business correspondence in Britain, and from there spread throughout much of Europe and North America.
Spencerian Script
Spencerian script was developed by Platt Rogers Spencer in the United States during the 1840s. Spencer set out to create a form of cursive handwriting that could be written quickly and legibly to aid in business correspondence, yet remained suitably elegant for personal letter writing. At first glance, Spencerian script can appear very similar to copperplate, but it has a number of distinguishing features: less emphasis on shaded downstrokes on small letters, only one broad downstroke on capitals, minuscules considerably smaller than the capitals, and letter joins that space the letters farther apart. Its elegance rivaled copperplate’s, but it proved faster and more practical for everyday use. It caught on quickly, becoming the standardized form of cursive handwriting taught in American schools in the 1850s.
Palmer Method
Austin Palmer revolutionized cursive in the late 1800s by developing a system specifically designed for business efficiency and ease of teaching. The Palmer Method streamlined the flourishes of Spencerian script, creating a simpler and faster system of writing. In 1894, he published The Palmer Method of Business Writing, designed primarily for use in business colleges. The book was adopted by public school systems across the United States and became the standard cursive instruction for decades. Its focus on efficiency over artistry reflected the Industrial Age’s values of productivity and standardization — and while the method lacked the visual beauty of copperplate or Spencerian, it successfully democratized cursive by making it accessible to millions of students. The Palmer Method was used through the 1950s — and in some places, into the 1980s — but eventually began to fall out of favor when educational standards changed.
The Zaner-Bloser Method was adopted for teaching handwriting in the latter half of the 20th century. It retains elements of the Palmer Method, but it teaches block printing before cursive script. In the early 21st century, some school districts dropped cursive handwriting from the curriculum, but it’s now being added back: As of 2024, 24 states require schools to teach some form of cursive handwriting.
Subjects and objects are two of the most important components of sentences, helping us form our favorite song lyrics, novels, and movie lines. Here’s how to identify them.
Think back to some of the most recognizable lines from Hollywood films. Often, brevity increases impact. The line “Nobody puts Baby in a corner” (Patrick Swayze making hearts flutter in the 1987 romance Dirty Dancing) is a clear example of basic sentence structure, with a subject (“Nobody”), a verb (“puts”), and an object (“Baby”). Subjects and objects are two essential parts of sentences, but sometimes, they can be tricky to identify.
Every complete sentence must have a subject. The subject is the person, place, or thing performing the action, and it is almost always a noun or a pronoun. Even when the order of the words doesn’t fall into a straightforward pattern with the subject at the beginning, it’s there. For instance, “Here’s Johnny!” (from the 1980 adaptation of The Shining) is a complete sentence because it includes a subject (“Johnny”) and a verb (the contraction of “is”); “here” is an adverb. Occasionally, the subject can be a different part of speech, such as a gerund (a verb acting as a noun that ends in “-ing”) or an infinitive (“to” + a verb). For example: “Swimming is an excellent cardio exercise” uses “swimming” as the subject.
Of course, not all sentence constructions are as simple as a noun plus a verb. That’s where the object adds clarification or depth to a sentence. The object is a noun or pronoun affected by the verb’s action. For example, in Apollo 13’s “Houston, we have a problem,” the subject is “we,” the object is “problem,” and “Houston” is a vocative (a direct address). “Problem” is the thing being acted upon by the verb.
Subjects can also be implied. Consider an example from 1977’s Star Wars: A New Hope: “Use the Force.” The subject is an implied “you,” making “the Force” the object of the verb “use.” Implied subjects can be challenging to identify but are common nonetheless. When in doubt about whether a noun is functioning as a subject or an object, consider how it interacts with the verb. If it performs the action, it’s the subject, but if it receives the action, it’s the object.
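Incidentally, the same subject-or-object test can be automated. Here’s a minimal sketch using the spaCy NLP library (our choice of tool, not something the grammar itself prescribes), whose dependency parser labels a nominal subject “nsubj” and a direct object “dobj”:

import spacy

# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

for line in ["Nobody puts Baby in a corner.", "Houston, we have a problem."]:
    for token in nlp(line):
        if token.dep_ == "nsubj":    # nominal subject
            print(line, "-> subject:", token.text)
        elif token.dep_ == "dobj":   # direct object
            print(line, "-> object:", token.text)

Run on the movie lines above, it should pick out “Nobody”/“Baby” and “we”/“problem,” matching the analysis by hand.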
Some familiar phrases sharpen your writing while others quietly dull it. Learn how to tell an aphorism from an idiom — and why spotting a cliché matters more than you think.
Every January, we’re flooded with well-meaning advice: “be true to yourself,” “start fresh,” “work smarter, not harder.” Some of it is genuinely helpful, and some of it just sounds wise because we’ve heard it so many times. For writers and careful speakers, the challenge isn’t avoiding familiar language altogether — it’s knowing when a figure of speech will be helpful in conveying your message.
A well-chosen aphorism can sharpen an idea, and a colorful idiom can make it memorable. A cliché, on the other hand, can drain it of life. As the new year invites reflection and resolution, it’s a good moment to choose our words with a little more intention. How can you tell the difference between an aphorism, an idiom, and a cliché?
Aphorisms
An aphorism is a concise statement of a principle or a universal truth. Examples abound: “actions speak louder than words,” “practice makes perfect,” “better late than never,” “easier said than done,” “every cloud has a silver lining,” “look before you leap,” “money can’t buy happiness,” and “two wrongs don’t make a right.”
Some aphorisms are direct quotes from or references to philosophy, poetry, and literature. For example: “the unexamined life is not worth living” (Socrates), “a thing of beauty is a joy forever” (John Keats), “the truth is rarely pure and never simple” (Oscar Wilde), and many from Shakespeare.
Aphorisms are effective because they are short, punchy, and direct. They often use parallel structure for effect and to create a rhythm (“easy come, easy go”), and are typically constructed in the active voice (“speak softly and carry a big stick”).
Idioms
An idiom, as Merriam-Webster defines it, is “an expression in the usage of a language that is peculiar to itself either in having a meaning that cannot be derived from the conjoined meanings of its elements (such as up in the air for ‘undecided’) or in its grammatically atypical use of words (such as give way).” Simply put, it’s a commonly understood expression that doesn’t match up to the definitions of its individual words.
Sources of idioms are varied and include card games, hunting, anatomy, the theater, and sports. For example, American English has been enriched with many idioms just from baseball: “in the ballpark,” “batting a thousand,” “it’s a brand-new ballgame,” “bush league,” “can’t get to first base,” “play hardball,” “heavy hitter,” “off base,” “out in left field,” “right off the bat,” “step up to the plate,” “swing for the fences,” “knock one out of the park,” “go down swinging,” “strike out,” “be thrown a curve ball,” “go to bat for someone,” and “touch base.”
Clichés
A cliché is a trite, hackneyed phrase or expression. To name just a few: “tip of the iceberg,” “easy as pie,” “think outside the box,” and “fit as a fiddle.” You might recognize these as idioms — and they are — but clichés are idioms worn out from overuse. Avoid falling back on clichés: they reveal a lack of creativity and a reliance on dull word usage. It can be hard to tell when an idiom edges into cliché territory (as we could argue for a few of those baseball idioms), so if you catch yourself reaching for the same stock phrases, it may be time to search for a new option.
Some of our sharpest turns of phrase come with a literary pedigree. Here are classic references that add wit, depth, and a hint of erudition to everyday conversation — if you know how to use them.
If you want to add some literary flair to your writing, or simply sound more well-read, learning a few clever quotes and cerebral idioms goes a long way. From witty Shakespearean lines to references to classical mythology and timeless novels, these expressions can be used in modern conversation, bringing their rich history into daily life. Knowing their precise meanings can make your everyday speech sharper and more interesting. Whether contemplating bold decisions or acknowledging thankless tasks, these phrases replace dull words with more meaningful ones.
Cross the Rubicon
Meaning: To pass a point of no return, after which the results of one’s actions cannot be changed.
Example: “The age of AI has crossed the Rubicon — there is no going back.”
To cross the Rubicon is to take a decisive step at a critical moment. It comes from a real historical event: In 49 BCE, Julius Caesar and his army crossed the Rubicon River, which formed the border between Italy and Gaul. This violated Roman law and marked the start of a civil war. “Crossing the Rubicon” (or “passing the Rubicon”) has been used to refer to a metaphorical boundary since at least the 17th century, as seen in this 1626 letter cited in the Oxford English Dictionary: “Queen Dido did never more importune Æneas’s stay at Carthage, than his mother and sister do his continuance here at London … But now he is past the Rubicon.”
The die is cast
Meaning: A process or course of action has been started and it cannot be stopped or changed.
Example: “The die was cast when the company announced the merger today.”
The Rubicon wasn’t the only metaphor born from Julius Caesar’s famous river crossing. As the legend goes, Caesar waded into the water and said, “alea jacta est,” meaning “the die is cast.” This saying refers to the literal action of rolling a die or dice: Once a die is rolled, the outcome cannot be changed. Caesar was possibly quoting a line from a Greek play by Menander: anerriphtho kybos, meaning “let the die be cast.”
Mad as a hatter
Meaning: Severely mentally unsound.
Example: “I can feel mad as a hatter when I’ve worked 10 days in a row with no break.”
“Mad as a hatter” is an old-fashioned saying that describes someone as mentally unsound, though today it can be an idiom for calling someone or something unpredictable or absurd. The saying has a grim origin, dating back to the 18th-century hat-making industry. Safety standards were nonexistent at the time, and workers were exposed to toxic substances that resulted in physical symptoms and hallucinations. Though the saying predates the novel, Lewis Carroll popularized the idea in 1865 as a metaphor in Alice’s Adventures in Wonderland. The Hatter character, as he is called in the book, became known popularly as the Mad Hatter, furthering the phrase’s association with unpredictable behavior.
Quixotic
Meaning: Foolishly impractical, especially in the pursuit of ideals.
Example: “Her plan to save the old theater from demolition was quixotic, yet noble.”
The word “quixotic” is actually an eponym, from the name of Don Quixote, the protagonist in Spanish author Miguel de Cervantes’ 17th-century novel of the same name. This adjective describes something that is “foolishly impractical, especially in the pursuit of ideals.” It can also carry an air of lofty or extravagant romantic ideas, marked by rash actions that are doomed to fail. Don Quixote was known for these very traits. “Quixotic” has been used in English since at least the 18th century.
It was the best of times, it was the worst of times
Meaning: The outcome is mixed.
Example: “Today was eventful. I lost my phone on the train, but then I got a promotion. It was the best of times, it was the worst of times.”
In the opening line of the 1859 novel A Tale of Two Cities, Charles Dickens’ clever phrasing captures contradictions in a way that has withstood the test of time. This saying is still used when both good and bad things are happening at the same time. The rest of the opening is often discarded, but in it, Dickens continues to make contrasting comparisons: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair …”
There’s method to the madness
Meaning: There are good reasons for one’s actions even though they may seem foolish or strange.
Example: “I keep a dozen tabs open on my computer, but trust me, there’s method in my madness.”
This Shakespearean saying is a clever way to say, “There’s a reason behind these actions.” It likely comes from a line in Act 2, Scene 2 of Hamlet: “Though this be madness, there is method in’t.” Polonius says this in reference to Hamlet’s strange behavior since his father’s death, and the “method” in the idiom refers to Hamlet’s plan to feign madness to gain revenge. Later, Oscar Wilde used the phrase “method in his madness” in reference to the protagonist in his 1890 novel The Picture of Dorian Gray, further popularizing the saying.
Sisyphean task
Meaning: A task requiring continual and often ineffective effort.
Example: “Trying to clear my inbox after the holidays is a Sisyphean task.”
A Sisyphean task is something that requires continual effort, though it is often unsuccessful. It comes from a story in Greek mythology, derived from the name of King Sisyphus of Corinth. When condemned to Hades, Sisyphus was given a grueling, eternal sentence: to roll a large boulder up a long, steep hill in the underworld. However, the boulder would roll back down every time, making the task endless and impossible. The adjective “Sisyphean” has been used metaphorically in English since the mid-17th century.
There are many ways to organize a list of terms. You could arrange them from shortest to longest, or vice versa. You could also list the terms alphabetically (the adjective for that is “abecedarian”), or you could arrange them in order of frequency. If the letters of the alphabet were arranged in order from most frequently used to least frequently used in the English language, it would look like this: ETAOINSRHDLCUMFPGWYBVKXJQZ
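Those orderings are easy to reproduce in a few lines of Python. This is just an illustrative sketch; the word list and sample sentence below are made up for the example, not drawn from any official frequency corpus:

from collections import Counter
import string

words = ["jack", "panic", "cursive", "zeta"]  # hypothetical example list

print(sorted(words, key=len))  # shortest to longest
print(sorted(words))           # alphabetical ("abecedarian") order

# Letter-frequency order for a sample text, most common letter first:
text = "the quick brown fox jumps over the lazy dog"
counts = Counter(ch for ch in text if ch in string.ascii_lowercase)
print("".join(letter for letter, _ in counts.most_common()))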
But that’s not why “Z” comes in last place in the alphabet. The real explanation is historical, based on the relative superfluousness of the letter.
“Z” originated as the Phoenician “zayin,” the seventh letter of that alphabet, pronounced like our “Z.” It was initially depicted as an arrow, then reduced to three lines, similar to our “Z.” It was a glyph (a symbolic depiction) for a weapon or for two armies confronting each other, represented by two parallel lines.
In the Greek alphabet, “zayin” became “zeta,” the sixth letter. When Latin borrowed “zeta” from Greek, it was listed in the alphabet in the same place as in Greek.
Then around 300 BCE, “zeta” was removed from the Latin alphabet under the Roman censor Appius Claudius Caecus. Through the linguistic process of rhotacism, the “Z” sound had morphed into an “R” sound, which was already represented by the letter “R,” rendering “zeta” superfluous.
But around 200 years later, “Z” was reintroduced to the Latin alphabet in loanwords from Greek. By then, though, the position of “Z” in the alphabet had been taken by “G,” and “Z” was tacked on at the end.
Even though “Z” was once deemed superfluous, it would be catastrophic if it disappeared from our alphabet today. You couldn’t apologize, criticize, fantasize, incentivize, optimize, organize, prioritize, sympathize, or theorize. Do you realize or even recognize the problem?
No zucchini, pizza, mozzarella, zest, zeal, zones, zippers, quizzes, sizzle, razzle-dazzle, or ZIP codes. No zero, which would create numerical havoc. Zowie!
Even some country names would disappear from the map: No Azerbaijan, Belize, Brazil, Czech Republic, Mozambique, New Zealand, Switzerland, Venezuela, or Zambia, not to mention Kazakhstan, Kyrgyzstan, and Uzbekistan. And what about Zanzibar, part of Tanzania?
As you see, we need “Z,” even though it was once evicted from the alphabet.
“Well, that takes the cake!” This statement, said with different intonations in two different contexts, can be interpreted as either high praise or derision. How can the exact same words convey such disparate meanings with only a shift in tone?
“It takes the cake” can mean something is ranked first — or something is foolish or annoying. Let’s take a look at how this idiom has been used over the decades.
The earliest recorded use is from 1839, when a Lexington, Mississippi, newspaper alluded to cakes being offered as prizes at a fair: “We have been shown some [cotton bolls] that we thought hard to beat, yet this takes the cakes.”
That usage seems to be literal, but less than a decade later, the phrase was being used metaphorically, still referencing a prize. In 1846, an account of a horse race reported, “The winning horse take [sic] the cakes.”
The wording “takes the cake” expanded in meaning over the next few decades to refer to skill, not just winning prizes. This usage is seen in an 1886 article in the Pall Mall Gazette, a London-based newspaper: “As a purveyor of light literature, Mr. Norris takes the cake.”
As early as 1900, however, “takes the cake” acquired negative connotations. Read these next examples with a derisive tone, as opposed to the complimentary examples above. In Sister Carrie, published in 1900, Theodore Dreiser wrote: “Pack up and pull out, eh? You take the cake.” And in her 1938 book A Blunt Instrument, British author Georgette Heyer wrote: “I’ve met some kill-joys in my time, but you fairly take the cake.” This shift evolved out of the positive prize-winning, skillful sense being used ironically in negative contexts.
As you see, “takes the cake” can refer either to something remarkably excellent, or to something outstandingly negative. Either way, it’s something extraordinary.
For 25 years, Judge Judy reigned as TV’s most famous courtroom reality star, presiding over small-claims cases with a hardball approach to her rulings. Like any good arbitrator, Judge Judy was never uninterested, but she did remain disinterested. The courtroom context highlights the differences between these seemingly similar words. “Uninterested” means “not interested” — something Judge Judy certainly was not. However, “disinterested” means “unbiased,” which is a key characteristic of her success. Although these two terms are often used interchangeably, they have distinct meanings and should be used appropriately.
While “uninterested” conveys the commonly used meaning of “not interested” or “not having the mind or feelings engaged,” “disinterested” is a bit more nuanced. It means “free from selfish motive or interest,” as in, “A disinterested third party must stand as a witness.” Here, the prefix “dis-” means “apart from” or “away from.” However, “dis-” can sometimes mean “the opposite of,” as in “dislike.” This alternate usage could be why “disinterested” is often misused to mean “not interested.”
These terms have been intertwined since they entered English in the 17th century. Back then, “disinterested” meant “not interested,” and “uninterested” meant “unbiased” — the reverse of their modern meanings. Why the switch? The French word désintéressé, meaning “impartial,” was first translated into English as “uninterested.” Shortly after, “disinterested” came into use with the meaning of “not concerned.” By the late 18th century, their meanings had swapped, as the prefix “un-” became a common way to express the opposite of something, and “disinterested” aligned more closely with the original French spelling and sense of neutrality.
Here’s a mnemonic to help you remember the difference: “Disinterested” adds an “i” in the prefix, like the “i” in “impartial,” so a disinterested person is impartial, while an uninterested person just doesn’t care.
The magical powers of the silver bullet are found in supernatural tales of werewolves, but the idiom extends to common usage as well. Where did this metaphor come from?
In werewolf myths, silver is one method to kill the powerful creatures, so a silver bullet is sometimes the hero’s weapon of choice. When “silver bullet” is used in everyday English, however, it is usually in the negative sense: “There is no silver bullet for …” You can complete that sentence with any seemingly intractable, complex problem.
How did a silver bullet earn a reputation as an all-powerful weapon? There’s a long tradition in folklore and literature.
The ancient Greeks believed that silver was a gift from the moon goddess, Selene, and that it had mystical powers. In Norse mythology, silver was believed to have protective properties, useful for warding off evil spirits. And in medieval European folklore, silver was imbued with magic used to repel werewolves and other supernatural creatures.
Silver’s reputation for mystical powers has endured through the centuries. In 1804, American poet Thomas Green Fessenden wrote about killing a witch: “how a man, one dismal night, / Shot her with a silver bullet.”
In the Grimm brothers’ 1812 fairy tale “The Two Brothers,” a huntsman shoots a witch with a lead bullet, but it has no effect. Then, the story continues, he “knew what to do, tore three silver buttons off his coat, and loaded his gun with them, for against them her arts were useless, and when he fired she fell down at once with a scream.”
A few decades later, in 1858, French writer Élie Berthet authored a novel based on the true story of the Beast of Gévaudan, a wolf that killed about 80 people in south-central France. In the real occurrence, a local hunter killed the beast using lead bullets, but in the novel, the facts were embellished by the use of silver bullets.
More recently, starting in 1949, the TV series The Lone Ranger featured a masked lawman who left a silver bullet for grateful law-abiding frontier folks before moving on to capture more desperadoes. Although the Lone Ranger used lead bullets to injure the bad guys, he named his horse “Silver.”
All these myths tell great tales, but they aren’t any more realistic than a metaphorical bullet solving a difficult problem. There is no such thing as a silver bullet — because of its low density and high melting point, silver is impractical for making bullets and far more expensive than lead.