
Should I Use ‘Everyday’ or ‘Every Day’?

Unravel the mystery of “everyday” versus “every day” and never second-guess your word choice again.

by Bennett Kleinman

Deciding whether to use “everyday” or “every day” can be as tricky as choosing where to order takeout from. While we may not be qualified to tell you what food to eat — though you can never go wrong with pizza — we can shed some light on the grammatical issue at hand. “Everyday” is an adjective that essentially means “ordinary” or “commonplace.” It’s used to describe things you’d typically encounter on a regular basis, without any sort of exact schedule. For instance, an outfit that you wear frequently could be described as “everyday clothing.”


“Every day,” by contrast, acts as a synonym for the word “daily” and is used to indicate events that happen each day. For example, “she rides the bus to work every day,” or “he orders the same sandwich every day for lunch.” Here’s a trick: If you can insert the word “single” between the words “every” and “day” and your message remains the same, then “every day” is more appropriate to use than “everyday.” “She rides the bus to work every single day” makes sense, while “she wears every single day clothing” does not. 

“Someday” vs. “some day” and “anytime” vs. “any time” operate similarly. Whenever one of these grammatical conundrums presents itself, keep the following in mind: Closed single words (“someday,” “anytime,” “everyday”) are modifiers, whereas open double words (“some day,” “any time,” “every day”) are noun phrases that are modified by the words that precede them.

As with everything else in the English language, there are exceptions to that rule. “Anywhere” and “everybody,” for instance, are almost always written as closed single words. But that’s a topic for a future edition.

Featured image credit: Chutima Chaochaiya/ Shutterstock

Is “GIF” Pronounced “Jiff” or “Giff”?

For nearly 40 years, the pronunciation of “GIF” has divided technophiles. According to major dictionaries, both sides of the debate say “GIF” correctly, but that isn’t the whole story.

by Rachel Gresh

Is it pronounced “giff” or “jiff”? According to Steve Wilhite, the creator of the GIF image file format, “choosy developers choose JIF.” If you recognize this tagline as a riff on a famous peanut butter brand slogan (“Choosy moms choose Jif”), you know the intended pronunciation is with a soft “G,” like in “giant” or “gym.”


Wilhite doubled down on this pronunciation at the 2013 Webby Awards, displaying a massive GIF while accepting his award. The short animation plainly stated, “It’s pronounced ‘JIF’ NOT ‘GIF.’” So, if the creator promotes this pronunciation, that should resolve the dispute, right? Not so fast. According to a Stack Overflow survey, 65% of respondents use a hard “G” (as in “gum”) — the “wrong” pronunciation.

A GIF, short for “graphics interchange format,” is a type of looping animation created by Wilhite in 1987. GIFs are widely used today in marketing, entertainment, and texting, but despite their ubiquity, there’s still much debate over the pronunciation of their name. Even with Wilhite’s proclamation, the question persists: Should it be pronounced with a hard “G” as in “gift” or a soft “G” as in “gem”? The short answer is that both are technically acceptable, according to Merriam-Webster and the Oxford English Dictionary. (Remember: While we look to dictionaries as arbiters of truth for language queries, they record how people use language — they don’t necessarily make up the rules.)

The argument for the hard “G” is fueled by what the “G” stands for: “graphics.” For some, this is a dead giveaway and the reason for the discourse. However, no rule states that an acronym must be pronounced the way its component words are. Take GEICO, or “Government Employees Insurance Company.” In this acronym, the “E” combines with the “I” to form an /aɪ/ sound, as in “height,” while in “employees,” the “E” creates an /ɛ/ sound, as in “edit.”

In 2020, in a humorous attempt to settle the GIF/JIF dispute, Jif partnered with GIPHY (an online GIF database) to release limited-edition jars of peanut butter with labels reading “Gif.” However, both pronunciations persist, leaving us to wonder what the next amusing chapter will be in this ongoing debate.

Featured image credit: Maxchered/ iStock

When Should You Say “May I?”

There is more than one way to ask permission. Understanding the significance of asking “May I?” vs. “Can I?” can enhance your language etiquette.

by Rachel Gresh

Do you remember playing the game “Mother, can I?” when you were a kid? Of course not, because it’s called “Mother, may I?” In this elementary game, the chosen “Mother” calls on each player, and they must ask permission to move a certain number of steps. The Mother may grant (or deny) permission, or give an alternative move to perform, but the key is in the asking: “Mother, may I?” If the player forgets to structure their request like this, they’ll never reach the Mother or the finish line. This playground game teaches children the rules of an essential question in the English language: “May I?”


The structure of “May I?” has been used to ask for permission since its Old English roots. This question is formal, pleasant, and shows respect — when in doubt, go with this option. That said, if you’ve ever been on the receiving end of a snarky “I don’t know. CAN you?” retort to your question of “Can I?”, you can take solace in knowing that this structure is perfectly acceptable. “Can I?” is informal, and it’s a good option for casual requests when “may” sounds too ceremonious. “Can” has multiple definitions — yes, it refers to having the ability to do something (that’s what’s implied by the snarky nonanswer), but in the 1800s, it gained the definition of “to have permission.”

While we’ve resolved the “May I?” vs. “Can I?” debate, there are a few more options for asking permission. To get technical, “can” and “may” are modal verbs, meaning they work as helper verbs to express a hypothetical situation (for example, “I can pick up the kids, but if you leave work early, could you?”). Other modal verbs include “might,” “should,” “will,” “must,” and “would.” Depending on the context, several other modal verbs work for asking permission: “Might I ask who is calling?” or “Could I use your phone?” In order of increasing formality, the permission-seeking modal verbs are “can,” “could,” “may,” and “might.” Try to gauge the tone of your conversation and pick your modal verb appropriately.

Featured image credit: Just dance/ Shutterstock

When Should You Use ‘Which’ Instead of ‘That’?

Elevate your writing with this guide to using often-confused words such as “which” and “that” correctly.

by Bennett Kleinman

“Which” and “that” are a lot like identical twins, in that they’re pretty easy to mix up at first glance. But just as identical twins have unique traits that set them apart, so do these similar yet distinct terms. One reason we mix up “which” and “that” so often is that the words were used interchangeably until the 1700s, and old habits die hard. But under today’s grammatical guidelines, there’s an appropriate context for “which,” and separate occasions for “that.”

Both “which” and “that” are relative pronouns, meaning they can refer to any related or previously mentioned nouns. But to understand how they differ, it’s useful to define the concepts of restrictive and nonrestrictive clauses. A restrictive clause adds essential information to a sentence: “The album that came out after her child was born changed her musical style.” In this example, the restrictive clause of “that came out after her child was born” is crucial to the meaning of this sentence. 

A nonrestrictive clause, however, works as a conversational aside, adding nonessential information: “The band’s first album, which was my favorite, had great backup singers.” It might be nice to know that you like an album, but the point of this sentence is the prowess of the backup singers, so the clause within the commas is considered nonrestrictive. As these examples show, it’s appropriate to use “that” in restrictive clauses, and “which” in nonrestrictive clauses.

Nonrestrictive clauses often appear at the end of sentences, not just in the middle, as in our example above. For example, “David Bowie’s album Young Americans had famous backup singers, which included Luther Vandross.” Here’s our memory tip: If you need a comma, you’re probably dealing with a nonrestrictive clause, meaning “which” is almost always the correct choice. Commas set off info that, when removed, doesn’t impact the sentence’s clarity or meaning. If you don’t need a comma, use “that.”

“That” has a variety of usages that “which” cannot serve. It can act as a demonstrative pronoun referring to a specific noun (“That is my favorite album”), as a conjunction to connect two clauses (“I didn’t know that it was their first time performing together”), or as an adverb to add context before an adjective or verb (“I don’t want to spend that much money on concert tickets”). This makes “that” more versatile than “which.”

Featured image credit: Krakenimages/ Shutterstock

What Is a Double Negative?

Learn why two negatives don’t always make a positive in the world of grammar.

by Bennett Kleinman

In the real world, purging negativity is an important skill for a happy life. In the grammar world, purging double negativity is crucial for clear and concise writing. Double negatives are redundant constructions made of multiple negative words, and they result in complicated and confusing sentences. However, they aren’t not useful. Every once in a while, an appropriately used double negative can improve your writing, but those occasions are rare.


A double negative is any statement with two negative words. A person might say, for example, “I didn’t see nobody.” The two negatives are “did not” and “nobody.” The problem is that double negatives muddle the speaker’s intent, technically producing the opposite meaning. “I didn’t see anybody” would be clearer. Think back to math class — multiplying two negative numbers cancels out the negatives and produces a positive. It’s the same with words. Two negatives cancel each other out and turn the statement into a positive. Combining “didn’t” and “nobody” flips the meaning to imply the speaker did see somebody, which wasn’t the goal of the statement.

Common words in double negatives include negative determiners (“no” and “none of”), negative pronouns (“neither,” “no one”), negative adverbs (“not,” “never”), and negative verbs, which are created by adding “not,” often as a contraction (“wouldn’t,” “don’t”). The good news is, a double negative is usually easy to fix by removing one negative word. For example, “I cannot go nowhere tonight” can be fixed by removing “nowhere” to get “I cannot go tonight.”

There are rare instances where double negatives can add flair to your writing, however. If you’re hoping to emphasize a point, you might say, “I can’t not go to this party” for added oomph and drama. Or just ask the Rolling Stones, who famously sang, “(I Can’t Get No) Satisfaction.” In these cases, the double negatives are used for rhetorical effect. But otherwise, they should be avoided.

Featured image credit: Thx4Stock/ iStock

What Does ‘Playing Devil’s Advocate’ Really Mean?

From the halls of the Vatican to modern-day debates, discover the true meaning of “playing devil’s advocate.”

by Bennett Kleinman

When you hear “playing devil’s advocate,” your mind might first go to Keanu Reeves’ role in the 1997 thriller The Devil’s Advocate. And while that’s a pretty solid film, today’s edition is about something different: a figure of speech. Let’s examine the idiom’s origins, which date back to the 16th-century Roman Catholic Church.


The term “devil’s advocate” stems from a position in the Catholic Church known as the Promoter of the Faith (promotor fidei). This role emerged in the early 16th century during Pope Leo X’s reign and was formalized in 1587 by Pope Sixtus V. Whenever an important individual was nominated for beatification or canonization (parts of the process for granting sainthood), the promotor fidei was responsible for bringing to light any past wrongdoings or sins. This would fuel a critical debate to examine whether the candidate’s positives outweighed the negatives. Given the promotor fidei’s focus on past wrongs, the role came to be called advocatus diaboli, which translates to “devil’s advocate.”

In modern parlance, the idiom “playing devil’s advocate” is applied more broadly to debates on any topic, not just canonization. “Devil’s advocate” is defined as “a person who champions the less accepted cause for the sake of argument.” The person playing the role of devil’s advocate doesn’t need to believe the case they’re arguing; they may just be bringing up those points to make the argument more interesting or promote a more critical lens. Worst case, the devil’s advocate is just being annoying for the sake of argument, but let’s hope your friends aren’t that cruel.

Featured image credit: ardasavasciogullari/ iStock

Why Do We Start Letters With ‘Dear’?

Learn the history of this centuries-old greeting and discover how it’s evolved in the age of email.

by Samantha Abernethy

In an early job, I got feedback that my email correspondence needed to be more sophisticated. I was firing off, “Hey, do you have that expense report?” while my boss preferred the formalities of a traditional letter, including a salutation (a “Dear” and title/name combo) and a complimentary closing (such as “Best regards” or “Sincerely”). That level of formality has dropped out of all but the most professional email communications, but the etiquette persists for handwritten letters. Where did “Dear” come from, what does it mean, and what other options are there for conscientious email writers?


The word “dear,” from the eighth-century Old English “deore,” originally meant something was precious or costly, but it evolved into calling out something or someone as special. (That first usage still exists, but it’s less common.) Starting around the 14th century, “dear” was used as a salutation for only the most intimate letters: “Dearest sister,” “To my dear friend,” etc. The phrase “dearly beloved” was popularized by the 1662 edition of the Book of Common Prayer, the Church of England’s liturgical text, and it became a traditional component of wedding ceremonies (and the classic opener of a Prince tune), furthering the association of “dear” with loved ones.

Around the 17th century, the term became the standard opening for most polite communication and a way to start any letter as a show of respect: “Dear Mr. Smith,” “Dear Sir or Madam,” etc. Eventually we moved on from regular letter writing as the primary mode of communication between family and friends, and in the 20th century, people wrote more memorandums than love letters. Once email became standard, some people retained the formality of the written structure, and others took the opportunity to let the “To” and “From” fields do the work for them — no salutations or closings needed. 

But many people — including the Washington Post’s Miss Manners — aren’t ready to let go of “dear,” no matter the format. Career experts recommend using “dear” as a salutation in formal email correspondence, such as cover letters, but only when you know the recipient’s name; the impersonal “Dear Sir or Madam” is definitely extinct. (A Google or LinkedIn search can help you out with names.) For standard missives between colleagues, consider starting the first message with “Hi [name]” and closing with your name as well. Future replies in the chain don’t need a salutation. 

Featured image credit: CurrywurstmitPommes/ Shutterstock

Is It OK To Start a Sentence With a Conjunction?

This grammar myth-busting article might just change your mind about kicking off sentences with conjunctions.

by Bennett Kleinman

Some bits of advice are instilled in us from a very young age: Eat your vegetables, look both ways before crossing the street, and, of course, never begin a sentence with a conjunction. That last one comes to us directly from grammar class, but is it really a rule? Nope. In fact, it’s merely a suggestion. Starting sentences with a conjunction is perfectly OK in a grammatical sense, and it may even improve your writing.


But let’s go back to basics first. A conjunction is a word that connects words, phrases, or clauses within a sentence. Subordinating conjunctions (“because,” “since,” “after,” etc.) link independent and dependent clauses. Correlative conjunctions (“either/or,” “neither/nor,” “not only/but also,” etc.) join together two words or phrases of equal importance. For example: “Either I’m going to eat this sandwich, or I’m going to eat at home.” People use both types of conjunctions to start sentences, and nobody bats an eye. The controversy usually arises with a third type: coordinating conjunctions.

Coordinating conjunctions link together two independent clauses, and can best be remembered with the acronym FANBOYS: For, And, Nor, But, Or, Yet, and So. Reputable grammar guides, including the Chicago Manual of Style and the Merriam-Webster Dictionary of English Usage, say it’s acceptable to begin sentences with coordinating conjunctions. The main reason so many people are opposed to this idea is that using these conjunctions as an initial word too often can lead to bad writing habits. For instance, stringing together multiple sentences that start with a conjunction can sound like you’re talking like a 7-year-old: Today, I went to the park. And then I ate lunch. And I saw a dog. But the dog ran away. And then he ran back again. So I smiled. And my lunch fell on the ground.

As a general rule of thumb, it’s best to avoid beginning your sentences with conjunctions as you’re developing your writing skills. But as you’re honing your voice and writing style, it’s fine to experiment a bit, as evidenced by some of history’s most iconic prose. Consider Lord of the Rings author J.R.R. Tolkien, who once wrote, “Yet the Lord of Gondor is not to be made the tool of other men’s purposes, however worthy.” Or read S.E. Hinton’s The Outsiders, which contains the line, “I lie to myself all the time. But I never believe me.”

Featured image credit: Shutterstock

Why Can’t Something Be ‘Very Unique’?

Discover why grammar sticklers cringe at “very unique” and when bending the rules might actually enhance your writing style.

by Bennett Kleinman

There are times when you might want to add a bit of extra oomph to your words to get your point across. Consider the opening line of the U.S. Constitution: “We the People of the United States, in Order to form a more perfect Union…” Grammatically, describing something as “more perfect” is impossible. “Perfect” is, by definition, perfect. 


Absolute adjectives are words such as “unique,” “perfect,” or “impossible” — these terms are unequivocal on their own, shouldn’t be compared or intensified, and don’t deal in varying degrees. They differ from comparative adjectives (“smarter,” “faster,” “lesser,” etc.), which compare things by degree, and superlative adjectives (“smartest,” “fastest,” “least,” etc.), which express the highest or lowest degree. But despite the fact that absolute adjectives technically should stand on their own, without modification, many writers opt to modify them as a point of style.

Let’s examine this passage from Kenneth Grahame’s The Wind in the Willows: “‘Toad Hall,’ said the Toad proudly, ‘is an eligible self-contained gentleman’s residence very unique; dating in part from the fourteenth century, but replete with every modern convenience.’”

Should we presume to correct Grahame’s classic tale? The word “very” in the selection above is unnecessary from a strictly grammatical perspective, but it still plays an important role. It adds to Mr. Toad’s quirky tone and gives the reader a feel for the character and his home. In this case, his house is not only one-of-a-kind, but also so interesting that it’s worth noting. Another instance where a writer might find it useful to modify an absolute adjective is with an adverb that implies completeness. For instance, you may call something “completely final” in an effort to add emphasis and hammer home your point — as this topic is totally complete. 

Featured image credit: Lee Yiu Tung/ Shutterstock

Why Is “Could Of” Wrong?

Discover why this common phrase is a grammatical misstep, and learn how to avoid this sneaky error that even native English speakers make.

by Bennett Kleinman

English is full of sentences that sound awkward but are grammatically correct. Consider trying to untangle the mind-bending “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.” (Hint: “Buffalo” has three distinct meanings and functions as a noun, proper noun, and verb in this riddle.) Then there are phrases that sound OK, but are actually grammatical errors. These mistakes are easier to catch when writing, and more difficult to recognize when someone is speaking (for example, “intensive purposes” vs. “intents and purposes”). One of the biggest culprits of the “easy to mishear” swap comes with contractions and prepositions; we’re saying one thing, but people are hearing another. 


Consider the contraction “could’ve.” “Could” implies both possibility and willingness. It often acts as a helper verb, meaning it’s paired with a main verb that carries the action. For example, “I could go to the dance” implies a chance of attending the dance. But let’s imagine you missed that opportunity and you’re telling a friend about it. You might say, “I could of gone to the dance.” But wait — that’s only what it sounds like. The proper construction of the sentence is “I could’ve gone to the dance,” where “could’ve” is a contraction of “could” and “have.”

As already noted, “could” requires a second verb to make grammatical sense. This is why we need “have” instead of “of” — the latter may sound similar, but it’s an incorrectly used preposition. The contraction “could’ve” sounds nearly identical to “could of,” and while it may be almost impossible to tell the two apart in speech, swapping them makes a big difference in writing.

Featured image credit: Tim Samuel/ Pexels