On December 10th, 2017 we learned about

The invention and improvements that led to the modern roll of toilet paper

If you’re old enough to read these words, you’re probably at a stage in your life where you can take things like toilet paper for granted. Using your quota of 50 pounds of toilet paper per year may feel easy, but like any tool, it’s something you had to be taught to use and understand (as my four-year-old is now acutely aware). Beyond our acclimation to wiping ourselves with paper products, there’s been technological innovation in toilet paper as well, starting with the invention of paper itself.

Early years of paper hygiene products

Just a few hundred years after paper was invented in China, that revolutionary writing material found its way into someone’s toilet. In 589 AD, the first account of using paper for personal hygiene was documented in China. By 1391, paper was being produced there for the express purpose of wiping one’s rear. That paper came in awkwardly large sheets, around two by three feet overall, but at least it was perfumed to make the experience more pleasant. It wasn’t an immediate worldwide hit though, partially because these tissues were intended for the emperor’s family only. Paper was still too precious for most people to dispose of after a single use.

This early start certainly didn’t put toilet paper, scented or unscented, into everyone’s bathroom. In many parts of the world, paper was scarce enough that it wasn’t even being used in books, much less in toilets. Instead, many folks made (or continue to make) do with a variety of options that many of us wouldn’t really associate with wiping. Throughout history, the list of bathroom tissue alternatives has included stones, sponges, clay, moss, shells, sticks, hands and corncobs.

Building a better toilet tissue

By the 17th century, paper products started making their way into the bathroom in the western world, but only after they arrived in the mailbox. Newspapers and magazines were repurposed as toilet paper in the American colonies, since paper was finally cheap enough to be disposable. Dedicated toilet paper was made available in 1857 by one Joseph Gayetty, but it faced stiff competition in the form of the Sears Roebuck catalog. The latter was mailed out for free, and came with a hole punched in the corner, making it convenient to hang in one’s outhouse. This interest in convenience may have informed the next big innovation in toilet paper technology, as in 1871 Seth Wheeler started selling perforated sheets in a roll rather than a tissue-style box.

That doesn’t mean the story of toilet paper was settled in 1871 though. It wasn’t until 1935 that Northern Tissue offered “splinter free” paper with a process called linenizing, reminding us of how much bravery a trip to the lavatory once required. Two-ply tissue arrived in 1942, and colored paper was available in 1954. As important as all these improvements were, toilet paper’s place in public awareness was also being updated in this time period, since at one point the whole concept of wiping oneself was deemed too inappropriate to even bring up, much less shop for in a public setting.

Selling toilet paper to an uncomfortable public

While Joseph Gayetty was proud enough of his medicated tissues to put his name on every one, other manufacturers were a bit more hesitant to brag about their products. Thomas Seymour, Edward Irvin and Clarence Wood Scott started producing rolled toilet paper, but sold it directly to hotels and drugstores, putting their clients’ names on them instead of their own. The Scott Paper Company didn’t really acknowledge their toilet paper production until 1896, over a decade after they started selling it. At the end of the 19th century, homes started being built with indoor plumbing, meaning older methods for hygiene, like corncobs, weren’t acceptable anymore. This gave toilet paper an opening in public discourse, since the product could be advertised for how well it broke apart in plumbing, avoiding too much detail about what it did directly for consumers themselves.

The final innovation on this front came from the Hoberg Paper Company in 1928. The company started selling their toilet tissues in “ladylike” packaging, since bragging about softness and feminine qualities would be easier than getting into the specifics of cleaning one’s nether regions. When coupled with paper sold in four-packs, the branding was enormously successful, helping keep Charmin afloat through the Great Depression and beyond. They’ve had a number of major advertising campaigns since, but they’ve all been based around notions of soft, tactile enjoyment without getting too specific about where you’re supposed to actually feel that softness. Even though toilet paper use is growing worldwide, it’s still not something most of us (over age four) want to discuss in great detail.

Source: Who Invented Toilet Paper?, Toilet Paper History

On December 4th, 2017 we learned about

From Optimi to As, a short look at educational grading systems in the United States

My third grader brought her report card home today, and I’m happy to say she’s a straight Optimi student. Well, mostly: she did have a Class II, but thankfully no As (meaning absences, naturally). Now, these grades may not seem terribly informative, but grades are still a work in progress. The American educational system has been experimenting with them since the 1780s, and with the number of iterations we’ve gone through, progress hasn’t always been clear. Many of us are currently familiar with As, Bs, and Cs, but they certainly aren’t the last word on how to rate student performance. While there wasn’t actually any Latin on my child’s report card today, those famous letter grades weren’t there either, instead being replaced by a numeral system reminiscent of even earlier grading systems.

Experiments in higher education

The aforementioned Optimi grade comes from the first written account of assigning a grade to students’ test scores, from Yale University in 1785. Prior to these one-word descriptions ranging from “best” to “worst,” students were generally evaluated via personal feedback from their instructors. Students’ rankings in a class were more often based on their family’s status than their academic performance, which was made all the easier by the exclusive nature of higher education. Still, Yale launched a number of grading experiments, particularly when it came to exams. One that will feel more familiar today dates to the early 19th century, when exams were rated along a simple four-point scale that set the basis for our current grade-point averages.

Other colleges tried their hands at designing grading systems as well. Four-point scales were changed to nine-point scales. The University of Michigan tried a simple P for “passing,” C for “conditioned” and A for “absent” in the 1860s. Other schools tried a series of Divisions, Classes and other organizational concepts that usually revolved around a 100-point scale in one way or another. In the case of Harvard, the school tried nearly all of the above, even going back to a basic “Passed with Distinction,” “Passed” or “Failed” system in 1895.

Harvard also had the first letter-based grade, according to a reference in 1883. A student earned a B grade, although that grading system didn’t really take hold until Mount Holyoke adopted it in 1897. As with today’s grades, the letters were basically shorthand for positions on a 100-point scale, although each letter was more narrowly defined than most schools use today, being limited to a five-point range. Another difference was that this scale went from A to E, with the latter signifying a failing grade for any score lower than 75 percent. Mount Holyoke adjusted their scale the following year, promoting E to passing and adding the now infamous F to signify failure. Obviously, E didn’t last as long as the other letters, possibly for the slightly silly reason that it could be read as “excellent” right next to F’s “failure,” although why that bit of branding needed addressing while A through D had no real meaning isn’t clear. Still, a scale that can’t even follow proper alphabetization seems somehow appropriate, highlighting how arbitrary these grades can be.
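To make the arithmetic behind those letters concrete, here is a minimal sketch of a five-points-per-letter scale like the one described above, after the adjustment that made E passing and F the mark for anything under 75 percent. The exact cutoffs and the function itself are illustrative assumptions for this example, not Mount Holyoke’s documented values.

```python
# Sketch of a letter scale where each passing letter covers a five-point
# band of a 100-point scale; the cutoffs here are illustrative assumptions.

def letter_grade(score):
    """Map a 100-point score to a letter on an A-to-E scale, with F below 75."""
    if score < 75:
        return "F"  # the failing grade added after E was promoted to passing
    for letter, floor in (("A", 95), ("B", 90), ("C", 85), ("D", 80), ("E", 75)):
        if score >= floor:
            return letter

print(letter_grade(97), letter_grade(76), letter_grade(74))
```

Squeezing each letter into a five-point band like this shows how little room the original scale left between “excellent” and outright failure.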

Transferable grades that take less time

So if grades are abstract shortcuts without real meaning, how did they catch on? While early students of Harvard could expect fairly exclusive, intimate academic communities, changes in primary education in the 20th century drastically altered how much time a teacher could spend on each student’s work. For the first time, laws required children to attend school, which, when coupled with waves of immigration, meant that the number of students needing evaluation skyrocketed. Starting in 1870, the number of public high schools alone grew by 2,000 percent, reaching 10,000 schools in 1910. Teachers needed a consistent way to rate students’ performance, both for efficiency and to make grades a useful metric between schools. Grades haven’t always lived up to those promises, and many teachers today would like to be able to provide more personal feedback to their students. In the meantime, I’m morbidly hoping someone will start using loading bars, experience points and maybe unlockable achievements from video game interfaces, just to build a sense of progression in students’ work. At the very least, they seem as helpful and motivating as being called “worst” in Latin.

Source: An A Is Not An A: A History Of Grading by Mark W. Durm, The Educational Forum, vol. 57, Spring 1993

On November 29th, 2017 we learned about

Urban traffic has been terrible for at least two thousand years

Even though they’re too young to drive, my kids have already learned to dread traffic. They know how car-clogged roads make us tense and late, and are generally unpleasant, even before you start worrying about noise, air pollution and increased risks for accidents. As much as this may seem like an unavoidable part of driving a car, humans have been struggling with congested roads for thousands of years, going back at least as far as ancient Rome. In some respects, citizens of ancient cities like Pompeii may have had it worse than we do, starting with the fact that their roads had to double as sewage lines.

There were a lot of complications in getting your ox cart down the road in a city like Pompeii, but one of the most ubiquitous issues would have been the water, sewage and other garbage running down the stone streets. While Romans famously designed aqueducts, public toilets and more, not every form of waste management seems so innovative from a modern perspective. There was management involved though, as streets had intentionally deep curbs to keep water off the sidewalks. They also had raised stepping stones across streets, acting as a simple crosswalk to let pedestrians cross the road without stepping in the muck. Wagon wheels were expected to thread their way between the stones, but that’s probably ok since it would have been hard to move quickly in the first place.

No easy navigation

Moving carts and wagons through these sullied streets was a slow process, partially thanks to the tight fit these vehicles dealt with to get through town. Most streets weren’t more than three feet across, leaving just enough room for a single cart at a time. Since advanced maneuvers like turning were out of the question, as many as 77 percent of the streets were essentially one-way. This likely created bottlenecks when a cart or wagon needed to park, or maybe try to get around a corner with very little room to work with.

Beyond the mechanical issues that made moving through Pompeii difficult, the city layout was a mixed bag. Some streets were winding and less predictable, having been built according to old paths or trails. Much of the city was planned though, with most streets being laid out along a grid that may have been hard to squeeze through, but was at least comprehensible. To balance that out, only a portion of the streets had clearly marked names, ensuring that people new to the city had more reason to slow down traffic while figuring out where they were.

Understandably, all of the above could come together for a difficult, frustrating trip through town. Like today, there were complaints about noise and safety, with Roman writers making a point to comment on the clatter of wagons and swearing from their drivers. If the familiarity of these traffic woes feels somewhat disheartening, there are some silver linings to take solace in. After all, it’s much safer for a distracted pedestrian to stare at their phone now that they’re less likely to fall into the sewage covering the street. If they do manage to pull that off, at least we know there will still be plenty of other people around to swear at them.

Source: Pompeii Had Some Intense Rush Hour Traffic Too by Sarah Bond, Forbes

On November 21st, 2017 we learned about

Ancient Romans sought out citrus fruit as status symbols more than food

How much would you pay for a fruit that’s mostly rind, doesn’t taste good, and can make you vomit if you eat too much of it? If you were a citizen of ancient Rome, probably a lot. Citrus fruit, originating in east Asia, was hard to come by in the Mediterranean two thousand years ago, and so even something as difficult to enjoy as citron (Citrus medica) made a big impression on people. While most of us would pass over less appealing options like citron in favor of a lime or mandarin orange, Roman elites worked hard to appreciate them, largely thanks to citrus’ scarcity in that part of the world.

If you’ve never eaten a citron yourself, don’t worry too much. Compared to other citrus fruits, citrons are almost entirely rind, leaving little flesh to eat fresh. Some recipes capitalize on this by cooking with the rind itself, such as pickling it in brine before coating it in sugar to make a candy. Thanks to citrus’ hallmark tricarboxylic acid, aka citric acid, the Romans enjoyed citron’s striking scent, both as a breath freshener and as a moth repellent in clothing. Some citron was certainly eaten, although consuming the fruit was also tied to the ‘medicinal’ purpose of helping someone vomit up toxins if deemed necessary.

Fruit worth flaunting

There was one more good reason to avoid eating one’s citron fruit, which was the need to display it. Citrons, followed later by lemons, were most prized simply because they were hard to come by. They were exotic goods imported through Persia, and owning them was a mark of a family’s wealth. They have been depicted in mosaics and even on coins, and seeds have been discovered in the ruins of wealthier villas around the Mediterranean. Citrus was a great food to have in your home, at least until everyone could have it.

As technology and trade routes modernized, Muslim traders made more citrus fruit available to the western world. However, sour oranges, limes and pomelos just didn’t have the same cachet, possibly thanks to this increased availability. Food that was accessible to more people wasn’t worth putting on coins anymore, even if it was probably more worthy of a spot on your plate. As petty as that sounds, it wouldn’t be the last time wealthier people abandoned tasty food over its social status. By the time sweet oranges arrived in Europe in the 15th century, citrus fruits weren’t turning many heads anymore. To fully complete the cycle, when thin-skinned mandarins arrived in Europe in the 19th century, the clout once commanded by ancient citrons was long gone.

Source: In Ancient Rome, Citrus Fruits Were Status Symbols by Natasha Frost, Atlas Obscura

On November 14th, 2017 we learned about

The evolution of empty space in how we read and write

It’s a safe bet that you’re reading this text silently, because that’s how text works, right? You read the words presented, process their meaning in your mind, and then maybe relate them to others verbally or by sharing the text directly (hint hint). At this point, I’m going to guess that you’re wondering why this is even being spelled out to you: this is how reading works, right? As it turns out, this concept of reading is relatively new. For the majority of human history, people had a very different relationship with words on a page, starting with all the parts that they didn’t even bother to transcribe in the first place.

Skipping vowels or spaces

While alphabets have existed for nearly 4,000 years, not all languages used them the way a modern English speaker is used to. Languages related to Aramaic, like Hebrew and Phoenician, had vowels but didn’t write them down. Instead, consonants were strung together, and readers just had to use context to piece together what word the author intended. As difficult as that may sound, some modern languages, including Arabic, still use a writing system that skips writing most vowels. European languages obviously did start incorporating vowels thanks to the ancient Greeks, although when they started writing them, they dropped some critical punctuation as a trade-off.

By the time The Iliad was written, Greeks had more or less given up on spaces between words, adopting a system now known as scriptio continua. Each word was spelled out in full, but entiresentencesweremashedtogether. It seems like this would just convince all the Greeks to declare “too long, didn’t read,” and to an extent they did. This is because the purpose of writing at that time was primarily to record someone’s oral statements so that they could be restated for a new audience. Mashing words together without spaces made reading a bit more cumbersome, but that was fine when the point of a written scroll was to read its words aloud to other people. The reader could see what sounds to pronounce, and so plowing through an endless line of phonemes worked fine.

Adding spaces for accessibility

As human societies came into more and more contact with each other, scriptio continua started causing problems. For hundreds of years, the few people who could read throughout Europe were satisfied with this kind of writing, since most listeners were never going to see the text itself to even worry about it. However, in the ninth century, Irish scribes hit a snag, as the manuscripts they were copying were proving to be extremely difficult to parse. As native Celtic speakers, the scribes couldn’t simply “hear” the gaps in the words that speakers of Romance languages like Italian or French could, and so they started incorporating other cues in the writing to help make sense of things. They started with line breaks, giving each sentence its own line on a page. That was followed by spaces between words, with the Book of Mulling being the first volume to be transcribed in a way that wouldn’t immediately scare off a modern reader.

Adding spaces to a page did more than make sentences more intelligible. As the practice spread across Europe, it started to influence people’s relationship with writing. This new writing no longer put the emphasis on sounds and speech, allowing for the reader to become the primary audience instead of a performer. With that change in focus, making a page pleasing to the reader sparked new ideas in graphic design, since those efforts would help readers and writers alike. The printing press and subsequent accessibility of reading materials obviously cemented the idea of reading as something people could do by themselves, but the simple act of putting spaces between words was a key step towards making reading a goal unto itself, although it’s certainly not time to think that our relationship with the written word has been settled.

Trading space for more speed

The amount of writing in the world today is unmatched in history. More people are expected to read more than ever before, and a huge amount of that text is found on electronic devices that are starting to sever our relationship with the media it appears on. Modern graphic design loves white space more than ever, and many people would advise that the most accessible writing is lists of short phrases as opposed to “walls of text” that will scare readers away. However, as more writing appears on screens, there’s a chance that we could give up spacing in an entirely new way, displaying only one word at a time on your screen.

Instead of scanning your eyes across a sentence, text can be animated, with each word changing in the center of your screen so your eyes can focus on one physical location. It’s a significantly faster way to read, as moving your focus across a page or screen slows you down just a bit. As we consume more writing on a daily basis, reading at 600 to 1,000 words per minute may start to sound pretty attractive, as much as that would baffle the ancient Greeks.
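The pacing behind this word-at-a-time display is simple enough to sketch in a few lines. This is a minimal illustration assuming a fixed reading rate; the function name and interface are invented for the example, not taken from any real speed-reading tool.

```python
# Minimal sketch of word-at-a-time pacing at a fixed reading rate;
# the function and its interface are invented for this illustration.

def word_schedule(text, wpm=600):
    """Pair each word with how long it should stay on screen, in seconds."""
    delay = 60.0 / wpm  # at 600 wpm, each word gets a tenth of a second
    return [(word, delay) for word in text.split()]

schedule = word_schedule("Reading one word at a time can be surprisingly fast")
total_seconds = sum(delay for _, delay in schedule)
```

At 600 words per minute, each word holds the screen for a tenth of a second, so that ten-word sentence flashes by in about one second.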

Source: No Word Unspoken by Daniel Zalewski, Lingua Franca

On November 1st, 2017 we learned about

The origins of the artificial flavors that make your antibiotics taste like bubblegum

In 1972, Beecham Laboratories made a significant advancement in medicine that has improved the lives of millions of children around the world. They launched Amoxil, an antibiotic related to penicillin that today is often sold under its generic name, amoxicillin. The exact bacteria-busting power of Amoxil was surely great, but the real innovation may have been the sweet, vaguely-bubblegum flavor that was mixed in with the otherwise bitter liquid. By adding what tasted like a spoonful of sugar to every dose, getting kids to follow through on their course of antibiotics became easier than ever.

Battling bitterness with sweet, syrupy relief

While the weirdly chalky flavoring of amoxicillin may stand out in many people’s experience as the best part of a childhood ear infection, it certainly was not the first time flavor was added to medicine. In the Middle Ages, when illness was believed to be an imbalance of the “four humors,” treatments were based on flavor. So if you felt sad or sour, the recommendation would be that you should avoid acidic foods and eat something sweet instead. (“I feel sad!” shouts the eight-year-old.) While sugar has been linked to pain relief, a lot of these treatments were probably only effective as placebos, and even then it seems hard to feel optimistic over the prospect of being prescribed something to restore your phlegm.

Herbal remedies offered more efficacy, but like penicillin, often tasted very bitter. This bitterness was seen as a sign of the herbs’ potency, but it didn’t make them any more attractive to patients. Rather than have people munch plants directly, herbs were dissolved in alcohol and mixed with sugary syrups. It was nothing as exciting as the wild cherry or banana flavors you might find today, but the sweetness made medications much more palatable.

A fruit salad’s worth of synthetic flavors

By the 1800s, chemists were starting to isolate specific compounds that matched the flavors and smells of various foods. These compounds were usually esters that turned up in various contexts, like the cherry flavors that were found in byproducts of everything from alcohol distillation to coal processing. Methyl anthranilate, which in Germany was associated with orange blossoms, reminded people of Concord grapes (Vitis labrusca) in the United States. It’s now the basis for all the grape flavored candy and cough syrups on the market, even if it doesn’t taste like the Vitis vinifera grapes we snack on or make into wine. Banana flavors have a similar disconnect these days, as the isoamyl acetate that’s added to food and medicine comes from the Gros Michel banana, a cultivar that hasn’t been available since the 1950s. The flavor was included in candies in the United States before the average consumer could buy any bananas, but now it clashes with our expectations of Cavendish banana flavor.

All these flavors promised tastier candies and medicines, which became a serious problem by the 1960s. So-called “candy aspirin” was apparently sweet enough to be compared to SweeTarts, a fact that was not overlooked by kids who started seeking treatment for every ailment they could think of. Many children were so drawn in by this flavor that they started eating aspirin like literal candy, increasing aspirin poisonings by 500 percent nationwide. In response, the flavor was toned down and child safety caps were introduced to keep smaller hands away from medications.

Flavored medicines beyond bubblegum

Anyone with a young child knows that these flavored medications haven’t vanished entirely though. Dimetapp is still flavored like Concord grapes, and amoxicillin is still the strawberry-banana-cherry-cinnamon mash-up that we recognize as bubble gum. Chemistry hasn’t stopped innovating though, opening up a larger array of options than ever before, including mango, watermelon, and chocolate. If that weren’t enough, your dog can get its medicine flavored as beef, tuna, chicken pot pie, bacon, salmon, and yes, bubble gum. No reason for dogs to miss out on one of medicine’s greatest achievements, right?

Source: A Search for the Flavor of a Beloved Childhood Medicine by Julie Beck, The Atlantic

On October 30th, 2017 we learned about

The unfounded promise of mummy wheat: revitalizing life from long-dead seeds

People usually prefer fresh produce, but Victorian era Europeans once spent enormous resources on getting seeds and plants that were at least 3,000 years old. The seeds didn’t taste better, nor did they offer any proven nutritional advantages over modern plants. The draw was instead some vaguely insinuated magic that the seeds and plants might possess as a result of having supposedly been entombed alongside mummies. As ancient Egypt filled Europeans’ imagination with thoughts of pharaohs, mummies and magic, any object associated with a tomb was irresistible to collectors. Owning a piece of ancient Egypt, even if it was just a handful of seeds from so-called “mummy wheat,” became a coveted goal that then attracted charlatans and frustrated botanists for years.

Despite the name, mummy wheat was not directly part of the mummification process for dead bodies, not that that would have dented some collectors’ enthusiasm. It was simply seed stored in sealed vases and pots with the intention that the deceased could use it in the next life. The belief was that the afterlife was basically a more idealized version of one’s actual life, but only if you brought your stuff with you. So alongside jewelry, furniture and even mummified pets, Egyptians planned for their theoretical future by including practical items like beer, seeds and cakes. They probably didn’t plan for how all this would be interpreted by foreigners, or how it might help drive the local economy 3,000 years later.

Desires for prosperity and profit

Unlike other treasures found in ancient tombs, mummy wheat grabbed people’s attention with its mystery and sense of possibility. The seeds themselves were fine, but collectors wanted the seeds to grow and sprout into something bigger. The plants they hoped to grow had great cultural significance, as signs of long dormant power and vitality springing back to life. The wheat was associated with biblical references to Pharaoh’s seven-eared wheat in the book of Genesis. If that weren’t enough, some people also suggested that by sitting in a tomb for thousands of years, the seeds could now yield supernaturally-large harvests. Mummy wheat had something for everyone, which is part of why so many people were selling it.

People who wanted their own magic beans, er, seeds, could find sellers near and far. In Egypt, locals responded to tourists’ demand by dumping seeds into a ceramic jar, then sealing the lid to give it the appearance of antiquity. Since these seeds were basically fresh, getting a plant to grow wouldn’t be a problem, making for a satisfied customer. If people did get hold of truly ancient seeds, there was probably some pressure to get a plant out of them, giving these wealthy collectors’ gardeners reason to quietly substitute new seeds for the old. For folks who couldn’t make it to Egypt themselves, scam artists were known to even cross the pond, selling Canadians mummy peas for the equivalent of $285 (US) today.

Testing if seeds are dormant, or just dead

Even before this botanical craze started emptying people’s pockets, botanists had suspicions about these ancient seeds. Every controlled study of the seeds failed to grow a plant, and while botanists were convinced they were infertile, the public was slow to listen. Anecdotal success stories, followed by fraudulent seed sales, kept the idea of mummy wheat alive in many people’s minds. The discussion was further distorted by racism, with debunkers often blaming ‘Arab frauds’ for selling fake seeds while ignoring the role of European scams and sales.

Eventually, as the general fascination with ancient Egypt declined, associations with the seeds shifted from dormant vitality to a symbol of foolishness and gullibility. While some foods entombed with ancient mummies have been found to be safe to eat, there’s no sign that these seeds had any life left in them.

Source: The Myth of Mummy Wheat by Gabriel Moshenska, History Today

On October 23rd, 2017 we learned about

How rocks collected for their aesthetic value contributed to the collapse of a Chinese Empire

My third grader returned from a Girl Scout camping trip this weekend with stories, craft projects, and of course, a new rock. This particular rock was broken into a few pieces that interlocked together, making it a “puzzle” rock, which was interesting, but still a five-pound hunk of geology she doesn’t really have space for in her room. I asked if she had any idea what kind of rock it was, guessing it was some kind of sandstone maybe? The only classification that mattered to my daughter was that it was an interesting rock. It was pretty. It was a rock that caught her eye. My inner nerd sighed, but the art student in me is fine with this. Aesthetics can be important. They can move us to action or to calm serenity. In some cases, things do get out of hand though, like that time when rock collecting helped bring down a 12th-century emperor in China.

From contemplation to craze

Like my third grader’s current rock collection, this story started much more innocently. In 826 AD, Bai Juyi, a regional administrator and poet, was captivated by a pair of heavily weathered rocks near a lake. They were gnarled and craggy, standing upright in the ground, clearly displaying ages of rough treatment by the elements. Bai Juyi took them home, but more importantly wrote a poem about them, transforming them from a personal aesthetic experience into anchors for a national movement. Other scholars were drawn to his observations about how the rocks marked the power of nature, contrasted the transience of human lives, and inspired quiet, Taoist contemplation.

Naturally, all these ideas about enjoying the stoic beauty of well-worn stones inspired a bit of a fad. Bai Juyi noted the growing fascination, and compared his lithomania to an addiction, suggesting that his compatriots limit their rock meditations to a few hours a day. People codified the virtues of rocks, noting their shou, zhou, lou, and tou: their upright stature, furrowed textures, carved channels and deep perforations that allowed air and light to pass through them. Rocks were featured in paintings, often dwarfing any human subjects that happened to be included. So-called “spirit stones” became a fixture of well-educated households, and the civil servants and artists of the time made a point to appreciate carefully selected rocks in terms we usually associate with paintings or poetry. Some stones could fit on a table, but more ambitious collectors started acquiring pieces of twisted limestone large enough to tower over visitors.

This is where things get complicated. I can insist that my daughter only collect rocks she’s willing to carry on her own, but when an Emperor is the one doing the collecting, it’s hard for anyone to say no. In the 12th century, the Emperor Huizong was a noted artist quite obsessed with building up his rock garden. Details seem to vary depending on the source, with some accounts mentioning bridges being dismantled to allow a large stone to be transported down a river, for instance, but everyone agrees that Huizong’s collection stressed the Northern Song Empire’s resources. When invaders came calling in 1125 AD, the empire had sunk so many resources into Huizong’s aesthetic interests that it couldn’t properly defend its borders. Sadly, the carefully tended garden did not survive either, undermining the persistence symbolized by the rocks themselves.

International interpretation

Thankfully, not all rock collections have spiraled out of control like this. When Chinese lithomania arrived in 15th-century Japan, it was transformed. Rather than celebrating rough, tortured shapes, Japan was enamored with smoother rocks and gentler silhouettes. These stones were still collected, but were displayed in sand, water or gravel to imitate the look of a miniature mountain. Eventually, Zen Buddhist monks started raking the gravel around stones to reflect the movement of wind and water, but they avoided the frantic collecting that brought down the Northern Song Empire.

As long as aesthetics drive my daughter’s interest in rocks, this second model seems like an easier path to follow. I just worry about the day she requests a large amount of gravel to cover her floor.

Source: The Philosophical Appreciation of Rocks in China & Japan: A Short Introduction to an Ancient Tradition, Open Culture

On October 17th, 2017 we learned about

A history of air pollution recorded on preserved bird bellies

History is usually studied through written records, man-made objects, rocks and bones. It’s no surprise that we generally rely on durable materials like these, but studies of more recent history aren’t so limited. Thanks to careful preservation of biological samples from around 200 years ago, we can use softer, more delicate bits of biology to find out what was happening in the world, including picking up evidence of indelible events like air pollution from the industrial revolution. In that case, instead of looking at stones or tools, researchers dug into museum collections to survey thousands of dead, but preserved, birds.

By the 1870s, parts of the United States were being overwhelmed by smoke from furnaces and steam engines. These early combustion engines opened up a lot of new possibilities in technology, but they also dumped a lot of soot on their surrounding neighborhoods. By 1874, places like Chicago were so choked with smoke that it obscured the Sun on a daily basis, a scenario we usually associate with natural disasters. These smoky years have been documented in some forms, but biologists wanted to find data on air pollution across a wider territory and time span. The record they then turned to was the accumulated soot that was still stuck to the feathers of birds collected from the 1800s to the present day.

Documenting accumulated dirt

The researchers traveled to museums around America’s so-called Rust Belt to compare birds across different decades and locales. Instead of taking samples of soot for chemical analysis, they measured the overall level of contamination on a bird’s body by taking photos, then quantifying exactly how dark its white feathers had become. To make sure varying light levels or human error didn’t throw off these measurements, each bird was placed next to a paint strip called a reflectance standard that allowed each photo to be properly calibrated, ensuring that the gray on a woodpecker or horned lark’s belly was the result of smoke and not a shadow. Once the data was gathered, it was plotted by location and time frame, revealing the history of America’s air quality with impressive specificity.

The dirtiest birds dated from the beginning of the 20th century, when industrial air pollution was at its peak. Feathers were cleaner when manufacturing declined during the Great Depression, and were then dirtier again during World War II. Birds after the 1955 Air Pollution Control Act looked cleaner than their predecessors, and those that had lived after 1963’s Clean Air Act were cleaner still. When the birds were laid next to each other, the gradation was obvious enough that my four- and eight-year-old immediately picked up on it. My eight-year-old also commented that the dirtier birds were generally smaller. Unfortunately, determining what effects this soot might have had on the birds (as opposed to age or nutritional differences) wasn’t in the study’s scope.

Bird breathing problems

Other studies, however, have looked at the effects of air pollution on our feathered friends, even before they’re covered in soot. Beyond canaries in coal mines, birds’ health has been found to decline from air pollutants faster than humans’, as pollutants often lead to thinner egg shells and lower birth rates. Relative body sizes probably play a role, but scientists suspect that bird respiration also contributes to their sensitivity to particulates in the air. While humans breathe in and out in two distinct steps, birds breathe in a more continuous cycle, essentially inhaling and exhaling in one step. This allows them to fuel the metabolism needed for high-energy tasks like flying without pumping their lungs at a ridiculous rate. It unfortunately also means that they’re more efficient at sucking in soot and other toxins that can cause health problems. So if the birds are suffering in an area, there’s a chance that human health is being attacked as well. Or to put it in a more positive light, you can breathe easy if your avian neighbors are clean and in good health.

Source: Sooty Feathers Tell the History of Pollution in American Cities by Alex Furuya, Audubon

On October 9th, 2017 we learned about

The history of waffles, from the Stone Age to street food to your local supermarket

Humanity’s first shot at waffles was probably cooked on a rock. They weren’t a happy discovery made during a camping trip, but were likely some cutting-edge cuisine back in the Neolithic, or Stone Age, over four thousand years ago. These proto-waffles certainly lacked refinements like an indented grid or maple syrup, but the core of what’s now a favorite breakfast food was still there: a cereal-based batter that was neither baked nor fried, but seared on both sides. Like many cultural inventions, it seems safe to say that we’ve improved on this recipe over time, but clearly our ancestors knew they were on to something big, even before they could slather their dish in whipped cream and strawberries.

Once people started mastering metal in the Iron Age, rocks were traded out for metal plates, often held at the end of sticks to more easily reach into ovens. These early griddles basically sandwiched the batter as it was placed in heat, allowing it to be cooked in half the time. This concept would be picked up by the ancient Greeks, who called the resulting flat cakes obleios, or wafers. Like wafers you find in stores today, obleios weren’t especially cake-like, instead being cooked to a flatter, crispier consistency. They also flavored them as a savory food, using cheese and herbs instead of sugar and syrups. Europeans continued munching wafer-styled waffles well through the Middle Ages, with the only major innovations being to make them larger and sell them from street vendors.

Standing out with indented squares

The 13th century ushered in a new era of two-sided flat-cake cuisine, with waffles finally gaining their signature grid patterning, plus the name that we know today. A blacksmith enhanced the traditional iron griddle plates, hinging them together and putting in raised patterning that would increase the cakes’ surface area, allowing for more efficient heat distribution. More importantly, the indented squares that now marked waffles added a fun design element that caught people’s imaginations, earning them the name wafel, a reference to a section of a bee’s honeycomb or woven webs. Other designs have been created since then, including coats of arms and even landscapes, but none are as iconic as the square pits we now associate with waffles.

Waffles’ popularity continued to grow throughout Europe. In the late 1500s, France’s King Charles IX even had to issue regulations concerning how close waffle vendors could cluster together to cut down on the number of fights in the streets. Recipes showed some divergence as well, with lower classes eating flatter, crispier waffles made from flour and water, while upper classes added more cake-like textures to their waffles with milk and eggs in the mix. This interest in softer, puffier waffles eventually grew into what we now think of as a Belgian waffle, which generally includes yeast to complete the effect. As popular as this concept seems now, it was certainly not part of the original flat cake recipe.

Sweeter, slower, faster and frozen in America

Waffles were brought to North America first with the Pilgrims, and then again by Thomas Jefferson after a trip to France. Maple syrup finally found its home among the square divots in the 1800s, which, alongside molasses, pushed waffles closer to the sweet side of the flavor spectrum. In 1869, Cornelius Swarthout of New York made his contribution to waffle history, patenting yet another improvement on the waffle griddle concept. Obviously the squares had to stay, but Swarthout’s design allowed waffles to be cooked over a stove top as long as the chef was willing to slow the process down, flipping the new waffle iron over to cook both sides. In a way, it was a step backwards from earlier technology that aimed to cook waffles faster, but it made waffle-cooking accessible in a new way, earning the date of the patent its own holiday in the form of National Waffle Day on August 24th of each year.

Waffles’ ties to technology meant that they kept changing over the 20th century. 1904 saw the invention of the waffle cone for ice cream, although considering waffles’ origins as flat, crispy wafers, you might not call this a major innovation. Alongside many other domestic objects, waffle irons were electrified in 1911, removing the need for the stove top, but often retaining the need for flipping. “Froffles” made frozen waffles available to consumers in 1953, although their egg-heavy flavor would eventually see them renamed as today’s Eggo products, now part of a $211 million market in the United States. Finally, Americans got a proper introduction to Belgian waffles in 1964, although originally as “Brussels waffles.” Again, marketing concerns led to a new name, and the vanilla- and yeast-infused recipe was eventually circulated simply as Belgian waffles.

Blueberry, gluten-free, fried-chicken flat cakes

Today it seems that we have a huge range of waffle variations to choose from. Waffles can be found with fruit, whole wheat, chocolate, ice cream, pumpkin and of course, fried chicken. It seems that throughout this grand history, we’ve really only given up two aspects of previous waffles— the speed of cooking both sides at once, and the opportunity to buy waffles on the street from a horde of competing waferers when walking about town.

Source: Waffle History Page 1: The Origin & Evolution Of Waffles, The Nibble