On July 23rd, 2018 we learned about

The invention of trapeze, and the tights that went with it

“Maybe someone needed a better way to swing in the branches of a tree?”

That’s probably not a concern most people worry about, but then again, it’s hard to tie something like a trapeze to any practical purpose. Even after a week at circus camp, my nine-year-old was clearly stretching to figure out what could have inspired the design of such a simple but specific device.

“They didn’t have airplanes then, so was it part of a blimp?”

To be fair, the origin story of the trapeze isn’t necessarily intuitive. A young man named Jules, who had grown up learning to climb and tumble in his father’s gymnasium, saw ropes hanging over the accompanying swimming pool. He placed a cross bar between two ropes, supposedly to use as a chin-up bar. Apparently it didn’t take long for other uses of a swinging bar to be found, because within a year this acrobat had put together a performance in his home town of Toulouse, France.

Within three years of that first performance in 1856, Jules had not only moved past his would-be career as a lawyer to work in the Cirque Napoleon in Paris, but he had a second act figured out as well. On November 12th, he swung from one trapeze to another for the first time, gripping audiences’ attention as never before. The swimming pool was traded for a set of mattresses, and more trapezes were added to the performance, allowing Jules to do back-to-back somersaults between five different swings.

“Was his name Mr. Trapeze?”

No, the name trapeze probably came from the Latin trapezium, referring to an “irregular quadrilateral,” like a trapezoid. That’s not to say that young Jules’ name was really forgotten though, since his last name was Leotard.

To safely swing and climb among the ropes and bars of his trapezes, Jules Leotard created a form-fitting body suit out of wool. While it was probably quite hot to wear, the elastic nature of wool allowed him to move without being hindered or snagged on any equipment. It also helped Mr. Leotard hold the attention of many audience members, as the tight outfit revealed his physique in a way unheard of at the time. The combined spectacle of the trapeze and costuming helped make Leotard quite successful, earning him plenty of money and notoriety, including the song “The Daring Young Man on the Flying Trapeze.”

Leotard’s body suit wasn’t referred to as a ‘leotard’ until around ten years after he died in 1870. As eye-popping as the garment was when it was first created, it’s now fairly standard for athletes and dancers around the world. That’s not to say that Jules’ first passion has died out though: as recently as 2013, performer Han Ho Song performed five consecutive somersaults off a trapeze in Stuttgart, Germany. The key difference is that he updated Leotard’s trick from 1859, making all five revolutions in a single jump.

Source: Trapeze origins, Vertical Wise

On July 18th, 2018 we learned about

Engineers’ brief attempts to speed up trains with airplane engines

When I first read my kids the story of “Thomas and the Jet Engine,” I treated it as nothing more than fan service for kids. Sure, the idea of their favorite train being temporarily boosted across the tracks by a jet engine was fun, but clearly ridiculous. And of course, I was wrong. Not only have there been real attempts to build trains powered like aircraft, but working prototypes powered by actual jet engines have been built in multiple countries. These experimental trains were designed to join the speed of air travel with the hauling capacity of trains, although none of them ever went into regular, if speedy, service.

Pushed and pulled by propellers

The very first attempt at a plane/train hybrid was the railplane. George Bennie created a vertically oriented track that was intended to be built above existing railways, saving space and simplifying logistics. At the front and back of the pill-shaped vehicle, electric motors drove large propellers so that the railplane could ‘fly’ down the track without worrying about actual flight. The test track was too short to confirm it, but Bennie estimated that the railplane could have traveled as fast as 120 miles-per-hour, beating even today’s travel times. Unfortunately, Bennie couldn’t get enough funding to continue developing his prop-propelled train, leaving us only with advertisements and some footage of the prototype.

Jet-powered propulsion

With airplanes shifting to jet engines in the 1950s, trains in the 1960s had some catching up to do. By 1966, multiple parties were looking to either retrofit existing trains or design jet-powered trains from scratch. In France, Jean Bertin started work on the Aérotrain, which was designed to hover on an air cushion atop an elevated track. The hovering was intended to reduce friction, making it easier for the Aérotrain’s single jet engine to propel the train down the track. Multiple rounds of prototypes were developed, including an 80-passenger train that could reach 155 miles-per-hour under the power of two jet engines. Like the railplane before it, the Aérotrain was eventually abandoned in 1977 due to funding problems, although you can still find sections of test track in France and near Pueblo, Colorado.

In the United States, turbojet train development looked a bit more like a retrofit. Don Wetzel led an effort to make trains faster and cheaper, which translated to front-mounted jet engines on an otherwise traditional-looking commuter train. Early iterations used recycled General Electric jet engines, purchased from the Air Force. Like the Aérotrain, Wetzel’s jet-powered trains never progressed past test tracks and prototypes, although they did manage to hit an impressive 183 miles-per-hour before the project was shut down.

Even if commuters never got to enjoy jet-engine speeds on the rails, these efforts caught the attention of engineers in the Soviet Union. With long-distance travel served by rail, plus a Cold War competitive spirit, the Speed Wagon Laboratory started work on their own jet-train in the late 60s as well. Target speeds ranged from 155 to a theoretical 223 miles-per-hour, but the project was dropped in the 1970s, partially thanks to the expenses associated with all the jet fuel those speeds would require.

Rounding things out, Japan had their own attempt at jet-powered trains, starting around 1968. Engineer Hisanojo Ozawa offered a few twists on the “common” jet-train design, aiming for a train with three jet engines that ran on rollers instead of traditional, flanged train tracks. The project didn’t seem to progress past a working scale-model, although that model predicted speeds up to 733 miles-per-hour.

Floating but not based on flight

Like Thomas the Tank Engine’s accidental sprint across the Island of Sodor, the age of jet-powered trains was short-lived. Fuel and other expenses made this form of propulsion less attractive, and high-speed train designs have mostly moved on to other concepts. While not explicitly modeled after air travel, maglev trains do hover over the ground to reduce friction, allowing them to reach speeds of up to 249 miles-per-hour. The form of propulsion is different, but the basic premise of track-based transportation still appears to be one of our most practical ways to cross long distances.

Source: Turbojet train, Wikipedia

On July 17th, 2018 we learned about

Making sense of the inconsistencies of cars’ mud flaps

Kids are supposed to ask why the sky is blue, what happened to the dinosaurs, and maybe where babies come from. My five-year-old, apparently content in his knowledge of such things, has instead been wanting to know more about mud flaps on cars. What are they for? Do they help a car drive faster somehow? The thing that really bothered him, though, was that if mud flaps are useful, why aren’t they part of every car and truck on the road?

Mud flaps have probably been conceived a number of different times throughout automotive history, but Oscar Glenn March of Jones, Oklahoma is generally credited with inventing the products we know today. Unlike the side-mounted “anti-splashers” patented by William Rothman in 1922, March’s flaps were built and put into immediate use. March worked in the motor pool at Tinker Air Force Base during World War II, and realized that they needed a way to protect sensitive radar equipment from mud and rocks when it was hauled on flatbed trucks. The flaps were originally made of canvas, which has since been replaced by rubber and plastics, although March’s original bracket-mounting design is still in use today.

How functional are rubber flaps?

In addition to keeping your radar equipment clean, mud flaps can also protect the vehicle they’re mounted on. In areas with lots of rain, snow, salted roads and, of course, dirt and mud, the right mud flaps can help prevent dirt and rocks from damaging the paint on the fenders right behind a car’s wheel well. Beyond one’s own car, mud flaps can also help cut down on how much dust and water your vehicle sprays on anyone around you, which is part of the reason they’re legally required on trucks in many states. These vehicles’ higher frames and larger wheels make them prime candidates to launch rocks and water at other drivers, so properly-sized mud flaps trap those materials before they cause trouble.

Of course, not every car today has mud flaps, which raises questions about how valuable they really are. A piece of heavy rubber can’t be that expensive, so why don’t all cars come equipped with mud flaps by default? Many modern cars do have some extra plastic molded behind their wheels, but why not use the flexible flaps trucks are required to use? This is a trickier question to answer, as there’s no single authority declaring that mud flaps be excluded from modern car designs. Sometimes there are concerns over improperly mounted mud flaps, which require holes to be drilled in a car’s body that can end up leading to rust damage. Other people argue that the flaps cause a small amount of aerodynamic drag, making cars slightly less efficient to drive. Finally, there’s the issue of aesthetics: some people think they look great on their cars, while others think they’re simply eyesores that will get bent up against speed bumps. There may not be a single “right” answer to this, although that won’t stop some people from asking questions.

Source: Who Invented the Mud Flap?, Fruehauf Trailer Historical Society

On July 1st, 2018 we learned about

The controversial origins of the cross-cultural California roll

California rolls are a bit of an enigma wrapped in mystery, then covered in rice. They helped introduce sushi to westerners in the 1970s, even though they contain avocado and crab meat instead of fish. They’re made inside-out, tucking the nori seaweed inside the roll to hide its texture. They were long considered a perversion of traditional sushi, and yet their invention has earned one chef the title of cultural ambassador to Japan. Of course, it would be easier to make sense of these peculiar contradictions if we knew the true origins of the dish, which is difficult because California rolls have somehow been invented at least twice.

Created in California

The first American sushi restaurant opened in Los Angeles, California in 1966. There wasn’t a lot of demand for dishes based around fresh cuts of raw fish, and so the Kawafuku restaurant kept its main kitchen busy with more familiar dishes like teriyaki chicken. Beyond Americans’ concerns with raw fish, there was also hesitation over the use of seaweed as an outer wrapper. While American kids today might be happy to snack on nori on its own, few western diners in the 60s were entirely ready to wrap seaweed around their raw fish without squirming about it.

The first step towards a California roll, then, was to essentially eliminate the fish. Chef Ichiro Mashita, lacking sufficient supplies of tuna, substituted avocado at the Tokyo Kaikan restaurant in Los Angeles. A giant green fruit may not seem like a good substitute for tuna, but its rich, fatty flavor and soft texture were fairly successful, especially when paired with crab meat to bring back a bit more of a fishy flavor. While offering Californians something as familiar as avocado was a big move on its own, these rolls didn’t really take off until the rice was moved to the outside of the nori wrapping. This change was fairly blasphemous, as the crisp texture of the nori was normally a point of pride for sushi chefs, but hiding it proved to be the secret of California rolls’ success.

Born in British Columbia

You could say that the rest was history, if not for an alternative history that took place in British Columbia around the same time. Chef Hidekazu Tojo had opened a restaurant in Vancouver in 1971, and immediately ran into some of the same difficulties faced by Mashita in Los Angeles. Truly fresh fish was hard to come by, necessitating experimentation with ingredients like avocado and salmon skin to fill the role of tuna and eel respectively. Customers in Canada were also reluctant to give sushi much of a chance, spurring Tojo to hide nori under a layer of rice in what he dubbed an inside-out roll.

Tojo refers to these rolls as Tojo rolls in his restaurant today, although in the 1970s a steady stream of customers from Los Angeles supposedly earned them the name California rolls. We may have thus been deprived of a name like the “Angeleno roll,” but the impact these rolls made was immense. While the alterations to traditional sushi making may have been compromises at first, they truly succeeded at making sushi accessible to a much wider audience. In recognition of his continued innovation, Chef Tojo was named one of just 13 cultural ambassadors of Japan in 2016, lending further weight to his version of the birth of California and other inside-out forms of sushi.


My fourth-grader asked: So are avocados from California?

Unlike the strangely undocumented history of California rolls, we do have a pretty good idea about where avocados got their start. They’re originally from Mexico, although avocado pits have been found buried alongside mummies from 750 BC as far south as Peru. Those mummies probably didn’t enjoy the fruit too much though, as the trees weren’t cultivated until 500 BC.

Modern avocados were brought to California in 1871, with a variety of varieties vying for people’s plates until the 1950s. Today’s most popular variety, the Hass avocado, got its start in 1926, traceable to a single tree in La Habra Heights. The tree died in 2002, and while it’s not exactly helping bridge cultures, some of its wood is being preserved for commemoration.

Source: Who Invented The California Roll? by Michelle Woo, OC Weekly

On May 31st, 2018 we learned about

The development of doughnuts: Innovating and automating one of the world’s favorite fried foods

Since 1938, Americans have celebrated National Doughnut Day in recognition of the Salvation Army’s “Doughnut Lassies” from World War I. These women earned their title by delivering the sweet, torus-shaped bits of fried dough to American soldiers in the trenches of France to help fend off feelings of homesickness. While it may seem appropriate that such a calorie-rich treat was considered an American invention, it shouldn’t be a surprise that the true origins of the doughnut can actually be traced to various cultures around the world. But if Americans didn’t exactly invent the basic recipe for fried dough, we probably do deserve credit for the efficient, convenient and entertaining aspects of their industrialized production.

Fried dough tastes fantastic

Figuring out the origins of the first doughnut recipe is probably an impossible task. There are many examples of foods from as far back as Ancient Greece that essentially consisted of frying dough in oil, then adding sugar or other sweeteners to make it taste amazing. We’ll probably never know who first had the bright idea to fry up their scraps of dough, but we know that it was enjoyed everywhere from Europe to the Middle East. Out of all those variations, we can say that the Dutch dish oliekoecken, or oily cakes, became the eventual inspiration for the modern doughnut. The recipe originated in Europe, but was also popular in the immigrant communities of what was then New Amsterdam in North America.

These proto-doughnuts looked a bit like a modern jelly-filled doughnut, at least from the outside. They often included some kind of flavorful item in the center, such as fruit or even nuts (which may be a possible explanation for the name “doughnut”). Those fillings weren’t just added for flavor, though; they also helped the cake cook more evenly, since the dough in the center would often be left undercooked while the outside was fried. Fortunately for everyone, this conundrum would eventually lead to the first major innovation in doughnuts: the addition (or removal?) of the hole.

Slicing out the center

There are a lot of details surrounding the transformation of oily cakes into a torus-shaped doughnut, and there’s a good chance many are apocryphal. They all surround one Captain Hanson Gregory of New England, who is given credit for the invention of the modern doughnut in the mid-19th century. Gregory’s mother was said to make her son Dutch oily cakes for his journeys, with a recipe involving nutmeg, lemon peel and nuts in the center. At some point, Gregory opted to cut the centers out of the cakes, either to skimp on nuts, to impale the snack on his ship’s steering wheel to free up his hands, or most likely, to help the cakes cook more evenly. Whatever sparked the idea, Gregory went on to say he’d cut the centers out of cakes with a round tin pepper box, although later iterations of doughnuts are generally formed with the hole from the start, rather than having it sliced out.

Sweet snack success

Properly-holed doughnuts proved to be quite popular, and some cafes and bakeries had a hard time meeting demand. This was rectified by the second major iteration in doughnut technology: an automatic doughnut machine built by Adolph Levitt, a Russian immigrant living in New York City. The machine not only churned out 75 dozen doughnuts per hour, but was also set up to entertain hungry patrons who would gather to watch the food being produced, not unlike the Krispy Kreme stores of today. These devices were a huge success, earning Levitt a considerable amount of dough (sorry) and establishing doughnuts as the “food hit of the Century of Progress.”

Aside from turning up in the trenches of World War I, mass-produced doughnuts quickly established themselves in popular culture. The image of eating them, and perhaps dunking them in coffee at a diner, was invoked in movies and songs in the 1930s. Obviously, such a phenomenon couldn’t be contained in New York alone, and it wasn’t long before now-famous brands like Dunkin’ Donuts and Krispy Kreme were established, even if the latter started by being sold out of the trunk of a car.

Today doughnuts make up a significant portion of America’s snacking habits. By 2015, $581 million worth of doughnuts were being sold from convenience stores alone, and that number is expected to keep growing. Major brands are still growing as well, with companies like Krispy Kreme extending their reach to international markets. It seems that thanks to the appeal of sweet, fried dough, no American will ever need to feel homesick for a sugary snack again.

Source: The History of the Doughnut by David A. Taylor, Smithsonian

On May 30th, 2018 we learned about

Origins of origami: looking for the world’s first folded-paper sculptures

“When did people start folding paper like this? What were they trying to make?”

Despite the sincerity of my daughter’s interest, the fact that she couldn’t be bothered to look up from folding her tenth paper crane of the day made it an odd question. As cute as they are, we don’t really need fifty paper birds around the house, suggesting that there’s something intrinsically satisfying about making origami and other paper-folding sculptures. That said, my daughter did raise an interesting point— it’s unlikely that anyone spontaneously folded an entire crane without some kind of precedent to put that idea in their head first. So how did we get from that first creased paper to the sculptures we have today?

Folding precious paper

Unfortunately, as the kami in origami implies, this is an art form based around paper. As tricky as it is to carefully fold a paper sculpture, preserving the resulting artwork is also difficult, and so we no longer have the world’s first attempts at paper-folding to examine for clues. Much of what we know has been inferred indirectly from circumstance, written descriptions or even folded paper depicted in other artwork. At the very least, we can say that origami didn’t exist before paper did, which gets us to 105 AD, when Cai Lun invented paper in China. The new material then traveled to Korea, eventually landing in Japan with Buddhist monks in the sixth century.

When paper first arrived in Japan, it was a rare and exclusive technology. Far from the disposable receipts and tissues we are now surrounded by, paper was expensive and special enough to only be used in religious and formal ceremonies, like purification rituals and weddings. In these cases, it was often folded into abstract shapes, generally copying the geometry of fabric ornaments that were also used in these ceremonies, like a simple zig-zag Shide. Gifts might also be adorned with folded paper, again as an ornament and symbol of value and authenticity.

The first representational origami sculptures were probably butterflies attached to sake bottles at Shinto weddings. Mecho and Ocho were meant to be female and male butterflies respectively, opening the door for the multitude of animal-shaped sculptures that would follow in later centuries. However, most of those designs would have to wait until at least the 1600s, largely due to the availability of paper as a material.

Accessible art form

Once paper was more accessible to the general public, origami became very popular across Japan. By the 1700s, folded cranes were depicted in other artwork. While most sculptures had previously been taught via oral tradition, the first instruction manual, the “Tsutsumi-no Ki” by Sadatake Ise, was printed in 1764. While other manuals were published in the following years, the next major innovation came in the 1950s, when Akira Yoshizawa and Sam Randlett developed a set of standardized arrows and other symbols to make instructions clearer for people around the world. These symbols are now commonly found in diagrams and instruction manuals, allowing origami designs to be shared and iterated upon much more easily than before.

With people around the world sharing designs, there has been a lot of innovation in the designs themselves. Early Japanese origami didn’t necessarily avoid making cuts in the paper as it was folded, but many enthusiasts today try to challenge themselves by making geometrically-complex shapes with folds alone. Other branches of origami include wet origami, modular designs and even designs that are inspiring engineers with new ways to build flexible robots in other materials.

Other forms of folding paper

While we don’t have detailed records to prove it, it’s safe to say that people started folding paper outside of Japan as well. Zhezhi is a form of paper-folding from China that generally focused on making geometric objects like boats and hats. In 1993, Chinese refugees introduced people to what is often called Golden Venture folding, wherein many smaller paper triangles are assembled into larger, three-dimensional objects.

Europeans couldn’t resist folding paper either, although it’s hard to say how they got started. By 1490, images can be found that seem to depict a life-sized version of a paper-folded boat, although it’s hard to be sure based on the single woodcut alone. By the 17th century, German baptism certificates were commonly folded into spiraling forms, although this isn’t to say that folded paper was limited to ritual like it was in Japan. The play “The Duchess of Malfi” by John Webster mentions children folding paper as early as 1614.

The most famous form of European paper-folding is likely the Pajarita. The bird sculpture isn’t just a popular design, but is the name of a style of paper-folding originating out of Spain. Possibly based on mathematically-folded designs created by the Moors, the Spanish papiroflexia spread across Europe, eventually turning up as a cultural reference point in other media.

Source: History of Origami, Origami Resource Center

On March 13th, 2018 we learned about

America’s mostly-successful history with student science fairs

My third-grader will be entering a project in her first science fair this week, and while she and her partners at least aimed higher than a vinegar volcano, nobody’s expecting to found a new company from their work either. That’s fine: the point of a science fair, particularly in elementary school, isn’t to set a kid up for a Nobel Prize. Even if one’s experiment (or let’s be honest, demonstration) isn’t completely successful, the real goal of a science fair is to give kids a hands-on opportunity to work within the scientific method. Unless, of course, we’ve somehow lost sight of why science fairs were ever started in the first place…

Shows for students to share the natural world

The earliest science fair on record, held in 1828, was more of a general exposition. The American Institute of the City of New York assembled exhibits on a variety of topics, from agriculture to manufacturing to the arts. The engineering on display included show-stoppers like an iron plow, so it wasn’t exactly a showcase of scientific progress. Still, kids did participate, although they were noted for doing things like making black veils instead of growing flowers in food coloring.

One hundred years later, the American Institute organized the first Children’s Fair. While there was more of a scientific focus, the mission of these events was to get high school students thinking about nature. In that context, it makes sense that the top entry from 1931 was a diorama about how dogwood trees function in their habitat at different times of the year. The Children’s Fairs were popular, although by 1941, the American Institute realized it couldn’t financially support them any longer. This created the opening for what most consider to be the first ‘modern’ science fair for students.

Competitions to launch careers

In 1942, a non-profit institution called Science Service worked with Westinghouse to launch the Science Talent Search. World War II had demonstrated the utility of science and engineering most convincingly, and the competition was squarely focused on promoting what we now call STEM careers for high school students. Westinghouse has been replaced as the primary sponsor by Intel, and later by Regeneron, but the mission to promote up-and-coming scientists has been consistent throughout the competition’s history. Out of the nearly 150,000 high school students who have participated, alumni have gone on to win 13 Nobel Prizes, two Fields Medals, 11 National Medals of Science, 18 MacArthur Fellowships and more.

Students finding their way forward

A fair number of those winners probably didn’t need a ton of encouragement, though. Since 1942, students from specialized, science-focused schools have garnered the lion’s share of semi-finalist and finalist accolades, suggesting that they were starting from a substantially different position than most students in the United States. For many kids, a science fair is one of their first times thinking about how to come up with a testable question, make observations, and so on. For many parents who are recruited to help see these projects through to competition, it’s a time filled with stress as they balance managing their kid’s progress while also allowing the student enough leeway to still learn something useful. Many parents report not knowing how to help, a dynamic that’s sadly reflective of survey results showing that many American parents want their children to be well-versed in science, but also feel like it doesn’t really intersect with their own lives. We don’t have numbers on how many kids are scared away from science because of a bad project, but these factors probably don’t make for a good introduction to science or engineering.

Obviously, no iteration of science fairs or expos was meant to be confusing and frustrating. Fortunately, steps are being taken to help guide students if they don’t have all the resources they need to get started. ScienceBuddies.org is a website designed to help students find a project that is not only interesting, but practical as well. Aside from the pragmatic assistance this provides, it also seems to be looking to make science accessible to a wider audience, which is just what (I think) a science fair should do.

Source: The Rise of Science Fairs (And Why They Matter) by Rebecca Hill, Parent Map

On March 5th, 2018 we learned about

The origin and appeal of Lucky Charms’ crunchy marshmallows

In 1963, General Mills Vice President John Holahan was tasked with turning one of the company’s current cereals into something kids would find a bit more “magically delicious.” If a baseline of either Cheerios or Wheaties wasn’t restrictive enough, this new product had to be developed in six months, a fair amount shorter than the two to three years of development allotted to most of the company’s products. Fortunately for Holahan, inspiration apparently struck at the grocery store, when he encountered his favorite candy and decided to add it to cereal, leading to the creation of the marshmallow-laden Lucky Charms. It’s a remarkable achievement, as Lucky Charms have now been produced for over 50 years, completely eclipsing the Circus Peanut candies that inspired them.

Unpopular influence

It seems fair to say that most people wouldn’t have been inspired by Circus Peanuts like Holahan was. The peanut-shaped, marshmallow-based candy was once more popular, but its semi-spongy texture never made it a best seller. Instead of tasting anything like a peanut, the most common flavor is banana, and even that is rumored to have been the result of a “banana oil accident.” On top of all that, they’re also tricky to make, as the wrong amount of moisture will cause them to deform or get crusty. None of this sounds especially appealing when added to a bowl of Cheerios and milk, which is probably why the marshmallows in Lucky Charms were considerably altered before going to market.

Over five decades of marketing marbits

The marshmallows in Lucky Charms have actually been engineered enough to have their own name: “marbits.” To help get kids to try their new food, General Mills launched Lucky Charms with one of the biggest advertising campaigns ever mounted for a breakfast cereal, pushing ads in comic books, newspapers and on television. Lucky the Leprechaun was invented as a mascot, providing a theme that would then influence the shapes and bright colors of the marbits, apparently making them more appealing in the process. This marketing push worked fairly well, although it should be noted that the allure of marbit clovers, hearts, stars and moons wasn’t enough to really spike sales. To really solidify the cereal’s place in the market, the recipe needed extra sugar on the cereal pieces too.

With Lucky and the appeal of marbits established in kids’ minds and palates, General Mills only needed to play with the aesthetics of Lucky Charms to keep interest up. Lucky the Leprechaun and the marbits have been given regular updates, giving the cereal a dizzying array of shapes and colors in its history. In roughly chronological order, marbits have been offered as clovers, hearts, stars, moons, diamonds, horseshoes, whales, balloons, Christmas ornaments, candy canes, bells, trees, rainbows, pots of gold, different moons, hats with clovers, shooting stars, hourglasses, Olympic medals, Olympic torches, Rudolph the Red-Nosed Reindeer, ice skates, snowmen, stockings, mittens, Man in the Moon (blue moons with a yellow-toothed smile), wreaths, presents, crystal balls, locks, bats, ghosts, cauldrons and books. If that somehow weren’t enough novelty for breakfast, other twists have been added to some of these designs, from swirled colors to colors that change when milk is added. On top of all that, chocolate and berry variations of the cereal have been sold, although they don’t seem to have the staying power of the standard, Cheerios-based recipe.

The crunch of sugar crystals

Aside from the supposed “lore” behind each marbit (blue moons, for example, let Lucky turn invisible), the secret to Lucky Charms is probably the particular crunchiness of its marshmallows. It’s obviously an upgrade over spongy Circus Peanuts, but it isn’t exactly what you’d get from your usual puffed marshmallow either. While puffed marshmallows would offer plenty of sugar and opportunities for coloring, they would also be more likely to make the cereal go stale in the bag thanks to their internal moisture.

However, this doesn’t mean that Lucky Charms’ marbits are simply dehydrated marshmallows. That’s probably a close approximation, but it doesn’t seem to capture the crisp texture of a true cereal marshmallow. General Mills isn’t in a hurry to release the official recipe, but marbits seem to be the result of unstable corn syrup. Unlike the shelf-stable corn syrup you buy at the grocery store, a homemade corn syrup is more likely to crystallize over time. The marshmallows are still dried out, but the crystallized sugar makes sure they develop a satisfying crunch instead of just getting powdery when eaten.

Of course, if you’re not looking to cook your own batch of homemade marbits, you can always buy a bag of cereal marshmallows instead. It may be lacking in fabulous new unicorns, but if you wanted cereal, you’d be buying Cheerios, right?

Source: Let’s Raise a Bowl to the Little Fella, Recognizing Innovation

On February 18th, 2018 we learned about

Mulan versus history: women who assumed male identities to join the military

When watching a cartoon like Disney’s Mulan, my kids are fairly confident that things like the ancestors’ spirits, talking dragon and lucky cricket never really happened. When I press further, they’re hesitant about the rest of the story too. Movies are, by default, fiction in the eyes of my eight- and four-year-old, so why would Mulan be any different? As it turns out, they’re both right and wrong about this: as far as anyone can prove, Fa Mulan never existed outside of folklore. However, the core premise of a woman disguising herself as a man to fight in the army seems to have been repeated in history often enough to make the story very easy to believe.

Disney didn’t invent the story of Mulan, although their version is definitely different from how she’s been represented in Chinese ballads and storytelling. Every version starts with the idea that Mulan wants to take her father’s place in the Chinese emperor’s army, but the context and reactions to that choice diverge immediately. Whereas the Disney cartoon is built around a teenager who feels like she doesn’t fit society’s image of a woman and secretly dons her father’s armor, a Ming Dynasty ballad by Xu Wei, The Female Mulan Joins the Army in Place of her Father, is built around a girl who is comfortable with the idea of staying at home and sewing, but was also explicitly trained by her father to be a fighter as she grew up. She hides her sex from the army when she joins, but convinces her parents that her enlistment is the only sensible option if the family wants to fulfill its duties to the emperor. She does unbind her feet to make this transition, but plans to rebind them once she returns home. It’s a very different approach to gender roles than a modern audience might expect.

Incentives for enlistment

Of course, gender expectations have long been rigid enough to block women from enlisting under their own identities for most of recorded history. Even without a real Mulan to point to, many other women have fought under assumed male identities, usually to stay closer to a brother or husband, escape an abusive family, earn more money than a woman would otherwise have access to or, as in the Chinese ballad, fulfill a sense of duty and patriotism.

Elisa Servenius dressed as a Swedish man to fight in the Finnish War of 1808 in order to remain close to her husband, who was also a soldier. He went missing in 1809, but Servenius continued to serve, partially so that she could try to find her lost spouse. Having been captured, Mr. Servenius was released in 1810, and the two were reunited.

Sarah Malinda Pritchard Blalock signed up with the Confederate Army to follow her husband’s plan of defecting to the Union Army. William Blalock purposely enlisted in a company he figured would be sent to the Virginia border so that he could flee north more easily. He didn’t realize that his wife, after cutting her hair and adopting the name Sam, would attempt the same strategy. It didn’t work for either Blalock as intended, and both had to simply flee north after getting medical discharges. In the end, they both joined the Union Army, fighting in raids in the Appalachian Mountains.

In a reversal from Mulan’s reverence for her father, Sarah Emma Edmonson joined the Union Army in 1861 to avoid her father. She’d fled her native Canada in 1857, starting a new life first as Sarah Edmonds, then later as Franklin Thompson. As “Thompson,” Edmonds enlisted and took on a number of roles in the war, from hospital attendant to spy to battlefield courier. While she managed to survive a broken leg and other injuries, she felt the need to desert the army when she came down with malaria, lest her disguise be revealed while hospitalized.

Earning, and claiming, income

Edmonds’ desertion raises the issue of how secretive these women had to be, and what the consequences were once their sex was found out. From the records we have, Disney suggested more severe penalties than anyone actually faced. Their version of Mulan feared not only the disapproval of her parents, but also execution by the state if the army discovered that she was a woman. Even the Ming Dynasty version has Mulan’s friends immediately embrace her identity once she reveals a 12-year deception, and nobody in the story worries about an execution in the slightest. While not every woman was able to leave the armed services on quite so favorable terms, multiple women did manage to receive pensions for their status as veterans.

In Edmonds’ case, her desertion did work to preserve her secret identity. Once she recovered, she worked as a nurse, but also wrote about her time in the Union Army. She even managed to get her alter-ego’s desertion charges cleared in order to claim her pension in 1884.

Mary Lacy posed as a 19-year-old boy named “William Chandler” to join the British Navy in 1759. She was a successful sailor and shipwright, at least until rheumatism forced her to stop working in 1771. Still, she managed to not only earn a pension, but also receive it under her original identity, Mary Lacy.

Jennie Hodgers adopted the name Albert Cashier to fight in the American Civil War. She managed to keep her identity secret not only throughout the war, but even afterwards, as she chose to keep her male identity for the rest of her life. Not only did she receive the pension she earned under her new name, but she also lived in a soldiers’ rest home in Illinois. The staff there did eventually discover her secret, but they never disrupted her life by making it public.

Consequences when caught

Not every woman managed to control her identity this well, though. As in Disney’s Mulan, injury and illness that required medical treatment would, understandably, expose people’s secrets. Some were temporarily detained (sometimes to be guarded by another woman posing as a man!) while others were just sent home. In the case of Dorothy Lawrence, the British Army was more concerned with how easily she’d faked her way to the front lines of World War I. After confirming that she wasn’t a spy, the biggest concern was to make sure her story didn’t get out and embarrass the Army, or encourage more women to attempt the same feat Lawrence had. Luckily for the top brass, the multiple versions of Mulan weren’t in wide circulation in England at the time.

Source: Mulan vs. The Legend of Hua Mulan, Disneyfied, or Disney Tried?

On January 18th, 2018 we learned about

Updating our spotty, rat-filled understanding of the 14th century plague epidemics

If there’s one thing we can learn from the Black Death in the 14th century, it’s the importance of record keeping in times of crisis. Granted, it was probably hard to focus on documenting what was going on when tens of millions of people were dropping dead for no obvious reason. However, piecing together exactly how the plague spread with the speed it did has been an ongoing question, even long after we’ve come to understand and successfully treat the Yersinia pestis bacterium that actually causes bubonic plague. While rats have long been thought to have carried fleas that carried the bacteria, new investigations are starting to cast doubt on what we thought we knew about these horrifying epidemics.

No rats required

To be clear, Y. pestis is still the cause of death that killed millions of Europeans on more than one occasion. The question is how big a role rats played in transmitting the bacteria to humans. Part of our evidence against the rodents is that they often play a role in plague outbreaks today, which understandably makes a strong case for their guilt in the 14th century. However, there are some holes in the story of past epidemics, such as no reports of dead rats turning up in large numbers (as the rodents can be killed by the plague just as we can). Researchers have also questioned if the flow of infections that we do know about really required rats’ presence in the first place, so they ran some tests to find out.

These experiments obviously didn’t involve risking any human or rat lives. They were conducted as simulations in a computer, allowing changes in different variables to be run over and over, eventually revealing the likelihood of one scenario over another. Obviously, long-shots can still happen, but these simulations showed that the plague could be passed around quite efficiently by fleas biting humans, with no help from furry friends. In fact, in seven out of nine cities’ virtual infections, the human-flea-human model was a better match for mortality records than scenarios that depended on the movement of rodents.
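For a rough sense of how a simulation like this works, here is a toy compartmental model of the human-flea-human route. This is only a minimal sketch: the function name and every parameter (bite rate, flea die-off, case fatality) are illustrative assumptions, not the values or structure used in the actual study.

```python
# Toy discrete-time model of plague spreading by fleas biting humans.
# Humans move from susceptible -> infected -> resolved (a fraction die);
# fleas pick up bacteria from infected humans and bite susceptible ones.
# All parameters are made-up illustrations, not fitted historical values.

def simulate_human_flea(days=120, pop=10_000, fleas_per_host=2.0,
                        bite_rate=0.3, infectious_days=10,
                        fatality=0.66):
    """Return cumulative deaths after simulating the outbreak day by day."""
    s, i, dead = pop - 10.0, 10.0, 0.0  # start with 10 infected humans
    infected_fleas = 0.0
    for _ in range(days):
        # Fleas acquire the bacteria from infected humans...
        infected_fleas += fleas_per_host * i * 0.1
        # ...and some infected fleas die off each day.
        infected_fleas -= infected_fleas * 0.2
        # Infected fleas bite susceptible humans, creating new cases.
        force = bite_rate * infected_fleas / pop
        new_cases = min(s, force * s)
        # Infected humans resolve (recover or die) over infectious_days.
        resolved = i / infectious_days
        s -= new_cases
        i += new_cases - resolved
        dead += resolved * fatality
    return dead
```

Re-running a model like this with different parameters, and comparing each run’s death curve against a city’s surviving mortality records, is the kind of repeated comparison the researchers used to score one transmission scenario against another.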

Looking at leprosy

While these simulations have tried to consider an array of data sources to build a more accurate picture of how the plague spread, some historical gaps have been filled erroneously. Many images that are now archived as contemporary depictions of plague victims are actually pictures of other diseases entirely, such as leprosy. This kind of mistake has become common enough that it’s likely reshaping people’s understanding of what symptoms the bubonic plague actually produces.

Medieval images of leprosy, later labeled as the plague, often include eye-catching lesions on the victims’ skin. It’s dramatic and easily understood as a sign of disease, making these mislabeled images all the more convincing to audiences lucky enough to never encounter an actual bubo, the real calling card of the bubonic plague. While some victims could occasionally end up with dark red spots under their skin, most people would end up with a single swollen lymph node in the armpit or neck, depending on where the bacteria-carrying flea bit them. However, these buboes don’t turn up in any drawings or paintings from the 14th century outbreaks. Instead of showing the medical reality of the plague, the few contemporary images directly related to the epidemic focus on its effect on societies, such as a drawing of people burying coffins from 1349, or Jews being burned alive in the 1340s after they were blamed for the disease.

Seeing patterns in the symptoms

Even after the dramatic epidemic of the 14th century, the plague revisited Europe every few decades. Bit by bit, people started to put the pieces together, even making a point to record what an actual plague victim looked like. Images of swollen lymph nodes are directly connected to the plague in imagery from the 15th century, both in artwork and medical documents, some of which suggested lancing buboes to save infected patients.

It’s understandable that people didn’t know what to keep track of before they even knew what was making them sick. But it’s interesting to consider how much about a now-curable disease is still hard to pin down. As someone who was preemptively treated for bubonic plague once as a toddler, I guess I’m just grateful that someone around me knew what to look for at a time when it counted. For what it’s worth, in that case people blamed a flea-bitten cat.

Source: Maybe Rats Aren't to Blame for the Black Death by Michael Greshko, National Geographic