
A Branching Method For Covering News Efficiently

Last winter, I experimented with a new method of newsgathering and presentation using the conversation startup Branch. This is the post-mortem.
Published on Sep 11, 2013

Last winter, I spent a weekend covering the winter storm of 2013 with my friend Allan Lasser, who is the current publisher of the magazine I co-founded in college, The Quad. This was done as part of an experiment with an alternative way of structuring news coverage I originally called the Experimental Newsroom (now branded Koncurrent). At the time, I piggybacked off the technology of a very cool structured discussion startup called Branch. I learned a lot over that weekend that I would like to share with anyone who’s interested. Read on for my post-mortem. If you want to skip straight to the learnings, scroll down to “Learnings and Observations.”

Acknowledgements

Several people must be thanked for their contributions to this little experiment. Thanks to the Branch team, in particular Libby Britain, for not only allowing but enthusiastically encouraging me to use their platform for this experiment, including using their Twitter account to promote it. Thanks to Allan Lasser for volunteering to help me with coverage and contributing ideas. Thanks to Lauren Hockenson for being a sounding board and contributing her ideas and expertise to the project. Finally, thanks to my father and mother for listening, contributing their thoughts and encouraging me to do this.

Background

At the time of the experiment, I was not a journalist (I now contribute to FastCoLabs), but I am the son of a quintessential and quite literally ink-stained wretch. I grew up hanging around the newsroom of a major daily newspaper in Colorado and worked there for a few brief weeks as a video producer during the 2008 Democratic National Convention in Denver. I have a high level of respect for the institution of journalism and was heartbroken when the newspaper my father worked for became one of the first to shut down in early 2009 due to a host of factors. It was like a death in the family.

I am also a lifelong technologist. I taught myself HTML in the sixth grade to build a Harry Potter website and kept learning throughout school. I have used my skills to co-found publications and build their websites in middle school, high school, college and professionally.

The combination of these two passions has, over the years, flowered into a borderline obsession with what some have termed “the future of news,” and in particular with the writings of media critics such as Jeff Jarvis, Jay Rosen and Mathew Ingram. I believe that journalism is a necessary pillar of democracy. Specifically, I think that for-profit journalism is necessary and possible. I also believe that the advances in technology that destroyed the traditional business models on which journalism subsisted offer, through their disruption, opportunities to propel the craft to new heights. In the process, I suggest we be willing to radically rethink what journalism is, so long as we advance the goal of informing people about the world around them in ways they could not on their own.

It is through this prism that I began to think, over the last few months of 2012, about a different way of covering news that would address some of the criticisms leveled at both so-called “traditional” news outlets and modern “aggregators”. My thinking coincided with two personal events: the run-up to the presidential election, during which I became addicted to news articles and poll results, and my formal introduction to Git, the version-control tool, during a few freelance projects.

The election was a frustrating time for me, because for the first time I experienced the mismatch between tools like Twitter — which are good at providing new information to informed audiences but bad at facilitating deep discussion of topics — and traditional news articles — which make readers dig through paragraphs of context to find new information but handle deep topics well. The result was a lot of fatigue and, over the course of a few months, days of wasted time spent parsing highly repetitive news articles for the new nuggets of information teased in headlines and on Twitter.

Meanwhile, working with Git for the first time was a revelation compared to the Subversion-based systems I had worked with before. Git’s elegant “branching” system allows developers to easily spin off sub-projects, or branches, from the main code base. This means they can focus on developing specific new features without the overhead of copying the entire repository and without worrying about disrupting other developers working on the project. When work on a feature is complete, it can be merged back into the main branch fairly painlessly.
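
For readers who have never used it, the Git workflow the analogy rests on looks roughly like this (a minimal sketch; the branch name is a hypothetical example):

    git checkout -b outage-map    # spin off a branch to build a feature in isolation
    # ...commit work on the feature without touching the main code base...
    git commit -am "Prototype the outage map"
    git checkout master           # switch back to the main branch
    git merge outage-map          # fold the finished feature back in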

You can probably see where my mind went with this. In many ways, newsworthy events follow a similar pattern. They often start as one unified event and then branch into lots of sub-stories as time goes on, sometimes converging back into larger stories and sometimes not.

Traditional inverted-pyramid news articles were once the best format for news, when information moved less freely and more slowly. The format is still great for longer retrospective pieces, but because it is designed to offer a “complete” story to readers, it is not just ill-suited but uniquely bad for reporting evolving stories today. Moreover, the focus on completeness inherent in the inverted pyramid is highly inefficient, requiring reporters to spend time repeating non-essential contextual information every time they publish (or to copy entire previous summary articles into new ones, as some outlets do). The effort required to produce such an article also encourages over-grouping of related events into one piece, forcing readers to hunt for the specific information they want. And it is insulting to today’s news readers, in a least-common-denominator sort of way, to assume that every reader of every article needs the contextual cruft of the background paragraph to understand the piece.

For over 100 years, this type of 800-word news article has been the basic fabric of written news coverage. It is time for a new fabric. So, I imagined — inspired by many products, ideas and people, notably Anil Dash’s seminal post on streams — a structured way of covering news that would allow editors and reporters to follow evolving stories and “branch” their coverage into related, deeper sub-stories as they become newsworthy without disrupting the flow of higher-level stories. Readers could then choose to subscribe to the topics they care about, receiving updates only when something new and noteworthy happens in those streams.
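
To make the structure concrete, here is a minimal sketch of the data model I was imagining, written in Python; every name in it is my own illustration, not Branch’s software or any real system’s API:

    def notify(reader, stream_title, text):
        # Stand-in for real delivery (email, push notification, etc.)
        print(f"[{stream_title}] -> {reader}: {text}")

    class Stream:
        """A stream of updates that can branch into more specific sub-stories."""

        def __init__(self, title, parent=None):
            self.title = title
            self.parent = parent      # None for a top-level story
            self.children = []        # sub-stories branched off this one
            self.updates = []         # newest last
            self.subscribers = set()
            if parent:
                parent.children.append(self)

        def branch(self, title):
            """Spin off a deeper sub-story without disrupting this stream."""
            return Stream(title, parent=self)

        def publish(self, text):
            """Post an update and notify only the readers who opted in."""
            self.updates.append(text)
            for reader in self.subscribers:
                notify(reader, self.title, text)

    # Usage mirroring the weekend's coverage:
    storm = Stream("Winter Storm 2013")
    tri_state = storm.branch("Tri-State")
    new_england = storm.branch("New England")
    transit = tri_state.branch("Tri-State Transit")
    transit.subscribers.add("commuter@example.com")
    transit.publish("PATH suspends overnight service ahead of the storm.")

Readers subscribe to exactly the streams they care about; publishing to a deep branch never interrupts anyone following only the top-level story.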

In a turn of serendipity, a structured conversation startup, fittingly called Branch, launched late last year with three key features: embedding of multimedia, subscriptions, and branching of conversations (sadly, the branching feature has since been removed). Rather than building my own software, I decided to go lean and piggyback off of their excellent work to test and validate my idea, even though Branch isn’t designed specifically for my purpose. The team has been kindly accepting of, and even enthusiastic about, my experimentation.

That test occurred during the course of one weekend in February 2013 when I decided to cover the winter storm unfortunately known as “Nemo,” a name which I regret using in our coverage. Call this version 0.1.

Goals

The goals of the weekend were very simple:

  • Cover as many newsworthy events related to the storm as possible, using any methods necessary.

  • Update streams only with new and relevant information.

  • Avoid repeating information in the same stream.

  • Focus on accuracy and consistent velocity over speed.

  • Try to tell stories by piecing together data and branching off more specific streams as necessary; don’t just provide one-off updates.

  • Provide both breadth and depth, but not necessarily in the same stream.

Methodology

It is important to note that this concept is intended as an exercise in replacing the news article as the primary product of an online journalist. There is an implicit assumption that goes along with that experiment, which is that the role of a journalist is changing. Without getting into a protracted discussion of what that means, it is my view that the first job of a journalist today is to parse through and piece together the overwhelming amount of information that flows through society — building narratives from disparate pieces of information, verifying reports from people on the ground, recognizing and investigating inconsistencies in data and, of course, being efficient by linking out to the work of others when it provides the best experience for readers rather than trying to reproduce everything. In my view, this has always been the real value of a journalist to society, but it has been overshadowed by the data collection process, which was once difficult. Now that data is available in abundance, that part of the job should take on less importance while the verification and parsing of data should take on more.

Given this assumption implied by our structure and, to be blunt, a lack of time and resources, I decided to rely mostly on online methods to report on the storm. I leaned mainly on the US Severe Weather Twitter list created by then-Reuters journalist Matthew Keys and supplemented it with Twitter searches, Google News alerts and eyewitness reports from me in New York, Allan in Boston and other friends in both cities. This isn’t to say I wouldn’t send reporters into the field or email or call sources if I had more resources to devote to the project. But as we shall discuss in the learnings section below, this limitation revealed an interesting fact about the process of news today.

It is also worthwhile to talk about how I judged the newsworthiness of an event. Like most reporters, I relied to some degree on instinct and the usual data points journalists use: the number of people affected by the event, the severity of the event relative to other, similar events in the past, the way the event fit into or stood out from past data and narratives, the value to the public of disseminating information about the event for safety or educational reasons, the relative certainty of the facts surrounding the event, and so on. I also judged newsworthiness based on the stream I was posting to, which I’ll get into in the learnings section below.

I tried to attribute every reported fact to its source, whether that was a tweet from a politician or journalist, a press conference or press release, an article from a news organization, a utility website or API, or a photo from a friend on the ground. I also took special care to verify reports of snow levels, power outages and restorations, and other events where the interests of news organizations, public officials or Twitter trolls might serve to stretch the truth. I did this mostly by using Twitter searches to confirm that other people with large networks, or with reasons to maintain credibility in the area, were corroborating official reports.

I was limited by Branch to 750 characters per update and to their embed system. I also had no ability to edit or delete updates once they had been posted.

Finally, I tracked open branches using a Google Drive spreadsheet and used Google Talk for real-time backchannel communication with Allan.

Results and Stats

We opened 7 Branches and posted 151 updates:

  • Main Branch: General coverage of the storm as a national event. Major updates and statistics. (28 updates)

  • Nemo naming controversy: Nested under the main branch. Coverage of reactions to The Weather Channel’s controversial decision to name the storm. (5 updates)

  • Tri-State: Updates specific to New York, New Jersey and Connecticut. (34 updates)

  • Tri-State Transit: Nested under the main Tri-State branch. Updates related to Amtrak, MTA subways, buses, bridges and tunnels, Metro-North, the LIRR, the LIE, NJ Transit trains and buses, the PATH and general roadway conditions. (29 updates)

  • New England: Nested under the main branch. Updates specific to Massachusetts, Rhode Island, New Hampshire, Connecticut, Vermont and Maine. (45 updates)

  • Blizzard of 1978: Nested under the main New England branch. A look back at the blizzard of 1978 and comparisons between it and this storm, such as the parallel travel bans. (2 updates)

  • Coastal Flooding: Nested under the main New England branch. Updates specific to the coastal flooding that occurred mainly along the eastern coast of Massachusetts. (8 updates)

I am unable to provide exact traffic statistics, although the goal was not to court an audience. I know that the main branch had at least 200 views and the Tri-State and New England branches at least 100. The Tri-State transit branch had at least 50. The rest presumably had fewer than 50, which is the minimum level at which Branch reports statistics. One person subscribed to the main branch.

Learnings and Observations

I am proud of what Allan and I produced that weekend. Following any one of our Branches, a reader would have received an accurate and up-to-date account of the events of the storm from multiple angles and at depths of their choosing. The experiment scaled extremely well and, in my mind, proved that newsroom resources can be further optimized without sacrificing the quality of reporting.

This last point will be contentious to some, as neither of us made any phone calls to government officials, conducted any interviews with utility companies or traveled to view events with our own eyes other than those happening in our immediate locations. But this was more a limitation of resources than of intent. I did verify reports, engage in conversations with people experiencing the storm and, I will argue, add value for readers above and beyond traditional reporting methods. That is not to say we did this perfectly or that there were not problems with our coverage. There were many. Below, I will discuss both successes and failures with more specificity.

Process/Workflow

The central goal of this experiment was to prove that the method of streaming updates and branching topics into more focused sub-stories works as a way of delivering news to an audience: specifically, that it is a more efficient way to apply newsroom resources because it removes the need to repeat contextual background information before publishing. It also allows the newsroom to better serve users by focusing on both broad and narrow topics in a way that gives users the choice to subscribe to the distinct sub-topics within a story that they’re interested in, rather than assuming that every reader of a particular story has the same interests.

Of course, every story is different and will have unique requirements, and this was only the first test of this methodology. That said, weather stories are not particularly unusual, and I think much of what we experienced can be generalized to other types of stories.

Acknowledging those limitations, I believe that the methodology was a success for a number of reasons. The main reason was that it forced me to exercise editorial judgement in relation to the audience of each stream. Before each post, I performed a quick thought experiment: given the title and topic sentence of the stream, and assuming I had thousands of subscribers who would take the time to read an update if I posted it, would a particular update be a waste of their time, or would the majority of subscribers find it valuable? This line of thinking created a very natural newsworthiness scale as branches went from more general to more specific. The more general the branch, the higher the threshold an event had to pass in order to be considered a worthwhile update, and vice versa for more specific branches. As a side note, I made a conscious decision not to treat death as newsworthy by default. I hate that.
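
One way to picture that sliding scale, continuing the Python sketch from earlier (the scoring numbers are arbitrary placeholders I have invented for illustration, not values we actually used):

    def depth(stream):
        """0 for the main branch, 1 for a regional branch, and so on."""
        d = 0
        while stream.parent is not None:
            stream = stream.parent
            d += 1
        return d

    def is_newsworthy(score, stream, top_bar=8, step=3):
        """An event's subjective 0-10 importance score must clear a bar
        that drops as the branch gets more specific."""
        return score >= max(top_bar - step * depth(stream), 1)

    # An airport closure (say, 8/10) clears the national bar; a single
    # subway line's delays (say, 3/10) only clears a sub-branch's bar.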

This framework allowed Allan and me to cover the storm locally as well as nationally. If you followed the national branch, you received only major updates from the regions — like airport closures and emergency declarations — and general-interest information — like satellite images, the incredible snowfall counts and safety tips — that was either sourced from every affected region or applied equally across the regions. The main branch was also updated less frequently, reflecting the higher bar an event had to clear to be considered news of national importance.

If you followed one of our regional branches, on the other hand, you received updates at a much higher velocity, reflecting the lower bar required to be considered important at the local level. You saw most of the statements made by government officials and agencies as well as local snow totals, power outages, business closures and eyewitness reports. Branching thus proved to be a great methodology for stories with local effects.

It also allowed me to pursue multiple angles of the story without having to rewrite the context from the main branch or bury it in a larger story. For example, remembering Hurricane Sandy and paying attention to meteorologists meant I saw the possibility of a coastal flooding story developing on Friday night and was prepared to open a branch about it during the high tide on Saturday morning. When news broke about the evacuation of Salisbury Beach I had a place to report it and follow coastal flooding as the storm lingered over Cape Cod without polluting the main New England feed, which was focused on massive power outages and transit suspensions in Boston. I was also able to cover how every single Tri-State area transit system coped with the storm in the transit sub-branch at a high velocity without forcing car owners following the main Tri-State feed to read every update.

That’s not to say the weekend was without failure. One thing that became apparent as Allan and I started opening more branches was that branches make it easy to overextend yourself. Allan opened a branch focusing on the similarities between this storm and the blizzard of ’78 that had great potential, but due to time constraints neither of us was able to expand it into a full-fledged story. In reaction to that, I decided not to open a New England transit branch, which proved to be a mistake on Sunday when the Massachusetts driving ban was extended into its second day and the MBTA unexpectedly reopened that afternoon. In this way, I learned that keen editorial judgement — knowing when to pursue a story and when not to — is perhaps even more important for this methodology: opening a new branch is in some ways more of a commitment than assigning an article, and choosing not to open a sub-branch can have devastating effects on the flow of higher-level stories.

I also failed at times to weave incoming data into a coherent story and reverted to posting Twitter-like updates. Part of this was due to resource constraints and part of it was simply fatigue. Part of it was also due to technological limitations, which I’ll discuss below.

Speaking of technological limitations, I struggled to deal with a particular story: Amtrak’s service disruptions. These events clearly belonged under Tri-State transit, but because they also affected New England I found myself cross-linking between the New England and Tri-State transit streams. I’ll address this special case in the next section on technology.

Finally, an important note on scale. Two people were able to cover most aspects of this major winter storm, including local events across eight states. Some of our coverage was not as in-depth as we would have liked, and some of it relied on aggregating news from other sources. If more people had been working on the story, we would have done better. But given the time and resources we had available, and compared to the liveblogs at other news organizations, national and local, our operation was remarkably efficient and effective.

One of the reasons it worked at scale was that the branching methodology naturally led to reducing duplication of effort by covering stories from the inside out. What do I mean by this? On Friday, the first day of coverage, I oscillated between branches, trying to make sure each one received unique updates at regular intervals. By the time Saturday rolled around, I had realized that it was much easier and more effective to focus my efforts first on the most in-depth branches and then “bubble up” the most important updates from those branches to the next level by rewriting them in the context of the higher-level narrative and linking to the more complete updates in the deeper branches.

By repeating this until I got to the top level, I was able to significantly scale my efforts, using the high-velocity, in-depth narratives of the sub-branches to fuel the slower, more general narratives of the top branches. That isn’t to say I stopped posting updates specific to the middle and top-level branches, or that bubbling violated my newsworthiness test. Rather, “bubbling” the most important stories from the bottom of the stack up to the top allowed me to save time and provide readers at all levels with the most important updates, tailored to their streams. The practice also encouraged readers to dive deeper into our coverage if they were interested, but didn’t require them to do so to understand the story.
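
Building on the earlier sketches, the “bubble up” pass could be expressed like this, with the crucial caveat that the rewriting is human editorial work, represented here only as a function the caller supplies:

    def bubble_up(stream, text, score, deep_link, rewrite):
        """Publish in the deepest branch first, then re-publish upward at
        every level the update still qualifies for, linking back down.
        'rewrite' stands in for the human work of recasting an update in
        the higher-level narrative; it is just a callable here."""
        stream.publish(text)
        level = stream.parent
        while level is not None and is_newsworthy(score, level):
            level.publish(rewrite(level, text) + f" Details: {deep_link}")
            level = level.parent

    # A major MBTA suspension posted to a New England sub-branch would
    # also surface, rewritten, in the main storm branch, with a pointer
    # back down to the fuller account.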

Technology

Although Branch is a great product built by a talented team, it was not built for my specific purposes. Thus, the experiment exposed significant shortcomings in the technology as a news CMS, which is to be expected. Unexpectedly, these limitations proved very useful in illuminating the key functionalities needed for a news platform.

One limitation of Branch did serve me very well. The maximum post length on Branch is 750 characters, which is about two paragraphs. This turned out to be an ideal length for updates — long enough to tell small stories, like the story of the MTA 7 train’s continuing reliability and construction problems, but too short to include any unnecessary cruft. I wouldn’t necessarily impose this limit if I were building a news system, but I would make 750 characters the target length for most updates in editorial guidelines.

Some of Branch’s other limitations were too severe for my purposes. The system lacks rich-text capabilities and, at the time, did not allow users to edit updates after they were posted, a feature that has since been added. I also couldn’t manually title sub-streams (they were all auto-titled Re: Winter Storm “Nemo”), which made the reading experience confusing.

The most common feedback from readers was that they had difficulty understanding how to travel from branch to sub-branch. They also found it hard to understand where they were in the stream hierarchy at any given time. Branch’s UI in this regard was very subtle so as not to distract from conversations (and indeed, they have since removed the branching feature altogether), but for a news interface I would make it very clear how to move to deeper and shallower streams. I would also work on developing a graphical representation of the hierarchy to make it simpler for readers to understand where they are and how to navigate up and down the stack.

In terms of editorial functionality, one of the key insights of the weekend was that an ideal CMS would have a natural way to “bubble” updates up the hierarchy, rewriting them for higher levels but maintaining their connection to lower-level streams. We got around this by linking to our own updates in lower-level streams from higher-level ones, but it was never clear to readers that I was linking back down to the bottom of the stack. Similarly, an ideal CMS would allow journalists to start a new stream by recombining existing updates from other streams. This would allow reporters to create new streams out of stories that emerge gradually and to make up for missed opportunities to branch out of higher-level streams. For example, in the Tri-State transit stream I made several disparate posts on the Long Island Expressway that could have become a more coherent and focused story had I been able to recombine them into their own stream. This also would have let me post more in-depth updates on the situation on the LIE without worrying about polluting the broader transit stream.

Finally, the ideal CMS would allow updates to have more than one parent stream. This would have taken care of the special case of the Amtrak story that clearly belonged in multiple streams.
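
Continuing the Python sketch, both features (recombining existing updates into a new stream, and giving a stream more than one parent) mostly amount to letting the hierarchy be a graph rather than a strict tree. A hedged illustration, reusing the streams defined earlier:

    class MultiParentStream(Stream):
        """A stream that can sit under several parents at once, so a story
        like Amtrak's disruptions can live in both Tri-State Transit and
        New England without cross-link gymnastics."""

        def __init__(self, title, parents=()):
            super().__init__(title, parent=None)
            self.parents = list(parents)
            for p in self.parents:
                p.children.append(self)

    def recombine(title, parents, updates):
        """Start a new stream from updates already posted elsewhere,
        e.g. gathering the scattered LIE posts into their own story."""
        stream = MultiParentStream(title, parents=parents)
        stream.updates.extend(updates)
        return stream

    amtrak = MultiParentStream("Amtrak Disruptions",
                               parents=[transit, new_england])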

Reporting and Curating News

Using Twitter and networks like it as a source for news and tips is nothing new, and my findings here only validate previous observations about the curation/validation approach to journalism made in publications like Post-Industrial Journalism.

Because I made the decision to cover the storm on the day it was set to hit, I had to act quickly. Using Matthew Keys’ list of news organizations, politicians, government agencies and reporters, supplemented by Google News alerts and Twitter searches, worked wonderfully. It provided a real-time stream of updates from every affected state that missed few major developments.

What’s worth noting here is that, in effect, the list creation that enabled my experiment was itself an act of journalism. Keys used his understanding of Twitter as a platform and his knowledge of trusted local Twitter accounts to create that list. I could have attempted to duplicate it, but I would not have been able to do so nearly as quickly or as well. The value of these kinds of skills and knowledge, and the ways they contribute to news organizations, should not be overlooked.

One of the interesting things I noticed while using this list was, to be blunt, how little I relied on news media or the aggregation of traditional articles to cover this story. Even though I embedded a lot of tweets from news organizations and journalists into posts, very few of those tweets added any kind of value for readers beyond facts that were already readily available. Most were just repeating information from official government or utility company statements, news conferences or releases that were already available, direct from the source, on Twitter! When I embedded media tweets, I usually did so only because I happened to read them first. The only value they provided was raw speed over government social media managers.

Thus, it would seem the days of the journalist providing value as a mere collector of information are over. That information is already flowing from the source and needs no arbiter. The people who added useful information were folks like then-Wall Street Journal meteorologist Eric Holthaus, who collected crowd-sourced weather reports from his followers and used his domain-specific expertise on the science of meteorology to report things like the moving location of the rain/snow line in Manhattan and predict when sleet would turn to snow.

Accordingly, I tried to add value not just by providing updates on our Branches but by deciding when not to. The more Twitter became an echo chamber, the more time I spent deciding whether an update was worth posting. It quickly became apparent that an update would not be useful to my readers unless the information was either very important or something I could tie into the overall narrative of the storm using some personal schema, research or storytelling.

I also decided to add value by focusing on accuracy in reporting rather than speed. To accomplish this, I developed a very useful heuristic for verifying changing and sometimes contradictory information coming from disparate government offices. Simply put, it was usually possible to verify things like wind gusts, snow totals and power outages by searching Twitter for the name of a location plus the name of the event (e.g., “cape cod snow”). It was almost always possible to find someone reacting to the event itself on Twitter, and projects like WNYC’s snowfall tracker made data readily available from non-official sources.
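
As a sketch of that heuristic (the function is my own construction; it only builds the public Twitter search URL, since nothing fancier was needed):

    from urllib.parse import quote_plus

    def verification_query(location, event):
        """Build the 'location + event' Twitter search used to cross-check
        official reports, e.g. verification_query('cape cod', 'snow')."""
        return "https://twitter.com/search?q=" + quote_plus(f"{location} {event}")

    # A burst of independent first-hand tweets matching the query
    # corroborates an official report; silence is a cue to keep digging
    # before publishing.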

There were some areas where this method failed. I reported that subways were running smoothly based on the MTA website’s reported service status, even though I later saw photos on the MTA’s Twitter account of above-ground stations in Queens made dangerously impassable by a foot of ice. I should have been more skeptical of the official definition of “good service.”

Additionally, although it was possible to verify the location of power outages using Twitter, it was not possible to verify the magnitude. Official power company Twitter accounts rarely reported on the number of customers without service until they could report that service had been restored to X number of customers, and their real-time outage maps seemed designed to obfuscate the magnitude of outages. Instead, that information largely came from government agencies or journalists who had talked to power companies. In the future, I would call utility companies and government agencies and compare their numbers with Twitter results and population maps to verify numbers.

I’m sure this dynamic still applies in lots of circumstances where official reports don’t pass the gut check and someone has an incentive to look effective even when they’re not. Verification has been made easier by real-time reports from people experiencing conditions on the ground, but that also makes the journalist’s instincts for trustworthiness more important, because falsehoods can spread just as easily as the truth through these networks.

Was It A Success?

My goal with this experiment was simply to show that the branching format is a successful news-telling device from both an editorial and a reader perspective. I believe we successfully demonstrated this, particularly from the editorial side. That said, several issues came up during the course of the experiment that merit further testing, especially how to weave more narrative storytelling into the streams to make them less Twitter-like.

I also recognize that this was one experiment with one type of story and that I need to pursue different types of stories to test the limitations of the method in different circumstances.

Next Steps

I’m not sure if I will be able to continue the Experimental Newsroom project, which is why I’m posting this. I did spend some time creating a first, very rough version of what such software would look like for both readers and editorial staff that I’m calling Koncurrent.

Maybe someday I’ll be able to put more time into that project. For now, I’m treating this post-mortem as a blueprint in the hopes that someone in the news industry trying similar stuff (and I know from my brief experiences at Fast Company that there are many) can learn something, however small, from my experiment.

I’m happy to discuss it in more depth with anyone. Feel free to reach out to me on Twitter (@gabestein).

By Gabriel Stein on September 11, 2013.
