Wednesday, December 6, 2017

“Perfect vs. Good Enough” – Writing Quality in the Online Age - Part 3

This is the last part of a three-part post looking at “perfection” in content creation in the online age. In this post, I’ll discuss the return of “perfection” but with a different meaning than it had historically and how that different meaning represents a sea change in technical writing.

In the old days, “perfection” was rarely defined but generally meant perfectly crafted writing. But perfectly crafted writing takes too long to create. Think about how often you heard a product manager say of the user guide “We don’t need it perfect. We just need it good enough and out there.”

The result was a tug-of-war within tech comm. One side claimed that the communication/writing side was what counted and the technical side was secondary. The other side took the opposite position. Those in the middle, like me, claimed that both were important. Communication was our business but without the technology, there was no way to share that communication with readers.

So how has the meaning of “perfection” changed?

In the old days, the code behind the content didn’t matter as long as the document printed correctly. The result was documents full of syntactically incorrect or junk code. But today, we have to assume that any content we write may be converted between formats and presented in some online form. Plus, the rapid pace of modern publishing means that problems that require manual intervention to correct are bad. This change is driving a new requirement for “perfection” but on the technical side. How?

·         Syntactic correctness is crucial to using conversion tools to move content between formats. No more hacks or Easter eggs. This means any old tools that aren’t syntactically correct will have to be abandoned, no matter how invested your company may be in them.

For example, a few years ago, I did some work for a company that used an HTML editor created around 1999, which followed acceptable syntax as it existed in 1999. Unfortunately, that syntax no longer works today. For example, it denoted headings not with the standard h1, h2, etc. tags but with span tags and class numbers. This worked as long as the company used that old tool. But that old tool was limited. In order to go beyond its capabilities, the company had to first correct the code. Doing so would have required using regular expressions, but the company didn’t want to spend the money. Checkmate.
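As a sketch of the kind of cleanup involved, here’s what a regular-expression pass might look like. The class names (“head1”, etc.) are invented for illustration; a real project would build the mapping from the legacy tool’s actual output.

```python
import re

# Hypothetical mapping from the legacy editor's span classes to
# standard heading tags. Adjust to match the real legacy output.
HEAD_CLASSES = {"head1": "h1", "head2": "h2", "head3": "h3"}

def fix_headings(html: str) -> str:
    """Replace <span class="headN">...</span> pseudo-headings with <hN> tags."""
    def repl(match: re.Match) -> str:
        tag = HEAD_CLASSES[match.group(1)]
        return f"<{tag}>{match.group(2)}</{tag}>"
    pattern = r'<span class="(head[1-3])">(.*?)</span>'
    return re.sub(pattern, repl, html)

print(fix_headings('<span class="head1">Installation</span>'))
# <h1>Installation</h1>
```

Even a simple pass like this needs testing against the real files, since legacy editors rarely emit their markup consistently.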

·         Good coding practice is crucial to taking advantage of responsive design.

For example, if there’s a problem aligning three images horizontally, common practice is to insert a table on the page, hide the cell borders, and insert the images in consecutive cells. This works well until users view the page on a mobile device. Now, rather than shifting automatically from a horizontal layout to vertical, the images remain horizontal and users have to scroll horizontally. Horizontal scrolling isn’t the end of the world but it doesn’t take advantage of the capabilities of responsive design.

·         Good SEO (search engine optimization) practice is crucial for effective searching. As more and more content goes online, you want your content to show up at the top of the search hits list and SEO will help. Of course, all your competitors are thinking the same thing…

·         Good metadata and taxonomy practice is crucial for machine-driven assembly of small chunks of content – Information 4.0’s “molecular content” or “granular content” – into output quickly to respond to user requests in transitory contexts.

·         Cross-platform-appropriate linking is crucial for usability.

For example, authors often use popups for online help to display small bits of content and, more important, to keep users within the topic and task thread. (Using jump links lets users get out of the topic and thread and perhaps lose their train of thought.) However, if you’re creating online help that’s responsive for use on desktop screens and mobile devices, be aware that popups will render as jumps on mobile devices, harming your navigational design on mobile devices.

·         Accurate linking is crucial for content credibility. Broken links happen but they can make users question the quality of the content and thus its credibility. And because the same content may appear on multiple devices, possibly on multiple network nodes, the risk of broken links goes up.
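Link accuracy is also something you can audit mechanically at build time. Here’s a minimal sketch that checks every internal href in a set of topics against the files actually present; the file names are invented, and a real audit would also verify external (http) links separately.

```python
import re

def audit_links(topics: dict) -> list:
    """Return (topic, href) pairs whose target isn't among the known topics.

    `topics` maps file names to their HTML content. External links
    (starting with "http") are skipped here.
    """
    broken = []
    for name, html in topics.items():
        for href in re.findall(r'href="([^"]+)"', html):
            if href.startswith("http"):
                continue
            if href.split("#")[0] not in topics:
                broken.append((name, href))
    return broken

topics = {
    "index.htm": '<a href="setup.htm">Setup</a> <a href="missing.htm">?</a>',
    "setup.htm": '<a href="index.htm#top">Home</a>',
}
print(audit_links(topics))
# [('index.htm', 'missing.htm')]
```

Running a check like this on every build catches broken links before users do.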

·         Future-proofing research is important to understand emerging platforms or technologies and to consider their effect on your content authoring.

For example, chatbots have suddenly become hot. I think it will be years before most companies create them but things can change unexpectedly.

What about the writing side? Is good writing still important? Yes, in a way.

·         Accuracy is obviously always important.

·         Good, consistent style is important, expressed through devices like parallelism, to support retention.

·         Good, consistent tone is important, expressed through devices like using active voice for the second person – e.g. “you”.

·         Good punctuation is important sometimes.

For example, the obsession over the number of spaces to use after a period seems silly to people who aren’t in tech comm (and to many who are). Other punctuation issues may seem just as silly until you point out the problems they can cause. (One programmer in a class that I taught said that punctuation rules were “a load of crap”. I pointed out that coding has its own punctuation rules and that breaking them might cause a program to crash. I then suggested that he consider the difference between “Let’s eat, grandma.” and “Let’s eat grandma.” He agreed and made it up to me by telling me what he claimed was the world’s worst accordion joke.)

·         What’s less important is creative perfection, le mot juste. We don’t have time for it and most readers won’t appreciate it anyway.

So “perfection” is a requirement again in tech comm, except that the word now has a different meaning. It still matters on the communication/writing side but is crucial on the technical side and represents the continuing ascendancy of that side of tech comm.

Thursday, November 30, 2017

Information 4.0 - Tech Comm of the Future?

This article was originally published in ISTC Communicator, Winter 2017.

Imagine that you’re on an oil platform in the North Sea. Strain gauges in a drill tube detect metal fatigue and automatically start the process of replacing the tube.

Or imagine that you’re looking at an exhibit in a museum. Your smartphone knows which exhibit you’re at and automatically describes it. Move to the next exhibit; your phone automatically tells you about that one. Your phone also knows that it’s noon and suggests lunch. It also knows that it’s Friday, when the cafeteria offers a favorite dish of yours until 1 PM, and mentions that as an inducement.

Or imagine other scenarios where a computer can take action or offer information or assistance based on some context. That’s a large part of what Industry 4.0 and Information 4.0 offer. In this article, I’ll look at both but focus on Information 4.0 as being of primary interest to technical communicators.

What Is Industry 4.0?

Industry 4.0 is a model for factory automation and data exchange that originated in Germany. It’s based on the Internet of Things (IoT), AI, and the cloud, plus a new standard called iiRDS (the International Standard for Intelligent Information Request and Delivery), RDF (the Resource Description Framework from the World Wide Web Consortium), and other technologies.

The goal, to seriously oversimplify, is to create factories with machines that are self-governing through, in part, “context sensing”, like the drill tube that can detect metal fatigue and call for servicing on its own. For more examples, see “Context Sensing and Information 4.0” by Ray Gallon in the November 2016 issue of TCWorld.

So What Is Information 4.0?          

Information 4.0, according to its evangelists Andy McDonald and Ray Gallon, is the “…informational component of Industry 4.0”. Andy and Ray have formed the Information 4.0 Consortium, which is worth checking out. Think of Information 4.0 as a conceptual umbrella for current and emerging technologies, either directly or peripherally related to technical communication.

Why does this matter to technical communicators?

For industry, Industry 4.0 is most important. And, as Adobe’s Stefan Gentz noted in a conversation with me at TCUK, what industry needs now is standards, protocols, and use cases.

Technical communicators are unlikely to create those standards, protocols, and use cases but we may be involved in documenting them. Doing so will call for familiarity with a vast range of new technologies.

Furthermore, we’re likely to use those technologies to document things outside the bounds of Industry 4.0. That will change technical communication the way word-processing did in the early 1980s and HTML in the late 1990s. That broad applicability of Information 4.0 is why it’s the focus of this article.

Characteristics of Information 4.0

Its evangelists postulate that Information 4.0 content will have six primary characteristics.

  • Molecular – Replaces documents with “information molecules” that can assemble themselves into “compounds” based on a “state vector”.
  • Dynamic – The context may change so the information molecules must change too.
  • Offered rather than delivered – Information is available as needed but not pushed on users. (Think of a dynamic help system that’s always available but discreetly tucked into a corner of the screen.) Also, because it’s hard to predict what information users might need and whether users have the background knowledge needed to understand a particular molecule, we may need to create different molecules containing different versions of the information.
  • Ubiquitous – Information has to be available everywhere, and has to be searchable.
  • Spontaneous – Information has to display based on the context of the information requests.
  • Profiled automatically – Information has to fit users’ needs as closely as possible rather than being generic.

Are these characteristics implementable today? Here are some of the issues.


Molecular

Think of molecular content as topic-based authoring taken to extremes. “Fragments” of information are available at any given moment to fit the context of requests for information. This will boost the number of files.

But as the number of files increases, so do the hardware and software requirements – more RAM, faster hard disks, and faster networks. Can your tools and networks cope with huge numbers of files or will you have to upgrade your hardware or tools, or change tools? (And the definition of “huge” is subjective. I meet many people who have five-hundred files in a Flare project and consider that huge. The largest I’ve ever worked on had 176,500 files. The largest I know of is close to 900,000.)

As the number of files in a project grows, so does the need for project management rigor. The need for a project description becomes crucial. Even simple things like file naming conventions have to be defined and followed with no deviation. Without these and similar steps, the work may go out of control.

Assembly into compounds will make extensive use of metadata. Current tools offer some metadata, like conditional build tags, but the Information 4.0 metadata will have to be open source, such as RDF. That will add a new and unfamiliar requirement for authoring.

The “state vector” that drives the assembly process is a set of temporary context-states – an advanced form of today’s context-sensitivity. Somebody or something will have to define and maintain them.
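As a toy sketch of what state-vector-driven assembly might look like: each molecule carries metadata, and assembly selects the molecules matching the current context. The tags and values here are entirely invented, and a real system would use an open standard like RDF rather than Python dicts.

```python
# Each "molecule" carries metadata; a "state vector" describes the
# current context. Assembly selects the molecules whose metadata
# matches every entry in the state vector.

molecules = [
    {"text": "Replace the drill tube.",
     "meta": {"role": "technician", "device": "mobile"}},
    {"text": "Drill tube stress history.",
     "meta": {"role": "engineer", "device": "desktop"}},
    {"text": "Safety checklist.",
     "meta": {"role": "technician", "device": "desktop"}},
]

def assemble(molecules, state_vector):
    """Return the text of every molecule matching all state-vector entries."""
    return [
        m["text"]
        for m in molecules
        if all(m["meta"].get(k) == v for k, v in state_vector.items())
    ]

print(assemble(molecules, {"role": "technician", "device": "mobile"}))
# ['Replace the drill tube.']
```

The hard part, as the text notes, isn’t the selection logic; it’s defining and maintaining the state vectors and the metadata vocabulary behind them.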

Is it feasible today? Yes, in a limited way. Today’s tools support topic-based and fragment authoring, but not in the numbers needed by Information 4.0. Also, molecules and fragments created by today’s tools are meant to be combined into one output rather than live on their own. That may change how we create those molecules and may lead to machine-created content and AI.


Dynamic

The molecules will have to be continuously updated to match changing state vectors, effectively in real time. Furthermore, the molecules will have to live in open databases rather than being stored on authors’ local PCs. Compilation also becomes a bottleneck in some cases, because users may not want to wait out the compilation time, just as they don’t want to wait out slow-loading web sites today.

We’ll also need fast, reliable network access to send the context state to the processor and the updated molecules back to the user quickly. There will also need to be some local storage for situations where network access is slow or nonexistent.

Is it feasible today? Yes, in a limited way. Current tools are starting to let us metatag content but that’s still in an early stage. Current tools will also have to create fragments that don’t contain tool-specific codes that may not work in an open standard environment. (MadCap Flare does this with its Clean XHTML output.) Our tools will also need to support local storage.

Offered Rather Than Delivered

Offering rather than delivering means breaking information into the smallest possible molecules. However, defining the parameters of those molecules and creating them using traditional writing methods may not be fast enough. We may need machine-generated content.

Is it feasible today? Yes, in a limited way. The biggest bottleneck is the need to create molecules rapidly enough.


Ubiquitous

The molecules have to be available from anywhere and searchable. HTML5’s responsive design allows ubiquity across multiple devices and platforms. SEO (search engine optimization) increases searchability and findability. However, the need for ubiquity and findability rules out hard-copy outputs, perhaps even PDF. This will be a wrenching move for many companies today.

Is it feasible today? Yes, to a surprising degree. Responsive design lets us create one output that runs on multiple devices rather than having to create one output for each device. But the compilation time may be longer than users are willing to wait.


Spontaneous

The “contexts” that trigger spontaneity are more advanced forms of today’s context-sensitive help. They might include device orientation (how users hold the device), location detection, external states like temperature, and more. The contexts have to be sent to the processor to let it alter the information to fit the context. This requires methods of context detection, metadata (again), plus fast networks to get the new content back to the user fast enough to make it useful.

Is it feasible today? Yes, in a limited way. We’ve been creating context-sensitive help since the dawn of online help. What we don’t have in the technical communication world is the spontaneous side, where information might change dynamically as the context changes. For that, we have to look to the web and mobile app worlds. Since online help moved from RTF to HTML in 1997, technologies from those worlds have been available to technical communication projects, but I’ve almost never seen them used.

Profiled Automatically

User profiling will be done automatically based on the context, again requiring major and continuous use of analytics.

Is it feasible today? Yes, in a limited way. We’ve been able to create targeted online help for years using conditional build tags and placeholders like variables and snippets. The problem is that we can’t create the output dynamically. Instead, we generate a fixed output for user A. To create an output for user B, we change the conditions and placeholder values and regenerate, which adds a delay. Our online help projects have offered search capabilities for years but I rarely encounter companies that save and use the search data. I have never encountered a company that uses that search data to automatically update and regenerate the material. (If your company does, please contact me.)

How Will It Affect Tech Comm?

Here are some of the more obvious likely effects.


  • We won’t write documents. Instead, we’ll do single sourcing with a twist – creating information molecules that can stand on their own as well as being combined into larger outputs. This may affect users of document-oriented tools like Word and FrameMaker the most. (We can use those tools to create molecules – e.g. topics – but many authors will find true topic-focused tools to be more convenient.)
  • No more traditional continuity – e.g. no more writing “as described above” because “above” may now be in a separate molecule.
  • No more local formatting since local formatting will probably break automated processing tools that look for styles.
  • The need to structure content organizationally, using templates, but also semantically using metadata.
  • Dealing with dynamic wording. We say “click the button” when writing for large screens but “tap the button” when writing for mobile. What do we do when creating responsive output that runs on both large screens and mobile devices? One answer today is to use an intermediate word like “select”, but that doesn’t ring quite true for either desktop or mobile. There are ways to alter wording dynamically using CSS, and these should spread as more authors start creating responsive output, but authors will have to work in the code until our tool vendors support it. Which will call for greater facility with…


  • We’ll need more technical skill. We’ll have to be familiar with, or more familiar with, metadata, CSS, networks, and the other technical communication technologies mentioned above.
  • We’ll need familiarity with technologies and standards from outside technical communication, like iiRDS and RDF.
  • We’ll need to follow good coding practices and standards.
  • We’ll need up-to-date tools.
  • And we’ll need formal training in how to use those standards and tools. Peer-to-peer training is common but too often simply passes bad practices on from one generation to the next.

Corporate Role

  • Nobody cares about “documentation”, but “content” is cool, possibly even a revenue generator and/or a branding mechanism. This may help technical communicators become more of a force within the company by changing the company’s perception of tech comm.
  • We have to stop trying to block new technologies in technical communication. Someone in an audience once responded to my discussion of social media by saying “over my dead body.” The technology will still go ahead, but around her, as her career has dead-ended.

Four Major Issues on the Business Side

You may be thinking that all this sounds interesting, with the promise of challenging work and new lines of work. But Information 4.0 will have to overcome four business hurdles.
  • Does it support my company’s strategic and business direction? If it does not do so, in ways that are clearly and quantitatively demonstrable, it either won’t be funded at all or will be a quickly-forgotten one-off.
  • Does it have high-level management support? Even if Information 4.0 supports the company’s strategy and business, other initiatives within the company probably do as well. Only the ones that get management support will survive, or at least have a chance to survive.
  • Does company culture support this higher technical complexity? The answer is often no. Trying to force Information 4.0 into an unsupportive culture will cause turnover in the documentation groups; you’ll get people who are comfortable with the technology but lose people who have the experience and the institutional memory.
  • Are goals and terms clearly defined within my company? As silly as this sounds, it’s very possible that people in your company, especially management, may not understand the terms we take for granted. They may also not have any clearly defined goals for an Information 4.0 initiative. Any Information 4.0 initiative is going to require an educational component on your part.

There are other business issues, plus technical ones. See my blog for periodic updates on Information 4.0 issues.


Much of Information 4.0 is still conceptual, but it represents a continued increase in the technical side of technical communication and offers multiple paths forward for tech comm in the years ahead. It’s worth watching.

Thursday, November 9, 2017

Correcting An Omission In a Previous Post

On November 1, I put up a post about best practices for using MadCap Flare. The post included some tips from Craig Wright of StrayGoat Writing Services, but I didn't include his contact information. Sorry about that.

Craig's web site is

Thanks, Craig.

Wednesday, November 8, 2017

Creating Image Maps in Flare

I was asked on LinkedIn whether Flare supported image maps and how to create them. I don't see image maps used much anymore but they can be useful. Flare makes it easy to create them with its built-in image map editor. Here's how:
  1. Insert your graphic in the topic.
  2. Right-click on the graphic and select Image Map from the dropdown menu.
  3. Click on any of the three "New... Mode" buttons on the toolbar and draw the desired shape. To resize a shape, click on and drag one of its drag handles. To move a shape, just drag it. There are various other image options, but the three mode buttons are the core.
  4. After you create the shape, double-click it. The Area Properties dialog box opens. (It's basically the same as the Hyperlink Properties dialog box.)
  5. Specify the link parameters like you would for any type of link and you're done.

Be sure to test the hot spots in your browsers. I haven't used image maps in a while so there may have been some changes in the browsers that break the hot spots, but they did work fine on my PC using Chrome, IE, and Edge.

Wednesday, November 1, 2017

More Best Practices for Starting to Work with MadCap Flare - Standards

Flare has so many options that it can be hard to decide where to start, even if you've taken a class and especially if you haven't. New Flare users often just jump in. But this can get you off to an inefficient start, perhaps even a bad start, and create problems that ripple down to later projects.

I recently wrote a post for MadCap’s MadBlog that reviewed best practices for starting to use Flare, and then expanded that post into a longer white paper for MadCap.

In this post, I’ll take the topic of best practices further by expanding on the Standards section of the white paper. The original MadBlog post, the white paper, and this blog post are based on twelve years as a certified Flare consultant and trainer and eighteen years of hypertext consulting and training before Flare. However, always consider your specific situation before following any suggestions. With that, let's look at the standards issue in more detail.

The more you standardize, the more consistent your projects will be and the less authors will have to guess about what setting to use for a given task or feature. Things you might standardize include:

  • Graphic file formats – Many graphic formats are available today and Flare supports most or all the ones that you’re likely to use. The traditional approach is to use GIF for screen shots and JPG for photos. This works but you’re maintaining two sets of files, GIF and JPG.

    You may find it more efficient to use PNG for all graphics, including those for your print outputs. A PNG’s quality may not be as good as that of an EPS but your users won’t know because they haven’t seen the EPS so they won’t have anything to which to compare the PNG.

  • Conditional build tag usage – Condition tags are the core single sourcing feature but they can go out of control if you don’t set rules for their use. (I’ve talked to two firms this year that used tags with no initial rules about when to insert and how to call them. One company had a project with about 1,000 topics and 1,500 “unruly” tags. The other had about 1,000 topics and 15,000(!) such tags. The result was that new authors couldn’t figure out what tags to include or exclude for a given target and when to add new tags.)

    Creating and inserting tags is flexible – you can apply tags to a character in a topic, a paragraph, an entire topic, a group of topics, a folder, and any other element in a Flare project. And it’s easy – assign a name, pick a color, add an optional comment, and you’re done. In fact, it’s so easy that new authors often gloss over the crucial first step – defining what they’re trying to do and documenting it. So my suggestion is to first decide what you want to do, then define rules for inserting and using the tags, such as the smallest element you can tag. Document this. And test the results to make sure you’re getting what you expect.

  • Variable and snippet usage – Two excellent points from Craig Wright:

    “Try to plan your content reuse at an early stage, especially if there are multiple writers involved. It’s no good creating snippets if the other authors aren’t aware of them.”


    “Make sure your snippets are well organized. If authors struggle to find the snippets they need, they may create their own and duplicate existing content.”

    A few comments of my own regarding Craig’s points:

    o   Get all authors involved in the planning early on.
    o   Define the variables and snippets in two groups – project-specific and shared.
    o   Set up a parent/child project structure using the Flare Project Import feature and put shared variables and snippets in the parent project for easy downloading to the child projects.
    o   Let all authors know when a new variable or snippet has been added to the shared sets.
    o   Make sure that all authors know that any changes to shared variables or snippets must be made by the “owner” of those files, not by the individual authors.
    o Make sure that the snippets are clearly named.
  • Index entries – My experience is that traditional indexing is declining among Flare authors, with search taking over. This makes sense since search is easier to implement, but an index can do things that a search can’t. For example, if I refer in a topic to a sandwich made of cold cuts on a tubular loaf of bread as a “sub”, searching for “hoagie” won’t find the topic because the search is looking for the search term in the topic text. But an index lists keywords attached to the topics rather than terms in the topics, so it’s easy to add the keyword “sub”, plus the keyword “hoagie, see sub”, and so on. This makes it more likely that users will find the right topic. (Flare does let you add search synonyms but this can be a tedious job.)

    If you’re going to create indexes, define some rules to make the entries structurally consistent from project to project. Here are a few:

    o   Decide if the verb should use the infinitive (“to print”, then remove the “to”, leaving “print”), or the gerund (“printing”). I prefer the infinitive but that’s up to you.
    o   Decide whether the noun should be plural (“documents”) or singular (“document”).
    o   Decide whether to use sub-entries. For example, the term “BBQ” might be a first-level index entry, with “Carolina”, “Tennessee”, and “Texas” as sub-entries. Note that you could also use those sub-entries as first-level entries – e.g. “Carolina BBQ”, “Tennessee BBQ”, and so on.
    o   Consider using inversing. If you include the first-level entry “print dialog box”, include “dialog box” as a first-level entry with “print” as a sub-entry below it.
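The “hoagie, see sub” idea above amounts to a lookup table sitting in front of the index: a synonym the user might look up resolves to the keyword actually attached to topics. A sketch, with terms invented for the sandwich example:

```python
# "See" references map a term the user might look up to the keyword
# actually attached to topics in the index.
see_refs = {"hoagie": "sub", "grinder": "sub"}

# The index itself maps keywords to the topics they're attached to.
index = {"sub": ["sandwiches.htm"], "print": ["printing.htm"]}

def lookup(term: str) -> list:
    """Resolve a 'see' reference if one exists, then return the topics."""
    keyword = see_refs.get(term, term)
    return index.get(keyword, [])

print(lookup("hoagie"))
# ['sandwiches.htm']
```

This is exactly the gap that full-text search leaves: the word “hoagie” never appears in the topic, so only a keyword layer like this can route the user to it.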
  • Hyperlinks vs. cross-references (xrefs) – Hyperlinks have been with us since the beginning of online help but they have two limitations.

    o   The link text doesn’t update if the title of the target topic changes. Let’s say you link the term “sub” in topic A to the “Subs” topic. You then change the title of the “Subs” topic to “Hoagies”. But the link term remains “sub”; you must find and change it by hand. In contrast, a cross-referenced term is programmatically linked to the title of the target topic, so changing the target topic’s title automatically changes the link term too.
    o   A hyperlink keeps the format of a hyperlink when you output to print, such as PDF, and the link works if the user is viewing the material on the screen. But when the user prints the material, the link obviously doesn’t work. But a cross-reference will literally change its format from a link style to a page reference – “information about spaniels, see page 225”.
So a cross-reference is a better choice if you generate online and print targets. The limitation is that a cross-reference won’t work if you create a link from a topic in a target to an external file, like a URL or PDF. In that case, you must use hyperlinks. That limitation aside, cross-references are the best and most flexible choice for links.

  • File naming – Setting file naming conventions is, surprisingly, one of the hardest tasks in working with Flare. There are two naming issues, programmatic (thanks to Stephanie M.), and semantic.

    o   Programmatic conventions are straightforward. Will you use all lower case or mixed case? Will multi-word file names use spaces, underscores in place of spaces (file_name), or “camel case” (FileName)? Check with your IT department.
    o   Semantic naming conventions, which indicate what a file contains, are harder, and you’ll want to involve all the authors in the process. For example, you might decide to name graphic files by the name of the screen followed by the type of screen, such as “Dialog Box.” The result is “Print Dialog Box”, easy to find in a list if you know the name of the screen that you want. Alternatively, you could name graphics by the type of screen followed by the name of the screen, such as “Dialog Box – Print”, easy to find in a list if you know that the desired screen is a dialog box but don’t remember which one.
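Whichever programmatic convention you pick, it’s worth enforcing mechanically rather than by memory. A sketch assuming a lower-case-with-underscores rule and a few extensions (the rule and file names are invented; adjust the pattern to your own convention):

```python
import re

# Hypothetical rule: lower case, words separated by underscores,
# one of a few known extensions. E.g. "print_dialog_box.png".
NAME_RULE = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*\.(png|htm|css)$")

def check_names(names):
    """Return the file names that violate the naming convention."""
    return [n for n in names if not NAME_RULE.match(n)]

print(check_names(["print_dialog_box.png", "Print Dialog Box.png", "toc.htm"]))
# ['Print Dialog Box.png']
```

A check like this, run before check-in or as part of the build, keeps the convention from drifting as new authors join the project.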
Two final planning thoughts in addition to documenting your standards, getting trained, joining a user group, and contacting support as discussed in the white paper:

  • Decide what your priority and secondary output targets are. Some features, like togglers, work fine online but not in print. So deciding on your priority target will help guide your selection of Flare features.
  • Decide if mobile is in your future. If it is, note that some features, like popups, don’t work on mobile devices. So knowing if you’re going mobile will also help guide your selection of features. Note that this can apply in other project areas as well, such as making sure that all movies you create using Mimic are in HTML5 format because SWF format won’t display on iOS devices.


I’ll partly repeat what I said at the end of the white paper – Flare is a big powerful tool. If you’re new to it, it may not be obvious what can even be standardized. By following the suggestions in the white paper and here, you’ll help get your Flare work off on a sound footing and help keep it that way.

Monday, October 30, 2017

“Perfect vs. Good Enough” – Writing Quality in the Online Age - Part 2

This is part 2 of a three-part post examining the issue of “perfection” in content creation in the online age. 

The first part, which I posted on October 10, is a column I wrote in 2001 discussing an event from 1998. (Stay with me here...)

This second part is a column that I wrote in 2009 discussing what had changed since the first column in 2001. Look for the third part, in late November, to revisit the issue of “perfection” in light of emerging trends in 2017.

In this post, I’ll list the core points of a 2009 Wired article on “Good Enough” technology and some ideas about their impact on technical communication. First, the Wired article…

“…It’s … the latest triumph of what might be called Good Enough tech. Cheap, fast, simple tools are suddenly everywhere. We get our breaking news from blogs, we make spotty long-distance calls on Skype, we watch videos on small computers… The low end has never been riding higher.

So what happened? … technology happened. The world has sped up, become more connected, and … busier. As a result, what consumers want from the products and services they buy is fundamentally changing. We now favor flexibility over high fidelity, convenience over features, quick and dirty over polished. Having it here and now is more important than having it perfect. These changes run so deep and wide, they’re actually altering what we mean when we describe a product as “high quality.”

And it’s… everywhere. As more sectors connect to the digital world… they too are seeing the rise of Good Enough tools … Suddenly, what seemed perfect is anything but, and products that appear mediocre at first glance are often the perfect fit.”

Two examples from the article…

·         MP3, whose audio quality is lower than the CD standard but whose greater file compression lets us cram hundreds of songs into devices the size of a pack of cards.

·         The netbook, with minimal storage and power but which is light, portable, and cheap compared to traditional laptops that have more features, most of which may go totally unused.

The examples offer “flexibility over high-fidelity, convenience over features, and quick and dirty over slow and polished” and each has altered its market or created new markets. How might these factors affect technical communication? Here are three ideas – none new but, based on my training and consulting experience, worth repeating:

·         A major change since 2001 is the appearance and partial acceptance of user-generated content for online use. “Let the engineers write the doc” has been a laugh-getter for years within technical communication but the idea keeps coming up for one good reason – the engineers (the subject matter experts) know the material. And their content has now been appearing for years in blogs, wikis, and tweets.

I don’t see user-generated content replacing traditional online documentation/help but extending it. The documentation/help will still contain stable core content but link to user-generated content in blogs or wikis containing new, changeable content. Technical communicators and user-authors form a virtual team. If you create online documentation/help but don’t link it to your company blogs or wikis, take another look.

Similarly, video and animation have been around for years but not often used because of the costs and required skills. But lower prices and simpler tools are putting video and animation into more hands – e.g. user-generated. It may be “movies” created quickly using tools like Adobe Captivate, TechSmith Camtasia, or MadCap Flare, or from video bloggers. (YouTube may also be a source. You may not find the perfect video there, but there are so many clips about almost any topic that you may find one that’s good enough. The volume of clips provides flexibility, and the material is available quickly, even if the production values may be “dirty”.)

So rather than discount the idea of user-generated content, we should be actively helping to create, organize, use, and distribute it in the first place.

·         Software-driven writing features like templates and style sheets have existed for years but are still not used as often as they should be. One reason is that the settings in these control files are often not quite “right”: something in your material deviates from a setting in the control files. You could modify the setting, but it’s often easier to set up the non-standard material by hand. The result? You get perfect content, but at the expense of losing the consistency and automation provided by the control files.

Instead, consider setting up your control files to handle your common needs and ignore or modify other needs that are too difficult or marginal to handle in the control file. For example, you might create a “first-paragraph” style with extra space above for use in hard-copy, but can you replace that style with the “body” style and live with the “good-enough” result?

So the results may lack the perfection that you got by hand-tweaking the material, but you get the good enough, quick-and-dirty convenience of bringing programmatic control to your writing tasks. (Wired made an interesting point about users coming to accept MP3 quality as the standard rather than the higher quality of CD because they used MP3 more and got used to it. As more and more readers get material online, they may come to accept online style quality as the standard.)

·         Finally, consider lowering your writing standards – not to write badly but to change the definition of quality, standardize that definition, and write to it.


Many technical communicators started in hard-copy and transitioned to online, a transition that involved some hard changes, including:

·         Adding tasks once performed by other people, like editors, to the writer’s workload.

·         The speedup of the work, which cost us the time we might once have had to get material “perfect.”

·         The appearance of media like blogs and wikis whose need for immediacy runs counter to the idea of perfecting the writing.

Since the old column appeared, I’ve seen more and more technical communicators accept the idea of “good enough.” But many still fight it, which is a losing battle. The field has seen many changes, each fought but not stopped. This is one more. If we fight it, the change will occur but without us. That would be a shame because these “good enough” technologies and methodologies are actually fun and highly challenging.

Wednesday, October 25, 2017

Correction To My Post About Flare 2017 R3

In my post about Flare 2017 R3 on Oct. 24, I said:

"If you copy the content in this topic, paste it into Word, and generate the readability statistics in Word, you’ll get different results. When I tried it, Word gave a readability of 65.1 and a grade level of 7.0, both still excellent but different from Flare... This may be caused by Flare’s using the same "Flesch Reading Ease” and “Flesch-Kincaid Grade Level” algorithms as Word but with different options enabled."

As it turns out, the problem is not one of having different options enabled in Flare vs. Word but rather the fact that Flare and Word interpret what a "sentence" is somewhat differently. So the feature is still as useful as I said it was in yesterday's post, but the Flare and Word results are not directly comparable.
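Neither Flare nor Word documents its exact sentence-splitting rules, but the published Flesch formulas make it easy to see why those rules matter. Here’s a minimal sketch; only the two formulas come from the published definitions, and the word, sentence, and syllable counts are invented purely to show the effect:

```python
# The published Flesch formulas. Both tools presumably start from
# these; the difference lies in how each one counts "sentences."

def flesch_reading_ease(words, sentences, syllables):
    """Higher is easier to read (roughly 0-100)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Approximate U.S. school grade level."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# The same hypothetical 100-word, 130-syllable passage, counted two
# ways: one checker sees 8 sentences; another (perhaps treating
# headings or list items as sentences) sees 12. The scores shift
# even though the text is identical.
for sentences in (8, 12):
    ease = flesch_reading_ease(100, sentences, 130)
    grade = flesch_kincaid_grade(100, sentences, 130)
    print(f"{sentences} sentences: ease {ease:.1f}, grade {grade:.1f}")
```

Shorter “sentences” mean fewer words per sentence, which raises the reading ease and lowers the grade level – which is why the two tools’ numbers can both be internally consistent yet not directly comparable.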

Tuesday, October 24, 2017

Some New Features in MadCap Flare 2017 R3

MadCap released Flare 2017 R3 a few days ago. In this post, I’ll look at two of the new features that I think are most useful.

Text Analysis

In the past, one problem that I had while writing topics was that I couldn’t determine the readability of the topic content. Flare didn’t offer a readability checker. I had to output a Word target and run that through Word’s readability checker. This process worked but was a bit clumsy. The new text analysis feature seems to offer a simple solution to that problem.

Selecting Text Analysis on the Tools ribbon opens the Text Analysis pane, shown below.

I selected the readability scores option for one topic (from the basic training class), shown below, with the results also shown below.

Flare shows good results with a green bar color, fair with yellow, and poor with red. So this topic has a fair reading ease score of 76 and a good grade level score of 3.9. (Both actually excellent.) I can check any content from one topic to an entire project.

Be aware of one thing when using this feature. If you copy the content in this topic, paste it into Word, and generate the readability statistics in Word, you’ll get different results. When I tried it, Word gave a readability of 65.1 and a grade level of 7.0, both still excellent but different from Flare’s results. This may be caused by Flare’s using the same "Flesch Reading Ease” and “Flesch-Kincaid Grade Level” algorithms as Word but with different options enabled. (We can’t yet modify those options in Flare.) This peculiarity aside, I’m delighted to see the text analysis feature because it simplifies my Flare workflow.

Style Inspector

The Style Inspector is a short-cut way to perform stylesheet tasks without opening the full Stylesheet Editor. It lets you see what styles your text is using, modify a style’s properties, add new properties to a style, add a comment to a style, even convert local formatting to a style on the stylesheet.

Selecting Formatting Window in the Styles group on the Home ribbon opens the Formatting pane with the Style Inspector tab selected, as shown below.

In this example, I put the cursor on the topic’s title in the left pane and:

  • The Style Inspector on the right tells me that the title uses h1, the font-size is 140%, and so on.
  • There are no local style attributes, as indicated by that empty pane at the top.
  • I could add local formatting by clicking the + sign in the top pane or an additional property by clicking the + sign in the lower pane.
  • I could change the value of one of the properties by clicking the ellipsis to the right of that style.
  • I can see which stylesheet controls this style (here, “ipswitch_styles.css”) and see the path to that stylesheet by hovering over its name.
  • I can add a comment to a style by first clicking any of the style’s properties, then right-clicking the style itself and selecting Add Comment.

All without having to open the Stylesheet Editor. (However, the stylesheet opens in the Stylesheet Editor if you add a property or change the value of a property since you’ll have to save the stylesheet to register that change or addition.)


I like how the Style Inspector makes it easy to manage my style usage. Personally, I still prefer to go into the full Stylesheet Editor but using the Style Inspector means I don’t have to. That’s useful if you’re new to styles and find the Stylesheet Editor overwhelming.

I especially like the text analysis feature because it’s completely new and solves a problem – the inability to get readability statistics in previous versions of Flare.

Between these two features, plus Microsoft Excel file import, last action repeat, a thesaurus, and some snazzy new templates, Flare 2017 R3 is a solid and useful release.

Tuesday, October 10, 2017

“Perfect vs. Good Enough” – Writing Quality in the Online Age

Part 1

In August 2009, Wired Magazine published an article entitled “The Good Enough Revolution: When Cheap and Simple is Just Fine” by the Wired Staff in the Gear column (8/24/09). Its theme – “cheap and simple beats perfect almost every time.” That article reminded me of a column I wrote in 2001 (“’Perfect vs. Good Enough’ – Writing Quality in the Online Age”) that discussed why technical communicators needed to change our definition of quality for the dot-com era.

On rereading, the 2001 column still seemed relevant. So, this column, part 1, presents the core points from the old column from 2001. Part 2 will revisit the 2009 column to present the core points of the Wired article and how they might apply to technical communication. Part 3 will revisit the issue of quality in the emerging age of taxonomies and semantic markup.

First, the old 2001 column, with my comments in italics.


I typically get one or two calls per week from prospective clients or people looking for writers with certain skills. Three years ago (1998), I got a call from a dot-com looking for a “content provider.” It was the first time I’d ever heard that title so I laughed and said “So you’re looking for a writer?” and was taken aback when the caller vehemently said “No! We don’t want a writer.”

I asked why. The answer – “… writers get too focused on perfection… we don’t have time for. If we wait until the material is perfect, our competitors will beat us to market. We do not need it perfect; we just need it good enough.”

I mentioned that conversation often. Two people used it as the basis for presentations in the Bleeding Edge stem at the 2001 annual (STC) conference – one discussing the issue from a writing perspective, the other from a tools perspective. Here, I discuss it from two other perspectives – trends and standards.


Four major trends affect the issue of writing quality:

·         Time-to-market is getting shorter.

·         Editorial positions are being cut back or eliminated in many companies.

·         Single-sourcing is becoming increasingly complex.

Single-sourcing isn’t new. If you used RoboHelp to create WinHelp and hard-copy in 1995, you were single-sourcing. But today’s single-sourcing technologies work best with rigorously structured content. We can no longer get away with “winging it”.

By supporting “good enough” as opposed to “perfect”, aren’t I calling for exactly that kind of winging it? No. It’s not winging it if you write to a standard; it’s just that the standard may call for “good enough.”

·         New competitors are entering our field.

Technical writing was once unglamorous and fairly low-paying. Today, companies are starting to view content – including documentation – as a strategic asset. That shift has attracted consultants looking for new business. But technical writers also want that work.

Outsourcing is a new competitor. Technical writers are upset over the perceived lower quality of outsourced material, and over the jobs lost to it. But consider the business perspective: if outsourced material has 50% of the quality but is written at 25% of the cost, a company may decide that’s a worthwhile tradeoff.

What are the effects of these trends?

·         Shorter time-to-market means less time to write perfectly or fix stylistic inconsistencies. (Without editors, there may be no one to fix or even notice those inconsistencies.) So we need to define the material’s look and style before the project starts. We need standards and consistency at a human level.

·         Increasing single-sourcing complexity means that consistency and simplicity are key to getting our material into a form for re-use. We need standards and consistency at a structure and format level.

·         Consultants often use formal methodologies to do their work and help sell their services. We need standards at the business level.

Defining A “Perfect vs. Good Enough” Standard

Few companies have formal writing standards. Even those companies that do often don’t use them. There seem to be two reasons for this.

·         There’s a lot of creativity and subjectivity in writing, so how do you define “good”?

·         Many writers dislike tools that measure writing quality. This may be due to a reluctance to have a creative process measured by machine, bad experiences with a tool, or antipathy toward a tool’s vendor.

But setting documentation standards can let us do two things:

·         Determine how to change our processes to compete with the new entrants in the “content” field and participate in emerging markets and niches.

·         Define measurable standards to help justify why technical writers should do the work, or at least participate in it.

These standards should do three things:

·         Establish a baseline. What is “perfect”?

·         Define acceptable and measurable deviations from the baseline. Formalizing such deviations – a maximum acceptable percentage of passive voice, for example – will help improve consistency.

·         List and describe tools, especially third-party tools, that let us measure the baseline and deviations.
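To make the “measurable deviations” point concrete: a ceiling on passive voice, for instance, can be checked mechanically. A minimal sketch follows; the function name, the regex heuristic, and the 10% ceiling are all mine, and the heuristic is deliberately crude (real checkers use part-of-speech tagging):

```python
import re

# Crude passive-voice heuristic: a form of "to be" followed by a word
# ending in -ed or -en. It will miss some passives and flag some false
# positives; the point is only to show how a standard's "measurable
# deviation" could be enforced by a tool.
PASSIVE = re.compile(r"\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b", re.I)

def passive_percentage(text, max_pct=10.0):
    """Return (percentage of sentences flagged, passes the ceiling)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    flagged = sum(1 for s in sentences if PASSIVE.search(s))
    pct = 100.0 * flagged / len(sentences) if sentences else 0.0
    return pct, pct <= max_pct
```

For example, a two-sentence passage where one sentence reads “The file was deleted by the user” would score 50% and fail a 10% ceiling. The exact numbers matter less than the fact that the deviation is now defined, measured, and enforceable rather than left to taste.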

These standards could be created in two ways.

·         Each company defines its own baseline, deviations, and tools as part of its style guide. However, many companies don’t have the time to do this.

·         An organization, such as the STC, could define a “perfect” baseline standard and make it available to members to use as is or to define their own deviations.


Because of the nature of writing, our profession has always accepted a subjective definition of quality. But changes in the market and technologies are starting to undermine that viewpoint. We’re going to have to confront this issue at some point. Now would be a good time, while we have time to do so thoughtfully and deliberately.

The old column ended here. In part 2, I’ll look at the core points of the Wired article and some ideas about their impact on technical communication.

Tuesday, September 19, 2017

Four Management Challenges in Implementing Information 4.0

Information 4.0 is a new concept, but some of the technologies and methodologies it encompasses are available and implementable today, albeit in early forms. Before that implementation becomes widespread, though, Information 4.0 will face multiple challenges, just as online help and the web did in the 1990s.

In this post, I’ll discuss four implementation challenges on the management side:

  • Defining clear and consistently accepted terminology.
  • Demonstrating support for the company's strategic and business direction.
  • Dealing with problematic senior management biases.
  • Establishing and following standards, metrics, and analytics.

Defining Clear and Consistently Accepted Terminology

New technology often sounds like confusing gibberish.

  • Twenty years ago, and even today, there was confusion over “WebHelp” versus “Web Help”, for example. Because of that confusion, many companies bought the wrong tools, hired the wrong people, or just went off in the wrong direction.
  • Today, there’s confusion over the meaning of “mobile”. Is it an app? Responsive online help on a laptop and a mobile device? Something else? I recently consulted at a large manufacturing firm that brought me in to help assess its readiness to go mobile. One result was the discovery that the different divisions had totally different interpretations of the term.
  • Information 4.0 promises entirely new levels of terminological confusion. Is “molecular content” the same thing as a topic? What’s “dynamic” content? And so on.  

Until everyone agrees on the meanings of the terms being used for an Information 4.0 implementation, it will be difficult to show support for the company’s strategic and business direction, and almost impossible to do anything else. So any Information 4.0 effort needs an education component.

Demonstrating Support for the Company’s Strategic and Business Direction

Information 4.0 is cool. But that won’t be enough to build management support because management is typically being pressed to support other initiatives too, many also cool. It’s crucial to show, concretely, how Information 4.0 will support the company’s strategic and business direction. That’s going to require careful analysis of the company’s operations beyond technical communication.

Dealing with Problematic Senior Management Biases

Even if senior management supports an Information 4.0 effort, we may encounter biases that affect that support. (In the early days of business computing, managers didn’t want to use computers because that involved typing and the bias was that typing was secretarial work. Renaming “typing” to “keyboarding” got past that bias and made typing – on a computer – cutting edge.)

For example, it will be crucial to present Information 4.0 as dealing with “content” and “user support”, not “documentation”. No one cares about documentation. But despite your efforts, management may still view Information 4.0 as documentation-focused, not realizing that “documentation” today is more a combination of content creation and programming. If so, it will be hard to get management support. By way of illustration…

I was contacted by a company whose online help was created using a long-dead version of RoboHelp. Users complained that the search didn’t work well and there were problems in the code. The company wanted to convert the help to Flare to get better search results and clean up the code to future-proof the content, both supposedly good things.

The company turned down the proposal on the grounds that it was too expensive. The problem was that they saw their help as documentation rather than as a strategic resource and gave it a far lower priority. The upshot? Their staff would do the conversion. Unfortunately, the staff was bright but didn’t know RoboHelp, Flare, or code so the effort was likely to be slow and inefficient at best.

In that tale is an example of how management bias may harm even efforts that management wants. And Information 4.0 is far more complex and unfamiliar than online help, so bias is likely to be still more of a problem.

Establishing and Following Standards, Metrics, and Analytics

In the mid-1990s, online help and the web were so new that few companies had standards or metrics by which to measure them. And analytics barely existed.

Today, however, getting management support for an Information 4.0 effort will require showing support for your company’s business and strategic direction. (That may not always be the case. In 2002, I spoke with two people from an aircraft builder whose CTO was so impressed with mobile that he directed that it be implemented on the manufacturing floor without cost-justification. So you may not always have to demonstrate support, but it’s the safe way to bet.)

Demonstrating that support often requires quantitative data, ideally numbers that translate to increased revenue or reduced expenses. Information 4.0 is so new that few standards exist, and thus few metrics or analytics. Yet Information 4.0 has a lot in common with today’s online help and web efforts, and may be able to use some of their standards and metrics. The biggest problem I’ve found with metrics for any purpose, let alone Information 4.0, is resistance from people who don’t want to be measured.


Information 4.0, like any new technology, is fun to speculate about and fulfilling to help emerge. There are many interesting challenges on the development side, as well as questions about its impact on tech comm. I’ll look at these in later posts.

But none of them matter if you don’t sell management on the idea in the first place.