Wednesday, December 6, 2017

“Perfect vs. Good Enough” – Writing Quality in the Online Age - Part 3


This is the last part of a three-part post looking at “perfection” in content creation in the online age. In this post, I’ll discuss the return of “perfection” but with a different meaning than it had historically and how that different meaning represents a sea change in technical writing.

In the old days, “perfection” was rarely defined but generally meant perfectly crafted writing. But perfectly crafted writing takes too long to create. Think about how often you heard a product manager say of the user guide “We don’t need it perfect. We just need it good enough and out there.”

The result was a tug-of-war within tech comm. One side claimed that the communication/writing side was what counted and the technical side was secondary. The other side took the opposite position. Those in the middle, like me, claimed that both were important. Communication was our business but without the technology, there was no way to share that communication with readers.

So how has the meaning of “perfection” changed?

In the old days, the code behind the content didn’t matter as long as the document printed correctly. The result was documents full of syntactically incorrect or junk code. But today, we have to assume that any content we write may be converted between formats and presented in some online form. Plus, the rapid pace of modern publishing leaves little room for problems that require manual intervention to correct. This change is driving a new requirement for “perfection”, but on the technical side. How?

·         Syntactic correctness is crucial to using conversion tools to move content between formats. No more hacks or Easter eggs. This means any old tools that don’t produce syntactically correct code will have to be abandoned, no matter how invested your company may be in them.

For example, a few years ago, I did some work for a company that used an HTML editor created around 1999 that followed acceptable syntax as it existed in 1999. Unfortunately, that syntax no longer works today. For example, it denoted headings not with the standard h1, h2, etc. tags but with span tags and class numbers. This worked as long as the company used that old tool. But that old tool was limited. In order to go beyond its capabilities, the company first had to correct the code. Doing so would require regular expressions (regex), but the company didn’t want to spend the money. Checkmate.
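
To make that concrete, here’s a hypothetical reconstruction of the kind of markup that tool produced versus the syntactically correct version. The class name and heading text are invented for illustration, not taken from the company’s actual files:

    <!-- The 1999-style markup: a "heading" faked with a span and a class number.
         Conversion tools see only a generic span, not a heading. -->
    <span class="head2">Installing the Widget</span>

    <!-- The syntactically correct version that conversion tools, CSS,
         and responsive design can all recognize as a heading -->
    <h2>Installing the Widget</h2>

A regex-based search and replace could convert the first form to the second across a project, which is exactly the cleanup job the company declined to fund.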

·         Good coding practice is crucial to taking advantage of responsive design.

For example, suppose you need to align three images horizontally. A common workaround is to insert a table on the page, hide the cell borders, and place the images in consecutive cells. This works well until users view the page on a mobile device. Rather than reflowing automatically from a horizontal layout to a vertical one, the images stay side by side and users have to scroll horizontally. Horizontal scrolling isn’t the end of the world, but it doesn’t take advantage of the capabilities of responsive design.
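
Here’s a minimal sketch of a responsive alternative to the table hack. The class and file names are invented for the example:

    <!-- Sketch only: a flex container lets the images sit side by side when
         there's room and stack vertically on narrow screens -->
    <style>
      .image-row { display: flex; flex-wrap: wrap; gap: 1em; }
      .image-row img { flex: 1 1 200px; max-width: 100%; }
    </style>
    <div class="image-row">
      <img src="step1.png" alt="Step 1">
      <img src="step2.png" alt="Step 2">
      <img src="step3.png" alt="Step 3">
    </div>

On a desktop screen the three images line up horizontally; on a phone they stack vertically, with no horizontal scrolling and no table markup to trip up conversion tools.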

·         Good SEO (search engine optimization) practice is crucial for effective searching. As more and more content goes online, you want your content to show up at the top of the search hits list and SEO will help. Of course, all your competitors are thinking the same thing…

·         Good metadata and taxonomy practice is crucial for machine-driven assembly of small chunks of content – Information 4.0’s “molecular content” or “granular content” – into outputs quickly enough to respond to user requests in transitory contexts.

·         Cross-platform-appropriate linking is crucial for usability.

For example, authors often use popups in online help to display small bits of content and, more importantly, to keep users within the topic and task thread. (Jump links let users leave the topic and thread and perhaps lose their train of thought.) However, if you’re creating online help that’s responsive for use on both desktop screens and mobile devices, be aware that popups will render as jumps on mobile devices, undercutting that navigational design.

·         Accurate linking is crucial for content credibility. Broken links happen but they can make users question the quality of the content and thus its credibility. And because the same content may appear on multiple devices, possibly on multiple network nodes, the risk of broken links goes up.

·         Future-proofing research is important to understand emerging platforms or technologies and to consider their effect on your content authoring.

For example, chatbots have suddenly become hot. I think it will be years before most companies create them but things can change unexpectedly.

      What about on the writing side? Is good writing still important? Yes, in a way.

·         Accuracy is obviously always important.

·         Good, consistent style is important, expressed through devices like parallelism, to support retention.

·         Good, consistent tone is important, expressed through devices like addressing the user in the second person (“you”) with active voice.

·         Good punctuation is important sometimes.

For example, the obsession over the number of spaces to use after a period seems silly to people who aren’t in tech comm (and to many who are). Other punctuation issues may seem just as silly until you point out the problems they can cause. (One programmer in a class that I taught said that punctuation rules were “a load of crap”. I pointed out that coding has its own punctuation rules and that breaking them might cause a program to crash. I then suggested that he consider the difference between “Let’s eat, grandma.” and “Let’s eat grandma.” He agreed and made it up to me by telling me what he claimed was the world’s worst accordion joke.)

·         What’s less important is creative perfection, le mot juste. We don’t have time for it and most readers won’t appreciate it anyway.


So “perfection” is a requirement again in tech comm, except that the word now has a different meaning. It still matters on the communication/writing side but is crucial on the technical side and represents the continuing ascendancy of that side of tech comm.

Thursday, November 30, 2017

Information 4.0 - Tech Comm of the Future?


This article was originally published in ISTC Communicator, Winter 2017.
www.istc.org.uk/publications-and-resources/communicator

Imagine that you’re on an oil platform in the North Sea. Strain gauges in a drill tube detect metal fatigue and automatically start the process of replacing the tube.

Or imagine that you’re looking at an exhibit in a museum. Your smartphone knows which exhibit you’re at and automatically describes it. Move to the next exhibit; your phone automatically tells you about that one. Your phone also knows that it’s noon and suggests lunch. It also knows that it’s Friday, when the cafeteria offers a favorite dish of yours until 1 PM, and mentions that as an inducement.

Or imagine other scenarios where a computer can take action or offer information or assistance based on some context. That’s a large part of what Industry 4.0 and Information 4.0 offer. In this article, I’ll look at both but focus on Information 4.0 as being of primary interest to technical communicators.

What Is Industry 4.0?

Industry 4.0 is a model for factory automation and data exchange from Germany. (For an overview, see https://en.wikipedia.org/wiki/Industry_4.0.) It’s based on the Internet of Things (IoT), AI, and the cloud, plus a new standard called iiRDS (intelligent information Request and Delivery Standard – https://iirds.tekom.de/), RDF (Resource Description Framework from the World Wide Web Consortium – https://www.w3.org/2001/sw/wiki/RDF), and other technologies.

The goal, to seriously oversimplify, is to create factories with machines that are self-governing through, in part, “context sensing”, like the drill tube that can detect metal fatigue and call for servicing on its own. For more examples, see “Context Sensing and Information 4.0” by Ray Gallon in the November 2016 issue of TCWorld at http://www.tcworld.info/e-magazine/content-strategies/article/content-sensing-and-information-40/

So What Is Information 4.0?          

Information 4.0, according to its evangelists Andy McDonald and Ray Gallon, is the “…informational component of Industry 4.0”. (See https://www.linkedin.com/pulse/information-40-response-requirements-industry-andy-mcdonald.) (Andy and Ray have formed the Information 4.0 Consortium. Check it out at http://information4zero.org/.) Think of Information 4.0 as a conceptual umbrella for current and emerging technologies, either directly or peripherally related to technical communication.

Why does this matter to technical communicators?

For industry itself, Industry 4.0 is what matters most. And, as Adobe’s Stefan Gentz noted in a conversation with me at TCUK, what industry needs now is standards, protocols, and use cases.

Technical communicators are unlikely to create those standards, protocols, and use cases but we may be involved in documenting them. Doing so will call for familiarity with a vast range of new technologies.

Furthermore, we’re likely to use those technologies to document things outside the bounds of Industry 4.0. That will change technical communication the way word-processing did in the early 1980s and HTML in the late 1990s. That broad applicability of Information 4.0 is why it’s the focus of this article.

Characteristics of Information 4.0

Its evangelists postulate that Information 4.0 content will have six primary characteristics.

  • Molecular – Replaces documents with “information molecules” that can assemble themselves into “compounds” based on a “state vector”.
  • Dynamic – The context may change so the information molecules must change too.
  • Offered rather than delivered – Information is available as needed but not pushed on the users. (Think of a dynamic help system that’s always available but discreetly tucked into a corner of the screen.) Also, because it’s hard to predict what information users might need and whether users have the background knowledge needed to understand a particular molecule, we may need to create different molecules containing different versions of the information.
  • Ubiquitous – Information has to be available everywhere, and has to be searchable.
  • Spontaneous – Information has to display based on the context of the information requests.
  • Profiled automatically – Information has to fit users’ needs as closely as possible rather than being generic.

    Are these characteristics implementable today? Here are some of the issues.

Molecular

Think of molecular content as topic-based authoring taken to extremes. “Fragments” of information are available at any given moment to fit the context of requests for information. This will boost the number of files.

But as the number of files increases, so do the hardware and software requirements – more RAM, faster hard disks, and faster networks. Can your tools and networks cope with huge numbers of files, or will you have to upgrade your hardware or tools, or change tools? (And the definition of “huge” is subjective. I meet many people who have five hundred files in a Flare project and consider that huge. The largest I’ve ever worked on had 176,500 files. The largest I know of is close to 900,000.)

As the number of files in a project grows, so does the need for project management rigor. The need for a project description becomes crucial. Even simple things like file naming conventions have to be defined and followed with no deviation. Without these and similar steps, the work may go out of control.

Assembly into compounds will make extensive use of metadata. Current tools offer some metadata, like conditional build tags, but Information 4.0 metadata will have to follow open standards such as RDF. That will add a new and unfamiliar requirement for authoring.
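
As a purely hypothetical sketch of the idea, open-standard metadata can already be embedded directly in a content molecule using RDFa attributes in the HTML itself. The vocabulary, property names, and values below are illustrative only:

    <!-- Illustrative only: RDFa attributes attaching machine-readable
         metadata to a single content molecule -->
    <section vocab="https://schema.org/" typeof="TechArticle">
      <h2 property="name">Replacing a fatigued drill tube</h2>
      <meta property="audience" content="field technician">
      <meta property="about" content="drill tube maintenance">
      <p>...</p>
    </section>

An assembly engine could then select this molecule by querying the metadata rather than parsing the prose.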

The “state vector” that drives the assembly process is a set of temporary context-states – an advanced form of today’s context-sensitivity. Somebody or something will have to define and maintain them.

Is it feasible today? Yes, in a limited way. Today’s tools support topic-based and fragment authoring, but not in the numbers needed by Information 4.0. Also, molecules and fragments created by today’s tools are meant to be combined into one output rather than live on their own. That may change how we create those molecules and may lead to machine-created content and AI.

Dynamic

The molecules will have to be continuously updated to match changing state vectors, effectively in real time. Furthermore, the molecules will have to live in open databases rather than being stored on authors’ local PCs. It also means that compilation becomes a bottleneck in some cases because users may not want to wait out the compilation time, just as users don’t want to wait out slow-loading web sites today.

We’ll also need fast, reliable network access to send the context state to the processor and the updated molecules back to the user quickly. There will also need to be some local storage for situations where network access is slow or nonexistent.

Is it feasible today? Yes, in a limited way. Current tools are starting to let us metatag content but that’s still in an early stage. Current tools will also have to create fragments that don’t contain tool-specific codes that may not work in an open standard environment. (MadCap Flare does this with its Clean XHTML output.) Our tools will also need to support local storage.

Offered Rather Than Delivered

This means breaking information into the smallest possible molecules. However, defining the parameters of those molecules, and creating them using traditional writing methods, may not be fast enough. We may need machine-generated content.

Is it feasible today? Yes, in a limited way. The biggest bottleneck is the need to create molecules rapidly enough.

Ubiquitous

The molecules have to be available from anywhere and searchable. HTML5’s responsive design allows ubiquity across multiple devices and platforms. SEO (search engine optimization) increases searchability and findability. However, the need for ubiquity and findability rules out hard-copy outputs, perhaps even PDF. This will be a wrenching move for many companies today.

Is it feasible today? Yes, to a surprising degree. Responsive design lets us create one output that runs on multiple devices rather than having to create one output for each device. But the compilation time may be longer than users are willing to wait.

Spontaneous

The “contexts” that trigger spontaneity are more advanced forms of today’s context-sensitive help. They might include device orientation (how users hold the device), location detection, external states like temperature, and more. The contexts have to be sent to the processor to let it alter the information to fit the context. This requires methods of context detection, metadata (again), plus fast networks to get the new content back to the user fast enough to make it useful.

Is it feasible today? Yes, in a limited way. We’ve been creating context-sensitive help since the dawn of online help. What we don’t have in the technical communication world is the spontaneous side, where information might change dynamically as the context changes. For that, we have to look to the web and mobile app worlds. Ever since online help moved from RTF to HTML in 1997, technologies from those worlds have been available to technical communication projects, but I’ve almost never seen them used.

Profiled Automatically

User profiling will be done automatically based on the context, again requiring major and continuous use of analytics.

Is it feasible today? Yes, in a limited way. We’ve been able to create targeted online help for years using conditional build tags and placeholders like variables and snippets. The problem is that we can’t create the output dynamically. Instead, we generate a fixed output for user A. To create an output for user B, we change the conditions and placeholder values and regenerate, which adds a delay. Our online help projects have offered search capabilities for years but I rarely encounter companies that save and use the search data. I have never encountered a company that uses that search data to automatically update and regenerate the material. (If your company does, please contact me.)

How Will It Affect Tech Comm?

Here are some of the more obvious likely effects.

Writing

  • We won’t write documents. Instead, we’ll do single sourcing with a twist – creating information molecules that can stand on their own as well as being combined into larger outputs. This may affect users of document-oriented tools like Word and FrameMaker the most. (We can use those tools to create molecules – e.g. topics – but many authors will find true topic-focused tools to be more convenient.)
  • No more traditional continuity – e.g. no more writing “as described above” because “above” may now be in a separate molecule.
  • No more local formatting since local formatting will probably break automated processing tools that look for styles.
  • The need to structure content organizationally, using templates, but also semantically using metadata.
  • Dealing with dynamic wording. We say “click the button” when writing for large screens but “tap the button” when writing for mobile. What do we do when using responsive output that runs on both large screens and mobile devices? One answer today is to use an intermediate word like “select”, but that doesn’t ring true for either desktop or mobile output. There are ways to alter wording dynamically using CSS (see the sketch after this list), and they should spread as more authors start creating responsive output, but authors will have to work in the code until our tool vendors start to support it.
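
Here is that sketch. It’s a minimal, hypothetical illustration; the class names and the 767-pixel breakpoint are invented. Both wordings live in the source, and CSS decides which one displays:

    <!-- Sketch only: both wordings are in the source; CSS hides one -->
    <p>
      <span class="desktop-only">Click</span><span class="mobile-only">Tap</span>
      the Save button.
    </p>

    <style>
      .mobile-only { display: none; }
      @media (max-width: 767px) {
        .desktop-only { display: none; }
        .mobile-only { display: inline; }
      }
    </style>

All of which will call for greater facility with…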

Technology

  • We’ll need more technical skill. We’ll have to be familiar with, or more familiar with, metadata, CSS, networks, and the other technical communication technologies mentioned above.
  • We’ll need familiarity with technologies and standards from outside technical communication, like iiRDS and RDF.
  • We’ll need to follow good coding practices and standards.
  • We’ll need up-to-date tools.
  • And we’ll need formal training in how to use those standards and tools. Peer-to-peer training is common but too often simply passes bad practices on from one generation to the next.

Corporate Role

  • Nobody cares about “documentation”, but “content” is cool, possibly even a revenue generator and/or a branding mechanism. This may help technical communicators become more of a force within the company by changing the company’s perception of tech comm.
  • We have to stop trying to block new technologies in technical communication. Someone in an audience once responded to my discussion of social media by saying “over my dead body.” The technology will go ahead anyway, but around her, dead-ending her career.

Four Major Issues on the Business Side

You may be thinking that all this sounds interesting, with the promise of challenging work and new lines of work. But Information 4.0 will have to overcome four business hurdles.
  • Does it support my company’s strategic and business direction? If it does not do so, in ways that are clearly and quantitatively demonstrable, it either won’t be funded at all or will be a quickly-forgotten one-off.
  • Does it have high-level management support? Even if Information 4.0 supports the company’s strategy and business, other initiatives within the company probably do as well. Only the ones that get management support will survive, or at least have a chance to survive.
  • Does company culture support this higher technical complexity? The answer is often no. Trying to force Information 4.0 into an unsupportive culture will cause turnover in the documentation groups; you’ll get people who are comfortable with the technology but lose people who have the experience and the institutional memory.
  • Are goals and terms clearly defined within my company? As silly as this sounds, it’s very possible that people in your company, especially management, may not understand the terms we take for granted. They may also not have any clearly defined goals for an Information 4.0 initiative. Any Information 4.0 initiative is going to require an educational component on your part.

There are other business issues, plus technical ones. See my blog at http://hyperword.blogspot.com/ for periodic updates on Information 4.0 issues.

Summary

Much of Information 4.0 is still conceptual, but it represents a continued increase in the technical side of technical communication and offers multiple paths forward for tech comm in the years ahead. It’s worth watching.

Thursday, November 9, 2017

Correcting An Omission In a Previous Post


On November 1, I put up a post about best practices for using MadCap Flare. The post included some tips from Craig Wright of StrayGoat Writing Services, but I didn't include his contact information. Sorry about that.

Craig's web site is https://straygoat.co.uk/.

Thanks, Craig.

Wednesday, November 8, 2017

Creating Image Maps in Flare


I was asked on LinkedIn whether Flare supported image maps and how to create them. I don't see image maps used much anymore but they can be useful. Flare makes it easy to create them with its built-in image map editor. Here's how:
  1. Insert your graphic in the topic.
  2. Right-click on the graphic and select Image Map from the dropdown menu.
  3. Click on any of the three "New... Mode" buttons on the toolbar and draw the desired shape. To resize a shape, click on and drag one of its drag handles. To move a shape, just drag it. There are various other image options, but the three mode buttons are the core.
  4. After you create the shape, double-click it. The Area Properties dialog box opens. (It's basically the same as the Hyperlink Properties dialog box.)
  5. Specify the link parameters like you would for any type of link and you're done.
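
Under the hood, the result boils down to standard HTML image map markup, roughly like this. The file names, coordinates, and link targets are placeholders, not necessarily what Flare generates verbatim:

    <img src="dashboard.png" alt="Dashboard overview" usemap="#dashboard-map">
    <map name="dashboard-map">
      <!-- rectangle coordinates are left, top, right, bottom in pixels -->
      <area shape="rect" coords="10,10,160,60" href="Reports.htm" alt="Reports panel">
      <area shape="circle" coords="240,90,30" href="Settings.htm" alt="Settings button">
    </map>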

Be sure to test the hot spots in your browsers. I haven't used image maps in a while so there may have been some changes in the browsers that break the hot spots, but they did work fine on my PC using Chrome, IE, and Edge.

Wednesday, November 1, 2017

More Best Practices for Starting to Work with MadCap Flare - Standards


Flare has so many options that it can be hard to decide where to start, even if you've taken a class and especially if you haven't. New Flare users often just jump in. But this can get you off to an inefficient start, perhaps even a bad start, and create problems that ripple down to later projects.

I recently wrote a post for MadCap’s MadBlog that reviewed best practices for starting to use Flare. (See https://www.madcapsoftware.com/blog/2017/10/12/madcap-flare-best-practices-starting-a-new-project-from-scratch/) I expanded that post into a longer white paper for MadCap. (See https://assets.madcapsoftware.com/white-papers/White_Paper-7_Best_Practices_for_Starting_Your_First_MadCap_Flare_Project.PDF)

In this post, I’ll take the topic of best practices further by expanding on the Standards section of the white paper. The original MadBlog post, the white paper, and this blog post are based on twelve years as a certified Flare consultant and trainer and eighteen years of hypertext consulting and training pre-Flare. However, always consider your specific situation before following any suggestions. With that, let's look at the standards issue in more detail.

The more you standardize, the more consistent your projects will be and the less authors will have to guess about what setting to use for a given task or feature. Things you might standardize include:

  • Graphic file formats – Many graphic formats are available today, and Flare supports most or all of the ones that you’re likely to use. The traditional approach is to use GIF for screen shots and JPG for photos. This works, but you’re maintaining two sets of files, GIF and JPG.

    You may find it more efficient to use PNG for all graphics, including those for your print outputs. A PNG’s quality may not be as good as that of an EPS, but your users won’t know because they haven’t seen the EPS, so they won’t have anything to compare the PNG against.

  • Conditional build tag usage – Condition tags are the core single sourcing feature, but they can go out of control if you don’t set rules for their use. (I’ve talked to two firms this year that used tags with no initial rules about when to insert them or what to call them. One company had a project with about 1,000 topics and 1,500 “unruly” tags. The other had about 1,000 topics and 15,000(!) such tags. The result was that new authors couldn’t figure out which tags to include or exclude for a given target and when to add new tags.)

    Creating and inserting tags is flexible – you can apply tags to a character in a topic, a paragraph, an entire topic, a group of topics, a folder, and any other element in a Flare project. And it’s easy – assign a name, pick a color, add an optional comment, and you’re done. In fact, it’s so easy that new authors often gloss over the crucial first step – defining what they’re trying to do and documenting it. So my suggestion is to first decide what you want to do, then define rules for inserting and using the tags, such as the smallest element you can tag. Document this. And test the results to make sure you’re getting what you expect.
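
    For reference, a condition applied in the editor ends up in the topic’s XHTML source as an attribute along these lines. The tag set and tag names here are invented for illustration, not taken from any real project:

        <!-- Illustrative only: one paragraph tagged for print output,
             one tagged for online output -->
        <p MadCap:conditions="Default.PrintOnly">Mail the signed form to headquarters.</p>
        <p MadCap:conditions="Default.OnlineOnly">Click Submit to send the form.</p>

    Looking at the tags in the code this way also makes it easier to verify that your rules are actually being followed.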

  • Variable and snippet usage – Two excellent points from Craig Wright (https://www.straygoat.co.uk).

    “Try to plan your content reuse at an early stage, especially if there are multiple writers involved. It’s no good creating snippets if the other authors aren’t aware of them.”

    and

    “Make sure your snippets are well organized. If authors struggle to find the snippets they need, they may create their own and duplicate existing content.”

    A few comments of my own regarding Craig’s points:

    o   Get all authors involved in the planning early on.
    o   Define the variables and snippets in two groups – project-specific and shared.
    o   Set up a parent/child project structure using the Flare Project Import feature and put shared variables and snippets in the parent project for easy downloading to the child projects.
    o   Let all authors know when a new variable or snippet has been added to the shared sets.
    o   Make sure that all authors know that any changes to shared variables or snippets must be made by the “owner” of those files, not by the individual authors.
    o Make sure that the snippets are clearly named.
  • Index entries – My experience is that traditional indexing is declining among Flare authors, with search taking over. This makes sense since search is easier to implement, but an index can do things that a search can’t. For example, if I refer in a topic to a sandwich made of cold cuts on a tubular loaf of bread as a “sub”, searching for “hoagie” won’t find the topic because the search is looking for the search term in the topic text. But an index lists keywords attached to the topics rather than terms in the topics, so it’s easy to add the keyword “sub”, plus the keyword “hoagie, see sub”, and so on. This makes it more likely that users will find the right topic. (Flare does let you add search synonyms but this can be a tedious job.)

    If you’re going to create indexes, define some rules to make the entries structurally consistent from project to project. Here are a few:

    o   Decide if the verb should use the infinitive (“to print”, then remove the “to”, leaving “print”), or the gerund (“printing”). I prefer the infinitive but that’s up to you.
    o   Decide whether the noun should be plural (“documents”) or singular (“document”).
    o   Decide whether to use sub-entries. For example, the term “BBQ” might be a first-level index entry, with “Carolina”, “Tennessee”, and “Texas” as sub-entries. Note that you could also use those sub-entries as first-level entries – e.g. “Carolina BBQ”, “Tennessee BBQ”, and so on.
    o   Consider using inversing. If you include the first-level entry “print dialog box”, include “dialog box” as a first-level entry with “print” as a sub-entry below it.
  • Hyperlinks vs. cross-references (xrefs) – Hyperlinks have been with us since the beginning of online help but they have two limitations.

    o   The link text doesn’t update if the title of the target topic changes. Let’s say you link the term “sub” in topic A to the “Subs” topic. You then change the title of the “Subs” topic to “Hoagies”. But the link term remains “sub”. You must find and change it by hand. In contrast, a cross-referenced term is programmatically linked to the title of the target topic, so changing the target topic’s title automatically changes the link term too.
    o   A hyperlink keeps the format of a hyperlink when you output to print, such as PDF, and the link works if the user is viewing the material on the screen. But when the user prints the material, the link obviously doesn’t work. A cross-reference, however, will literally change its format from a link style to a page reference – “information about spaniels, see page 225”.
So a cross-reference is a better choice if you generate online and print targets. The limitation is that a cross-reference won’t work if you create a link from a topic in a target to an external file, like a URL or PDF. In that case, you must use hyperlinks. That limitation aside, cross-references are the best and most flexible choice for links.

  • File naming – Setting file naming conventions is, surprisingly, one of the hardest tasks in working with Flare. There are two naming issues, programmatic (thanks to Stephanie M.), and semantic.

    o   Programmatic conventions are straightforward. Use all lower case or mixed case? Can multi-word file names use spaces, use underscores in place of spaces (file_name), or use “Camel Case” (FileName)? Check with your IT department.
    o   Semantic naming conventions, to indicate what a file contains, are harder, and you’ll want to involve all the authors in the process. For example, you might decide to name graphic files by the name of the screen followed by the type of screen, such as “Dialog Box.” The result is “Print Dialog Box”, easy to find in a list if you know the name of the screen that you want. Alternatively, you could name graphics by the type of screen followed by the name of the screen, such as “Dialog Box – Print”, easy to find in a list if you know that the desired screen is a dialog box but don’t remember which one.
Two final planning thoughts in addition to documenting your standards, getting trained, joining a user group, and contacting support as discussed in the white paper:

  • Decide what your priority and secondary output targets are. Some features, like togglers, work fine online but not in print. So deciding on your priority target will help guide your selection of Flare features.
  • Decide if mobile is in your future. If it is, note that some features, like popups, don’t work on mobile devices. So knowing if you’re going mobile will also help guide your selection of features. Note that this can apply in other project areas as well, such as making sure that all movies you create using Mimic are in HTML5 format because SWF format won’t display on iOS devices.

Conclusion

I’ll partly repeat what I said at the end of the white paper – Flare is a big, powerful tool. If you’re new to it, it may not be obvious what can even be standardized. By following the suggestions in the white paper and here, you’ll help get your Flare work off on a sound footing and help keep it that way.

Monday, October 30, 2017

“Perfect vs. Good Enough” – Writing Quality in the Online Age - Part 2


This is part 2 of a three-part post examining the issue of “perfection” in content creation in the online age. 

The first part, which I posted on October 10, is a column I wrote in 2001 discussing an event from 1998. (Stay with me here...)

This second part is a column that I wrote in 2009 discussing what had changed since the first column in 2001. Look for the third part, in late November, to revisit the issue of “perfection” in light of emerging trends in 2017.

In this post, I’ll list the core points of the Wired article that the 2009 column discussed, along with some ideas about their impact on technical communication. First, the Wired article…

“…It’s … the latest triumph of what might be called Good Enough tech. Cheap, fast, simple tools are suddenly everywhere. We get our breaking news from blogs, we make spotty long-distance calls on Skype, we watch videos on small computers… The low end has never been riding higher.

So what happened? … technology happened. The world has sped up, become more connected, and … busier. As a result, what consumers want from the products and services they buy is fundamentally changing. We now favor flexibility over high fidelity, convenience over features, quick and dirty over polished. Having it here and now is more important than having it perfect. These changes run so deep and wide, they’re actually altering what we mean when we describe a product as “high quality.”

And it’s… everywhere. As more sectors connect to the digital world… they too are seeing the rise of Good Enough tools … Suddenly, what seemed perfect is anything but, and products that appear mediocre at first glance are often the perfect fit.”

Two examples from the article…

·         MP3, whose audio quality is lower than the CD standard but whose greater file compression lets us cram hundreds of songs into devices the size of a pack of cards.

·         The netbook, which has minimal storage and power but is light, portable, and cheap compared to traditional laptops that have more features, most of which may go totally unused.

The examples offer “flexibility over high-fidelity, convenience over features, and quick and dirty over slow and polished” and each has altered its market or created new markets. How might these factors affect technical communication? Here are three ideas – none new but, based on my training and consulting experience, worth repeating:

·         A major change since 2001 is the appearance and partial acceptance of user-generated content for online use. “Let the engineers write the doc” has been a laugh-getter for years within technical communication but the idea keeps coming up for one good reason – the engineers (the subject matter experts) know the material. And their content has now been appearing for years in blogs, wikis, and tweets.

I don’t see user-generated content replacing traditional online documentation/help but extending it. The documentation/help will still contain stable core content but link to user-generated content in blogs or wikis containing new, changeable content. Technical communicators and user-authors form a virtual team. If you create online documentation/help but don’t link it to your company blogs or wikis, take another look.

Similarly, video and animation have been around for years but not often used because of the costs and required skills. But lower prices and simpler tools are putting video and animation into more hands – e.g. user-generated. It may be “movies” created quickly using tools like Adobe Captivate, TechSmith Camtasia, or MadCap Flare, or from video bloggers. (YouTube may also be a source. You may not find the perfect video there, but there are so many clips about almost any topic that you may find one that’s good enough. The volume of clips provides flexibility, and the material is available quickly, even if the production values may be “dirty”.)

So rather than discount the idea of user-generated content, we should be actively helping to create, organize, use, and distribute it in the first place.

·         Software-driven writing features like templates and style sheets have existed for years but are still not used as often as they should be. One reason is that the settings in these control files are often not quite “right.” Something in your material deviates from a setting in the control files. You could modify the setting, but it’s often easier to set up the non-standard material by hand. The result? You get perfect content, but at the expense of losing the consistency and automation provided by the control files.

Instead, consider setting up your control files to handle your common needs and ignore or modify other needs that are too difficult or marginal to handle in the control file. For example, you might create a “first-paragraph” style with extra space above for use in hard-copy, but can you replace that style with the “body” style and live with the “good-enough” result?
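
As a small, hypothetical illustration of that trade-off in a style sheet (the selector names are invented):

    /* The "perfect" setup: a special first-paragraph style carries extra
       space above it for the hard-copy output */
    p.first-paragraph { margin-top: 2em; }
    p { margin-top: 0.5em; }

    /* The "good enough" setup: drop p.first-paragraph entirely and let every
       paragraph use the plain p style, with one less style for authors to
       apply by hand */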

So the results may lack the perfection that you got by hand-tweaking the material, but you get the good enough, quick-and-dirty convenience of bringing programmatic control to your writing tasks. (Wired made an interesting point about users coming to accept MP3 quality as the standard rather than the higher quality of CD because they used MP3 more and got used to it. As more and more readers get material online, they may come to accept online style quality as the standard.)

·         Finally, consider lowering your writing standards – not to write badly but to change the definition of quality, standardize that definition, and write to it.


Summary


Many technical communicators started in hard-copy and transitioned to online, a transition that involved some hard changes including:

·         Adding tasks once performed by other people, like editors, to the writer’s workload.

·         The speedup of the work, losing the time we might once have had to get material “perfect.”

·         The appearance of media like blogs and wikis whose need for immediacy runs counter to the idea of perfecting the writing.


Since the old column appeared, I’ve seen more and more technical communicators accept the idea of “good enough.” But many still fight it, which is a losing battle. The field has seen many changes, each fought but not stopped. This is one more. If we fight it, the change will occur but without us. That would be a shame because these “good enough” technologies and methodologies are actually fun and highly challenging.

Wednesday, October 25, 2017

Correction To My Post About Flare 2017 R3


In my post about Flare 2017 R3 on Oct. 24, I said:

"If you copy the content in this topic, paste it into Word, and generate the readability statistics in Word, you’ll get different results. When I tried it, Word gave a readability of 65.1 and a grade level of 7.0, both still excellent but different from Flare... This may be caused by Flare’s using the same "Flesch Reading Ease” and “Flesch-Kincaid Grade Level” algorithms as Word but with different options enabled."

As it turns out, the problem is not one of having different options enabled in Flare vs. Word but rather the fact that Flare and Word interpret what a "sentence" is somewhat differently. So the feature is still as useful as I said it was in yesterday's post, but the Flare and Word results are not directly comparable.
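
For readers who want to see why the sentence count matters so much, the two published formulas both lean heavily on average sentence length:

Flesch Reading Ease = 206.835 - 1.015 x (words / sentences) - 84.6 x (syllables / words)

Flesch-Kincaid Grade Level = 0.39 x (words / sentences) + 11.8 x (syllables / words) - 15.59

If Flare and Word split the same text into different numbers of "sentences", the words-per-sentence term changes and the scores diverge, even though the text is identical.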