Thursday, April 11, 2019


Creating Micro Content in MadCap Flare? What to Keep in Mind

At one time, the topic was the smallest unit of content that you could present to your users. But even a short topic might be too long. Users might just want the phone number for tech support or the setting for a field without having to read an entire topic to find that information. The solution, now available in the latest release of MadCap Flare, is micro content.

Wikipedia defines micro content in several ways, one of which is “other small information chunks that can stand alone or be used in a variety of contexts, including instant messages, blog posts, RSS feeds, and abstracts.” MadCap defines it as “text, imagery and/or video content that can be consumed in 10-30 seconds”, i.e. short, concise answers to user questions.

For example, users often look for a phone number for technical support. Rather than search for “tech support” and skim through the list of results, micro content allows users to search for “tech support” and quickly find the phone number, which appears at the top of the results.



Or say users are looking for a specific piece of information in a larger topic, like the number of eggs needed for a cake.



In each case, the result, in the form of micro content, displays first on the list. The result isn’t limited to a single line of information. For example, a Google search for “healthy tomato soup recipe” produces the following:


Here, the micro content consists of the entire recipe.

Micro content offers a big potential benefit to users – it saves them time and aggravation when they’re looking for the answer to a question.

Plus, micro content offers several big benefits to Flare authors. It lets them segment and present information in the most immediately useful chunks, and do so quickly and easily by using Flare’s features.

Micro Content Implementation in Flare

There are several ways to create micro content. If you’ve created snippets in Flare, some of the micro content creation options will feel familiar.
  • Create the phrases and responses totally from scratch using the Micro Content Editor. Select File > New > Micro Content. The editor displays, as shown below.



    The phrase side lets you create a new phrase, add different versions of the same phrase (e.g. “Tech support phone #” and “Support phone #”), change or delete an existing phrase, and use variables in a phrase. The response side offers the familiar topic creation options, such as adding hyperlinks or cross-references, images, variables, snippets, and special characters, and lets you apply mediums to the response.

  • Create the phrase in the Micro Content Editor and link to a topic, snippet, or bookmark to serve as the response.



    You add the new phrase, click the pulldown at the far right of the phrase line, click Add Link, and select the topic or snippet from the Select File dialog box. The entire topic or snippet becomes the response.
  • Select a block of content in a topic or snippet to serve as the response, then click Create Micro Content in the block bar or Home ribbon to display the Create Micro Content dialog box. There, you can type the phrase and select the mco file in which to store this phrase/response pair. The pair then displays in the editor.
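However you create them, the phrase/response pairs end up in the micro content file, which, like most Flare project files, is XML under the hood. The sketch below is only an approximation from memory – the element and attribute names may not match the actual schema in your version of Flare – but it shows the basic idea of several phrases mapped to one response (here, a linked snippet):

    <?xml version="1.0" encoding="utf-8"?>
    <CatapultMicroContent>
        <!-- One entry = one response plus all the phrases that trigger it -->
        <MicroContentEntry Link="/Content/Resources/Snippets/TechSupportPhone.flsnp">
            <Phrases>
                <Phrase>tech support phone number</Phrase>
                <Phrase>support phone #</Phrase>
            </Phrases>
        </MicroContentEntry>
    </CatapultMicroContent>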
A few notes:

  • The micro content files are stored in a MicroContent folder under the Resources folder on the Content Explorer in Flare, and in a MicroContent folder under the Output folder for your target.
  • You can control the formatting of micro content results by setting their properties on the Styles tab of the TopNav and SideNav skins.
  • Micro content is supported by the MadCap search and Elasticsearch.

Micro Content Management and Design Considerations

The Micro Content Editor is neatly integrated with existing Flare features. You can:

  • Find micro content files by using the File List feature from the View ribbon and changing the filter to MicroContent files.
  • Use the Text Analysis feature from the Tools ribbon to check the writing of the responses.
  • Use the Reports feature from the Tools ribbon to generate various reports about your micro content.
  • Spell check your micro content files.
  • Run Find and Replace in your micro content files.

When adding micro content to your projects, keep in mind several considerations that affect project management and design, along with ways to address them:

  • Greater project complexity – Micro content is one more aspect of a project to be managed. It’s important to document your rules for creating micro content in the project description to be sure that your successors understand the logic behind them. Don’t keep a project description? It’s time to start.
  • Nature of the micro content – How do you decide what micro content to create in the first place? It’s tempting to simply jump into creating the phrase/response pairs, but that must be done based on user needs. These needs can be identified through user analytics, and by reaching out to your customer support and tech support groups. Learn what questions they hear most often and use that information as the basis for your micro content. You’ll also have to include synonyms and different wordings in the phrases. In a sense, creating micro content is similar to indexing in that it’s never finished.
  • Speed of creation – The process of creating the phrase/response pairs is slow when done manually. Start keeping track of the time required so that you can factor that into future project planning.

How can Micro Content be Used?

Any short chunk of information that users might specifically search for can serve as micro content – a “miniature landing page”, as MadCap calls it. And there are several other potential uses of micro content, particularly in Flare:
  • Chatbots – Responses from a bot should be focused and concise, like micro content. Bots have been tremendously overhyped but they are no doubt coming, and micro content will support them.
  • AR – The annotations used in augmented reality applications should be focused and concise in order to use as little screen space as possible. Again, micro content will support this.

       And a fourth possible use case is starting to emerge…

  • The conversational web – Over the years, we’ve become accustomed to the search hit lists generated by Google and other search tools. Those work, if we’re looking at a screen and can scroll down the list of hits to find the one that meets our needs. But it’s almost impossible to remember multiple hits and choose between them without seeing them.

    The article “Alexa, I Want Answers” in the March 2019 issue of Wired posited a search paradigm in which users want one answer – “one-shot answers” – to solve the problem of dealing with multiple responses when you can’t see them. That means that search optimization will have to move toward providing the best answer rather than the best 100,000 answers. And because voice responses have to be short, micro content could be used to provide those voice-optimized chunks of content.

While there are multiple applications for micro content, the easiest way to start using it is through featured search results. By applying and exploring the feature, Flare authors can start laying the groundwork for the chatbot and AR use cases of the future.

Conclusion

Micro content is likely to have major effects on project design, management, and the overall usability of the output. MadCap has done a smooth and neat implementation of micro content into the larger Flare architecture, and Flare authors should expect to be able to use it to good effect in future projects.

About the Author

Neil has 4 decades of experience in tech comm, with 34 years in training, consulting, and development for various online formats and tools including WinHelp, HTML Help, CE Help, JavaHelp, WebHelp, Flare, and more. Neil is a frequent speaker at MadWorld and various professional groups and the author of several books about Flare and mobile app development.

Neil is MadCap certified for Flare and Mimic, ViziApps certified for the ViziApps mobile app development platform, and certified in other authoring tools.  He provides training, consulting, and development for online help and documentation, Flare, Mimic, other authoring tools, mobile apps, XML, single-sourcing, topic-based and structured authoring, and content strategy.  He can be reached at nperlin@concentric.net, www.hyperword.com.





Position Zero – It’s a Good Thing


In my keynote at the Conduit conference in Philadelphia on April 6, 2019, I mentioned something called “position zero” as an aspect of SEO but didn’t really explain what it was and why it might matter to tech comm.

“Position zero”, also called a “featured snippet”, is a relatively recent addition to a Google search results list. It shows up in the list above the first hit – ergo “position zero”. It has a summary and a description of the site from which it came. For example, searching for “B58 Hustler” in Google gives this result.


The featured snippet appears above or, here, to the right of the first search result. It’s usually followed by a “People also search for” list of other questions in text form or, in this case, in graphical form.

The featured snippet is determined organically. According to “SEO above position 1: What's Position Zero?” by Kent Campbell at https://blog.reputationx.com/what-is-position-zero-seo,

A few things play into which webpage's content is featured as the snippet:

      1. First page results. It’s necessary that your page is on the first page of search results for your given search query. Usually in the first five results. 
      2. Relevant information. The answer you provide has to be the right answer, and the information on the page must be relevant to the search term overall.
      3. Useful formatting. If you’ve formatted your answer like this answer is, or if you’ve got a nice table of information, Google will be more likely to display it.
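To make the third point concrete, “useful formatting” generally means presenting the answer in a form that Google can lift directly: a question as a heading, a short answer immediately under it, or a simple list or table. A generic, illustrative sketch – not a guaranteed recipe for position zero:

    <h2>How many eggs does the cake need?</h2>
    <p>The recipe calls for three large eggs at room temperature.</p>

    <table>
        <tr><th>Ingredient</th><th>Amount</th></tr>
        <tr><td>Eggs</td><td>3 large</td></tr>
        <tr><td>Flour</td><td>2 cups</td></tr>
        <tr><td>Sugar</td><td>1 cup</td></tr>
    </table>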

So, what does this mean for tech comm? Until now, our searches have been internal to the authoring tool, like Flare’s search engine, or external, using Google, each giving the usual long list of hits. Ideally, our material will appear within the first ten hits, more ideally near the top of that list, but the exact position hasn’t been crucial. Until now…

We’re now moving from screen-based content toward voice-based content. We’ll want to appear at the top of the list of search hits because many users will go with the first hit; they won’t be able to remember the first three, let alone the first ten. Some users might respond to the first hit by asking the search engine for the next one, but it will be the rare user who goes deeper down the list. So the old rule of thumb that any item outside the first ten hits won’t be seen is changing. Now, any item outside the first one, possibly two, won’t be seen. That’s going to affect how we apply SEO to our content.

I expect to see conference presentations later this year or in 2020 on what’s required to reach position zero. Look for a blog post on the subject here in the next few months.

Monday, March 11, 2019

A Comment About My MadCap Flare Links Webinar

I gave a webinar on Flare link types for MadCap on Thursday, March 7, and got the comment below from Jane Brewster, Information Architect for the White Clarke Group in the UK. I appreciate comments like this because my theory is that there's always someone out there who's tried something that I have not and the best thing to do is to learn from them.

With that, here's Jane's comment:

I thought you might like to know another pro for using togglers rather than hotspots, particularly applicable if you single-source for HTML5 and PDF. I initially used dropdowns but was very pleased to discover togglers and how flexible they allowed me to be with formatting.

In the PDF target I want our toggler or dropdown hotspot to be a sub-heading that sits correctly within the hierarchy, so it might be h2, h3 or h4 depending on where the topic sits in the TOC hierarchy. 

However dropdowns don’t allow the hotspot to dynamically change style if the topic sits at a different level in the output – the style is static (unless I’ve missed something obvious of course!).

To get around this, use a toggler link conditioned for the HTML5 target followed by a heading conditioned for the PDF target (usually h2 as our topics all have h1 as the main heading). That way, in the PDF, if the topic is at the top level the toggler heading is h2, but if the topic is at the next level down the toggler heading is automatically h3, and so on.

Moving on to XRefs, I agree with not listing them all at the beginning or end of the topic. However I like to use them slightly differently to the way you describe. Having worked previously with online and PDF help that had to be AAA compliant (so suitable for any differently abled user, possibly using a screen reader), I’m aware that just putting a link in the middle of a sentence isn’t always appropriate (particularly for screenreaders), so I put them at the end as a more explicitly worded reference.

For example, instead of:
I really like using Madcap Flare because it’s a very flexible authoring tool.

I use:
I really like using Madcap Flare because it’s a very flexible authoring tool, see Madcap Flare.

The PDF output is in the format:
I really like using Madcap Flare because it’s a very flexible authoring tool, see Madcap Flare (on page 3).

I’ve used hyperlinks in this example but they would be XRefs in Flare. This phrasing also gets around the problems caused when you want to use an XRef to a topic with a title that doesn’t make sense in the context of the sentence you’re linking from (not a problem with hyperlinks of course!) so you can be a bit more flexible with topic titles.
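To visualize the toggler workaround Jane describes above, here’s a rough sketch of the topic markup. Treat it as illustrative only: the MadCap element and attribute names are written from memory and may differ slightly in your Flare version, and the condition tags (HTML5Only, PrintOnly) are examples you would define yourself in your own condition tag set:

    <!-- Hotspot shown only in the HTML5 target -->
    <p MadCap:conditions="Default.HTML5Only">
        <MadCap:toggler targets="#sectionBody">Configuration details</MadCap:toggler>
    </p>

    <!-- Plain heading shown only in the PDF target; per Jane's description, Flare shifts it
         to h3, h4, etc. to match the topic's depth in the print TOC, which a dropdown hotspot can't do -->
    <h2 MadCap:conditions="Default.PrintOnly">Configuration details</h2>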

Friday, December 14, 2018

“RoboCop” and “HLMT” Redux? Don’t Repeat Old Mistakes


If you have any experience in technical communication, you’ve seen, and even made, mistakes. Maybe it was the wrong technology or authoring tool. Maybe it was a project definition that took the wrong path. Such mistakes aren’t surprising; we work in a complex and rapidly changing field.

Many of these mistakes are silly in retrospect. But silly mistakes can also have big repercussions. And the worst thing is to make the same mistake twice. This might seem unlikely but staff turnover erases the “corporate memory”. We don’t learn from our mistakes.

This article describes various mistakes that I’ve been called to consult on, and lessons to be learned from them as we move toward Information 4.0 or whatever the future will be called. Some of the lessons will seem obvious. Others may not be until they’re pointed out.

The mistakes that I describe may make the clients seem incompetent; nothing could be further from the truth. (I would trust the hospital staff to take out my appendix; I just wouldn’t trust them to insert an image into a Word document.) The problem was simply that the clients were moving rapidly into new and usually unfamiliar waters. As they so often are today…

Misunderstanding the Terminology

Some of the most memorable mistakes come from misunderstanding the terminology behind a project. What’s the difference between a “staging server” and a “production server”, for example?

Me: “What browser do you use?”
Client: “What’s a browser?”
Me: “How do you access the company’s intranet?”
Client: “We click that blue “e” icon in Windows. Do you know what that is?”

A forward-thinking client asked me to put some documentation online and move it to an intranet. There were major differences between browsers at that time so it was reasonable to ask what browser they used. The conversation above is almost a word-for-word summary of my discussion with them.

The problem was that the concept of the internet was so new that few people understood the terminology. This was in 1997.

Client: “What is HLMT anyway?”

A client turned to me during a meeting and asked what “HLMT” was. I said I didn’t know. He said he was surprised because it had to do with the web so he assumed I’d know. The light dawned and I said that it was actually “HTML”, the code basis for the web. The client thanked me. This was in 2000.

Client: “Cut and paste? Yes, of course. We use scissors and double-faced tape.”

A client wrote procedure documents in Word, leaving white space for images. They would then copy the images out of medical textbooks, cut them to fit the white space, and paste them in with double-faced tape after printing the documents.

The problem was that the client was interpreting “cut and paste” in everyday terms, rather than in PC terms. (Never mind the innumerable copyright violations.) This was in 2003.

The lesson – Misunderstanding the basic terminology or trying to apply analogies from everyday life like cut and paste wasn’t surprising in the late 1990s. Twenty years later, such misunderstandings still occur because the terminology is still often new and confusing. (“What’s ‘mobile’?”)

All direct and indirect participants in a project have to understand at least the basic terminology in order to avoid talking past each other. Never assume that everybody is speaking the same language.

Misunderstanding the Technology

Equally memorable mistakes come from misunderstanding the technology behind a project.

Client: “WebHelp vs. Web Help”

A company got confused between WebHelp and Web Help. A staffer then wrote an RFP for a WebHelp consultant only to have the approving manager fix what appeared to be an obvious typo and change “WebHelp” to “Web Help”.

The problem was that the two formats were totally different. It took several days to figure out what the company was really asking for in order to help them fix the RFP.  (To this day, I always refer to WebHelp as “WebHelp one word” because of that incident.) This was in 1998.

Client: “HTML Help vs. HTML help”

A company got confused over HTML Help vs. HTML help. The company used a help authoring tool called ForeHelp to create the online help project and expected to get the “tri-pane window” in the output. For some reason, it didn’t work. The software vendor’s normally excellent support people couldn’t figure out what the problem was. This went on for ten months.

The problem was that when the author called support and reported the problem creating the tri-pane in HTML “help”, the support reps heard HTML “Help” and asked if the author had compiled. The author didn’t understand what “compiled” meant, assumed that it meant to create the project, and said yes. At that point, the support reps were at a loss. The solution took two mouse clicks, one to compile and one to view the result. This was in 1999.

Client(s): “We’re going mobile!”

A company’s three divisions decided to go mobile but never defined what “mobile” meant. Each division therefore went mobile in a different way, based on its understanding of “mobile”.

The problem was that the term “mobile” is too vague. It could refer to an app, a PDF file, or responsive output from a help authoring tool. And, in fact, the three divisions used those three definitions. This made it impossible to coordinate an enterprise-wide mobile effort and caused political problems because no division wanted to abandon its effort in favor of another division’s. This was in 2017.

The lesson – Misunderstanding the technology, like misunderstanding the terminology, wasn’t unusual in the late 1990s when everything was new and confusing. Companies often failed to recognize how confusing the technologies could be. The same holds true today, just for different technologies. (What’s a “bot”?) But misunderstanding the technology can lead to buying the wrong authoring tools, buying the right tools and using them the wrong way, hiring the wrong writers, or all three.

All direct and indirect participants in a project have to understand at least the basics of the technology in order to avoid talking past each other. Never assume that everybody is speaking the same language. (It’s a good idea to hold some education sessions for all the participants. Some will get annoyed at what seems like a waste of time, but I tell clients that it’s better to have people mildly annoyed at an apparent waste of time than to have them really annoyed when the project goes awry.)

Misunderstanding the Workflow

Misunderstanding the terminology and technologies can lead to inefficient or just bad workflows.

Client: “Cut and paste? Yes, of course. We use scissors and double-faced tape.”

The same issue as described earlier.

The problem was that the misunderstanding of the terminology led the client to create an incredibly inefficient workflow. Again, this was in 2003.

Client: “We use ‘authoring tool X’. We were never trained on it so we use instructions that were written by a former member of the doc group.”

This is common.

The problem is that the workflow may have changed since the original author wrote the instructions. The tool probably has. And the original author may have made mistakes in the instructions that have been passed on between generations of authors. The result is that the project’s foundation is becoming increasingly unstable. This was in any year ranging from 1995 to today.

Client: “We create online and print outputs from our help authoring tool so we create two stylesheets, one for the online and one for the print.”

This is also common.

The problem is that the authoring tool may have features that can streamline this workflow such as the ability to create one stylesheet with two mediums, one for online and one for print. But the writers must be familiar with the tool or trained on it to know that these features exist. This was in any year ranging from 1995 to today.
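For readers who haven’t used mediums: a medium is essentially an alternate set of values for the same styles, kept in the same stylesheet, so one CSS file can drive both outputs. Flare exposes this through its Stylesheet Editor, but the underlying idea is roughly this (the selectors and values are just examples):

    /* Default (online) look */
    h1
    {
        font-family: Verdana, sans-serif;
        font-size: 24pt;
        color: #004080;
    }

    /* Print medium overrides, kept in the same stylesheet */
    @media print
    {
        h1
        {
            font-family: "Times New Roman", serif;
            font-size: 16pt;
            color: #000000;
        }
    }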

Client: “We were wondering what those ‘styles’ things were.”

Writers for a real estate company used Word to write their company’s procedure manual, each writing a different section. The writers didn’t use styles. Instead, they did local formatting. They were sharp enough to realize the need for consistency and developed a set of formatting standards.

The problem was that they invariably deviated from those standards. When it was time to combine each writer’s output to create the final manual, they had an enormous amount of cleanup to do to make the formatting consistent. This was in 2011.

The lesson – Misunderstanding terminology or tool features often leads to inefficient or just plain wrong workflows that require work-arounds. In the past we had the time to fix the problems or do the work-arounds, often by hand, because the time-to-market requirements for our documentation were looser than they are today. Today, as content becomes increasingly important to your company, making the workflow efficient and effective is becoming crucial.

Justification to Management

Mistakes in justifying buying new software and training employees to use it can be harmless, or they can lead a company down the wrong path.

Client: “My daughter came home from college and said ‘Dad! The company has to start creating online help!’ and I said ‘Okay!’”

A manufacturing company division manager put his division on the road to online documentation based on that statement from his daughter who was home from college. There was so little guidance in the old days that the manager could tell his IT manager (tellingly, not the documentation manager) to buy whatever tool seemed appropriate, with no needs assessment or tool evaluation. That was justification to management in a simpler era, 1996.

Client: “We have an HTML tool that we adopted in 1999 and it seems to be working fine except for a few minor problems.”

A federal agency had gone online in 1999 and found the experience so difficult that they didn’t want to repeat it. But the old tool was no longer supported and no longer followed modern coding practice – it was a dead end. The only solution was to buy a modern authoring tool and convert thousands of pages from the old HTML-ish format to XHTML. It was going to be a very tough job but management didn’t want to change, instead telling the writers to create work-arounds to the problems. That was in 2015.

The lesson – Justifying the cost of a new technology or tool was easy when documentation wasn’t taken seriously. But documentation/content today must increasingly fit into corporate environments and support corporate business and strategic goals (not technical communication goals), has become more complex and expensive, and must compete with other ideas being presented to management. Proponents need to present and defend it on the business grounds of how it benefits the company. Any other approach is likely to fail.

General Silliness

I’ll end with two mistakes that suggest just how easy it is to go wrong.

Client: “We want to use Doctor Help but we can’t find it.”

I gave a presentation to the Boston chapter of the STC (Society for Technical Communication) in 1993. In that presentation, I mentioned various help authoring tools including Doc-to-Help. An attendee called me a week later to ask where she could find “Doctor Help”. I told her I’d never heard of it. She said that I had specifically mentioned it in the presentation. We eventually realized that when she heard me say “Doc-to-Help”, she assumed that I was speaking in a Boston accent (the “r” at the end of a word is often ignored so that “park the car” comes out as “pahk the cah”) and meant “Doctor Help”. But her company wasted several days looking for “Doctor Help”.

And finally…

Client: “We want to use RoboCop!”

A division of a manufacturing company used “Lotus Notes” to put its documentation online. During a visit to another division, the manager saw their online documentation, liked its format better than what Notes could create, asked how they created it, and was told “We use Robo-something.” He told his IT manager to switch their online documentation from “Notes” to “RoboCop”. The IT manager spent a month searching for it before finally stumbling over “RoboHelp”. This was in 1995.

The lesson – Some mistakes are just so odd and silly that they’re hard to defend against. But they suggest that you check your basic assumptions carefully when you start a project and re-check them over the duration of the project in the face of changing technologies.

Summary

It’s easy to summarize this article. Companies are continually moving into new, confusing, and rapidly-changing areas that lie outside their core competencies. Because of this, it would be surprising if a company did not make mistakes. But as the complexity of the technology goes up and time-to-market goes down, the cost of mistakes goes up as well. So while we’ll never avoid making mistakes, we can at least try not to repeat the mistakes of the past.

A version of this article was originally published in ISTC Communicator, Winter 2018.

Wednesday, September 19, 2018

Is Single-Sourcing Dead or Alive – the Debate Continues


I recently wrote a blog post called “Is Single-Sourcing Dead” on the Hyper/Word Services blog (http://hyperword.blogspot.com/) in response to a blog post by Mark Baker called “Time to move to multi-sourcing” (https://everypageispageone.com/2018/04/06/time-to-move-to-multi-sourcing/). Baker responded with a post at https://everypageispageone.com/2018/09/10/is-single-sourcing-dead/.

In this post, I’ll respond to what I think are Baker’s major points. (This debate-by-blog can only go on so long before it overwhelms both of us, so I’m going to propose a live discussion between Baker and me at the STC Spectrum 2019 conference in Rochester, NY. We’ll see where that idea goes.)

Note: My most often-used tool these days is MadCap Flare and many of my answers will be from that perspective. However, I suspect that many of my answers apply to other authoring tools as well.

In some cases, Baker and I are, as he put it at one point, “in violent agreement”. Here’s where I think we disagree.

First, in the big picture – Baker notes that there are problems with the current model of single-sourcing and suggests various alternatives. I agree that there are problems but I think they have straightforward solutions – not necessarily simple ones but straightforward ones. I also think that today’s single-sourcing tools have so much untapped power that it would be a mistake to discard them too early.

Now, more specifically, here are Baker’s points, quoted, each followed by my response.

Re the single source format/single repository issue:

That single source format/single repository model has several significant disadvantages, however. I outlined those in my original post on the subject. But since the single format/repository model was used in part to enable multi-format delivery and content reuse, does that mean that those things are dead if we move away from the single format/repository model?

In a word, no, since they can manifestly be done independent of it. But we have to think seriously about how we do them if we don’t have a single source format and a single repository. Going back to everyone using their own independent desktop tools and storing the files on their own hard drives has all sorts of well documented disadvantages, and is a non-starter if and when you move to an integrated web strategy, semantic content, or Information 4.0. So, if the single source/single format approach isn’t the right one either, we have a legitimate question about how we do multi-format publishing and content reuse going forward.

The single source format and single repository is an ideal and, like most ideals, we’ll never quite reach it. But we may not have to. Flare, and probably similar tools, lets us create content in the tool but also take content created in other tools in other formats, mainly Word, and automatically import it into the tool and convert it to the tool’s format. Authors using tools like Word do have to use them minimally correctly – styles rather than local formatting for headings, for example – but that can often be handled with simple training and motivation.

Re the “appropriate tools” issue:

The solution Perlin proposes is simple: Buy the appropriate tools for everyone who needs them.
But there are a couple of problems with this, beyond the unwillingness of companies to pony up the cash. First, these tools are unfamiliar to most of the people who would be asked to use them and they are conceptually more complex than Word. That introduces a training overhead and adds complexity to the writing task every day. And if the contributors don’t use those tools full time, they will forget most of their skills they are trained in.
Giving everyone more complex tools is not really a sustainable strategy, nor is it one that is easy to sell.

My point here is not to buy everyone new, expensive, and unfamiliar tools but instead to buy whatever tool is appropriate. In many cases, authors already have the appropriate tool – Word – and just have to learn how to use it minimally correctly. In other cases, companies may have to buy real single-sourcing tools. Some companies will balk at this, saying that they already have single-sourcing tools in-house, so why buy new ones? But many of those tools were released years ago, no longer meet code standards, and may be minimally supported if at all. I’d argue that it’s a cost-saving in the long run to buy a modern tool for that small number of authors in the company who need them.

Re the training issue:

Perlin argues that many current problems with single sourcing arise because writers are not properly trained to use the tools they have. The solution: more training.

I’m not arguing for more training, although that’s often helpful. Instead, I’m arguing for any training. Too often, authors are thrown into a new tool with no training, just some instructions from a former author that may not apply to the current version of the software or current output needs, and that may contain errors.

Re the inappropriate standards issue:

  • Templates and embedded prompts get overwritten with content, so the structure is not retained and is not available to guide subsequent writers and editors.

Baker is right about the risk of templates and embedded prompts getting overwritten with content. But one feature of templates in modern tools is that they can be added to the tool interface for re-use. That way, creating new material does not overwrite the templates and prompts.

Re the increasing complexity issue:

Documenting all of your complexity is not a good (or simple) solution. Documenting it does not remove it from the writer’s attention. It is better than not documenting it, but not by much. The writer still has to deal with it, still has to spend time and mental energy on it, and can still make mistakes in following the complex procedures you have documented. Much of this complexity can be factored out using the right structured writing techniques.

Another area in which we agree overall but disagree on the details. Some of the complexities can indeed be factored out using structured writing but some can’t. For example, if you’ve defined fifteen different conditions, when should you use each one? What are the rules for clearly naming new topics, graphics, snippets, etc.? And so on. Documenting your projects isn’t the total answer but not documenting them is an invitation to disaster. (My book “Writing Effective Online Content Project Specifications”, available on Amazon, discusses how to document your projects and presents many unpleasant examples of what can happen when you don’t.)

And the authorial motivation issue:

  • With the best will in the world, people can’t comply with a regime that benefits someone else rather than themselves unless they get clear, direct, and immediate feedback, which current tools don’t give them, because the only real feedback is the appearance of the final outputs.

Perfectly true. That’s why, when I show a client how to use some feature that supports single-sourcing, I always emphasize how it will help them. “Remember that white paper you wrote that had fifty subheads formatted using Comic Sans and how you had to change each one individually? How about if I show you how to change all fifty at once by using these things called styles.” Authors don’t always get it right but they’re interested and motivated because the solution is benefiting them and, incidentally, the larger workflow.

  • Management oversight can’t ensure compliance in the production phase of a process if it can only perceive compliance in the finished product. Assessing the finished product every time is too time consuming and error prone (you can miss things). And the effectiveness of management oversight decreases exponentially the longer the delay between when the writer writes and when the manager finds the defect.

Also perfectly true. But problems in the finished product can usually be traced back to and solved in the production phase. We’ll never solve all the compliance problems but we can solve a lot of the major ones. In other words, this is a QA problem.

Turning to the broader point of what can take us beyond single-sourcing:

In Perlin’s model, all of the complexity of making single sourcing work is pushed onto the writers. “the more that this conversion and coding can be pushed back upstream to the individual authors … the easier life will be”. Well, if all that work is pushed to the writers, it is not their lives that are being made easier, since all the work and the responsibility is being pushed onto them. If anyone’s life is being made easier, it is the tool builder’s life.

Here we disagree. I’m not saying that we should push “complexity back upstream to the writers”. I’m saying that we should push tasks that improve the workflow back upstream. For example, rather than authors using local/inline formatting in their documents which then has to be fixed by the information architect, show authors how to use styles from a stylesheet in the first place and, as I noted above, explain how this will benefit the authors. This is a “do it right the first time” approach.
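A trivial illustration of what pushing the right task upstream looks like in the content itself. The first paragraph bakes the formatting into the text, so the information architect has to fix every instance by hand; the second defers to the stylesheet, so one change fixes them all (the class name and values are placeholders):

    <!-- Local/inline formatting -->
    <p style="font-family: 'Comic Sans MS'; font-size: 14pt; font-weight: bold;">Installing the widget</p>

    <!-- Style from the stylesheet -->
    <p class="subhead">Installing the widget</p>

    /* In the stylesheet */
    p.subhead { font-family: Verdana, sans-serif; font-size: 12pt; font-weight: bold; }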

Today there is a rich collection of tools and standards available (largely created to run the Web, which is to say, to build and deliver content systems). With the right roles defined and the right system design, you can construct an appropriate custom system using these components. People do it every day, and at every scale. 

Baker is perfectly right about this. But somebody has to:
  • Combine those tools into a working system.
  • Understand, promulgate, measure, and enforce those standards.
  • Define the roles.
  • Design the system.

However, each one of these tasks has problems.
  • How to combine the tools into a working system? Combined by whom? The tasks may require code-level skills.
  • Driving the standards is hard. They’re often hard to understand without training – what is the CSS standard and what version should we adopt? What is DITA? (“Darwin Information Typing Architecture” tells us nothing about what DITA actually is.) And so on.
  • Roles can certainly be redefined but doing so can be confusing or sound gratuitous. (Yesterday I was a technical communicator. Today, I’m a content engineer. What’s the difference?) Baker is right that there’s a role for content engineers or information architects but there needs to be meat behind the title.
  • Every company has a system, but it’s often one that’s grown organically over years. It may have problems that everyone knows about and knows how to work around but no one has the time or skills to fix. Designing a new system from scratch is a wonderful opportunity but it takes time.

In summary, Baker and I agree in some ways. Today’s single sourcing works, even with its problems. It may not be robust enough to carry us into Information 4.0, but few companies in my experience need to worry about that yet. Most companies don’t have the time or resources to completely overturn their current workflows for a somewhat undefined future. That will happen, but iteratively.

Today’s single-sourcing tools have so much untapped power that abandoning them strikes me as a mistake. If people can make better use of those tools without changing the development model, that’s a simpler approach.

Monday, September 17, 2018

A Comment About Flare's Dropdown Link Type


On July 23, I wrote a post in the Hyper/Word Services blog on “A Review of MadCap Flare’s Link Types” at http://hyperword.blogspot.com/. This post was repeated in MadCap’s blog at https://www.madcapsoftware.com/blog/2018/08/30/navigation-best-practices-guide-link-types-madcap-flare/?utm_source=Newsletter&utm_medium=Email&utm_campaign=20180911Newsletter&utm_source=Newsletter&utm_medium=Email&utm_campaign=20180911SepMadCapInsider. In the post, I stated in the Dropdown Drawbacks section that there were “None, in my opinion. However, I’d be interested to hear competing opinions.”

In response, Jana Vacková of ABRA Software in Praha (Prague) wrote:

We might know about one :-) – so if you are interested in it, here might be one. (I fixed a few spelling and grammatical errors but otherwise left the response as is.)

The opened dropdown menu has a problem flowing around the side menu and using all the possible width of the page. Let me elaborate a bit more:

- In our project (= ERP SW on line help) we use dropdowns very often – as we found it very cool :-) for making the long contents more clearly arranged.

- We use TopNavigation skin with TOC menu at the side of the screen (side menu).

- When the section (of our help) is complicated (as our SW is really huge, complex and complicated :-)) the side menu (with TOC) has a lot of items and is “long”.

- So, if the topic inside this section has dropdowns on the top (in general placed horizontally in the area where the side menu is) and user clicks on one of these dropdowns, the dropdown opens but the width of its body is limited (thanks to the side menu). And is limited up to the end of body content (although the body is much longer than the side menu itself). When the body itself is long there is a lot of unused space on the screen and user must scroll down more. If there are wide tables with many columns in such dropdown body, there is a problem :-(.

Have a look below:

I mean the unused area under the side menu:




It could be perfect if the width of dropdown-body could adapt the free space. So, at the bottom of side menu the text would start to flow round.

P.S. We have contacted MC Flare support but they haven’t advised any solution how to adapt this behavior of the width of opened dropdown body in combination with side menu.
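For what it’s worth, the symptom Jana describes sounds like ordinary CSS layout behavior: while the topic body sits beside the side menu, it’s held to the narrower column, and nothing tells it to reclaim the full page width once the menu ends. A generic illustration only – these are not Flare’s actual skin classes, and your skin may be built differently:

    <style>
        .side-menu  { float: left; width: 300px; }  /* the TOC side menu */
        .topic-body { margin-left: 320px; }         /* body content held to the narrower column,
                                                       even below the end of the side menu */
    </style>

    <div class="side-menu">...long TOC...</div>
    <div class="topic-body">
        <p>An opened dropdown body here never widens to use the space under the menu.</p>
    </div>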

Has anyone else encountered this problem and found a solution?



Monday, September 10, 2018

Word processing through the ages

This article was originally published in ISTC Communicator, Autumn 2018 Supplement.

Neil Perlin looks at the impact word processing has had on technical communication and his career.

In February, 1979, I was hired by a computer company called Digital Equipment Corporation to write the user manual for a general ledger accounting package. I have an MBA in accounting and operations management – mathematical process control – from Boston University, so I knew how a general ledger worked.

I wrote the manual by hand, 400 pages, using pencil and paper. We didn’t have word processors, and all typing was done on typewriters by ‘the girls’ in the typing group. (Stay with me…)

I sent the finished manual out for review. Four of my reviewers said I’d gotten it wrong – a general ledger didn’t work the way I described. What they said ran counter to what I’d learned in my MBA program but I assumed that a big computer company would have gotten a waiver on the standard. (I was very young and innocent then…) No word from the fifth reviewer.

So, I threw out the 400 hand-written pages and wrote a new 400-page manual. By hand. Pencil and paper.

When I finished, I sent it out for review. The four reviewers blessed it. However, the fifth, who had been on vacation during the first review pass, called and spent five minutes giving me an epic chewing out.

When he finished, I explained what happened. After he finally stopped laughing, he said ‘Tell me what you wrote the first time.’

I did, and he said ‘That was exactly right. The other reviewers don’t understand accounting. Go ahead and rewrite what you wrote the first time.’ Which I did. By hand. Pencil and paper.

So, I ultimately wrote 1200 pages – by hand, pencil and paper – to get 400, with the last 400 saying the same thing as the first 400.

Many technical communicators from that era have similar ludicrous stories.

The appearance of word processing changed technical communication forever. Stories like mine became things of the past. Things like ‘paste-up’ and ‘carbons’ vanished into history. In this article, I’ll look at how word processing came to be and end with some thoughts about where it may be going.

History


Word processing dates to Gutenberg and movable type. But for this article, I’ll start with the electronic version.

According to a Computer Nostalgia article1, the first units functionally recognisable as word processors appeared in 1964 with IBM’s introduction of the MT/ST (Magnetic Tape/Selectric Typewriter) which added magnetic tape storage to a standard IBM Selectric. Users could store, edit, re-use, and even share documents. But it was still a typewriter – no screen.

People also did word processing on mainframe computers with time-shared terminals. To get a sense of this, see the first page of ‘Word Processing on the Mainframe Computer’, written in 1984 by Sue Varnon2.

The first units with screens – recognisable as modern word-processors – debuted in the early 1970s from companies like Lexitron and Vydec. Wang Laboratories’ CRT-based word processing system, introduced in 1976, became the standard and made Wang the dominant player in the word processing market. These systems were crude compared to today’s. Most had no navigation keys and instead used the e/s/d/x keys on the keyboard. They had no function keys for attributes like boldfacing, which was done by pressing key combinations at the beginning and end of the text to be emboldened. There were no options for fonts or the other things that we take for granted today.

WYSIWYG displays didn’t exist. Monitors showed text using the system’s default font. Formatting was done by inserting control characters. There’s debate as to when WYSIWYG appeared – some claim that the early Apple Macintosh, with its bitmapped display, made it possible. Others claim that true WYSIWYG wasn’t possible until laser printers became affordable and could fit on a desk, letting you see on screen what would actually be printed.

But they offered the kernel of what we expect in word processors today.

Furthermore, the term ‘word processor’ referred to dedicated machines rather than software running on general purpose PCs. The general-purpose PCs we use today were just emerging. But once they did, the dedicated machines were doomed. Wang went through internal turmoil due to changing markets, management, and strategy and filed for bankruptcy in 1992. (A fragment of the company survived until 2014.) Other companies like Lexitron, Lanier, and Vydec disappeared so thoroughly that Google searches return only fragmentary mentions.

To put this in perspective, and as an interesting piece of cultural sociology (note the reference to “a girl” in the following item), consider this piece of history from the Computer Nostalgia1 article:

The New York Times, reporting on a 1971 business equipment trade show, said:

The ‘buzz word’ for this year's show was ‘word processing’, or the use of electronic equipment, such as typewriters; procedures and trained personnel to maximize office efficiency. At the IBM exhibition a girl typed on an electronic typewriter. The copy was received on a magnetic tape cassette which accepted corrections, deletions, and additions and then produced a perfect letter for the boss's signature....

These pioneers were replaced by software with almost legendary names – MacWrite, Lotus AmiPro and Manuscript, PC-Write, Electric Pencil, VolksWrite, MultiMate, PeachText, XyWrite, and three that will be more familiar – WordStar, WordPerfect, and Word.

WordStar was the leading application in the early 1980s when CP/M and MS-DOS were competitors. But changes in technology and interface and customer service issues made it falter. WordPerfect took its place as the leading word-processor in the 1980s. But problems with a release for Microsoft Windows gave Microsoft an entrée into the market with Word. Between a smoother introduction and bundling deals that led to Microsoft Office, Word took the lead in the 1990s and has not looked back.

Results


What has this evolution wrought?

  • Word processing has changed how we write, for the worse according to some literary critics. See ‘Has Microsoft Word affected the way we work?’ by John Naughton in the January 14, 2012 issue of The Guardian3 and ‘How Technology Has Changed the Way Authors Write’ by Matthew Kirschenbaum in the July 26, 2016 issue of The New Republic4.

    Personally, I agree that it has changed how I work - for the better. Using a typewriter, changing the material was difficult, often involving White-Out or perhaps even pulling the page out and re-typing it entirely. This made it easy to lose my train of thought. With a word-processor, I can write material, modify it as I go, and easily revert to a previous version. And I can try different wordings to see which is clearer or gives a better readability score. So, overall, and especially after my general ledger user manual fiasco in 1979, I could never give up my word-processor.
  • WYSIWYG authoring is useful but there are periodic arguments about whether it leads authors to focus on formatting content rather than on writing it – appearance over substance. Here’s one example, ‘Word Processors: Stupid and Inefficient’ by Allin Cottrell5.

    Personally, I agree with some of his positions but I think word processing as it currently exists is too entrenched to change in the near future. Also, and interestingly, Cottrell’s position ties in well with the emerging need for content in HTML or XHTML that has no format of its own but that can use multiple stylesheets for single sourcing (see the short sketch after this list).

  • WYSIWYG authoring, plus the ability to insert and position graphics electronically, has sharply reduced the role of the graphic designer. That’s not to say that a graphic designer couldn’t do a better job, just that graphic designers are no longer needed.

  • Authoring support tools like spell-checkers and readability analysers in word-processors sharply reduced the role of editors. (When I was at Digital Equipment Corp in 1982, there were, as I recall, about 20 writers supported by a formal editorial group. Today, I’m surprised and pleased if one of my clients has even one editor on staff.)

  • Many managers wanted computers in their offices because computers were cool, but didn’t want to actually use them because typing was considered to be secretarial work. So, some unsung marketing genius coined the term ‘keyboarding’ instead.

  • Typing pools were almost entirely female because management viewed typing as a secretarial function. The advent of word processing caused debate about whether it would perpetuate the typing pool as a so-called ‘pink ghetto’ or open new avenues for advancement for women. My experience from Digital Equipment was the latter. One woman who started as a typist became one of the coordinators of the company’s export control compliance programme.


  • The culture of technical writing changed. In 1980, my department got two word-processors for the writers to share. Soon after, the manager told me that he had offered jobs to two writers, both of whom turned him down on the grounds that ‘technical writers don’t use computers’. 
  • In the same vein, one of the greatest presentations in the STC conference’s Beyond the Bleeding Edge stream, which I started in 1999 and managed until it ended in 2014, was a retrospective look at changes in writing culture by a speaker who showed a video of a presentation he gave in 1980 entitled ‘Why technical writers should be allowed to use computers’. It’s one of the funniest but most meaningful presentations I’ve ever seen at a conference. (Why meaningful? Because it examined a huge technical and philosophical shift in technical communication. Why funny? Because, almost on cue, the older attendees looked at each other and said “I remember those days!” while the younger attendees looked at each other and said “No word processors? No way!”)
  • Users of word-processors, primarily Word these days, break all kinds of rules to make sure the document prints well. But these users rarely consider that their documents may have to be converted to HTML or XHTML for use online. So, breaking the rules, often by using local formatting rather than styles, seemed to have no downside but now causes frequent problems.

  • Related to the prior point, management tends to view word-processors as akin to typewriters and thus doesn’t train the users on how to use the tool effectively and correctly. The result is usually chaos.
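Here’s the short sketch promised above, of content with no format of its own driven by more than one stylesheet. The same XHTML file renders differently in each output simply by swapping or adding stylesheet links (the file names are invented for the example):

    <!-- topic.htm: structure only, no formatting of its own -->
    <html>
        <head>
            <link rel="stylesheet" href="online.css" media="screen" />
            <link rel="stylesheet" href="print.css" media="print" />
        </head>
        <body>
            <h1>Installing the widget</h1>
            <p>Plug the widget into the nearest socket.</p>
        </body>
    </html>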

The Future?


Will today’s word processing powerhouses eventually go extinct? Word processing is so embedded in business and technical communication that it’s hard to imagine, but many once-dominant tools and companies have vanished.

I can think of two things that might change the future of word processing:

  • It’s been said of Word that most people use 5% of its features. The problem is that each person uses a different 5%. So, an interface that users can easily customise, without a consultant to do so, would be a big help.

  • Eliminating typing. A speech-to-text interface, an Alexa of word-processors, may be possible in the future. But the system will have to be smart enough to recognise and remove all the throat clearing and ‘like’ and ‘you know’. And, each person’s voice is different so the system will need a lot of training. And AI might be needed to help the system understand when to emphasise a word without the authors having to tell it to do so and breaking their train of thought.


And the need for word processing as we know it might disappear. An article called ‘Getting The Next Word In’ by Ernie Smith6 from 2016 makes some interesting philosophical points. “The reasons we have traditionally used word processors has slowly been eroded away,” he explained. “LinkedIn is replacing the resume, GitHub is replacing documentation, and blogging (and respective tools) have chipped into journalism. Even documents that are meant to be printed are largely being standardised and automated. Most letters in your physical mailbox today are probably from some bank that generated and printed it without touching Word.”

Perhaps the best indicator of how thoroughly word processing has penetrated the world, especially that of technical communication, is the fact that it’s taken for granted except when we complain about some feature of Word. The wonder that it evoked in 1971 is long gone. And that’s a sign of success.

References


1.     Computer Nostalgia (no date) ‘Computer History. Tracing the History of the Computer – History of Word Processors’ www.computernostalgia.net/articles/HistoryofWordProcessors.htm (accessed July 2018)
2.     Varnon S (1984) ‘Word Processing on the Mainframe Computer’ The Journal of Data Education, Volume 24, 1984 – Issue 2 www.tandfonline.com/doi/abs/10.1080/00220310.1984.11646292 (accessed July 2018)
3.     Naughton J (2012) ‘Has Microsoft Word affected the way we work?’ The Guardian www.theguardian.com/technology/2012/jan/15/microsoft-word-processing-literature-naughton (accessed July 2018)
4.     Kirschenbaum M (2016) ‘How Technology Has Changed the Way Authors Write’ The New Republic https://newrepublic.com/article/135515/technology-changed-way-authors-write (accessed July 2018)
5.     Cottrell A (1999) ‘Word Processors: Stupid and Inefficient’  http://ricardo.ecn.wfu.edu/~cottrell/wp.html (accessed July 2018)
6.     Smith E (2016) ‘Getting The Next Word In’ Tedium. https://tedium.co/2016/10/04/word-processors-future (accessed July 2018)


Related reading


Ashworth M (2017) 'The death of sub-editing' Communicator, Spring 2017: 14-17

Dawson H (2017) 'Industrial revolution in Fleet Street' Communicator, Summer 2017: 26-29

Glossary


AI. AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. 
https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence

Alexa. Alexa is a virtual digital assistant developed by Amazon for its Amazon Echo and Echo Dot line of computing devices.
https://www.webopedia.com/TERM/A/alexa.html

Carbon copy. A carbon copy (or carbons) was the under-copy of a document created when carbon paper was placed between the original and the under-copy during the production of a document. In email, the abbreviation CC indicates those who are to receive a copy of a message addressed primarily to another (CC is the abbreviation of carbon copy).
https://en.wikipedia.org/wiki/Carbon_copy

CP/M. CP/M, which originally stood for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc.
https://en.wikipedia.org/wiki/CP/M

GitHub. GitHub is a web-based version-control and collaboration platform for software developers.
https://searchitoperations.techtarget.com/definition/GitHub
https://github.com

HTML. Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications.
https://en.wikipedia.org/wiki/HTML

Keyboarding. Entering data by means of a keyboard.

MS-DOS (Microsoft Disk Operating System). MS-DOS was the main operating system for IBM PC compatible personal computers during the 1980s and the early 1990s.
https://en.wikipedia.org/wiki/MS-DOS

Paste-up. A document prepared for copying or printing by combining and pasting various sections on a backing.
https://en.wikipedia.org/wiki/Paste_up

Pink ghetto. ‘Pink ghetto’ is a term used to refer to jobs dominated by women. The term was coined in 1983 to describe the limits women have in furthering their careers, since the jobs are often dead-end, stressful and underpaid.
https://en.wikipedia.org/wiki/Pink-collar_worker#Pink_ghetto

White-Out. White-out is a correction fluid. It is an opaque, usually white, fluid applied to paper to mask errors in text. Once dried, it can be written over. It is typically packaged in small bottles, and the lid has an attached brush (or a triangular piece of foam) which dips into the bottle. The brush is used to apply the fluid onto the paper. In the UK, ‘Tipp-Ex’ is used more commonly.
https://en.wikipedia.org/wiki/Correction_fluid

WYSIWYG. WYSIWYG is an acronym for ‘what you see is what you get’.
https://en.wikipedia.org/wiki/WYSIWYG

XHTML. Extensible Hypertext Markup Language (XHTML) is part of the family of XML markup languages. 