Tuesday, June 4, 2019

GUI Information 4.0 Tools – A Proposed Feature Set

NOTE: This post is a slightly modified version of an article in the Summer 2019 issue of ISTC Communicator. The goal of the post is to foster discussion about what features might appear in a GUI Information 4.0 authoring tool. The Communicator staff has agreed to let me post comments about this article in a follow-up article to appear in the Autumn 2019 issue. Because of the editorial deadlines, I have to receive any comments that will appear in the follow-up article by June 30. (Comments received after that will of course be publicized but will not appear in the follow-up article.)

When I started creating hypertext in 1987, working directly in RTF codes, the field was almost unknown. Even Microsoft’s introduction of Windows Help in 1990 did little to expand the field. That didn’t happen until the first GUI authoring tools, Doc-To-Help (WexTech) and RoboHelp (Blue Sky Software) appeared in 1991. (The first online help conference that I attended, in 1991, had about a dozen attendees from all over North America. Three years later, after the GUI authoring tools appeared, I was speaking about help authoring to 100+ attendees just from the Boston area.)

The web followed the same path – an esoteric, code-based technology with a tiny cadre of authors that only expanded after GUI authoring tools like HotDog Pro appeared. I expect that Information 4.0 will follow the same path.

The point of this article is to take a first cut at the features that I’d look for in GUI I40 tools. This is not a comprehensive list by any means. Space limits how much I can discuss. Plus, tool definition is best done with multiple participants in order to get multiple viewpoints. So, this article is a start that I hope will provide the basis for further discussion.

To set the context for this article, here are the seven major characteristics of Information 4.0 as defined by Andy McDonald and Ray Gallon.

  • Independent – Separate from format and business rules.
  • Molecular – “Info molecules” self-assemble into “compounds” based on “state vectors”.
  • Dynamic – Continuously updated.
  • Offered – Available if needed.
  • Ubiquitous – Online, searchable, and findable.
  • Spontaneous – Triggered by contexts.
  • Profiled automatically.

Some of these items overlap, so I’ll combine several of them to simplify the discussion. I’ll also add project management.

Authoring Features

These include:

  • Editor for creating new molecules (content chunks) from scratch or based on templates and some structured authoring model.
  • Ability for authors to create custom templates.
  • Lightweight and unintimidating version of the editor for use by subject matter expert authors who are not technical communicators. Alternatively, ability for a system administrator to physically hide, not just disable, elements of the interface for subject matter expert authors.
  • Localizable editor interface.
  • Ability to import legacy content in Word, FrameMaker, HTML, DITA, DocBook, InDesign, and other formats, and to break the incoming documents into smaller molecules based on heading styles or other properties of the material.
  • Ability to detect and eliminate tool-specific features automatically, or flag them for human intervention.
  • Ability to detect and eliminate local, non-standard, or simply weird formatting automatically, or flag it for human intervention.
  • Ability to create different categories of content chunks with different properties, such as micro content or text-only, in order to create different types of molecules for different needs.
  • Limited to standard HTML features, with no dependence on add-ons or plug-ins.
  • Support for W3C (World Wide Web Consortium) compliant CSS features and validation.
  • Support for insertion of standards-body compliant metadata, such as the W3C’s RDF (Resource Description Framework) and validation.
  • Support for insertion of business rules ranging from conditionality to standards such as BPEL (Business Process Execution Language) and validation.
  • Support for accessibility standards such as WCAG (Web Content Accessibility Guidelines) and validation.
  • Ability to partly or fully automate the insertion of the previous four types of tags through the I40 tool rather than through proprietary add-ons.
  • Compliance with data privacy regulations.
  • Ability to enforce correct content structure, such as heading sequence and nesting.
  • Support for grammar and spell checkers in multiple languages.
  • Ability to work in conjunction with an expert system for machine-generated content in cases where human authors cannot create or tag content quickly enough for the organization’s needs.
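To make the legacy-import idea above concrete, here is a minimal sketch of breaking an imported HTML document into molecules at heading boundaries. The function name, molecule structure, and regex-based approach are my own illustration, not any existing tool’s API; a production tool would use a real HTML parser.

```python
import re

def split_into_molecules(html, levels=("h1", "h2")):
    """Split legacy HTML into molecules at the given heading levels."""
    pattern = re.compile(r"<(%s)[^>]*>(.*?)</\1>" % "|".join(levels),
                         re.IGNORECASE | re.DOTALL)
    matches = list(pattern.finditer(html))
    molecules = []
    for i, m in enumerate(matches):
        # Each molecule's body runs from the end of its heading to the
        # start of the next heading (or the end of the document).
        end = matches[i + 1].start() if i + 1 < len(matches) else len(html)
        molecules.append({
            "level": m.group(1).lower(),
            "title": m.group(2).strip(),
            "body": html[m.end():end].strip(),
        })
    return molecules

doc = "<h1>Setup</h1><p>Install it.</p><h2>Requirements</h2><p>Windows 10.</p>"
for mol in split_into_molecules(doc):
    print(mol["level"], mol["title"])
```

A real importer would also carry over the detected metadata and flag any non-standard formatting for human review, as described above.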


  • Many companies have created unique formatting using proprietary styles and hand-coding and may be reluctant to abandon it. The same is true for legacy content whose formatting may be so non-standard as to be difficult for I40 tools to parse, especially content from Word. The best answer is to throw away the legacy content and recreate it from scratch, but many companies will balk at the effort and expense.

    So the tools will have to offer analysis features that look at the proprietary coding and suggest W3C-compliant alternatives. (It also means that there will be a lot of consulting work cleaning up legacy material.) Finally, it means that authors will have to be sold on the need for proper, rule-based authoring – i.e. why they have to do it right from now on.
  • Eliminating tool-specific features may be difficult to sell to authors who are accustomed to the features offered by their current authoring tool.
  • The need to use expert systems to create machine-generated content will change authors’ roles from writer to expert system rule definer and content curator. Traditional writing will become a thing of the past in companies using Information 4.0. Based on years of experience helping companies move from print to online, I anticipate resistance, perhaps even mass retirement, on the part of older writers. The result will be the loss of the “corporate memory” those writers hold as they are replaced by more technically oriented authors.
  • All the required tagging will require automation. Human authors won’t be able to work quickly or reliably enough. This again pushes toward an expert system model and a major shift in the nature of technical communication.

Contextualization and Spontaneity Features

These include:
  • Support for traditional context-sensitive help.
  • Support for additional contextualization types such as:
      • Geographical – physical location, determined outdoors using GPS or indoors using beaconing.
      • Chronological – date and/or time.
      • Environmental – temperature, light levels, and more.
      • Spatial – device orientation, such as whether you’re holding your phone in portrait or landscape mode, and more.
      • Personal – pulse, temperature, and more.
      • Perhaps other contexts, such as physical sensing to detect conditions like vibration or strain in machines.
  • Generation of code for these contexts that can be used for further processing within the repository.
  • Interface look and feel customization.
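One way to picture how these contexts might drive spontaneous content is a “state vector” of current context values matched against trigger rules. This is purely a sketch of the idea; the field names, rule format, and molecule identifiers are hypothetical.

```python
# Hypothetical "state vector" describing the requester's current contexts.
state = {
    "location": "machine_room_3",   # geographical (beacon-derived)
    "time": "02:30",                # chronological
    "vibration": "high",            # physical/machine condition
    "orientation": "landscape",     # spatial
}

# Each rule maps a set of context conditions to the molecule it triggers.
rules = [
    ({"vibration": "high"}, "molecule:emergency-shutdown-steps"),
    ({"location": "machine_room_3"}, "molecule:room-3-floor-plan"),
    ({"orientation": "portrait"}, "molecule:compact-layout-note"),
]

def triggered_molecules(state, rules):
    """Return molecules whose conditions are all satisfied by the state vector."""
    return [mol for cond, mol in rules
            if all(state.get(k) == v for k, v in cond.items())]

print(triggered_molecules(state, rules))
```

In a real system the state vector would be refreshed continuously from the sensors, which is exactly the transience problem discussed below.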


  • Transience. Traditional context-sensitivity is stable until the requester changes it – e.g. you’re in dialog box A until you go to dialog box B. But the other types can change quickly and often, like a light sensor that has to distinguish between light and shadow while the requester is under a tree on a windy day. This puts more demands on the sensors.
  • Context detection method. Traditional context detection is built into our authoring tools; the other types are not, and the detection method must be coded separately. We’ll need programmer support.
  • Context transmission method. Transmitting the contexts to the processor needs fast and reliable internet access, plus local fallback when internet access is slow or lacking.
  • Context processing. The context must be analyzed to determine what content fragments to send to the requester. This might take place outside or, eventually, within the authoring tool, possibly on a server.
  • Minimum hardware, software, and network requirements.
  • Fragments will have to meet the needs defined by the contexts. That seems self-evident, but it means that authors will have to do context definition prior to content creation. “Winging it,” a bad idea today, will be a really bad idea under Information 4.0.
  • Fragments may have to stand alone or be combinable on the fly in response to user requests.
  • Molecule and graphics file naming conventions and similar controls, often glossed over, will be crucial.
  • Search will be central to finding information, so SEO (search engine optimization) will be crucial. SEO will also have to reflect new types of search, such as the “position zero” model for voice computing using devices like Alexa.
  • Fragments may have to be created to meet different, personalized requests. For example, for a process description, can there just be one fragment containing a list of the steps? Must there be an additional fragment containing the steps and the concepts? Or an additional fragment that describes the concepts that can be combined with the steps fragment depending on the requester’s background? And how do we know the requester’s background?
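The on-the-fly combination of fragments described above might look like the following sketch, where the requester’s profile decides whether concepts are prepended to steps. The fragment names and profile fields are my own invention for illustration.

```python
# Hypothetical fragment store: steps and concepts kept as separate molecules.
fragments = {
    "steps": "1. Open the valve. 2. Check the gauge.",
    "concepts": "The valve regulates line pressure.",
}

def assemble(profile):
    """Novices get concepts plus steps; experienced users get steps only."""
    if profile.get("experience") == "novice":
        return fragments["concepts"] + " " + fragments["steps"]
    return fragments["steps"]

print(assemble({"experience": "novice"}))
print(assemble({"experience": "expert"}))
```

The open question in the text remains: where the "experience" value comes from, i.e. how we know the requester’s background.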

Dynamic/Continuous Updating Features

These include:
  • Continuous content updating “in real time” and availability to users “in real time”.
  • Content cannot be behind a firewall or login because that would force users to actively request it rather than receiving it automatically. However, this may conflict with companies’ needs to safeguard their content.
  • Users must be able to take for granted that the content they see is current and accurate. The currency aspect can be met by informational messages indicating the date and time of the last update or the time remaining until the next one.
  • Support for automatic updating in situations where users temporarily lack internet access.
  • Ability for system administrators to define “continuous” and “real time” for their specific needs.
  • Support for creation of update scripts.
  • Output tailored to users’ profiles.
  • User customization of the content through bookmarking and annotation.
  • Faceted search.


  • Defining “dynamic” as continuous updating is too vague. Some content may have to be updated continuously in, literally, real-time. For example, aircraft pre-flight checklists may have to be updated continuously as the weather changes. But other content, such as HR material, may only need updating weekly. Authors will have to analyze their content and its subject to define the frequency and scale of updating.
  • Dynamic updating suggests that the content should not require compilation since compilation might take minutes (crucial in an aircraft pre-flight or emergency procedures application).
  • Until Information 4.0 authoring tools become as integrated as today’s help authoring tools, we’ll need programming support to write the scripts to read the context state information, translate that to the RDF codes, and call the fragments to generate the output.
  • Is the output a loose set of XHTML or XML files or a packaged set of files like that created when outputting HTML5 from a help authoring tool? If ancillary navigation files, like tables of contents, are to be part of the output, they have to be generated and applied to the output through some build process. Most builds are quick, under a minute, but users may not want to or be able to wait even that long, so they may not use the content at all, or may use an older version if they can.
  • The build time problem can be avoided by simply uploading the requested content molecules to the user’s device but, again, how will the ancillary files be applied, if at all?
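The point that authors will have to define “continuous” per content type could be captured in something as simple as a per-class update policy. This is a sketch only; the content classes and intervals are invented examples matching the aircraft-checklist and HR cases above.

```python
from datetime import timedelta

# Hypothetical per-content-class definitions of "continuous" updating.
update_policies = {
    "preflight_checklist": timedelta(seconds=30),  # effectively real time
    "emergency_procedure": timedelta(minutes=5),
    "hr_handbook": timedelta(weeks=1),
}

def is_stale(content_class, age, policies=update_policies):
    """True if content of this class is older than its allowed update interval."""
    return age > policies[content_class]

print(is_stale("hr_handbook", timedelta(days=3)))             # HR content can wait
print(is_stale("preflight_checklist", timedelta(minutes=2)))  # checklist cannot
```

A system administrator defining these intervals is exactly the “define continuous and real time for their specific needs” feature listed above.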

Ubiquity Features

These include:
  • Support for responsive design, layout, and responsive text (such as automatically changing “click” to “tap”) through the GUI.
  • Support for micro content creation for use in search, bots, and AR/VR.
  • Support for voice-driven computing.
  • Support for SEO, including SEO for the emerging “position zero” search for voice-driven computing.
  • An open architecture to support new platforms or content mechanisms that will emerge in the future.
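The responsive-text feature mentioned above (automatically changing “click” to “tap”) can be sketched as a simple per-device term substitution. The term tables and placeholder names here are hypothetical, not from any shipping tool.

```python
# Hypothetical responsive-text tables: interaction verbs per device class.
TERMS = {
    "desktop": {"activate": "click", "context_menu": "right-click"},
    "touch":   {"activate": "tap",   "context_menu": "long-press"},
}

def render(template, device):
    """Fill interaction placeholders for the target device class."""
    return template.format(**TERMS[device])

step = "{activate} Save, then {context_menu} the file to rename it."
print(render(step, "desktop"))
print(render(step, "touch"))
```

In a GUI I40 tool this substitution would be applied automatically at output time rather than authored by hand, per the automation point below.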


  • Successful micro content creation will require that the question/response pairs include as many synonyms as possible in order to avoid too many “No information available” responses that will quickly turn users away from the feature.
  • Responsive layouts and text are quick to create individually but time-consuming in large projects, so their creation will have to be automated.
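The synonym point above can be illustrated with a toy phrase/response store where several phrase variants map to one response, so near-miss queries still get an answer. The phrases, phone number, and loose word-overlap matching rule are all my own illustration, not how any particular search engine matches micro content.

```python
import re

# Hypothetical micro content store: synonym phrases share one response.
micro_content = {
    "tech support phone number": "Call 1-555-0100.",
    "support phone #": "Call 1-555-0100.",
    "helpdesk number": "Call 1-555-0100.",
}

def answer(query, store=micro_content):
    """Return a stored response whose phrase loosely matches the query."""
    words = set(re.sub(r"[^\w\s#]", "", query.lower()).split())
    for phrase, response in store.items():
        phrase_words = set(phrase.split())
        # Loose match: one word set contains the other.
        if phrase_words <= words or words <= phrase_words:
            return response
    return "No information available"

print(answer("What is the helpdesk number?"))
```

The more variants the author adds, the fewer “No information available” responses users see, which is the point of the bullet above.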

Project Management and Analysis Features

These include:
  • Workflow and sign-off control.
  • Detailed reporting through a report generator.
  • Summary reporting via configurable dashboards.
  • Linking to external reporting tools. For example, from Marie Girard, “…to get a consolidated and meaningful view of these performance metrics, you will need a fair amount of automation for data analytics across portals, sentiment analysis of comments, and the pulling together of that data into visualizations that you can easily interpret and act upon. You will also need some sort of connection between your knowledge management systems and customer relationship management systems, so that you can better trace the role of content in customer interactions.” (Marie Girard, “3 steps towards continuously updated content”, January 2018, Information 4.0 Consortium blog)


As I said in the beginning, this article is a first cut at a proposed features list for I40 authoring tools. It is far from comprehensive, but I hope it will serve to catalyze discussion. And I hope that it will follow the same path that the online help authoring tools and web authoring tools did in the early and mid-1990s but faster because we have that past to help direct us.

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA, USA. He has four decades of experience in technical writing, with 34 in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps, and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been working in mobile since 1998 and XML since 2000, and speaking and writing about Information 4.0 since 2017.
Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and ViziApps-certified for the ViziApps Studio mobile app development platform. He is a popular conference speaker, most recently at MadWorld 2019 in San Diego, CA. Neil founded and managed the Bleeding Edge stem at the STC Summit and was a long-time columnist for ISTC Communicator, STC Intercom, IEEE, and other publications. You can reach him at nperlin@nperlin.cnc.net.

Thursday, April 11, 2019

Creating Micro Content in MadCap Flare? What to Keep in Mind

At one time, the topic was the smallest unit of content that you could present to your users. But even a short topic might be too long. Users might just want the phone number for tech support or the setting for a field without having to read an entire topic to find that information. The solution is now available in the latest release of MadCap Flare: micro content.

Wikipedia defines micro content in several ways, one of which is “other small information chunks that can stand alone or be used in a variety of contexts, including instant messages, blog posts, RSS feeds, and abstracts.” MadCap defines it as “text, imagery and/or video content that can be consumed in 10-30 seconds”, i.e. short, concise answers to user questions.

For example, users often look for a phone number for technical support. Rather than search for “tech support” and skim through the list of results, micro content allows users to search for “tech support” and quickly find the phone number, which appears at the top of the results.

Or say users are looking for a specific piece of information in a larger topic, like the number of eggs needed for a cake.

In each case, the result, in the form of micro content, displays first on the list. The result isn’t limited to a single line of information. For example, a Google search for “healthy tomato soup recipe” returns the entire recipe as micro content at the top of the results.

Micro content offers a big potential benefit to users – it saves them time and aggravation when they’re looking for the answer to a question.

Plus, micro content offers several big benefits to Flare authors. It lets them segment and present information in the most immediately useful chunks, and do so quickly and easily by using Flare’s features.

Micro Content Implementation in Flare

There are several ways to create micro content. If you’ve created snippets in Flare, some of the micro content creation options will feel familiar.
  • Create the phrases and responses totally from scratch using the Micro Content Editor. Select File > New > Micro Content to display the editor.

    The phrase side lets you create a new phrase, add different versions of the same phrase – e.g. “Tech support phone #” and “Support phone #” – change or delete an existing phrase, and use variables in a phrase. The response side offers the familiar topic creation options, such as adding a hyperlink or cross-reference, images, variables, snippets, and special characters, and lets you apply mediums to the response.

  • Create the phrase in the Micro Content Editor and link to a topic, snippet, or bookmark to serve as the response.

    You add the new phrase, click the pulldown at the far right of the phrase line, click Add Link, and select the topic or snippet from the Select File dialog box. The entire topic or snippet becomes the response.
  • Select a block of content in a topic or snippet to serve as the response, then click Create Micro Content in the block bar or Home ribbon to display the Create Micro Content dialog box. There, you can type the phrase and select the mco file in which to store this phrase/response pair. The pair then displays in the editor.
A few notes:

  • The micro content files are stored in a MicroContent folder under the Resources folder in the Content Explorer in Flare, and in a MicroContent folder under the Output folder for your target.
  • You can control the skin format of the micro content results by setting its properties in the TopNav and SideNav skin Styles tab.
  • Micro content is supported by the MadCap search and Elasticsearch.

Micro Content Management and Design Considerations

The Micro Content Editor is neatly integrated with existing Flare features. You can:

  • Find micro content files by using the File List feature from the View ribbon and changing the filter to MicroContent files.
  • Use the Text Analysis feature from the Tools ribbon to check the writing of the responses.
  • Use the Reports feature from the Tools ribbon to generate various reports about your micro content.
  • Spell check your micro content files.
  • Run Find and Replace in your micro content files.

When adding micro content to your projects, there are several considerations that affect project management and design, along with ways to resolve them:

  • Greater project complexity – Micro content is one more aspect of a project to be managed. It’s important to document your rules for creating micro content in the project description to be sure that your successors understand the logic behind them. Don’t keep a project description? It’s time to start.
  • Nature of the micro content – How do you decide what micro content to create in the first place? It’s tempting to simply jump into creating the phrase/response pairs, but the decision must be based on user needs. These needs can be identified through user analytics and by reaching out to your customer support and tech support groups. Learn what questions they hear most often and use that information as the basis for your micro content. You’ll also have to include synonyms and different wordings in the phrases. In a sense, creating micro content is similar to indexing in that it’s never finished.
  • Speed of creation – The process of creating the phrase/response pairs is slow when done manually. Start keeping track of the time required so that you can factor that into future project planning.

How can Micro Content be Used?

Any short chunk of information that users might specifically search for can serve as micro content – a “miniature landing page,” as MadCap calls it. And there are several other potential uses of micro content, particularly in Flare:
  • Chatbots – Responses from a bot should be focused and concise, like micro content. Bots have been tremendously overhyped but they are no doubt coming, and micro content will support them.
  • AR – The annotations used in augmented reality applications should be focused and concise in order to use as little screen space as possible. Again, micro content will support this.

And a fourth possible use case is starting to emerge…

  • The conversational web – Over the years, we’ve become accustomed to the search hit lists generated by Google and other search tools. Those work, if we’re looking at a screen and can scroll down the list of hits to find the one that meets our needs. But it’s almost impossible to remember multiple hits and choose between them without seeing them.

    The article “Alexa, I Want Answers” in the March 2019 issue of Wired posited a search paradigm in which users want one answer, or “one-shot answers” to solve the problem of dealing with multiple responses when you can’t see them. That means that search optimization will have to move toward providing the best answer, rather than the best 100,000 answers and, because voice responses have to be short, micro content could be used to provide the voice-optimized chunks of content.

While there are multiple applications for micro content, the easiest way to start using it is through featured search results. By applying and exploring the feature, authors can start laying the groundwork for the chatbot and AR use cases of the future.


Micro content is likely to have major effects on project design, management, and the overall usability of the output. MadCap has integrated micro content smoothly and neatly into the larger Flare architecture, and Flare authors should expect to put it to good use in future projects.

About the Author

Neil has 4 decades of experience in tech comm, with 34 years in training, consulting, and development for various online formats and tools including WinHelp, HTML Help, CE Help, JavaHelp, WebHelp, Flare, and more. Neil is a frequent speaker at MadWorld and various professional groups and the author of several books about Flare and mobile app development.

Neil is MadCap certified for Flare and Mimic, ViziApps certified for the ViziApps mobile app development platform, and certified in other authoring tools.  He provides training, consulting, and development for online help and documentation, Flare, Mimic, other authoring tools, mobile apps, XML, single-sourcing, topic-based and structured authoring, and content strategy.  He can be reached at nperlin@concentric.net, www.hyperword.com.

Position Zero – It’s a Good Thing

In my keynote at the Conduit conference in Philadelphia on April 6, 2019, I mentioned something called “position zero” as an aspect of SEO but didn’t really explain what it was and why it might matter to tech comm.

“Position zero”, also called a “featured snippet”, is a relatively recent addition to a Google search results list. It shows up in the list above the first hit – ergo “position zero”. It has a summary and a description of the site from which it came. For example, searching for “B58 Hustler” in Google produces a featured snippet.

The featured snippet appears above or to the right of the first search result. It’s usually followed by a “People also search for” list of other questions, in text or graphical form.

The featured snippet is determined organically. According to “SEO above position 1: What's Position Zero?” by Kent Campbell at https://blog.reputationx.com/what-is-position-zero-seo:

A few things play into which webpage's content is featured as the snippet:

      1. First page results. It’s necessary that your page is on the first page of search results for your given search query. Usually in the first five results. 
      2. Relevant information. The answer you provide has to be the right answer, and the information on the page must be relevant to the search term overall.
      3. Useful formatting. If you’ve formatted your answer like this answer is, or if you’ve got a nice table of information, Google will be more likely to display it.

So, what does this mean for tech comm? Until now, our searches have been internal to the authoring tool, like Flare’s search engine, or external, using Google, each giving the usual long list of hits. Ideally, our material will appear within the first ten hits, more ideally near the top of that list, but the exact position hasn’t been crucial. Until now…

We’re now moving from screen-based content toward voice-based content. We’ll want to appear at the top of the list of search hits because many users will go with the first hit; they won’t be able to remember the first three, let alone the first ten. Some users might respond to the first hit by asking the search engine for the next one, but it will be the rare user who goes deeper down the list. So the old rule of thumb that any item outside the first ten hits won’t be seen is changing. Now, any item outside the first one, possibly two, won’t be seen. That’s going to affect how we apply SEO to our content.

I expect to see conference presentations later this year or in 2020 on what’s required to reach position zero. Look for a blog post on the subject here in the next few months.

Monday, March 11, 2019

A Comment About My MadCap Flare Links Webinar

I gave a webinar on Flare link types for MadCap on Thursday, March 7, and got the comment below from Jane Brewster, Information Architect for the White Clarke Group in the UK. I appreciate comments like this because my theory is that there's always someone out there who's tried something that I have not, and the best thing to do is to learn from them.

With that, here's Jane's comment:

I thought you might like to know another pro for using togglers rather than hotspots, particularly applicable if you single-source for HTML5 and PDF. I initially used dropdowns but was very pleased to discover togglers and how flexible they allowed me to be with formatting.

In the PDF target I want our toggler or dropdown hotspot to be a sub-heading that sits correctly within the hierarchy, so it might be h2, h3 or h4 depending on where the topic sits in the TOC hierarchy. 

However dropdowns don’t allow the hotspot to dynamically change style if the topic sits at a different level in the output – the style is static (unless I’ve missed something obvious of course!).

To get around this, I use a toggler link conditioned for the HTML5 target, followed by a heading conditioned for the PDF target (usually h2, as our topics all have h1 as the main heading). That way, in the PDF, if the topic is at the top level the toggler heading is h2, but if the topic is at the next level down the toggler heading is automatically h3, and so on.

Moving on to XRefs, I agree with not listing them all at the beginning or end of the topic. However, I like to use them slightly differently to the way you describe. Having worked previously with online and PDF help that had to be AAA compliant (so suitable for any differently abled user, possibly using a screen reader), I’m aware that just putting a link in the middle of a sentence isn’t always appropriate (particularly for screen readers), so I put them at the end as a more explicitly worded reference.

For example, instead of:
I really like using MadCap Flare because it’s a very flexible authoring tool.

I use:
I really like using MadCap Flare because it’s a very flexible authoring tool, see MadCap Flare.

The PDF output is in the format:
I really like using MadCap Flare because it’s a very flexible authoring tool, see MadCap Flare (on page 3).

I’ve used hyperlinks in this example but they would be XRefs in Flare. This phrasing also gets around the problems caused when you want to use an XRef to a topic whose title doesn’t make sense in the context of the sentence you’re linking from (not a problem with hyperlinks, of course!), so you can be a bit more flexible with topic titles.