Thursday, June 27, 2019

GUI Information 4.0 Tools: A Proposed Feature Set – Comments

PRELIMINARY

In my article about a proposed feature set for GUI Information 4.0 authoring tools in the Summer 2019 issue of Communicator, I requested comments and feedback on the subject. I present those comments in this follow-up article largely as I received them, with only light editing for clarity and to shorten a few due to space limits. They appear below in roughly the order in which I received them, each prefaced with the author’s name and Twitter handle in bold. (If you’re struck by the seeming lack of organization of the comments, remember that this is a tweet-stream.)

I also added a few comments of my own in response to a phone conversation with one of the commenters. Here, with no further discussion, are the comments.

The Comments

Cruce Saunders @mrcruce

What should next-generation authoring look like now that we have 1,000s of permutations of media, content-types, browsers, channels, contexts, & formats? A difficult & valiant question to try to answer! Added some thoughts to a recent article. A thread for further comment.

Authoring rarely ever happened consistently in one GUI, even for small companies. In an enterprise, authoring is the single most diverse environment within content lifecycle process and technology. Content can be acquired in dozens of ways in a single department!

We should never assume an ability to conform large populations to a single GUI authoring platform. What typically happens in such enforcement scenarios: “cheating”. No GUI, especially one that wants to be so feature-rich, ever meets everyone’s needs.

So content gets built elsewhere and then PASTED into the GUI (often by someone else), where it gets further manipulated and, one hopes, enriched with metadata. Or publishing systems just get built around the GUI for the various authoring groups that decide not to use it.

And the well-intentioned standard authoring regime falls into a chaotic mess of manual content transforms with no accountability or traceability. Most enterprises today live in some form of this mess.

Even when smaller silos achieve somewhat more consistent coherence (e.g. #techcomm), none of the related content sets are compatible. The answer, [A] believes, lies in aligning structural & semantic standard patterns across disparate authoring, management, & publishing systems.

All that being said, we do need to advance the state of GUI authoring. Vendors are working on this in product roadmaps. The biggest area of interest to me is essentially today’s attempts at “What You See Is Semantically Marked-Up Content”.

GUIs that *as the author types* suggest semantic associations derived from an organizationally-standardized taxonomy or ontology provider. This is effortless and invisible...machine-prompted, author-empowering.

The same sort of in-context editing, coupled with machine intelligence, can also help to prompt additional annotation useful for content targeting.

Another area of interest is GUIs in which a “sidecar” toolbar powered by artificial intelligence provides authors with in-context structured snippets for reuse and inclusion, based on the content of the material being authored.

Or the sidecar suggests portions of text that might be reused by others, and provides authors the ability to apply metadata or discussions to individual snippets, or molecules, of content. Of course, these sidecar tools can be made to perform MANY other functions.

In my view, any vendor authoring product, and any related interface, needs to embrace schema application & portability to matter long-term. Companies desperately need to be able to move content around. But this is not possible without schema alignment across systems.

And that is impossible without authoring interfaces that incorporate a structural schema. I’d like to see more friendly blank-canvas interfaces (‘Word-like’) that incorporate an ability to apply and manage schema-driven templates, beyond just standardizing styles.

We can see many attempts at schema-based GUI authoring, especially in the plugin market, where Word-to-DITA conversion has been pursued for some years.

One of the biggest areas of need, and most challenging, is the development of graphical user interfaces that support multiple variations of the same content within a single authoring process.

Personalization based on user type and state, and device or environment states, is something that many authoring processes need. And as we feed our customer experiences with ever-more contextual data, authoring for human- or machine-mediated variation becomes essential.

The good news is this has also been pursued for some years, and the heuristics have been explored in multiple production environments — mostly in Customer Experience Management #cem platforms.

But there's plenty of room for innovation here, because "variation authoring" interfaces have not yet been perfected or mass-adopted. It's still a blue ocean space and vendors can distinguish themselves here.

There’s more to say, and much more to discuss, but the future of authoring is a very deep rabbit hole. And a worthy exploration. Take a look at more ideas from Neil Perlin (@NeilEric) in @ISTC_org Communicator or via the #info40 blog post here:


James Mathewson @Mathewson_CS

The challenge is context. Content is only meaningful to the degree that it is relevant in context. How do you build an authoring system that helps writers grasp digital contextual cues and write relevant content using those cues? Modular content grows this problem exponentially.

Scott Abel @scottabel

Maybe our efforts would be better spent getting corporate leaders (those afraid of being displaced by disruptive innovators) to understand the need to become information-enabled. Authoring tools are created (and updated) in response to demand. The demand is simply not there — yet.

Neil Perlin (in response to Scott Abel’s point above)

A fair point. However, in the early days of help and the web, GUI tool development went on – often in odd or even wrong directions – even as the technology was spreading. Better, IMO, to become information-enabled AND create the tools for doing so at the same time.

Cruce Saunders


The sea change is coming. Both customers and vendors are driving the evolution. One hand washes the other. Celebrate the innovators, wherever they sit.

Mike Atherton @MikeAtherton

+1 for context and structure. Something akin to a headless CMS is a good start, but rather than a bare-bones experience, offer illustrative device- and platform-specific templating to show authors how their work may appear.

And more importantly, since we're moving from a centralised publishing environment to distributed third-party platforms (AMP, Instant Articles, other APIs), we need explicit support and guidance ('recipes' if you will) from platform owners.

Aaaand a new mental model. The print analogy refuses to die and doesn't help separate content from presentation. A better analogy might be radio waves.

Neil Perlin (in response to Mike Atherton’s previous point)

I'll bite. Why radio waves?

Mike Atherton (in response to Neil Perlin’s point above)

Because the information transmitted is intangible, device-agnostic and everywhere at once. And because the same technology can emit frequencies designed for humans and frequencies designed for machines. I didn't say it was perfect :)

Cruce Saunders (in response to Mike Atherton’s point above)

Mike's 'radio waves' framing is similar to how I see content. Anything that can be available in multiple states, places, and usages at one time is very different from a tangible, one-time published artifact. It's 'information energy'. ;) But it's even more durable. So we do need new frames.

Real device-, type-, user-, and context-agnostic contextual preview or simulation is a holy grail. I even think it should be source-agnostic. I actually believe there's an entire missing product category here. Rendering simulation & collaboration is something more than just another feature.

Mike Atherton

It's not even about being WYSIWYG 2.0 (I made that up), but what's missing from the structured content rhetoric is solid criteria for *how and why* to make specific structural choices. Bringing home the context of use may help.

Actually @eaton

I think "next-generation authoring" has to assume that beyond highly data-driven fill-out-the-form stuff that CMS devs have already (kind of) solved… content will end up consisting of 1) Narratives, 2) Components, and 3) Assemblies/Aggregates…

…And it also has to assume that workflow/responsibility for each of those modalities will require different tooling. You talk a little about this downthread, but I think there's too much attention paid to UI and not enough to contextualized UX in the content editing/mgmt space.

Then the big mind-blowing piece is that a huge percentage of what we would call "narrative" is spread across multiple pages/screens/artifacts for final delivery. Some of the journey/experience management stuff starts touching on that, but…

Mark Demeny @mde_sitecore


Great thread and summary from @NeilEric as well. It's a hard one to resolve (esp. over Twitter). Even putting aside the harder questions of content lifecycle, reuse, transformations for specific channels, etc. you get into questions of appropriate tools and interfaces very early.

You'll often hear "I wish my simple-to-use CMS was better at structured/headless content." Similarly, you'll hear the opposite complaint about vendors that have a bias toward structured content but sacrifice page layout or authoring experience.

As I see it, there are three fundamental conflicts in the content lifecycle: distributed vs. centralized (tools, author roles, team, geo, etc.); structured vs. channel-specific; and creation agility vs. reuse (via better findability, analytics, etc. - more lean to the former).

And personalized/contextual content is a problem *layered across all of these*. It could be that a specific region, or an analytics team is responsible for acting on that - so I see that as not a distinct problem, but related to and complicated by the existing conflicts.

Jan Benedictus @JanBenedictus


Structured Content Authoring, Component-Based Authoring, etc. are often mentioned by leaders; but “what problem do we solve” is not articulated. We have to go from “strategic talk” to Tangible Benefits to explain Why. Today we are at @DrugInfoAssn to do so for Pharma #dia2019

Ray Gallon @RayGallon


Check out #nemetics as another vision close to these ideas. cc @toughLoveforx @ddrrnt

Two Additional Points of My Own

Mark Demeny noted correctly that I gave scant coverage to issues of governance, workflow, and sign-off control.

I’ll add that I barely mentioned the effect of Information 4.0 on technical communicators. The increased technical and management complexity may drive some of today’s practitioners out of the field. That’s been predicted with every new technology and, to a degree, has come true, but most practitioners adapt. What’s different with Information 4.0 is that even the base level of technical and management complexity is far higher than that of earlier disruptive technologies like word processing in the 1980s and the web and online help in the 1990s.

Summary

The comments section may seem rambling because it largely follows the order in which the comments and responses arrived. But I left it that way to show the wide range of thought about the technical, structural, management, and even philosophical issues. Once this article appears in Communicator, I’ll add it to the Information 4.0 Consortium blog and the Hyper/Word Services blog, and will add more posts as I get more comments.

So, now what? Is there a next step or has this just been an interesting discussion? That will have to be the subject of more discussion by members of the Information 4.0 Consortium. Stay tuned.

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA, USA. He has four decades of experience in technical writing, with 34 in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps, and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been working in mobile since 1998 and XML since 2000, and speaking and writing about Information 4.0 since 2017.

Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and Viziapps-certified for the ViziApps Studio mobile app development platform. He is a popular conference speaker, most recently at MadWorld 2019 in San Diego, CA. Neil founded and managed the Bleeding Edge stem at the STC summit and was a long-time columnist for ISTC Communicator, STC Intercom, IEEE, and other publications.  You can reach him at nperlin@nperlin.cnc.net.


Thursday, April 11, 2019


Creating Micro Content in MadCap Flare? What to Keep in Mind

At one time, the topic was the smallest unit of content that you could present to your users. But even a short topic might be too long. Users might just want the phone number for tech support, or the setting for a field, without having to read an entire topic to find that information. The solution, now available in the latest release of MadCap Flare, is micro content.

Wikipedia defines micro content in several ways, one of which is “other small information chunks that can stand alone or be used in a variety of contexts, including instant messages, blog posts, RSS feeds, and abstracts.” MadCap defines it as “text, imagery and/or video content that can be consumed in 10-30 seconds”, i.e. short, concise answers to user questions.

For example, users often look for a phone number for technical support. Rather than search for “tech support” and skim through the list of results, micro content allows users to search for “tech support” and quickly find the phone number, which appears at the top of the results.



Or say users are looking for a specific piece of information in a larger topic, like the number of eggs needed for a cake.



In each case, the result, in the form of micro content, displays first on the list. The result isn’t limited to a single line of information. For example, a Google search for “healthy tomato soup recipe” produces the following:


Here, the micro content consists of the entire recipe.

Micro content offers a big potential benefit to users – it saves them time and aggravation when they’re looking for the answer to a question.

Plus, micro content offers several big benefits to Flare authors. It lets them segment and present information in the most immediately useful chunks, and do so quickly and easily by using Flare’s features.

Micro Content Implementation in Flare

There are several ways to create micro content. If you’ve created snippets in Flare, some of the micro content creation options will feel familiar.
  • Create the phrases and responses totally from scratch using the Micro Content Editor. Select File > New > Micro Content. The editor displays, as shown below.



    The phrase side lets you create a new phrase, add different versions of the same phrase (e.g. “Tech support phone #” and “Support phone #”), change or delete an existing phrase, and use variables in a phrase. The response side offers the familiar topic creation options, such as adding hyperlinks or cross-references, images, variables, snippets, and special characters, and lets you apply mediums to the response.

  • Create the phrase in the Micro Content Editor and link to a topic, snippet, or bookmark to serve as the response.



    You add the new phrase, click the pulldown at the far right of the phrase line, click Add Link, and select the topic or snippet from the Select File dialog box. The entire topic or snippet becomes the response.
  • Select a block of content in a topic or snippet to serve as the response, then click Create Micro Content in the block bar or Home ribbon to display the Create Micro Content dialog box. There, you can type the phrase and select the mco file in which to store this phrase/response pair. The pair then displays in the editor.
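
However you create them, the phrase/response pairs end up stored together in the mco file. Purely as a conceptual illustration – the element names below are hypothetical, not Flare's actual micro content markup, and the phone number is invented – a stored pair amounts to something like this:

    <!-- Hypothetical structure for illustration only; Flare's real micro content markup differs. -->
    <microContent>
      <entry>
        <!-- Several wordings of the same question all map to one response. -->
        <phrases>
          <phrase>tech support</phrase>
          <phrase>tech support phone #</phrase>
          <phrase>support phone #</phrase>
        </phrases>
        <!-- The response: a short chunk of content, or a link to a topic or snippet. -->
        <response>
          <p>Technical support: 1-800-555-0123, weekdays 9:00-17:00 ET.</p>
        </response>
      </entry>
    </microContent>
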
A few notes:

  • The micro content files are stored in a MicroContent folder under the Resources folder on the Content Explorer in Flare, and in a MicroContent folder under the Output folder for your target.
  • You can control the formatting of the micro content results by setting their properties on the Styles tab of the TopNav and SideNav skins.
  • Micro content is supported by the MadCap search and Elasticsearch.

Micro Content Management and Design Considerations

The Micro Content Editor is neatly integrated with existing Flare features. You can:

  • Find micro content files by using the File List feature from the View ribbon and changing the filter to MicroContent files.
  • Use the Text Analysis feature from the Tools ribbon to check the writing of the responses.
  • Use the Reports feature from the Tools ribbon to generate various reports about your micro content.
  • Spell check your micro content files.
  • Run Find and Replace in your micro content files.

When adding micro content to your projects, there are several considerations that affect project management and design, along with ways to address them:

  • Greater project complexity – Micro content is one more aspect of a project to be managed. It’s important to document your rules for creating micro content in the project description to be sure that your successors understand the logic behind them. Don’t keep a project description? It’s time to start.
  • Nature of the micro content – How do you decide what micro content to create in the first place? It’s tempting to simply jump into creating the phrase/response pairs, but that should be done based on user needs. These needs can be identified through user analytics and by reaching out to your customer support and tech support groups. Learn what questions they hear most often and use that information as the basis for your micro content. You’ll also have to include synonyms and different wordings in the phrases. In a sense, creating micro content is similar to indexing in that it’s never finished.
  • Speed of creation – The process of creating the phrase/response pairs is slow when done manually. Start keeping track of the time required so that you can factor that into future project planning.

How Can Micro Content Be Used?

Any short chunk of information that users might specifically search for can serve as micro content – a miniature landing page, as MadCap calls it. And there are several other potential uses of micro content, particularly in Flare:
  • Chatbots – Responses from a bot should be focused and concise, like micro content. Bots have been tremendously overhyped but they are no doubt coming, and micro content will support them.
  • AR – The annotations used in augmented reality applications should be focused and concise in order to use as little screen space as possible. Again, micro content will support this.

       And a fourth possible use case is starting to emerge…

  • The conversational web – Over the years, we’ve become accustomed to the search hit lists generated by Google and other search tools. Those lists work if we’re looking at a screen and can scroll down the list of hits to find the one that meets our needs. But it’s almost impossible to remember multiple hits and choose between them without seeing them.

    The article “Alexa, I Want Answers” in the March 2019 issue of Wired posited a search paradigm in which users want one answer – “one-shot answers” – to solve the problem of dealing with multiple responses when you can’t see them. That means search optimization will have to move toward providing the best answer rather than the best 100,000 answers, and, because voice responses have to be short, micro content could provide those voice-optimized chunks of content.

While there are multiple applications for micro content, the easiest way to start using it is through featured search results. By applying and exploring the feature, authors can start laying the groundwork for the chatbot and AR use cases of the future.

Conclusion

Micro content is likely to have major effects on project design, management, and the overall usability of the output. MadCap has integrated micro content smoothly and neatly into the larger Flare architecture, and Flare authors should expect to be able to use it to good effect in future projects.

About the Author

Neil has four decades of experience in tech comm, with 34 years in training, consulting, and development for various online formats and tools including WinHelp, HTML Help, CE Help, JavaHelp, WebHelp, Flare, and more. Neil is a frequent speaker at MadWorld and various professional groups and the author of several books about Flare and mobile app development.

Neil is MadCap certified for Flare and Mimic, ViziApps certified for the ViziApps mobile app development platform, and certified in other authoring tools.  He provides training, consulting, and development for online help and documentation, Flare, Mimic, other authoring tools, mobile apps, XML, single-sourcing, topic-based and structured authoring, and content strategy.  He can be reached at nperlin@concentric.net, www.hyperword.com.





Position Zero – It’s a Good Thing


In my keynote at the Conduit conference in Philadelphia on April 6, 2019, I mentioned something called “position zero” as an aspect of SEO but didn’t really explain what it was and why it might matter to tech comm.

“Position zero”, also called a “featured snippet”, is a relatively recent addition to Google search results lists. It shows up above the first hit – ergo “position zero” – and includes a summary and a description of the site from which it came. For example, searching for “B58 Hustler” in Google gives this result.


The featured snippet appears above or, here, to the right of the first search result. It’s usually followed by a “People also search for” list of other questions in text form or, in this case, in graphical form.

The featured snippet is determined organically. According to “SEO above position 1: What's Position Zero?” by Kent Campbell at https://blog.reputationx.com/what-is-position-zero-seo,

A few things play into which webpage's content is featured as the snippet:

      1. First page results. It’s necessary that your page is on the first page of search results for your given search query. Usually in the first five results. 
      2. Relevant information. The answer you provide has to be the right answer, and the information on the page must be relevant to the search term overall.
      3. Useful formatting. If you’ve formatted your answer like this answer is, or if you’ve got a nice table of information, Google will be more likely to display it.
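
As a minimal sketch of what "useful formatting" can mean in practice – generic HTML, not tied to any particular tool, with made-up contact details – a tightly scoped, question-style heading followed by a short list or table gives Google a self-contained block it can lift straight into the snippet:

    <!-- A concise, self-contained answer block directly under a question-style heading. -->
    <h2>How do I contact technical support?</h2>
    <ul>
      <li>Phone: 1-800-555-0123 (weekdays, 9:00-17:00 ET)</li>
      <li>Email: support@example.com</li>
      <li>Live chat: from the Help menu in the product</li>
    </ul>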

So, what does this mean for tech comm? Until now, our searches have been either internal to the authoring tool, like Flare’s search engine, or external, using Google, each giving the usual long list of hits. Ideally, our material will appear within the first ten hits, more ideally near the top of that list, but the exact position hasn’t been crucial. Until now…

We’re now moving from screen-based content toward voice-based content. We’ll want to appear at the top of the list of search hits because many users will go with the first hit, since they won’t be able to remember the first three, let alone the first ten. Some users might respond to the first hit by asking the search engine for the next one, but it will be the rare user who goes deeper down the list. So the old rule of thumb that any item outside the first ten hits won’t be seen is changing. Now, any item outside the first one, possibly two, won’t be seen. That’s going to affect how we apply SEO to our content.

I expect to see conference presentations later this year or in 2020 on what’s required to reach position zero. Look for a blog post on the subject here in the next few months.

Monday, March 11, 2019

A Comment About My MadCap Flare Links Webinar

I gave a webinar on Flare link types for MadCap on Thursday, March 7, and got the comment below from Jane Brewster, Information Architect for the White Clarke Group in the UK. I appreciate comments like this because my theory is that there's always someone out there who's tried something that I haven't, and the best thing to do is to learn from them.

With that, here's Jane's comment:

I thought you might like to know another pro for using togglers rather than hotspots, particularly applicable if you single-source for HTML5 and PDF. I initially used dropdowns but was very pleased to discover togglers and how flexible they allowed me to be with formatting.

In the PDF target I want our toggler or dropdown hotspot to be a sub-heading that sits correctly within the hierarchy, so it might be h2, h3 or h4 depending on where the topic sits in the TOC hierarchy. 

However, dropdowns don’t allow the hotspot to dynamically change style if the topic sits at a different level in the output – the style is static (unless I’ve missed something obvious, of course!).

To get around this, I use a toggler link conditioned for the HTML5 target, followed by a heading conditioned for the PDF target (usually h2, as our topics all have h1 as the main heading). That way, in the PDF, if the topic is at the top level the toggler heading is h2, but if the topic is at the next level down the toggler heading is automatically h3, and so on.
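
As a rough sketch (my illustration, not Jane's actual markup) of what this pattern can look like in the topic XHTML, assuming placeholder condition tags named Default.OnlineOnly and Default.PrintOnly and a placeholder heading; exact attribute details may vary by Flare version:

    <!-- Toggler hotspot, conditioned to appear only in the HTML5 target. -->
    <p MadCap:conditions="Default.OnlineOnly">
      <MadCap:toggler targets="ConfigDetails">Configuration details</MadCap:toggler>
    </p>
    <!-- Ordinary heading, conditioned to appear only in the PDF target, where it takes
         its place in the print heading hierarchy as Jane describes. -->
    <h2 MadCap:conditions="Default.PrintOnly">Configuration details</h2>
    <!-- The content that both the toggler and the heading introduce. -->
    <div MadCap:targetName="ConfigDetails">
      <p>...</p>
    </div>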

Moving on to XRefs, I agree with not listing them all at the beginning or end of the topic. However, I like to use them slightly differently from the way you describe. Having worked previously with online and PDF help that had to be AAA-compliant (so suitable for any differently abled user, possibly using a screen reader), I’m aware that just putting a link in the middle of a sentence isn’t always appropriate (particularly for screen readers), so I put them at the end as a more explicitly worded reference.

For example, instead of:
I really like using Madcap Flare because it’s a very flexible authoring tool.

I use:
I really like using Madcap Flare because it’s a very flexible authoring tool, see Madcap Flare.

The PDF output is in the format:
I really like using Madcap Flare because it’s a very flexible authoring tool, see Madcap Flare (on page 3).

I’ve used hyperlinks in this example but they would be XRefs in Flare. This phrasing also gets around the problems caused when you want to use an XRef to a topic with a title that doesn’t make sense in the context of the sentence you’re linking from (not a problem with hyperlinks of course!) so you can be a bit more flexible with topic titles.