Monday, September 23, 2019

Single Sourcing with MadCap Flare – Part 4 – Flare Project Import


In previous posts, I noted that conditionality and placeholders are the foundations of single-sourcing but described their use within single projects. What if you need to use the same elements in multiple projects? For example, what if you want all projects from different authors to use the same product_name variable?

You can create those elements from scratch in each project, but that’s inefficient. Or you can copy the control files for these elements from a main project and paste them into the child projects, but what happens if the author of the main project adds a condition? The main project’s modified control files must be recopied into each child project; it’s easy to forget, or to copy the modified control file into the wrong folder in the child project. And it’s easy for a child project author to change the control file so that it deviates from the one in the main project, with a loss of consistency. There’s a better way: the Flare Project Import feature.

Conceptually, this feature is simple and much like copying the control files from the main project to the child projects. The difference is that the Flare Project Import feature maintains a link between the main, or master, project and the child projects: if a copied file has changed in either the master or a child project, Flare redownloads the file from the master to the child. Consider the example below.


You decide that all projects must use the same CSS and Copyrights topic. So, you create a master project, the top box in the flowchart, and store the CSS and Copyrights topic there. (You could use any project as the master. However, that project may contain files that don’t apply in this scenario and are just clutter. It’s usually less messy to create a new project whose only purpose is to serve as the master.)

The authors of the three child projects then link to the master and download the CSS and the Copyrights topic to their projects. All other files in the child projects are different, but the CSS and Copyrights topic are identical.

Now, the author of one of the child projects makes a change in the copy of the CSS downloaded to that child project. Or perhaps the author of the master changes the CSS. Either way, as long as there’s a difference in the CSS, the Copyrights topic, or any other shared file between the master and a child project, Flare redownloads the master’s version of that file to the child project, overwriting the child’s version, when the author of the child project generates the output.

The beauty of this feature is that it ensures consistency across the projects in which it’s used. You no longer have to worry about the authors of the child projects changing the color of the h1 style in the CSS or changing the wording in the Copyrights topic; the next time they generate their output, their changes will be overwritten by the official version of the file in the master project.

Better still, this applies to any file in a Flare project. You get invisible consistency across multiple projects. I consider this one of Flare’s coolest features and yet one of its least well-known because, unless you took a class or called in a consultant, you’d never know that the feature even existed. And the name, Flare Project Import, doesn’t really say what the feature does.

A few tips if you want to use the Flare Project Import feature:
  • Create a separate master project and name it “Master Project” to avoid confusion. You can use any project as the master, but a real project contains many files that have no role in the master/child model; they’re just clutter and potentially confusing.
  • Document the feature to avoid creating “zombie” projects. The term comes from an old client who asked me to investigate why the CSSs in his group’s Flare projects kept reverting to some standard format even though the authors were deliberately modifying their CSSs. As the client said, it was as if the CSSs had become zombies. The problem was that a year earlier, an author had set up the Flare Project Import feature but didn’t document it, then left the company. So the feature was working perfectly but no one knew why or what to do about it. Avoid this problem, and spare yourself the need to call in a consultant, by documenting the project.

That’s it for Flare Project Import. Next up – conditionalized styles and CSS variables.


Monday, September 9, 2019

Single Sourcing with MadCap Flare – Part 3 - Placeholders


Another Flare feature that supports single-sourcing is placeholders, which Flare offers in two forms – variables and snippets. Placeholders let you insert the same content in multiple places in a project, such as multiple topics, and automatically change that content everywhere with just a few mouse clicks.

For example (from the introductory post), let’s say that you’re documenting a new product whose pre-release code name is Longhorn. You write the word “Longhorn” in hundreds of places throughout the documentation.

Then, just before release, Marketing changes the product’s name to Vista. You now have to search the documentation for all instances of “Longhorn” and change them to “Vista”. It’s easy – just do a search and replace.

But what if you misspelled “Longhorn” several times? A search and replace won’t fix that. You could do a fragmentary search and replace – search for “Long” – but that would give many false hits. 

Or, let’s say that you have to repeat the same set of steps in multiple topics. Easy – type the steps once, then copy them and insert them in the appropriate topics. But what happens when one of the steps changes? You have to find each insertion and change it. But how do you know where you inserted the material? You might keep track of the insertions, but that calls for an unusual level of management. You could again search for the insertions, but that might again bring up false hits. Placeholders are a solution in both cases.

Variables

The simplest placeholder is the text-only variable. In the first example, rather than typing the product’s name over and over again, you’d create a variable called, perhaps, productname, and set its definition (its value) as “Longhorn”. You’d then tell Flare to insert the value of the productname variable wherever you wanted the word “Longhorn” to appear in a topic or other file.

Then, when Marketing changes the product name, you’d simply change the value of productname from “Longhorn” to “Vista”. Flare then automatically changes the “wording” everywhere you inserted the variable. You don’t have to keep track of the insertions – Flare does that.

Variables are easy to create and use. What can be tricky is defining them. For example, what do you do if you need to use a variable as a possessive – if you want to say “Longhorn’s features include…”? Do you create two variables called productname and productname_possessive with values of Longhorn and Longhorn’s, or do you use the single productname variable and type “’s features include…”, inserting the productname variable in front of the “’s”? Either approach works; you just have to define which approach to use and get all authors to agree.
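To make the mechanics concrete, here’s a rough sketch of what an inserted variable looks like in a topic’s underlying XHTML. The VariableSet name (General) is invented for illustration, and the exact markup can vary by Flare version:

```xml
<!-- Illustrative Flare topic fragment. The insertion is a reference,
     not literal text: changing the definition in the .flvar file
     updates every insertion like this one at the next generation. -->
<p>New features in <MadCap:variable name="General.productname" /> include
    improved search and faster startup.</p>
```

Because the insertion is an empty reference element, a misspelling can only happen once – in the variable’s definition – which is exactly why variables solve the “Longhorn” search-and-replace problem.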

Flare stores variables in a file called a VariableSet, with a .flvar extension, accessible from the Project Organizer pane. You can create multiple VariableSet files if you want to categorize your variables or simply break a large set into smaller groups.

Several points regarding variables:

  • Variables usually have one value, but sometimes you need different values for different outputs. For example, you might be generating US and Canadian outputs and have a variable called country whose value is US. For the Canadian output, you’d want the value to be Canada. You can override the US value on the Variables tab of the Target Editor, but it’s a good idea to minimize the typing needed to specify output settings. The solution is to create the country variable, specify its value as US, and then click the third icon, Add Variable Definition, on the VariableSet Editor toolbar. You’ll see a second instance of the country variable, greyed out, and can now type the additional value, here Canada.
  • How then do you tell Flare what value to use for a particular output target? On the Variables tab of the Target Editor, there’s a dropdown link to the right of any variable that has multiple values. Click that dropdown link and select the desired value.
  • In addition to the standard variables, you can also specify date and time variables. One common use is to add a variable that specifies when the output was generated and that’s automatically updated. Click the second icon, Add DateTime Variable, type the name, such as Generated On, then click in the Definition field. When the Edit Format dialog box opens, type the format – click the I icon for help – then specify when to update the variable.
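Under the hood, a VariableSet is a small XML file. Here’s a rough, hand-written sketch of a .flvar file holding the country variable with the two definitions described above; the element and attribute names are approximations and the real schema varies by Flare version, so treat this as illustrative only:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CatapultVariableSet>
    <!-- Two definitions of one variable. The first is the default;
         the Variables tab of the Target Editor selects which
         definition a given output actually uses. -->
    <Variable Name="country">US</Variable>
    <Variable Name="country">Canada</Variable>
</CatapultVariableSet>
```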

Snippets

Snippets are similar to variables but more powerful because they can contain anything you’d enter in a topic – text, lists, images, tables, links, formatting, and so on – including variables. In the second example above, rather than typing the steps, copying them, and pasting them into the appropriate topics, you’d create a snippet called, for example, task1_steps. You’d then insert the task1_steps snippet into the appropriate topics. When the time came to change one of the steps, you’d change it in the snippet itself. Flare would then automatically change the “wording” everywhere you inserted the snippet. Again, you don’t have to keep track of the insertions – Flare does.

Each snippet exists in a separate flsnp file that’s accessible from the Resources folder in the Content Explorer. (You can put the snippet files anywhere you want within the project’s structure. I just find it simplest to store them in the Resources folder.)
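As with variables, an inserted snippet is just a reference in the topic’s markup. A sketch, using the task1_steps snippet from the example above (the src path assumes the snippet is stored under Resources/Snippets; exact markup may vary by Flare version):

```xml
<!-- Block snippet inserted on its own line. The steps live once in
     task1_steps.flsnp; every topic containing this reference picks up
     changes to the snippet automatically. -->
<MadCap:snippetBlock src="../Resources/Snippets/task1_steps.flsnp" />
```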

Several points regarding snippets:

  • You can create text snippets and block snippets. A text snippet consists of a paragraph of text or less. If you insert a text snippet on a new line or in an existing paragraph, it keeps its structure as a line of text. A block snippet consists of two or more paragraphs of text, or another element followed by a return, such as a graphic. If you insert a block snippet on a new line, the snippet retains its paragraph structure. However, if you insert a block snippet in an existing paragraph, the block snippet’s paragraphs run together.
  • As noted above, a snippet can contain a variable. However, because you can assign conditions to both snippets and variables, it’s possible to have a snippet whose condition excludes it from a particular output but that contains a variable whose condition includes it in that same output. In such a conflict, Flare defaults to including, so it will display the snippet because it contains the variable. The problem here is a management one; you’ll be wondering why that snippet displayed in the output when you thought you’d excluded it. Keep careful track of how your snippets, variables, and conditions interact.
  • If you have numerous snippets and need to modify one, it can be hard to pick out the right one. The shortcut is to find one instance of the snippet, right-click on it, and select Open Link. This immediately opens the correct snippet file, saving you the trouble of hunting for it.
  • Snippets can solve one limitation of variables. What if you want to end a step in a procedure by telling the users to “Press Enter”? You might create a variable called actionstep, for example, whose value is “Press Enter”. But what if you want the action step’s value to be “Press the Enter icon” followed by the actual Enter key icon? A variable is text only, so that won’t work. However, you could create a snippet called actionstep that contains both the text and the icon.


As with conditions, common problems with placeholders are in design and management. We often give placeholders names that are not clear. The result? Other authors using the same set of variables or snippets won’t find what they’re looking for and wind up creating duplicates, which complicates project management. A related problem lies in not documenting the rules and logic to use if an author really does need to create a new variable or snippet. As with conditions, when you leave and a new author comes on the project, they may send the project off the rails because the lack of documentation makes it easy to make mistakes.

That’s it for placeholders. Next up – Flare Project Import, one of Flare’s coolest and least-known features.



Tuesday, August 27, 2019

Single Sourcing with MadCap Flare – Part 2 - Conditions


In post 1, I said that you can create one source of content and use it for multiple outputs, or select a sub-set of the content for each output. This is one of the foundation concepts of single-sourcing.

Let’s say that you create a sales procedure manual for use in the US and Canada. Some of the material applies only to the US, some applies only to Canada, and some applies to both. You could generate one manual that contains all the material and tell US users to ignore material that applies only to Canada and vice versa. A better solution is to extract the common material and the US material to create the US manual, and the common material and the Canadian material to create the separate Canadian manual.

This capability is driven by conditionality, one of the core features of single-sourcing.

Conditionality

To repeat my initial description from the first post, conditions are essentially categories. To repeat the US/Canada example above – let’s say you have to create a sales procedure manual for use in the US and Canada. Some of the material applies only to the US, some applies only to Canada, and some applies to both.

Conditionality lets you define categories of “US only” and “Canada only”. (Material common to both, or all, categories is always used so it’s not conditionalized.) When you generate the US output, you’d tell Flare to exclude any content categorized as “Canada only”. The output would contain the common material plus the US material but not the Canadian material.

You can apply multiple conditions to material. For example, say that the sales procedure manuals will not only include material specific to the US and Canada but also slightly different material in the online and print versions. So, to create the online sales manual for the US, after applying the appropriate conditions to the appropriate material, you’d tell Flare to exclude any material categorized as “Canada only” and any material categorized as “print only”. The resulting output would contain the common material plus the US and online material but not the Canadian or print material.
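In the topic markup, conditions are simply attributes on the tagged elements. A sketch, assuming a condition set named Default with Canada_Only and Print_Only tags (the names are illustrative, and exact markup may vary by Flare version):

```xml
<p>This paragraph is common to all outputs.</p>
<p MadCap:conditions="Default.Canada_Only">This paragraph is excluded
    when you exclude “Canada only” content from a target.</p>
<!-- Multiple conditions on one element are comma-separated. -->
<p MadCap:conditions="Default.Canada_Only,Default.Print_Only">This
    paragraph carries both the Canadian and print conditions.</p>
```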

In addition to controlling the behavior of topics, conditions can have wider effects. For example, say that you create a general table of contents that lists topics A, B, and C. The actual topic C is conditionalized as “Canada only”. When you tell Flare to exclude “Canada only” material when generating the US output, it excludes the topic and automatically removes it from the table of contents as well. And if topic A has a hyperlink that points to topic C, Flare automatically turns that hyperlink into regular text in order to avoid a broken link.

Conditions are very flexible. You can apply one or more of them to a topic or multiple topics and to any content within a topic, as finely as a single character. You can also apply them to any other project element – images, master pages, stylesheets, and even other conditions. (Although I’d have serious reservations about doing that in a real project.) This flexibility makes them very powerful but also requires that they be used and managed carefully.

The two problems that I most often see with conditions aren’t technical but rather ones of management and design.

  • The first problem lies in not documenting the logic behind the conditions clearly or at all. When the initial author leaves and a new author comes on the project, they may send the project off the rails because the lack of documentation makes it easy to make mistakes when applying or invoking conditions.
  • The second problem is the rapid growth of permutations as you add conditions to a project. With one condition, for example, there are only two permutations – include or exclude. With two conditions, however, there are four permutations – include/include, include/exclude, exclude/include, and exclude/exclude. With three conditions, eight permutations. And so on. It’s easy to get confused about when to include or exclude multiple conditions in combination. And it’s easy to get confused about when to apply them at all. The solution ties back to that for the first problem – document the logic behind when to apply conditions in a project and when to include or exclude different conditions in what combinations.

When might you use conditions in a project?

  • Some uses are obvious – if you have to categorize material in a project as applying to the US or Canada, for example. A similar example would be categorizing material as applying to system administrators, professional users, or clerical users. When generating output for clerical users, you’d then exclude material conditionalized as “system administrator only” and “professional users only”.
  • Another use is to document a project for new authors to reference or for current authors to get back up to speed after being off the project for a while. You can document a project in a separate Word document, for example, but that might get lost. A better approach is to document the project in topics that are part of the project, with author’s notes inserted in specific topics. But you don’t want users to see this material, so you create an “author’s notes” condition, apply it to the topics and the in-topic notes, and then exclude that material when generating the output.
  • Another use is to control which topics are made available to the reviewers. You could include all the material in the output and add instructions telling the reviewers to ignore topic 10 because it’s not finished. The problem is that some reviewers may not read the instructions, review topic 10, and give you scathing feedback about how topic 10 isn’t finished. You can avoid this by creating a condition called “not ready for review”, applying it to topic 10, and excluding the “not ready for review” condition on output.

That’s it for conditions. Next up – placeholders (aka variables and snippets).



Monday, August 26, 2019

GUI Information 4.0 Tools: A Proposed Feature Set – Comments – Final Version


This article first appeared in the Autumn 2019 issue of the ISTC Communicator.

In my article about a proposed feature set for GUI Information 4.0 authoring tools in the Summer 2019 issue of Communicator, I requested comments and feedback on the subject. I present those comments in this follow-up article as I received them, with no editing except to shorten a few threads due to space limitations. The comments appear below. I prefaced each comment, or set of comments, with the author’s name and Twitter handle in bold. (If the comments seem very disorganized, remember that this is a tweet-stream.) I also added a few comments of my own in response to a phone conversation with one of the commenters.

Here, with no further discussion, are the comments.

The Comments

Cruce Saunders @mrcruce
What should next-generation authoring look like now that we have 1,000s of permutations of media, content-types, browsers, channels, contexts, & formats? A difficult & valiant question to try to answer! Added some thoughts to a recent article. A thread for further comment.

Authoring rarely ever happened consistently in one GUI, even for small companies. In an enterprise, authoring is the single most diverse environment within content lifecycle process and technology. Content can be acquired in dozens of ways in a single department!

We should never assume an ability to conform large populations to a single GUI authoring platform. What typically happens in such enforcement scenarios: “cheating”. No GUI, especially one that wants to be so feature-rich, ever meets everyone’s needs.

So content gets built elsewhere and then PASTED into the GUI (often by someone else), where gets further manipulated. And one hopes, enriched with metadata. Or, publishing systems just get built around the GUI for various authoring groups that decide not to use it.

And the well-intentioned standard authoring regime falls into a chaotic mess of manual content transforms with no accountability or traceability. Most enterprises today live in some form of this mess.

Even when some smaller silos create some most consistent coherence (e.g. #techcomm), none of the related content sets are compatible. The answer, [A] believes, lies in aligning structural & semantic standard patterns across disparate authoring, management, & publishing systems.

All that being said, we do need to advance the state of GUI authoring. Vendors are working on this in product roadmaps. The biggest area of interest to me is essentially today’s attempts at “What You See Is Semantically-Markup Up Content”.

GUIs that *as the author types* suggest semantic associations derived from an organizationally-standardized taxonomy or ontology provider. This is effortless and invisible...machine-prompted, author-empowering.

The same sort of in-context editing, coupled with machine intelligence, can also help to prompt additional annotation useful for content targeting.

Another area of interest are GUIs in which a “sidecar” toolbar powered by artificial intelligence provides authors with in-context structured snippets for reuse and inclusion, based on the content of the material being authored.

Or, the sidecar suggests portions of text that might be reused by others. And providers authors the ability to apply metadata or discussions to individual snippets, or molecules, of content. Of course, these sidecar tools can be made to perform MANY other functions.

In my view, any vendor authoring product, and any related interface, needs to embrace schema application & portability to matter long-term. Companies desperately need to be able to move content around. But this is not possible without schema alignment across systems.

And that is impossible without authoring interfaces that incorporate a structural schema. I’d like to see more friendly blank-canvas interfaces (‘Word-like’) that incorporate an ability apply and manage schema-driven templates, beyond just standardizing styles.

We can see many attempts at schema-based GUI authoring, especially in the plugin market, where Word-to-DITA has been something pursued for some years.

One of the biggest areas of need, and most challenging, is the development of graphical user interfaces that support multiple variations of the same content within a single authoring process.

Personalization based on user type and state, and device or environment states, is something that many authoring processes need. And as we feed our customer experiences with ever-more contextual data, authoring for human or machine-meditated variation becomes essential.

The good news is this has also been pursued for some years, and the heuristics have been explored in multiple production environments — mostly in Customer Experience Management #cem platforms.

But there's plenty of room for innovation here, because "variation authoring" interfaces have not yet been perfected or mass-adopted. It's still a blue ocean space and vendors can distinguish themselves here.

There’s more to say, and much more to discuss, but the future of authoring is a very deep rabbit hole. And a worthy exploration. Take a look at more ideas from Neil Perlin (@NeilEric) in @ISTC_org Communicator or via the #info40 blog post.


James Mathewson @Mathewson_CS
The challenge is context. Content is only meaningful to the degree that it is relevant in context. How do you build an authoring system that helps writers grasp digital contextual cues and write relevant content using those cues? Modular content grows this problem exponentially.

Scott Abel @scottabel
Maybe our efforts would be better spent getting corporate leaders (those afraid of being displaced by disruptive innovators) to understand the need to become information-enabled. Authoring tools are created (and updated) in response to demand. The demand is simply not there — yet.

Neil Perlin (in response to Scott Abel’s point above)
A fair point. However, in the early days of help and the web, GUI tool development went on - often in odd or even wrong directions - even as the technology was spreading. Better IMO to become information enabled AND create the tools for doing so at the same time.

Cruce Saunders
The sea change is coming. Both customers and vendors are driving the evolution. One hand washes the other. Celebrate the innovators, wherever they sit.

Mike Atherton @MikeAtherton
+1 for context and structure. Something akin to a headless CMS is a good start, but rather than a bare bones experience, illustrative device and platform-specific templating to show authors how their work may appear.

And more importantly, since we're moving from a centralised publishing environment to distributed 3pp (AMP, Instant Article, other API) then explicit support and guidance ('recipes' if you will) from platform owners.

Aaaand a new mental model. The print analogy refuses to die and doesn't help separate content from presentation. A better analogy might be radio waves.

Neil Perlin (in response to Mike Atherton’s previous point)
I'll bite. Why radio waves?

Mike Atherton (in response to Neil Perlin’s point above)
Because the information transmitted is intangible, device-agnostic and everywhere at once. And because the same technology can emit frequencies designed for humans and frequencies designed for machines. I didn't say it was perfect :)

Cruce Saunders (in response to Mike Atherton’s point above)
Mike's 'radio waves' is similar to how I see content. Anything that can be available in multiple states, places, usages at one time is very different than tangible one-time published artifacts. It's 'information energy'. ;) But it's more durable even. So, we do need new frames.

Real device, type, user, context agnostic contextual preview or simulation is a holy grail. Even think it should be source agnostic. I actually believe there's an entire missing product category here. Rendering simulation & collab is something more than just another feature.

Mike Atherton
It's not even about being WYSIWYG 2.0 (i made that up), but what's missing from the structured content rhetoric is solid criteria for *how and why* to make specific structural choices. Bringing home context of use may help.

Actually @eaton
I think "next-generation authoring" has to assume that beyond highly data-driven fill-out-the-form stuff that CMS devs have already (kind of) solved… content will end up consisting of 1) Narratives, 2) Components, and 3) Assemblies/Aggregates…

…And also has to assume that workflow/responsibility for each of those modalities will require different tooling. You talk a little about this downthread but I think there's too much attention paid to UI and not enough to contextualied UX in the content editing/mgmt space

Then the big mind-blowing piece is that a huge percentage of what we would call "narrative" is spread across multiple pages/screens/artifacts for final delivery. Some of the journey/experience management stuff starts touching on that, but…

Mark Demeny @mde_sitecore
Great thread and summary from @NeilEric as well. It's a hard one to resolve (esp. over Twitter). Even putting aside the harder questions of content lifecycle, reuse, transformations for specific channels, etc. you get into questions of appropriate tools and interfaces very early.

You'll often hear "I wish my simple to use CMS was better at structured/headless content" similarly, you'll hear the opposite complaint of vendors that have a bias toward structured content but sacrifice page layout or authoring experience.

As I see it, there are 3 fundamental conflicts with content lifecycle; - Distributed vs. Centralized (with tools, author roles, team, geo etc.) - Structured vs. channel-specific - Creation agility vs. reuse (via better findability, analytics, etc. - more lean to the former)

And personalized/contextual content is a problem *layered across all of these*. It could be that a specific region, or an analytics team is responsible for acting on that - so I see that as not a distinct problem, but related to and complicated by the existing conflicts.

Jan Benedictus @JanBenedictus
Structured Content Authoring, Component Based Authoring etc. are often mentioned - by leaders ; but “what problem do we solve” is not articulated. We have to go from “strategic talk” to Tangible Benefits to explain Why. Today we are at @DrugInfoAssn to do so for Pharma #dia2019

Ray Gallon @RayGallon
Check out #nemetics as another vision close to these ideas. cc @toughLoveforx @ddrrnt

Two Additional Points of My Own

Mark Demeny noted correctly that I gave scant coverage to issues of governance and workflow and sign-off control.

I’ll add that I barely mentioned the effect of Information 4.0 on technical communicators. The increased technical and management complexity may drive some of today’s practitioners out of the field. That’s been predicted with every new technology and, to a degree, has been true, but most practitioners adapt. What’s different with Information 4.0 is that even the base level of technical and management complexity is far higher than with earlier disruptive technologies like word-processing in the 1980s and the web and online help in the 1990s.

Summary

The comments section may seem rambling because it largely matches the structure of the comments and responses. But I left it that way to show the wide range of thought about the technical, structural, management, and even philosophical issues. Once this article appears in Communicator, I’ll add it to the Information 4.0 Consortium blog and the Hyper/Word Services blog, and will add more posts as I get more comments.

So, now what? Is there a next step or has this just been an interesting discussion? That will have to be the subject of more discussion by members of the Information 4.0 Consortium. Stay tuned.

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA, USA. He has four decades of experience in technical writing, with 34 in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps, and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been working in mobile since 1998 and XML since 2000, and speaking and writing about Information 4.0 since 2017.

Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and ViziApps-certified for the ViziApps Studio mobile app development platform. He is a popular conference speaker, most recently at MadWorld 2019 in San Diego, CA. Neil founded and managed the Bleeding Edge stem at the STC Summit and was a long-time columnist for ISTC Communicator, STC Intercom, IEEE, and other publications. You can reach him at nperlin@nperlin.cnc.net.


Wednesday, August 21, 2019

Single Sourcing with MadCap Flare – Part 1: Overview

Flare is a single-sourcing authoring tool but if you’re new to Flare or single-sourcing in general, what that means may not be completely clear. That’s the subject of my upcoming series of blog posts.

In this post, I’ll define “single-sourcing” and briefly list Flare features that support it. In later posts, I’ll go into the features in detail, focusing on their mechanics, obviously, but also on their effects on project design. I’ll end the series by looking at how single-sourcing can affect project management.

What is Single Sourcing?

Single-sourcing is the creation of one source of content that can be used in multiple outputs (or, in Flare terms, targets).

Let’s say that you need to create a sales procedure manual in online and print form. You can create two projects to do so, one for the online version and one for the print version, but that has some obvious drawbacks.

  • You have to write the content twice.
  • You have to make any changes twice, once in each project. Eventually, inevitably, you’ll make a change in one of the projects but forget to make it in the other one.

Single sourcing fixes these problems.
  • You can create one source of content to use for both outputs, or select a subset of the content for each output. For example, let’s say that you have to create a sales procedure manual for use in the US and Canada. Some of the content might apply only to the US, some might apply only to Canada, and some might be common to both. You can generate one manual that contains all the content and tell the US users to ignore anything marked as applying only to Canada, and vice versa. Better yet, you can extract the common content plus the US content to create the US manual, and the common content plus the Canadian content to create a separate Canadian manual. No matter which approach you take, it’s still only one source of content.
  • You can create re-usable chunks of content and share them in different outputs. For example, the same screen shot and description might be used in different outputs from the same project, or even in outputs from different projects. Rather than writing that content multiple times, once in each project, you write it once, re-use it everywhere, and, if the content changes, update it automatically everywhere it appears.

The basic concepts are that simple.

Overview of Single-Sourcing Features

Flare offers a bunch of features that support single-sourcing. Here are the major ones.
  • Conditions – Conditions are categories. To repeat the US/Canada example – you have to create a sales procedure manual for use in the US and Canada. Some of the content applies only to the US, some applies only to Canada, and some is common to both. Conditionality lets you define “US only” and “Canada only” categories. (Content common to both audiences is always used, so it doesn’t have to be put in a category.)

    To create the US output, you tell Flare to exclude anything set to “Canada only”. The output will contain the common and US content but not the Canadian content.
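    Under the hood, a condition is simply an attribute on the content’s XHTML markup. Here’s a minimal sketch, assuming condition tags named US-Only and Canada-Only in the Default condition tag set (your tag set and tag names may differ):

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <html xmlns:MadCap="http://www.madcapsoftware.com/Schemas/MadCap.xsd">
      <body>
        <p>This paragraph is common to both audiences, so it carries no condition.</p>
        <p MadCap:conditions="Default.US-Only">Submit form W-9 to the sales office.</p>
        <p MadCap:conditions="Default.Canada-Only">Submit the GST form to the sales office.</p>
      </body>
    </html>
    ```

    When you build the US target with “Canada only” excluded, Flare drops the third paragraph from the output; the source topic itself is unchanged.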

  • Placeholders (variables and snippets) – Placeholders are for repeated content. For example, say you’re documenting a new application whose pre-release code name is Longhorn. You write the word “Longhorn” hundreds of times in the content. Then marketing changes the name to “Vista”, and you have to replace every instance of “Longhorn”. Easy – do a search and replace. But what if you misspelled “Longhorn” several times? Search and replace won’t find those instances. That’s where placeholders come in.

    The simplest placeholder is the variable, which is text-only. So, in the example above, rather than typing the product’s name repeatedly, you’d create a variable called, for example, productname, and set its value to “Longhorn”. You’d then tell Flare to insert the value of the productname variable, and what appears to be the word “Longhorn” shows up in the text. When the name changes, just change the value of the productname variable to “Vista” and Flare automatically changes the wording everywhere you inserted the variable.

    A more powerful placeholder is the snippet. Unlike variables, which are text-only, snippets can contain anything that you’d insert in a topic, including variables.
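    In the topic’s markup, variables and snippets are just elements that Flare resolves at build time. A rough sketch, assuming a variable set named General and a snippet file named InstallWarning.flsnp (both names are illustrative):

    ```xml
    <!-- The MadCap namespace declaration on the topic's <html> element is omitted here. -->
    <p>Welcome to <MadCap:variable name="General.productname" />.</p>

    <!-- A snippet insertion pulls an entire reusable block of content into the topic: -->
    <MadCap:snippetBlock src="../Resources/Snippets/InstallWarning.flsnp" />
    ```

    Change the productname variable’s value from “Longhorn” to “Vista” once, and every topic that inserts the variable picks up the new name at the next build.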

  • Flare Project Import – Placeholders let us insert repeated content in multiple topics (and other files – discussed later). The Flare Project Import feature takes that one step further by letting us share content and files across multiple projects. You designate one project as the master and put all the shared files in it. You then create the child “real” projects and specify that they download and use the shared files from the master project. The big advantage of this feature is that the child projects maintain an active link to the master. If a shared file differs between the master and a child project, either because the owner of the master project changed the file or because the author of the child project changed it, Flare automatically downloads a fresh copy of the file from the master when the child project’s author builds the output. This overwrites the version in the child project and ensures consistency.

  • Conditionalized styles – Most projects use one stylesheet, but several Flare features can extend that. For example, you can create two stylesheets for a project and specify that stylesheet A is used for online targets and stylesheet B for print. More efficient is to create one stylesheet and allow stylistic variations between targets by using Flare’s medium feature. Finally, you can create one stylesheet and use the mc-conditions property in the Stylesheet Editor to turn a given style on or off for a particular target.
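    For a sense of how mediums work, Flare stores each medium as a block of overrides in the same stylesheet, conceptually much like a CSS @media section. A sketch with illustrative style values only:

    ```css
    /* Default medium – used by online targets */
    h1
    {
        font-size: 18pt;
        color: #004488;
    }

    /* Print medium – print targets use these overrides, inheriting everything else */
    @media print
    {
        h1
        {
            font-size: 14pt;
            color: #000000;
        }
    }
    ```

    Each target then specifies which medium it uses, so one stylesheet can serve both the online and print outputs.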

  • CSS variables – There may be cases where you want to apply the same value, such as a color, to multiple styles. You might specify the same color for h1, h2, and h3. However, specifying the color multiple times makes it easy to make a mistake – you might mistype one digit in a hex-value color specification, for example. One alternative is to specify the color in the body style, which propagates it down to all the subsidiary styles, but that creates its own problem: any styles that need a different color, such as h4, h5, and h6, would each need an override. The CSS variables feature eliminates the issue by letting you define the color in one variable and then call that variable from the different styles.
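    In plain CSS terms (Flare’s Stylesheet Editor provides a UI over the same mechanism), the idea looks roughly like this – the variable name --heading-color is an assumption:

    ```css
    :root
    {
        --heading-color: #004488;   /* define the color once */
    }

    /* All three headings call the variable instead of repeating the hex value */
    h1 { color: var(--heading-color); }
    h2 { color: var(--heading-color); }
    h3 { color: var(--heading-color); }
    ```

    To change all three headings, you edit the one --heading-color definition; there’s no chance of mistyping the value in one of the three styles.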

Other features could also be described as supporting single-sourcing: the Easy Sync feature for re-importing Word and FrameMaker files, responsive output and responsive layouts, the Clean XHTML target, the Remove MadCap Styles option in the Target Editor, and more. I’ll discuss all of these features in more detail in upcoming posts.

Thursday, June 27, 2019

GUI Information 4.0 Tools: A Proposed Feature Set – Comments - Final Version

This post appeared in the Autumn 2019 edition of the ISTC Communicator.

In my article about a proposed feature set for GUI Information 4.0 authoring tools in the Summer 2019 issue of Communicator, I requested comments and feedback on the subject. I present those comments in this follow-up article as I received them, with no editing except to shorten a few due to space limits. The comments appear below in roughly the order in which I received them. I prefaced each comment, or set of comments, with the author’s name and Twitter handle in bold. (If you’re struck by the seeming lack of organization of the comments, remember that this is a tweet-stream.)

I also added a few comments of my own in response to a phone conversation with one of the commenters. Here, with no further discussion, are the comments.

The Comments

Cruce Saunders @mrcruce

What should next-generation authoring look like now that we have 1,000s of permutations of media, content-types, browsers, channels, contexts, & formats? A difficult & valiant question to try to answer! Added some thoughts to a recent article. A thread for further comment.

Authoring rarely ever happened consistently in one GUI, even for small companies. In an enterprise, authoring is the single most diverse environment within content lifecycle process and technology. Content can be acquired in dozens of ways in a single department!

We should never assume an ability to conform large populations to a single GUI authoring platform. What typically happens in such enforcement scenarios: “cheating”. No GUI, especially one that wants to be so feature-rich, ever meets everyone’s needs.

So content gets built elsewhere and then PASTED into the GUI (often by someone else), where gets further manipulated. And one hopes, enriched with metadata. Or, publishing systems just get built around the GUI for various authoring groups that decide not to use it.

And the well-intentioned standard authoring regime falls into a chaotic mess of manual content transforms with no accountability or traceability. Most enterprises today live in some form of this mess.

Even when some smaller silos create some most consistent coherence (e.g. #techcomm), none of the related content sets are compatible. The answer, [A] believes, lies in aligning structural & semantic standard patterns across disparate authoring, management, & publishing systems.

All that being said, we do need to advance the state of GUI authoring. Vendors are working on this in product roadmaps. The biggest area of interest to me is essentially today’s attempts at “What You See Is Semantically-Markup Up Content”.

GUIs that *as the author types* suggest semantic associations derived from an organizationally-standardized taxonomy or ontology provider. This is effortless and invisible...machine-prompted, author-empowering.

The same sort of in-context editing, coupled with machine intelligence, can also help to prompt additional annotation useful for content targeting.

Another area of interest are GUIs in which a “sidecar” toolbar powered by artificial intelligence provides authors with in-context structured snippets for reuse and inclusion, based on the content of the material being authored.

Or, the sidecar suggests portions of text that might be reused by others. And providers authors the ability to apply metadata or discussions to individual snippets, or molecules, of content. Of course, these sidecar tools can be made to perform MANY other functions.

In my view, any vendor authoring product, and any related interface, needs to embrace schema application & portability to matter long-term. Companies desperately need to be able to move content around. But this is not possible without schema alignment across systems.

And that is impossible without authoring interfaces that incorporate a structural schema. I’d like to see more friendly blank-canvas interfaces (‘Word-like’) that incorporate an ability apply and manage schema-driven templates, beyond just standardizing styles.

We can see many attempts at schema-based GUI authoring, especially in the plugin market, where Word-to-DITA has been something pursued for some years.

One of the biggest areas of need, and most challenging, is the development of graphical user interfaces that support multiple variations of the same content within a single authoring process.

Personalization based on user type and state, and device or environment states, is something that many authoring processes need. And as we feed our customer experiences with ever-more contextual data, authoring for human or machine-meditated variation becomes essential.

The good news is this has also been pursued for some years, and the heuristics have been explored in multiple production environments — mostly in Customer Experience Management #cem platforms.

But there's plenty of room for innovation here, because "variation authoring" interfaces have not yet been perfected or mass-adopted. It's still a blue ocean space and vendors can distinguish themselves here.

There’s more to say, and much more to discuss, but the future of authoring is a very deep rabbit hole. And a worthy exploration. Take a look at more ideas from Neil Perlin (@NeilEric) in @ISTC_org Communicator or via the #info40 blog post here:


James Mathewson @Mathewson_CS

The challenge is context. Content is only meaningful to the degree that it is relevant in context. How do you build an authoring system that helps writers grasp digital contextual cues and write relevant content using those cues? Modular content grows this problem exponentially.

Scott Abel @scottabel

Maybe our efforts would be better spent getting corporate leaders (those afraid of being displaced by disruptive innovators) to understand the need to become information-enabled. Authoring tools are created (and updated) in response to demand. The demand is simply not there — yet

Neil Perlin (in response to Scott Abel’s point above)

A fair point. However, in the early days of help and the web, GUI tool development went on - often in odd or even wrong directions - even as the technology was spreading. Better IMO to become information enabled AND create the tools for doing so at the same time.

Cruce Saunders


The sea change is coming. Both customers and vendors are driving the evolution. One hand washes the other. Celebrate the innovators, wherever they sit.

Mike Atherton @MikeAtherton

+1 for context and structure. Something akin to a headless CMS is a good start, but rather than a bare bones experience, illustrative device and platform-specific templating to show authors how their work may appear.

And more importantly, since we're moving from a centralised publishing environment to distributed 3pp (AMP, Instant Article, other API) then explicit support and guidance ('recipes' if you will) from platform owners.

Aaaand a new mental model. The print analogy refuses to die and doesn't help separate content from presentation. A better analogy might be radio waves.

Neil Perlin (in response to Mike Atherton’s previous point)

I'll bite. Why radio waves?

Mike Atherton (in response to Neil Perlin’s point above)

Because the information transmitted is intangible, device-agnostic and everywhere at once. And because the same technology can emit frequencies designed for humans and frequencies designed for machines. I didn't say it was perfect :)

Cruce Saunders (in response to Mike Atherton’s point above)

Mike's 'radio waves' is similar to how I see content. Anything that can be available in multiple states, places, usages at one time is very different than tangible one-time published artifacts. It's 'information energy'. ;) But it's more durable even. So, we do need new frames.

Real device, type, user, context agnostic contextual preview or simulation is a holy grail. Even think it should be source agnostic. I actually believe there's an entire missing product category here. Rendering simulation & collab is something more than just another feature.

Mike Atherton

It's not even about being WYSIWYG 2.0 (i made that up), but what's missing from the structured content rhetoric is solid criteria for *how and why* to make specific structural choices. Bringing home context of use may help.

Actually @eaton

I think "next-generation authoring" has to assume that beyond highly data-driven fill-out-the-form stuff that CMS devs have already (kind of) solved… content will end up consisting of 1) Narratives, 2) Components, and 3) Assemblies/Aggregates…

…And also has to assume that workflow/responsibility for each of those modalities will require different tooling. You talk a little about this downthread but I think there's too much attention paid to UI and not enough to contextualied UX in the content editing/mgmt space

Then the big mind-blowing piece is that a huge percentage of what we would call "narrative" is spread across multiple pages/screens/artifacts for final delivery. Some of the journey/experience management stuff starts touching on that, but…

Mark Demeny @mde_sitecore


Great thread and summary from @NeilEric as well. It's a hard one to resolve (esp. over Twitter). Even putting aside the harder questions of content lifecycle, reuse, transformations for specific channels, etc. you get into questions of appropriate tools and interfaces very early.

You'll often hear "I wish my simple to use CMS was better at structured/headless content" similarly, you'll hear the opposite complaint of vendors that have a bias toward structured content but sacrifice page layout or authoring experience.

As I see it, there are 3 fundamental conflicts with content lifecycle; - Distributed vs. Centralized (with tools, author roles, team, geo etc.) - Structured vs. channel-specific - Creation agility vs. reuse (via better findability, analytics, etc. - more lean to the former)

And personalized/contextual content is a problem *layered across all of these*. It could be that a specific region, or an analytics team is responsible for acting on that - so I see that as not a distinct problem, but related to and complicated by the existing conflicts.

Jan Benedictus @JanBenedictus


Structured Content Authoring, Component Based Authoring etc. are often mentioned - by leaders ; but “what problem do we solve” is not articulated. We have to go from “strategic talk” to Tangible Benefits to explain Why. Today we are at @DrugInfoAssn to do so for Pharma #dia2019

Ray Gallon @RayGallon


Check out #nemetics as another vision close to these ideas. cc @toughLoveforx @ddrrnt

Two Additional Points of My Own

Mark Demeny noted correctly that I gave scant coverage to issues of governance, workflow, and sign-off control.

I’ll add that I barely mentioned the effect of Information 4.0 on technical communicators. The increased technical and management complexity may drive some of today’s practitioners out of the field. That’s been predicted with every new technology and, to a degree, has come true, but most practitioners adapt. What’s different about Information 4.0 is that even its base level of technical and management complexity is far higher than that of earlier disruptive technologies like word processing in the 1980s and the web and online help in the 1990s.

Summary

The comments section may seem rambling because it largely matches the structure of the comments and responses. But I left it that way to show the wide range of thought about the technical, structural, management, and even philosophical issues. Once this article appears in Communicator, I’ll add it to the Information 4.0 Consortium blog and the Hyper/Word Services blog, and will add more posts as I get more comments.

So, now what? Is there a next step or has this just been an interesting discussion? That will have to be the subject of more discussion by members of the Information 4.0 Consortium. Stay tuned.

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA, USA. He has four decades of experience in technical writing, with 34 of those years spent in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps, and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been working in mobile since 1998 and XML since 2000, has been speaking and writing about Information 4.0 since 2017, and is a member of the Information 4.0 Board.

Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and ViziApps-certified for the ViziApps Studio mobile app development platform. He is a popular conference speaker, most recently at MadWorld 2019 in San Diego, CA. Neil founded and managed the Bleeding Edge track at the STC Summit and was a long-time columnist for ISTC Communicator, STC Intercom, IEEE publications, and others. You can reach him at nperlin@nperlin.cnc.net.