Monday, September 17, 2018

A Comment About Flare's Dropdown Link Type


On July 23, I wrote a post in the Hyper/Word Services blog on “A Review of MadCap Flare’s Link Types” at http://hyperword.blogspot.com/. This post was repeated in MadCap’s blog at https://www.madcapsoftware.com/blog/2018/08/30/navigation-best-practices-guide-link-types-madcap-flare/. In the Dropdown Drawbacks section of that post, I stated: “None, in my opinion. However, I’d be interested to hear competing opinions.”

In response, Jana Vacková of ABRA Software in Praha (Prague) wrote:

We might know about one 🙂 – so if you are interested in it, here might be one. (I fixed a few spelling and grammatical errors but otherwise left the response as is.)

The opened dropdown menu has problem to flow around the side menu and to use all possible width of the page. Let me elaborate it more:

- In our project (= ERP SW online help) we use dropdowns very often – as we found it very cool 🙂 for making the long contents more clearly arranged.

- We use TopNavigation skin with TOC menu at the side of the screen (side menu).

- When the section (of our help) is complicated (as our SW is really huge, complex and complicated 🙂) the side menu (with TOC) has a lot of items and is “long”.

- So, if the topic inside in this section has dropdowns on the top (in general placed horizontally in the area where the side menu is) and user clicks on one of these dropdowns, the dropdown opens but the width of its body is limited (thanks to side menu). And is limited up to the end of body content (although the body is much longer than the side menu itself). When the body itself is long there is a lot of unused space on the screen and user must scroll down more. If there are wide tables with many columns in such dropdown body, there is a problem 🙁.

Have a look below – I mean the unused area under the side menu:




It could be perfect if the width of dropdown-body could adapt the free space. So, at the bottom of side menu the text would start to flow round.

P.S. We have contacted MC Flare support but they haven’t advised any solution how to adapt this behavior of the width of opened dropdown body in combination with side menu.

Has anyone else encountered this problem and found a solution?
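For discussion purposes, here is the general idea in CSS. The wrap-around behavior Jana describes is what a floated layout produces: if the side menu is floated instead of occupying a fixed-width column, content that extends past the bottom of the menu reclaims the full page width. This is only a sketch – the class names are hypothetical placeholders, not the classes that Flare’s TopNav skin actually generates – so treat it as a direction to experiment with, not a tested fix.

    /* Hypothetical selectors – inspect the generated TopNav page in a
       browser and substitute the real class names before trying this. */
    .side-menu
    {
        float: left;      /* float the menu instead of placing it in a fixed column */
        width: 300px;
    }

    .topic-body
    {
        margin-left: 0;   /* no reserved column: text runs beside the floated menu
                             and wraps to full width below it, including content
                             inside an opened dropdown body */
    }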



Monday, September 10, 2018

Word processing through the ages

This article was originally published in ISTC Communicator, Autumn 2018 Supplement.

Neil Perlin looks at the impact word processing has had on technical communication and his career.

In February 1979, I was hired by a computer company called Digital Equipment Corporation to write the user manual for a general ledger accounting package. I have an MBA in accounting and operations management – mathematical process control – from Boston University, so I knew how a general ledger worked.

I wrote the manual by hand, 400 pages, using pencil and paper. We didn’t have word processors, and all typing was done on typewriters by ‘the girls’ in the typing group. (Stay with me…)

I sent the finished manual out for review. Four of my reviewers said I’d gotten it wrong – a general ledger didn’t work the way I described. What they said ran counter to what I’d learned in my MBA program but I assumed that a big computer company would have gotten a waiver on the standard. (I was very young and innocent then…) No word from the fifth reviewer.

So, I threw out the 400 hand-written pages and wrote a new 400-page manual. By hand. Pencil and paper.

When I finished, I sent it out for review. The four reviewers blessed it. However, the fifth, who had been on vacation during the first review pass, called and spent five minutes giving me an epic chewing out.

When he finished, I explained what happened. After he finally stopped laughing, he said ‘Tell me what you wrote the first time.’

I did, and he said ‘That was exactly right. The other reviewers don’t understand accounting. Go ahead and rewrite what you wrote the first time.’ Which I did. By hand. Pencil and paper.

So, I ultimately wrote 1200 pages – by hand, pencil and paper – to get 400, with the last 400 saying the same thing as the first 400.

Many technical communicators from that era have similar ludicrous stories.

The appearance of word processing changed technical communication forever. Stories like mine became things of the past. Things like ‘paste-up’ and ‘carbons’ vanished into history. In this article, I’ll look at how word processing came to be and end with some thoughts about where it may be going.

History


Word processing dates to Gutenberg and movable type. But for this article, I’ll start with the electronic version.

According to a Computer Nostalgia article [1], the first units functionally recognisable as word processors appeared in 1964 with IBM’s introduction of the MT/ST (Magnetic Tape/Selectric Typewriter), which added magnetic tape storage to a standard IBM Selectric. Users could store, edit, re-use, and even share documents. But it was still a typewriter – no screen.

People also did word processing on mainframe computers with time-shared terminals. To get a sense of this, see the first page of ‘Word Processing on the Mainframe Computer’, written in 1984 by Sue Varnon [2].

The first units with screens – recognisable as modern word-processors – debuted in the early 1970s from companies like Lexitron and Vydec. Wang Laboratories’ CRT-based word processing system, introduced in 1976, became the standard and made Wang the dominant player in the word processing market. These systems were crude compared to today’s. Most had no navigation keys and instead used the e/s/d/x keys on the keyboard. They had no function keys for attributes like boldfacing, which was done by pressing key combinations at the beginning and end of the text to be emboldened. There were no options for fonts or the other things that we take for granted today.

WYSIWYG displays didn’t exist. Monitors showed text using the system’s default font. Formatting was done by inserting control characters. There’s debate as to when WYSIWYG appeared – some claim that the early Apple Macintosh with a bitmapped display made it possible. Others claim that true WYSIWYG wasn’t possible until laser printers became affordable and small enough to fit on a desk, letting you see on screen exactly what would print.

Still, these early systems offered the kernel of what we expect in word processors today.

At the time, the term ‘word processor’ referred to dedicated machines rather than software running on general-purpose PCs. The general-purpose PCs we use today were just emerging, but once they did, the dedicated machines were doomed. Wang went through internal turmoil due to changing markets, management, and strategy and filed for bankruptcy in 1992. (A fragment of the company survived until 2014.) Other companies like Lexitron, Lanier, and Vydec disappeared so thoroughly that Google searches return only fragmentary mentions.

To put this in perspective – and for an interesting bit of cultural sociology (note the reference to ‘a girl’ in the excerpt below) – consider this piece of history from the Computer Nostalgia article [1]:

The New York Times, reporting on a 1971 business equipment trade show, said:

The ‘buzz word’ for this year's show was ‘word processing’, or the use of electronic equipment, such as typewriters; procedures and trained personnel to maximize office efficiency. At the IBM exhibition a girl typed on an electronic typewriter. The copy was received on a magnetic tape cassette which accepted corrections, deletions, and additions and then produced a perfect letter for the boss's signature....

These pioneers were replaced by software with almost legendary names – MacWrite, Lotus AmiPro and Manuscript, PC-Write, Electric Pencil, VolksWrite, MultiMate, PeachText, XyWrite, and three that will be more familiar – WordStar, WordPerfect, and Word.

WordStar was the leading application in the early 1980s when CP/M and MS-DOS were competitors. But changes in technology and interface, along with customer service issues, made it falter. WordPerfect took its place as the leading word-processor in the 1980s. But problems with a release for Microsoft Windows gave Microsoft an entrée into the market with Word. Between a smoother introduction and bundling deals that led to Microsoft Office, Word took the lead in the 1990s and has not looked back.

Results


What has this evolution wrought?

  • Word processing has changed how we write, for the worse according to some literary critics. See ‘Has Microsoft Word affected the way we work?’ by John Naughton in the January 14, 2012 issue of The Guardian [3] and ‘How Technology Has Changed the Way Authors Write’ by Matthew Kirschenbaum in the July 26, 2016 issue of The New Republic [4].

    Personally, I agree that it has changed how I work – for the better. With a typewriter, changing the material was difficult, often involving White-Out or perhaps even pulling the page out and re-typing it entirely. This made it easy to lose my train of thought. With a word-processor, I can write material, modify it as I go, and easily revert to a previous version. And I can try different wordings to see which is clearer or gives a better readability score. So, overall, and especially after my general ledger user manual fiasco in 1979, I could never give up my word-processor.
  • WYSIWYG authoring is useful but there are periodic arguments about whether it leads authors to focus on formatting content rather than on writing it – appearance over substance. Here’s one example, ‘Word Processors: Stupid and Inefficient’ by Allin Cottrell [5].

    Personally, I agree with some of his positions but I think word processing as it currently exists is too entrenched to change in the near future. Also, and interestingly, Cottrell’s position ties in well with the emerging need for content in HTML or XHTML that has no format of its own but that can use multiple stylesheets for single sourcing.

  • WYSIWYG authoring, plus the ability to insert and position graphics electronically, has sharply reduced the role of the graphic designer. That’s not to say that a graphic designer couldn’t do a better job, just that graphic designers are no longer seen as necessary.

  • Authoring support tools like spell-checkers and readability analysers in word-processors sharply reduced the role of editors. (When I was at Digital Equipment Corp in 1982, there were, as I recall, about 20 writers supported by a formal editorial group. Today, I’m surprised and pleased if one of my clients has even one editor on staff.)

  • Many managers wanted computers in their offices because computers were cool, but didn’t want to actually use them because typing was considered to be secretarial work. So, some unsung marketing genius coined the term ‘keyboarding’ instead.

  • Typing pools were almost entirely female because management viewed typing as a secretarial function. The advent of word processing caused debate about whether it would perpetuate the typing pool as a so-called ‘pink ghetto’ or open new avenues for advancement for women. My experience from Digital Equipment was the latter. One woman who started as a typist became one of the coordinators of the company’s export control compliance programme.

  • The culture of technical writing changed. In 1980, my department got two word-processors for the writers to share. Soon after, the manager told me that he had offered jobs to two writers, both of whom turned him down on the grounds that 'technical writers don't use computers'.

  • In the same vein, one of the greatest presentations in the STC conference’s Beyond the Bleeding Edge stem, which I started in 1999 and managed until it ended in 2014, was a retrospective look at changes in writing culture by a speaker who showed a video of a presentation he gave in 1980 entitled ‘Why Technical Writers Should Be Allowed to Use Computers’. It’s one of the funniest but most meaningful presentations I’ve ever seen at a conference. (Why meaningful? Because it examined a huge technical and philosophical shift in technical communication. Why funny? Because, almost on cue, the older attendees looked at each other and said “I remember those days!” while the younger attendees looked at each other and said “No word processors? No way!”)
  • Users of word-processors, primarily Word these days, break all kinds of rules to make sure the document prints well. But these users rarely consider that their documents may have to be converted to HTML or XHTML for use online. So breaking the rules – often using local formatting rather than styles – seemed to have no downside but now causes frequent problems.

  • Related to the prior point, management tends to view word-processors as akin to typewriters and thus doesn’t train the users on how to use the tool effectively and correctly. The result is usually chaos.

The Future?


Will today’s word processing powerhouses eventually go extinct? Word processing is so embedded in business and technical communication that it’s hard to imagine, but many once-dominant tools and companies have vanished.

I can think of two things that might change the future of word processing:

  • It’s been said of Word that most people use 5% of its features. The problem is that each person uses a different 5%. So, an interface that users can easily customise, without a consultant to do so, would be a big help.

  • Eliminating typing. A speech-to-text interface, an Alexa of word-processors, may be possible in the future. But the system will have to be smart enough to recognise and remove all the throat clearing and the ‘like’ and ‘you know’. And each person’s voice is different, so the system will need a lot of training. And AI might be needed to help the system understand when to emphasise a word without the author having to tell it to do so and breaking their train of thought.


And the need for word processing as we know it might disappear. An article called ‘Getting The Next Word In’ by Ernie Smith [6] from 2016 makes some interesting philosophical points. “The reasons we have traditionally used word processors has slowly been eroded away,” he explained. “LinkedIn is replacing the resume, GitHub is replacing documentation, and blogging (and respective tools) have chipped into journalism. Even documents that are meant to be printed are largely being standardised and automated. Most letters in your physical mailbox today are probably from some bank that generated and printed it without touching Word.”

Perhaps the best indicator of how thoroughly word processing has penetrated the world, especially that of technical communication, is the fact that it’s taken for granted except when we complain about some feature of Word. The wonder that it evoked in 1971 is long gone. And that’s a sign of success.

References


1. Computer Nostalgia (no date) ‘Computer History. Tracing the History of the Computer – History of Word Processors’ www.computernostalgia.net/articles/HistoryofWordProcessors.htm (accessed July 2018)
2. Varnon S (1984) ‘Word Processing on the Mainframe Computer’ The Journal of Data Education, Volume 24, 1984 – Issue 2 www.tandfonline.com/doi/abs/10.1080/00220310.1984.11646292 (accessed July 2018)
3. Naughton J (2012) ‘Has Microsoft Word affected the way we work?’ The Guardian www.theguardian.com/technology/2012/jan/15/microsoft-word-processing-literature-naughton (accessed July 2018)
4. Kirschenbaum M (2016) ‘How Technology Has Changed the Way Authors Write’ The New Republic https://newrepublic.com/article/135515/technology-changed-way-authors-write (accessed July 2018)
5. Cottrell A (1999) ‘Word Processors: Stupid and Inefficient’ http://ricardo.ecn.wfu.edu/~cottrell/wp.html (accessed July 2018)
6. Smith E (2016) ‘Getting The Next Word In’ Tedium https://tedium.co/2016/10/04/word-processors-future (accessed July 2018)


Related reading


Ashworth M (2017) 'The death of sub-editing' Communicator, Spring 2017: 14-17

Dawson H (2017) 'Industrial revolution in Fleet Street' Communicator, Summer 2017: 26-29

Glossary


AI. AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. 
https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence

Alexa. Alexa is a virtual digital assistant developed by Amazon for its Amazon Echo and Echo Dot line of computing devices.
https://www.webopedia.com/TERM/A/alexa.html

Carbon copy. A carbon copy (or carbons) was the under-copy of a document created when carbon paper was placed between the original and the under-copy during the production of a document. In email, the abbreviation CC indicates those who are to receive a copy of a message addressed primarily to another (CC is the abbreviation of carbon copy).
https://en.wikipedia.org/wiki/Carbon_copy

CP/M. CP/M, which originally stood for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc.
https://en.wikipedia.org/wiki/CP/M

GitHub. GitHub is a web-based version-control and collaboration platform for software developers.
https://searchitoperations.techtarget.com/definition/GitHub
https://github.com

HTML. Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications.
https://en.wikipedia.org/wiki/HTML

Keyboarding. To enter data by means of a keyboard.

MS-DOS. MS-DOS (Microsoft Disk Operating System) was the main operating system for IBM PC compatible personal computers during the 1980s and the early 1990s.
https://en.wikipedia.org/wiki/MS-DOS

Paste-up. A document prepared for copying or printing by combining and pasting various sections on a backing.
https://en.wikipedia.org/wiki/Paste_up

Pink ghetto. ‘Pink ghetto’ is a term used to refer to jobs dominated by women. The term was coined in 1983 to describe the limits women have in furthering their careers, since the jobs are often dead-end, stressful and underpaid.
https://en.wikipedia.org/wiki/Pink-collar_worker#Pink_ghetto

White-Out. White-out is a correction fluid. It is an opaque, usually white, fluid applied to paper to mask errors in text. Once dried, it can be written over. It is typically packaged in small bottles, and the lid has an attached brush (or a triangular piece of foam) which dips into the bottle. The brush is used to apply the fluid onto the paper. In the UK, ‘Tipp-Ex’ is used more commonly.
https://en.wikipedia.org/wiki/Correction_fluid

WYSIWYG. WYSIWYG is an acronym for ‘what you see is what you get’.
https://en.wikipedia.org/wiki/WYSIWYG

XHTML. Extensible Hypertext Markup Language (XHTML) is part of the family of XML markup languages. 
https://en.wikipedia.org/wiki/XHTML

Thursday, September 6, 2018

Is Single-Sourcing Dead?

Matthew Dorma of Calgary, AB pointed me to a post by Mark Baker (https://everypageispageone.com/) entitled “Time to move to multi-sourcing” (https://everypageispageone.com/2018/04/06/time-to-move-to-multi-sourcing/). I discussed that post with several people and have thought for a while about the implicit question it poses – is single-sourcing dead? This post is the result.


NOTE: The first part of the post – Is Single-Sourcing Dead? – discusses problems with single-sourcing in general. The second part – Alternatives to Traditional Single-Sourcing? – briefly addresses the specific points that Baker raises in his original post.

Is single-sourcing dead?

In my opinion, no. The single-sourcing concept – write once, re-use many times in many ways and many places – has some problems. But the basic concept is so useful that I see nothing that can replace it yet.

What are those problems? Can they be fixed, and how?

  • Inappropriate tools – Today, single-sourcing is based on using tools like MadCap Flare (note that I do a lot of training and consulting for MadCap), or standards like DITA. However, companies aren’t going to buy those tools for every employee who has to write something; the cost is too high. Instead, most of those employees will use Microsoft Word because companies see Word as being free. And employees can use Word to create print and PDF output – technically, single-sourcing but lacking the flexibility and output options of full-power single-sourcing.

The result? Trying to do single-sourcing using the wrong tools.

The solution? Simple in theory. Identify which employees create what material that has to be single-sourced and buy them the appropriate tools.

  • Inappropriate training – In the early days of word-processing, specially-trained operators did the work. Today, employees get a copy of Word and are largely on their own to figure it out – with no training. The results are often inconsistent, with ugly code, but no one cares as long as the document looks good when printed. But if that document has to be imported into a single-sourcing tool like Flare, ugly code often causes problems that few authors know how to avoid or fix. Authors need at least some training in how to use Word but few companies offer it.

The same is true in single-sourcing. Authors may have the right tool but are often not trained on how to use it, or on the concepts of single-sourcing. I often meet Flare authors who were given the tool and told to figure it out on their own. Sometimes the results are surprisingly impressive but often the authors are just terribly frustrated.

The result? The best tools are often worthless if authors don’t know how to use them.

The solution? Obvious. Train the authors on the tools. And for subject matter experts upstream who use Word et al to provide content to the single-sourcing authors, provide at least minimal training and support in how to use their tools. How minimal? Two examples…

    ◦ A client in Austin, TX whose authors used Word asked me what those “styles” were. I explained what they were and how to create and use them. The client was ecstatic at the amount of work they could save. From a five-minute discussion…

    ◦ A client in Connecticut was having trouble getting their authors to create consistently-structured material. They had defined a structural standard but the authors deviated from it constantly. I explained how to create topic templates that could be added to their authoring tool’s interface. The client’s employees spent about an hour at a white board laying out a template which I then turned into an electronic one and added to the tool interface in about five minutes.

  • Inappropriate standards – People often have no standards to follow when it comes to using their tools – no templates for different types of material, or style usage standards, for example.

The result? People do whatever provides the result they want, even if that causes trouble down the road when it’s time to import the material into a single-sourcing tool or output to a new format.

The solution? Surprisingly simple. Identify authors’ pain points and create standards for them. Better still, embed the standards into the authoring tools as much as possible to make their use automatic. For example, create topic-type templates with embedded prompts – “type the list of tools here” – to guide authors as they write. Or create a stylesheet with clear style names and make it the project’s master stylesheet so that it will be applied automatically to every topic.
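As a purely illustrative example of what “clear style names” can look like in practice, a fragment of such a stylesheet might read as follows – the class names and formatting values here are hypothetical, not a prescribed standard:

    /* Purpose-named styles tell authors exactly where each style belongs. */
    p.StepIntro     { font-weight: bold; }    /* sentence that introduces a numbered procedure */
    p.Prerequisite  { font-style: italic; }   /* tools or conditions needed before starting */
    span.UILabel    { font-weight: bold; }    /* names of buttons, fields, and menu items */
    p.WarningNote
    {
        border-left: 3px solid #cc0000;       /* safety or data-loss warnings */
        padding-left: 8px;
    }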

Adding standards is surprisingly straightforward. What’s harder is getting authors to use them. That will take training and time and perhaps some management muscle to insist that using the standards is a requirement, but that’s not a new task.

  • Increasing complexity – Single-sourcing requires many tasks beyond just writing the content. Authors have to decide which output is primary in order to decide which features to use because some won’t work well or at all on different outputs. That means understanding those features. Authors have to create and assign conditions to control which content to use for which output. Define re-usable chunks of content. Create style sheets that behave differently depending on the output. Perhaps define microcontent. And more. And this all must be documented somewhere for reference by the current authors and later ones.

The result? The increasing power of our tools and increasing customer demands are leading to increasingly complex projects that can easily go out of control.

The solution? Again, simple. Document your project. (See my book “Writing Effective Online Content Project Specifications”, available on Amazon, for my suggestions on how to document your projects and what can happen if you don’t.)

  • Lack of motivation on authors’ parts – Single-sourcing isn’t on most authors’ radar so they have no reason to move from the tools and workflows they know to something new to support some vague goal of single-sourcing.

The result? Authors type their content and make sure it prints well and that’s that.

The solution? Several parts. First, make single-sourcing a job requirement. Second, and crucially, explain why single-sourcing is important to the company and show how it can solve authors’ problems. Without that, authors will do the bare minimum needed to meet the single-sourcing requirement and even skimp on that unless there’s management oversight.

Alternatives to Traditional Single-Sourcing?

What about the “shared pipes” (from Sarah O’Keefe) and “multi-source” (from Alan Porter) models that Baker describes? Each seems to fix some problems of single-sourcing. However, each one has to add a complex black box in the center of the process, where the conversion and coding is done. In my view, the more that this conversion and coding can be pushed back upstream to the individual authors by giving them templates, style sheets, and other tools and leaving the black box central processor to the tool vendor, the easier life will be. No need for a dedicated IT person managing and maintaining a proprietary system that, in my experience, languishes after its initial champions have moved on.

What about the “subject-domain” model that Baker describes? In my view, this model can be handled by creating information-type templates for authors to use. We generally think of templates as specific to types of information/topics, but there’s no reason why templates can’t be applied to specific domains of information as well.

Summary

Single-sourcing isn’t perfect. No authoring model is. But it’s worked well for years and its problems seem to have straightforward solutions. Try those before throwing the single-sourcing baby out with the bath water.

Monday, July 23, 2018

A Review of MadCap Flare’s Link Types


Flare offers a wide variety of link types. Some, like hyperlinks and popups, are common and easy to understand. Others, like cross-references, dropdowns, expanding links, and togglers, may be unfamiliar. In this post, I’ll look at all of these link types and discuss how to create them, how to customize them through the CSS, their uses, and some of their design implications.

Hyperlinks

Hyperlinks are the standard “jump” link. Clicking on a hyperlink takes users from the starting topic to the target topic. A target topic usually replaces the starting topic in the browser window but you can make the target topic open in a second browser window, resulting in two windows open on the screen (and perhaps many windows if users leave each “secondary” window open after looking at its topic).

Hyperlinks are very flexible; you can create all the types shown in the Insert Hyperlink dialog box’s Link To dropdown, below.


How to Insert

To insert a hyperlink, highlight the link text and either:
  • Select Insert > Hyperlink (in the Links group on the Insert ribbon) or
  • Click the Insert a Hyperlink icon on the XML editor toolbar or
  • Press Control/K

Customization Through the CSS

Unvisited hyperlinks typically display in a dark blue, underlined font. You can change these properties by editing the “a” style in the Stylesheet Editor. (What does the “a” style have to do with a hyperlink? The story (which I haven’t verified) is that when Tim Berners-Lee created HTML, he decided that a link was a connection between associated pieces of content. Ergo, “a” for associated.)
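For example, a small change along these lines (the color values are just placeholders) restyles every hyperlink in the project:

    /* Restyle standard hyperlinks project-wide. */
    a:link
    {
        color: #005a9c;             /* unvisited links */
        text-decoration: underline;
    }

    a:visited
    {
        color: #551a8b;             /* visited links */
    }

    a:hover
    {
        text-decoration: none;      /* feedback on mouse-over */
    }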

Hyperlink Drawbacks

Two drawbacks.

  • A hyperlink “knows” that it points to the URL of its target topic but doesn’t know what that topic is or what to do if the title of the target topic changes. This can cause surprising problems.

    Let’s say you hyperlink the word “Frappe” in topic A and point it to a target topic called Frappe. If you rename the target topic Milkshake, the link still works but it looks wrong – clicking on the Frappe link takes users to the Milkshake topic. Your users might assume that milkshake and frappe are the same thing but they’re more likely to assume that the link is bad. This kind of thing makes maintenance difficult because you’ll have to search the project for each use of “Frappe” in a hyperlink pointing to “Milkshake” and change “Frappe” to “Milkshake”. It’s easy to do but it’s one more thing to worry about, and you’ll have to do multiple search and replace runs to look for cases where you misspelled “Frappe”.
  •  A hyperlink uses a link format – click on the link to jump to the target topic. However, if you’re single sourcing out to a print target like PDF, the link obviously won’t work if users print the topic. They’ll have to look for the topic in the index, if there is one, or the table of contents, or flip through the pages. In other words, usability declines.

Cross-references, or xrefs, solve both of these problems.

Cross-References

A cross-reference, or xref, does the same thing as a hyperlink. Clicking it jumps users to the target topic. Xrefs are less flexible than hyperlinks – you can only use them in two cases as shown in the Insert Cross-Reference dialog box’s Link To dropdown, below.


Although xrefs are less flexible than hyperlinks, they solve both drawbacks of hyperlinks.

  • An xref “knows” the title of its target topic. If an xref links to the Frappe topic and you rename the target topic Milkshake, the xref’s wording changes automatically – effectively automating part of your maintenance work. (Flare automatically changes the wording when you generate the target. If you want to change the xref’s wording before you generate the target, click Tools > Update Cross-References.)
  • A side benefit of the xref’s “knowing” the name of the target topic is that you don’t have to type and select the link text in order to create the link as you do with a hyperlink. Instead, when you select the target topic for your xref, the xref automatically uses the topic’s title as the link text.
  • An xref “knows” when it’s being used in an online or print target. If you generate a print target, Flare automatically changes the xref’s format from a link style (“…information about Frappes…”) to a page reference style (“…information about Frappes, see page 55”).

How to Insert

To insert a cross-reference, either:
  • Select Insert > Cross-Reference (in the Links group on the Insert ribbon) or
  • Click the Insert a Cross-Reference icon on the XML editor toolbar or
  • Press Control/Shift/R

Customization Through the CSS

  • You can change the formatting of your xrefs by editing the MadCap | xref style in the Stylesheet Editor.
  • You can also see how Flare changes an xref’s format for online vs. print targets by selecting the mc-format property in the Unclassified group in the Stylesheet Editor and changing the Medium (on the Stylesheet Editor’s toolbar) from Default to Print.
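In the stylesheet itself, those settings are just properties on the MadCap|xref selector, with the print behavior stored in the print medium. Here’s a sketch of what that might look like – I’m using the common {paratext} and {page} format commands, but check Flare’s documentation for the full list before copying this:

    /* Online: use the target topic's paragraph text as the link text. */
    MadCap|xref
    {
        mc-format: '{paratext}';
        color: #005a9c;
    }

    /* Print medium: switch to a page-reference format. */
    @media print
    {
        MadCap|xref
        {
            mc-format: '{paratext}, see page {page}';
            color: #000000;
            text-decoration: none;
        }
    }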

Xref Drawbacks

You can use xrefs to create links between two topics in a given target but not between topics in different targets or out to external targets like PDFs or web pages. But since most links in a target usually go from one topic to another, this still leaves a lot of places to use xrefs.

Hyperlink and Xref Drawbacks

Hyperlinks and xrefs share a common drawback when used in task description topics. Clicking on the link takes users out of the original topic and out to topic B. A link in topic B might then take users to topic C, and so on. This makes sense but it breaks the flow of the material. Users who jump from topic A, the original topic, to B to C and so on can lose track of where they are in the steps.
What’s needed are links that access related content without taking users out of the primary topic. That’s where the remaining link types come in.

Popups

A popup keeps users in the starting topic and displays the target topic in a window that opens on top of the starting topic.

NOTE: There's a bug that’s preventing a popup from displaying correctly so I don’t have an example as I write this.

Using a popup solves the problem of users linking out of a task description topic to another topic, then having to find their way back to the original topic and regain their focus.

When to Use Popups

  • To display short glossary definitions within the context of a topic or to display a quick piece of information, such as the phone number for tech support.
  • To display interim steps in a larger procedure. For example, assume that step 1 in a process says to do X, followed by step 2 that says to do Y, and so on. If the users know how to do each of those steps, they can simply proceed. However, if users don’t know how to do task Y, they’re stuck. You could provide a link out to another topic that explains how to do task Y but the users have now lost their train of thought. With a popup, they can click on a link that pops open a window that explains how to do task Y but keeps them in the primary topic.

But there are several drawbacks to popups, as discussed below.

How to Insert

To insert a popup, highlight the link text and either:
  • Select Insert > Hyperlink > Topic Popup (in the Links group on the Insert ribbon) or
  • Click the Insert a Hyperlink icon on the XML editor toolbar and, when the Insert Hyperlink window opens, click the Target Frame field pulldown and select Popup Window or
  • Press Control/K and, when the Insert Hyperlink window opens, click the Target Frame field pulldown and select Popup Window 
Note that you can also insert a popup by using the Cross-Reference feature.

Customization Through the CSS

You can change popups’ properties by editing the MadCap | popup style in the Stylesheet Editor.
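A minimal example, using the style name mentioned above (the values are arbitrary placeholders):

    /* Make popup links visually distinct from ordinary hyperlinks. */
    MadCap|popup
    {
        color: #007a60;
        text-decoration: underline;
        font-weight: bold;
    }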

Popup Drawbacks

  • You don’t control where the popup window opens – Windows does that based on the available screen space above or below the popup link.
  • Because a popup window opens on top of the starting topic, the popup may cover something that users want to see.
  • It may not be clear to new users how to close a popup in order to keep reading in the primary topic. You could create a snippet that tells users where to click in order to close the popup but that’s one more detail to worry about and one more bit of clutter on the screen.
  • A new drawback comes out of the mobile space. Popups in a target running on a mobile device display as hyperlinks. This may be a problem if your design is based on using popups as popups.

What’s needed is a “popup-style” link that fixes these problems. That’s where dropdowns come in.

Dropdowns


Dropdowns are similar to popups in that clicking the link displays more content while keeping users in the starting topic, as shown below. But clicking the dropdown link “stretches” the screen and displays the dropdown body below the link. The first image shows the topic with the dropdown links unselected. The second image shows the HTML Element dropdown link selected and thus expanded.






Why use dropdowns instead of popups?

  • The dropdown body always appears below the dropdown link, eliminating the uncertainty of where the body will display, as with popups.
  • When users click a dropdown link, the screen “stretches” down in order to display the dropdown body rather than covering up part of the primary topic.

When might you use a dropdown?

  • In a topic that has a screen shot of a dialog box with many fields but you don’t want to show the full description of all the fields for fear of making the topic look too long. Instead, you list each field but hide its description in a dropdown. All the information is still present but hidden until users click on the link, thus making the topic look shorter and less intimidating.
  • In a topic containing a list of steps, where some users might not know how to perform a particular step. You might include a link called How or Tell Me How that, when clicked, opens a dropdown explaining how to perform that particular step.

How to Insert

To insert a dropdown, highlight the link text and:
  • Select Insert > Drop-Down Text (in the Text group on the Insert ribbon)

Customization Through the CSS

Closed dropdowns look like normal text and are prefaced by a right-arrow-in-a-box icon. Expanded dropdowns look the same but the icon changes to a down-arrow in a box. You can change these properties by editing the MadCap | Dropdown styles in the Stylesheet Editor.
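A dropdown is actually a small family of styles – the head, the hotspot users click, and the body – so a customization sketch might look like this (the formatting values are placeholders):

    /* The clickable dropdown text. */
    MadCap|dropDownHotspot
    {
        color: #005a9c;
        font-weight: bold;
    }

    /* The content revealed when the dropdown is opened. */
    MadCap|dropDownBody
    {
        border-left: 2px solid #cccccc;
        padding-left: 12px;
    }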

Dropdown Drawbacks

None, in my opinion. However, I’d be interested to hear competing opinions.

Expanding Links

Expanding links are similar to dropdowns except that the expansion is horizontal, like pulling a window shade sideways. This literally reformats the text paragraph that contains the link, as shown below. The first image shows the topic with the expanding link, the word hoagie, unselected. The second image shows the topic with the link selected.



Like a dropdown and a popup, clicking the link displays the body but keeps users in the starting topic. However, expanding links can only contain text.

When might you use an expanding link? Typically, when you want to create a short, text-only link such as a definition or perhaps the phone number for tech support.

How to Insert

To insert an expanding text link, highlight the link text and:
  • Select Insert > Expanding Text (in the Text group on the Insert ribbon)
  • Select the Show Tags > Show Markers option, shown below. This displays the link inside a pair of square brackets, followed by a pair of empty square brackets. 







         

  • Select the text to use for the link body and move it inside the empty square brackets. 

Customization Through the CSS

Closed expanding links look like normal text and are followed by the right-facing-arrow-in-a-box icon. Expanded expanding links display the link body to the right of the icon. You can change these properties by editing the MadCap | Expanding styles in the Stylesheet Editor.
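As a sketch – and anticipating the drawback noted below about the expanded body blending into the surrounding text – you might do something like this (the formatting values are placeholders):

    /* The clickable expanding-text link. */
    MadCap|expandingHead
    {
        color: #005a9c;
    }

    /* The text revealed when the link is expanded; italics and color make it
       obvious that the link is open. */
    MadCap|expandingBody
    {
        font-style: italic;
        color: #990000;
    }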

Expanding Link Drawbacks

Expanding links, while cool, have a number of drawbacks.

  • They’re text-only.
  • The link body text format looks like regular text in a topic. This can make it hard to tell if the link is closed or expanded. The arrow-in-a-box icon indicates whether the link is closed or expanded by pointing to the right or down, but users may not notice it. The solution is to change the format of the link body text to italic or red using the Stylesheet Editor.
  • Creating expanding links takes more steps than creating most other links. For an expanding link, you select the text to use as the link. Then select Insert > Expanding Text. Then select the Show Tags > Show Markers option, which displays the link inside a pair of square brackets, followed by a pair of empty square brackets. Finally, select the text to use as the body and move it inside the empty square brackets. It’s not difficult; it’s just a little more involved.
  • Expanding links dynamically reformat the paragraph in which they appear. Many users seem to find this disconcerting.

Togglers

Togglers are similar to dropdowns in that clicking the link displays the link body while keeping users in the starting topic. However, unlike a dropdown, where the body displays below the link, clicking a toggler can display multiple pieces of content – text, graphics, tables, and so on – anywhere in the topic.

When the topic opens, the toggler-controlled content is hidden until users click on the toggler, shown below. The first image shows the topic with the toggler unselected. The second image shows the topic with the toggler selected and various new pieces of content displayed – the advanced information paragraph, the graphic, and the list of steps.




Why use togglers?

  • They offer tremendous flexibility; a toggler can display any type of content anywhere in a topic.
  • They represent a user-oriented philosophy in terms of who controls what content is visible. The author can control what content is visible in a topic through the use of conditions but this takes control away from the users. Togglers give that control back to the users.

When might you use a toggler?

  • When displaying a topic that includes a lot of supporting content but showing that content all at once might make the topic look too long or overwhelming.
  • When documenting a procedure whose first few steps are identical for all users but whose later steps vary somehow – depending on whether the user is in the US or Canada, for example. You could add two toggler buttons labelled US Steps and Canadian Steps to the topic. Clicking the appropriate one displays the appropriate steps without the visual clutter and potential for confusion of showing both sets of steps and telling the users to pick the appropriate ones.

How to Insert


To insert a toggler, highlight the text or other content item (images, tables, etc.) that you want the toggler to show or hide and:
  • Select Home > Attributes > Name (in the Attributes group on the Home ribbon) and type a unique name for the text.
  • Repeat for each additional content item. Note that each toggler-able item must have a unique name.
  • Type the text or insert an icon to use as the toggler link.
  • Select the text or icon and select Insert > Toggler (in the Text group on the Insert ribbon).
  • In the Insert Toggler dialog box, select each named content item that you want the toggler to control.
     

Customization Through the CSS

Closed togglers look like normal text and are prefaced by the right-facing-arrow-in-a-box icon. Expanded togglers look the same but the icon changes to a down-arrow in a box. You can change these properties by editing the MadCap | Toggler style in the Stylesheet Editor.
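For example (placeholder values again):

    /* Make toggler links stand out so users notice that more content is available. */
    MadCap|toggler
    {
        color: #005a9c;
        font-weight: bold;
        text-decoration: underline;
    }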

Toggler Drawbacks


  • Togglers are cool but, like expanding links, require some extra work to create. The first step is to create all the content that will be in the topic. You then assign a name to each piece of content that you want to make controllable by the toggler. (Right-click in the block bar for that piece of content, select Name, and type the name.) Then add the text or graphic that you want to use as the toggler link. Finally, make that text or graphic the toggler (Insert > Toggler) and specify the named content items that toggler will control. This is per topic. Nothing difficult, but the number of steps may lead you to only use togglers on a limited basis.
  • Users might get confused as different pieces of content appear or disappear in a topic.

Miscellaneous Other Link Types

In addition to the types described above, there are several others.

  • Table of Contents – We don’t think of a TOC as a link type, but it’s effectively a list of hyperlinks.
  • Index – Like a TOC, the index is effectively a list of hyperlinks.
  • Text popup – This is similar to a regular, or “topic” popup but with some crucial differences. 
    A regular popup is a link to a topic, and the target topic displays in a popup window. That target is a topic in its own right. This means any number of links can point to it and any change to the content only has to be made once, in the target topic.

    In contrast, a text popup looks like a regular popup but the target content is inserted in the topic that contains the popup link. So if you want to list the phone number for tech support as a text popup in ten different topics, you have to insert it in each of those ten topics. And if the phone number changes, you have to find those ten topics and modify the content in each one. (But, to be fair, you could create the text popup as a snippet.) The text popup option is available in the Text group on the Insert ribbon.
    A text popup is text-only, so it’s less flexible than a regular popup.

Summary

If you haven’t yet gone beyond hyperlinks and topic popups, take a look at the other types of links that Flare offers. You may find some unexpectedly useful new ones.





Wednesday, June 13, 2018

Information 4.0 Technologies and Their Issues


Information 4.0 is getting a lot of attention, but what is it and how will it work? Andy McDonald, one of its evangelists, describes it as “…the informational component of Industry 4.0.” (I discussed Industry 4.0 in an article in the Winter 2017 issue of Communicator.)

Think of “Information 4.0” as an umbrella term for advanced technical communication technologies. Its overall goal is to create user assistance that is:
  • Continuously updated – as up-to-date as possible.
  • Focused on the requester’s needs – an event triggers the content which is then automatically profiled for the requester, as opposed to being static and generic.
  • Ubiquitous – available when and where needed.
  • Broken into small chunks, or “fragments”, that are independent of each other and assembled as needed, like beads on a string.

How will Information 4.0 work? At a high level:
  1. Some mechanism determines the context of an information request.
  2. Some mechanism sends that context to a repository of categorized content fragments.
  3. Some mechanism extracts the appropriate fragments from the repository, forms them into an output, and sends that output to the requester.
If you’re thinking that this is simply an extension of how we create context-sensitive help today, you’re right. But Information 4.0’s technologies and required skills make it a HUGE extension of that work.

In this article, we’ll look at some of the major Information 4.0 technologies by function, and some issues behind those functions. As you’ll see, there are as many questions as answers today as the concepts, technologies, and methodologies emerge. (Similar to the web in the mid-90s, when the browser wars were heating up and people often didn’t know what browser they had or even what a browser was.)

The article does not discuss specific technical communication-oriented tools because those tools are still undefined. Today’s help authoring tools will add Information 4.0 features and new tools will appear as that market emerges. That’s the subject of a later article.

Contextualization

Technical Overview

In order to tailor the content to a requester’s needs, the system must know the context of the request. Contextualization sounds mysterious but it will be familiar if you create context-sensitive online help. It lets the help system determine the requester’s location within an application and what help topic is related to that location. (If the requester is in the Print dialog box and clicks a Help button, a Print Dialog Box Help topic should open. Simple.)

Traditional context-sensitive help is simple. A standard method has existed for years and is supported by the GUI of our help authoring tools; it’s a well-known process with no coding work. (Although companies can have proprietary methods that the tools don’t support.)

In Information 4.0, however, “context” goes far beyond the “in what dialog box is the requester located” model to include contexts like:
  • Geographical – physical location, outdoors using GPS or indoors using GPS or beacons.
  • Chronological – date and/or time.
  • Environmental – temperature, light levels, and more.
  • Spatial – device orientation, such as whether you’re holding your phone in portrait or landscape mode, and more.
  • Personal – pulse, temperature, and more.
  • Perhaps other contexts, such as physical to detect conditions like vibration or strain in machines.

Issues

Contextualization issues include:
  • Transience. Traditional context-sensitivity is stable until the requester changes it – e.g. you’re in dialog box A until you go to dialog box B. But the other types can change quickly and often, like a light sensor that has to distinguish between light and shadow while the requester is under a tree on a windy day. This puts more demands on the sensors.
  • Context detection method. Traditional context detection is built into our authoring tools; others are not and the detection method must be coded separately. We’ll need programmer support.
  • Context transmission method. Transmitting the contexts to the processor needs fast and reliable internet access, plus some local fallback when internet access is slow or lacking.
  • Context processing. The context must be analyzed to determine what content fragments to send to the requester. This might take place outside or, eventually, within the authoring tool, possibly on a server.
  • The effect on hardware, software, and network requirements.

Content creation

Technical Overview

This is simply the creation of the content to be delivered in response to a user’s request.

Conceptually, it’s identical to content creation today but Information 4.0’s requirement for fragments complicates things. How?
  • Traditional authoring tools like Word or FrameMaker exist to create documents – books. We can create content fragments with them but the process takes more concentration. Users of Word or FrameMaker may have to switch to more topic-oriented authoring tools.
  • Writing will have to change. For example, traditional continuity (“as described above”) won’t work because “above” may be in a different fragment that may not appear in a given output.

Issues

Some content creation issues exist today, and new and more complex ones will appear.
  • Fragments will have to meet the needs defined by the contexts. That seems self-evident, but it means that context definition will have to be done prior to content creation. In other words, “winging it,” already a bad idea today, will be a really bad idea under Information 4.0.
  • Fragments may have to stand alone or be combinable on the fly in response to user requests.
  • Fragment naming, metadata, and similar control conventions will be crucial. The “winging it” that we can get away with for 100 fragments will be unmanageable for 1,000 or more.
  • Fragment creation will require authoring tools that create syntactically clean code and no tool-specific code that might affect the processing.
  • Fragment content must be separate from formatting rules. This requires a CSS and elimination of local formatting. The content must also be separate from business rules to let it be used in any output ranging from a browser to a mobile app to a bot to whatever is next. The internal structure of the content has to reflect this separation.
  • Search will be crucial for finding information, so SEO (search engine optimization) will be crucial.
  • Fragments may have to be created to meet different, personalized requests. For example, for a process description, can there just be one fragment containing a list of the steps? Must there be an additional fragment containing the steps and the concepts? Or an additional fragment that describes the concepts that can be combined with the steps fragment depending on the requester’s background? And how do we know the requester’s background?
  • The contents might use “microcontent” depending on your definition of microcontent – ranging from a title or heading to an abstract or meta-description that appears on a search results list.
Finally, and most meaningfully for the future of technical communication…
  • The number of fragments required, plus the naming and coding requirements, may mean that traditional technical communication won’t be able to keep up with the work. Instead, AI-driven tools will create the fragments; our roles will become that of AI rule writer and content curator. Traditional writing will become a thing of the past in companies using Information 4.0.

Content selection

Technical Overview

Once the content fragments exist, it’s necessary to select appropriate fragments for a particular context and control the order in which they’re presented to the requester. (“Order” may seem like an odd issue if each fragment is independent but individual fragments may discuss individual steps in a task and must be presented in the right order. This is more important in print than online. In online, fragment order is less important because the order may be controlled by hyperlinks – e.g. “Click to go to the next step”. However, that implies that the links may have to be included for some outputs but excluded for others. This increases the structural complexity of the fragments.)

Back to content selection…

Content selection means that the fragments must be tagged so that they can be retrieved based on the context. There’s a model for this today: conditionality.

Issues

The conditionality feature in help authoring tools like MadCap Flare lets us assign a tag to fragments of content. We can then select content for a particular output by including or excluding content that has particular conditional tags. To do this, however:
  • Authors must know what outputs they need in order to create and assign the tags. This work is simple but time-consuming when they have to tag many fragments. The same will be true for Information 4.0.
  • Conditionality code is tool-specific. It will be years before Information 4.0 tools are as integrated as today’s help authoring tools so working in Information 4.0 will require multiple tools. This means the tags must be open source. The W3C’s RDF (Resource Description Framework) seems like the most likely candidate because it’s already used in Industry 4.0, the conceptual home of Information 4.0.
  • Authors will have to become familiar with RDF. We probably won’t have to know it at the code level; GUI tools exist now. But it will be important to understand RDF at a conceptual level in order to use it well.
  • The number of fragments to tag and the speed needed to do so is likely to shift the work toward an AI-based model. This means that our roles will change to AI rule writer and enforcer/curator and technical communication will become a thing of the past in Information 4.0 shops.

After the tags have been assigned, the appropriate fragments must be called from the repository. Calling fragments in today’s help authoring tools is mechanically simple point and click (though figuring out the logic can still be complicated). But until Information 4.0 authoring tools become as integrated as today’s help authoring tools, we’ll need programming support to write the scripts to read the context state information, translate that to the RDF codes, and call the fragments to generate the output.

Output generation

Technical Overview

After the processor receives the context information and retrieves the appropriate fragments, it has to generate the output. This seems straightforward, like generating HTML5 output from a help authoring tool. However, as you might expect by now, the process may not be that clear.

Issues

One issue is whether the output is a loose set of XHTML files or a packaged set of files like that created when outputting HTML5 from a help authoring tool. Why does this matter?
  • If ancillary navigation files, like a table of contents, or control files, like a CSS, are to be part of the output, they have to be generated and applied to the output through some build process. Most builds are quick, under a minute, but I have seen some that take hours. Requesters won’t want to wait for a build that takes hours, so they may not use the content at all or may use an older version, if they can. The question is whether requesters will wait even for a build that takes a minute.
  • To avoid the build time problem, the fragments may just be uploaded to the requester’s device. If so, how will the ancillary files be applied, if at all?

Issue two has to do with whether to enable responsive output features. Given that ubiquity is one of the qualities propounded for Information 4.0, one output should be readable on desktops, tablets, phones, etc. We can create a separate output for each device but that requires a build, with the build time issue, or we can create one responsive output that can detect what type of device it’s on and reformat itself accordingly.
  • If we want to enable responsive output, which seems logical to drive ubiquity, there must be a build process. That raises the build time delay issue mentioned above.
  • To avoid the build time issue, there must be a way to generate files that will run on any device.

Output delivery

Technical Overview

Output delivery entails the ubiquity mentioned above – the accurate and up-to-date output must always be available when and where needed. This seems like standard internet operations, but there’s also the issue of internet access.

Issues

Some of the issues are similar to those under output generation. But two others apply to delivery.
  • In order for content to be ubiquitous, dynamic, and spontaneous – three properties desired for Information 4.0 – requesters need internet access. What happens when requesters have poor or nonexistent access? This is an issue with mobile apps as well and led to the creation of local storage options that could hold content until users got internet access back, at which point the app would connect to the database and automatically sync the data.
  • What part of the content would be sent to the requester, all of it or just those parts affected by the context call?

Summary

There will be other issues too, such as content storage and analytics.

Where does Information 4.0 stand as of mid-2018?
  • Many of the concepts – contextualization, fragmentary content creation, networked content, content tagging and selection, ubiquity in the form of multi-device capable responsive output, analytics, and others – already exist and have been implemented in varying degrees in today’s authoring tools. They’ll have to be extended to move technical communication into the Information 4.0 world.
  • Other concepts, such as AI, RDF, and machine-generated content, exist now outside traditional technical communication. They will have to be integrated into technical communication.
  • Today’s help authoring tools don’t support Information 4.0 but they provide a model for those tools as they start to emerge.
  • The challenges of working in Information 4.0 are broad and deep, as this article tried to show, and will rival or exceed the issues that we faced when word processing, the web, and online help all hit technical communication.

Information 4.0 may ultimately be very different from what I describe here. It may live under another name. But the challenges will be the same and will take technical communication into new and intellectually challenging areas with new and fascinating jobs.

This article was originally published in ISTC Communicator, Summer 2018.

About the Author

Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA.  He has many years of experience in technical writing, with 34 in training, consulting, and developing for online formats and outputs ranging from WinHelp to mobile apps and tools ranging from RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been working in mobile since 1998 and XML since 2000.

Neil is MadCap-certified in Flare and Mimic, Adobe-certified for RoboHelp, and Viziapps-certified for the ViziApps Studio mobile app development platform. He is a popular conference speaker, most recently at TCUK 2017. Neil is an STC Fellow, founded and managed the Bleeding Edge stem at the STC summit, and was a long-time columnist for STC Intercom, IEEE, and various other publications.  You can reach him at nperlin@nperlin.cnc.net.