In my article about a proposed feature set for GUI-based
Information 4.0 authoring tools in the Summer 2019 issue of Communicator, I
requested comments and feedback on the subject. I present those comments in
this follow-up article as I received them, with no editing except to shorten a
few due to space limits. The comments appear below in roughly the order in
which I received them. I prefaced each comment, or set of comments, with the
author’s name and Twitter handle in bold. (If you’re struck by the seeming lack
of organization of the comments, remember that this is a tweet-stream.)
I also added a few comments of my own in response to a phone
conversation with one of the commenters. Here, with no further discussion, are
the comments.
The Comments
Cruce Saunders
@mrcruce
What should
next-generation authoring look like now that we have 1,000s of permutations of
media, content-types, browsers, channels, contexts, & formats? A difficult
& valiant question to try to answer! Added some thoughts to a recent
article. A thread for further comment.
Authoring
rarely ever happened consistently in one GUI, even for small companies. In an
enterprise, authoring is the single most diverse environment within content
lifecycle process and technology. Content can be acquired in dozens of ways in
a single department!
We
should never assume an ability to conform large populations to a single GUI
authoring platform. What typically happens in such enforcement scenarios:
“cheating”. No GUI, especially one that wants to be so feature-rich, ever meets
everyone’s needs.
So
content gets built elsewhere and then PASTED into the GUI (often by someone
else), where it gets further manipulated. And, one hopes, enriched with metadata.
Or, publishing systems just get built around the GUI for various authoring
groups that decide not to use it.
And
the well-intentioned standard authoring regime falls into a chaotic mess of
manual content transforms with no accountability or traceability. Most
enterprises today live in some form of this mess.
Even
when some smaller silos achieve some measure of consistent coherence (e.g. #techcomm), none
of the related content sets are compatible. The answer, [A] believes, lies in
aligning structural & semantic standard patterns across disparate
authoring, management, & publishing systems.
All
that being said, we do need to advance the state of GUI authoring. Vendors are
working on this in product roadmaps. The biggest area of interest to me is
essentially today’s attempts at “What You See Is Semantically Marked-Up
Content”.
GUIs
that *as the author types* suggest semantic associations derived from an
organizationally-standardized taxonomy or ontology provider. This is effortless
and invisible...machine-prompted, author-empowering.
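As a rough illustration of what such machine-prompted suggestion might look like, here is a minimal sketch; the taxonomy, its synonyms, and the prefix-matching strategy are all invented for demonstration, where a real tool would query an organizationally standardized taxonomy or ontology service:

```python
# Illustrative sketch: suggest taxonomy concepts as the author types.
# The taxonomy and matching strategy below are assumptions for
# demonstration, not any vendor's actual implementation.

TAXONOMY = {
    "authentication": ["login", "single sign-on", "credentials"],
    "authoring": ["editor", "draft", "review"],
    "automation": ["pipeline", "workflow", "trigger"],
}

def suggest_terms(typed_word: str, taxonomy: dict) -> list:
    """Return concepts whose label or a synonym starts with the typed word."""
    typed = typed_word.lower()
    matches = []
    for concept, synonyms in taxonomy.items():
        if concept.startswith(typed) or any(s.startswith(typed) for s in synonyms):
            matches.append(concept)
    return sorted(matches)

print(suggest_terms("auth", TAXONOMY))  # ['authentication', 'authoring']
```

An editor could call this on each keystroke and offer the returned concepts as unobtrusive, dismissible annotations.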
The
same sort of in-context editing, coupled with machine intelligence, can also
help to prompt additional annotation useful for content targeting.
Another
area of interest is GUIs in which a “sidecar” toolbar powered by artificial
intelligence provides authors with in-context structured snippets for reuse and
inclusion, based on the content of the material being authored.
Or,
the sidecar suggests portions of text that might be reused by others. And provides
authors the ability to apply metadata or discussions to individual snippets, or
molecules, of content. Of course, these sidecar tools can be made to perform
MANY other functions.
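To make the sidecar idea concrete, here is a minimal sketch of ranking a snippet library against the draft being authored. The snippet library and the word-overlap (Jaccard) measure are assumptions for illustration; a production sidecar would likely use semantic similarity rather than raw word overlap:

```python
# Illustrative sketch of a "sidecar" surfacing reusable snippets
# similar to the text being authored, ranked by word overlap.

SNIPPET_LIBRARY = [
    "Restart the device before installing the update.",
    "Contact support if the error persists.",
    "Back up your data before installing the update.",
]

def words(text: str) -> set:
    """Lowercase the text, strip basic punctuation, return its word set."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def suggest_snippets(draft: str, library: list, threshold: float = 0.2) -> list:
    """Rank library snippets by Jaccard similarity to the draft text."""
    draft_words = words(draft)
    scored = []
    for snippet in library:
        overlap = draft_words & words(snippet)
        union = draft_words | words(snippet)
        score = len(overlap) / len(union) if union else 0.0
        if score >= threshold:
            scored.append((score, snippet))
    return [s for _, s in sorted(scored, reverse=True)]

print(suggest_snippets("installing the update", SNIPPET_LIBRARY))
```

The threshold keeps the sidecar quiet unless a snippet is genuinely close to what the author is writing.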
In
my view, any vendor authoring product, and any related interface, needs to
embrace schema application & portability to matter long-term. Companies
desperately need to be able to move content around. But this is not possible
without schema alignment across systems.
And
that is impossible without authoring interfaces that incorporate a structural
schema. I’d like to see more friendly blank-canvas interfaces (‘Word-like’)
that incorporate an ability to apply and manage schema-driven templates, beyond
just standardizing styles.
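A minimal sketch of what schema-driven authoring adds over style standardization: the draft is checked against a structural schema before it can be saved. The "procedure" schema below is an invented example, not DITA or any real standard:

```python
# Illustrative sketch: validate a blank-canvas draft against a
# structural schema. The schema shape here is invented for demonstration.

PROCEDURE_SCHEMA = {
    "required": ["title", "steps"],
    "optional": ["warning"],
}

def validate(content: dict, schema: dict) -> list:
    """Return a list of structural problems; an empty list means it conforms."""
    problems = []
    for field in schema["required"]:
        if field not in content:
            problems.append(f"missing required element: {field}")
    allowed = set(schema["required"]) | set(schema["optional"])
    for field in content:
        if field not in allowed:
            problems.append(f"element not in schema: {field}")
    return problems

draft = {"title": "Replace the filter", "note": "wear gloves"}
print(validate(draft, PROCEDURE_SCHEMA))
# ['missing required element: steps', 'element not in schema: note']
```

The point is that the same schema travels with the content, so a conforming draft can move between systems without manual transformation.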
We
can see many attempts at schema-based GUI authoring, especially in the plugin
market, where Word-to-DITA conversion has been pursued for some years.
One
of the biggest areas of need, and most challenging, is the development of
graphical user interfaces that support multiple variations of the same content
within a single authoring process.
Personalization
based on user type and state, and device or environment states, is something
that many authoring processes need. And as we feed our customer experiences
with ever-more contextual data, authoring for human or machine-mediated
variation becomes essential.
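One way to picture variation authoring is a single logical content unit that holds several variants keyed by audience context, with delivery picking the best match. The contexts, variant texts, and fallback rule below are my own assumptions for illustration:

```python
# Illustrative sketch of "variation authoring": one content unit,
# several context-keyed variants, resolved at delivery time.

CONTENT_UNIT = {
    "id": "install-intro",
    "variants": {
        ("novice", "mobile"): "Tap Install and follow the prompts.",
        ("novice", "desktop"): "Click Install and follow the wizard.",
        ("expert", "any"): "Run the silent installer with -q.",
    },
}

def resolve(unit: dict, user_type: str, device: str) -> str:
    """Pick the variant matching the context, falling back to 'any' device,
    then to the first variant if nothing matches."""
    variants = unit["variants"]
    return (variants.get((user_type, device))
            or variants.get((user_type, "any"))
            or next(iter(variants.values())))

print(resolve(CONTENT_UNIT, "expert", "mobile"))
# Run the silent installer with -q.
```

The authoring challenge the thread describes is letting a writer see and manage all of these variants in one editing session rather than as separate documents.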
The
good news is this has also been pursued for some years, and the heuristics have
been explored in multiple production environments — mostly in Customer
Experience Management #cem
platforms.
But
there's plenty of room for innovation here, because "variation
authoring" interfaces have not yet been perfected or mass-adopted. It's
still a blue ocean space and vendors can distinguish themselves here.
There’s more to
say, and much more to discuss, but the future of authoring is a very deep
rabbit hole. And a worthy exploration. Take a look at more ideas from Neil
Perlin (@NeilEric) in @ISTC_org Communicator or via the #info40 blog post here:
James Mathewson @Mathewson_CS
The challenge is
context. Content is only meaningful to the degree that it is relevant in
context. How do you build an authoring system that helps writers grasp digital
contextual cues and write relevant content using those cues? Modular content
grows this problem exponentially.
Scott Abel
@scottabel
Maybe our efforts would be
better spent getting corporate leaders (those afraid of being displaced by
disruptive innovators) to understand the need to become information-enabled.
Authoring tools are created (and updated) in response to demand. The demand is
simply not there — yet
Neil Perlin (in response
to Scott Abel’s point above)
A fair point.
However, in the early days of help and the web, GUI tool development went on -
often in odd or even wrong directions - even as the technology was spreading.
Better IMO to become information enabled AND create the tools for doing so at
the same time.
Cruce Saunders
The sea change is
coming. Both customers and vendors are driving the evolution. One hand washes
the other. Celebrate the innovators, wherever they sit.
Mike Atherton
@MikeAtherton
+1 for context and structure. Something
akin to a headless CMS is a good start, but rather than a bare-bones
experience, illustrative device- and platform-specific templating to show
authors how their work may appear.
And more importantly,
since we're moving from a centralised publishing environment to distributed third-party platforms
(AMP, Instant Articles, other APIs), then explicit support and guidance ('recipes'
if you will) from platform owners.
Aaaand a new mental
model. The print analogy refuses to die and doesn't help separate content from
presentation. A better analogy might be radio waves.
Neil Perlin (in response to Mike Atherton’s previous point)
I'll bite. Why radio
waves?
Mike Atherton
(in response to Neil Perlin’s point above)
Because the
information transmitted is intangible, device-agnostic and everywhere at once.
And because the same technology can emit frequencies designed for humans and
frequencies designed for machines. I didn't say it was perfect :)
Cruce Saunders
(in response to Mike Atherton’s point above)
Mike's 'radio waves'
is similar to how I see content. Anything that can be available in multiple
states, places, usages at one time is very different from tangible one-time
published artifacts. It's 'information energy'. ;) But it's more durable even.
So, we do need new frames.
Real device-, type-, user-, and context-agnostic
contextual preview or simulation is a holy grail. I even think it should be
source agnostic. I actually believe there's an entire missing product category
here. Rendering simulation & collab is something more than just another
feature.
Mike Atherton
It's not even
about being WYSIWYG 2.0 (I made that up), but what's missing from the
structured content rhetoric is solid criteria for *how and why* to make
specific structural choices. Bringing home context of use may help.
Actually
@eaton
I think "next-generation
authoring" has to assume that beyond highly data-driven fill-out-the-form
stuff that CMS devs have already (kind of) solved… content will end up
consisting of 1) Narratives, 2) Components, and 3) Assemblies/Aggregates…
…And also has to assume that
workflow/responsibility for each of those modalities will require different
tooling. You talk a little about this downthread but I think there's too much
attention paid to UI and not enough to contextualized UX in the content
editing/mgmt space
Then the big
mind-blowing piece is that a huge percentage of what we would call
"narrative" is spread across multiple pages/screens/artifacts for
final delivery. Some of the journey/experience management stuff starts touching
on that, but…
Mark Demeny
@mde_sitecore
Great thread and summary from
@NeilEric as well. It's a hard one to resolve (esp. over
Twitter). Even putting aside the harder questions of content lifecycle, reuse,
transformations for specific channels, etc. you get into questions of
appropriate tools and interfaces very early.
You'll often hear
"I wish my simple-to-use CMS was better at structured/headless
content." Similarly, you'll hear the opposite complaint about vendors that
have a bias toward structured content but sacrifice page layout or authoring
experience.
As I see it, there are 3 fundamental conflicts with content lifecycle:
- Distributed vs. centralized (with tools, author roles, team, geo, etc.)
- Structured vs. channel-specific
- Creation agility vs. reuse (via better findability, analytics, etc. - more lean to the former)
And personalized/contextual
content is a problem *layered across all of these*. It could be that a specific
region, or an analytics team is responsible for acting on that - so I see that
as not a distinct problem, but related to and complicated by the existing conflicts.
Jan Benedictus
@JanBenedictus
Structured
Content Authoring, Component Based Authoring etc. are often mentioned - by
leaders; but “what problem do we solve” is not articulated. We have to go from
“strategic talk” to Tangible Benefits to explain Why. Today we are at @DrugInfoAssn to do so for Pharma #dia2019
Ray Gallon @RayGallon
Two Additional Points of My Own
Mark Demeny noted correctly that I gave scant coverage to
issues of governance, workflow, and sign-off control.
I’ll add that I barely mentioned the effect of Information
4.0 on technical communicators. The increased technical and management complexity
may drive some of today’s practitioners out of the field. That’s been predicted
with every new technology and, to a degree, is true, but most practitioners
adapt. What’s different with Information 4.0 is that even the base level of
technical and management complexity is far higher than earlier disruptive
technologies like word-processing in the 1980s and the web and online help in
the 1990s.
Summary
The comments section may seem rambling because it largely
matches the structure of the comments and responses. But I left it that way to
show the wide range of thought about the technical, structural, management, and
even philosophical issues. Once this article appears in Communicator, I’ll add
it to the Information 4.0 Consortium blog and the Hyper/Word Services blog, and
will add more posts as I get more comments.
So, now what? Is there a next step or has this just been an
interesting discussion? That will have to be the subject of more discussion by
members of the Information 4.0 Consortium. Stay tuned.
About the Author
Neil is president of Hyper/Word Services (www.hyperword.com) of Tewksbury, MA,
USA. He has four decades of experience in
technical writing, with 34 in training, consulting, and developing for online
formats and outputs ranging from WinHelp to mobile apps and tools ranging from
RoboHelp and Doc-To-Help to Flare and ViziApps. To top things off, he has been
working in mobile since 1998 and XML since 2000, speaking and writing about
Information 4.0 since 2017, and is a member of the Information 4.0 Board.
Neil is MadCap-certified in Flare and Mimic, Adobe-certified
for RoboHelp, and ViziApps-certified for the ViziApps Studio mobile app
development platform. He is a popular conference speaker, most recently at
MadWorld 2019 in San Diego, CA. Neil founded and managed the Bleeding Edge stream
at the STC Summit and was a long-time columnist for ISTC Communicator, STC
Intercom, IEEE, and other publications.
You can reach him at nperlin@nperlin.cnc.net.