
FOSS Training

April 6, 2010

I was privileged enough to be able to attend linux.conf.au in Wellington in January. While there, I caught Bob Edwards and Andrew Tridgell’s talk on “Teaching FOSS at Universities” (video of which can be found here). It intrigued me.

Open source software development is very different to developing software in a more traditional, closed source environment. The aim of their course is to teach students how to go about working within the open source community. It covers the practical aspects of checking out code from a repository, submitting patches, and undergoing code approvals and reviews. It also looks at some of the less tangible aspects, like what’s accepted and expected within the community, the motivation behind project development, and governance. The course also goes into some detail about documentation.
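For anyone who hasn’t worked this way before, here’s a sketch of one common patch-submission workflow, using git as an example toolchain. The repository URL, mailing list address, and commit message below are all invented for illustration – every project has its own conventions, so check the contributor documentation first.

    # Check out the code (the URL is a placeholder)
    git clone git://git.example.org/someproject.git
    cd someproject
    # ...make and test your changes, then commit them...
    git commit -a -m "Fix broken cross-references in the install guide"
    # Turn the new commit into a patch file, and mail it to the
    # project's development list for review
    git format-patch origin/master
    git send-email --to=someproject-devel@example.org 0001-*.patch

Other projects want patches attached to a bug tracker instead, or a branch pushed somewhere public; the constant is that nothing lands until it has been through the community’s review process.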

Documentation for open source projects is not quite the known quantity that it can be in many proprietary software environments. A developer I was working with once described it as “we live in the Wild West out here”, and – at least to an extent – he made a good point. While writing for an open source project may not be as wild and exciting as that sentence makes it sound, it can be unpredictable and, at times, incredibly frustrating. Frequently, a book has been written and reviewed in preparation for a release, only for the writer to discover at the last minute that a feature has been pulled from the version, a component has suddenly been renamed, or the graphical interface has had some kind of redesign. All of these things happen to open source writers on a regular basis, and frequently the only solution is to pull an all-nighter, get the changes in, and have the document released on schedule. And that’s only if you were lucky enough to find out about the change with enough time to spare before release!

So how does a writer plan for and write a documentation suite when there’s so much unknown in a project? The answer is – perhaps ironically – to plan ahead. You can’t plan for every contingency, nor should you try. But if you have a plan of any description, you’re going to be better off when things start to go wrong. Pin down the details as best you can, as far ahead as possible. But don’t leave it there: continue to review and adapt your plan. Keep your ear to the ground, and constantly tweak your schedule and your book to suit. If something comes up on a mailing list about a feature you’ve never heard of, don’t be afraid to ask the questions: “Does this need to be documented? Will it be in the next version? Where can I get more source information?”. Another trick is to make sure you build ‘wiggle room’ into your schedule, in case you suddenly discover a new chapter that needs adding, or a whole section that needs to be changed. If you’re consistently a few days or a week ahead of schedule, then even a substantial change should not throw you too far off balance.

Just like a ballet dancer, technical writers need to be disciplined, structured, and organised. But you also need to have grace, poise, tact, and – most importantly – flexibility.

Thanks to Bob and Tridge, I’ll be lecturing the 2010 FOSS course students at the Australian National University later this week. I’ll also be contributing to the textbook that is being developed for the course. True to form, it is being built by and for the open source community, using open source tools (including Publican, which has been developed in-house by some of my esteemed colleagues). Watch this space for more information.

Cross-posted to On Writing, Tech, and Other Loquacities


Wanted: Schedule Monkey

February 24, 2010

If you have any open source writing or writing-related opportunities, let us know and we’ll pimp them here.

Love technology? Love a challenge? We have the job for you.

Red Hat, the defining technology company of the 21st century, continues to expand its team to create compelling open source products.

We need someone to monkey around with our schedules. We have a bunch of schedules for our writers, and we need to merge them into one uber-schedule. We’re working with open source tools, so we need someone who isn’t scared of TaskJuggler.

You don’t need millions of years in the industry – you just need to be able to learn new tricks.

This part-time job is based in Brisbane, Australia. Check it out on seek.com.

Why Thank You!

November 13, 2009

Yesterday, a co-worker alerted me to the fact that my name had been listed as one of the Top Open Source Technical Writers on the web. I was blown away! I am seriously over the moon about it all, and wanted to sincerely thank both Aaron Davis and Scott Nesbitt of DMN Communications for the vote of confidence.

Technical writing is a funny kind of industry to be in. The people in it are, for the most part, seriously excited about where tech writing is going, and about what we can do along the way. Combine that with the type of people who are involved in open source generally, and you end up working with crazy-smart people who are seriously passionate about what they do, and about how what they do can make the technical world a better place.

I’m very privileged to be able to write free/libre and open source technical documentation for a living. Not many get to have that experience. The things the open source community has taught me, and the experiences I’ve had along the way, are things that working anywhere else just wouldn’t offer.

My passion is creating the best technical documentation I possibly can, and making it available to as many people as possible. More often than not in open source, the deadlines are tight, the scope is big, and the resources are limited. The challenge that situation creates is, as you’d expect, pretty huge. Being given the opportunity to create documentation that shines within that environment is one of the biggest challenges I’ve ever encountered. It’s a challenge I wake up to every morning, and while there are days when I think I can’t do it, there are many more days when all I want to do is inch a little closer to that goal. Having people like Aaron and Scott publicly recognise that effort is what makes the hard work all worth it.

To follow on from Aaron and Scott’s list, I’d like to shout out to all those people who write, contribute, edit, review, and use open source technical documentation – even if it’s only spotting typos and raising a bug. You are the ones who deserve the recognition, because without you, I wouldn’t have the opportunity to do what I love. I hope you all enjoy creating and using open source technical documentation as much as I do.

Cross-posted to On Writing, Tech, and Other Loquacities

Magic waterfalls

October 20, 2009

I was invited to speak as a guest lecturer at the Australian National University last week. The audience was a class of third and fourth year computer science students, and the topic was technical writing. After speaking for somewhere pretty close to an hour, and successfully getting a few laughs in that time, I answered a clutch of questions, and was then drawn into a discussion about engineering methods. The course convener pointed out that the five-phase model I use (which I discussed at least briefly in this blog post) is, in itself, a fairly typical engineering process. And of course he’s absolutely correct. It’s a perfectly ordinary process, based on the waterfall model.

It’s called a waterfall model because you start at the top, and the results of the first step carry you into the second, just like water flowing down a series of steps into a pool.

The students I was speaking to are at a point in their projects where they need to be producing some documentation. For a bunch of budding engineers this process can be a little daunting, and the question came up about the best way to tackle it. The answer is fairly simple: start at the top of the waterfall, and let the current take you. By answering a few questions in the information plan, you can start creating a content specification. Using the chapter headings and source information you developed in the content spec, you can write the document. Once it’s written, you can publish it; once it’s published, you can review it; and then you’re ready to start again at the top with the next project.

Technical writing is less of a creative process, and more of a scientific process, than just about any other kind of writing (with the possible exception of some kinds of academic writing). The creativity only becomes important when you try to turn it from something dry and boring into something magical.

Anyone with a scientific or engineering mind can create technical documentation. They might not enjoy it, but they are more than capable of creating it. It takes an artist to make it something wonderful, to turn it into something that people actually want to read, and to make it shine. It’s the difference between ‘magic’ and ‘more magic’.

Cross-posted to On Writing, Tech, and Other Loquacities

Haiku: A Journey

October 16, 2009

Warning: following these instructions will not result in the successful installation of Haiku onto an iMac G3.

It’s my first morning as a technical writer, and I’m presented with a challenge: install Haiku on a Mac. And not just any Mac. An iMac G3: Bondi Blue; at least ten years old; and well past anything resembling its prime.

Anybody need a good working definition of ‘character building exercise’?

For those, like myself, who know little or nothing about Haiku (the operating system, not the poetic form): it is an open source operating system, currently in active development, designed to be compatible with BeOS.

To make things a little easier, I was given a CD with the Haiku bootloader on it, and a link to instructions for installing Haiku using such a CD. Unfortunately, the iMac couldn’t read the CD. Instead, it would need to download the bootloader file from another computer running a TFTP server. So I unplugged my laptop from the network and got to work.

Skimming through the Red Hat Enterprise Linux 5 Installation Guide, it became clear: to set up a TFTP server, my laptop had to be the DHCP server for a new network. I followed the reference to the Red Hat Enterprise Linux 5 Deployment Guide and delved into the section on setting up a DHCP server.

One problem arose: the Deployment Guide has no installation step in its procedure for setting up a DHCP server. Configuration is covered, but it assumes the DHCP server package is already installed, which was not the case on my laptop. So I filed a bug report and began the hunt. After a number of frustrating dead ends, I resorted to Yum Extender (yumex), and found it by trawling through every package which mentioned DHCP in the package name or description.
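In hindsight, plain yum could have done the trawling for me. A sketch of the kind of queries that would have shortened the hunt (the exact output depends on your repositories):

    # List every package with "dhcp" in its name or summary
    yum search dhcp
    # Or ask which package ships the DHCP server daemon itself
    yum provides "*/dhcpd"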

The needed package was dhcp, by the way: sudo yum install dhcp and I was back on track.

Now that I had a DHCP server installed, I was ready to install the TFTP server. Or was I? According to Chapter 21.2, Configuring a DHCP server, I had to create a new file, /etc/dhcpd.conf, for it all to work. So I did. And it didn’t work. So I copied the sample file mentioned in Chapter 21.2 straight into the /etc/ directory. That didn’t work either. I tried multiple changes, including rewriting dhcpd.conf myself.

In the end, the solution was fairly simple. Between Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6 alpha, the file had moved: what had been /etc/dhcpd.conf was now /etc/dhcp/dhcpd.conf. So, with the new location sorted out, the DHCP server was online.
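For anyone retracing these steps, the configuration needed amounts to something like the following minimal sketch. The subnet and addresses here are invented – adjust them to match your own network – and the filename points at the bootloader the TFTP server will hand out:

    # /etc/dhcp/dhcpd.conf on RHEL 6 alpha (/etc/dhcpd.conf on RHEL 5)
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;    # addresses to hand out
        next-server 192.168.1.1;              # the TFTP server (my laptop)
        filename "openfirmware_boot_loader";  # file for clients to fetch
    }

    # Restart the server to pick up the new configuration
    sudo service dhcpd restart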

Success, yes? No. After editing dhcpd.conf according to the instructions in Chapter 21.2.1, I checked to see whether the TFTP packages were installed on my laptop. They weren’t. So I loaded yumex once again and searched for the packages required for the TFTP server. I installed those, and then used the chkconfig command to see if the TFTP server was configured to start automatically. It wasn’t. I entered the commands as per the Installation Guide, Chapter 34.4.1, to bring it online, and checked again. Everything was now up and running.
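Spelled out as commands, that TFTP setup looks something like this – a sketch based on how TFTP is packaged on Red Hat systems of this vintage, where the server runs under xinetd:

    # Install the TFTP server and xinetd, which supervises it
    sudo yum install tftp-server xinetd
    # Enable the tftp service (this flips "disable = yes" to "no"
    # in /etc/xinetd.d/tftp), and make sure xinetd starts at boot
    sudo chkconfig tftp on
    sudo chkconfig xinetd on
    sudo service xinetd restart
    # Confirm that the service is now enabled
    chkconfig --list tftp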

The next step was the bootloader file, which I copied to /var/lib/tftpboot/ on my TFTP server. Then I put the CD back into the iMac, checked the network cables, and tested to see if the iMac (which was at this point booting into Linux) had received an IP address from my laptop. It had, so I started to tick off the list. Step one: check. Step two: check. Step three: check. Step four: check.

Step five? It failed. So I booted the iMac and gave the four-fingered salute, Command-Option-O-F. Rather than finding myself at an Open Firmware prompt, as expected, I watched the machine boot into Linux again. Incongruously enough, Open Firmware would not open. After numerous attempts, we detached the ancient keyboard and attached a brand-new one. It worked perfectly.

Step five? Check.

Two steps from success.

After booting to the Open Firmware prompt, I typed in the boot command specified in the procedure, and waited. And waited. And all I got back was load-size=0. Load-size too small? Too large I could have understood, though the openfirmware_boot_loader file is only 231 kB. This was, perhaps, the most perplexing hurdle. But it was overcome, like all the others, with a touch of simplicity. The firewall on the laptop was turned off, and after a moment’s hesitation, the iMac transferred the file across.
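For anyone retracing the steps, there’s a gentler option than switching the firewall off entirely: open just what TFTP needs. A sketch for iptables as shipped in that era (note that TFTP sends the actual data from an ephemeral port, which is why the connection-tracking helper matters):

    # Allow incoming TFTP requests on UDP port 69
    sudo iptables -I INPUT -p udp --dport 69 -j ACCEPT
    # Load the TFTP connection-tracking helper, so the data transfer
    # coming back from a random port is let through as well
    sudo modprobe ip_conntrack_tftp   # nf_conntrack_tftp on newer kernels

You can then test the server end without involving the iMac at all, by fetching the file back with the command-line tftp client (the address is a placeholder for the laptop’s own):

    tftp 192.168.1.1 -c get openfirmware_boot_loader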

There it was. Ready to install. Except it still didn’t work.

Those of you who took a moment to read the step-by-step guide may have noticed the paragraph at the bottom of the page which notes that the kernel for loading Haiku via a TFTP server is currently broken.

At the bottom of the page. After the installation instructions.

Hence the warning at the top of this page. It seemed the polite thing to do.

The beast within

September 16, 2009

Writing is the only thing that, when I do it, I don’t feel I should be doing something else.

Gloria Steinem

National Novel Writing Month (NaNoWriMo) is coming up again. And so, like many other writers (both professional and aspiring), I’ll be setting aside the thirty days of November to pump out 50,000 words of a novel – about 1,700 words a day. This is in addition to the thousands of words I pump out every month as part of my role as a technical writer, of course. The question here is: what on earth makes someone who writes all day for a living want to go home and write all night as well? It sounds like a Dr Seuss story: “Oh I say, we write all day. Write, write, we write all night”. The really peculiar thing is that I’m not alone in this endeavour. There are many tech writers out there moonlighting as novelists every November. Don’t try to take a tech writer out to dinner in November, unless you’re willing to have them with their laptop out at the table … taptaptaptappitytap


I suspect writers are born, not made. That’s not to say that good writers are rare – I actually suspect they’re quite common. Many of them just never become writers by trade. They become all manner of other things – butchers, bakers, and candlestick makers – but that drive to write exists within them still. They might write a private journal, be secretly working on a novel, submit letters to the editor, write lengthy letters to their friends, submit stories to a website, or keep a blog. Or just wish they had the time.

All of this means that, as a writer, when you meet another in the street, you see that gleam in their eyes. There’s a passion, an excitement, a certain joie de vivre that they only truly experience when they are head down and writing. Have you ever wandered down the street, completely lost in thought trying to work out a plot twist, a character development, a particularly witty piece of dialogue, only to realise that you’re grinning your head off, looking like a loon? Then you’re a writer. And here’s my advice to you: don’t fight it.

I have a stack of manuscripts in my desk drawer. Will I ever submit them to a publisher? No. Will I ever give them the edits and re-writes they really need? No. Will I ever look at them again? Probably not. So why bother creating them in the first place? Because I need to write. There is a living thing inside me that is only satiated when there are words flowing through me. What happens to those words afterwards is entirely irrelevant. I think them up, I write them down, I make sure I like the way they sound, and then I let them live on without me.

So if you share my passion, why not join me in November? And if just one month a year of crazy writing isn’t nearly enough, why not apply for a job?

Cross-posted to On Writing, Tech, and Other Loquacities

Dejargonize your documentation

July 1, 2009

I recently attended a product demo given by an Apple representative. It was held in the local music store and covered the latest versions of GarageBand and Logic Pro.

During the demonstration the rep showed how GarageBand can adjust the timing of recorded audio tracks, such as a live drum take recorded using a microphone. Adjusting the timing of instrument tracks recorded using MIDI (Musical Instrument Digital Interface) is old hat: it’s known as “quantization”, and it snaps each recorded note onto the nearest point of a timing grid, such as the nearest sixteenth note. However, the ability to adjust the timing of an audio track is a novel development. He stressed that it could only be done to audio tracks recorded with GarageBand, and not to audio tracks recorded elsewhere and imported into GarageBand.

I asked him: “Does GarageBand store some kind of metadata for audio files that it records?” He replied: “No, it stores additional information along with the sound file.” Then he paused, and said: “…which is pretty much the definition of metadata.”

It was interesting to me to see the contrast in communication style. GarageBand is designed, as he explained, “for people who know nothing about making music”. As such, it avoids the jargon regularly employed by those familiar with music-making technology. Quantization becomes adjust timing. Metadata becomes additional information.

Sometimes a concept benefits from a precise technical term; sometimes that term just makes the material harder to understand for someone unfamiliar with it.

As someone who knows what quantization and metadata are, I had no problem understanding what he meant by adjusting timing and storing additional information. The reverse is not true: someone who can grok* adjusting timing and storing additional information may be left completely in the dark when the terms quantization and metadata are used. It’s not that the subject matter has changed or become any harder to understand; it’s that the use of unfamiliar terms raises the bar for the audience, and so reduces comprehensibility.

Glossaries can help, and so can really thinking about the choice of words: “Can I say this in a more direct, simple way, without using jargon?”

Something to keep in mind.

* to grok = to understand

Neologisms and Localization

June 25, 2009

One of my fellow writers tweets the gems she uncovers while editing docs, marking them with the hashtag #docfail. (I leave it as an exercise for readers to track her down and stalk her if they are so inclined).

A recent tweet read:

#docfail “Parameterized”. 😦 Sadly this is an official term.

“Parameterized” is actually not a neologism – neologisms being one of the subjects of this post. According to the Merriam-Webster dictionary entry for parameterize, it’s been part of the authoritative (according to Merriam-Webster) English lexicon since 1940.

A “neologism” – a term that itself entered the English language in 1803, again according to Merriam-Webster – is “a new word, usage, or expression”. New technologies give rise to new terms, obviously, so information technology is a major source of contemporary neologisms.

Since developers create new and innovative technologies and ways of doing things, they routinely coin new words to describe a novel method or application. An excessive proliferation of such coinages can make them begin to resemble the second definition that Merriam-Webster gives for “neologism”: “a meaningless word coined by a psychotic” (at least to the people tasked with translating them).

Neologisms pose particular challenges for technical documentation, especially when a document is translated (localized) into languages other than the language in which it was originally written (mostly, and for the purpose of illustration in this post, English).

Often a reader of a technical document can infer the meaning of a neologism from its context, because it is a compound of previously existing words, or because it is a novel transformation of a previously existing technical term.

“Parameterize” is a classic example of the “turn a noun into a verb” method of neologism generation that is favored by another goldmine of contemporary neologisms – business-speak. “Aspectize” and “Annotationed” are two examples of taking a specific technical definition of a common English noun, turning it into a verb, and then going postal with it.

While English readers can infer or deduce the meaning of these words, translating them into another language is problematic. To do it properly a technical translator will have to accomplish the following:

  1. Find out if this neologism already exists in the target language. This involves researching the subject area by reading related existing documentation in the target language (if there is any), or trawling through message boards and mailing lists to see if people are talking about this, and if so, what terms they are using.
  2. If a term does not exist, the technical translator must coin a term in the target language. To do this they have to understand both the intended meaning of the term and the already existing terms in the target language. Will the translated term be generated through a similar process of grammatical Frankensteinization in the target language, or will it be a modification of another already existing native term?

This process is repeated for every target language. When a technical document is localized into 26 different languages, as Red Hat Enterprise Linux documentation is, that adds up to a whole lot of friction – costing time and money.

A recent example I observed: last night I watched the opening of the 2007 movie “Transformers: The Beginning” subtitled in Spanish. The translators of the movie opted to use the term “La Matriz”, a term which carries the sense of “The Original (Source | Form)” (or, literally, “The Matrix”), as their translation for “The All Spark”. The “All Spark” is an esoteric item at the center of the battle between the Decepticons and the Autobots. Interestingly, while the “All Spark” is a neologism in English, its equivalent term in Spanish, “La Matriz”, is not. If the translators had translated it literally as “La Chispa de Todo” (“The Spark of Everything”), it would be an unfamiliar term in Spanish, when it doesn’t have to be. Sure, in English the name conveys that it’s an esoteric item, but conveying the sense of what it is in Spanish does not require the invention of a new term. Sometimes a neologism has no need to exist beyond satisfying a developer’s desire to underscore that they are doing something COMPLETELY NEW!!!!!111

Neologisms also come into use as a form of shorthand. As new technologies are constructed by aggregating previous technologies, the complex aggregate then becomes one of the building blocks for something else. To reduce complexity, new terms are coined to refer to these complex structures. A Central Processing Unit becomes a CPU. The whole CPU, hard disk, monitor, plus input devices becomes a computer. A bunch of computers becomes a cluster, and so on. Especially in the software world, which is all about the rapid aggregation of complex elements, these ever-more-encompassing terms appear frequently. In helping us deal with increasing complexity by encapsulating it in linguistic terms, neologisms serve an important purpose.

When editing, we try to reduce the vocabulary of technical documentation as far as possible, running it through the lexical equivalent of a mastering audio compressor. Whenever and wherever possible we replace unnecessary neologisms with “plain English” to clarify the meaning and assist translation.

Technical writing is not about creativity – it’s about communicating information as efficiently as possible.

We need to be wary of the human tendency to create a new priesthood of the elite that distinguishes itself by an incomprehensible dialect. Sure, it’s always cool to belong to a group that converses in a form of “leet-speak”, but if the goal is to be understood, then in documentation it’s important to relate the unknown to the known. When neologisms do appear in a document, they benefit from explanation, or from the inclusion of a glossary. Always think of the audience.

And developers – please think twice before coining yet another new word to go with your technological innovation. Is it really needed? Can you explain it in plain English? Does a new term reduce complexity more than it increases it?