For screen reader scripters: thoughts on script writing procedures, object hierarchy, APIs and some script writing stories


Joseph Lee

Ladies and gentlemen, especially scripters of other screen readers,

 

Continuing from our discussion regarding script documentation, I’d like to invite you to spend a day in the life of an NVDA app module writer and screen reader code contributor. Some of you may see a duplicate of this (especially those subscribed to the Groups.IO list), but as this is a development list, it will be slightly more technical than what I wrote to users about how NVDA features are born. To help you understand what’s going on, I’ll use two real-life examples: GoldWave and StationPlaylist Studio, the former my own brainchild and the latter an add-on I currently maintain:

 

It was October 2013. I was a 23-year-old computer science student at UC Riverside, and one night I sat down to prepare an audio presentation about NVDA add-ons for an online community. Around midnight, I received an email from a mailing list where someone asked if there was an add-on for the GoldWave audio editor; someone replied that there was none, but that JAWS scripts were available (Jim Grimsby, Jr.’s scripts, in fact). I jumped at the chance to write one, as I was familiar with the program and had a rough idea of how it worked. I also wanted to experiment with app modules, since the only thing I knew off the top of my head was how to write global plugins, and back then the only thing I could put on my resume was that I was the author of a popular context-sensitive help add-on called Control Usage Assistant.

 

I first contacted Jim (the author of the GoldWave JAWS scripts) to ask if he wanted to write an NVDA add-on for GoldWave. No reply. So I decided to write the add-on on my own, and in the span of two weeks it was complete and functional.

 

I started by looking at the documentation for Jim’s JAWS scripts and reading his source code. I then installed GoldWave on my computer and looked at how the app was laid out via the object hierarchy (parent-child and sibling relationships amongst GUI controls). I used a combination of object navigation commands in NVDA (NVDA+Numpad arrow keys) along with the Python Console (Control+NVDA+Z), and typed the following:

 

import api

api.getForegroundObject().windowClassName

 

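For readers who haven't tried this, here is a minimal sketch of the kind of object-hierarchy walking the Python Console makes possible; the variable name fg is mine, and the exact output depends on the application:

import api

fg = api.getForegroundObject()
fg.windowClassName       # window class of the foreground window
fg.childCount            # how many immediate children it has
fg.firstChild.role       # role of the first child (pane, dialog, status bar and so on)
fg.firstChild.next.name  # name of that child's next sibling
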
It turns out GoldWave was written in Delphi, and thankfully it was accessible to some extent. I then thought about the commands Jim provided as part of his JAWS scripts and what JAWS announced when producers dropped start and finish markers, cues and what not. Although I could have used screen review emulation, I decided to use object navigation emulation (someObj.parent, someObj.firstChild.firstChild and what not), much like the exploration sketched above. But this was where the problems began:

 

  • GoldWave has two status bars, so it was important that I locate the correct one. I solved this by caching the index of the status bar I needed, as in the following code:

 

import api
import controlTypes

foreground = api.getForegroundObject()
location = 0
for element in foreground.children:
    if element.role == controlTypes.ROLE_STATUSBAR:
        # Remember where the first status bar lives and stop looking.
        someIndex = location
        break
    location += 1

 

Note: I’m using spaces instead of tabs.

 

Basically, I would loop through the controls in the foreground window until I located the first status bar, which was the one I was looking for. Other functions could then use this cached location to obtain the various status bar components.
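
As a hypothetical illustration (the helper name and child index are mine, not the add-on's actual code), such a function might look like this:

import api

def getStatusBarComponent(childIndex):
    # location is the status bar index cached by the loop above.
    statusBar = api.getForegroundObject().getChild(location)
    return statusBar.getChild(childIndex).firstChild.name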

 

Back to the commands at hand: with the location cached, I started defining commands for the GoldWave app module. But there was another problem to solve:

 

  • Should I let users issue audio editing commands from anywhere, or only when the sound window is focused? I reasoned that it would be better to go with the latter, so I defined an overlay class for the sound window as follows:

 

import api
import ui
from NVDAObjects.IAccessible import IAccessible
# As to how I found this out: I opened an audio file in GoldWave and pressed NVDA+F1.
# One of the things I looked for was the kind of Python object and the window class name.


class SoundWindow(IAccessible):
    # Scripts, events and what not.

    def script_dropStartMarker(self, gesture):
        gesture.send()
        # location is the status bar index cached earlier.
        status = api.getForegroundObject().getChild(location)
        # The ui.message function lets NVDA speak and braille a string.
        ui.message(status.getChild(1).firstChild.name)
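
One detail the snippet above does not show is how a script gets bound to a keyboard command. In NVDA this is typically done with a __gestures dictionary on the same class; the key below is a made-up example, not the add-on's actual binding:

    # Inside the SoundWindow class; the gesture shown here is a made-up example.
    __gestures = {
        "kb:control+shift+d": "dropStartMarker",
    }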

 

What you just saw is how you define scripts (or command responders) in NVDA. One more thing:

 

import appModuleHandler


class AppModule(appModuleHandler.AppModule):

    def chooseNVDAObjectOverlayClasses(self, obj, clsList):
        if obj.windowClassName == "TWaveForm":
            clsList.insert(0, SoundWindow)

 

You’ve just read the most fundamental piece of code you’ll ever read (and write) in great app modules: yes, NVDA is object-oriented, and every GUI element on screen is represented as an object. Basically, with the above code, I told NVDA to perform such and such command if and only if the user is focused on the sound window. No screen scraping, period.
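
For completeness, a note on where this code lives: an app module is a Python file named after the program's executable and placed in the add-on's appModules folder, so NVDA loads it whenever that program has focus. A rough layout sketch (manifest details omitted):

manifest.ini
appModules/
    goldwave.py    # the SoundWindow and AppModule classes shown above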

 

This went on for days, and the end result was an app module that is used by some GoldWave users who use NVDA to produce audio. The code repository for this app module can be found at:

https://github.com/josephsl/goldwave

 

As for StationPlaylist Studio, it took me a month to transform what was nothing more than event handling code into a version suitable for use by radio broadcasters. This add-on is a combination of app modules and global plugins that uses APIs provided by Studio along with object navigation and add-on configuration management, and it is one of only a handful of add-ons that ships with the ability to check for add-on updates. The source code for this add-on can be found at:

https://github.com/josephsl/stationplaylist
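
To give a flavor of what APIs provided by an application mean in practice, here is a hypothetical sketch of the general technique: the add-on locates the application's API window and queries it with window messages via ctypes. The window class name, message number and parameters below are placeholders of mine, not Studio's real values; the repository above has the actual implementation.

import ctypes

user32 = ctypes.windll.user32

def queryAppAPI(command):
    # Placeholder window class name; a real add-on would use the one the app exposes.
    apiWindow = user32.FindWindowW("SomeAppAPIWindow", None)
    if not apiWindow:
        return None
    # 0x400 is WM_USER; the actual message and parameters depend on the application.
    return user32.SendMessageW(apiWindow, 0x400, 0, command)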

 

An add-on internals article that describes how this add-on works, along with some useful concepts and NVDA API functions, can be found at:

https://github.com/josephsl/stationplaylist/wiki/spladdoninternals

 

P.S. I don’t expect folks to write a detailed (add-on internals) article like the one I wrote, but this is the kind of documentation I imagine many of you have been looking for for years.

 

At this time I’d like to challenge current NVDA add-on writers to tell others about their stories with add-ons, tips and tricks and so on, and to invite scripters of other screen readers to share their experiences and ask questions (and thank you all for your feedback; I’m impressed with how Project Contact Lenses is going).

 

Cheers,

Joseph

