Replies: 8 comments 6 replies
-
Very nicely laid out! One suggestion is to consider alternative click methods, like a switch click combined with head tracking. In my contracting experience, some users who rely on a head tracker are able to use a switch independently of it.
-
Also, is this the right place to talk about two-step access, in which the user clicks an area of the keyboard that then magnifies, allowing the user to select a key from the magnified area?
-
I cleaned it up some. It appeared to be two copies of my original smashed together, and the markdown is strange: those paragraphs should be filled, but they simply break at every line break in the input. It seems like every GitHub tool uses a different version of markdown.
-
One thing that jumps out to me is how our use case differs from the use case of the WAI and its conventions. Particularly, in the latter case, discoverability is everything, and you rightly point out that the default behavior should correspond with those conventions. With that in mind, once a user interface is designed, it becomes a much more predictable piece of content than websites generally are and, if possible, should be afforded some means by which one can create shortcuts to navigate around faster. It looks like you mention this, but it is a good representation of a larger pattern of thinking about the design of these interfaces: one that not only conforms to the standards of the environment it inhabits (the browser), but also extends the logic, given that there is less variance in our designs compared to the entire gamut of websites that can exist. Will write more soon.
-
Yes! Well put. I think we should, at least, support actions that jump around in the scan groups. I think that would allow designers to implement shortcuts (good term) for their users. The contrast with the full web is apt also. Good thinking.
-
Access scenario: Eye tracker controlling the pointer location
This should be like VoiceOver on iOS.
-
Scanned Access Follow-up
Additional parameters to consider:
Notes about scan groups and patterns:
-
Since the eye tracker is a separate piece of hardware and software, it's going to have to send a separate message for blink recognition.
On Wed, Mar 2, 2022 at 9:26 AM Gary Bishop wrote:
> 1. The eye tracker itself would have to tell us about blink. I'm not proposing that we do the eye tracking. But if the device gives us an event it'll be easy to use.
-
Raw signals available
The signals available from the browser include:
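A rough sketch of collecting those raw signals (the event names are standard DOM events; the RawSignal wrapper below is an assumption, not part of the proposal):

```typescript
// Sketch: funnel raw browser input events into one stream of signals.
type RawSignal = { kind: string; time: number; event: Event };

function watchRawSignals(target: EventTarget, report: (signal: RawSignal) => void) {
  const kinds = [
    "keydown", "keyup",
    "pointerdown", "pointerup", "pointermove",
    "touchstart", "touchend",
  ];
  for (const kind of kinds) {
    target.addEventListener(kind, (event) =>
      report({ kind, time: performance.now(), event })
    );
  }
}
```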
Conditioning
We need to condition the raw inputs to prevent accidental triggers.
Conditioners could include:
- … inactive.
- … like hysteresis or stickiness for eye-gaze users.
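For example, a hold-time conditioner might ignore taps shorter than a threshold (a sketch; the helper name and threshold are assumptions):

```typescript
// Sketch: only count a switch press that is held for at least holdMs,
// filtering out accidental taps.
function holdToActivate(target: EventTarget, holdMs: number, onActivate: () => void) {
  let downAt: number | null = null;
  target.addEventListener("pointerdown", () => {
    downAt = performance.now();
  });
  target.addEventListener("pointerup", () => {
    if (downAt !== null && performance.now() - downAt >= holdMs) {
      onActivate(); // held long enough to be intentional
    }
    downAt = null;
  });
}
```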
Scanned access
The most interesting part of scanned access is the access pattern.
I'd like for researchers to be able to define their own new and general patterns
but I don't want those who only want something standard to be confronted with
too much complexity.
We could use scan groups that define the order of visitation. A scan group
might be defined as a nested list of buttons or other scan groups. Each group
has a single parent and one or more children. You start at the top of the
hierarchy choosing among elements at the top level. If you select a nested
group, you then scan it.
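A sketch of how such a hierarchy might be represented (the type and field names are assumptions):

```typescript
// Sketch: a scan group is a node whose children are buttons or further groups.
interface ScanButton {
  kind: "button";
  label: string;
}

interface ScanGroup {
  kind: "group";
  name: string;
  children: (ScanButton | ScanGroup)[];
}

// Example: a top level containing a tab control group and a grid group.
const topLevel: ScanGroup = {
  kind: "group",
  name: "top",
  children: [
    { kind: "group", name: "tab control", children: [{ kind: "button", label: "Quick" }] },
    { kind: "group", name: "Quick grid", children: [{ kind: "button", label: "hello" }] },
  ],
};
```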
A designer should be free to explicitly list every item if they wish but
shouldn't be required to. Perhaps we could have common patterns represented as
functions with a few parameters.
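For instance, a standard row-column pattern could be one such function (a sketch reusing the ScanGroup shape from above; the name and parameters are assumptions):

```typescript
// Sketch: build the common row-column pattern from a grid of buttons,
// so a designer gets a standard pattern without listing every item.
function rowColumn(grid: ScanButton[][], groupName = "grid"): ScanGroup {
  return {
    kind: "group",
    name: groupName,
    children: grid.map((row, i): ScanGroup => ({
      kind: "group",
      name: `${groupName} row ${i + 1}`,
      children: row,
    })),
  };
}
```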
On complicated user interfaces with tab controls, radio buttons and such, I
think we should adapt the strategies recommended by the Web Accessibility Initiative. They assume multiple user
inputs (tab, enter, arrow keys) but I think we can adapt their guidance. They
suggest scanning components first in an order specified by the designer, then
elements in the component. In the Contact simulation you might visit the tab
control as a group, followed by the Quick grid tone buttons as a group, then
the Quick grid.
I'm thinking that when a group includes only one component, that component is
immediately selected. Another example would be a grid that contains only one row,
or a row that contains only one active column; maybe you start with the individual
items instead of selecting the single row first. In general, if a group has only
one member, an optimization would be to go directly to that member.
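A sketch of that optimization, again using the ScanGroup shape from above:

```typescript
// Sketch: if a group has exactly one child, skip the group and go straight
// to that child, so a one-row grid scans its items directly.
function collapseSingletons(node: ScanGroup | ScanButton): ScanGroup | ScanButton {
  if (node.kind === "button") return node;
  const children = node.children.map(collapseSingletons);
  return children.length === 1 ? children[0] : { ...node, children };
}
```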
You might specify the scanning of a grid like this, with increasing indentation
indicating the user choosing YES.
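A hypothetical trace of that kind of specification (the group and item names are invented):

```
grid                  (YES: scan inside the grid)
  row 1               (NO)
  row 2               (YES: scan inside row 2)
    item 2-1          (NO)
    item 2-2          (YES: activate)
```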
You change the order of components by naming them.
If you don't specify the children, you get the default order, so you only have to
name the parts you want to change.
Perhaps you can specify a default for all components of a given type. Or maybe
you can name patterns and then apply them to components.
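A sketch of what a designer-facing spec along those lines might look like (the format and names are assumptions):

```typescript
// Sketch: name only the components you want to reorder or restyle;
// anything unnamed keeps the default order. Patterns are applied by name.
interface ScanSpec {
  order?: string[];                   // explicit component order; omit for the default
  patterns?: Record<string, string>;  // component name -> named pattern
}

const contactSpec: ScanSpec = {
  order: ["tab control", "tone buttons", "Quick grid"],
  patterns: { "Quick grid": "row-column" },
};
```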
1-switch parameters
2-switch parameters
more switches
You should be able to bind events to conditioned inputs to do things like
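A sketch of such a binding (the bind/dispatch helpers are assumptions; the example action, jumping back in the scan groups, is borrowed from the discussion above):

```typescript
// Sketch: map a conditioned input signal to an action.
type Action = () => void;
const bindings = new Map<string, Action>();

function bind(signal: string, action: Action) {
  bindings.set(signal, action);
}

function dispatch(signal: string) {
  bindings.get(signal)?.(); // run the bound action, if any
}

// Example: a long hold on switch 1 jumps back to the top-level scan group.
bind("switch1 long-hold", () => console.log("return to top-level group"));
dispatch("switch1 long-hold");
```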
Highlighting groups
Designers should be able to specify how groups are to be highlighted to make it
clear what is currently being chosen. This could be done with explicit CSS with
a few examples already defined. Highlights might include:
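Whatever the specific styles, the CSS-class mechanism might look like this (a sketch; the class name and example styles are assumptions):

```typescript
// Sketch: mark the group currently being scanned with a CSS class
// that a designer can restyle however they like.
function highlightGroup(element: HTMLElement, on: boolean) {
  element.classList.toggle("scan-highlight", on);
}

// A default style a designer could override:
const style = document.createElement("style");
style.textContent = ".scan-highlight { outline: 4px solid orange; background: #fff8e1; }";
document.head.appendChild(style);
```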
Eye tracker access
I'm assuming that the eye tracker looks to us like a mouse with no buttons. That
is, you can move the pointer but you can't click. If they are using an eye-gaze
device, they likely should use its ability to click on dwell.
I think we can easily support adjustable gaps between buttons.
Buttons might grow to fill the gap when the pointer is hovered over them. This
could provide some hysteresis for selections, making it easy to avoid rapidly
cycling between choices.
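A sketch of the grow-on-hover idea (the scale factor and transition are assumptions):

```typescript
// Sketch: enlarge a button while the pointer hovers over it so the pointer
// must travel farther to leave, giving some hysteresis against rapid cycling.
function addHoverGrowth(button: HTMLElement, scale = 1.15) {
  button.style.transition = "transform 150ms";
  button.addEventListener("pointerenter", () => {
    button.style.transform = `scale(${scale})`;
  });
  button.addEventListener("pointerleave", () => {
    button.style.transform = "";
  });
}
```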
It should be possible to support effects like filling the button from the
outside-in to draw the eye to the middle during the hover interval.
Eye tracker parameters
Touch access
We should be able to highlight (or announce?) choices when the user touches a
button but only activate it when they release without touching another.
We should also be able to emulate iOS VoiceOver; you find the choice you want,
release, and then tap again to activate.
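A sketch of the touch-then-release behavior (the announce/activate callbacks are assumptions):

```typescript
// Sketch: announce a button on touch, but only activate it if the touch is
// released while still over that button; sliding off cancels.
function touchToActivate(
  button: HTMLElement,
  announce: (b: HTMLElement) => void,
  activate: (b: HTMLElement) => void
) {
  button.addEventListener("touchstart", () => announce(button));
  button.addEventListener("touchend", (e) => {
    const touch = e.changedTouches[0];
    const under = document.elementFromPoint(touch.clientX, touch.clientY);
    if (under && button.contains(under)) activate(button);
  });
}
```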