Working with Intel® UK and Microsoft, we created VisualNav for Visual Studio, an extension that makes coding fun and accessible for disabled developers and young pupils.
The WIMP (windows, icons, menus and pointers) interface paradigm dominates modern computing, but disabled users may find it challenging to use. UCL's MotionInput enables voice and gesture control of a computer with only a webcam and microphone. VisualNav for Visual Studio provides a touchless interface for writing code, designed to be easy for newcomers to coding while still supporting advanced users.
The project adopts a visual coding paradigm, in which blocks of code are pieced together like Lego bricks; this will be familiar to children coming from a background in Microsoft MakeCode. We use CefSharp, an embedded web browser based on Chromium, to integrate Blockly, the JavaScript library that MakeCode is built upon, and render this panel directly within Visual Studio.
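To give a flavour of how a Blockly panel is configured, here is a minimal sketch of a toolbox definition in Blockly's standard JSON format. The category and block names are illustrative, not VisualNav's actual toolbox, which lives inside the extension.

```javascript
// A hypothetical Blockly toolbox definition of the kind a host page
// embedded via CefSharp might load. Block types shown are standard
// Blockly blocks; the categories are placeholders.
const toolbox = {
  kind: "categoryToolbox",
  contents: [
    {
      kind: "category",
      name: "Logic",
      contents: [
        { kind: "block", type: "controls_if" },   // if/else block
        { kind: "block", type: "logic_compare" }, // comparison block
      ],
    },
    {
      kind: "category",
      name: "Loops",
      contents: [{ kind: "block", type: "controls_repeat_ext" }],
    },
  ],
};
```

In a browser context, the host page would pass this object to `Blockly.inject` to render the workspace inside the embedded panel.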
Users can select a code block from an accessible command palette with minimal motor movement, using a radial dial component. To confirm the correct block is selected, a preview gives a description and visualisation. Finally, a building workspace lets users drag and assemble the blocks of code, which are then compiled into code. Throughout the process, voice commands can be used as accelerators to trigger shortcuts.
The project supports 9 kinds of blocks and contains 65 block elements, generating JavaScript, Python, PHP, Lua, Dart, and C#. C# adds an extra feature, 'custom blocks', which allows library functions to be added to the radial menu as blocks, enabling advanced developers to build more complex applications.
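To illustrate what exposing a library function as a custom block can look like, here is a sketch in Blockly's standard JSON block-definition format. The block id, label, and colour are hypothetical; the actual custom blocks VisualNav generates may differ.

```javascript
// Hypothetical Blockly JSON definition wrapping a C# library call
// (an imagined Console.WriteLine block) so it can appear in a
// radial menu as a draggable block.
const writeLineBlock = {
  type: "console_write_line",         // hypothetical block id
  message0: "Console.WriteLine %1",   // label with one input slot
  args0: [{ type: "input_value", name: "TEXT" }],
  previousStatement: null,            // can follow another statement
  nextStatement: null,                // another statement can follow it
  colour: 230,
};
```

With Blockly loaded, such a definition would be registered via `Blockly.defineBlocksWithJsonArray`, and a code generator would emit the matching C# call when the workspace is compiled.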
It is now possible to write code with only facial movement and speech commands.
The radial menu interface, preview window, and building window, enabling efficient code block creation:
Although fully standalone, the application is best used with MotionInput V3, ideally with nose-based navigation plus speech; it is also fully compatible with eye-gaze and multitouch, available from https://touchlesscomputing.org/.
During the pandemic, academics and students from UCL developed UCL MotionInput 3. This technology uses computer vision with a regular webcam to control a computer like a mouse and keyboard. In addition, it uses natural language processing to control existing applications and shortcuts. This project uses that technology to enable young and mature developers with accessibility needs to design, write, and test programs in Visual Studio. For example, users can move their hands or face to move the mouse, or say "click" and "double click" out loud to access the mouse's functionality.
- Download: https://marketplace.visualstudio.com/items?itemName=UCLFacialNavforVisualStudio.VisualNav
- Website: https://hrussellzfac023.github.io/VisualNav/
- Examples: https://github.com/HRussellZFAC023/VisualNavExamples
- MI3 Facial Navigation v3.04 (Special Edition for VS accessibility): https://touchlesscomputing.org/
- Work published in Microsoft blog: https://techcommunity.microsoft.com/t5/educator-developer-blog/ucl-amp-intel-visualnav-v2-facial-navigation-for-visual-studio/ba-p/3616447
To get started with VisualNav, there are two installation methods:
1. After closing Visual Studio, go to the Releases section of the repository, then double-click the .vsix file and run the installer.
2. Open Visual Studio, go to "Extensions", and search for VisualNav.
Prerequisites:
- Visual Studio installed.
- Microsoft .NET 4.5.2 or greater.
- Visual C++ Runtime 2019 or greater.
To set up, navigate to the "Tools" menu and click "Open all windows".
Example of navigating the command palette:
Example of creating blocks from the command palette: