This page outlines the design decisions taken during the development of these extensions for Unity. It also describes some of the assumptions the scripts make in order to produce something that is easy to drop in, regardless of project size or configuration, and that requires minimal setup from the developer/user.

As mentioned above, one of the key goals for this project is to produce something that is modular and easy for a developer to drop in, with little to no input required from them. Accessibility shouldn’t just mean making games and projects accessible; the tools for doing so should also be easy to implement, so that the process encourages people to factor in accessibility.


Raycasting

Raycasting is currently the only method we use for determining objects in a scene. It was chosen because it’s part of the standard Unity engine (as part of the physics system) and has little to no performance impact. Whilst not tested, I also believe that Unity is able to handle multiple rays, meaning that it’s a solution that could integrate easily into existing games that already use raycasting as part of their object collision/detection/physics systems.

Raycast source

Initially, I experimented with using rays fired from the camera in the scene; however, I found that this doesn’t quite work on some augmented reality platforms. This led me to create the Casting Cube component, which, when enabled and set up, follows/mirrors the direction of the main camera. From there, we cast the ray in a forward direction, using transform.forward.
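A minimal sketch of what a Casting Cube component like this might look like. The class and field names here are illustrative assumptions, not the actual script; the real component may differ.

```csharp
using UnityEngine;

// Hypothetical sketch: mirror the main camera's position and rotation each
// frame, then cast a single ray forward from the cube using transform.forward.
public class CastingCube : MonoBehaviour
{
    public float maxDistance = 20f; // assumed cast range, not from the source

    void Update()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Follow/mirror the main camera's orientation.
        transform.SetPositionAndRotation(cam.transform.position, cam.transform.rotation);

        // Sweep the scene like a cane: cast forward and report what we hit.
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit, maxDistance))
        {
            Debug.Log($"Ray hit: {hit.collider.name}");
        }
    }
}
```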

When describing this functionality, I liken it to a cane for a blind or visually impaired person, as it allows the user to sweep across the scene using their device, much like a cane would be used to sweep in the real world. When paired with the other scripts and functionality I’ve built, the user gets feedback, just as they would when a cane hits something in the real world.

We do assume that the camera is going to be paired up and configured to match the device’s rotation and movement. Since we’ve focused on augmented reality so far, this typically makes perfect sense and has been the case in all of our tests to date.

Raycast setup

In the setup script we again make a few assumptions, to allow things to work somewhat seamlessly regardless of the setup the developer has in place. We use tags to identify objects, and rely on some of Unity’s pre-existing tags. Primarily, we initially rely on the MainCamera tag to determine the camera in the scene, and place all of the components required for raycasting as children of it.

It is worth noting, though, that whilst we initially rely on the MainCamera tag, we then shift it over to a ScnCamera tag that gets set up and referenced throughout the scripts that have been created.
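The setup steps above could be sketched roughly as follows. This is an assumption about how the setup script behaves, not its actual code; it also assumes a "ScnCamera" tag has been defined in the project’s Tag Manager (Unity throws if you assign an undefined tag).

```csharp
using UnityEngine;

// Illustrative setup sketch: find the camera via the built-in MainCamera tag,
// re-tag it as ScnCamera, then parent the raycasting components to it.
public class AccessibilitySetup : MonoBehaviour
{
    void Awake()
    {
        GameObject cam = GameObject.FindWithTag("MainCamera");
        if (cam == null)
        {
            Debug.LogWarning("No MainCamera-tagged camera found; setup skipped.");
            return;
        }

        // Shift over to the ScnCamera tag used throughout the other scripts.
        cam.tag = "ScnCamera";

        // Place the components required for raycasting as children of the camera.
        var castingCube = new GameObject("CastingCube");
        castingCube.transform.SetParent(cam.transform, false);
    }
}
```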

It’s also worth noting that, as we’re using raycasting, any object that you want to be detectable by the end user requires some form of collider on it in order to be picked up by the raycasting script.
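A quick way to catch the missing-collider case during development is a startup check on each detectable object. This helper is a hypothetical illustration, not part of the extension:

```csharp
using UnityEngine;

// Hypothetical helper: warn at startup if an object intended to be
// detectable has no collider, since the raycast will pass straight through.
public class DetectableObject : MonoBehaviour
{
    void Awake()
    {
        if (GetComponent<Collider>() == null)
        {
            Debug.LogWarning($"{name} has no collider and will not be picked up by the raycasting script.");
        }
    }
}
```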

Script Modularity

Initially, the scripts and code for this project were all handled within a single script. There was no communication between scripts, and things became very messy and hard to debug or modify without fear of breaking something else. Below is an example of how this behaved:

digraph {
graph [label="Flow chart illustrating processing flow", labelloc=b]
   "Camera" -> "Processing Script";
   "Raycasting Data" -> "Processing Script";
   "Object Feedback" -> "Processing Script";
   "Processing Script" -> "Device's Text-to-Speech service"
}

Since then, however, we’ve moved away from this approach to something more modular, which allows information to be referenced and pulled from across the various scripts, and piped into whatever may require it. That looks like this:

digraph {
graph [label="Flow chart illustrating processing flow", labelloc=b]
   "Camera" -> "Rotation Parser";
   "Raycasting Data" -> "Raycasting Script";
   "Object Feedback" -> "Object Description Script";
   "Rotation Parser" -> "Event Handler";
   "Raycasting Script" -> "Event Handler";
   "Object Description Script" -> "Event Handler";
   "Event Handler" -> "Device's Text-to-Speech service"
}

This allows us to pull data from the various scripts easily and create bespoke functionality that relies only on certain functions, without having to invoke and work with the entire accessibility extension codebase. Having modular yet centralised points to pull from has been successful; however, I’m not sure how performant it would be in the long term or on larger projects. We’re continuously investigating alternatives such as ECS, or more event-driven systems.
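A rough sketch of the central Event Handler pattern in the second diagram. The class and event names are assumptions for illustration; the point is that the rotation parser, raycasting script, and object description script each talk to the handler rather than to one another:

```csharp
using System;
using UnityEngine;

// Illustrative central event handler: producer scripts call Raise() and the
// handler pipes everything onward (here stubbed as a log in place of the
// device's text-to-speech service).
public class AccessibilityEventHandler : MonoBehaviour
{
    // Subscribers don't need to know which script produced the message.
    public event Action<string> OnFeedback;

    public void Raise(string message) => OnFeedback?.Invoke(message);

    void Awake()
    {
        // Forward all feedback to the device TTS (stubbed as a log here).
        OnFeedback += msg => Debug.Log($"TTS: {msg}");
    }
}
```

With this shape, a bespoke feature can subscribe only to `OnFeedback` without pulling in the rest of the codebase.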

Object Descriptions

Rather than write some bespoke structure or format for object descriptions, I’ve settled on using long strings, with an Editor UI to accommodate them and wrap them to the size of the editor. This decision was made to keep things simple, and to save converting between types when passing data to the event handler, and then on to the Text-to-Speech engine on a user’s device.
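In stock Unity, the built-in `[TextArea]` attribute gives exactly this kind of wrapping, multi-line string field in the inspector. The component below is a hypothetical sketch of how an object description could be held as a plain long string; the class name is an assumption:

```csharp
using UnityEngine;

// Sketch: hold the object description as a plain long string, with
// [TextArea] so the inspector shows a wrapping multi-line text box.
public class ObjectDescription : MonoBehaviour
{
    [TextArea(3, 10)] // min/max visible lines in the inspector
    public string description;
}
```

Keeping the value as a plain string means it can be handed to the event handler and on to the TTS engine with no conversion.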

As a general rule of thumb for object descriptions, try to make them as descriptive as possible, yet succinct. It’s worth testing how your descriptions sound on a device with TalkBack or VoiceOver enabled, just to see if they’re too long or if they potentially get in the way of the user receiving other bits of information.

Priority Queue

As of 30th May 2019, SH has merged the Master and ExperimentalEventDelegation branches, making this the default behaviour from now on (until further tweaks and changes).

In the event-driven branch, there is configuration tied to each event that determines its priority. As a developer, you can remap and change the priority levels if you feel it makes sense to do so. Currently the priority levels are as follows:

  • Priority 1: Raycasting Feedback: This always takes priority, as the main means for the user to interact with the AR/VR/MR world.
  • Priority 2: Rotation Feedback: An additional bit of information that helps a blind or visually impaired user orient themselves; however, it is not as important as the raycast feedback.
  • Priority 3: Object Description: As this is feedback that requires a user action to trigger it, it’s currently the lowest priority.

It’s possible to add an unlimited number of priorities; there is a custom struct set up so that an int can be passed alongside a string through Unity’s messaging system. This int defines the priority of the event, and is passed on as such to the queuing system itself.
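The struct described above might look something like the following. The struct and field names are assumptions made for illustration, not the actual definitions in the codebase:

```csharp
using UnityEngine;

// Illustrative version of the custom struct: an int priority alongside a
// string, suitable for passing through Unity's messaging system as a single
// object argument.
[System.Serializable]
public struct PrioritisedMessage
{
    public int priority; // 1 = highest (raycast feedback)
    public string text;

    public PrioritisedMessage(int priority, string text)
    {
        this.priority = priority;
        this.text = text;
    }
}
```

A producer script could then pass one along with, for example, `target.SendMessage("Enqueue", new PrioritisedMessage(1, "Wall ahead"))`, leaving the queue to order messages by the int.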

It’s worth noting, though, that the priority queueing system currently adds a lot of latency. As of 12th Sept. ’19, SH has experimented with potential fixes for this, including using a fixed-size priority queue, but has not had much luck so far.