Navigation: more feedback

Our prior tests centered on notebook structure and the navigation it supports. Because we addressed some of that feedback between test sessions, and because exploring content types first requires navigating to that content, this round surfaced further feedback on navigation.

Two visually identical notebooks with different underlying structures were used during these tests. Feedback is sorted by notebook.

Labeled cells: notebook structure 1

This notebook structure directly addressed some of the feedback from previous rounds of tests: headings automatically became links, execution numbers became their own grouping rather than a portion of the tags encapsulating all of a cell's content, and div tags were removed wherever possible (a rough sketch of this markup follows the list below). Participants reported the following.

  • This version of the notebook provided some improvement in navigation for most participants. Making headings into links allowed non-screen reader users to take advantage of them: the headings joined the page's interactive elements and became reachable with the tab key, which offered screen reader users and keyboard navigators an additional way to explore the notebook structure.
  • Requests for a table of contents continued.
  • This version received less feedback overall; participants noted it as relatively similar to the notebooks from previous tests.
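
As a rough illustration of structure 1, the sketch below shows the kind of markup described above. The element names, IDs, and cell contents are our own invention; we are not reproducing the exact markup of the test notebooks.

    <!-- A heading rendered as a link, which places it in the tab order -->
    <h2 id="loading-data"><a href="#loading-data">Loading data</a></h2>

    <!-- The execution number grouped on its own rather than inside the tags
         wrapping the cell's content, with div wrappers removed where possible -->
    <span class="execution-count">[4]:</span>
    <pre><code>df = pd.read_csv("data.csv")</code></pre>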

Tabable cells: notebook structure 2

This notebook structure labeled each cell as an article and leveraged other HTML content categories to provide a more standard HTML structure for assistive tech to hook into (a sketch follows the list below). It also adjusted low-contrast areas. Participants reported the following.

  • This version of the notebook received consistently neutral to unfavorable feedback.
  • Non-screen reader users responded neutrally and were unimpacted.
    • Participants using their vision did give positive feedback on increased color contrast in the execution number and code comments.
  • Screen reader users were most impacted by these changes and gave negative feedback.
    • Participants using a screen reader generally needed to be prompted to discover the notebook’s structural changes; the navigation methods those changes supported were not the ways these participants wanted to interact with the notebook.
    • Participants successfully used the tab key to navigate through cells (to “tab through cells”). However, populating the tab list with every cell on top of all interactive areas in the notebook created an “overhead of tabs.” One participant described it as “you don’t know where you are going or what you are looking for. Could be five tabs. Could be fifty” before they can complete the task. This was not considered a positive change.
    • To clarify: the tab key is, by default, a coarse navigation tool that lets users jump to and from areas where they can perform some kind of interaction. Even without this change in notebook structure, the tab list already included all inline links, headings (because they are also links), the video play button, and all browser-level navigation. Making cells tabable adds another type of content to filter through and, in this notebook of more than fifty cells, adds another fifty-plus items to the list.
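
For illustration, the sketch below shows one plausible way to get the behavior described above: setting tabindex="0" on an element is the standard way to place it in the keyboard tab order. The attribute values and cell contents are our own invention, not the exact markup used in testing.

    <!-- Each cell marked up as an article and made focusable, which adds
         every cell to the tab order alongside the notebook's links,
         headings, and video controls -->
    <article tabindex="0" aria-label="Code cell 12">
      <span class="execution-count">[12]:</span>
      <pre><code>fig.show()</code></pre>
    </article>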

Additional notes

We found larger UX patterns worth noting. They are listed in no particular order:

  • There were issues searching and navigating by content type (i.e., cell, image, video, and so on), but there was a high rate of eventual success: most tasks were completed by most participants.
  • A common sentiment in tests: “annoying but normal.” Participants expressing this sentiment would first encounter an obstacle they knew how to overcome. They would then report that this obstacle was an everyday occurrence for them across the internet and that the notebook was behaving within the current standard for that user experience. Unfortunately, this was one of the most positive types of feedback we received. It tells us we have a lot of room to grow in making enjoyable and equitable user experiences in both Jupyter and in wider digital spaces.
  • Unlike in the first set of tests for navigation, participants were more likely to miss information or not be able to access it at all. Interestingly, very few participants expressed that they noticed they were missing information; most remained confident they had access to the whole notebook. The few who did observe that they could not access information knew because they found familiar failures—especially images lacking descriptions.
  • Many issues and fixes (requested by participants or found in review) are what might be considered accessibility “basics.” Alt text/image descriptions, labeling, and contrast issues came up frequently. These are very fixable issues, and they need to be addressed both in the interface and when authoring individual notebook files (a minimal authoring example follows this list).
  • Participants who are more comfortable and/or familiar with Jupyter notebooks expressed more interest in working with the source notebook when encountering obstacles or when trying to find information that wasn’t immediately findable. Filtering through the non-editable version of the notebook was comparatively not worth the effort.
  • Text-based content regularly gave participants fewer issues when compared to non-text content like images or videos. While no content type was without issue, inaccessible images and videos were more likely to block participants completely.
  • Participants using screen magnifiers are especially impacted by the lack of maximum width for notebooks in this form. Because magnifying limits how much information fits on a screen and horizontal scrolling is typically more awkward than vertical scrolling, the full-window line length of notebook content came up as a serious pain point and contributor to fatigue. It also increased the risk of screen magnifier users missing information, especially on the right-hand side (for a notebook in English, a left-to-right language).
  • Some participants would complete, or describe completing, tasks using an ability that fatigued or even hurt them. For example, participants with low vision strained to use their vision to complete a task that their assistive tech could not work with (due to poor infrastructure or tagging on Jupyter’s part). This is yet another way that inaccessibility harms people who are determined to work in fields that rely heavily on notebooks.
  • Jupyter notebooks often bring together many types of content, and this content can bring its own accessibility issues with it. Notebooks have the capacity to inherit accessibility problems from everything that makes them up—from Jupyter-maintained tools to any other package. For these tests, we ran into issues like lack of image description support for plotting packages, lack of labeling in the embedded video player and its buttons, and low contrast syntax highlighting themes. On the Jupyter side, we can also make choices about what packages to support or how we handle these inaccessible defaults. Notebooks can surface inaccessibility from anywhere.
  • Authors will continue to have a large amount of power to determine the accessibility of an individual document. This is part of why we are drafting authoring recommendations.
  • Participants searched for familiarity to anchor their experience. What was familiar varied by participant: their field of expertise, the accessibility accommodations they used, the other software they knew, and their experience with Jupyter notebooks specifically. Examples include:
    • Participants who are familiar with Jupyter notebooks would more often talk about cells and try to find ways to distinguish between them. They were also the only participants who called out insufficient divisions and insufficient information for finding cells.
    • Participants using screen readers were more likely to expect content headings to be more robust. These participants were also more likely to explain their mental model of cells (or other divisions) in the notebook in terms of headings.
    • Participants used to working with editable versions of notebooks or other source-code forms were more likely to compare behaviors to an editable document and to ask for those experiences to carry over. For example, some participants wanted to navigate content by editable versus non-editable areas to tell the difference between cell inputs and outputs.
    • Error and warning outputs, the (often unexpected) cell outputs that report when something in the executed code is not working as expected, were only findable because some participants knew to expect one. Many participants missed the text-only transition to an error message in the test notebook because it had no other indicators. Because it was a common error, some participants recognized it immediately and without host support, but they reported that this was only because they had heard that exact sentence many times before.
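
As a minimal example of the authoring-side “basics” mentioned above, an image added to a Markdown cell can carry its description in an alt attribute. The file name and description below are invented for illustration:

    <!-- In a Markdown cell: the text description travels with the image -->
    <img src="sales_by_region.png"
         alt="Bar chart of sales by region; the West region leads at 1.2 million">

Markdown’s own image syntax offers the same thing: the text in the square brackets of ![description](image.png) is used as the alt text.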