Operations Information Hub Design

Problem
The site Operations team was using over 30 tools to carry out daily activities, often requiring multiple tools to complete a single workflow. Additionally, the tools were siloed, and users had to manually transfer data between them, leading to inefficiencies that slowed down the operations team and took time away from essential well surveillance and optimization.
Solution
The Tool is a web-based application that centralizes information from multiple operations applications. A tailored interface is offered to each operations role so that only relevant information is shown, greatly reducing the effort spent searching for data. After the minimum viable product (MVP) was released in March 2023, the tool received positive feedback for its ease of use, cutting training time from days to hours compared to similar applications introduced previously. The MVP already centralized information from 10+ tools, replaced a previous in-house application, and is in the process of replacing more.
My Role
UX Design Lead
Team Member
Senior UX Designer
Senior Service Designer
UX Designer
Employer
ExxonMobil Information Technology
Duration
May 2021 - December 2022
Background
The project team decided to tackle the issue of too many operations applications by providing a single pane of glass, the Tool, for operations data to streamline processes and reduce unproductive time for the operations team. Because the operations team consists of multiple roles, each with different responsibilities, it was impossible to address them all at once. The team decided to focus the MVP on increasing operator work efficiency. The design took a mobile-first approach, specifically targeting tablets, as that was the device operators used. The idea of a central data hub was great, but it wasn’t easy to accomplish.

The project faced many obstacles. Some of the biggest included:
  • Business stakeholders were doubtful about the project
  • Not understanding the operators' mental model
  • Forgotten offline inefficiencies
Obstacle #1 – Business stakeholders were doubtful about the project
Step 1: Painted Early Vision with Storyboards
The business stakeholders for this project were leaders in the Operations team, so they also acted as subject matter experts (SMEs). It was critical for the team to have their support. However, many of them were doubtful about the project, questioning whether the project team would be able to deliver a usable product with useful features. To help build business stakeholders’ confidence in the project, my colleague and I decided to create storyboards to paint the vision of how a single pane of glass would help with operators’ day-to-day work. We brainstormed the storyboards together, and I did the illustrations. These illustrations were shown during the first framing session and helped align the project team’s vision.
Figure 1 Operations information hub vision storyboards.
In addition to the storyboards, which were high level, I also put together a rough mockup of what a central operations information hub might look like so the concept was more concrete and easier for business stakeholders to grasp.
Figure 2 First concept created to visualize a central operations information hub.
Step 2: Researched the Day-In-The-Life of Operators with Interviews
To collect requirements for the central hub, my colleague and I proposed and carried out a round of interviews with operators. With these interviews, we aimed to:
  1. Have a better understanding of field operators' current workflow and pain points.
  2. Capture a list of required data and their associated tools.
We took turns being the interviewer and the note taker for the seven interviews we conducted. Later, I spearheaded the qualitative data analysis and identified several opportunities, including but not limited to:
  1. Unnecessary tool and workflow complications. Several tools served the same purpose, leaving operators with too many tools to keep track of.
  2. Data isolation. Several workflows spanned multiple steps and multiple tools, but data was not shared between them, so users ended up manually transferring data from tool to tool.
  3. Unstable technical infrastructure. The internet connection on site was spotty and unstable, making it challenging to use digital tools.
  4. Poor software experience. Many of the third-party tools operators used were slow to load, had short log-in sessions, and involved tedious processes.
Figure 3 Colleague and I categorizing the different data mentioned during interviews.
To build empathy towards operators, these opportunities, together with a list of tools ranked by criticality, were later presented to the rest of the project team. Business stakeholders resonated strongly with the findings and were glad we were able to document and share them with the rest of the team.

We also created a simple day-in-the-life map to help paint a clear picture of a typical operator day, the tasks they must accomplish, and the tools they use.
Figure 4 Simple Day-in-the-Life map created by my colleague and me.
Step 3: Showed Empathy with Mockups and Prototypes
After getting a basic understanding of the data operators needed for their daily work, my colleague and I started developing mockups based on the requirements we had collected and used them to validate those requirements. The mockups also bridged the communication gap between the business stakeholders and the IT team by visualizing the business requirements the stakeholders had raised.
Figure 5 Pages from first end-to-end prototype created.
Outcome: Turned Skeptics into Supporters
Our mockups were a big success! Our research-based design showed the business stakeholders that we understood their problems and needs and were there to deliver a truly useful product. After seeing our first end-to-end prototype, they were excited and satisfied with the direction we were going, the features included, and the data points presented. One of them even said, “I haven’t been this happy since my first child was born,” because “I have been dreaming of this for the last 10 years.” We re-established the business’ confidence in IT, which allowed the project to keep their full support. In fact, they were excited to contribute. This paved the road for future UX activities.
Obstacle #2 – Not understanding operators' mental model
Step 1: Tested Information Architecture with Operators
Operators consume a lot of data daily, but how do they approach the data available? Do they search for data based on the task at hand or by data type? That was the next question we had to answer. My colleague and I had some guesses based on what we heard during the interviews, and we decided to test them with operators using mockups.
Round 1 Approaches
Figure 6 Three approaches to information architecture tested in first round of usability testing.
From Round 1, we learned that rather than looking for data by task or by type, operators expected to look for information by pad because all their tasks are pad centric. We created a second set of information architectures that grouped data by pad and put them to the test. The new set consisted of two approaches: one required operators to select their phase of the day first, and the other required selecting their entity (pad, well, etc.) first.
Round 2 Approaches
Figure 7 Two approaches to information architecture tested in second round of usability testing.
From this exercise, we learned a lot about how to better present the various operations data to operators. A couple of the highlights included:
  1. The ability to see critical data together in context outweighs issues with discoverability.
  2. Operators wanted to see data at the run level (a run is a collection of pads) as well, since each operator is assigned a specific run.
Step 2: Redefined “Single-Pane-of-Glass”
When we started creating mockups and laying out the end-to-end workflows, we explored various ways to integrate operations data and features from different source tools into a single application, from embedding with an iframe, to opening the source tool in a new tab, to replacing old tools outright.
Figure 8 Four different ways to reach source tool data through the Tool.
When we shared these approaches with the client, we were informed that the Tool would be read-only, with no writing back to source systems. With this constraint in mind, we moved forward with opening the source tool in a virtual window to allow operators to take actions such as creating a task. Though it wasn’t ideal, we chose it because it offered a more immersive experience than opening a new tab and more flexibility than an iframe embed.
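To make the trade-offs concrete, below is a minimal sketch (in TypeScript) of the three integration patterns we compared; the source-tool URL is an illustrative placeholder, not an actual system.

    // Illustrative placeholder URL, not a real source tool.
    const SOURCE_TOOL_URL = "https://source-tool.example.com/tasks/new";

    // 1. iframe embed: the source tool renders inside the hub page.
    //    Least flexible; layout and navigation stay under the source tool's control.
    function embedAsIframe(container: HTMLElement): void {
      const frame = document.createElement("iframe");
      frame.src = SOURCE_TOOL_URL;
      frame.style.width = "100%";
      frame.style.height = "600px";
      container.appendChild(frame);
    }

    // 2. New browser tab: simplest to build, but takes the operator out of the hub entirely.
    function openInNewTab(): void {
      window.open(SOURCE_TOOL_URL, "_blank");
    }

    // 3. "Virtual window": a smaller window floating over the hub, keeping the
    //    operator's context visible while they complete the action (e.g., creating a task).
    function openInVirtualWindow(): Window | null {
      return window.open(SOURCE_TOOL_URL, "sourceTool", "width=900,height=700");
    }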

To assess how acceptable a read-only “single pane of glass” would be for operators, I created a prototype to test the usability of a read-only tool with them.
Figure 9 Screenshots of prototype used for testing task creation in virtual window.
During testing, operators expressed a strong desire to complete work, such as creating a new job request and responding to tasks, directly in the Tool. This was because:
  1. Operators expected to do everything in a single application; having to launch the source tool was unexpected.
  2. Having to jump between tabs to complete a single task was exactly what they were already doing. It wasn’t solving the problem.
With the users’ responses, plus our explanation of how breaking the user flow across several tools would lead to inconsistent interaction and branding experiences and increase the risk of low adoption, we were able to convince the client to enable data write-back to source tools where APIs were available.
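As an illustration of what write-back could look like once this constraint was lifted, here is a hedged sketch of the hub posting a task response to a source tool’s API; the endpoint, payload shape, and field names are assumptions for illustration, not the actual integration.

    // Hypothetical shape of a task response created in the hub.
    interface TaskResponse {
      taskId: string;
      status: "accepted" | "completed" | "rejected";
      comment: string;
    }

    // Minimal sketch: write a task response back to a source tool, assuming it
    // exposes a REST API. The URL and payload are illustrative placeholders.
    async function writeBackTaskResponse(response: TaskResponse): Promise<void> {
      const result = await fetch(
        `https://source-tool.example.com/api/tasks/${response.taskId}/responses`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(response),
        }
      );
      if (!result.ok) {
        throw new Error(`Write-back failed with status ${result.status}`);
      }
    }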
Step 3: Restructured Information to Incorporate Action Features
With data read and write being the new approach for the Tool, I went back to ideation to incorporate more action features such as the ability to respond to tasks in the Tool. I also saw this as an opportunity to redefine some terminology.

For example, historically, a “task” was something that a specialist verbally asked an operator to do. Aside from “tasks,” many other action items were assigned to operators through various tools, each under a different name. I wanted to challenge whether we could combine them all under the “task” umbrella.

I went on to explore different ways of allowing operators to respond to tasks in the Tool.
Figure 10 Three approaches to allow task response in the Tool.
When we consulted operators and business stakeholders on these ideas, their responses were:
  1. They preferred a slide-out panel for task responses to maximize the real estate for listing tasks.
  2. They preferred distinguishing tasks by type rather than by source, because different types require different kinds of responses and generally vary in urgency.
Outcome: Convinced Project Team to Follow Operator Mental Model
Figure 11 Final design for task page.
Through rounds of iteration and testing, my colleague and I determined the fundamental data architecture for the Tool. We used usability testing results to show the project team the pros and cons of continuing with a collection of separate apps versus unifying them into a single, more workflow-driven interface. By making these decisions early in product planning, before development, we were able to:
  1. Save development time
  2. Provide users with a more consistent experience
  3. Reduce application and tool maintenance costs by replacing four of them with the Tool
The final design proved to match the operators’ mental model well. Training time on this application was significantly lower than for historical applications: most operators reported needing weeks to get used to previous applications, but with the Tool only a half-hour walkthrough was needed to learn how to operate it. Operators reported it being “very easy to use.”
Obstacle #3 – Forgotten offline inefficiencies
Step 1: Uncovered Ignored Pain Points with Contextual Inquiry
As the project progressed, its focus moved to a new workflow, and a new round of operator research was needed. The new workflow was well testing, which was more complicated in terms of the roles and tools involved, so I proposed a contextual inquiry. It was the perfect opportunity to validate previous research and collect data for future scope. In addition to myself, I was able to include a researcher on the trip as well.

During the two-week site visit, we followed operators and specialists, observing and interviewing them about their daily activities. We learned not only about their pain points around digital tools but also about the forgotten inefficiencies in their offline practices. One of the most surprising was the use of pen and paper to track well testing results.
  1. Redundant work. Well test results were digitally recorded, yet because they were not available to operators, operators had to record them again on paper.
  2. Prone to error. Operators could forget to write down a test result or record results against the wrong day.
  3. Inefficient. Operators need historical test results to assess test quality, but the results were recorded in a table, making it difficult to visualize the trend.
Step 2: Influenced Project Scope with Design
I saw great value in digitizing the well test record book and making historical well test results available. However, this was not in the original scope of supporting the well testing workflow. To show the value of expanding the scope, I created a mockup of a well test trend page and incorporated it into the prototype to show the project team how it would work.
Figure 12 First mockup of well test result trend.
Business stakeholders resonated with the issue of inaccessible well test results and fully supported adding the test trend to the release 1 scope. They also suggested some changes to make the trend more powerful:
  1. There were various types of tests; it would be valuable to know the test type for each result.
  2. Other data points would be helpful in understanding how the well is producing.
  3. A table view of the test data would make specific numbers easier to read.
Step 3: Refined the Experience with Design Iterations
Based on the feedback I received, I worked with my colleague to iterate on the design while continuously seeking feedback from stakeholders and users. We arrived at a final design that addressed the concerns raised during the design process.
Figure 13 New design for well test trend iterated based on feedback.
  1. Differentiated test types using color-blind friendly colors.
  2. Added data points relevant to understanding well test results, with the flexibility to turn them on and off to avoid an overwhelming visualization.
  3. Provided the option to view the same data in a table so users can quickly see the specific test numbers in one go.
  4. Differentiated acceptance and rejection with shading so that shape could be used to differentiate data points.
  5. Provided tooltips to help users read specific numbers from the graph.
Outcome: Research Expanded MVP Scope
My colleague and I were able to identify pain points that operators had grown accustomed to, demonstrate their impact, and influence the product scope.

During testing, operators praised having historical well test results at their fingertips through the Tool and being able to visualize the history graphically, making it easier to spot problematic wells and lowering the risk of making the wrong test acceptance decisions. The well test visualization has been officially included in scope and is on the roadmap for development.