UPA Boston’s UX Conference

I’m indulging my interest in usability and positive user experiences today by attending UPA Boston’s conference. I don’t talk much in this space about usability and user experience, but I’ve handled elements of them throughout my career. I was first introduced to the concepts in library school, when we discussed designing spaces and signage for library visitors, and again, of course, during the various Web design courses I took. Good design and usability matter to everything. Understanding how people use things and what they want from systems is essential, especially for online services people from around the world might use.

At any rate, notes from the conference will be in this entry.

Boldly Going Where No UX Has Gone Before with Jeremy Kriegel provided an overview of what Kriegel has learned during his career about working with other people, getting buy-in, and convincing people to support your projects. Sometimes, clearly communicating what the negative results might be is a good way to convince people of the importance of what you’re doing. Some people need to know the negative consequences before they’ll be willing to set their fear of risk aside.

Q: When we only have one chance to bring people in to test a new Web site design, at what point in the redesign process should we do that? Some of my colleagues are not in favor of user testing.
A: You’ll get incredible value whenever you do it, so think about when it would be most beneficial to the people on your team who are against bringing people in.
Another audience member suggested finding some people who could review the site for free (colleagues not involved in the Web site, friends, family members …), doing some quick-and-dirty testing with them, bringing the results to the team, and pointing out that you could learn much more with formal user testing.

==

Whirlwind Tour of Mobile Usability Testing Apps and Services by Vijay Hanumolu began with an overview of the importance of mobile testing and some of the major mobile OSes: iOS, Android, bada, Java, MeeGo, Windows Mobile, Tizen, and a symbol I absolutely do not recognize. Devices include Internet TVs, phones, tablets, e-readers, and more. HTML5 is a big change. Be aware of it.

Conceptualizing designs is important: sketch them out, look at them, prepare wireframes, do as much testing on the device(s) as possible, and observe users. Some stencils exist to simplify the sketching process. As he talked about designing for Windows, he showed a slide that said: “Warning: Do not try this at home.” and said he could pick on Microsoft all day.

Mobile design is all about real estate and how efficiently and creatively you can use that space. He shared images of designs drawn with the help of device stencils and special sticky notes printed with mobile phone outlines. He finds it useful to use them as a storyboard: he’ll stick them on the wall and look at them periodically to see what strikes him, what he might want to change, and what doesn’t make sense from a distance.

Paper prototypes definitely have their place. They’re very easy to share with others. You can use them to establish relationships with other people.

“If you’re like me (and I hope not) …” he quips.

Testing your own designs seems like a reasonable practice, but keep in mind that you only gather a certain level of information from that activity.

Blueprint and AppCooker are examples of programs that allow mobile design and testing. They provide many mobile features to show fairly accurately how things will work on the device.

Adobe Device Central comes with many templates of devices, so you can draw on a mock device screen to see how things will look. It also has different views for different conditions, like a bright room, a foggy day, etc. Hanumolu shared a story of designing something that would be displayed at a kiosk, but they didn’t realize the kiosk screen would be in bright sunlight, so he rushed back to the designers to develop a different color palette for the demo. He mentioned this product was recently discontinued.

Adobe Shadow allows the mock viewing of Web pages as they would appear on different devices. Hanumolu has had bad experiences using it over VPN because Shadow must be running in the background on whatever device you’re using to show mock pages.

Bjango Skala is another program Hanumolu showed briefly. It works in layers to show what a mobile design would look like.

TestFlight for iOS Beta Testing on the Fly allows testing in segments via checkpoints. Whoever runs the test can leave questions at the checkpoints to get feedback. When testers get to the checkpoint, they can answer the question to give people additional perspective on the design.

Android has testing features.

You can get valuable feedback in very short tests, like asking a colleague to perform a certain task that should take less than 5 minutes.

Mr. Tappy and other devices measure aspects like keystrokes. Tobii makes special eyewear for eye tracking. The Looxcie cam hangs on an ear and shows what someone is looking at. Hanumolu mentions considering putting such a tool on his toddler so he can see what his toddler is doing.

MobileUserTests used to provide a way for people to pay for others to test mobile apps. The domain name is now for sale. [Business opportunity, anyone … ?]

Apphance and proto.io are other tools.

Crowd-sourced micro-labor: CrowdFlower, Mechanical Turk …

Responsinator provides virtual representations of what sites will look like on mobile devices.

Hanumolu works at Mobiquity.

==

Julie Strothman opened Designing for People Who Struggle with Reading and Attention by having us read a story, a few paragraphs long, that she claimed would be familiar but that seemed like gibberish. After a few minutes’ pause, she told us what the story was and how our experience simulated that of people with certain kinds of learning, reading, or processing disabilities. (Here is a similar version of the tale.)

Solving some common hurdles for people with disabilities can help everyone. Think about streamlined interfaces and text broken up with lists, pull quotes, sidebars, headings … Common issues include short-term memory, discriminating critical from non-critical information, highly literal interpretation, fine motor coordination, and execution of complex sequential operations. Keep in mind that people might have trouble picking out what’s salient. Will your audience be familiar with idioms? What is their reading ability?

Fitts’ Law: The time to acquire a target is a function of the distance to and size of the target.
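
In its usual Shannon formulation (a standard statement of the law, not from her slides), with T the time to acquire the target, D the distance to the target, W the target’s width, and a and b empirically fitted constants:

    T = a + b \log_2\!\left(\frac{D}{W} + 1\right)

In short: bigger, closer targets are faster to hit.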

Think about how much easier it is to select an option when the text next to a radio button is clickable too, instead of just the tiny radio button itself.
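
Here’s a minimal HTML sketch of that idea (the field names are mine, purely for illustration): associating the text with the input via a label makes the text clickable as well, which effectively enlarges the Fitts’ target.

    <!-- Small target: only the tiny radio circle itself is clickable -->
    <input type="radio" name="contact" value="email"> Email

    <!-- Bigger target: clicking the word "Email" also selects the button -->
    <input type="radio" name="contact" value="email" id="contact-email">
    <label for="contact-email">Email</label>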

Supporting attention recovery online is important; people face many distractions. Is there a better way to hint at what to put in form boxes than placeholder text that vanishes as soon as people begin typing?
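
One common answer (my sketch, not something she prescribed): keep the hint as permanently visible text near the field and tie it to the input with aria-describedby, which also makes it friendly to screen readers, a point she returns to below. The IDs here are hypothetical.

    <label for="phone">Phone number</label>
    <input type="text" id="phone" name="phone" aria-describedby="phone-hint">
    <!-- The hint stays visible while the user types, unlike placeholder text -->
    <span id="phone-hint">For example: 555-123-4567</span>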

Are universally masked passwords really better? What if someone has motor coordination problems and is using a computer at home? Would it be better for her to be able to view what she’s typing? Strothman mentioned the mobile device convention of showing each password character for a second or two; that works well if you can look up from your typing quickly enough. She also mentioned one of my pet peeves: only after failing to log in so many times that you have to reset your password do you get the hint about the required password format (e.g. two capital letters plus a number and a special character). Dismissible messages meant to help someone complete a form should, well, be dismissed. Inline hints are better and friendlier for people using screen readers.
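
A rough sketch of an optional “show password” control along those lines (the element names are mine, and this is one way to do it, not code from the talk; note that very old browsers didn’t allow changing an input’s type):

    <label for="pw">Password</label>
    <input type="password" id="pw" name="pw">
    <label><input type="checkbox" id="show-pw"> Show password</label>
    <script>
      // Switch the field between masked and plain-text display
      document.getElementById('show-pw').addEventListener('change', function () {
        document.getElementById('pw').type = this.checked ? 'text' : 'password';
      });
    </script>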

Is it possible to add glossaries, particularly for sites with lots of jargon or advanced vocabulary?

How good are your FAQs? Many people start there instead of trying to navigate Web sites.

When playing the reCAPTCHA audio, Strothman pointed out that the various layers of audio make it especially difficult for people who have trouble decoding language to figure out what they’re supposed to do. Without being able to see the reCAPTCHA, how do you know which words in the layered audio are the correct ones to enter into the blank? The audio file also doesn’t tell someone what to do with the sounds in order to pass the reCAPTCHA. reCAPTCHA has a feature in the code that allows different versions of the form to appear at different intervals. She encourages her clients to use that option and watch the results: will more people succeed with one kind of reCAPTCHA than another? When she said reCAPTCHA’s text “stop spam read books” is completely extraneous, I wondered if she knew the story behind reCAPTCHA: it’s basically crowdsourcing the proofreading of questionable words from digitized books by displaying them in a CAPTCHA. [Addendum: After the presentation, I talked to Strothman about what she said about reCAPTCHA. When I pointed out that its words come from scanned books and need human interpretation, she said she hadn’t known that and thanked me for pointing it out.]

Using active voice in text further simplifies language processing, as does reducing prepositional phrases and extra clauses (that/who/which). She jokes that Twitterers have gotten really good at streamlining language.

Mixed case gives additional information about the letters because lowercase letters provide more clues about the letter’s identity than uppercase letters do.

Strothman indicated her slides would be available at http://strottrot.com/u/9, but they do not seem to be there yet. She is User Experience & Project Manager for Green River.

==

Sarah Pomerantz opened Reader-Centered Design for Health Communication by describing how her company focuses on increasing health literacy online. About 9 out of 10 Americans have limited health literacy skills, meaning they might not be able to read prescription medication bottles, understand text on a Web site, know where or how to find a medical provider, etc.

Literacy is about whether someone can understand words. Health literacy adds a level of understanding medical information on top of that. And, of course, being able to navigate the technology adds yet another layer of complication.

Some changes to benefit low literacy users will also benefit people with higher literacy.

Ask: what do the users need to know to take action? Consider behavior rather than background information and statistics.

Category labels are very important. Use short labels that make the most sense to your users.

Mel Choyce talks about presenting text on the Web. At first, there were only a few Web fonts, some of which weren’t really great for Web content; for other fonts, people often had to use images. As Web development expands, font options expand, and Google Web Fonts shows the available selection. Consider your audience before you get to the design stage. Does your audience have any reading or visual issues (old age, most wear glasses, some are blind, a few have red-green color blindness …)? Is a sans-serif or serif font better? Many people still think sans-serif fonts are better for the Web, but some studies indicate people can identify letters more easily when they have more points of distinction, as with serif fonts. Equal stroke width is better online. Medium letter width is good; wider fonts make the page look like it has more text than it actually does. The more open the counter space, the better. Tall x-heights are good (Verdana has a tall x-height). Open Sans and Droid Sans are good for Web and mobile readability. (And, yes, Droid as in Android.) Test fonts in context.
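
For example, pulling one of those recommended faces from Google Web Fonts takes a stylesheet link plus a CSS fallback stack (a minimal sketch; the URL format is as of this writing):

    <link rel="stylesheet"
          href="https://fonts.googleapis.com/css?family=Open+Sans">
    <style>
      /* Fall back to Verdana (tall x-height), then any sans-serif */
      body { font-family: "Open Sans", Verdana, sans-serif; }
    </style>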

Molly McLeod explores a case study with the National Cancer Institute. People find big walls of text online to be a barrier. CommunicateHealth applied their skills to the text on an NCI site and did some user studies of it.

Don’t go smaller than 16 pixels for text on a site. Don’t make people squint. Yes, some sites have plus signs to increase the font size, but you don’t want users clicking that option all the time.

Nine to twelve words per line is good for columns. Line height should be about 120-150%. White space (eye rest space) is a good thing.
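
Together with the 16-pixel minimum above, those guidelines amount to a few lines of CSS (a sketch; the class name and the 34em measure are my own illustrative choices):

    <style>
      .article-text {
        font-size: 16px;     /* no smaller than 16px; don't make people squint */
        line-height: 1.4;    /* within the 120-150% range */
        max-width: 34em;     /* keeps lines to roughly 9-12 words */
      }
    </style>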

Hierarchy to add structure to the page is good, but you don’t want to go overboard with hierarchy. Four to six sizes and styles might be good, depending on your audience and content.

Consider who is using your site and how. Is it a professional who is going to use it in the context of a busy work day? Or someone who will leisurely read it on the weekend?

Some stats:

  • 75% of adults have looked for health or medical information online
  • 9 of 10 of them have health literacy challenges
  • 60% of adults have searched for health information online
  • Searching for health information is one of the top 3 most popular online activities.

Useful sites:

Consider your audience. What might they already know? What’s culturally relevant? If you say a deer tick is as big as a poppy seed, are people going to then try to find out what a poppy seed is?

==

Kris Engdahl began Conducting a Summative Study of [Electronic Health Record] Usability: Case Study by polling the audience to learn who is doing user experience design and who is working in the medical industry. About 1/4 of the room is in health care and about 1/3 does user experience design.

Engdahl defines an electronic health record (EHR) as a paper patient record on steroids. Usability, per NIST, is “The extent to which a product can be used by specific users …”

EHR usability is a hot topic because health care is a huge industry, EHRs impact all of us, recent legislation encourages the adoption of EHRs, poor EHR usability can put patients at risk, and health care providers demand better EHR usability.

They test with each group of users. Their software is deployed in many different settings. They decide which tasks to test, which version, which data (but they can’t use real data), which roles (quite a variety of jobs are in health care: pediatric nurse, surgeon, administrator …), and other choices. And there are other dimensions: how tech savvy is the person, how much time does she have, what’s her education level … Sometimes, testing can be a real challenge.

They conducted a study because big changes were coming down the pike and they wanted to quantify the improvements they were going to make. This approach drove which tasks and users they tested. They focused on areas they intended to re-examine in upcoming releases and chose clinicians’ tasks common to several specialties.

Engdahl shared a task analysis chart of what the nurse or physician assistant would do when a patient comes into an office. They focused on all the steps (like recording blood pressure) where the nurse or physician assistant would interact with their software.

Step 1: recruit “representative” users. They decided on 20-24 per user group.
Step 2: narrow the list: look for MD, DO, NP, PA; a mix of specialties, genders, experience levels, and practice sizes; people who had used EHRs, but not theirs; people who commonly did the tasks being tested
Step 3: what to test: they modeled the environment on an actual client environment, chose a practice with as little configuration as possible, scrambled the data to de-identify patients, and were able to copy the set-up test environment for each participant

They had a month to do the testing before the next version/release would go live because their development cycle lasts a month.

Most implementations have 2-5 days of training. How were they going to do that for each participant in a month? They decided on very active help and task outlines: click here, do this, do that. When someone referred to the help, they considered that a failure.

Thoughts: Know how your data will be used. Don’t do the test in December. Prioritize tasks and user groups (most common tasks, critical tasks, tasks that carry patient safety risks …). Determine a reasonable sample size (how much money and time do you have? what’s the user base like?).

Balance customization with comparability. Get realistic (but not real) data. (Sample tests exist on the Web. Why not sharable sample data?) Prepare the screener(s). Allow recruiting time. Don’t skimp on recruiting or incentives. Be flexible to accommodate participants. (For medical professionals, very early mornings and late nights were far better than during standard working hours.)

Get equipment you need ahead of time. Plan to have backup plans for everything. Technical problems tend to happen when you least expect them. Schedule time to train moderators. Plan for multiple pilot sessions. Ahead of time, discuss errors and success. Get consensus on those definitions. Figure out how to handle some possible “situations.”

Practice good test hygiene: disclosure forms, consent forms, tell your testers they can leave at any time … Schedule sessions reasonably. Allow moderators time to sleep, eat, chill between tests. Normalize observations and analysis.

Engdahl mentioned they use Morae, software for conducting user tests.

==

The Panel: Delivering Results: How Do You Report User Research Findings features Jen McGinn of Oracle, Eva Kaniasty, Bob Thomas, Dharmesh Mistry of Acquia, Kyle Soucy, Carolyn Snyder, and Steve Krug. Each speaker has 3 minutes at first, then there’ll be interaction with the audience.

McGinn presents results as bullet points on a wiki page or during conference calls. Stakeholders must attend every session.

She provides an executive summary, an agenda, the tasks, and information about the participants; summarizes findings via annotated PowerPoint slides; and reviews the goals. Then she begins with the positive findings, moves to the negative findings and recommendations (sometimes sorted by priority), and closes with another summary of the goals.

Kaniasty is an independent professional whose reports vary based on the client and the task. Some deciding factors include time and budget, company culture and industry, stakeholder involvement, and deliverable shelf life. She works a lot in health care, which can be formal. She’s more likely to do a detailed written report there. If the stakeholders are fairly involved, taking notes, making observations, they might not need much from her. Format:

Task 1. Task Label (with ease rating)
Task Scenario
Screenshot
Findings
* bullet points
* positive findings first
Next steps / recommendations
Summary

Mistry often does not provide recommendations with his analysis because he finds the recommendations detract from what he thinks is more important: the list of issues that need fixing. Also, making recommendations might hamper some brainstorming. An engineer might have a better idea for a fix if he only knows about the problem. Sometimes, they give Google Doc reports based on a template. Sometimes, they use a spreadsheet showing tasks with scores indicating how much certain elements might need work. They can also provide a number for severity per user to indicate how much of a blocker the problem is.

Soucy of Usable Interface sometimes writes reports long enough to have a table of contents. She finds teams do not usually read reports cover to cover. She makes sure her executive summary is detailed, broken out in bullets, and very easy to read. It’s usually an intro paragraph followed by 3-4 positive findings in bullets and 3-4 action items in bullets. She provides findings with severity ratings because she found many of her clients usually had no idea where to begin or which issue was the most important to fix.

Her reports are very visual. She learned quickly that people warm up to images more readily than they read all of the text. Screenshots are very helpful, and using actual quotes from the study really helps the engineers. She explains problems without providing solutions. Sometimes she makes short videos highlighting particular issues; if clients want videos, she charges more. She encourages them instead to watch the user tests. She sometimes shares observer debrief notes, which she compiles into a spreadsheet.

Snyder of Snyder Consulting announces at the beginning of her presentation that there is no single “best” format. Ask clients how they do reports and why they do them that way. She summarized why she prepares a long, formal text report: sometimes the client needs to pass it along to a higher-up, a funder, or someone else who wants everything in writing. When she’s testing a visual design, she includes a screenshot marked up with what’s good and bad; everyone seems to understand that kind of final product. Instead of a report, can you do something more useful? After clients observed user tests with her one time, they said, “We think we know what we need to fix. Instead of a report, will you do an additional day of testing for us once we implement the fixes?” They went that route and it worked well.

Krug prefers to give a live remote walkthrough. He does not enjoy writing reports; people don’t usually read them, but they need some kind of proof that something happened, so do something different instead. He tells clients up front that he’ll present his results in a GoToMeeting session and suggests they bring all stakeholders to that meeting. The sessions often run 90-120 minutes. He integrates storytelling with his reporting of the most serious problems (usually fewer than 10). He recommends fixing those issues first; if they’d like to know more afterward, he’ll tell them more. He encourages them to get their objections out of their system while he’s there to answer questions, and he tells them to record the session so they can refer to it later. He chooses not to accentuate the positive: he feels clients already know what’s good, and praising it is patronizing. He also explains that getting clients to come watch the tests is far more important than delivering a finalized report. Seeing is believing: watching makes converts, and many other good effects flow from watching as a group. Do whatever it takes to get people to come to the meeting: expensive snacks, drinks, etc. If you have to create a report, he suggests a two-page bulleted list in an email that takes no more than 30 minutes to write.

Suggested reading: Recommendations on Recommendations. Also, get Jen McGinn to share her report from CUE-9, which was judged best in show among 19 seasoned UX pros (try to imitate her secret sauce).

Q: Why the controversy over sharing positive findings?
A: Some panelists do it so people know what not to change and what they don’t need to worry about. It also warms some groups up to hearing the negatives and lets them know, or at least feel, that not everything they’re doing is bad. Krug admits he’s probably wrong not to share positive notes, but that’s his choice: he thinks telling people what to change is more important than telling them what not to change.

Don’t forget that you are the expert in these cases, particularly because you might need to remind your clients of your expertise and skills and why they hired you.

==

How User Experience Evolves in a Company – a New Look at UX Maturity Models with Rich Buttiglieri began with a reminder that a lot of developers don’t think the way their product’s users will and that someone needs to come in thinking like various users. Some user research begins when someone says, “Someone go figure out why users are having problems.” Methods are often inconsistent or performed by staff not fully dedicated to UX. It’s typically done at the end of the cycle, when engineers don’t want to keep developing; they just want to be finished.

[I arrived a little late to this presentation. The format is that the slides began at the beginning with no user experience or usability considerations and evolve to become more sophisticated with each subsequent slide. There are graphics of apes becoming more and more human.]

Some designers think “Users will be trained on the system. Things don’t need to be clear up front.” User interfaces are frequently designed by developers and/or product experts, not user experience or usability professionals.

Required before moving forward: proven positive results create more demand; a dedicated budget for staff and studies.

Considered: “We need to do lots more of that ‘usability’ stuff.”

When some companies come to Buttiglieri, they know so little about usability, they don’t know what to ask for. “We’d like to buy some usability.” “Ok, would you like large or extra large? We have some different colors, too.”

UX focus: more staff dedicated to conducting a higher volume of what worked in level 1; quality becomes more predictable, but reports are inconsistent as the organization figures out what works; despite increased volume, testing still feels too late to make significant changes to the design, resulting in very few recommendations; UX applied to a few projects

Typical Methods: Heuristic review or usability test (formative but done both early and late); mockups and prototypes available for testing

Design Thinking: Genius Design “We know our users so well, we don’t need to talk to them”; UI designed by dedicated designer

Needed for next level: broader understanding of the UX process; unified UX processes and procedures; defined UX roles and skills

Managed: “Let’s study user behavior in context to discover unmet needs”

UX focus: UX process well defined; consistent quality and performance across projects; more recommendations influence design; starting to do discovery research to inform design; documented context of use

Typical Methods: Iterative evaluation with heuristic reviews or usability tests; competitive analysis, personas, field research

Design Thinking: Activity Focused Design “In the field to study users”; UI typically designed by dedicated designer

To advance: systemic process; UX metrics requested for use in product planning

Integrated UX: “What is the state of the union with UX?”

UX Focus: UX process well integrated with the overall product development lifecycle; consistent and predictable quality; staff begins to present at UX industry conferences; UX recommendations driving design and influencing business requirements; UX metrics formalized, with baseline measurements compared to new designs (summative)

Typical Methods: iterative evaluation with heuristic reviews or usability tests, competitive analysis, personas, field research; Quantitative studies (baseline and comparative)

Design Thinking: Experience Focused Design “What is it like to be a user?”; UI typically designed by interdisciplinary team

Required to advance: Corporate commitment; cultural buy-in

UX Driving (Institutionalized): “All products must follow UX design process”

UX Focus: UX is a corporate business strategy and applied to every product; continuously improving process; industry leading quality of methods, staff recognized as a leader at UX industry conferences

[This talk is not what I thought it would be, so I’m going to switch sessions.]

==

The Evolution of Agile Methods and User-Centered Design: A Research Study (Michael Ledoux/Terry Skelton)

This presentation seems like it analyzed one company’s use of UX and agile methodologies, complete with radar plots to share numerical data. UX fits well into agile methodologies.

UXA Manifesto: UX …

  • owns the process of design and all its contributing methods
  • cannot depend on the sprint story to direct design
  • is not confined by the sprint time period
  • research must be part of the process
  • is represented in agile ceremonies
  • is centralized to the agile sprint teams to better share a holistic design vision

The UX team found itself more knowledgeable about the customers and products than the product managers.

==

Are You Designing Your Professional Relationships? asks Alla Zollers. We should invest in professional relationships just like we invest in other aspects of our careers. We rely on professional relationships for all sorts of things. In the UX/usability world, we need to build relationships with all of our team members, customers, and other stakeholders. A participant recommends working on those relationships not just when you need something from someone: say, stopping by a developer’s office for a quick chat instead of only bothering that person when you want a change.

When relationships break down, all sorts of bad things happen.

To design a relationship:
Step 1: Set the stage: create a safe space or “container” to design the relationship. “Hey, I’d like to talk to you about how we can work together on this project.”
Step 2: Know your values: Begin the design by communicating your values.
Step 3: Invite co-design: Ask questions to help the other individual express their values; no judgment.
Step 4: Iterate: Continue to redesign the relationship on a continuous and ongoing basis.

Why do these steps?

  • Create a safe space for communication
  • Establish trust and design an alliance
  • Help each other understand how to work together effectively
  • Set yourself up for success

What are values? What are some of your values? Think of a special, peak, rewarding moment in your life. What values were being honored? Think of a bad moment in your life. What values were being suppressed? What do you need in your life to be fulfilled (besides basic necessities)? Whom do you admire? What values does that person have?

Now that you know your values, you can design your relationships, figure out where to work, decide how to vet clients/choose where to work, figure out what you should do to be fulfilled.
