This review and its images are based on the Honeycomb 3.1 update running on the Sony Blu-ray player», so the TVs that shipped last year may have a few differences, but from what we have seen, these issues exist on both. The main interface for GTV is a revamped Home Screen, which is now simply a bar at the bottom plus any widgets the user has selected.
The idea of combining widgets and the Home Screen is odd: it demands more screen pixels at once for two different types of content, apps to launch and informational widgets. Even without any widgets on the screen, the video playing in the background is covered with a semi-transparent blue layer. This means that anyone else watching the show is almost completely interrupted while one user searches for the widget they want or the program they want to launch. It is even more difficult to get to recently used applications or an application that is not one of these favorites. There are some simple ways to make sure that the shared TV experience is upheld. First, separate widgets from application launching. Second, remove the transparent layer; it is no longer necessary for contrast. Third, allow users to choose whether to go into full-screen mode to view more applications or to scroll. Fourth, even in full-screen mode there should be a Picture-in-Picture (PiP) mode, similar to most cable/satellite box guides, so that the current video remains viewable. In our review, only the TV and Google Chrome applications allowed PiP, so there appears to be a deeper-level problem for Google to solve here.
Similar issues occur when using the built-in guide application, TV & Movies. The application takes over the entire screen yet shows only 5-10 items at a time. This makes browsing take even longer while interrupting whatever other viewers may have wanted to watch. Again, PiP combined with a smaller interface would benefit users, as would optimizing the information on the page. Does the default view really need the year and such large poster art? Algorithms exist to detect faces in images and then crop and scale appropriately; perhaps this same technique could be applied to make these images more useful.
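To make the face-aware cropping idea concrete, here is a minimal sketch of the cropping half of such a pipeline, assuming a face bounding box has already been detected upstream (e.g., by an OpenCV Haar cascade). The `face_centered_crop` function and its parameters are our own illustration, not anything Google ships:

```python
def face_centered_crop(img_w, img_h, face, target_w, target_h):
    """Given a detected face box (x, y, w, h), compute the largest crop
    rectangle with the target aspect ratio, centered on the face and
    clamped so it stays inside the image. Returns (left, top, w, h)."""
    fx, fy, fw, fh = face
    cx, cy = fx + fw / 2, fy + fh / 2          # face center
    aspect = target_w / target_h
    # Largest crop with the target aspect that still fits the image.
    crop_h = min(img_h, img_w / aspect)
    crop_w = crop_h * aspect
    # Center the crop on the face, then clamp to the image bounds.
    left = min(max(cx - crop_w / 2, 0), img_w - crop_w)
    top = min(max(cy - crop_h / 2, 0), img_h - crop_h)
    return int(left), int(top), int(crop_w), int(crop_h)

# A 1000x500 poster with a face at (400, 100, 200, 200),
# cropped to a square thumbnail region:
print(face_centered_crop(1000, 500, (400, 100, 200, 200), 300, 300))
```

The resulting rectangle can then be scaled down to the final thumbnail size, so the face stays prominent instead of being lost in oversized poster art.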
The menu system, though, is perhaps the most annoying element of GTV we have used. Menus all appear centered and add the same transparent layer over all content. Simply bringing up PiP options or other menus takes over the whole screen for what is often just a simple list. In short, it is hard to tell which of these design choices are due to technical constraints and which are intentional. The result is the same either way: an interface that interferes with, rather than enhances, the viewing experience, which is frustrating because some of the features here are potentially quite valuable.
The first thing that surprised me was that I did not know how to turn it on. In fact, I handed the device to five different people, and only one of them found the power button in less than five seconds. Figuring out how to unlock the device was tricky as well, as Amazon uses a subtle yellow triangle pointing to the left to hint at how to unlock it. Part of this is due to patents, but there are other clever designs out there that seem more obvious. Once the device is unlocked, more confusion sets in, as the overall mental model seems to be lacking.
The iOS model is fairly simple. The launcher has pages of apps (with folders) that users swipe sideways through to find and launch applications. Applications open with an animation that reinforces the idea that they sit “on top” of the launcher plane. When closed, they shrink back into that same plane. Other parts of the UI, such as the lock screen and notifications (as in Android), help reinforce this mental model of planes. The Fire simply is not as refined; its transition animations and UI elements create confusion instead of helping solidify where you are in the interface, where you are going, and how things relate to one another. The main issue is that the system opens on a large, unlabeled carousel, with an equally unlabeled shelf system below it and what look like tabs across the top of the screen.
The Fire is considerably smaller than an iPad or the larger Android tablets from Motorola or Samsung. While this appears to be a great form factor for reading standard Kindle-formatted books, the diminutive size has limitations for standard tasks such as web browsing, email, and reading magazines or newspapers. Much of the fault lies in the size of the screen. At 7 inches, even with 1024×600 pixels providing 169 ppi (pixels per inch), it simply does not provide much room for text to be large enough to be readable. The tradeoff, then, is less content per screen view in exchange for lighter weight, greater portability, and a $199 price. Jakob Nielsen’s Alertbox» has a usability review based on target sizes, and the results were not favorable. In fact, much of what they discovered is that on the Kindle Fire, full-scale websites do not render well at that smaller size.
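The 169 ppi figure follows directly from the screen geometry: pixel density is the diagonal resolution in pixels divided by the diagonal size in inches. A quick sanity check in Python (the `pixels_per_inch` helper name is ours, not from any cited source):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density = diagonal length in pixels / diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Kindle Fire: 1024x600 on a 7-inch panel
print(f"{pixels_per_inch(1024, 600, 7):.1f}")    # → 169.5, reported as 169 ppi
# iPad 2: 1024x768 on a 9.7-inch panel
print(f"{pixels_per_inch(1024, 768, 9.7):.1f}")  # → 132.0 ppi
```

Both values match the densities discussed here and in the microscope comparison below.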
What we found when looking at websites at default size (partially zoomed), full size, Kindle books, magazines, and email is that only Kindle books (at default size) were easily readable; the zoomed default of the New York Times website was just barely readable, even though the Fire has a high-resolution screen. There are diminishing returns on resolution density when it comes to the human eye, and at that point the physical size of the type or object is the limiting factor. In fact, size is a dissociative retinal variable, meaning it affects your ability to see other variables, such as character shapes or letters, properly.
The quality of the screens themselves is fairly close; the site Displaymate» goes into a lot of detail about this if you want to learn more, comparing the Fire to the Nook and iPad 2 because they all use IPS technology. Given that Kindles have traditionally trumpeted their lack of reflections and readability in daylight, the Fire was surprisingly reflective (we experienced these issues when photographing and filming it) and only average in brightness. We do not have the same depth of tools, so we simply used a cheap USB microscope and zoomed in on each screen to see how it looked. Up close, at about 400x magnification, you can see a little difference in size and shape, and it was fun to see the difference between the 169 ppi of the Fire and the 132 ppi of the iPad 2.
The consequence of trading size for portability is that when a user wants to read a news website or a magazine, they either have to switch to a text-based mode (for Kindle magazines), losing the magazine experience, or they must zoom in and out more than they would with a larger screen. This may be a fine tradeoff for some, but the value proposition is $199 for this tablet versus $269 or more for a larger tablet with a camera, SD slot, full Android Marketplace, and so on. And this is the crux of the user experience for such smaller tablets: is the size so limiting that they are really no longer tablets? Amazon has definitely positioned the Fire as more of a consumption device than a full-fledged tablet. But this is certainly not a phone-sized device. So where does this leave us in terms of defining what the device is? For lack of a better term, maybe we should call these mini-tablets.
This could be similar to how the Mac mini lacks some of the features of larger Macs but is still valuable and worthy of a place in many homes and businesses. What really struck me when the original iPad came out was the idea of a durable, high-quality alternative to a laptop for kids at $499. The Fire is not that, but for a lot of kids, or for people who already have laptops or desktop computers, this price point and feature set strike me as a very large potential market. I do think the screen size prevents it from doing a lot of tablet-type tasks well, though, so there is room and a purpose for mini-tablets, tablets, and laptops. A big reason is that the scale of feature set and screen size fits nicely with the increase in prices. It will be interesting to see if the Fire eats into iPod touch and other product sales at the high end, leaving more room for the cheaper nano and shuffle devices.
Update on December 30th:
Apparently Google had to pull the Nexus S update because people were having trouble. This is their own phone, and they don’t have 4.0 working on it yet? What were they testing all these months?
“When to Use Which User Experience Research Methods” is a great article on Jakob Nielsen’s Alertbox by Christian Rohrer. It summarizes a variety of research methods along three dimensions and then creates a graph mapping them against one another. Even here, the “how to fix” part is not straightforward and often requires multiple designs to determine which is most successful. As seen in Our Methodology, these different tools help us measure SEED (satisfaction, efficiency, effectiveness, desirability). Depending on the project goals, budget, and timing, user testing may take the form of Exploration, Assessment, or Validation.
Exploration: Focus on learning more about the users and their thought processes. May be in person with a facilitator, remote, or even through surveys. Mostly qualitative in nature. It is useful for comparing alternative designs and branding initiatives and may include focus group testing.
Assessment: Determine whether designs are performing adequately, at a level of completion of tasks and general usability of the site. This is typically performed with a prototype that allows users to interact with the system directly, rather than through a facilitator. Captures both qualitative and quantitative data.
Validation: Without preset goals or a baseline, it is difficult to measure improvements. To get the best data from validation, comparing a previous design or stated expectations to the new results is most valuable. The ability to measure time-on-task, errors, efficiency, and learnability can prove or disprove design patterns.
The methods and tools that are best for any testing are largely influenced by the testing goals and budget. Three types of methods allow for flexibility in achieving those goals.
Online Survey/Tool: Survey forms provide user self assessment feedback and click/task tracking tools are objective measures of success. Because this can often be implemented via site intercepts, recruitment costs are minimal. Tools can range from $ to $$$$ depending on features.
Online Facilitation: An extension of the online tools, these allow a facilitator to investigate user reactions and/or ask questions in real time. This often requires users to have certain software (such as Flash) installed, and the facilitator can rarely see the user’s body language. Tools range from conference-call software to specialized packages, costing $$ (facilitator time) to $$$$.
In-Person Facilitation: This includes one-on-one and focus group testing. It allows the facilitator to adapt in real time to users and capture more qualitative information. It often requires recruitment fees as well as facility and travel costs on top of software. Cost: $$$ to $$$$.
For more information about testing, we recommend the following books:
Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests
Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics
Observing the User Experience: A Practitioner’s Guide to User Research
Remote Research: Real Users, Real Time, Real Research
satisfying: self-reported; what users think rather than actual results
efficient: the ability to accomplish tasks quickly
effective: a high rate of success and learnability
desirable: meets self-reported needs and wants of users; some of these may be brand and content focused
Our methodology is structured to ensure that any project will be successful, regardless of platform, target audience, or industry. Essentially, we model our practice on the scientific method, simplified to three broad stages: Research, Design, and Test, and our services mirror these three stages as well. These are utilized at both a high and a low level, meaning that within traditional architecture, design, and implementation phases we still continue to research, design, and test, but the execution of these methods is very different given the outputs.
The purpose of the first stage is to establish goals and requirements, capturing who, what, where, why, and when. This may include in-depth ethnographic research, or, if everything is already clearly defined, a couple of sessions to review and clarify everything with the team. No matter how well something is defined, interpretation always occurs, so it is critical that all assumptions are laid out and verified.
Whether we create wireframes, design comps, or prototypes, we are constantly designing solutions for the problems we are given. It is critical that the problem has been defined correctly for this stage to be efficient, effective, and as quick as possible. This is the stage where we leverage our experience with design patterns the most, but each project is unique, so we always verify our designs.
A project is never over, because time never stops. The environment in which we define a project often evolves while we design and develop. Once the product is developed and launched, the market and technology may change the opportunities and goals. Verification varies at each stage, based on scale and scope. Often the early stages are measured by focus groups or surveys, while the later stages may use live site data or one-on-one user testing of a prototype. The key is to never assume you have the right answer, no matter how much experience and confidence you have. We take the time to do even simple verification, which reduces risk immensely.
I also recently picked up the seminal work Semiology of Graphics by Jacques Bertin. One of my all-time favorite UX design books, Designing Visual Interfaces by Mullet and Sano, references this book often, so I finally got it and am glad I did. If you are interested in the core principles behind information graphics, pick it up; it is an excellent reference if nothing else.