I just finished watching this great talk by Tony Fadell at TED this year. Tony is the lead product designer of the iPod and of the Nest Thermostat. This talk is thoroughly enjoyable and it does a great job of getting a simple message across: why are Product Design and UX so important today?
Tony takes us through his design principles in the typical 18-minute TED format. He does it with great passion and emotional intelligence. What is habituation? Why do we get annoyed at problems and then stop caring? Why is it so important to notice the tiniest details? Why does the solution sometimes involve taking a step back and looking at the problem together, as a whole? Why do we need to think like young people to get a fresh perspective?
It is great to see how Tony can deliver this message with superb simplicity and clarity.
With the unstoppable rise of mobile apps, some pundits within the tech industry have hastily demoted the mobile web to a second-class citizen, or even dismissed it as ‘dead’. Who cares about websites and webapps when you can deliver a superior user experience with a native app?
Well, we care because the reality is a bit different. New apps are hard to discover; their content is locked, with no way to access it from the outside. People browse the web more than ever on their mobile phones. The browser is the most used app on the phone, both as a starting point and a destination in the user journey.
At Ubuntu, we decided to focus on improving the user experience of browsing and searching the web. Our approach is underpinned by our design principles, namely:
Content is king: the UI should recede into the background once the user starts interacting with content.
Leverage natural interaction by using gestures and spatial metaphors.
In designing the browser, there’s one more principle we took into account. If content is our king, then recency should be our queen.
Recency is queen
People forget about things. That’s why tasks such as finding a page you visited yesterday or last week can be very hard: UIs are not designed to support the long-term memory of the user. For example, when browsing tabs on a smartphone touchscreen, it is hard to recognise what’s on screen because we have forgotten what that page is and why we arrived there.
Similarly, bookmarks are often a meaningless list of webpages, as their value was linked to the specific time when they were taken. For example, let’s imagine we are planning our next holiday and we start bookmarking a few interesting places. We may even create a new ‘holidays’ folder and add the bookmarks to it. However, once the holiday is over, the bookmarks are still there; they don’t expire once they have lost their value. This happens pretty much every time: old bookmarks and folders will eventually start cluttering our screen and make it difficult to find the information we need.
Therefore we redesigned tabs, history and bookmarks to display the most recent information first. Consequently, the display and the retrieval of information are simplified.
In our browser, most recent tabs come first. Here is how it works:
In this way, users don’t have to painstakingly browse an endless list of tabs that may have been opened weeks or days ago, like in Mobile Safari or Chrome.
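The recency-first ordering described above is essentially a most-recently-used list: a tab moves to the front whenever you touch it. Here is a minimal sketch of that idea (an illustration only, not the Ubuntu browser’s actual code; the `TabList` class and its methods are my own invention for this example):

```python
class TabList:
    """Most-recently-used tab ordering: the tab you last touched
    always appears first in the tab overview."""

    def __init__(self):
        self._tabs = []  # front of the list = most recently used

    def open(self, url):
        # A newly opened tab is the most recent by definition
        self._tabs.insert(0, url)

    def activate(self, url):
        # Bringing an existing tab to the foreground moves it to the front
        self._tabs.remove(url)
        self._tabs.insert(0, url)

    def overview(self):
        # What the tab switcher would show, most recent first
        return list(self._tabs)


tabs = TabList()
tabs.open("a.com")
tabs.open("b.com")
tabs.open("c.com")
tabs.activate("a.com")
print(tabs.overview())  # ['a.com', 'c.com', 'b.com']
```

The point is that ordering by recency of use, rather than by order of opening, keeps the tabs you actually care about within thumb’s reach.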
Browser history has not changed much since Netscape Navigator; modern browsers still display a chronological log of all the web pages we visited, starting from today. Finding a website or a page is hard because of the sheer amount of information. In our browser we employ a clustered model where we display the last visited websites, not every single page. On tap, we then display all the pages for that website, starting from the most recent. In this way scanning the history log is much easier and less painful.
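The clustered model above boils down to grouping a chronological visit log by website, keeping both levels in recency order. A minimal sketch, assuming a visit log as a list of (url, title) pairs with the most recent first (the `cluster_history` function is illustrative, not the browser’s real implementation):

```python
from collections import OrderedDict
from urllib.parse import urlparse

def cluster_history(visits):
    """Group a visit log (most recent first) by website.

    Returns an ordered mapping: site -> its visited pages,
    each list most recent first; sites are ordered by their
    most recent visit.
    """
    clusters = OrderedDict()
    for url, title in visits:
        site = urlparse(url).netloc  # e.g. 'example.com'
        clusters.setdefault(site, []).append((url, title))
    return clusters


visits = [
    ("https://example.com/article/2", "Article 2"),
    ("https://news.example.org/today", "Today's news"),
    ("https://example.com/article/1", "Article 1"),
]

clusters = cluster_history(visits)
# Top level: most recently visited sites first
print(list(clusters))  # ['example.com', 'news.example.org']
# Drilling into a site: its pages, most recent first
print(clusters["example.com"])
```

Scanning a handful of sites, then drilling into one, involves far fewer items per screen than a flat list of every page ever visited.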
I found this very interesting article on Medium today. It resonates with my idea about wearables. So, OK… we are obsessed with touchscreens and touch UIs, but we should not try to put them everywhere; rather, we should focus on providing value to users in different ways.
– What kind of data can we collect without relying on touch or speech input?
– How can we notify users and provide feedback without having to watch a new screen?
“As developers start to build applications for Google Glass, Pebble, Galaxy Gear, and Android Wear, we have to decide if we really need to see all of the same content on a new device. We already have an immensely powerful computer and content-consuming device in our pockets every day. It is time to look beyond the screens and discover a truly new product”
I just discovered this website through a colleague at Canonical. Check it out. It has an amazing, mesmerising nature that you hardly find on the web today. Plus, it talks about creativity in music, performance, and architecture.
Having worked for more than 13 years as an Information Architect, User Experience Designer and HCI researcher, I accepted the invitation from the organisers of UCD 2013 London to discuss what I learned doing user-centred design client- and agency-side.
How does it differ?
Does the agency engagement model allow a true user centred design process?
Do agencies need to change the way they engage with their clients to create great products and services?
What is our role in shaping the future of our practice?
This excellent infographic sums up some of the most recent insights from a Sterling Brands/Ipsos research commissioned by Google.
Here are some highlights:
90% of people move between devices to accomplish a goal, whether that’s on smartphones, PCs, tablets or TV.
Two primary ways we multi-screen
In understanding what it means to multi-screen, they discovered two main modes of usage:
Sequential screening where we move from one device to another to complete a single goal
Simultaneous screening where we use multiple devices at the same time
They found that nine out of ten people use multiple screens sequentially and that smartphones are by far the most common starting point for sequential activity. So completing a task like booking a flight online or managing personal finances doesn’t just happen in one sitting on one device. In fact, 98% of sequential screeners move between devices in the same day to complete a task.
With simultaneous usage, they also found that TV no longer commands our undivided attention, with 77% of viewers watching TV with another device in hand. In many cases people search on their devices, inspired by what they see on TV.
I found this excellent introduction to touch UIs by Luke Wroblewski that looks at how disruptive the introduction of successful input methods has been in computer history. In a nutshell, every time a new input paradigm was introduced to the market, market dominance shifted to the companies that used it to serve consumers best.
From Re-imagining Apps for Ultrabook™ (Part 1): Touch Interfaces
Over the past several years, both in my product work and writings, I’ve focused primarily on designing for mobile devices. Mobile has not only grown tremendously, but popularized new ways for people to interact with digital services as well. New capabilities like multi-touch, location detection, device orientation, and much more have made mobile devices a playground for new interactions and product ideas. It’s been an exciting ride to say the least.
Now many of these revolutionary capabilities are making their way to a new category of devices through Intel’s Ultrabook™ system and, once again, a new set of opportunities is available for designers and developers to re-imagine software. It’s an exciting time for desktop apps and I hope this video series will not only inspire you to explore new ways of thinking but help you with detailed design advice as well.
To start the series, we’re going to look at the opportunity touch interfaces provide for desktop applications. Specifically, we’ll outline the impact of new input methods in personal computing and walk through the top-level principles behind designing for touch.
When I was offered the chance to present in the Design track at Droidcon 2011, I enthusiastically accepted, as very little has been written on the topic. This still holds true, even though Android is the most widespread smartphone OS on the planet.
The thing is, Android apps have been heavily criticised in the past for poor usability and aesthetic appeal. The truth lies in the middle: there are some great apps on the market, but they are flooded by a huge number of dreadful ones. Often the functionality is there, but lack of design makes them hard and unpleasant to use.
One of the issues with Android is the lack of solid and consistent UI patterns. UI patterns are beneficial to designers and users, as they set expectations for interacting with a device.
When I started designing for the Skype Android app back in 2009, my team faced the huge challenge of creating a solid, consistent interaction design language almost from scratch. Even Google’s own apps such as Gmail, Messaging and YouTube had several pitfalls. In a way, it was also extremely exciting as we could do whatever we wanted – a designer’s dream and nightmare, folded into one.
Fast forward to 2011, and I feel Android is in a better position. Google – now the smartphone market leader – hired earlier this year Matias Duarte, former user experience director of Danger’s Sidekick and Palm’s webOS. I watched him at Google I/O this year presenting the Honeycomb UI framework with his team, and I recognised there was a lot of progress in there.
In my view, the UI changes started by Honeycomb are going to make Android easier (and more pleasant) to use. However, Honeycomb is just for tablets: the main challenge will come when the next Android release (a.k.a. Ice Cream Sandwich) arrives in a few weeks, as the same principles will have to support both tablets and handsets.
Here are some of the design challenges Android designers still face nowadays:
How do you navigate between the different sections of the app?
How do you visualise information?
How do you provide feedback while avoiding interrupting the user?
Each app is different and there is no silver bullet to tackle all these questions – it depends on a number of factors.
My goal with this presentation is to look at some of the most remarkable apps on the Android Market and analyse best practices in navigation, fluid and responsive interaction, and information visualisation.
Where were some of the finest code-breakers and scientists gathered during the war to decrypt Nazi messages sent throughout war-torn Europe?
… the answer is at Bletchley Park, just south of Milton Keynes, in the UK. During World War II, Bletchley Park was the site of the United Kingdom’s main decryption establishment, where ciphers and codes of several Axis countries were decrypted. The intelligence produced at Bletchley Park, codenamed Ultra, provided crucial assistance to the Allied war effort. Some people even claim that Ultra shortened the war by two to four years and that the outcome of the war would have been uncertain without it.
Bletchley is one of the places where the history of modern computing was made; Alan Turing himself worked for some time on cracking those codes. Bletchley Park is the best possible setting for the fourth Over The Air, the London-centric conference where all flavours of mobile development and design techniques are intensively explored. And having good weather throughout the conference, with temperatures reaching 28°C, was rather unusual for this time of year.
This was my first visit to OTA, and I am extremely pleased by the quality and the variety of inputs I received in the last 36 hours. Here are a few things I jotted down. This is just my take, based on my (design) interests, and I hope somebody else will find them useful:
Nick Butcher (@crafty) from Google illustrated the Android Honeycomb design patterns and pointed out some of the changes from Froyo and Gingerbread. I’ve got to say that I have been relatively impressed by the amount of good thought Matias Duarte and his team have put into Honeycomb.
My feeling is that apps now have a solid framework, including signposting, navigation between views, and presenting an app’s most used functions. Another aspect of the framework I really like is designing fragments (i.e. self-sufficient UI modules), which can then be either composed into a single view on large screens or split and navigated individually on smaller screens.
The seeds planted at Google I/O in 2010 with the Twitter app have slowly grown into what I see implemented in Honeycomb. Something I didn’t realise was how ‘Ice Cream Sandwich’ (please stop them!) will bring these changes to small-screen devices and how this will make using Android tablets and smartphones a consistent experience.
From my perspective, it is a joy to see a valid alternative to iOS (though they are still at the top of their game). Looking forward to presenting at Droidcon in a week or so and giving my view of how Android patterns and best practices have evolved over the last three years.
Lyza Danger Gardner (@lyzadanger) also picturesquely talked about the ‘mobile web’ and quite rightly said that the ‘mobile web’ should just be called the WEB, full stop. Rather than coding for large screens and then creating a different rendering for smaller screens, it is much better to design for smaller screens first. This is quite right considering that the number of people accessing the web ONLY from a mobile device has exceeded the number of those accessing it from desktop ONLY 🙂
My favourite presentation was Franco Papeschi’s (@bobbywatson) talk about “Changing the World” one start-up at a time. Franco has recently started working for Tim Berners-Lee’s World Wide Web Foundation, whose goal is to bring the benefit of the Internet to those who need it most – the developing economies in Africa, India and Asia. The Web Foundation has set up a number of initiatives and laboratories in these countries, providing skills and tools where they are needed the most. It is absolutely inspiring to see how people of all ages and kinds in those countries are eagerly embracing the Internet to change their lives for the better.
Franco concluded by saying he still has no magic recipe for helping these people walk this path – but there are three things we can volunteer with: COACHING them with new skills, MAKING stuff, and MENTORING them on their way to success.
On a final note, I was particularly impressed by the IGNITE speech format at the end of the first day: a five-minute presentation of twenty slides, fifteen seconds each, that lets people present ONE idea – no matter how crazy – in a visual, intuitive, emotional and powerful way. I’d love to try presenting at one of these sessions some time.