London Design Week events

Hi guys,

There are a few things happening in London this week that I thought might be worth sharing:

On Tuesday 15 September, Adam Greenfield (author of ‘Everyware’) is speaking at the LSE about smart cities.

This Saturday 20, the Science Museum is hosting ‘The field life of electronic objects’, a workshop on how to photograph magnetic fields (£10, limited places).

Over the weekend of 20-21, the V&A has a number of free exhibits about Digital Design.

Ping me on Twitter if you think of going to any of these.

The browser is dead. Long live the browser!

Note: This article was originally posted on the Ubuntu Design blog

With the unstoppable rise of mobile apps, some pundits within the tech industry have hastily demoted the mobile web to a second-class citizen, or even dismissed it as ‘dead’. Who cares about websites and webapps when you can deliver a superior user experience with a native app?

Well, we care, because the reality is a bit different. New apps are hard to discover; their content is locked away, with no way to access it from the outside. People browse the web more than ever on their mobile phones, and the browser is the most used app on the phone, both as a starting point and a destination in the user journey.

Image: ‘Installing’ (source: xkcd)

At Ubuntu, we decided to focus on improving the user experience of browsing and searching the web. Our approach is underpinned by our design principles, namely:

  1. Content is king: the UI should recede into the background once the user starts interacting with content.
  2. Leverage natural interaction by using gestures and spatial metaphors.

In designing the browser, there’s one more principle we took into account. If content is our king, then recency should be our queen.

Recency is queen

People forget about things. That’s why tasks such as finding a page you visited yesterday or last week can be very hard: UIs are not designed to support the user’s long-term memory. For example, when browsing tabs on a smartphone touchscreen, it is hard to recognise what’s on screen because we have forgotten what that page is and why we arrived there.

Similarly, bookmarks are often a meaningless list of webpages, as their value was linked to the specific time when they were created. For example, let’s imagine we are planning our next holiday and we start bookmarking a few interesting places. We may even create a new ‘holidays’ folder and add the bookmarks to it. However, once the holiday is over, the bookmarks are still there; they don’t expire once they have lost their value. This happens pretty much every time: old bookmarks and folders eventually start cluttering our screen, making it difficult to find the information we need.

Therefore, we redesigned tabs, history and bookmarks to display the most recent information first. Consequently, the display and retrieval of information are simplified.

Browser tabs

In our browser, most recent tabs come first. Here is how it works:


In this way, users don’t have to painstakingly browse an endless list of tabs that may have been opened days or weeks ago, as in Mobile Safari or Chrome.
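The recency-first tab ordering can be sketched as a most-recently-used (MRU) list, where opening or re-activating a tab moves it to the front of the spread. This is a minimal illustration of the idea, not the Ubuntu browser’s actual implementation; all names here are hypothetical.

```python
class TabList:
    """Most-recently-used tab ordering: the most recent tab is always first."""

    def __init__(self):
        self.tabs = []

    def open(self, url):
        # A newly opened tab becomes the most recent one.
        self.tabs.insert(0, url)

    def activate(self, url):
        # Re-activating an existing tab promotes it to the front of the spread.
        self.tabs.remove(url)
        self.tabs.insert(0, url)


tabs = TabList()
tabs.open("a.com")
tabs.open("b.com")
tabs.open("c.com")
tabs.activate("a.com")
# tabs.tabs is now ["a.com", "c.com", "b.com"]
```

With this ordering, the tab the user touched last is always the first one revealed, so stale tabs naturally sink to the end of the list instead of cluttering the front.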


Browser history

Browser history has not changed much since Netscape Navigator: modern browsers still display a chronological log of all the web pages we have visited, starting from today. Finding a website or a page is hard because of the sheer amount of information. In our browser we employ a clustered model that displays the most recently visited websites, not every single page. On tap, all the pages for that website are displayed, starting from the most recent. In this way, scanning the history log is much easier and less painful.
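The clustered history model can be sketched as grouping visits by website and ordering both the sites and the pages within each site by recency. This is an illustrative sketch only, assuming history entries arrive as (timestamp, url, title) tuples; the function name and data shape are assumptions, not the browser’s real code.

```python
from collections import defaultdict
from urllib.parse import urlparse


def cluster_history(entries):
    """Group visited pages by website; most recently visited site first,
    and within each site, most recently visited page first."""
    sites = defaultdict(list)
    for timestamp, url, title in entries:
        # Cluster pages by their website (network location).
        sites[urlparse(url).netloc].append((timestamp, url, title))

    # Within each site, newest page first.
    clustered = {
        domain: sorted(pages, reverse=True)
        for domain, pages in sites.items()
    }
    # Order sites by the timestamp of their newest visit.
    return sorted(clustered.items(),
                  key=lambda item: item[1][0][0], reverse=True)


history = [
    (1, "https://example.com/a", "A"),
    (3, "https://news.example.org/x", "X"),
    (2, "https://example.com/b", "B"),
]
clusters = cluster_history(history)
# The first cluster is the site visited most recently (news.example.org),
# and example.com's newest page ("B") leads its own cluster.
```

The user first scans a short list of sites rather than a long flat list of pages; tapping a site expands its cluster, newest page first.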


Loving the bottom edge

We believe the bottom edge is the most pleasurable edge to use: it is easily accessible at any time and ergonomically suited to the typical one-handed phone hold. Once discovered, it slowly builds into our muscle memory and becomes a natural and intuitive way of interacting with the application.


This is why we combined tabs and history and made them accessible through the bottom edge. As a team, we spent months building and refining a sleek, intuitive and fluid user experience.

Here’s a sneak preview of how it will look:

Video: Browser interactions

The bottom edge gesture will have three stages:

  1. Dragging from the bottom edge hints at and reveals the most recently viewed tab.
  2. Continuing to drag reveals the full tab spread.
  3. Dragging further reveals the full browser history.
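The three stages above amount to mapping how far the drag has travelled to a UI state. A minimal sketch of that mapping follows; the threshold values are entirely hypothetical, chosen only to illustrate the staged reveal, since the real values used in the Ubuntu browser are not stated here.

```python
# Hypothetical thresholds, expressed as fractions of screen height.
HINT_THRESHOLD = 0.15     # stage 1: hint and reveal the most recent tab
TABS_THRESHOLD = 0.40     # stage 2: full tab spread
HISTORY_THRESHOLD = 0.75  # stage 3: full browser history


def gesture_stage(drag_fraction):
    """Map a bottom-edge drag distance (0.0-1.0 of screen height) to a stage."""
    if drag_fraction >= HISTORY_THRESHOLD:
        return "history"
    if drag_fraction >= TABS_THRESHOLD:
        return "tab_spread"
    if drag_fraction >= HINT_THRESHOLD:
        return "hint_recent_tab"
    return "none"
```

Because each stage is reached by simply continuing the same drag, the user can stop at whichever level of detail they need: the last tab, all tabs, or the full history.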

All elements will support gestural interaction: users can swipe to delete a tab or remove a website from history.


Beyond the screens

Wearable devices


I found this very interesting article on Medium today. It resonates with my ideas about wearables. So, OK… we are obsessed with touchscreens and touch UIs, but we should not try to put them everywhere; instead, we should focus on providing value to users in different ways:
– What kind of data can we collect without relying on touch or speech input?
– How can we notify users and provide feedback without making them watch yet another screen?

“As developers start to build applications for Google Glass, Pebble, Galaxy Gear, and Android Wear, we have to decide if we really need to see all of the same content on a new device. We already have an immensely powerful computer and content-consuming device in our pockets every day. It is time to look beyond the screens and discover a truly new product”

No More Screens | Andy Stone


Doug Aitken | The Source

I just discovered this website through a colleague at Canonical. Check it out: it has an amazing, mesmerising quality that you hardly find on the web today. Plus, it talks about creativity in music, performance and architecture.

Doug Aitken | The Source



Solving the UX Puzzle

Having worked for more than 13 years as an Information Architect, User Experience Designer and HCI researcher, I accepted the invitation from the organiser of UCD 2013 London to discuss what I have learned by doing user-centred design on both the client and agency side.

  1. How do client-side and agency-side work differ?
  2. Does the agency engagement model allow a true user centred design process?
  3. Do agencies need to change the way they engage with their clients to create great products and services?
  4. What is our role in shaping the future of our practice?

Some answers can be found here

[slideshare id=28052635?rel=0&w=597&h=486&fb=0&mw=0&mh=0&style=border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px&sc=no]

UCD 2013 solving the UX puzzle from Giorgio Venturi

The New Multi-Screen World: Understanding Cross-Platform Consumer Behavior

This excellent infographic sums up some of the most recent insights from Sterling Brands/Ipsos research commissioned by Google.


Here are some highlights:

90% of people move between devices to accomplish a goal, whether that’s on smartphones, PCs, tablets or TV.

Two primary ways we multi-screen
In understanding what it means to multi-screen, they discovered two main modes of usage:

  • Sequential screening where we move from one device to another to complete a single goal
  • Simultaneous screening where we use multiple devices at the same time

They found that nine out of ten people use multiple screens sequentially and that smartphones are by far the most common starting point for sequential activity. So completing a task like booking a flight online or managing personal finances doesn’t just happen in one sitting on one device. In fact, 98% of sequential screeners move between devices in the same day to complete a task.

With simultaneous usage, they also found that TV no longer commands our undivided attention, with 77% of viewers watching TV with another device in hand. In many cases people search on their devices, inspired by what they see on TV.


Excellent introduction to touch UI

I found this excellent introduction to touch UIs by Luke Wroblewski, which looks at how disruptive the introduction of new input methods has been throughout computing history. In a nutshell, every time a new input paradigm was introduced to the market, market dominance shifted to the companies that used it to serve consumers best.

From Re-imagining Apps for Ultrabook™ (Part 1): Touch Interfaces

Over the past several years, both in my product work and writings, I’ve focused primarily on designing for mobile devices. Mobile has not only grown tremendously, but popularized new ways for people to interact with digital services as well. New capabilities like multi-touch, location detection, device orientation, and much more have made mobile devices a playground for new interactions and product ideas. It’s been an exciting ride to say the least.

Now many of these revolutionary capabilities are making their way to a new category of devices through Intel’s Ultrabook™ system and, once again, a new set of opportunities is available for designers and developers to re-imagine software. It’s an exciting time for desktop apps and I hope this video series will not only inspire you to explore new ways of thinking but help you with detailed design advice as well.

To start the series, we’re going to look at the opportunity touch interfaces provide for desktop applications. Specifically, we’ll outline the impact of new input methods in personal computing and walk through the top-level principles behind designing for touch.



Android Design Patterns

When I was offered the chance to present in the Design track at Droidcon 2011, I enthusiastically accepted, as very little had been written on the topic. This still holds true, despite Android being the most widespread smartphone OS on the planet.

The thing is, Android apps have been heavily criticised in the past for poor usability and aesthetic appeal. The truth lies somewhere in the middle: there are some great apps on the market, but they are drowned out by a huge number of dreadful ones. Often the functionality is there, but a lack of design makes them hard and unpleasant to use.

One of the issues with Android is the lack of solid and consistent UI patterns. UI patterns benefit both designers and users, as they set expectations for how to interact with a device.

When I started designing for the Skype Android app back in 2009, my team faced the huge challenge of creating a solid, consistent interaction design language almost from scratch. Even Google proprietary apps such as Gmail, Messaging, YouTube, etc. had several pitfalls. In a way, it was also extremely exciting as we could do whatever we wanted – a designer’s dream and nightmare, folded into one.

Fast forward to 2011, and I feel Android is in a better position. Google, now the smartphone market leader, hired Matias Duarte, former user experience director of Danger’s Sidekick and Palm webOS, earlier this year. I watched him and his team present the Honeycomb UI framework at Google I/O this year, and I recognised a lot of progress in it.

In my view, the UI changes started by Honeycomb are going to make Android easier (and more pleasant) to use. However, Honeycomb is just for tablets: the main challenge will come when the next Android release (a.k.a. Ice Cream Sandwich) arrives in a few weeks, as the same principles will have to support both tablets and handsets.

Here are some of the design challenges Android designers still face nowadays:

  • How do you navigate between the different sections of the app?
  • How do you visualise information?
  • How do you provide feedback while avoiding interrupting the user?

Each app is different and there is no silver bullet to tackle all these questions – it depends on a number of factors.

My goal with this presentation is to look at some of the most remarkable apps on the Android Market and analyse best practices in navigation, fluid and responsive interaction, and information visualisation.

[slideshare id=9619718?rel=0&w=512&h=421&fb=0&mw=0&mh=0&style=border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px&sc=no]

Droidcon 2011 – Android Design patterns from Giorgio Venturi



Pixel Perfect Code: How to Marry Interaction & Visual Design the Android Way
Google I/O. Chris Nesladek. May 27, 2009.

Android Design Patterns
Google I/O. Chris Nesladek, German Bauer, Richard Fulcher, Christian Robertson, Jim Palmer. May 2010.

Designing and Implementing Android UIs for Phones and Tablets
Matias Duarte, Rich Fulcher, Roman Nurik, Adam Powell and Christian Robertson

Android Patterns website


Cracking the mobile user experience at OTA 2011

Check out the Over The Air Flickr gallery

Where were some of the finest code-breakers and scientists gathered during the war to decrypt Nazi messages sent throughout war-torn Europe?

… the answer is Bletchley Park, just south of Milton Keynes, in the UK. During World War II, Bletchley Park was the site of the United Kingdom’s main decryption establishment, where the ciphers and codes of several Axis countries were decrypted. The intelligence produced there, codenamed Ultra, provided crucial assistance to the Allied war effort. Some people even claim that Ultra shortened the war by two to four years, and that the outcome of the war would have been uncertain without it.

Bletchley is one of the places where the history of modern computing was made: even Alan Turing worked there for some time cracking those codes. Bletchley Park is the best possible setting for the fourth Over The Air, the London-centric conference where all flavours of mobile development and design techniques are intensively explored. And having good weather throughout the conference, with temperatures reaching 28°C, was rather unusual for this time of year.

This was my first visit to OTA, and I am extremely pleased by the quality and variety of input I received over the last 36 hours. Here are a few things I jotted down. This is just my take, based on my (design) interests, but I hope somebody else will find it useful:

Nick Butcher (@crafty) from Google illustrated the Android Honeycomb design patterns and pointed out some of the changes from Froyo and Gingerbread. I’ve got to say that I have been rather impressed by the amount of good thought Matias Duarte and his team have put into Honeycomb.

My feeling is that apps now have a solid framework covering signposting, navigation between views and surfacing the app’s most used functions. Another aspect of the framework I really like is designing fragments (i.e. self-sufficient UI modules), which can be either composed into a single view on a large screen or split and navigated separately on smaller screens.
The seeds planted at Google I/O in 2010 with the Twitter app have slowly grown into what I see implemented in Honeycomb. Something I hadn’t realised was how ‘Ice Cream Sandwich’ (please stop them!) will bring these changes to small-screen devices, making Android tablets and smartphones a consistent experience.

From my perspective, it is a joy to see a valid alternative to iOS (though Apple is still at the top of its game). I am looking forward to presenting at Droidcon in a week or so and giving my view of how Android patterns and best practices have evolved over the last three years.

Bruce Lawson (@brucel) from Opera brazenly showcased tips and techniques for coding websites that work at all screen resolutions, using JavaScript, CSS media queries and other optimisation techniques. Despite my superficial knowledge of HTML, CSS and JavaScript, I managed to make some immediate improvements to our Closertag website code. For example, a link with the href="tel:xxx" attribute allows people to tap and call you straight from your site; it seems to work in most mobile browsers.

Lyza Danger Gardner (@lyzadanger) also picturesquely talked about the ‘mobile web’ and quite rightly said that the ‘mobile web’ should just be called the WEB, full stop. Rather than coding for large screens and then creating a different rendering for smaller screens, it is much better to design for smaller screens first. This makes sense considering that the number of people accessing the web ONLY from a mobile device has now exceeded the number of those accessing it ONLY from a desktop 🙂

My favourite presentation was Franco Papeschi’s (@bobbywatson) talk about changing the world one start-up at a time. Franco has recently started working for Tim Berners-Lee’s World Wide Web Foundation, whose goal is to bring the benefits of the Internet to those who need them most: the developing economies in Africa, India and Asia. The Web Foundation has set up a number of initiatives and laboratories in these countries, providing skills and tools where they are needed most. It is absolutely inspiring to see how people of all ages and backgrounds in those countries are eagerly embracing the Internet to change their lives for the better.

Franco concluded by saying he still has no magic recipe for helping these people walk this path, but there are three things we can volunteer to do: COACHING them in new skills, MAKING stuff, and MENTORING them on their way to success.

On a final note, I was particularly impressed by the IGNITE speech format at the end of the first day: five minutes, twenty slides, fifteen seconds each, a format that lets people present ONE idea, no matter how crazy, in a visual, intuitive, emotional and powerful way. I’d love to try presenting at one of these sessions.

Further reading

Twitter for Android: A closer look at Android’s evolving UI patterns

Android User Interface Guidelines

Android UI patterns

Google I/O 2011: Designing and Implementing Android UIs for Phones and Tablets

World Wide Web Foundation

Bruce Lawson

Lyza Danger Gardner

HTML5 Rocks – A resource for open web HTML5 developers

Over the Air 2011 Programme schedule

Mobile UX

A ‘Step Backwards for Gestural Interfaces’, or for the NN Group?

I have just finished reading the article Gestural Interfaces: A Step Backwards In Usability by Donald Norman and Jakob Nielsen.
I would suggest reading the article before reading my critique below.

My immediate gut feeling was that their article didn’t hit the mark this time. Despite my admiration for Donald Norman, and despite sharing a similar (HCI) background with both authors, I feel they failed to acknowledge the degree of change that the iPhone (and the touchscreen smartphones that followed it) has brought to the interaction design field.

In fact, you can agree or disagree with their analysis at a granular level, but I think they make a fundamentally wrong claim at the beginning of their article:

“… the place for such [i.e. gestural interfaces] experimentation is in the lab. After all, most new ideas fail, and the more radically they depart from previous best practices, the more likely they are to fail. Sometimes, a radical idea turns out to be a brilliant radical breakthrough. Those designs should indeed ship, but note that radical breakthroughs are extremely rare in any discipline. Most progress is made through sustained, small incremental steps. Bold explorations should remain inside the company and university research laboratories and not be inflicted on any customers until those recruited to participate in user research have validated the approach.”

Their mindset comes from the HCI of the ’70s and ’80s, when interaction design was still in its infancy and the usability lab was the Holy Grail.

Well, Mr Norman and Mr Nielsen, times have moved on since then. The pace of innovation in interaction design has changed; it is no longer measured in years, but in months or even weeks. There is an incredible amount of ‘bold exploration’ in the mobile space, where Google and Apple are the key players and the others are just following. Mobile patterns are unfolding into other spaces as well, with interaction design patterns gradually spreading to web and ‘desktop’ interfaces.

Sure, there are problems with rapid, agile development approaches, and the usability issues are pretty obvious, especially on Android. But we cannot expect companies to wait for usability people to test devices to death before release. The ‘incremental steps’ are going to happen in the market, not in the lab; Apple and Google can roll out changes in a matter of weeks with an OS update.

In my view, the smartphone market is still in the early-adopter, ‘high technology’ phase (see below); as soon as the market matures and crosses the ‘transition point’, people will choose their phones based not on technology and features, but on the quality of the user experience. Norman knows this well, of course, as the figure below is taken from his book ‘The Invisible Computer’, published more than 10 years ago (1998).

At a granular level, Nielsen and Norman hit a few good chords when talking about consistency and the lack of standards:

“… the rush to develop gestural interfaces – “natural” they are sometimes called – well-tested and understood standards of interaction design were being overthrown, ignored, and violated. Yes, new technologies require new methods, but the refusal to follow well-tested, well-established principles leads to usability disaster.”

However, they also have to admit:

“The first crop of iPad apps revived memories of Web designs from 1993, when Mosaic first introduced the image map that made it possible for any part of any picture to become a UI element. As a result, graphic designers went wild: anything they could draw could be a UI, whether it made sense or not. It’s the same with iPad apps: anything you can show and touch can be a UI on this device. There are no standards and no expectations”.

Well, that is exactly the point: the first years of the web were pretty much the same; it took a few years for good UI patterns to spread and consolidate. The same thing will happen in the mobile space.

Natural gestures

Another good chord concerns the use of natural gestures:
“In Apple Mail, to delete an unread item, swipe right across the unopened mail and a dialog appears, allowing you to delete the item. Open the email and the same operation has no result. In the Apple calendar, the operation does not work. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

With the Android, pressing and holding on an unopened email brings up a menu which allows, among other items, deletion. Open the email and the same operation has no result. In the Google calendar, the same operation has no result. How is anyone to know, first, that this magical gesture exists, and second, whether it operates in any particular setting?

Whenever we discuss these examples with others, we invariably get two reactions. One is “gee, I didn’t know that.” The other is, “did you know that if you do this (followed by some exotic swipe, multi-fingered tap, or prolonged touch) the following happens?” Usually it is then our turn to look surprised and say “no, we didn’t know that.” This is no way to have people learn how to use a system”.

I agree with that; however, a good designer must know that a natural gesture should be treated as an ‘expert shortcut’, not as the only way to access a function:

–    In the Apple Mail example above, it is also possible to delete the email from within the email itself.
–    In the Android example, pressing and holding brings up the ‘contextual menu’; Google explicitly says it should only be used as an ‘alternative way of accessing a function’ (it is the equivalent of the right mouse button).

Back button

On the usage of the back button in Android:

“In the Android, the back button moves the user through the activities stack, which always includes the originating activity: home. But this programming decision should not be allowed to impact the user experience: falling off the cliff of the application on to the home screen is not good usability practice. (Note too that the stack on the Android does not include all the elements that the user model would include: it explicitly leaves out views, windows, menus, and dialogs)”.

I totally agree with the above; the back/undo button behaves inconsistently across applications, and this should be addressed. But it is a ‘design execution’ problem, and it will be resolved over time as the development framework matures. They also miss the mark on Apple’s UX, though:

“Both Apple and Android recommend multiple ways to return to a previous screen. Unfortunately, for any given implementation, the method used seems to depend upon the whim of the designer. Sometimes one can swipe the screen to the right or downwards. Usually, one uses the back button. In the iPhone, if you are lucky, there is a labeled button.”

Well, Apple does strongly recommend the standard Back button at the top left, and it is consistently used in Apple’s signature applications. It is also well documented in the Apple HCI guidelines, and most (good) app developers seem to get the principle; ultimately, it is the developer’s choice not to apply the ‘back button’ pattern.


“The true advantage of the Graphical User Interface, GUI, was that commands no longer had to be memorized. Instead, every possible action in the interface could be discovered through systematic exploration of the menus. Discoverability is another important principle that has now disappeared. Apple specifically recommends against the use of menus. Android recommends it, even providing a dedicated menu key, but does not require that it always be active. Moreover, swipes and gestures cannot readily be incorporated in menus: So far, nobody has figured out how to inform the person using the app what the alternatives are.”

Menus can be good for feature discovery on a large screen, but they are not a viable alternative on such small screens. Recently, some applications have pushed forward a ‘dashboard’ design pattern that lets users get an idea of an app’s main features (e.g. Facebook and LinkedIn on iPhone, Twitter on Android). Again, it takes time for good interaction design patterns to emerge and consolidate.


I feel that this essay is slightly off the mark; Norman and Nielsen seem more interested in cashing in their credibility in the ‘booming’ mobile and tablet user experience design market by firing their guns at the whole industry. They see the glass as ‘half empty’ and fail to acknowledge the outstanding value that the mobile revolution has brought to people in terms of playful interactions, voice-based interaction and wayfinding in the physical world.

Just consider the following:

  1. With gestural and haptic interfaces, interaction has become more playful; people engage with these devices in a completely different way from desktops, and as a result they develop a much more intimate emotional attachment to their devices and apps.
  2. Search becomes contextual on a mobile device; the phone browser knows the user’s location and can provide location-based results.
  3. Consequently, mobile maps become an extremely powerful way to ‘augment’ our understanding of the environment: users can search or browse for nearby shops, goods, people and recommendations in a quick and intuitive way.
  4. Voice becomes an alternative way to interact with the device; thanks to Google and cloud computing, we can now speak to our phones and get an answer in return.

The three years from 2008 to 2010 mark another phase of the Internet revolution in how people use technology to mediate their relationship with their environment and social networks. In the coming years, we will witness the further expansion of this revolution to less technologically savvy users: some people will experience the Internet for the first time through their mobile phones.