UX evolution as the user’s skills and needs improve over time.

Posted on September 1, 2017


I’ve been thinking about the use of AI in apps and web sites.
Currently a lot of effort is going into the data science aspects: using AI to make sense, at a human scale, of huge volumes of data.
Separately, there is a strong movement towards conversational interfaces, where the AI processes your requests through text, speech, or gestures.
All well and good.
I am thinking, however, that we need to infuse AI into the interface design itself, not just into the services that handle the actions. The way the app works, the way it displays data and actions, and the order in which things happen should all be smartly matched to the user. To me, not to you or Tom or Dick or Harry.
What I mean by this is that currently most apps have fundamentally the same interface for everyone. The designers and product owners will have carefully, and often in great detail, thought about the person using their app or service, and will generate personas that guide how the interface and the actions are designed.
The thing is, there is often only one main persona. OK, we have personalisation, which tries to use your browsing and action data, and the data of similar users, to re-order content, products and so on, but this is just filtering and sorting.
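To make the contrast concrete, here is roughly what personalisation-as-sorting amounts to: a minimal sketch, with invented data, that just re-ranks items by how often the user has clicked their category. Nothing about the interface itself changes.

```python
# A minimal sketch of personalisation-as-sorting: items are re-ranked by how
# often the user interacted with their category. All names and data here are
# hypothetical, purely for illustration.
from collections import Counter

def rerank(items, history):
    """Re-order items so categories the user clicks most come first.

    items:   list of (item_id, category) tuples
    history: list of categories from past clicks
    """
    weight = Counter(history)
    # Stable sort: most-clicked categories first; original order otherwise.
    return sorted(items, key=lambda it: -weight[it[1]])

items = [("a1", "shoes"), ("b2", "books"), ("c3", "music")]
history = ["books", "books", "music"]
print(rerank(items, history))  # books first, then music, then shoes
```

Useful, but note that the app still works the same way for everyone; only the order of content moves.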
While this is a good starting point, it's not such a good ending. I want the way the app works to reflect me and the way I think, behave and act.
Yes, start me off with the newbie interface, but as I learn the app and become experienced, and show tendencies towards particular actions or behaviours, then learn from this.
Curate: either show me what I'm missing, or enhance the functions I use most so they work better for me. That is, select the ways of working for me, curating from the available options.
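That curation step could start as something as simple as this sketch: promote the features a user actually exercises, and pick one they have never touched to surface as a suggestion. Feature names and the usage log are assumptions for illustration.

```python
# A hypothetical sketch of curation from a usage log: promote the most-used
# features and surface one unused feature the user may be missing.
from collections import Counter

def curate(all_features, usage_log, promote=3):
    counts = Counter(usage_log)
    promoted = [f for f, _ in counts.most_common(promote)]
    unused = [f for f in all_features if f not in counts]
    suggestion = unused[0] if unused else None
    return promoted, suggestion

features = ["search", "share", "annotate", "export", "compare"]
log = ["search", "search", "share", "search", "compare", "share"]
promoted, suggestion = curate(features, log)
# promoted -> ["search", "share", "compare"]; suggestion -> "annotate"
```

A real system would of course learn richer models than raw frequency, but the shape of the idea is the same: the set of things on screen follows from what I do, not from one persona.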
Take into account the context of use and use it to adjust the interface, based on algorithms built on models of behaviour. Remember I may be operating across channels (voice, mobile app, web, PC, car, TV, or all of the above over the course of a day), so use the rich context of each. Not just location and channel type, but also whether I am moving and, if so, how: train, car, walking, or standing still. And not just right now, but over the last while, so that the app acts appropriately, e.g. does not offer me a video to watch when that would be unsafe.
Sometimes I need or want simple, e.g. when driving or walking, or at certain times of day; in other scenarios I want the full-on experience, I want the richness, I need to be sated. Listen to the microphone: is it noisy where the app is being used? If so, don't assume you can talk to me without a visual.
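Those signals (motion, ambient noise, channel) could feed a presentation chooser along these lines. This is a deliberately crude rule-based sketch; a real system would use learned behaviour models, and the field names and noise threshold are assumptions.

```python
# A rule-based sketch of context-driven presentation. The context fields and
# the 70 dB noise threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Context:
    motion: str        # "still", "walking", "car", "train"
    noise_db: float    # ambient noise level from the microphone
    channel: str       # "mobile", "pc", "tv", "car", "voice"

def choose_presentation(ctx: Context) -> dict:
    unsafe_for_video = ctx.motion in ("car", "walking")
    too_noisy_for_voice = ctx.noise_db > 70
    return {
        "allow_video": not unsafe_for_video,
        "use_voice": not too_noisy_for_voice,
        # Fall back to a visual when voice alone will not be heard.
        "show_visual": too_noisy_for_voice or ctx.channel != "voice",
    }

print(choose_presentation(Context("car", 80.0, "car")))
# -> no video, no voice-only interaction, show a visual instead
```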
Look ahead: use predictive analytics to make the choice for me before I know I need to make it, and let me know you've done so. Don't just curate and coordinate; alter the very structure of the interface to suit.
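The "decide before I ask, then tell me" loop can be sketched as follows: predict the likely next action from past behaviour at a similar time of day, pre-apply it, and record a notification so the user stays in control. The history data and action names are invented for this toy example.

```python
# A toy sketch of predictive pre-selection with a visible notification.
# Frequency at a similar hour stands in for a real predictive model.
from collections import Counter

def predict_action(history, hour):
    """history: list of (hour, action). Returns the most frequent action
    seen within one hour of the given time, or None if there is no data."""
    nearby = [a for h, a in history if abs(h - hour) <= 1]
    if not nearby:
        return None
    return Counter(nearby).most_common(1)[0][0]

def act_and_notify(history, hour):
    action = predict_action(history, hour)
    if action is None:
        return None, None
    # Telling the user what was done, with an undo, keeps trust intact.
    return action, f"I've set up '{action}' for you. Tap to undo."

history = [(8, "show_commute"), (8, "show_commute"),
           (9, "show_commute"), (20, "play_news")]
action, note = act_and_notify(history, 8)
# action -> "show_commute", plus an undo-able notification
```

The key design choice is the notification: making the choice silently would break the user's mental model, while announcing it (with undo) lets the interface evolve without losing trust.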
Now many folks will be thinking: hang on, a user needs to feel at home, and to have a known mental model of the app and its actions, to be able to use it. That is true, but the model is not static; change it over time so that it improves for each user.
IBM has described this role for AI, and I want it: make my interface to your shop, or whatever, responsive to the emotional intelligence I show when using your service.
Now combine that with AI-infused product or service selection and profiling, and you are on to a winner.
Think of this not as simple UX but as Intelligent eXperience, IX.
Easier said than done, but hey…
Posted in: Apps, Architecture, mobile