Deriving user context from gyroscopes

Imagine you're rushing to catch a train.

While you're walking, you open a mobile banking app to glance at your checking and savings accounts. Once you're seated on the train, you hold the phone in your lap for more involved tasks: reviewing a portfolio, transferring money, and so on. After leaving the station, you walk into your office and place your phone facedown on a conference table.

This use case can be broadly described as "mobile," but it encompasses three distinct contexts:

  1. Browsing while you walk
  2. Performing detailed actions while seated
  3. Disengaging from the app

The gyroscopes that tell native and web-based apps how a user is moving are nothing new, but few apps derive UX intelligence from the sensors. Imagine how we might change an interface to accommodate a customer in each situation:

  1. Browsing while you walk: Call attention to a high-level overview of accounts. Deemphasize buttons for complicated financial transactions.

  2. Performing detailed actions while seated: Expose the more complicated options along with account information that requires extra reading.

  3. Disengaging from the app: If we know a customer is inactive because her phone is facedown on a table, we might shorten the session timeout for security purposes.

These examples are purely illustrative. In a real setting we would research the customer's goals for each context instead of making blind assumptions.
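To make the idea concrete, here is a minimal sketch of how a web app might bucket a session into these three contexts. It listens to the devicemotion event (a sibling of the Device Orientation API's deviceorientation event); the thresholds, window size, and names like classifyContext and UserContext are placeholders invented for illustration, not tuned values.

```typescript
// Illustrative sketch only: coarse context classification from device motion.
// Thresholds, window size, and names are invented for this example, not tuned values.

type UserContext = 'walking' | 'stationary' | 'facedown';

const WINDOW_SIZE = 50;             // roughly one second of samples at a typical 50 Hz rate
const WALK_VARIANCE_THRESHOLD = 2;  // variance in m/s^2 that suggests a walking gait
const FACEDOWN_Z_THRESHOLD = -8;    // gravity mostly along -z => screen facing the table

const magnitudes: number[] = [];
let facedown = false;

window.addEventListener('devicemotion', (event: DeviceMotionEvent) => {
  const g = event.accelerationIncludingGravity;
  if (!g || g.x == null || g.y == null || g.z == null) return;

  // Per the spec's convention, a phone lying screen-up reports z of roughly +9.8;
  // screen-down flips the sign, which is our "disengaged" signal.
  facedown = g.z < FACEDOWN_Z_THRESHOLD;

  // Keep a sliding window of total-acceleration magnitudes.
  magnitudes.push(Math.hypot(g.x, g.y, g.z));
  if (magnitudes.length > WINDOW_SIZE) magnitudes.shift();
});

function classifyContext(): UserContext {
  if (facedown) return 'facedown';
  if (magnitudes.length < WINDOW_SIZE) return 'stationary';

  // A walking gait shows up as rhythmic variance in acceleration magnitude;
  // sitting still keeps the signal close to plain gravity.
  const mean = magnitudes.reduce((sum, m) => sum + m, 0) / magnitudes.length;
  const variance =
    magnitudes.reduce((sum, m) => sum + (m - mean) ** 2, 0) / magnitudes.length;

  return variance > WALK_VARIANCE_THRESHOLD ? 'walking' : 'stationary';
}

// Re-evaluate periodically and let CSS or view logic react, e.g. simplify
// navigation while walking or shorten the session timeout once facedown.
setInterval(() => {
  document.body.dataset.userContext = classifyContext();
}, 1000);
```

A production version would also want smoothing and debouncing so the interface doesn't flicker between contexts every time the phone jostles.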

Although gyroscopes and the Device Orientation API are long-standing features of the mobile Web, I haven't seen any apps mine them for UX intel. By slicing "mobile" into specific contexts based on the user's body motion, we can create a smoother, lower-friction experience for customers on the go.
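One practical caveat if you try this on the web: on iOS Safari 13 and later, motion and orientation events only fire after the user explicitly grants permission, and the prompt must be triggered from a user gesture such as a tap. A rough sketch follows; the #enable-motion button and the enableMotionSensing helper are hypothetical names for this example.

```typescript
// iOS Safari (13+) gates motion/orientation events behind an explicit permission
// prompt that must be triggered by a user gesture. The button id below is hypothetical.

async function enableMotionSensing(): Promise<boolean> {
  const MotionEvent = DeviceMotionEvent as typeof DeviceMotionEvent & {
    requestPermission?: () => Promise<'granted' | 'denied'>;
  };

  if (typeof MotionEvent.requestPermission === 'function') {
    return (await MotionEvent.requestPermission()) === 'granted';
  }

  // Browsers without the permission hook expose the events directly.
  return true;
}

document.querySelector('#enable-motion')?.addEventListener('click', async () => {
  if (await enableMotionSensing()) {
    // Safe to attach the devicemotion / deviceorientation listeners here.
  }
});
```

With that gate in place, listeners like the ones sketched above can start feeding context back into the interface.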