CarOS

Exploring a safer infotainment system

Car infotainment UIs are safety-critical systems: any time spent not looking at the road is a risk. Until our cars can fully drive themselves, there are ways to make these systems safer.

Reducing complexity

This first prototype tries to simplify the multitasking model inherited from touch interfaces on phones. In cars, showing many options or deep navigation flows can be overwhelming. It also forces buttons to be smaller, which makes them harder to hit (Fitts's law).
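
As a refresher, Fitts's law models the time needed to hit a target as a function of its distance and size. In the common Shannon formulation:

    MT = a + b \log_2\!\left( \frac{D}{W} + 1 \right)

where MT is the movement time, D the distance to the target, W the target's width along the axis of motion, and a, b empirically fitted constants. Shrinking W raises the index of difficulty, so smaller buttons take longer (and demand more attention) to hit.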

Here, the system focuses on the two most common infotainment tasks: navigation and music. By displaying both with a limited number of buttons, and automatically adjusting the size of each app based on the user's intent, the system becomes dual-tasking (similar to the iPhone's Dynamic Island) and keeps the navigation visible at all times.
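
A minimal SwiftUI sketch of this dual-tasking layout, with hypothetical placeholder panes and a single focus state (the real prototype would infer focus from the driver's intent rather than from taps):

    import SwiftUI

    // Which app currently has the driver's attention.
    enum Focus { case navigation, music }

    // Placeholder panes standing in for the real apps.
    struct NavigationPane: View { var body: some View { Color.blue } }
    struct MusicPane: View { var body: some View { Color.black } }

    struct DualTaskView: View {
        @State private var focus: Focus = .navigation

        var body: some View {
            VStack(spacing: 8) {
                NavigationPane()
                    // Navigation never disappears; it only shrinks.
                    .frame(maxHeight: focus == .navigation ? .infinity : 180)
                    .onTapGesture { withAnimation { focus = .navigation } }
                MusicPane()
                    .frame(maxHeight: focus == .music ? .infinity : 120)
                    .onTapGesture { withAnimation { focus = .music } }
            }
        }
    }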

To reduce the overall distraction caused by this dual interface, backgrounds and colors fade as soon as the system is idle (similar to the always-on display on iPhones).

Although invisible, haptic feedback plays an important part here: every action is confirmed, with variations that let users develop a simple mental model around it (success, failure, type of action). I still believe the future of touchscreen interfaces is to integrate localized haptic feedback, partially replacing physical controls with more flexibility.
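
As a sketch of what that haptic vocabulary could look like, here is a hypothetical mapping using iOS's built-in feedback generators, as a stand-in for whatever haptic engine the head unit actually exposes:

    import UIKit

    // One small haptic vocabulary for the whole system,
    // so drivers can learn it over time.
    enum ActionFeedback {
        case success    // action confirmed (e.g. track changed)
        case failure    // action rejected (e.g. unavailable while driving)
        case selection  // scrubbing through a list of items

        func play() {
            switch self {
            case .success:
                UINotificationFeedbackGenerator().notificationOccurred(.success)
            case .failure:
                UINotificationFeedbackGenerator().notificationOccurred(.error)
            case .selection:
                UISelectionFeedbackGenerator().selectionChanged()
            }
        }
    }

Every touch handler would end with a single ActionFeedback call, keeping the vocabulary small enough to become a reliable mental model.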

In this music app, swipe gestures go to the previous or next track, reducing the need for additional buttons. Browsing the library is performed with a swipe down, which expands the app window to make track selection easier.
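
A minimal SwiftUI sketch of this gesture model, with a hypothetical track list: the direction test routes horizontal swipes to track changes and downward swipes to expansion.

    import SwiftUI

    struct MusicPaneGestures: View {
        @State private var isExpanded = false
        @State private var trackIndex = 0
        let tracks = ["Track A", "Track B", "Track C"]

        var body: some View {
            Text(tracks[trackIndex])
                .frame(maxWidth: .infinity,
                       maxHeight: isExpanded ? 400 : 120)
                .background(Color.black)
                .foregroundColor(.white)
                .gesture(
                    DragGesture(minimumDistance: 30).onEnded { value in
                        let dx = value.translation.width
                        let dy = value.translation.height
                        if abs(dx) > abs(dy) {
                            // Horizontal swipe: previous / next track.
                            let step = dx < 0 ? 1 : -1
                            trackIndex = (trackIndex + step + tracks.count)
                                         % tracks.count
                        } else if dy > 0 {
                            // Swipe down: expand for easier selection.
                            withAnimation { isExpanded = true }
                        }
                    }
                )
        }
    }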

Human centered, not car centered

Most car UIs today start with a search box. There are ways to make this faster based on context: time of day, routines, recent searches that are relevant in the car, or even locations vaguely discussed in messages. Putting these pieces together to surface the 1-3 most likely destinations seems attainable for an AI assistant with access to OS-level data. Once the destination is known, the car should suggest a single route by default, to reduce decision fatigue.
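
A naive sketch of that ranking step, with hypothetical signal types and weights; a real assistant would use a learned model, this only shows the shape of the idea:

    import Foundation

    // Hypothetical context signal, gathered with the user's consent:
    // time-of-day routines, recent searches, places mentioned in messages.
    struct DestinationSignal {
        let destination: String
        let weight: Double   // how predictive this kind of signal is
        let recency: Double  // 0...1, newer evidence counts more
    }

    // Sum the weighted evidence per destination and keep the top three.
    func topDestinations(from signals: [DestinationSignal],
                         limit: Int = 3) -> [String] {
        var scores: [String: Double] = [:]
        for signal in signals {
            scores[signal.destination, default: 0] += signal.weight * signal.recency
        }
        return scores.sorted { $0.value > $1.value }
                     .prefix(limit)
                     .map(\.key)
    }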

The prototype below covers a common problem with EV trips: when and where to charge.

One design principle here is to minimize mental mapping. A lot of map UIs show a map and a list of locations side by side. This has advantages (more places fit, the pattern is familiar, the list never moves), but it forces users to look back and forth. By showing lunch places directly on the map, the driver can better visualize where they are in relation to the charger. Additionally, tap targets can be made bigger and spread farther apart, which requires less aiming.

Note that the destination card is focused on charging, since this can impact the entire trip. If the driver decides not to charge, the system will continue the conversation and adjust the trip to arrive with more battery left. Today, this thinking is left to the driver.

In the next prototype below, the interface is also centered on human needs: knowing that one charge is required and that the trip will still be underway around lunch time, the system conveniently places the stop at noon. From there, the user can decide in a few taps to combine the stop with a lunch break.
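
A toy sketch of that placement logic, with hypothetical types; real placement would also have to respect charger locations along the route:

    import Foundation

    // If a charging stop is needed and the trip spans lunch time,
    // snap the stop to noon so it can double as a lunch break.
    struct ChargingStop {
        var time: Date
        var combinedWithLunch: Bool
    }

    func placeChargingStop(departure: Date, arrival: Date,
                           calendar: Calendar = .current) -> ChargingStop {
        if let noon = calendar.date(bySettingHour: 12, minute: 0, second: 0,
                                    of: departure),
           departure <= noon, noon <= arrival {
            // The trip is underway at lunch time: charge at noon.
            return ChargingStop(time: noon, combinedWithLunch: true)
        }
        // Otherwise fall back to the trip midpoint (naive placeholder).
        let midpoint = departure.addingTimeInterval(
            arrival.timeIntervalSince(departure) / 2)
        return ChargingStop(time: midpoint, combinedWithLunch: false)
    }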

This principle can extend to many kinds of stop opportunities (coffee, bathroom break, rest after a long drive) and should work permanently in the background. If a traffic jam suddenly forms, the system should also suggest using the opportunity to stop and charge.

Note the car status in the top left. In the cold, EVs lose range, and it can be hard to gauge the impact. Displaying this during the planning phase puts the information at the right time, and helps build trust by reassuring the driver that things are under control.

Design principles

Some principles I've tried to use when designing these prototypes:

Eyes on the road: adjusting the temperature should not require looking away. Physical buttons solve this, but too many can cause slips. Screens should give instant feedback.

Don't distract me: nonessential elements should be reduced to a minimum. Animations should be short and always serve a purpose.

Adapt to the situation: what does the driver need to see right now? Ideally, the system should show what is likely to be used at a given moment, without shuffling things around.

Use the right input: voice (with LLMs) could help clarify ambiguous commands. Voice output is slow though, and can't always replace visual, haptic, or acoustic feedback.

Cooperate, don't delegate: to avoid over-reliance on automation, the system should state its capabilities honestly, build trust gradually, and cooperate with the driver on tasks.

Next steps

I have lots of ideas to improve and refine this concept. One of them is designing an input that uses voice and visual feedback to quickly give and refine instructions to the system, for example when the trip needs to change along the way.