Article

Crafting your Product AI Principles: Taking Cues from Apple & Google on the Themes of Reliability

Jack Cunningham

October 20, 2023

Lighthouse illustration

With the rise of Generative Artificial Intelligence (GenAI), the surface area in which digital teams can explore and develop more meaningful products has greatly increased. At the same time, teams are grappling with questions of safety, reliability, and privacy. In this period of rapid innovation, many groups are seeking a principled approach to exploring generative AI opportunities more safely. Following the lead of consumer products organizations such as Canva, Grammarly, and Adobe, leaders have taken to establishing organizational guiding principles for AI. Developing guiding principles should be seen as a critical first step. So where do you get started? To help with this question, we have been reviewing the patterns emerging within the industry and have found Apple’s Human Interface Guidelines and Google’s AI Principles to be effective launch pads for sparking discussions within organizations.

Leaning on Apple’s Human Interface Guidelines

For all of the functionality its platforms enable, Apple publishes a set of design principles and standards that help teams create easy-to-use, intuitive, and consistent interfaces. These standards serve Apple and development teams in distinct ways. Apple wants the experiences built on its platform to be of the highest quality possible; without a published standard, it would have no measuring stick to assess apps against. For developers, this body of work provides well-researched, well-reasoned guidance to learn from without having to reinvent the wheel with each new product released on the platform.

True to form, Apple’s team offers a fairly concise synopsis of considerations for Machine Learning that can greatly assist the effort to discover your own team’s principles for developing AI. Their approach to designing for Machine Learning and AI can be borrowed to evaluate your own. Let’s take a look at how you might go about doing that.



1. Critical or Complementary?

When making judgments about the right and wrong places to infuse AI-powered functionality, teams need to consider how it will integrate with the jobs users are trying to accomplish. Are we inserting AI into key flows that facilitate the completion of a core event, or are we infusing this work into secondary flows? The answer to this question implicitly sets a standard for how confident and mature the results you serve up need to be. Apple summarizes this implication by stating,

In general, the more critical an app feature is, the more people need accurate and reliable results. On the other hand, if a complementary feature delivers results that aren’t always of the highest quality, people might be more forgiving. (source)

In practice, your principles should clearly define where AI is being inserted so you can prioritize development resources, set user expectations, and establish a threshold for when functionality is ready for prime time.
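To make that tradeoff concrete, here is a minimal sketch of how a team might encode such a threshold. The `FeaturePlacement` type and the specific confidence values are illustrative assumptions, not anything Apple prescribes; the point is simply that critical placements demand a higher bar before results are surfaced.

```swift
import Foundation

// Hypothetical names and thresholds for illustration only.
enum FeaturePlacement {
    case critical      // sits in a core flow users depend on
    case complementary // enhances a secondary flow

    /// The minimum model confidence we are willing to show users.
    /// Critical placements demand a higher bar, echoing the HIG's note
    /// that critical features need more accurate and reliable results.
    var minimumConfidence: Double {
        switch self {
        case .critical:      return 0.95
        case .complementary: return 0.70
        }
    }
}

/// Decide whether a generated result is ready to surface,
/// given where in the product it will appear.
func shouldSurface(resultConfidence: Double, placement: FeaturePlacement) -> Bool {
    resultConfidence >= placement.minimumConfidence
}

// The same result can be good enough for a complementary suggestion
// while falling short of the bar for a critical flow.
print(shouldSurface(resultConfidence: 0.8, placement: .complementary)) // true
print(shouldSurface(resultConfidence: 0.8, placement: .critical))      // false
```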

2. Private or Public?

The effectiveness of GenAI hinges on the underlying data your team uses to provide results. Every product optimization you explore is intrinsically linked to the quality and type of data that feeds into the models. A shared understanding of this can dramatically shape the direction and efficacy of the app features your team explores. By assessing the public or private nature of your available data, your team can evaluate the optimal times and places for integrating AI solutions in order to serve your users most effectively. Apple’s rule of thumb to consider is,

…The more sensitive the data, the more serious the consequences of inaccurate or unreliable results… (source)

Explore conversations with your team to ensure your guidelines address the critical need for data literacy and a shared respect for data sensitivity. With each move your team considers making, account for the fact that getting something wrong while using sensitive information can dramatically worsen the experience for your users.
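One way to ground that conversation is to sketch a simple sensitivity gate that sits in front of any model call. The tiers, type names, and policy below are hypothetical stand-ins for whatever classification scheme your organization actually agrees on:

```swift
import Foundation

// Illustrative sensitivity tiers; a real team would define its own.
enum DataSensitivity: Int, Comparable {
    case publicData = 0   // marketing copy, published docs
    case internalData = 1 // non-public but low-risk business data
    case personalData = 2 // user-identifying or regulated data

    static func < (lhs: DataSensitivity, rhs: DataSensitivity) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

struct PromptInput {
    let text: String
    let sensitivity: DataSensitivity
}

/// Only include inputs at or below the sensitivity ceiling the team
/// has agreed a given model integration may receive.
func inputsAllowed(_ inputs: [PromptInput], ceiling: DataSensitivity) -> [String] {
    inputs.filter { $0.sensitivity <= ceiling }.map(\.text)
}

let inputs = [
    PromptInput(text: "Product FAQ excerpt", sensitivity: .publicData),
    PromptInput(text: "Customer billing address", sensitivity: .personalData),
]

// If a third-party model call is capped at internal data,
// the billing address never leaves the building.
print(inputsAllowed(inputs, ceiling: .internalData)) // ["Product FAQ excerpt"]
```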

3. Proactive or Reactive?

Understanding what your users expect is simply sound product practice, and that goes unchanged as you explore GenAI. Are you offering up an experience proactively, before the user expects it, or reactively, when the user has taken action to engage with it? If your functionality is discovered by surprise, you should assume the bar has been set higher as consumers assess whether they are truly delighted by your work. Consumers who are general explorers of AI behave with vastly different expectations than those who are trying to complete a key task and bump into AI. The Human Interface Guidelines sums up this phenomenon well:

Because people don’t ask for the results that a proactive feature provides, they may have less tolerance for low-quality information. To reduce the possibility that people will find proactive results intrusive or irrelevant, you may need to use additional data for the feature. (source)

This lens should help your team preserve a human-centered approach to its AI exploration. Rather than pursuing technology for its own sake, ensure that your principles guide your team back toward providing value derived from a deep understanding of consumer needs. Also, take a stance on how open you feel your team must be about the inclusion or exclusion of AI at certain points of the user journey.

4. Visible or Invisible?

A beautiful interface is only as effective as the systems that feed the experience. Today, teams make active choices about what they wish to surface to a user and what tasks are performed in the background. What is made visible to a consumer inherently shapes their expectations of the outcomes and utility they require from a tool. As you discuss your AI ambitions, Apple notes the importance of reviewing how visible your work is to your users, acknowledging,

With a visible feature, people form an opinion about the feature’s reliability as they choose from its results. It’s harder for an invisible feature to communicate its reliability — and potentially receive feedback — because people may not be aware of the feature at all. (source)

As your team explores its principles, ensure that you address the levels of consumer agency and visibility the group requires. If it is critical that your users see you as a technology-forward leader, and that is why they seek you out, find ways to make as much visible as possible.

5. Explicit or Implicit?

GenAI models have advanced rapidly over the last several years, with one of the key drivers being commercialization and the intentional solicitation of feedback. Your team’s approach to incorporating GenAI must account for some type of feedback loop to ensure that what is being generated is meeting the mark for your consumers. Spend time exploring and discussing whether this feedback will be explicitly solicited from consumers in the moment of use, or whether implicit actions will be used to drive forward momentum and evaluate success. Apple notes how distinct and useful the two types of feedback loops can be:

Explicit feedback provides actionable information your app can use to improve the content and experience it presents to people. Unlike implicit feedback — which is information an app gleans from user actions — explicit feedback is information people provide in response to a specific request from the app. (source)

This fifth lens should help your team discuss and review how you plan to close the loop and assess generative success. Is there a bare minimum of feedback capture you will require your teams to comply with? Ensure that your principles address this concern before you get started.
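If it helps the discussion, here is a minimal sketch of how the two kinds of feedback might be captured side by side. The type names and the in-memory log are hypothetical stand-ins for whatever analytics pipeline your team already runs; the distinction between a direct rating and an inferred signal is the part worth keeping.

```swift
import Foundation

// Hypothetical types for illustration; not tied to any real analytics SDK.
enum GenerationFeedback {
    /// Explicit: the user answered a direct prompt, e.g. thumbs up or down.
    case explicitRating(isPositive: Bool)
    /// Implicit: inferred from behavior, e.g. the user kept or discarded the output.
    case implicitSignal(acceptedResult: Bool)
}

struct FeedbackEvent {
    let generationID: UUID
    let feedback: GenerationFeedback
    let timestamp: Date
}

final class FeedbackLog {
    private(set) var events: [FeedbackEvent] = []

    func record(_ feedback: GenerationFeedback, for generationID: UUID) {
        events.append(FeedbackEvent(generationID: generationID,
                                    feedback: feedback,
                                    timestamp: Date()))
    }
}

// The same generation can accumulate both kinds of signal.
let log = FeedbackLog()
let id = UUID()
log.record(.explicitRating(isPositive: true), for: id)     // user tapped "helpful"
log.record(.implicitSignal(acceptedResult: true), for: id)  // user kept the draft
print(log.events.count) // 2
```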

6. Espoused or Enacted?

The process of aligning a team around foundational principles appeals to and serves both the forward-thinking visionary and the practical realist. However, in order to sustain a positive product culture and avoid potential blunders, you need to pair processes and principles with the human element: an intentionally built culture. Evaluate whether your principles are merely espoused and aspirational or whether they are well fulfilled in your work today. This is where layering in one of Google’s approaches becomes key. Google revisits its principles, and its definitions of the AI applications it is unwilling to build, on a regular cadence. Not only does this mean it is continually investigating its approach and keeping pace with rapid advancement in the space, it is also building team alignment by revisiting the process. A regular cadence can help uncover unintended consequences and create a sense of pride in adhering to and living out the principles the group has established. (Review the evolution of Google’s AI Principles here.)

Apple’s Human Interface Guidelines and Google’s continuous reassessment of its AI Principles provide valuable insight for sparking the conversations needed to define ownable guardrails for teams beginning their exploration of AI.



Jack investigates opportunities at Livefront, helping teams explore and refine what is possible for their products.