Fastlane is widely used by iOS teams all around the world. It has become the de facto standard for automating common tasks such as building apps, running tests, and uploading builds to App Store Connect. Fastlane has recently been moved under the Mobile Native Foundation which is amazing as Google
CloudWatch is great for observing and monitoring resources and applications on AWS, on premises, and on other clouds.
While it's trivial to have the agent running on Linux, it's a bit more involved for Mac instances (which are commonly used as CI workers). The support was
Amazon Web Services (AWS) provides EC2 Mac instances commonly used as CI workers. Configuring them can be either a manual or an automated process, depending on the DevOps and Platform Engineering experience in your company. No matter what process you adopt, it is sometimes useful to log into the
I previously wrote about JustTweak here. It's the feature flagging mechanism we've been using at Just Eat Takeaway.com to power the iOS consumer apps since 2017. It's proved to be very stable and powerful, and it has evolved over time. Friends have heard
Hey there!
I had the pleasure to talk at Swift Heroes Digital on October 1, 2020.
The talk "Scalable Modular iOS Architecture" is about the unfolding of a multi-year iOS vision at Just Eat, restructuring the whole app from the ground up to make it modular and global
I wrote the first version of iHarmony in 2008. It was the very first iOS app I gave birth to, combining my passion for music and programming. I remember buying an iPhone and my first Mac with the precise purpose of jumping on the apps train at a time
Implementing Authorization on mobile can be tricky. Here are some recommendations to avoid common issues.
Originally published on the Just Eat Engineering Blog.
Overview
Modern mobile apps are more complicated than they used to be back in the early days, and developers have to face a variety of interesting problems.
How the iOS team at Just Eat built a scalable architecture to support navigation and deep linking.
Originally published on the Just Eat Engineering Blog.
In this article, we propose an architecture to implement a scalable solution to Deep Linking on iOS using an underlying Flow Controller-based architecture, all powered
Edit: in 2020, Will Larson published Staff Engineer, the first book that properly reasons about Staff+ roles. I cannot recommend the book, the articles, and the podcast enough. You can find all about them at staffeng.com.
To extend the tech career ladder, a number of roles have been introduced
Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.
Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.
Three-act structure in movies (© 2024 StudioBinder. Image used with permission from StudioBinder.)
Use storytelling as a structure to do research
It's sad to say, but many have come to see research as being expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can so easily miss out on solving users’ real problems. To remain user-centered, this is something we should avoid. User research elevates design. It keeps it on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.
In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.
Act one: setup
The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.
Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”
This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from.
Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.
Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research.
This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.
Act two: conflict
Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act.
Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.”
There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.
Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests as akin to going to a play, and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.
If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location to do your research. With either option, you get to see how things work in context, things come up that wouldn’t have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide another level of detail that’s often missing from remote usability tests.
That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session there is the potential of time wasted if participants can’t log in or get their microphone working.
The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions.
Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users' needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.
On the other hand, if you only do foundational research, while you might have set out to solve the right problem, you won’t know whether the thing that you’re building will actually solve that problem. This illustrates the importance of doing both foundational and directional research.
In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.
Act three: resolution
While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.
This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.
Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”
A persuasive story pattern.
This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for “what is”—the problems that you’ve identified. And “what could be”—your recommendations on how to address them. And so on and so forth.
You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!
While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research:
The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills.
So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.
Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed.
Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.
For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position.
But you can ensure that your team has packed its bags sensibly.
Designing for personalization makes for strange bedfellows. A savvy art-installation satire on the challenges of humane design in the era of the algorithm. Credit: Signs of the Times, Scott Kelly and Ben Polkinghorne.
There’s a DIY formula to increase your chances for success. At minimum, you’ll defuse your boss’s irrational exuberance. Before the party, you’ll need to prepare effectively.
We call it prepersonalization.
Behind the music
Consider Spotify’s DJ feature, which debuted this past year.
https://www.youtube.com/watch?v=ok-aNnc0Dko
We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.
So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.
From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.
Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless time, resources, and collective well-being in the process.
A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps:
This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether that might be digital or physical.
Set your kitchen timer
How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.
The full arc of the wider workshop is threefold:
Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.
Kickstart: Whet your appetite
We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.
Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.
This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.
Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.
Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.
Getting intentional about the desired outcomes is an important component of a large-scale personalization program. Credit: Bucket Studio.
Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.
The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.
Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.
The largest management consultancies have established practice areas in personalization, and they regularly research program risks and challenges. Credit: Boston Consulting Group.
At this point, have you discussed sample interactions, emphasized a key area of benefit, and flagged key gaps? Good—you’re ready to continue.
Hit that test kitchen
Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This presents the question: Where do you begin when you’re configuring a connected experience?
What’s important here is to avoid treating the installed software like it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.
Progressive personalization, a framework for designing connected experiences. Credit: Bucket Studio and Colin Eagan.
The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.
The dishes will come from recipes, and those recipes have set ingredients.
In the same way that ingredients form a recipe, you can also create cards to break down a personalized interaction into its constituent parts. Credit: Bucket Studio and Colin Eagan.
Verify your ingredients
Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.
This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team:
This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
Compose your recipe
What ingredients are important to you? Think of a who-what-when-why construct:
We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.
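To make that logic concrete, here's a minimal sketch of one recipe written down in code as an if-then statement, picking up the if-then framing from earlier. Every name in it (Recipe, AudienceContext, the sample values) is an illustrative assumption, not any particular personalization engine's API.

```typescript
// One personalization "recipe" expressed as an if-then rule.
// All names and values here are hypothetical, for illustration only.

interface AudienceContext {
  segment: string;             // who: the audience being targeted
  daysSinceLastVisit: number;  // when: part of the triggering context
}

interface Recipe {
  name: string;
  when: (ctx: AudienceContext) => boolean; // the "if" condition
  what: string;                            // the content or design element to show
  why: string;                             // the measure of success
}

const winBack: Recipe = {
  name: "lapsed-reader-win-back",
  when: (ctx) => ctx.segment === "subscriber" && ctx.daysSinceLastVisit > 30,
  what: "win-back-banner",
  why: "resubscription-rate",
};

// If the visitor matches, then show the content and note the metric to watch.
const visitor: AudienceContext = { segment: "subscriber", daysSinceLastVisit: 45 };
if (winBack.when(visitor)) {
  console.log(`Show "${winBack.what}" and measure ${winBack.why}`);
}
```

Writing recipes down this way, even informally, makes the who-what-when-why ingredients explicit and reviewable long before anything is configured in an engine.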
Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below.
A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.
You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.
Better kitchens require better architecture
Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. With that being said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.”
When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Like a sparse pantry, every organization has metadata debt to go along with its technical debt, and this creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before they acquired a seemingly modest metadata startup that now powers its underlying information architecture.
You can definitely stand the heat…
Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.
This workshop framework gives you a fighting shot at lasting success as well as sound beginnings. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.
While there are costs associated with investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.
I offer a single bit of advice to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleeping. When you figure those out, it’s time for preschool and rare naps. The cycle goes on and on.
The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.
How we got here
I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single-pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.
At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.
Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Grey Matter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.
These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.
The web as software platform
The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.
At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.
This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.
Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”
Where we are now
In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.
Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.
Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.
If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there’s often no alternative, leaving users with blank or broken pages.
Where do we go from here?Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?
Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.
Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple years.
Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things,” use the time saved by modern tools to consider more carefully and design with deliberation.
Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.
Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.
Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.
Go forth and make
As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.
In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.
Alternative textJoe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I still think there’s potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win.
Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible.
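As a sketch of what that human-in-the-loop flow could look like, here's a minimal example in which a hypothetical context-aware model either drafts a description or flags an image as likely decorative, and a human author reviews the result. The draftAltText function is an assumption invented for illustration; no real API is implied.

```typescript
// Human-in-the-loop alt-text authoring, with a hypothetical model call.

interface ImageCandidate {
  src: string;
  surroundingText: string; // the context a context-aware model would consider
}

// Hypothetical: a model that drafts a description, or returns null
// when it judges the image to be purely decorative.
declare function draftAltText(img: ImageCandidate): Promise<string | null>;

async function suggestAltText(images: ImageCandidate[]): Promise<void> {
  for (const img of images) {
    const draft = await draftAltText(img);
    if (draft === null) {
      // The author confirms (or overrides) the decorative call.
      console.log(`${img.src}: likely decorative; confirm empty alt text`);
    } else {
      // The draft is only a starting point; a human edits before publishing.
      console.log(`${img.src}: draft description for review: "${draft}"`);
    }
  }
}
```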
While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:
Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.
Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness that you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
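As a rough sketch of just the conversion step, suppose a hypothetical extraction model has already pulled labeled values out of that pie chart; turning them into a spreadsheet-friendly format is then straightforward. The data and names below are made up for illustration.

```typescript
// Converting (hypothetically extracted) pie-chart data into CSV,
// a format that any spreadsheet application can open.

interface PieSlice {
  label: string;
  percent: number;
}

// Pretend output of the hypothetical chart-extraction model.
const slices: PieSlice[] = [
  { label: "Smartphone", percent: 58 },
  { label: "Feature phone", percent: 42 },
];

function toCsv(rows: PieSlice[]): string {
  const header = "label,percent";
  const body = rows.map((r) => `${r.label},${r.percent}`).join("\n");
  return `${header}\n${body}`;
}

console.log(toCsv(slices));
// label,percent
// Smartphone,58
// Feature phone,42
```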
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.
Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and if it was tuned to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
Other ways that AI can help people with disabilities
If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:
We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.
Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.
I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.
I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.
Apologizing and qualifying in advance is a distraction. That’s what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I’ve said what I came to say. Which is hard enough.
Except when it is easy and flows like a river of wine.
Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.
Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three days. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having given way to enthusiasm.
Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.
Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.
Don’t ask about process. I am a creative.I am a creative. I don’t control my dreams. And I don’t control my best ideas.
I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there’s a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to wonder, and I am not a poet. I am a creative. And it’s for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.
Sometimes the process is avoidance. And agony. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.
Some people who hate being called creative may be closeted creatives, but that’s between them and their gods. No offense meant. Your truth is true, too. But mine is for me.
Creatives recognize creatives.
Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that’s done. Continue.
Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.
Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.
I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.
I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work.
I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.
I am not an artist.
I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.
I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows—the catastrophes as well as the triumphs.
I am a creative. Every word I’ve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.
I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.
I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.
Working saves me from worrying about work.
I am a creative. I live in dread of my small gift suddenly going away.
I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.
I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say.
There. I think I’ve said it.
Humility, a designer’s essential value—that has a nice ring to it. What about humility, an office manager’s essential value? Or a dentist’s? Or a librarian’s? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, we’re going to talk about why.
That said, this is a book for designers, and to that end, I’d like to start with a story—well, a journey, really. It’s a personal one, and I’m going to make myself a bit vulnerable along the way. I call it:
The Tale of Justin’s Preposterous Pate
When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.
So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted—nay, needed—to better understand the underlying implications of what my design decisions would mean once rendered in a browser.
The late ’90s and early 2000s were the so-called “Wild West” of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values, inclusive of humility, respect, and connection, align in tandem with that? I was hungry to find out.
Though I’m talking about a different era, those are timeless considerations that bridge interests outside a design career and the world of design. What are your core passions, or values, that transcend medium? It’s essentially the same concept we discussed earlier on the direct parallels between what fulfills you, agnostic of the tangible or digital realms; the core themes are all the same.
First within tables, animated GIFs, and Flash, then with Web Standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentation that often defied any semblance of a visible grid. Splash screens and “browser requirement” pages aplenty. Usability and accessibility were typically the victims of such creations; these paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.
For example, this iteration of my personal portfolio site (“the pseudoroom”) from that era was experimental, if not a bit heavy-handed, in its visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we’d first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I’d break it down and code it into a digital layout.
Figure 1: “the pseudoroom” website, hitting the sketchbook metaphor hard.
Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.
From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.
Figure 2: GUI Galaxy, web standards-compliant design news portal
Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you’d have a design news portal from the late 90s / early 2000s.
We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still ripe with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site but branded and themed to GUI Galaxy.
The site’s backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a ‘brand’ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.
Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.
Now, why am I taking you down this trip of design memory lane? Two reasons.
First, there’s a reason for the nostalgia for that design era (the “Wild West” era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.
Today’s web design has been in a period of stagnation. I suspect there’s a strong chance you’ve seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.
Design, as it’s applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they’re engaging from. We must be mindful of, and respectful toward, those concerns—but not at the expense of creativity in visual communication, and not by replicating cookie-cutter layouts.
Pixel Problems
Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren’t that different.
Figure 3: A Mac OS 7.5-centric desktop.
Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user’s desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels)—how did it also maintain cohesion amongst a group?
These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.
So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.
Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32x32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.
These are some of my creations, made with ResEdit, the only tool available at the time for creating icons. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: research, challenge, problem-solving. Again, these core connection-based values are agnostic of medium.
Figure 4: A selection of my pixel art design, 32x32 pixel canvas, 8-bit palette
There’s one more design portal I want to talk about, which also serves as the second reason for my story, bringing this all together.
This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well... it was the place to be, my friend. With respect where respect is due, GUI Galaxy’s concept was inspired by what these folks were doing.
Figure 5: The K10k website
For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.
Amongst my personal work and side projects—and now with this inclusion in the design community—this put me on the map. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:
I evolved—devolved, really—into a colossal asshole (and in just about a year out of art school, no less). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.
The casualties? My design stagnated. Its evolution—my evolution—stagnated.
I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.
My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.
Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the “reward” of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.
Always Students
Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.
As an example, let’s talk about the Large Hadron Collider. The LHC was designed “to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity.” Thanks, Wikipedia.
Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC’s particle collision diagrams. These diagrams are the rendering of what’s actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.
Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world to generate that vital connection.
I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black text-on-white. This enabled them to pore over reams of data during the day and ease their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. The barrier-free design was another essential form of connection.
So, back to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before I walked through it.
An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words ‘grow’ and ‘evolve’ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.
But all that said: experience does not equal “expert.”
As soon as we close our minds via an inner monologue of ‘knowing it all’ or branding ourselves a “#thoughtleader” on social media, the designer we are is our final form. The designer we can be will never exist.
As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).
Growing tools for personalization: According to a Dynamic Yield survey, 39% of respondents felt support is available on-demand when a business case is made for it (up 15% from 2020).
Source: “The State of Personalization Maturity – Q4 2021,” Dynamic Yield.
Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and the Asia-Pacific (APAC) regions. This marks the fourth consecutive year of publishing the research, which includes more than 450 responses from individuals in the C-Suite, Marketing, Merchandising, CX, Product, and IT.
Getting Started
For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From the ground up: Soup-to-nuts personalization, without going nuts.
From top to bottom, the levels include the North Star, goals, touchpoints, contexts and campaigns, user segments, and data.
We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions, and will include examples for you here.
Personalization pack: Deck of cards to help kickstart your personalization brainstorming.
Starting at the Top
The components of the pyramid are as follows:
North Star
A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow: the bigger the star, the bigger the shadow. Examples of North Stars might include:
Goals
As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and the metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal; rather, it is a means to an end. Common goals include:
Touchpoints
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up, etc.). Here are some examples:
Channel-level Touchpoints
Wireframe-level Touchpoints
If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
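As a rough sketch (the zone names and data attributes below are hypothetical, not taken from any particular personalization platform), the markup behind such a wireframe might reserve placeholder zones that a personalization engine fills per segment at runtime:

<main>
  <!-- Each zone renders fallback content by default; a
       personalization engine swaps in campaign content for
       known segments at runtime. -->
  <section data-zone="hero-banner">
    <!-- Fallback hero shown to unknown or first-time visitors -->
  </section>
  <section data-zone="recommendations">
    <!-- Filled with campaign content (for example, related
         articles) for returning or authenticated segments -->
  </section>
</main>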
Targeted Zones: Examples from Kibo of personalized “zones” on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right).
Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content).
Personalization Context Model:
Personalization Content Model:
We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model.
Campaign and Context cards: This level of the pyramid can help your team focus around the types of personalization to deliver to end users and the use-cases in which they will experience it.
User Segments
User segments can be created prescriptively or adaptively, based on user research (e.g., via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
Every organization with any digital presence has data. It’s a matter of asking what data you can ethically collect on users, what its inherent reliability and value are, and how you can use it (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
Source: “The State of Personalization 2021” by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021.
First-party data represents multiple advantages on the UX front, including being relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:
Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move toward more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data (more commonly referred to as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective straight out of the box, because a strong data backbone and content repository is a prerequisite for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization’s overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining the approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach that makes profiling scalable.
Pulling it Together
While the cards are the starting point of an inventory of sorts (we provide blanks for you to tailor your own), offering a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are most valuable when thought of as a grouping.
In assembling a card “hand,” you can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog—which is a fine subject for another article.
In the meantime, what is important to note is that while each colored class of card is helpful for surveying the range of choices potentially at your disposal, the real work lies in threading through them and making concrete decisions about for whom this decisioning will be made, and where, when, and how.
Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as “badging” a user in onboarding contexts, to better characterize their present intent and context.
Lay Down Your Cards
Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with leading CMS platforms like Sitecore and Adobe, or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up and immediately yield meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.
The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should be great, too…right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview, see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.
Advantages of mobile-first
Some of the things to like with mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) requires all higher breakpoints to be regression tested.
The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.
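To make the utility-class clash concrete, here’s a minimal sketch (the class names are hypothetical); whether the conflict comes from specificity or, as here, from source order, the effect is the same: the reset wins and the utility class silently stops working.

// Utility classes are often defined early in the stylesheet
.u-italic {
  font-style: italic;
}

// Component CSS, mobile-first: italic by default, then reset
// back to the browser default at the desktop breakpoint
.promo {
  font-style: italic;

  @media (min-width: 1024px) {
    font-style: normal;
  }
}

On desktop, an element with class="promo u-italic" renders with normal type: the reset declaration matches the utility class on specificity but appears later in the source, so it takes precedence.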
With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
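As a minimal sketch of that idea (the component is hypothetical, reusing the breakpoints from the examples below):

// Hypothetical card list: Flexbox in the closed mobile/tablet
// range, Grid at desktop. Neither breakpoint has to undo the
// other's properties.
.card-list {
  @media (max-width: 1023.98px) {
    display: flex;
    flex-wrap: wrap;
    gap: 16px;
  }

  @media (min-width: 1024px) {
    display: grid;
    grid-template-columns: repeat(4, 1fr);
    gap: 24px;
  }
}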
Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.
Closed media query ranges in practice
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs: mobile (up to 767.98px), tablet (768px to 1023.98px), and desktop (1024px and above).
Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.
Classic min-width mobile-first
.my-block {
  padding: 20px;

  @media (min-width: 768px) {
    padding: 40px;
  }

  @media (min-width: 1024px) {
    padding: 20px;
  }
}
Closed media query range
.my-block {
  padding: 20px;

  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception).
The goal is to set styles only when needed, and not to set them with the expectation of overwriting them later, again and again.
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range.
.my-block {
  @media (max-width: 767.98px) {
    padding: 20px;
  }

  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.
Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.
Which HTTP version are you using?
To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Next, select the Network tab and make sure the Protocol column is visible. If “h2” is listed under Protocol, it means HTTP/2 is being used.
Note: to view the Protocol in your browser’s dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column.
Note: for a summarized comparison, see ImageKit’s “HTTP/2 vs. HTTP/1.”
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent browser support for HTTP/2.
Splitting the CSS
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
As noted, with the CSS separated into different files, linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection: in many rural areas, for instance, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS
<link href="site.css" rel="stylesheet">
This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.
Separated CSS
<link href="default.css" rel="stylesheet">
<link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
<link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
<link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
<link href="print.css" media="print" rel="stylesheet">
Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.
Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as in other developers’ code, and testing and maintenance work is also a bit simpler and more productive.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that’s usable and equitable; protects people’s privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, we’re still very far from this ideal.
At the time, I didn’t know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and “dark reality” sessions, but I didn’t manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design—which is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I’ve found the key that will let us structurally integrate ethics. And it’s surprisingly simple! But first we need to zoom out to get a better understanding of what we’re up against.
Influence the system
Sadly, we’re trapped in a capitalistic system that reinforces consumerism and inequality, and it’s obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we’re working for an organization that pursues “double-digit growth” or “aggressive sales targets” (which is 99 percent of us), that’s very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we’re a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Redefine success
Traditionally, we consider a product or service successful if it’s desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you’ll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions aren’t equally weighted: it’s viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companies—outside of nonprofits and charities—exist to make money.
A genuinely purpose-driven company would try to reverse this dynamic: it would recognize finance for what it was intended to be, a means. So both feasibility and viability are means to achieve what the company set out to achieve. It makes intuitive sense: to achieve almost anything, you need resources, people, and money. (Fun fact: the Italian language knows no difference between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isn’t enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want—whether it’s good for them or not. Desirability objectives, such as user satisfaction or conversion, don’t consider whether a product is healthy for people. They don’t prevent us from creating products that distract or manipulate people or stop us from contributing to society’s wealth inequality. They’re unsuitable for establishing a healthy balance with nature.
There’s a fourth dimension of success that’s missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I’ve never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There’s no one way to do this because it highly depends on your culture, values, and industry. But I’ll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
Pursue well-being, equity, and sustainability
We created objectives that address design’s effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction—instead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for people’s health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users’ time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving well-being for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives in the company, for many colleagues, design ethics and responsible design suddenly became tangible and achievable through practical—and even familiar—actions.
Measure impact
But defining these objectives still isn’t enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project’s well-being, equity, and sustainability.
This overview lists example metrics that you can use as you pursue well-being, equity, and sustainability:
There’s a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
“If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security.”
This phenomenon explains why desirability is a poor indicator of success: it’s typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
There’s another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness—say, the number of interface elements—we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for your objectives, you’re forced to consider what success looks like concretely and how you can prove that you’ve reached your ethical objectives. It also forces you to consider what we as designers have control over: what can I include in my design or change in my process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, it’s good to remember that traditional businesses run on measurements, and managers love to spend much time discussing charts (ideally hockey-stick shaped)—especially if they concern profit, the one-above-all of metrics. For good or ill, to improve the system, to have a serious discussion about ethical design with managers, we’ll need to speak that business language.
Practice daily ethical design
Once you’ve defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It “simply” becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website’s end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle that’s missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it’s essential to engage stakeholders in a dedicated kickoff session.
Kick it off or fall back to status quo
The kickoff is the most important meeting, and it can be all too easy to forget to include it. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and express their expectations on the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and to in turn avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors’ documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from “Manual of Me” (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project—in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. “As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y.” Compare those odds to a situation in which the team didn’t agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scope—and she’d be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective—using the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Conclusion
Over the past year, quite a few colleagues have asked me, “Where do I start with ethical design?” My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there’s no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kick-off sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system on the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, I’m genuinely sorry to say, you’re wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as “What will the client think of this?,” “Will they take me seriously?,” and “Can’t we just do it within the design team instead?” In fact, a product manager once asked me why ethics couldn’t just be a structured part of the design process—to just do it without spending the effort to define ethical objectives. It’s a tempting idea, right? We wouldn’t have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, that’s not enough. For those of us who aren’t from marginalized groups and have the privilege to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can’t remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. If our talk about ethical design stays at the level of articles and toolkits, we’re not designing ethically; it’s just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. That’s what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.
CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.
Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.
Sketches of a round display, a common rectangular mobile display, and a device with a foldable display.
These recent evolutions of the web platform made it both more challenging and more interesting to design products. They’re great opportunities for us to break out of our rectangular boxes.
I’d like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).
Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they’re stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.
As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.
At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.
Here’s what a typical desktop PWA app looks like:
Sketches of two rectangular user interfaces representing the desktop Progressive Web App status quo on the macOS and Windows operating systems, respectively.
Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that’s about it.
What if we could think outside this box, and reclaim the real estate of the app’s entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.
This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.
About the title bar and window controls
Let’s start with an explanation of what the title bar and window controls are.
The title bar is the area displayed at the top of an app window, which usually contains the app’s name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app’s window, and are also displayed at the top.
A sketch of a rectangular application user interface highlighting the title bar area and window control buttons.
Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application’s web content.
A sketch of a rectangular application user interface using Window Controls Overlay. The title bar and window controls are no longer in an area separated from the app’s content.
If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they’re already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.
A screenshot of the top area of a browser’s user interface showing a group of tabs that share the same horizontal space as the app window controls.
Spotify displays album artwork all the way to the top edge of the application window.
A screenshot of an album in Spotify’s desktop application. Album artwork spans the entire width of the main content area, all the way to the top and right edges of the window, and the right edge of the main navigation area on the left side. The application and album navigation controls are overlaid directly on top of the album artwork.
Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.
A screenshot of Microsoft Word’s toolbar interface. Document file information, search, and other functionality appear at the top of the window, sharing the same horizontal space as the app’s window controls.
The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don’t support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it’s available.
Let’s use the feature
For the rest of this article, we’ll be working on a demo app to learn more about using the feature.
The demo app is called 1DIV. It’s a simple CSS playground where users can create designs using CSS and a single HTML element.
The app has two pages. The first lists the existing CSS designs you’ve created:
A screenshot of the 1DIV app displaying a thumbnail grid of CSS designs a user created.
The second page enables you to create and edit CSS designs:
A screenshot of the 1DIV app editor page. The top half of the window displays a rendered CSS design, and a text editor on the bottom half of the window displays the CSS used to create it.
Since I’ve added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:
Screenshots of the 1DIV app thumbnail view and CSS editor view on macOS. This version of the app’s window has a separate control bar at the top for the app name and window control buttons.
And on Windows:
Screenshots of the 1DIV app thumbnail view and CSS editor view on the Windows operating system. This version of the app’s window also has a separate control bar at the top for the app name and window control buttons.
Our app is looking good, but the white title bar on the first page is wasted space. On the second page, it would be really nice if the design area went all the way to the top of the app window.
Let’s use the Window Controls Overlay feature to improve this.
Enabling Window Controls Overlay
The feature is still experimental. To try it, you need to enable it in one of the supported browsers.
As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.
Using Window Controls Overlay
To use the feature, we need to add the following display_override member to our web app’s manifest file:
{
"name": "1DIV",
"description": "1DIV is a mini CSS playground",
"lang": "en-US",
"start_url": "/",
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display_override": [
"window-controls-overlay"
],
"icons": [
...
]
}
On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.
However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we’ll need a bit of CSS and JavaScript code.
Here is what the app looks like now:
Screenshot of the 1DIV app thumbnail view using Window Controls Overlay on macOS. The separate top bar area is gone, but the window controls are now blocking some of the app’s interface.
The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.
It’s similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:
Screenshot of the 1DIV app thumbnail display using Window Controls Overlay on the Windows operating system. The separate top bar area is gone, but the window controls are now blocking some of the app’s content.
Using CSS to keep clear of the window controls
Along with the feature, new CSS environment variables have been introduced:
titlebar-area-x
titlebar-area-y
titlebar-area-width
titlebar-area-height
You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won’t overlap with the window controls. In our case, we’ll use two of the variables to position our header, which contains the logo, search bar, and NEW button.
header {
position: absolute;
left: env(titlebar-area-x, 0);
width: env(titlebar-area-width, 100%);
height: var(--toolbar-height);
}
The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which as noted earlier, doesn’t include the window controls.)
By doing this, we make sure our content remains fully visible. We’re also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).
Now our header adapts to its surroundings, and it doesn’t feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.
Changing the window controls background color so it blends in
Now let’s take a closer look at our second page: the CSS playground editor.
Screenshots of the 1DIV app CSS editor view with Window Controls Overlay in macOS and Windows, respectively. The window controls overlay areas have a solid white background color, which contrasts with the hot pink color of the example CSS design displayed in the editor.
Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.
We can fix this by changing the app’s theme color. There are a couple of ways to define it: through the theme_color property in the web app manifest, or through the theme-color meta tag in the page’s HTML.
In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.
The theme-color meta tag, on the other hand, can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.
Here is the function we’ll use:
function themeWindow(bgColor) {
document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
}
With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app’s interface.
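For instance, here is a minimal sketch of what that could look like (the openDemo function and the demo.bgColor value are our own illustrative names, and we assume a CSS rule elsewhere such as body { transition: background-color 0.3s; } to smooth the change):

function openDemo(demo) {
  // Update the app's background to the demo's color; the CSS
  // transition on the body makes the change animate smoothly.
  document.body.style.backgroundColor = demo.bgColor;
  // Update the theme color so the window controls blend in,
  // using the themeWindow() function defined above.
  themeWindow(demo.bgColor);
}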
Screenshot of the 1DIV app CSS editor view on the Windows operating system with Window Controls Overlay and updated CSS demonstrating how the window control buttons blend in with the rest of the app’s interface.
Dragging the window
Now, getting rid of the title bar entirely does have an important accessibility consequence: it’s much more difficult to move the application window around.
The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.
Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix.
To make any element of the app become a dragging target for the window, we can use the following:
-webkit-app-region: drag;
It is also possible to explicitly make an element non-draggable:
-webkit-app-region: no-drag;
These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.
However, because the editor page doesn’t display the header, users wouldn’t be able to drag the window while editing code. So let’s use a different approach. We’ll create another element before our header, also absolutely positioned, and dedicated to dragging the window.
<div class="drag"></div>
<header>...</header>
.drag {
position: absolute;
top: 0;
width: 100%;
height: env(titlebar-area-height, 0);
-webkit-app-region: drag;
}
With the above code, we’re making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.
And, now, to make sure our search field and button remain usable:
header .search,
header .new {
-webkit-app-region: no-drag;
}
With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we’re not breaking this expectation, which is good.
An animated view of the 1DIV app being dragged across a Windows desktop with the mouse.
Adapting to window resize
It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn’t be enough space for the search field, logo, and button to fit, so we’d want to push them down a bit.
The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.
The API provides three interesting things:
navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.
Let’s use this to be aware of the size of the title bar area and move the header down if it’s too narrow.
if (navigator.windowControlsOverlay) {
navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
document.body.classList.toggle('narrow', width < 250);
});
}
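One small caveat: the geometrychange event only fires when the overlay’s geometry or visibility changes, so we may also want to run the same check once when the app starts. Here’s a sketch of that refactor (the updateNarrowClass function name is our own):

function updateNarrowClass() {
  // Toggle the narrow class based on the current title bar area width.
  const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
  document.body.classList.toggle('narrow', width < 250);
}

if (navigator.windowControlsOverlay) {
  // Apply the right layout on startup, then keep it in sync on changes.
  updateNarrowClass();
  navigator.windowControlsOverlay.addEventListener('geometrychange', updateNarrowClass);
}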
In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:
The geometrychange event only fires when the overlay is actually in use, so the header stays put on browsers and devices that don’t support the feature.
It gives us the width of the title bar area rather than of the whole window, which matters because the space occupied by the window controls varies across operating systems.
.narrow header {
top: env(titlebar-area-height, 0);
left: 0;
width: 100%;
}
Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.
A screenshot of the 1DIV app on Windows showing the app’s content adjusted for a much narrower viewport.
Thirty pixels of exciting design opportunities
Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.
In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.
More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.
So let’s embrace this. Let’s use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!
If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec’s repository. It’s still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature’s existing documentation, or this demo app and its source code.
Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research is tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.”
Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea.
In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:
What are the objects?
What are the relationships between them?
These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.
ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.
The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.
The four rounds and fifteen steps of the ORCA process. In the OOUX world, we love color-coding. Blue is reserved for objects! (Yellow is for core content, pink is for metadata, and green is for calls-to-action. Learn more about the color-coded object map and connecting CTAs to objects.)
I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.
ORCA strengthens the weak spot between research and design by helping distill research into solid information architecture—scaffolding for the screen design and interaction design to hang on.
In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.
Getting in the same curiosity-boat
What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.
Mark Twain
The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. It begins to expose what this classic comic so beautifully illustrates:
The original “Tree Swing Project Management” cartoon dates back to the 1960s or 1970s and has no artist attribution we could find.
This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture.
Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.
But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.
Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.
You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.
Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:
“Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”
“Can a patient even have more than one primary doctor?”
“Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”
“No, caregivers are something else… That’s the patient’s family contacts, right?”
“So are caregivers in scope for this redesign?”
“Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”
Now we are getting somewhere. Do you see how powerful it can be getting stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.
When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.
If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.
The two questions
But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?
We can do this by starting with those two big questions that align to the first two steps of the ORCA process:
What are the objects?
What are the relationships between them?
In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.
Prep work: Noun foraging
In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.
Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.
Here are just a few great noun foraging sources: the current product and its competitors, marketing websites, screenshots of any legacy system, customer service chat logs, and notes or transcripts from past user research.
Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.
As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).
You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for Structure, Instances, and Purpose:
Think of a library app, for example. Is “book” an object?
Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!
Instance: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!
Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!
SIP: Structure, Instances, and Purpose! (Here’s a flowchart where I elaborate even more on SIP.)
As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.
First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail…I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.
(Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)
Drumroll, please…
Here are a few nouns I came up with during my noun foraging: message, thread, folder, contact, client, template, saved response, automation, and workflow.
Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.
Aside: in my own past project work, I’ve foraged plenty of real nouns that I needed my stakeholders to help me understand.
This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.
Facilitate an Object Definition Workshop
You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!
If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case), do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.
HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers.
Then, let the question whack-a-mole commence.
1. What is this thing?
Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.
As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉
After definitions solidify, here’s a great follow-up:
2. Do our users know what these things are? What do users call this thing?
Stakeholder 1: They probably call email clients “apps.” But I’m not sure.
Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.
If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.
OK, moving on.
If you have two or more objects that seem to overlap in purpose, ask one of these questions:
3. Are these the same thing? Or are these different? If they are not the same, how are they different?
You: Is a saved response the same as a template?
Stakeholder 1: Yes! Definitely.
Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images.
Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.
If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:
4. What’s the relationship between these objects?
You: Are saved responses and templates related in any way?
Stakeholder 3: Yeah, a template can be applied to a saved response.
You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?
Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.”
Do you see how we are building up to our UXR sales pitch?
5. Is this object in scope?
Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?,” but I’ve got a better, more devious strategy.
By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.
I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.
Here’s a snippet of the very messy middle from this session: three columns of object cards, showing the same cards prioritized completely differently by three different groups.
The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”
Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.
Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.
6. Create a visual representation of the objects’ relationships
We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.
A work-in-progress system model of our new email solution.
This system modeling activity brings up all sorts of new questions.
Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.
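If it helps to make the model concrete, the “has a” and “has many” statements can even be captured in a simple data form. Here’s an illustrative sketch in JavaScript (the objects and relationships are hypothetical, based on our email example, not the output of a real workshop):

// Each entry records how one object connects to the others,
// mirroring the boxes-and-arrows diagram in data form.
const systemModel = {
  message: { hasA: ['thread'], hasMany: ['attachments'] },
  thread: { hasMany: ['messages', 'contacts'] },
  savedResponse: { hasA: ['template'] },
  template: { hasMany: ['savedResponses'] },
};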
Light the fuse
You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.
Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.
Here’s your final step. Take those questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “If we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?”
With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry.
Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions.
HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.
Final words: Hold the screen design!
Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?
I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world.
I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots.
All the best of luck! Now go sell research!
Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.
But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content.
I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces.
A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.
Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.
Two essential principles for an effective content model
We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to: a content model should be semantic, and it should connect content that belongs together.
A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, which in turn would have opened the door to the content being presented in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
A semantic content model has several benefits.
For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
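To make that tangible, here’s a minimal sketch of the kind of structured data a semantic content model enables, using Schema.org’s Article type (the values are placeholders, not A List Apart’s actual markup):

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An example article headline",
  "author": {
    "@type": "Person",
    "name": "An Example Author"
  },
  "datePublished": "2021-01-01"
}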
Content models that connect
After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.
Think about writing an article or essay. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit the web-centric layout. This had a similar impact to separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.
To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?
Because our design-system instincts were so familiar, it felt like we had needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources.
Our inclination to break down the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that couldn’t be understood by additional delivery channels. For example, how would another system be able to tell which “tab section” referred to a product’s specifications or its resource list—would it have to resort to counting tab sections and content blocks? This approach would have prevented the tabs from ever being reordered, and it would have required adding logic in every other delivery channel to interpret the design system’s layout. Furthermore, if the customer later decided to stop displaying this content in a tab layout, we would have faced a tedious migration to a new content model to reflect the page redesign.
A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.
We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on what’s visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. The meaning of the content that they were planning to display in the tabs was what mattered.
In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
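As an illustration, the resulting content type might look something like this in a typical headless CMS (the field names and notation here are our own invention, not the customer’s actual schema):

{
  "type": "softwareProduct",
  "fields": {
    "name": "text",
    "description": "richText",
    "screenshots": ["image"],
    "softwareRequirements": ["text"],
    "featureList": ["text"],
    "relatedResources": ["reference"]
  }
}

Nothing in this sketch mentions tabs or layout; each delivery channel can present these attributes however best fits that channel.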
A good content model connects content that belongs together so it can be easily managed and reused.
Conclusion
In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:
Make the model semantic: name types and attributes for what the content means, not how it looks.
Connect content that belongs together instead of slicing it up to match a layout.
By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.
Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.
This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)
The process for inclusive safety
When you are designing for safety, your goals are to:
identify ways your product can be used for abuse,
design ways to prevent the abuse, and
provide support for survivors.
The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action: conducting research, creating archetypes, brainstorming problems, designing solutions, and testing for safety.
The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.
And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.
If you’re working on a product specifically for a vulnerable group or survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly and should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.
Step 1: Conduct research
Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.
Broad research
Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.
Specific research: Survivors
When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.
Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.
Specific research: Abusers
It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.
Step 2: Create archetypes
Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.
The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.
Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.
The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?
Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse.
You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.
Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information.
It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.
And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.
Step 3: Brainstorm problems
After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.
How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.
If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.
After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.
It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.
Step 4: Design solutions
At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.
Some questions to ask yourself to help prevent harm and support your archetypes include:
In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer to receive resources for local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss if any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.
That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.
Step 5: Test for safety
The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.
Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.
You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.
Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. And if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.
As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.
Abuser testing
The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.
For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.
If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.
Survivor testing
Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense, depending on the product or context: if thwarting the abuser archetype’s attempt to stalk someone also satisfies the survivor archetype’s goal of not being stalked, separate testing from the survivor’s perspective isn’t needed.
However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.
Stress testing
To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.
In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task.
But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.
This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days, when it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.
We achieve far more when we believe that something is possible, and we will believe it’s possible only when we see someone else has already done it—and as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.
Establishing standards for a sustainable web
In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.
The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do?
If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that can be used as indicators of carbon emissions are the amount of data transferred and the carbon intensity of the electricity used to power the system.
Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.
Data transfer
Most researchers use kilowatt-hours per gigabyte (kWh/GB) as the metric of energy efficiency for the data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.
For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).
Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.
The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes.
There’s plenty of scope to reduce page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.
History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.
Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.
You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.
Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design.
We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class.
If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.
Page weight budgets are easy to track throughout a design and development process. Although they don’t actually tell us carbon emission and energy consumption analytics directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.
In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.
Carbon intensity of electricity
Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction); whereas fossil fuels have very high carbon intensity of approximately 200–400 gCO2/kWh.
Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.
We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).
Fig 2.3: Tomorrow’s electricityMap shows live data for the carbon intensity of electricity by country.
That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.
Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea.
For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.
Converting it back to carbon emissions
If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
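To make the arithmetic concrete, here’s a rough illustration of that chain of conversions. The energy-intensity figure below is an assumption for the sake of the example, not a measured value, and the carbon intensity is taken from the fossil-fuel range discussed earlier:
2 MB per page view × 10,000 views = 20 GB transferred
20 GB × 0.5 kWh/GB (assumed energy intensity) = 10 kWh
10 kWh × 300 gCO2/kWh (a fossil-heavy grid) = 3,000 g, or 3 kg, of CO2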
If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.
Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.
With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.
Browser Energy
Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency in any specific part of the system.
One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser.
All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.
Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).
Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.
You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring.
It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.
We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.
Computers have trouble because, of spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.
In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.
Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.
Voice Interactions
We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:
- we need something done (a transaction);
- we need to know something (information); or
- we’re social creatures and want to connect with someone (prosociality).
These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.
Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).
That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).
Transactional voice interactions
Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).
Alison: Hey, how’s it going?
Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?
Alison: Can I get a Hawaiian pizza with extra pineapple?
Burhan: Sure, what size?
Alison: Large.
Burhan: Anything else?
Alison: No thanks, that’s it.
Burhan: Something to drink?
Alison: I’ll have a bottle of Coke.
Burhan: You got it. That’ll be $13.55 and about fifteen minutes.
Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.
Informational voice interactions
Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.
Alison: Hey, how’s it going?
Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?
Alison: Can I ask a few questions?
Burhan: Of course! Go right ahead.
Alison: Do you have any halal options on the menu?
Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?
Alison: What about gluten-free pizzas?
Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?
Alison: That’s it for now. Good to know. Thanks!
Burhan: Anytime, come back soon!
This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.
Voice Interfaces
At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.
Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.
Interactive voice response (IVR) systems
Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.
IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).
While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).
Screen readers
Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.
Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).
With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05).
Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.
In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:
From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)
In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.
Voice assistants
When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.
Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.
Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are over others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.
At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.
Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.
As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.
Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.
Voice Content
Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.
Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.
For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?
Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:
A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)
I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.
As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.
Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.
Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.
I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?
Flash, Photoshop, and responsive design
When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content into. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.
Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.
The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.
A new way to design
Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:
.column-span-6 {
width: 49%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-4 {
width: 32%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
.column-span-3 {
width: 24%;
float: left;
margin-right: 0.5%;
margin-left: 0.5%;
}
Then with Sass, so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:
.logo {
@include colSpan(6);
}
.search {
@include colSpan(3);
}
.social-share {
@include colSpan(3);
}
Media queries
The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (the exact opposite problem occurred with the introduction of a mobile-first approach).
Components becoming too small at mobile breakpoints
Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on.
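As a minimal sketch (the breakpoint value here is illustrative, not from the original project), the columns from earlier could stack on small screens and sit side by side once the viewport is wide enough:
.column-span-6 {
width: 100%; /* stack on small screens */
}
@media (min-width: 768px) {
.column-span-6 {
width: 49%; /* sit side by side on wider viewports */
float: left;
}
}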
For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.
Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.
<section class="row">
<div class="column-span-4">1 of 7</div>
<div class="column-span-4">2 of 7</div>
<div class="column-span-4">3 of 7</div>
</section>
<section class="row">
<div class="column-span-4">4 of 7</div>
<div class="column-span-4">5 of 7</div>
<div class="column-span-4">6 of 7</div>
</section>
<section class="row">
<div class="column-span-4">7 of 7</div>
</section>
Components placed in the rows of a Sass grid
Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components.
Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, this is a real problem: you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—hardly meeting that “devices that don’t yet exist” goal.
Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?
Components responding to the viewport width with media queries
Container queries: our savior or a false dawn?
Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.
Components responding to their parent container with container queries
One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.
In other words, responsive components to replace responsive layouts.
Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
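As a rough sketch, here’s the shape this takes in the @container syntax that was eventually standardized (the class names and breakpoint are illustrative; the syntax was still in flux when this debate began):
.sidebar {
container-type: inline-size; /* make the sidebar a queryable container */
}
@container (min-width: 400px) {
.card {
display: flex; /* switch the card to a horizontal layout when its container is wide enough */
}
}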
My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component?
A component library removed from context and real content is probably not the best place for that decision.
As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?
Cards responding to their parent container with container queries
Cards responding based on their own content
In this example, the dimensions of the container are not what should dictate the design; rather, the image is.
It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.
CSS is changing
Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, 450px);
gap: 10px;
}
The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space.
.wrapper {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
.child {
flex-basis: 32%;
margin-bottom: 20px;
}
The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.
A traditional Grid layout without the usual row containers
This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid.
Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?
Cards unable to respond to a sibling’s content changes
Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.
Cards responding to content in sibling cards
.wrapper {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
grid-template-rows: auto 1fr auto;
gap: 10px;
}
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid; /* sets rows to parent grid */
}
CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt to suit morphing content. Subgrid at the time of writing is only supported in Firefox, but the above code can be implemented behind an @supports feature query, as shown below.
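Here’s a sketch of that feature query; browsers that don’t understand the subgrid value skip the block entirely and fall back to the regular grid styles:
@supports (grid-template-rows: subgrid) {
.sub-grid {
display: grid;
grid-row: span 3;
grid-template-rows: subgrid; /* only applied where subgrid is understood */
}
}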
Intrinsic layouts
I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space.
Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.
fr units is a way to say I want you to distribute the extra space in this way, but...don’t ever make it smaller than the content that’s inside of it.
—Jen Simmons, “Designing Intrinsic Layouts”
Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.
Slide from “Designing Intrinsic Layouts” by Jen Simmons
What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation.
We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.
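To make that concrete, here’s a minimal sketch in the spirit of Jen Simmons’s examples (the track sizes are illustrative): a fixed sidebar, a flexible column that can never shrink below its content, and a proportionally larger flexible track:
.wrapper {
display: grid;
/* fixed, content-aware flexible, and proportional flexible tracks */
grid-template-columns: 200px minmax(min-content, 1fr) 2fr;
gap: 10px;
}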
Another 2010 moment?
This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment.
But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention.
One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase.
Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way.
You can’t framework your way out of a content problem
Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change.
Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.
Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.
And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.
How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of.
The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?
Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.
Content first
Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.
Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.
Instead of old markup hacks like this—
<p>
<span class="first-line">First line of text with different styling</span>...
</p>
—we can target content based on where it appears.
.element::first-line {
font-size: 1.4em;
}
.element::first-letter {
color: red;
}
Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp().
This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins, but those were usually limited to switching from left-to-right to right-to-left orientation.
In the Sass version, directional variables need to be set.
$direction: rtl;
$opposite-direction: ltr;
$start-direction: right;
$end-direction: left;
These variables can be used as values—
body {
direction: $direction;
text-align: $start-direction;
}
—or as properties.
margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;
However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and the pre-planning of scattering variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.
margin-inline-end: 10px;
padding-inline-start: 10px;
There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.
Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.
Fixed and fluid
We briefly covered the power of combining fixed and fluid widths in intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value alongside a flexible alternative.
For min(), this means setting a fluid value with a fixed maximum.
.element {
width: min(50%, 300px);
}
Here the element will be 50% of its container as long as the element’s width doesn’t exceed 300px.
For max() we can set a flexible max value and a minimum fixed value.
.element {
width: max(50%, 300px);
}
Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space.
The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.
.element {
width: clamp(300px, 50%, 600px);
}
This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.
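The same approach extends beyond widths. A common pattern—sketched here with illustrative values—is fluid typography, where clamp() lets text scale with the viewport between two fixed bounds:
.element {
  font-size: clamp(1rem, 2.5vw, 1.5rem);
}
Here the font size tracks 2.5% of the viewport width but never drops below 1rem or grows beyond 1.5rem.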
With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.
Situation first
Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”?
It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.
This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.
Thankfully, there is a lot we can do to provide choice.
Responsible design
“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”
—Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”
One of the biggest assumptions we make is that people interacting with our designs have a good Wi-Fi connection and a wide-screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.
The srcset attribute allows the browser to decide which image to serve. This means we can create smaller ‘cropped’ images to display on mobile devices in turn using less bandwidth and less data.
<img
src="image-file.jpg"
srcset="large.jpg 1024w,
medium.jpg 640w,
small.jpg 320w"
alt="Image alt text" />
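One thing the example above leaves implicit: with only w descriptors, the browser assumes the image will be displayed at the full width of the viewport. Pairing srcset with the sizes attribute describes how much space the image will actually occupy—the breakpoint and widths below are purely illustrative:
<img
src="image-file.jpg"
srcset="large.jpg 1024w,
medium.jpg 640w,
small.jpg 320w"
sizes="(min-width: 640px) 50vw, 100vw"
alt="Image alt text" />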
Preload can also help us to think about how and when media is downloaded. A link element with rel="preload" tells the browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience.
<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->
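Fonts are a common candidate: the browser only discovers them after parsing the CSS that references them, so preloading can remove a round trip. A sketch, with a hypothetical font file (note that font preloads require the crossorigin attribute, even for same-origin files):
<link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>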
There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.
<img src="image.png" loading="lazy" alt="…">
With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the users themselves to decide what they want downloaded, as the decision is usually the browser’s to make.
So how can we put users in control?
The return of media queries
Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.
We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content.
As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.
For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.
@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}
@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}
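For these custom properties to have any visible effect, they still need to be consumed somewhere—a minimal sketch:
body {
  background-color: var(--background-color);
  color: var(--text-color);
}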
Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable.
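As a brief sketch of how two of these queries might be honored (the custom-property names echo the light-level example above and are otherwise arbitrary):
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}
@media (prefers-reduced-motion: reduce) {
  * {
    /* a blunt but common starting point: disable motion entirely */
    animation: none;
    transition: none;
  }
}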
Media queries like this go beyond choices made by a browser to grant more control to the user.
Expect the unexpected
In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.
We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products.
A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real time.
When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries.
Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.
“Any comment?” is probably one of the worst ways to ask for feedback. It’s vague and open ended, and it doesn’t provide any indication of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request.
It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.
Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.
And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.
The question
Being open to feedback is essential, but we need to be precise about what we’re looking for. Just saying “Any comment?”, “What do you think?”, or “I’d love to get your opinion” at the end of a presentation—whether it’s in person, over video, or through a written post—is likely to get a number of varied opinions or, even worse, get everyone to follow the direction of the first person who speaks up. And then... we get frustrated because vague questions like those can turn a high-level flows review into people instead commenting on the borders of buttons—which might be a lively topic in its own right, making it hard at that point to redirect the team to the subject that you wanted to focus on.
But how do we get into this situation? It’s a mix of factors. One is that we don’t usually consider asking as a part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there’s often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on improving them.
The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes it clear that you’re open to comments and what kind of comments you’d like to get. It puts people in the right mental state, especially in situations when they weren’t expecting to give feedback.
There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.
“Stage” refers to each of the steps of the process—in our case, the design process. In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there’s been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?
Here are a few example questions, precise and to the point, that refer to different layers:
The other axis of specificity is about how deep you’d like to go on what’s being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially useful from one iteration to the next where it’s important to highlight the parts that have changed.
There are other things that we can consider when we want to achieve more specific—and more effective—questions.
A simple trick is to remove generic qualifiers from your questions like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier, and convert it to an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”
Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that sense, you might still make it explicit that you’re looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, “At first glance, what do you think?” so that it’s clear that what you’re asking is open ended but focused on someone’s impression after their first five seconds of looking at it.
Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren’t open to feedback. It’s not something that I’d recommend in general, but I’ve found it useful to avoid falling again into rabbit holes of the sort that might lead to further refinement but aren’t what’s most important right now.
Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what’s needed. It can save a lot of time and frustration.
The iteration
Design iterations are probably the most visible part of the design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file: conversations disappear once they’re resolved, shared UI components update automatically, and designs always show the latest version—unless these would-be helpful features are manually turned off. The implied goal of these tools seems to be to arrive at just one final copy with all discussions closed, probably because they inherited patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques—though I don’t want to be too prescriptive here: it could work for some teams.
The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.
Using iteration posts has many advantages:
These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.
I don’t think there’s a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:
Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copy and pasting it. The idea is to provide context and to repeat what’s essential to make each iteration post complete so that there’s no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.
This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.
The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture.
It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It’s not too different from organizing a good live presentation.
For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.
And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.
Not all iterations are the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.
I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.
Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labeling tip, but it can help in multiple ways:
To mark when a design is complete enough to be worked on—even if some bits still need attention, and in turn more iterations—the wording release candidate (RC) can be used to describe it: “with i8, we reached RC” or “i12 is an RC.”
The review
What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.
This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:
The first friction point is feeling a pressure to reply to every single comment. Sometimes we write the iteration post, and we get replies from our team. It’s just a few of them, it’s easy, and it doesn’t feel like a problem. But other times, some solutions might require more in-depth discussions, and the amount of replies can quickly increase, which can create a tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who’s replying is a stakeholder or someone directly involved in the project who we feel that we need to listen to. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don’t have to reply to every comment, and in asynchronous spaces, there are alternatives:
The second friction point is the swoop-by comment: feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements—or of the previous iterations’ discussions. Swoop-by comments often trigger the simple thought “We’ve already discussed this…”, and it can be frustrating to have to repeat the same reply over and over. On their side, one can hope the commenters might learn something too: to acknowledge that they’re doing this and to be more conscious about outlining where they’re coming from.
Let’s begin by acknowledging again that there’s no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!
Swoop-by comments can still be useful for two reasons: they might point out something that still isn’t clear, and they have the potential to stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but that might at least help in dealing with it.
The third friction point is the personal stake we could have with the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.
Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don’t have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes “no” is the right answer.
As the designer leading the project, you’re in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you’re the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Feedback, in whichever form it takes, and whatever it may be called, is one of the most effective soft skills that we have at our disposal to collaboratively get our designs to a better place while growing our own skills and perspectives.
Feedback is also one of the most underestimated tools, and often by assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create confusion in projects, bring down morale, and affect trust and team collaboration over the long term. Quality feedback can be a transformative force.
Practicing our skills is surely a good way to improve, but the learning gets even faster when it’s paired with a good foundation that channels and focuses the practice. What are some foundational aspects of giving good feedback? And how can feedback be adjusted for remote and distributed work environments?
On the web, we can identify a long tradition of asynchronous feedback: from the early days of open source, code was shared and discussed on mailing lists. Today, developers engage on pull requests, designers comment in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.
Design critique is often the name used for a type of feedback that’s provided to make our work better, collaboratively. So it shares a lot of the principles with feedback in general, but it also has some differences.
The content
The foundation of every good critique is the feedback’s content, so that’s where we need to start. There are many models that you can use to shape your content. The one that I personally like best—because it’s clear and actionable—is Lara Hogan’s feedback equation: observation plus impact plus question (or request).
While this equation is generally used to give feedback to people, it also fits really well in a design critique because it ultimately answers some of the core questions that we work on: What? Where? Why? How? Imagine that you’re giving some feedback about some design work that spans multiple screens, like an onboarding flow: there are some pages shown, a flow blueprint, and an outline of the decisions made. You spot something that could be improved. If you keep the three elements of the equation in mind, you’ll have a mental model that can help you be more precise and effective.
Here is a comment that could be given as part of some feedback, and it might look reasonable at first glance: it seems to superficially fulfill the elements in the equation. But does it?
Not sure about the buttons’ styles and hierarchy—it feels off. Can you change them?
Observation for design feedback doesn’t just mean pointing out which part of the interface your feedback refers to; it also means offering a perspective that’s as specific as possible. Are you providing the user’s perspective? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?
When I see these two buttons, I expect one to go forward and one to go back.
Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.
The question approach is meant to provide open guidance by eliciting critical thinking in the designer receiving the feedback. Notably, Lara’s equation provides a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for feedback in general, for design critiques, in my experience, defaulting to the question approach usually reaches the best solutions because designers are generally more comfortable being given an open space to explore.
The difference between the two can be exemplified with, for the question approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?
Or, for the request approach:
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.
At this point, in some situations, it might be useful to add an extra why: why you consider the given suggestion to be better.
When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
Choosing the question approach or the request approach can also at times be a matter of personal preference. A while ago, I was putting a lot of effort into improving my feedback: I did rounds of anonymous feedback, and I reviewed feedback with other people. After a few rounds of this work and a year later, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. To my shock, my next round of feedback from one specific person wasn’t that great. The reason is that I had previously tried not to be prescriptive in my advice—because the people who I was previously working with preferred the open-ended question format over the request style of suggestions. But now in this other team, there was one person who instead preferred specific guidance. So I adapted my feedback for them to include requests.
One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. No… but also yes. Let’s explore both sides.
No, this style of feedback is actually efficient because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Also if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback were instead just, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving this feedback wouldn’t have much to go by, so they might just apply the change. In later iterations, the interface might change or they might introduce new features—and maybe that change might not make sense anymore. Without the why, the designer might imagine that the change is about consistency… but what if it wasn’t? So there could now be an underlying concern that changing the buttons would be perceived as a regression.
Yes, this style of feedback is not always efficient because the points in some comments don’t always need to be exhaustive, sometimes because certain changes may be obvious (“The font used doesn’t follow our guidelines”) and sometimes because the team may have a lot of internal knowledge such that some of the whys may be implied.
So the equation above isn’t meant to suggest a strict template for feedback but a mnemonic to reflect and improve the practice. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.
The tone
Well-grounded content is the foundation of feedback, but that’s not really enough. The soft skills of the person who’s providing the critique can multiply the likelihood that the feedback will be well received and understood. Tone alone can make the difference between content that’s rejected or welcomed, and it’s been demonstrated that only positive feedback creates sustained change in people.
Since our goal is to be understood and to have a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content—the receptivity equation: timing plus attitude plus form.
Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.
Timing refers to when the feedback happens. To-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. Questioning the entire high-level information architecture of a new feature when it’s about to ship might still be relevant if that questioning highlights a major blocker that nobody saw, but it’s way more likely that those concerns will have to wait for a later rework. So in general, attune your feedback to the stage of the project. Early iteration? Late iteration? Polishing work in progress? These all have different needs. The right timing will make it more likely that your feedback will be well received.
Attitude is the equivalent of intent, and in the context of person-to-person feedback, it can be referred to as radical candor. That means checking before we write to see whether what we have in mind will truly help the person and make the project better overall. This might be a hard reflection at times because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but that can happen, and that’s okay. Acknowledging and owning that can help you make up for that: how would I write if I really cared about them? How can I avoid being passive aggressive? How can I be more constructive?
Form is especially relevant in diverse and cross-cultural work environments because having great content, perfect timing, and the right attitude might not come across if the way that we write creates misunderstandings. There might be many reasons for this: sometimes certain words might trigger specific reactions; sometimes nonnative speakers might not understand all the nuances of some sentences; sometimes our brains might just be different and we might perceive the world differently—neurodiversity must be taken into consideration. Whatever the reason, it’s important to review not just what we write but how.
A few years back, I was asking for some feedback on how I give feedback. I received some good advice but also a comment that surprised me. They pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intent! I felt really bad, and I realized that I had been providing feedback to them for months—and every time I might have made them feel stupid. I was horrified… but also thankful. I made a quick fix: I added “oh” to my list of replaced words (take your pick: macOS’s text replacement, aText, TextExpander, or others) so that when I typed “oh,” it was instantly deleted.
Something to highlight because it’s quite frequent—especially in teams that have a strong group spirit—is that people tend to beat around the bush. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback—it just means that even when you provide hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive. The nicest thing that you can do for someone is to help them grow.
We have a great advantage in giving feedback in written form: it can be reviewed by another person who isn’t directly involved, which can help to reduce or remove any bias that might be there. I found that the best, most insightful moments for me have happened when I’ve shared a comment and I’ve asked someone who I highly trusted, “How does this sound?”, “How can I do it better?”, and even “How would you have written it?”—and I’ve learned a lot by seeing the two versions side by side.
The format
Asynchronous feedback also has a major inherent advantage: we can take more time to refine what we’ve written to make sure that it fulfills two main goals: the clarity of communication and the actionability of the suggestions.
Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to do this, and of course context matters, but let’s try to think about some elements that may be useful to consider.
In terms of clarity, start by grounding the critique that you’re about to give by providing context. Specifically, this means describing where you’re coming from: do you have a deep knowledge of the project, or is this the first time that you’re seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s perspective are you taking when providing your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?
Providing context is helpful even if you’re sharing feedback within a team that already has some information on the project. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.
We often focus on the negatives, trying to outline all the things that could be done better. That’s of course important, but it’s just as important—if not more—to focus on the positives, especially if you saw progress from the previous iteration. This might seem superfluous, but it’s important to keep in mind that design is a discipline where there are hundreds of possible solutions for every problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. As a bonus, positive feedback can also help reduce impostor syndrome.
There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why, and then on that foundation, you can add what could be improved. This is powerful because there’s a big difference between a critique that’s for a design that’s already in good shape and a critique that’s for a design that isn’t quite there yet.
Another way that you can improve your feedback is to depersonalize the feedback: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This is very easy to change in your writing by reviewing it just before sending.
In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. For longer pieces of feedback, you might also consider splitting it into sections or even across multiple comments. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.
One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emoji. A red square 🟥 means something that I consider blocking; a yellow diamond 🔶 is something I could be convinced otherwise about, but that I think should be changed; and a green circle 🟢 is a detailed, positive confirmation. I also use a blue spiral 🌀 for something that I’m not sure about, an exploration, an open alternative, or just a note. But I’d use this approach only on teams where I’ve already established a good level of trust, because if it happens that I have to deliver a lot of red squares, the impact could be quite demoralizing, and I’d reframe how I communicate that a bit.
Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:
What about giving feedback directly in Figma or another design tool that allows in-place feedback? In general, I find these difficult to use because they hide discussions and they’re harder to track, but in the right context, they can be very effective. Just make sure that each of the comments is separate so that it’s easier to match each discussion to a single task, similar to the idea of splitting mentioned above.
One final note: say the obvious. Sometimes we might feel that something is obviously good or obviously wrong, and so we don’t say it. Or sometimes we might have a doubt that we don’t express because the question might sound stupid. Say it—that’s okay. You might have to reword it a little bit to make the reader feel more comfortable, but don’t hold it back. Good feedback is transparent, even when it may be obvious.
There’s another advantage of asynchronous feedback: written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” could be a question that pops up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions, without hiding them once they are resolved.
Content, tone, and format. Each one of these subjects provides a useful model, but working to improve eight areas—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot to take on all at once. One effective approach is to tackle them one by one: first identify the area where you’re weakest (either from your own perspective or from feedback from others) and start there. Then the second, then the third, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while, it’ll become second nature, and your impact on the work will multiply.
Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.
Are you like me, reading about people fading away as they burn out, and feeling unable to relate? Do you feel like your feelings are invisible to the world because you’re experiencing burnout differently? When burnout starts to push down on us, our core comes through more. Beautiful, peaceful souls get quieter and fade into that distant and distracted burnout we’ve all read about. But some of us, those with fires always burning on the edges of our core, get hotter. In my heart I am fire. When I face burnout I double down, triple down, burning hotter and hotter to try to best the challenge. I don’t fade—I am engulfed in a zealous burnout.
So what on earth is a zealous burnout?
Imagine a woman determined to do it all. She has two amazing children whom she, along with her husband who is also working remotely, is homeschooling during a pandemic. She has a demanding client load at work—all of whom she loves. She gets up early to get some movement in (or often catch up on work), does dinner prep as the kids are eating breakfast, and gets to work while positioning herself near “fourth grade” to listen in as she juggles clients, tasks, and budgets. Sound like a lot? Even with a supportive team both at home and at work, it is.
Sounds like this woman has too much on her plate and needs self-care. But no, she doesn’t have time for that. In fact, she starts to feel like she’s dropping balls. Not accomplishing enough. There’s not enough of her to be here and there; she is trying to divide her mind in two all the time, all day, every day. She starts to doubt herself. And as those feelings creep in more and more, her internal narrative becomes more and more critical.
Suddenly she KNOWS what she needs to do! She should DO MORE.
This is a hard and dangerous cycle. Know why? Because once she doesn’t finish that new goal, that narrative will get worse. Suddenly she’s failing. She isn’t doing enough. SHE is not enough. She might fail, she might fail her family...so she’ll find more she should do. She doesn’t sleep as much, move as much, all in the efforts to do more. Caught in this cycle of trying to prove herself to herself, never reaching any goal. Never feeling “enough.”
So, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture but instead builds slowly over weeks and months. My burnout doesn’t look like a person losing focus—it looks like speeding up. I speed up and up and up...and then I just stop.
I am the one who could
It’s funny the things that shape us. Through the lens of childhood, I viewed the fears, struggles, and sacrifices of someone who had to make it all work without having enough. I was lucky that my mother was so resourceful and my father supportive; I never went without and even got an extra here or there.
Growing up, I did not feel shame when my mother paid with food stamps; in fact, I’d have likely taken on any debate on the topic, verbally eviscerating anyone who dared to criticize the disabled woman trying to make sure all our needs were met with so little. As a child, I watched the way the fear of not making those ends meet impacted people I love. As the non-disabled person in my home, I would take on many of the physical tasks because I was “the one who could” make our lives a little easier. I learned early to associate fears or uncertainty with putting more of myself into it—I am the one who can. I learned early that when something frightens me, I can double down and work harder to make it better. I can own the challenge. When people have seen this in me as an adult, I’ve been told I seem fearless, but make no mistake, I’m not. If I seem fearless, it’s because this behavior was forged from other people’s fears.
And here I am, more than 30 years later still feeling the urge to mindlessly push myself forward when faced with overwhelming tasks ahead of me, assuming that I am the one who can and therefore should. I find myself driven to prove that I can make things happen if I work longer hours, take on more responsibility, and do more.
I do not see people who struggle financially as failures, because I have seen how strong that tide can be—it pulls you along the way. I truly get that I have been privileged to be able to avoid many of the challenges that were present in my youth. That said, I am still “the one who can” who feels she should, so if I were faced with not having enough to make ends meet for my own family, I would see myself as having failed. Though I am supported and educated, most of this is due to good fortune. I will, however, allow myself the arrogance of saying I have been careful with my choices to have encouraged that luck. My identity stems from the idea that I am “the one who can” so therefore feel obligated to do the most. I can choose to stop, and with some quite literal cold water splashed in my face, I’ve made the choice to before. But that choosing to stop is not my go-to; I move forward, driven by a fear that is so a part of me that I barely notice it’s there until I’m feeling utterly worn away.
So why all the history? You see, burnout is a fickle thing. I have heard and read a lot about burnout over the years. Burnout is real. Especially now, with COVID, many of us are balancing more than we ever have before—all at once! It’s hard, and the procrastinating, the avoidance, the shutting down impacts so many amazing professionals. There are important articles that relate to what I imagine must be the majority of people out there, but not me. That’s not what my burnout looks like.
The dangerous invisibility of zealous burnout
A lot of work environments see the extra hours, extra effort, and overall focused commitment as an asset (and sometimes that’s all it is). They see someone trying to rise to challenges, not someone stuck in their fear. Many well-meaning organizations have safeguards in place to protect their teams from burnout. But in cases like this, those alarms are not always tripped, and then when the inevitable stop comes, some members of the organization feel surprised and disappointed. And sometimes maybe even betrayed.
Parents—more so mothers, statistically speaking—are praised as being so on top of it all when they can work, be involved in the after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID many of us have binged countless streaming episodes showing how it’s so hard for the female protagonist, but she is strong and funny and can do it. It’s a “very special episode” when she breaks down, cries in the bathroom, woefully admits she needs help, and just stops for a bit. Truth is, countless people are hiding their tears or are doom-scrolling to escape. We know that the media is a lie to amuse us, but often the perception that it’s what we should strive for has penetrated much of society.
Women and burnout
I love men. And though I don’t love every man (heads up, I don’t love every woman or nonbinary person either), I think there is a beautiful spectrum of individuals who represent that particular binary gender.
That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID stressed times. Mothers in the workplace feel the pressure to do all the “mom” things while giving 110%. Mothers not in the workplace feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s vicious and systemic and so a part of our culture that we’re often not even aware of the enormity of the pressures we put on ourselves and each other.
And there are prices beyond happiness too. Harvard Health Publishing released a study a decade ago that “uncovered strong links between women’s job stress and cardiovascular disease.” The CDC noted, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or about 1 in every 5 female deaths.”
This relationship between work stress and health, from what I have read, is more dangerous for women than it is for their non-female counterparts.
But what if your burnout isn’t like that either?
That might not be you either. After all, each of us is so different, and how we respond to stressors is too. It’s part of what makes us human. Don’t stress about what burnout looks like; just learn to recognize it in yourself. Here are a few questions I sometimes ask friends if I am concerned about them.
Are you happy? This simple question should be the first thing you ask yourself. Chances are, even if you’re burning out doing all the things you love, as you approach burnout you’ll just stop taking as much joy from it all.
Do you feel empowered to say no? I have observed in myself and others that when someone is burning out, they no longer feel they can say no to things. Even those who don’t “speed up” feel pressure to say yes to not disappoint the people around them.
What are three things you’ve done for yourself? Another observance is that we all tend to stop doing things for ourselves. Anything from skipping showers and eating poorly to avoiding talking to friends. These can be red flags.
Are you making excuses? Many of us try to disregard feelings of burnout. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, and/or a skill set you need to learn. That happens—life happens. BUT if this doesn’t stop, be honest with yourself. If you’ve worked more 50-hour weeks since January than not, maybe it’s not crunch time—maybe it’s a bad situation that you’re burning out from.
Do you have a plan to stop feeling this way? If something is truly temporary and you do need to just push through, then it has an exit route with a defined end.
Take the time to listen to yourself as you would a friend. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing.
So now what?
What I just described is a different path to burnout, but it’s still burnout. There are well-established approaches to working through burnout:
Those are hard for me because they feel like more tasks. If I’m in the burnout cycle, doing any of the above for me feels like a waste. The narrative is that if I’m already failing, why would I take care of myself when I’m dropping all those other balls? People need me, right?
If you’re deep in the cycle, your inner voice might be pretty awful by now. If you need to, tell yourself you need to take care of the person your people depend on. If your roles are pushing you toward burnout, use them to help make healing easier by justifying the time spent working on you.
To help remind myself of the airline attendant message about putting the mask on yourself first, I have come up with a few things that I do when I start feeling myself going into a zealous burnout.
Cook an elaborate meal for someone!
OK, I am a “food-focused” individual so cooking for someone is always my go-to. There are countless tales in my home of someone walking into the kitchen and turning right around and walking out when they noticed I was “chopping angrily.” But it’s more than that, and you should give it a try. Seriously. It’s the perfect go-to if you don’t feel worthy of taking time for yourself—do it for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with all the ways you perceive the world. It can break you out of your head and help you gain a better perspective. In my house, I’ve been known to pick a place on the map and cook food that comes from wherever that is (thank you, Pinterest). I love cooking Indian food, as the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention for me because it’s not what I was brought up making. And in the end, we all win!
Vent like a foul-mouthed fool
Be careful with this one!
I have been making an effort to practice more gratitude over the past few years, and I recognize the true benefits of that. That said, sometimes you just gotta let it all out—even the ugly. Hell, I’m a big fan of not sugarcoating our lives, and that sometimes means that to get past the big pile of poop, you’re gonna wanna complain about it a bit.
When that is what’s needed, turn to a trusted friend and allow yourself some pure verbal diarrhea, saying all the things that are bothering you. You need to trust this friend not to judge, to see your pain, and, most importantly, to tell you to remove your cranium from your own rectal cavity. Seriously, it’s about getting a reality check here! One of the things I admire the most about my husband (though often after the fact) is his ability to break things down to their simplest. “We’re spending our lives together, of course you’re going to disappoint me from time to time, so get over it” has been his way of speaking his dedication, love, and acceptance of me—and I could not be more grateful. It also, of course, has meant that I needed to remove my head from that rectal cavity. So, again, usually those moments are appreciated in hindsight.
Pick up a book!
There are many books out there that aren’t so much self-help as they are people just like you sharing their stories and how they’ve come to find greater balance. Maybe you’ll find something that speaks to you. Titles that have stood out to me include:
Or, another tactic I love to employ is to read or listen to a book that has NOTHING to do with my work-life balance. I’ve read the following books and found they helped balance me out because my mind was pondering their interesting topics instead of running in circles:
If you’re not into reading, pick up a topic on YouTube or choose a podcast to subscribe to. I’ve watched countless permaculture and gardening topics in addition to how to raise chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind...yet. I just find the topic interesting, and it has nothing to do with any aspect of my life that needs anything from me.
Forgive yourself
You are never going to be perfect—hell, it would be boring if you were. It’s OK to be broken and flawed. It’s human to be tired and sad and worried. It’s OK to not do it all. It’s scary to be imperfect, but you can’t be brave if nothing is scary.
This last one is the most important: allow yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are more powerful than the fears that drive us.
This is hard. It is hard for me. It’s what’s driven me to write this—that it’s OK to stop. It’s OK that your unhealthy habit that might even benefit those around you needs to end. You can still be successful in life.
I recently read that we are all writing our eulogy in how we live. Knowing that your professional accomplishments won’t be mentioned in that speech, what will yours say? What do you want it to say?
Look, I get that none of these ideas will “fix it,” and that’s not their purpose. None of us are in control of our surroundings, only how we respond to them. These suggestions are to help stop the spiral effect so that you are empowered to address the underlying issues and choose your response. They are things that work for me most of the time. Maybe they’ll work for you.
Does this sound familiar?
If this sounds familiar, it’s not just you. Don’t let your negative self-talk tell you that you “even burn out wrong.” It’s not wrong. Even if rooted in fear like my own drivers, I believe that this need to do more comes from a place of love, determination, motivation, and other wonderful attributes that make you the amazing person you are. We’re going to be OK, ya know. The lives that unfold before us might never look like that story in our head—that idea of “perfect” or “done” we’re looking for, but that’s OK. Really, when we stop and look around, usually the only eyes that judge us are in the mirror.
Do you remember that Winnie the Pooh sketch that had Pooh eat so much at Rabbit’s house that his buttocks couldn’t fit through the door? Well, I already associate a lot with Rabbit, so it came as no surprise when he abruptly declared that this was unacceptable. But do you recall what happened next? He put a shelf across poor Pooh’s ankles and decorations on his back, and made the best of the big butt in his kitchen.
At the end of the day we are resourceful and know that we are able to push ourselves if we need to—even when we are tired to our core or have a big butt of fluff ‘n’ stuff in our room. None of us has to be afraid, as we can manage any obstacle put in front of us. And maybe that means we will need to redefine success to allow space for being uncomfortably human, but that doesn’t really sound so bad either.
So, wherever you are right now, please breathe. Do what you need to do to get out of your head. Forgive and take care.
This Person Does Not Exist is a website that generates human faces with a machine learning algorithm. It takes real portraits and recombines them into fake human faces. We recently scrolled past a LinkedIn post stating that this website could be useful “if you are developing a persona and looking for a photo.”
We agree: the computer-generated faces could be a great match for personas—but not for the reason you might think. Ironically, the website highlights the core issue of this very common design method: the person(a) does not exist. Like the pictures, personas are artificially made. Information is taken out of natural context and recombined into an isolated snapshot that’s detached from reality.
But strangely enough, designers use personas to inspire their design for the real world.
Personas: A step back

Most designers have created, used, or come across personas at least once in their career. In their article “Personas - A Simple Introduction,” the Interaction Design Foundation defines personas as “fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand.” In their most complete expression, personas typically consist of a name, profile picture, quotes, demographics, goals, needs, behavior in relation to a certain service/product, emotions, and motivations (for example, see Creative Companion’s Persona Core Poster). The purpose of personas, as stated by design agency Designit, is “to make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development.”
The decontextualization of personas

Personas are popular because they make “dry” research data more relatable, more human. However, this method constrains the researcher’s data analysis in such a way that the investigated users are removed from their unique contexts. As a result, personas don’t portray key factors that make you understand their decision-making process or allow you to relate to users’ thoughts and behavior; they lack stories. You understand what the persona did, but you don’t have the background to understand why. You end up with representations of users that are actually less human.
This “decontextualization” we see in personas happens in four ways, which we’ll explain below.
Personas assume people are static

Although many companies still try to box in their employees and customers with outdated personality tests (looking at you, Myers-Briggs), here’s a painfully obvious truth: people are not a fixed set of features. You act, think, and feel differently according to the situations you experience. You appear different to different people; you might act friendly to some, rough to others. And you change your mind all the time about decisions you’ve taken.
Modern psychologists agree that while people generally behave according to certain patterns, it’s actually a combination of background and environment that determines how people act and make decisions. The context—the environment, the influence of other people, your mood, the entire history that led up to a situation—determines the kind of person you are in each specific moment.
In their attempt to simplify reality, personas do not take this variability into account; they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Even worse, people are reduced to a label and categorized as “that kind of person” with no means to exercise their innate flexibility. This practice reinforces stereotypes, lowers diversity, and doesn’t reflect reality.
Personas focus on individuals, not the environment

In the real world, you’re designing for a context, not for an individual. Each person lives in a family, a community, an ecosystem, where there are environmental, political, and social factors you need to consider. A design is never meant for a single user. Rather, you design for one or more particular contexts in which many people might use that product. Personas, however, show the user alone rather than describe how the user relates to the environment.
Would you always make the same decision over and over again? Maybe you’re a committed vegan but still decide to buy some meat when your relatives are coming over. As they depend on different situations and variables, your decisions—and behavior, opinions, and statements—are not absolute but highly contextual. The persona that “represents” you wouldn’t take into account this dependency, because it doesn’t specify the premises of your decisions. It doesn’t provide a justification of why you act the way you do. Personas enact the well-known bias called fundamental attribution error: explaining others’ behavior too much by their personality and too little by the situation.
As mentioned by the Interaction Design Foundation, personas are usually placed in a scenario that’s a “specific context with a problem they want to or have to solve”—does that mean context actually is considered? Unfortunately, what often happens is that you take a fictional character and based on that fiction determine how this character might deal with a certain situation. This is made worse by the fact that you haven’t even fully investigated and understood the current context of the people your persona seeks to represent; so how could you possibly understand how they would act in new situations?
Personas are meaningless averages

As mentioned in Shlomo Goltz’s introductory article on Smashing Magazine, “a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people.” A well-known critique of this aspect of personas is that the average person does not exist, as per the famous example of the US Air Force designing planes based on the average of 140 of their pilots’ physical dimensions and not a single pilot actually fitting within that average seat.
The same limitation applies to mental aspects of people. Have you ever heard a famous person say, “They took what I said out of context! They used my words, but I didn’t mean it like that.” The celebrity’s statement was reported literally, but the reporter failed to explain the context around the statement and didn’t describe the non-verbal expressions. As a result, the intended meaning was lost. You do the same when you create personas: you collect somebody’s statement (or goal, or need, or emotion), of which the meaning can only be understood if you provide its own specific context, yet report it as an isolated finding.
But personas go a step further, extracting a decontextualized finding and joining it with another decontextualized finding from somebody else. The resulting set of findings often does not make sense: it’s unclear, or even contradictory, because it lacks the underlying reasons why and how those findings arose. It lacks meaning. And the persona doesn’t give you the full background of the person(s) needed to uncover this meaning: you would have to dive into the raw data behind each single persona item to find it. What, then, is the usefulness of the persona?
The relatability of personas is deceiving

To a certain extent, designers realize that a persona is a lifeless average. To overcome this, designers invent and add “relatable” details to personas to make them resemble real individuals. Nothing captures the absurdity of this better than a sentence by the Interaction Design Foundation: “Add a few fictional personal details to make the persona a realistic character.” In other words, you add non-realism in an attempt to create more realism. You deliberately obscure the fact that “John Doe” is an abstract representation of research findings; but wouldn’t it be much more responsible to emphasize that John is only an abstraction? If something is artificial, let’s present it as such.
It’s the finishing touch of a persona’s decontextualization: after having assumed that people’s personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. In doing so, as with everything they create, they introduce a host of biases. As phrased by Designit, as designers we can “contextualize [the persona] based on our reality and experience. We create connections that are familiar to us.” This practice reinforces stereotypes, doesn’t reflect real-world diversity, and gets further away from people’s actual reality with every detail added.
To do good design research, we should report the reality “as-is” and make it relatable for our audience, so everyone can use their own empathy and develop their own interpretation and emotional response.
Dynamic Selves: The alternative to personas

If we shouldn’t use personas, what should we do instead?
Designit has proposed using Mindsets instead of personas. Each Mindset is a “spectrum of attitudes and emotional responses that different people have within the same context or life experience.” It challenges designers to not get fixated on a single user’s way of being. Unfortunately, while being a step in the right direction, this proposal doesn’t take into account that people are part of an environment that determines their personality, their behavior, and, yes, their mindset. Therefore, Mindsets are also not absolute but change in regard to the situation. The question remains, what determines a certain Mindset?
Another alternative comes from Margaret P., author of the article “Kill Your Personas,” who has argued for replacing personas with persona spectrums that consist of a range of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Persona spectrums are highly useful for more inclusive and context-based design, as they’re based on the understanding that the context is the pattern, not the personality. Their limitation, however, is that they have a very functional take on users that misses the relatability of a real person taken from within a spectrum.
In developing an alternative to personas, we aim to transform the standard design process to be context-based. Contexts are generalizable and have patterns that we can identify, just like we tried to do previously with people. So how do we identify these patterns? How do we ensure truly context-based design?
Understand real individuals in multiple contexts

Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multi-faceted contexts, and use this understanding to fuel our design. We refer to this approach as Dynamic Selves.
Let’s take a look at what the approach looks like, based on an example of how one of us applied it in a recent project that researched habits of Italians around energy consumption. We drafted a design research plan aimed at investigating people’s attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats.
1. Choose the right sample

When we argue against personas, we’re often challenged with quotes such as “Where are you going to find a single person that encapsulates all the information from one of these advanced personas[?]” The answer is simple: you don’t have to. You don’t need to have information about many people for your insights to be deep and meaningful.
In qualitative research, validity does not derive from quantity but from accurate sampling. You select the people that best represent the “population” you’re designing for. If this sample is chosen well, and you have understood the sampled people in sufficient depth, you’re able to infer how the rest of the population thinks and behaves. There’s no need to study seven Susans and five Yuriys; one of each will do.
Similarly, you don’t need to understand Susan in fifteen different contexts. Once you’ve seen her in a couple of diverse situations, you’ve understood the scheme of Susan’s response to different contexts. Not Susan as an atomic being but Susan in relation to the surrounding environment: how she might act, feel, and think in different situations.
Given that each person is representative of a part of the total population you’re researching, it becomes clear why each should be represented as an individual, as each already is an abstraction of a larger group of individuals in similar contexts. You don’t want abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining in their microcosmos—and if you want to identify patterns you can focus on identifying patterns in contexts.
Yet the question remains: how do you select a representative sample? First of all, you have to consider the target audience of the product or service you are designing for: it might be useful to look at the company’s goals and strategy, the current customer base, and/or a possible future target audience.
In our example project, we were designing an application for those who own a smart thermostat. In the future, everyone could have a smart thermostat in their house. Right now, though, only early adopters own one. To build a significant sample, we needed to understand why these people had become early adopters. We therefore recruited by asking people why they had a smart thermostat and how they got it. There were those who had chosen to buy it, those who had been influenced by others to buy it, and those who had found it in their house. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech-savvy and non-tech-savvy participants.
2. Conduct your research

After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will make your qualitative data rich with anecdotes and examples. In our example project, given COVID-19 restrictions, we converted an in-house ethnographic research effort into remote family interviews, conducted from home and accompanied by diary studies.
To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was not limited to the interviewee alone but deliberately included the whole family. Each interviewee would tell a story that would then become much more lively and precise with the corrections or additional details coming from wives, husbands, children, or sometimes even pets. We also focused on the relationships with other meaningful people (such as colleagues or distant family) and all the behaviors that resulted from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors.
It’s essential that the scope of the research remains broad enough to be able to include all possible actors. Therefore, it normally works best to define broad research areas with macro questions. Interviews are best set up in a semi-structured way, where follow-up questions will dive into topics mentioned spontaneously by the interviewee. This open-minded “plan to be surprised” will yield the most insightful findings. When we asked one of our participants how his family regulated the house temperature, he replied, “My wife has not installed the thermostat’s app—she uses WhatsApp instead. If she wants to turn on the heater and she is not home, she will text me. I am her thermostat.”
3. Analysis: Create the Dynamic Selves

During the research analysis, you start representing each individual with multiple Dynamic Selves, each “Self” representing one of the contexts you have investigated. The core of each Dynamic Self is a quote, which comes supported by a photo and a few relevant demographics that illustrate the wider context. The research findings themselves will show which demographics are relevant to show. In our case, as our research focused on families and their lifestyle to understand their needs for thermal regulation, the important demographics were family type, number and nature of houses owned, economic status, and technological maturity. (We also included the individual’s name and age, but they’re optional—we included them to ease the stakeholders’ transition from personas and be able to connect multiple actions and contexts to the same person).
To capture exact quotes, interviews need to be video-recorded and notes need to be taken verbatim as much as possible. This is essential to the truthfulness of the several Selves of each participant. In the case of real-life ethnographic research, photos of the context and anonymized actors are essential to build realistic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work, too, as long as it’s realistic and depicts meaningful actions that you associate with your participants. For example, one of our interviewees told us about his mountain home where he used to spend every weekend with his family. Therefore, we portrayed him hiking with his little daughter.
At the end of the research analysis, we displayed all of the Selves’ “cards” on a single canvas, categorized by activities. Each card displayed a situation, represented by a quote and a unique photo. All participants had multiple cards about themselves.
4. Identify design opportunities

Once you have collected all main quotes from the interview transcripts and diaries, and laid them all down as Self cards, you will see patterns emerge. These patterns will highlight the opportunity areas for new product creation, new functionalities, and new services—for new design.
In our example project, there was a particularly interesting insight around the concept of humidity. We realized that people don’t know what humidity is and why it is important to monitor it for health: an environment that’s too dry or too wet can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client to educate users on this concept and become a health advisor.
Benefits of Dynamic Selves

When you use the Dynamic Selves approach in your research, you start to notice unique social relations, peculiar situations real people face and the actions that follow, and that people are surrounded by changing environments. In our thermostat project, we have come to know one of the participants, Davide, as a boyfriend, dog-lover, and tech enthusiast.
Davide is an individual we might have once reduced to a persona called “tech enthusiast.” But we can have tech enthusiasts who have families or are single, who are rich or poor. Their motivations and priorities when deciding to purchase a new thermostat can be opposite according to these different frames.
Once you have understood Davide in multiple situations, and for each situation have understood in sufficient depth the underlying reasons for his behavior, you’re able to generalize how he would act in another situation. You can use your understanding of him to infer what he would think and do in the contexts (or scenarios) that you design for.
The Dynamic Selves approach aims to dismiss the conflicted dual purpose of personas—to summarize and empathize at the same time—by separating your research summary from the people you’re seeking to empathize with. This is important because our empathy for people is affected by scale: the bigger the group, the harder it is to feel empathy for others. We feel the strongest empathy for individuals we can personally relate to.
If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character more “realistic,” no more unnecessary additional bias. It’s simply how this person is in real life. In fact, in our experience, personas quickly become nothing more than a name in our priority guides and prototype screens, as we all know that these characters don’t really exist.
Another powerful benefit of the Dynamic Selves approach is that it raises the stakes of your work: if you mess up your design, someone real, a person you and the team know and have met, is going to feel the consequences. It might stop you from taking shortcuts and will remind you to conduct daily checks on your designs.
And finally, real people in their specific contexts are a better basis for anecdotal storytelling and therefore are more effective in persuasion. Documentation of real research is essential in achieving this result. It adds weight and urgency behind your design arguments: “When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. If we go for this functionality, I’m afraid we’re going to add complexity to her life.”
Conclusion

Designit mentioned in their article on Mindsets that “design thinking tools offer a shortcut to deal with reality’s complexities, but this process of simplification can sometimes flatten out people’s lives into a few general characteristics.” Unfortunately, personas have been culprits in a crime of oversimplification. They are unsuited to represent the complex nature of our users’ decision-making processes and don’t account for the fact that humans are immersed in contexts.
Design needs simplification but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Portray those, use them to describe the person in their multiple contexts. Both insights and people come with a context; they cannot be cut from that context because it would remove meaning.
It’s high time for design to move away from fiction, and embrace reality—in its messy, surprising, and unquantifiable beauty—as our guide and inspiration.
It feels like only yesterday that I became an engineering manager, but it has been almost a year. I want to take this time to reflect on the challenges and learnings from this journey.
The journey from individual contributor to engineering manager isn’t always straightforward. Today, I’ll share what it means to become an engineering manager from my point of view, and a few important points to be aware of before making this transition.
It’s been a while since I last posted anything on my website; a few changes in 2022 kept me away from writing. It’s time to resume.
Security is a big topic in software engineering, but how does it apply to mobile development? We care about user experience and mobile performance, but security issues are rarely prioritized. This week, I’ll share how to integrate security tools into your CI pipeline to stay aware of your codebase’s health.
This week I was reading about the “10x engineer” and what it means in the tech industry. Even if the title is questionable, I wanted to reflect on its definition and what it can mean in mobile engineering.
For most mobile engineers, the end game is to release our own apps. For the few projects that make it to the App Store, it can be pretty hard to keep them alive over time. Eventually, the question comes up: should I remove my app from the App Store? Today, I’ll share the thought process that led me to sunset one of mine.
Memory management is a big topic in Swift and iOS development. There are plenty of tutorials explaining when to use weak self with closures, but here is a short story about how memory leaks can still happen even with it.
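For a flavor of that kind of story, here is a minimal sketch with hypothetical types: the outer closure captures self weakly, but a nested, long-lived notification block re-captures the unwrapped strong reference.

```swift
import Foundation

struct Profile { let name: String }

final class ProfileService {
    func fetch(_ completion: @escaping (Profile) -> Void) {
        completion(Profile(name: "Ana"))
    }
}

extension Notification.Name {
    static let profileDidChange = Notification.Name("profileDidChange")
}

final class ProfileViewController {
    private let service = ProfileService()
    private var observer: NSObjectProtocol?

    func bind() {
        service.fetch { [weak self] profile in
            guard let self = self else { return }
            // The block below captures the unwrapped, *strong* `self`.
            // NotificationCenter retains the block until the observer is
            // removed, so this controller can never be deallocated even
            // though the outer closure used [weak self].
            self.observer = NotificationCenter.default.addObserver(
                forName: .profileDidChange, object: nil, queue: .main
            ) { _ in
                print("refresh \(profile.name) for \(self)")
            }
        }
    }

    deinit {
        // Never called: the leak above keeps the controller alive.
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}
```

The fix, in this sketch, is to capture self weakly again inside the inner block (or remove the observer as soon as it is no longer needed).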
In iOS development, content alignment and spacing can take a lot of our time. Today, let’s explore how to set constraints with UIKit, update them, and resolve constraint conflicts.
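As a quick illustration (view names are hypothetical), the usual dance is to activate constraints in one batch, keep a reference to the one you will change later, and update its constant rather than re-creating it:

```swift
import UIKit

final class CardViewController: UIViewController {
    private let titleLabel = UILabel()
    private var topConstraint: NSLayoutConstraint!

    override func viewDidLoad() {
        super.viewDidLoad()
        titleLabel.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(titleLabel)

        // Keep a reference to any constraint you plan to update later.
        topConstraint = titleLabel.topAnchor.constraint(
            equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 16)

        NSLayoutConstraint.activate([
            topConstraint,
            titleLabel.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 16),
            titleLabel.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -16),
        ])
    }

    func collapseHeader() {
        // Update the constant instead of re-creating the constraint,
        // then animate the layout pass.
        topConstraint.constant = 4
        UIView.animate(withDuration: 0.25) { self.view.layoutIfNeeded() }
    }
}
```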
Most people don’t know it, but I’ve been blogging for some time now; tomorrow it will be ten years, actually. Today is a good time to take a walk down memory lane.
Opening an app from a URL is such a powerful iOS feature. It drives users to your app and can create shortcuts to specific features. This week, we’ll dive into deep linking on iOS and how to create a URL scheme for your app.
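For a taste of the handling side, here is a minimal sketch assuming a hypothetical myapp:// scheme registered under CFBundleURLTypes in Info.plist:

```swift
import UIKit

final class AppDelegate: UIResponder, UIApplicationDelegate {
    // Called when another app opens e.g. myapp://recipes/42.
    func application(_ app: UIApplication,
                     open url: URL,
                     options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
        guard url.scheme == "myapp", url.host == "recipes",
              let recipeID = url.pathComponents.dropFirst().first else {
            return false
        }
        // Hand recipeID to your router/coordinator here.
        print("Deep link to recipe \(recipeID)")
        return true
    }
}
```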
I’ve been exploring more and more tooling around the iOS ecosystem. One tool I really enjoy using these days is GitHub Actions as continuous integration for my projects. Today we’ll dive into tips and tweaks to make the most of it.
When it comes to iOS development, everybody has their own favorite language and framework: Swift, Objective-C, SwiftUI, React Native, Flutter, and so on. Unlike most of my previous posts, today we’re going to leverage some iOS tooling for a cross-platform technology: fastlane and Flutter.
Between banking and crypto apps, we interact with currency inputs on a daily basis. Creating a localized UITextField can already be tricky in UIKit, so I was wondering how hard it would be to do a similar one in SwiftUI. Let’s see today how to create a localized currency TextField in SwiftUI.
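A minimal sketch of the SwiftUI side, leaning on NumberFormatter’s .currency style so the field follows the user’s locale (view and property names are illustrative):

```swift
import SwiftUI

struct CurrencyField: View {
    @State private var amount: Double = 0

    // Locale-driven formatting: "1 234,56 €" in fr_FR, "$1,234.56" in en_US.
    private var currencyFormatter: NumberFormatter {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.locale = .current
        return formatter
    }

    var body: some View {
        TextField("Amount", value: $amount, formatter: currencyFormatter)
            .keyboardType(.decimalPad)
    }
}
```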
Like many developers, I use open-source tools on a daily basis. Recently, I got the chance to create one for my teammates and to think through what I should consider before launching it. Today I share that checklist.
A big part of the developer journey is making sure our code behaves as expected. It’s best practice to set up tests that let us check quickly and often that nothing is broken. Unit testing is common practice for business logic, but we can also extend it to cover specific UI behaviors. Let’s see how to unit test views and gestures in UIKit.
When we talk about modular apps, we rarely mention how complex they can become over time and how easily that can get out of hand. In most cases, importing frameworks into one another is a reasonable solution, but we can do more. Let’s explore how, with dependency inversion in Swift, we can bring order to our components.
For the past few years, I’ve had the opportunity to mentor new joiners in different roles. In some ways, I could see myself in them, the same as when I started years back: eager to prove themselves, jumping on the code and hacking away.
I tried to think about what I learnt the hard way since my first role in the tech industry and how I could help them learn it the easy way.
Recently, I’ve been more and more curious about the web experience inside mobile apps. Most web browser apps look alike, so I was wondering how I could recreate one with WebKit and SwiftUI. Let’s dive in.
Moving an existing iOS app codebase to SwiftUI can quickly become a challenge if we don’t scope the difficulties ahead. After covering navigation and the design layer last week, it’s time to dive deeper into the logic and handle the code migration for a database and the user preferences.
SwiftUI is great for many things, but completely migrating an existing app codebase to it can be really tricky. In a series of blog posts, I’ll share how to migrate an iOS app written in Swift with UIKit to SwiftUI. Today, let’s start with the navigation and the UI components with storyboards.
Did you ever have to share your screen and camera together? I recently did, and it wasn’t that easy. How hard could it be to create our own? Today, we’ll code our own webcam utility app for macOS in SwiftUI.
It’s been almost two years since Combine was introduced to the Apple developer community. Like many developers, you may want to migrate your codebase to it. You don’t want to be left behind, but you’re not sure where to start, and maybe not sure if you want to jump to SwiftUI either. Nothing to worry about: let’s see, step by step, how to migrate an iOS sample app using UIKit and RxSwift to Combine.
Displaying dates or times is a very common requirement for many apps, often using a specific date formatter. Let’s see what SwiftUI brings to the table to make it easier for developers.
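For instance, assuming iOS 14’s date styles on Text, displaying a date no longer requires a hand-rolled DateFormatter (a minimal sketch, with hypothetical names):

```swift
import SwiftUI

struct OrderRow: View {
    let orderedAt = Date()

    var body: some View {
        VStack(alignment: .leading) {
            // SwiftUI formats for the current locale, no formatter boilerplate.
            Text(orderedAt, style: .date)     // e.g. "June 3, 2021"
            Text(orderedAt, style: .time)     // e.g. "2:45 PM"
            Text(orderedAt, style: .relative) // e.g. "2 min ago", auto-updating
        }
    }
}
```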
When creating new features, it’s really important to think about how our users will use them. Most of the time, the UI is straightforward enough. However, sometimes you will want to give some guidance, to highlight a button or a switch with a message attached. Today, we’ll create a reusable and adaptable overlay in Swift to help onboard mobile users for any of your features.
Close to the end of the year, I tend to list what I’ve accomplished but also what didn’t go so well, to help me see what I can do better next year. A couple of days early, it’s time to look back at 2020.
A question that often comes up when using the Coordinator pattern in iOS development is how to pass data between views. Today I’ll share different approaches to the same problem, regardless of whether you are using MVVM, MVC, or another architectural design pattern.
One reason I like working on native mobile apps so much is delivering a user experience based on the user’s region and location. However, with every update, it can be painful for developers to recapture screenshots for each available language. Today, I’ll share how to automate this with UI tests and Xcode tools.
I’ve been experimenting more and more with SwiftUI and I really wanted to see what we can do with video content. Today I’ll share my findings, showing how to play video using AVFoundation in SwiftUI, including some mistakes to avoid.
With Mac Catalyst and SwiftUI support for macOS, Apple has been pushing new tools to the community for the past couple of years to create new services on Mac computers. Does that mean you should too? Here are a couple of things to consider first.
Designing a watchOS app in Swift always felt quite tricky. I could spend hours tweaking and redoing layouts and constraints. With SwiftUI supporting watchOS, I wanted to give it another try, releasing a standalone app for Apple Watch.
Stepping back from coding for a week and reading about the community, I realized how easy it is to be crushed by anxiety: I see so many great things happening every day, things I want to be part of, while at the same time feeling anxious about being good enough. These are my thoughts on how to face impostor syndrome.
In the last couple of years, Apple has made good efforts to improve its testing tools. Today, I’ll walk you through some tips to make sure your test suite runs at its best capacity.
A recurring challenge in programming is accessing a shared resource concurrently. How do we make sure the code doesn’t behave differently when multiple threads or operations try to access the same property? In short, how do we protect against race conditions?
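One common answer, sketched below with illustrative names, is to isolate the property behind a concurrent dispatch queue and funnel writes through a barrier:

```swift
import Foundation

final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "counter.isolation",
                                      attributes: .concurrent)

    func increment() {
        // .barrier gives this block exclusive access to the queue.
        queue.async(flags: .barrier) { self.value += 1 }
    }

    var current: Int {
        // Concurrent sync reads are safe between barriers.
        queue.sync { value }
    }
}

// Without the barrier, 1000 concurrent increments would likely lose updates.
let counter = Counter()
DispatchQueue.concurrentPerform(iterations: 1000) { _ in counter.increment() }
print(counter.current) // 1000
```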
About a month ago, it became possible to run Swift code on AWS Lambda. I was really interested to try it and see how easy it would be to deploy small Swift functions as a serverless application. Let’s see how.
Even though the iOS ecosystem grows further away from Objective-C every day, some companies still rely heavily on it. A week away from another wave of innovation at WWDC 2020, I thought it would be interesting to dive back into Objective-C, starting with an MVVM pattern implementation.
Since January, I’ve slowed down blogging for a couple of reasons: I started doubting myself and the quality of my content, but I also wanted to focus more on some fundamentals I felt I was missing. So I committed to a 100-day coding challenge focused on data structures and algorithms in Swift.
Following up on previous articles about common data structures in Swift, this week it’s time to cover the tree, a very important concept that we use every day in iOS development. Let’s dive in.
Recently, I was looking into a bug where the UITabBar was inconsistently disappearing on specific pages. I tried different approaches, but I couldn’t pinpoint where it got displayed and hidden. That’s when I thought about KVO.
After covering how to code a queue in Swift last week, it sounds natural to move on to the stack, another really handy data structure which also finds its place in iOS development. Let’s see why.
Recently revisiting computer science fundamentals, I was interested to see how specific data structures apply to iOS development, starting this week with one of the most common: the queue.
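As a teaser, here is a minimal generic queue; the two-stack trick keeps dequeue amortized O(1), where Array’s removeFirst() would be O(n):

```swift
// A minimal generic FIFO queue backed by two stacks.
struct Queue<Element> {
    private var enqueueStack: [Element] = []
    private var dequeueStack: [Element] = []

    var isEmpty: Bool { enqueueStack.isEmpty && dequeueStack.isEmpty }

    mutating func enqueue(_ element: Element) {
        enqueueStack.append(element)
    }

    mutating func dequeue() -> Element? {
        if dequeueStack.isEmpty {
            // Reverse once; each element moves at most one time.
            dequeueStack = enqueueStack.reversed()
            enqueueStack.removeAll()
        }
        return dequeueStack.popLast()
    }
}

var line = Queue<String>()
line.enqueue("first")
line.enqueue("second")
print(line.dequeue() ?? "empty") // "first"
```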
When I started this blog in 2012, it was at first to share solutions to technical problems I encountered in my daily work, to give back to the community. Over the years, I extended the content to other projects and ideas I had. Nowadays, I get more and more feedback on it, sometimes good, sometimes bad; either way, there is always something to learn from it.
Last year, I shared a solution to tackle A/B testing on iOS in Swift. Now that we have SwiftUI, I want to see if there is a better way to implement A/B testing. Starting from the same idea, I’ll share different implementations to find the best one.
For quite some time now, I’ve been developing an interest in data analysis to find new ways to improve mobile apps. I recently found some time to experiment with natural language processing for a very specific use case related to my daily work: sentiment analysis of customer reviews of fashion items.
With SwiftUI recently introduced, I was curious whether we could take advantage of SwiftUI previews to speed up localization testing and make sure your app looks great in any language.
Introduced in 2019, SwiftUI is Apple’s declarative UI framework that made UI implementation much simpler. After some time experimenting with it, I’m wondering today whether MVVM is still the best pattern to use with it. Let’s see what has changed by implementing MVVM with SwiftUI.
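A minimal sketch of the MVVM shape in SwiftUI, with a hypothetical search screen: the view model is an ObservableObject, and the view re-renders whenever a @Published property changes.

```swift
import SwiftUI

final class SearchViewModel: ObservableObject {
    @Published var query = ""
    @Published private(set) var results: [String] = []

    private let catalog = ["Espresso", "Latte", "Flat White"]

    func search() {
        results = catalog.filter { $0.localizedCaseInsensitiveContains(query) }
    }
}

struct SearchView: View {
    @ObservedObject var viewModel = SearchViewModel()

    var body: some View {
        VStack {
            TextField("Search", text: $viewModel.query, onCommit: viewModel.search)
            List(viewModel.results, id: \.self) { Text($0) }
        }
    }
}
```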
When asked about data structures and algorithms for an iOS development role, there is always this idea that the knowledge isn’t needed. Swift already has native data structures, right? Isn’t the rest only UI components? That’s definitely not true. Let’s step back and discuss data structures and algorithms applied to iOS development.
For the last couple of years, I’ve been experimenting with different architectures to understand the pros and cons of each one. Redux is definitely one that piqued my curiosity. In this new post, I’ll share my findings pairing Redux with MVVM, another pattern I’m familiar with, and more importantly why you probably shouldn’t pair them.
There is a belief that any software developer must contribute or have a side project to work on. Even if it’s great to have one, I think there is something bigger at stake.
Over time, any codebase grows as the project evolves and matures. This creates two main constraints for developers: keeping the code well organized while keeping build times as low as possible. Let’s see how a modular architecture can fix that.
I have been interested in analytics tools for a while, especially applied to mobile development. Over time, I’ve seen many coding mistakes when implementing an analytics solution. Some of them can easily be avoided when the developer has the right insights; let’s see how.
Since Xcode 7, iOS developers can generate code coverage for their app: a report showing which areas of the app are covered by unit tests. However, it isn’t always accurate; let’s see why you should not judge your code’s health by coverage alone.
For a while, I had wanted to create something helpful to others, not just another random app. Then I found out there were not many great sobriety apps, so I launched one. Here is Appy, to help you quit your bad habits.
With iOS 13, Apple is introducing “Sign in with Apple,” an authentication system that allows users to create an account for your app based on their Apple ID. Let’s see how to integrate it into your app and be ready for the iOS 13 launch.
I have been a bit quieter for the past couple of weeks, taking a break from my weekly blogging routine. It’s not because I was lazy: I wanted to take time to digest WWDC. At the same time, I had other projects running, one of which was my first talk at an iOS meetup. Here are a couple of tips I would have loved to hear earlier.
One debate over the past year in the iOS ecosystem was around functional reactive frameworks like RxSwift or ReactiveCocoa. This year at WWDC 2019, Apple took a position on it and released its own functional reactive programming framework: here is Combine.
I was recently asked to review an iOS application to see how healthy the codebase was, whether it follows best practices, and how easy it would be to add new features to it. While I review code for small pull requests on a daily basis, analyzing a whole app at once is quite a different exercise. Here are some guidelines to help with that analysis.
After weeks of experimenting with different patterns and code structures, I wanted to go further into functional reactive programming and see how to take advantage of it while following the Coordinator pattern. This post describes how to integrate RxSwift with the Coordinator pattern and which mistakes to avoid.
If you are not familiar with it, Redux is a JavaScript open-source library designed to manage web application state. It helps a lot to make sure your app always behaves as expected and makes your code easier to test. ReSwift brings the same concept to Swift. Let’s see how.
We often talk about the scalability of iOS apps but not much about the project itself or the team. How do you prepare your project to move from 2 developers to 6? How about 10 or 20 more? Researching this, I’ve listed different tools to prepare your team and project to scale.
For the past few months, I’ve kept going further with RxSwift. I really like the idea of forwarding events through different layers, but the user interface sometimes remains a challenge. Today, I’ll describe how to use RxDataSources to keep things as easy as possible.
Even if I usually stay focused on the customer-facing side of mobile development, I like the idea of writing backend APIs with all the safety that Swift includes. Starting small, why not use server-side Swift for our UI tests to mock content and stay as close as possible to the real app?
I love developing new iOS apps and creating new products. However, regardless of the project, it often needs a team to mix the required skills: design, coding, marketing. This is less and less true, though, so let’s see how to bootstrap your iOS app.
Not that long ago, I wrote about how to pair RxSwift with an MVVM architecture in an iOS project. Even though I refactored my code to be reactive, I neglected to mention the unit tests. Today I’ll show, step by step, how to use RxTest to unit test your code.
For years now, the whole iOS community has written content about the best way to improve or replace the Apple MVC we all started with, myself included. MVC, MVVM, MVP, VIPER? Regardless of the type of snake you have chosen, it’s time to reflect on that journey.
After introducing how to implement the Coordinator pattern with an MVVM structure, it feels natural to go further and cover some of the Coordinator pattern’s blind spots and how to fix them along the way.
A couple of weeks ago, I heard somebody talking about A/B testing on iOS and how “mobile native A/B testing is hard to implement.” It didn’t sound right to me. So I built a tiny framework for that in Swift. Here is Reversi.
I was recently researching onboarding journeys on iOS, that succession of screens displayed at the first launch of a freshly installed mobile app. But regardless of how beautiful the design can be, why are so many people tempted to skip it? I listed things to consider while creating an onboarding journey for your iOS app.
After some time creating different iOS apps following an MVVM pattern, I’m often not sure how to implement the navigation. If the View handles the rendering and the user’s interactions, and the ViewModel the service or business logic, where does the navigation sit? That’s where the Coordinator pattern comes in.
Last year, I launched Japan Direct with a friend: an itinerary app for Japan travellers. Even though the first version came together quite quickly, I kept iterating, always staying focused on customer feedback first. Almost a year later, it’s a good time for a synthesis, to see what worked and how we created a customer-focused app.
In iOS 8, Apple introduced trait variations, which let developers create more adaptive designs for their mobile apps, reducing code complexity and avoiding duplicated code between devices. But how do you take advantage of variations for UICollectionView?
This post will cover how to set up variations via Interface Builder but also programmatically, using Auto Layout and UITraitVariation with a UICollectionView to create a unique adaptive design.
For the last couple of weeks, I’ve worked a lot on integrating RxSwift into an iOS project, but I wasn’t fully satisfied with the view model. After reading a lot of documentation and experimenting on my side, I’ve finally found a structure I’m happy with.
Since WWDC18, Apple has made it way easier for developers to create machine learning models and integrate them into iOS apps. I have tried different models in the past: one for face detection, and another built with TensorFlow for fashion classification during a hackathon. Today I’ll share with you how I created a model dedicated to fashion brands.
It took me quite some time to get into reactive programming and its variant adapted for iOS development with RxSwift and RxCocoa. However, being a fan of the MVVM architecture and using an observer design pattern with it, it was natural for me to revisit my approach and use RxSwift instead. That’s what I’m going to cover in this post.
The delegation pattern is one of the most common design patterns in iOS. You probably use it on a daily basis without noticing, every time you create a UITableView or UICollectionView and implement their delegates. Let’s see how it works and how to implement it in Swift.
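To make the shape concrete, here is a minimal sketch with hypothetical types, mirroring what UIKit’s own delegates do:

```swift
protocol BasketDelegate: AnyObject {
    func basket(_ basket: Basket, didAdd item: String)
}

final class Basket {
    // weak to avoid a retain cycle between the basket and its owner.
    weak var delegate: BasketDelegate?
    private(set) var items: [String] = []

    func add(_ item: String) {
        items.append(item)
        delegate?.basket(self, didAdd: item)
    }
}

final class CheckoutScreen: BasketDelegate {
    let basket = Basket()

    init() { basket.delegate = self }

    func basket(_ basket: Basket, didAdd item: String) {
        print("\(item) added: \(basket.items.count) item(s) in total")
    }
}

let screen = CheckoutScreen()
screen.basket.add("Margherita") // "Margherita added: 1 item(s) in total"
```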
Part of the journey in software development is testability. In mobile development, testability for your iOS app goes through UI testing. Let’s see different ways to inspect any UI element and prepare your iOS app for UI automation testing.
While wishing people around me a happy new year, they helped me realise how many good things happened to me this year. Funnily enough, while listing my goals for 2019, I found the matching list for 2018, and here is what really happened.
From my first year studying computer science, I’ve always wanted to do more in my free time and create simple projects that could be useful to others. I won’t lie: I wish I had been able to monetize them, but regardless of the outcome, learning was always part of the journey.
This year, I have blogged quite a bit about code architecture in Swift, and I’ve realized that I didn’t explain much about which design patterns to use with it. In a series of coming posts, I will cover different design patterns, starting now with the observer.
For a while now, I had really wanted to work on a machine learning project, especially since Apple now lets you import trained models into your iOS app. Last September, I took part in a 24-hour hackathon for an e-commerce business; that was my chance to test it. The idea was simple: a visual search app, listing similar products based on a picture.
It has been a couple of months since my last post, and despite my intentions, a lot of things kept me busy and away from blogging. Looking back, it all revolves around the same idea: why it’s important to always keep your skills sharp.
A couple of months ago, I built an app and released it on the App Store. Since publishing it, I really wanted to see how it lives and understand how to make it grow. Ideally, I wanted to know if there is a product/market fit. In this article, I describe the steps and ideas that helped my app grow and what I learnt from it.
Most mobile apps interact at some point with remote services, fetching data from an API, submitting a form… Let’s see how to use Codable in Swift to easily encode objects and decode JSON in a couple of lines of code.
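A minimal sketch with a hypothetical payload; the same type both decodes and encodes:

```swift
import Foundation

struct Dish: Codable {
    let id: Int
    let name: String
    let price: Double
}

let json = Data("""
[{"id": 1, "name": "Ramen", "price": 11.5}]
""".utf8)

// Decoding a hypothetical /menu endpoint response.
let dishes = try JSONDecoder().decode([Dish].self, from: json)
print(dishes[0].name) // "Ramen"

// Encoding works the same way in reverse.
let body = try JSONEncoder().encode(dishes)
```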
I recently went to a Swift conference, and UI automation testing was one of the subjects. I’ve mentioned it with Appium in the past, but I think it’s time to go back to it and explain why I still prefer using Apple’s testing framework instead.
I recently implemented 3D Touch for an app, and I was very interested in home screen quick actions. Even if they can be a good way to improve the user experience, it doesn’t mean your app always needs them. In this article, I explain how to add home screen shortcuts for your app in Swift, but mostly what can justify implementing them.
I recently realised that my first blog post was 6 years ago. It’s a good occasion for me to do a little retrospective and share what I learnt from blogging over the years.
If you care about user experience, error handling is a big part you have to cover. We can design what a mobile app looks like when it works, but what happens when something goes wrong? Should we display an alert to the user? Can the error stay silent? And above all, how do we best implement it with our current design pattern? Let’s see our options while following the MVVM pattern.
The best way to learn and become more creative as a developer is to focus on a side project. A really good friend, just back from Japan, came to me with an idea right when I needed that side project. This is how we created Japan Direct, from the idea to the App Store in almost no time.
For the last couple of weeks, I’ve tried to step back from my development work to analyse what is time-consuming in mobile development. I realised that most new views are based on the same approach, reimplementing a similar structure around a UICollectionView or UITableView.
What if I could have a more generic approach where I focus only on what matters: the user experience? That’s what I tried to explore in this article.
For the last couple of weeks, I traveled with only my iPhone, and I realised how many apps I use daily still rely on their websites. Even with the right iOS app installed, I had to browse in Safari to get specific details. That is why it’s so important to support universal links on iOS. Let me show you how.
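On the app side, assuming the Associated Domains entitlement and the apple-app-site-association file are already in place, the handling is a single delegate method (URL and class names are illustrative):

```swift
import UIKit

final class AppDelegate: UIResponder, UIApplicationDelegate {
    // Called for links such as https://example.com/orders/42.
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL else {
            return false
        }
        // Route in-app instead of bouncing the user to Safari.
        print("Universal link:", url.path)
        return true
    }
}
```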
Enumerations have changed a lot between Objective-C and Swift. We can easily forget how useful and powerful they can be. I wanted to get back to them through simple examples to make the most of them.
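For instance, associated values and exhaustive switch statements are two things Objective-C’s integer-backed enums never offered (a small sketch with hypothetical cases):

```swift
enum PaymentMethod {
    case cash
    case card(last4: String)
    case voucher(code: String, amount: Double)
}

func label(for method: PaymentMethod) -> String {
    // The switch must cover every case; adding a new one
    // becomes a compile-time error until it's handled.
    switch method {
    case .cash:
        return "Cash"
    case .card(let last4):
        return "Card ending in \(last4)"
    case .voucher(let code, let amount):
        return "Voucher \(code) (\(amount) off)"
    }
}

print(label(for: .card(last4: "4242"))) // "Card ending in 4242"
```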
Firebase is a set of tools introduced by Google to build better mobile apps. I’ve worked with it many times, and even if it’s straightforward to integrate, here are a couple of implementation tips to make the most of it.
I recently followed a growth marketing course introducing the mindset and methodology to make a company grow. I learnt a lot from it, and since then I’ve tried to apply this knowledge on a daily basis. On further reflection, a lot of the ideas looked very similar to the software development job; this is the part I would like to share.
When I started coding years ago, it was all about object-oriented programming. With Swift, a new approach came up that makes code even easier to reuse and to test: protocol-oriented programming.
If you have an iOS app, you might have integrated external libraries and tools to help you get your product ready faster. However, your iOS architecture and Swift code shouldn’t depend on those libraries.
The best part of continuous integration is the ability to automatically run tests and build apps, ready to be deployed. However, an automatic build doesn’t mean a smart or optimised build. Here are some tips I collected along the way to speed up the delivery process.
To be sure new code won’t break what’s already implemented, it’s best practice to write unit tests. When it comes to app architectures, it can be a challenge to write those tests. Following an MVVM pattern, how do you unit test a view and its ViewModel? That’s what I would like to cover here using dependency injection.
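The core idea, in a minimal sketch with hypothetical types: the ViewModel depends on a protocol rather than a concrete service, so the test injects a stub.

```swift
import XCTest

protocol WeatherService {
    func temperature(for city: String) -> Double
}

final class WeatherViewModel {
    private let service: WeatherService

    // The dependency is injected, not created internally.
    init(service: WeatherService) { self.service = service }

    func headline(for city: String) -> String {
        "\(city): \(service.temperature(for: city))°C"
    }
}

// A deterministic stub: no network, no flakiness.
final class StubWeatherService: WeatherService {
    func temperature(for city: String) -> Double { 21.0 }
}

final class WeatherViewModelTests: XCTestCase {
    func testHeadlineUsesInjectedService() {
        let viewModel = WeatherViewModel(service: StubWeatherService())
        XCTAssertEqual(viewModel.headline(for: "Lyon"), "Lyon: 21.0°C")
    }
}
```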
Creating a new app often raises the question of which architecture to choose and which pattern would fit best. In this post, I show how to implement an MVVM pattern around a sample app in Swift.
In 2017, I managed to run about 750 miles (1,200 km), which is 250 miles more than the year before. I know it because Strava tracked it for me. I’m such a fan of their product that using it has become part of my routine and my training. However, during that journey, I always missed numbers that spoke to me. That is how I created Kronos.
Starting a new year is always exciting. Most of us have new resolutions and a bucket list we want to accomplish in 2018, but quite often, as soon as something goes wrong, the whole list goes wrong. Here is some advice for keeping on track.
For the last couple of months, I observed the Today extensions of some of the iOS apps I use daily, to see when those widgets are useful and what justifies developing one. Here are my conclusions.
With iOS 11, Apple introduced the ability to integrate machine learning into mobile apps with Core ML. As promising as it sounds, it also has some limitations; let’s discover them with a face detection sample app.
I’ve always thought a good way to stay motivated and look forward is to have goals you can accomplish in the short term, about 3 to 12 months maximum. At least, that’s the way I’ve dealt with my life since graduating.
Embedding web content into native apps is a frequent approach to quickly add content to a mobile app. It can be for a contact form, but also for more complex content to bootstrap a missing native feature. But you can go further and build a two-way bridge between web and mobile using JavaScript and Swift.
Most apps use HTTPS requests to access data, and because of SSL encryption, it can be tough to debug them in iOS apps that are already on the App Store. Charles is the perfect tool to help you inspect your HTTPS requests.
Libraries and external dependencies have always been a good way to avoid recreating something that already exists. They’re also a good way to help each other and leave something reusable behind. CocoaPods is the most-used tool to manage dependencies around Xcode projects. Let’s see how to create your own private pod.
Starting 2017, I decided that this year would be mine. It doesn’t mean everything would be handed to me, but I would stay open to new opportunities and remain the actor of my own life, be what I want to be. Halfway through, it’s time for reflection.
Configuring continuous integration can be tricky for mobile apps. Let’s see how quick it is to build an Android app with Bitbucket Pipelines and deliver it with App Center (formerly HockeyApp).
Recently, I got a reminder that my domain name and shared hosting would eventually expire this summer. I had always used WordPress for my website and thought it was time to move on to something easier to maintain. Here is how I managed to migrate my WordPress blog to a static website with Hugo on AWS.
This year, I finally signed up for a marathon, and the way I use running apps and their services has clearly changed. Providing the best user experience around those services is essential to making the app useful. Here is my feedback as a mobile developer during my last 10 weeks of training.
Technology has never been as important in politics as it is today. Everything is related to digital data. If we only analyze the news around the 2016 US elections, it was mostly about email hacks, fake news in daily news feeds, or online surveys. Concerned about the 2017 French elections, I wanted to be a bit more active and do something related to the last one: online surveys.
In my current role at Qudini, I started as an iOS developer. My main task was to create and improve our mobile products for iOS devices based on what was already done on Android. However, I wanted to be more effective in my job, and I thought I could be by impacting more users through Android development. Once our iOS apps were at the same level as the Android ones, I pushed the idea that it would be better if I started doing Android too. Here is my feedback after 6 months of developing on Android.
Recently, I got the chance to integrate feature flags into a mobile app I work on. The idea of a feature flag is simple: it lets you enable and manage features in your mobile app remotely, without requiring a new release. Let’s see the benefits and how to integrate a feature flag solution like Apptimize’s.
A couple of months ago, I tried to set up a mobile testing environment with Appium, and one of the best tools to execute these tests was SauceLabs, a cloud platform dedicated to testing. SauceLabs is pretty easy to use, but here are a couple of tricks to make it even easier.
Continuous integration and continuous delivery are something I’ve wanted to do for a while, especially since Apple accelerated its approval process for publishing new apps on its store. It can now take less than a day to have an update available to your mobile users: continuous integration and continuous delivery make more sense than ever for mobile apps.
Working as a mobile developer, I created multiple apps over the last couple of years for the companies I worked for, and eventually for personal projects. At the beginning, I thought the goal for any developer was the release itself: shipping code and moving on. But I quickly found out that it was more frustrating than anything to stop there. That’s how I started thinking about what the next step should be, and whether a developer can actually do marketing, and how.
I recently finished Growth Hacker Marketing by Ryan Holiday and learnt a lot from it. Some of it reminded me of the way I found my job in London and how I tweaked my LinkedIn profile to fit the target audience.
Sens’it is a small tracker developed by Sigfox, given away for free during events to let people test the Sigfox low-frequency IoT network. Let’s see how to create an iOS app in Swift based on the Sens’it API.
A couple of years ago, I worked on a mobile app involving video and audio recording. I quickly saw that, once the user has agreed to the permissions, it can be easy to track personal data without the user noticing. Let’s see how to limit mobile app permissions to maintain user privacy.
Appium is a UI automation testing framework helping developers test their apps automatically. This tool can be really powerful, but my experience with it leads me to think it’s not accurate enough to be used every day and at its full potential.
During WWDC 2015, Apple announced big things, but they also released awesome features for developers. One of them was dedicated to UI testing. Having worked with UI Automation tests, I’ve just discovered the latest Xcode 7 and how much easier life is going to be with this new feature.
Recently I worked on a small iOS project around JavaScript. I wanted to load web content with JavaScript inside from iOS and get callbacks from JavaScript into iOS, to save native data and transmit it to another controller if needed. The second part was also to call JavaScript methods from the iOS side.
A few years ago, Philips created Ambilight, a TV with dynamic lights on its back. With two friends, we wanted to design an app with a similar function based on connected light bulbs during a hackathon. Here is what we built in 24 hours of code; meet AmbiMac.
HealthKit is a powerful tool if you want to create an iOS app based on health data. However, it’s not only for body measurements, fitness, or nutrition; it also covers sleep analysis. In this HealthKit tutorial, I will show you how to read and write sleep data and save it in the Health app.
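As a taste of the writing side, here is a minimal sketch, assuming the HealthKit entitlement is configured and authorization succeeds (the eight-hour window is illustrative):

```swift
import HealthKit

let store = HKHealthStore()
let sleepType = HKCategoryType.categoryType(forIdentifier: .sleepAnalysis)!

// Ask permission, then save last night's "asleep" window.
store.requestAuthorization(toShare: [sleepType], read: [sleepType]) { granted, _ in
    guard granted else { return }

    let end = Date()
    let start = end.addingTimeInterval(-8 * 60 * 60) // 8 hours ago
    let sample = HKCategorySample(
        type: sleepType,
        value: HKCategoryValueSleepAnalysis.asleep.rawValue,
        start: start,
        end: end)

    store.save(sample) { success, error in
        print(success ? "Sleep saved" : "Failed: \(String(describing: error))")
    }
}
```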
UPDATE, April 2020: Originally written for Swift 1.0, then 2.0, this post has been updated for the latest Swift 5.1 and Xcode 11.3.
I work with CodeIgniter almost exclusively for APIs, but sometimes it can help with short-lived websites. URL rewriting is a good thing to know if you want to optimize SEO for the key pages of a website. That’s what I want to show you, along with how easy it is to set up.
Pour la fin de mes études, j’ai choisi de rédiger mon mémoire sur les objets connectés et plus précisément sur le développement de services numériques autour de ces objets. Ce travail de fond m’a permis de prendre du recul sur mon travail mais c’était aussi l’occasion de trouver une définition de ce qu’est un développeur d’objet connecté.
En Octobre dernier, j’avais travaillé sur le cocktailMaker, un objet connecté facilitant la création de cocktails. Voulant pousser le concept un peu plus loin, je me suis inscrit au startup weekend de Novembre organisé à l’EM Lyon pour découvrir les aspects marketing et business qui me manque aujourd’hui. Retour sur ces 54h de travail acharné.
Ces temps ci, il y a beaucoup de bruits autour des objets connectés. Tous les jours, on découvre de nouveaux articles sur des objets connectés annoncés sur le marché ou financés sur des plateformes de “crowdfunding”. On a bien moins d’informations sur toutes les difficultés liées autour de ces projets innovants. Voici mes conclusions sur les recherches que j’ai faites à ce sujet.
L’année dernière à cette même période, j’ai participé au Fhacktory, ce hackathon nouvelle génération né à Lyon, avec une application mobile dédiée à la chute libre. Cette année, j’ai pu à nouveau monter sur le podium de cet évènement en développement un objet connecté, le CocktailMaker. Retour sur ce week-end 100% hack.
Sur la place des objets connectés, Jawbone est rapidement devenu un pilier du “quantified-self” (auto-mesure) avec ses bracelets UP et UP24. Je vous propose un décryptage des leurs dernières évolutions afin de rester à la pointe du “wearable”.
More and more smartwatches are appearing but, in my view, most of them miss the essential point: the watch remains one of the few men's accessories, so it must be made elegant while respecting its historical shape. That's why this article focuses mainly on "dress" watches and, while waiting for Apple's, offers a comparison between Motorola's smartwatch and the freshly announced one from Withings.
Not wanting to limit myself to my technical background, I'm increasingly trying to build entrepreneurial skills, so as to be more useful in my technical analysis and to keep thinking through the development of applications within a startup. The idea is not to stop at the development that was requested, but to grasp the whole chain of reasoning: from the customer's need through to the use of the newly developed service or product, observing how it is used and what needs improving.
To that end, and on the sound advice of a friend, Maxime Salomon, I started reading The Lean Startup by Eric Ries. The book covers many topics around entrepreneurship and marketing, as well as product development proper. The idea is an iterative development cycle that lets you quickly measure various parameters and evolve a product based on new data.
Coming from a rather scientific background, I need to put ideas into practice to properly understand a proposed solution, and I also need to read up on the terminology so as not to miss the point. That's why I'm taking my time with this book, but here is my feedback on what I've learned so far and how I'm trying to put it into practice.
Every day we discover more and more connected objects, spread across categories such as health, music, lighting, and so on. A good share of them are activity trackers, like the Jawbone UP wristband. Curious about how these so-called wearables actually perform, I offer my hands-on review of the UP24 wristband and the services built around it.
SoundCloud is one of the biggest independent music platforms: a social network built on music sharing, with more than 200 million users. Some artists publish their music only on this platform. It's also the place for newcomers who want to try out their tracks and get known. You'll find speeches, podcasts, and all other kinds of audio content there too.
In this spirit of always having good music at hand, SoundCloud is available on every platform (web and mobile) and listening is free. To make its service even more versatile, SoundCloud offers an API as well as numerous SDKs (JavaScript, Ruby, Python, PHP, Cocoa, and Java). Let's walk through how to integrate SoundCloud into an iPhone app.
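As a flavor of the integration, here is a minimal sketch of querying SoundCloud's HTTP track-search API directly from Swift. The original post used the Cocoa SDK; the endpoint shape and the YOUR_CLIENT_ID placeholder are assumptions based on the historical public API:

```swift
import Foundation

// Sketch of a direct call to the (historical) SoundCloud HTTP API.
// YOUR_CLIENT_ID stands in for a registered app's client ID.
func searchTracks(matching query: String) {
    var components = URLComponents(string: "https://api.soundcloud.com/tracks")!
    components.queryItems = [
        URLQueryItem(name: "q", value: query),
        URLQueryItem(name: "client_id", value: "YOUR_CLIENT_ID")
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, error in
        guard let data = data else {
            print("Request failed:", error ?? "unknown error")
            return
        }
        // Each track is a JSON object with fields such as "title".
        let tracks = try? JSONSerialization.jsonObject(with: data)
        print(tracks ?? "No parseable response")
    }.resume()
}
```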
Interviewing for a job is always a bit stressful. Depending on how that stress is handled, a candidate can come across as unsure of themselves through their gestures (trembling, stammering, hand-wringing) or their words (unfinished sentences, overly long and convoluted phrasing, and so on). In those conditions it's hard to present your best self and show that you're hardworking, motivated, and ready for the job.
Based on my own experience, here are a few simple tips.
After working with Deezer's technologies, let's see what tools Spotify offers for web or mobile integration. Spotify offers free listening on its desktop client and, more recently, on mobile (sprinkled with ads), which sets it apart from Deezer, which requires a Premium account on smartphones. Developer integration differs as well, but to what extent? That's what we're going to find out.
Connected objects are more and more present in our homes. They include products such as light bulbs, audio speakers, and smart plugs, as well as more innovative products like the Withings smart scale, the Sphero ball, the "holî" connected lamp, and Parrot's plant sensor.
It's in this context that the company Direct Energie organized a hackathon around connected objects, showcasing solutions that combine energy management and smart devices.
I took part as technical support for the "holî" product and its SDK, helping developers get up to speed with the tool. Having already attended a hackathon as a developer, this is a new write-up, this time from the partner's side.
Video games are everywhere these days. With smartphones, it's easier than ever to carry games with us wherever we go.
Several games have been so successful that it's hard to ignore this use of our phones as game consoles. To name just a few: DoodleJump, AngryBird, and the famous CandyCrush.
Since iOS 7, Apple has shipped a 2D game framework directly in its SDK: SpriteKit. Let's walk through how to use it.
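As a preview of the kind of code involved, here is a minimal SpriteKit scene, a sketch rather than the post's actual example, that draws one sprite and moves it to wherever the player taps:

```swift
import SpriteKit

// A minimal SpriteKit scene: one sprite that glides to each tap location.
final class GameScene: SKScene {
    private let player = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))

    override func didMove(to view: SKView) {
        backgroundColor = .black
        player.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(player)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // Actions are SpriteKit's declarative animation primitive.
        player.run(SKAction.move(to: location, duration: 0.3))
    }
}

// Presenting the scene from a view controller's viewDidLoad:
//   let skView = SKView(frame: view.bounds)
//   view.addSubview(skView)
//   skView.presentScene(GameScene(size: skView.bounds.size))
```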
A hackathon is the software-development equivalent of a marathon. Popularized through the "Startup Weekend" format, the principle has been adapted in software to building a project in a fixed amount of time. The goal is to assemble a team over a weekend, develop an idea, and propose a solution to a problem. I recently took part in one of them, the Fhactory, a hackathon that bills itself as "100% hack, 0% bullshit"; here is my write-up.
Deezer being one of the biggest music listening and sharing platforms, it's worth seeing how to use the various tools it provides, namely its track-search API and its various SDKs for web or mobile integration.
We'll look at how to use them, for what purposes, and what their limits are. For the SDK, I'll only cover the iOS one.
When I launched the Weather web portal, my idea was to make it the backend for a mobile version. The whole point of weather data is to be mobile and follow its user. By adding various notions specific to skydiving, and with the help of the Fédération Française de Parachutisme, here is iJump: the mobile app for skydivers.
Six months ago, I started a training program to become an instructor in Cocoa and Objective-C.
The program consisted of several stages, each ending with an exam required to move on to the next:
Here are my various takeaways from my first experience as a trainer.
Introduction:
Sencha is an HTML5 framework for building cross-platform mobile applications. Its appeal is that, from a single HTML project and JSON code, it produces the same mobile app on several platforms, an incredible time saver if the code plays along. We'll walk through the first steps of an app built with Sencha.
The new Windows 8 operating system goes hand in hand with the update to its mobile system: Windows Phone 8.
Here is a short introduction to the MVVM Light Toolkit, a set of components based on a Model-View-ViewModel structure for the XAML/C# frameworks, which can be used for Windows Phone 8 development.
Context:
Having recently been introduced to skydiving, I quickly learned that the discipline is heavily weather-dependent.
Unfortunately, finding near-real-time weather for your drop zone is no easy task. Even 10 km from the drop zone, the weather can differ enough to matter for skydiving.
That's why I decided to build a web portal for checking the latest weather report, no more than 12 hours old, for any drop zone in France.
Introduction:
An ORM (object-relational mapping) is used in object-oriented programming to virtually build a model on top of a database. It saves you from writing database queries yourself, a real time saver.
Introduction
Syncing data online is common practice for keeping content up to date on every use (news and information apps, among others).
Finding a simple way to embed this data ahead of an online sync is worthwhile, since it lets the app be used even when its data isn't current.
Working in Objective-C on mobile applications for iPhone/iPad, we'll see how to use RestKit for this purpose.
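The RestKit specifics aside, the underlying "embed a seed, sync later" idea can be sketched in a few lines of modern Swift; the Article type, the seed file name, and the endpoint URL below are illustrative assumptions, not the post's code:

```swift
import Foundation

// Sketch of the "seed then sync" pattern (the post itself uses RestKit).
struct Article: Codable {
    let id: Int
    let title: String
}

// Load the data snapshot shipped inside the app bundle.
func loadSeedArticles() -> [Article] {
    guard let url = Bundle.main.url(forResource: "seed_articles", withExtension: "json"),
          let data = try? Data(contentsOf: url) else { return [] }
    return (try? JSONDecoder().decode([Article].self, from: data)) ?? []
}

// Prefer freshly synced data; fall back to the bundled seed when offline.
func currentArticles() async -> [Article] {
    guard let url = URL(string: "https://example.com/api/articles"),
          let (data, _) = try? await URLSession.shared.data(from: url),
          let fresh = try? JSONDecoder().decode([Article].self, from: data) else {
        return loadSeedArticles()
    }
    return fresh
}
```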
After finishing my first year of computer science studies, I had the idea of building a website as a first professional experience on my own account.
Ideas under consideration:
After a few ideas and some sound advice from a young entrepreneur, I decided to go into tourism, more precisely the open-air accommodation (campsite) sector.
Indeed, this sector was barely covered online even though the number of camping holiday bookings kept growing.
As a web developer, you often work on several projects at once and keep old projects around without deleting them.
With MAMP on Mac OS X, you have to enter the exact URL of a folder to reach its website; by default there is no page indexing the folders inside your development directory.
That's where I got the idea to build a small PHP portal listing the folders in my development directory, so I no longer have to remember each project's name and exact path.
URL rewriting lets you "transform" URLs to reference the key pages of a website more cleanly. It relies on the htaccess file, a hidden file located at the root of the application folder.
We'll look at how CodeIgniter handles URLs by default and how to modify them without losing the search rankings a website has already earned.
CodeIgniter is an open source PHP framework based on an MVC architecture.
Reminder:
The MVC (Model-View-Controller) architecture makes it simpler to organize an application.
A framework is a kit for building the foundation of an application faster and with a more solid structure.
Overview:
CodeIgniter has the advantage of being free, and above all of being lighter than other well-known PHP frameworks. It comes with a very complete user guide (online on the official site and locally in the downloaded folder) offering many example applications. Setup is intuitive, and no configuration is needed for basic use.
The View Transitions API is more a set of features than it is about any one particular thing. And it gets complex fast. But in this post, we’ll cover a couple ways to dip your toes into the waters without having to dive in head-first.
Toe Dipping Into View Transitions originally published on CSS-Tricks.
I know, super niche, but it could be any loop, really. The challenge is having multiple tooltips on the same page that make use of the Popover API for toggling goodness and CSS Anchor Positioning for attaching a tooltip to its respective anchor element.
Working With Multiple CSS Anchors and Popovers Inside the WordPress Loop originally published on CSS-Tricks.
My thesis for today's article offers further reassurance that inline conditionals are probably not the harbinger of the end of civilization: I reckon we can achieve the same functionality right now with style queries, which are gaining pretty good browser support.
The What If Machine: Bringing the “Iffy” Future of CSS into the Present originally published on CSS-Tricks.
A while back on CSS-Tricks, we shared several ways to draw hearts, and the response was dreamy. Now, to show my love, I wanted to do something personal, something crafty, something with a mild amount of effort.
Handwriting an SVG Heart, With Our Hearts originally published on CSS-Tricks.
Adam’s such a mad scientist with CSS. He’s been putting together a series of “notebooks” that make it easy for him to demo code. He’s got one for gradient text, one for a comparison slider, another for accordions…
Scroll Driven Animations Notebook originally published on CSS-Tricks.
We’ve been able to get the length of the viewport in CSS since… checks notes… 2013! Surprisingly, that was more than a decade ago. Getting the viewport width is as easy these days as writing 100vw, but …
Typecasting and Viewport Transitions in CSS With tan(atan2()) originally published on CSS-Tricks.
I enjoy organizing code and find cascade layers a fantastic way to organize code explicitly as the cascade looks at it. The neat part is that, as much as they help with “top-level” organization, cascade layers can be nested, which allows us to author more precise styles based on the cascade and inheritance.
Organizing Design System Component Patterns With CSS Cascade Layers originally published on CSS-Tricks.
Stationery Pad is a handy way to nix a step in your workflow if you regularly use document templates on your Mac. The long-standing Finder feature essentially tells a file’s parent application to open a copy of it by default, ensuring that the original file remains unedited.
Make Any File a Template Using This Hidden macOS Tool originally published on CSS-Tricks.
A little gem from Kevin Powell's "HTML & CSS Tip of the Week" website, reminding us that using container queries opens up container query units for sizing things based on the size of the queried container.
Container query units: cqi and cqb originally published on CSS-Tricks.
The steps for how I took the Baseline Status web component and made it into a WordPress block that can be used on any page or post.
Baseline Status in a WordPress Block originally published on CSS-Tricks.
Are partials the only thing keeping you writing CSS in Sass? With a little configuration, it's possible to compile partial CSS files without a Sass dependency. Ryan Trimble has the details.
Compiling CSS With Vite and Lightning CSS originally published on CSS-Tricks.
Did you see the release notes for Chrome 133? It's currently in beta, but the Chrome team has been publishing a slew of new articles with pretty incredible demos that are tough to ignore. I figured I'd round those up in one place.
Chrome 133 Goodies originally published on CSS-Tricks.
When using View Transitions you’ll notice the page becomes unresponsive to clicks while a View Transition is running. […] This happens because the ::view-transition pseudo-element – the one that contains all animated snapshots – gets overlayed on top …
Keeping the page interactive while a View Transition is running originally published on CSS-Tricks.
All of the things that the CSS Working Group would change if they had a time-traveling Delorean to go back and do things over.
The Mistakes of CSS originally published on CSS-Tricks.
The @view-transition at-rule has two descriptors. One is the commonly used navigation descriptor. The second is types, the lesser-known of the two, and one that probably envies how much attention navigation gets. But read on to learn why we need types and how it opens up new possibilities for custom view transitions when navigating between pages.
What on Earth is the `types` Descriptor in View Transitions? originally published on CSS-Tricks.
The beta versions of iOS 18.4, iPadOS 18.4, macOS 15.4, tvOS 18.4, visionOS 2.4, and watchOS 11.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.3.
As previewed last year, iOS 18.4 and iPadOS 18.4 include support for default translation apps for all users worldwide, and default navigation apps for EU users.
Beginning April 24, 2025, apps uploaded to App Store Connect must be built with Xcode 16 or later using an SDK for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
As of today, apps without trader status have been removed from the App Store in the European Union (EU) until trader status is provided and verified by Apple.
Account Holders or Admins in the Apple Developer Program will need to enter this status in App Store Connect to comply with the Digital Services Act.
You can now take advantage of upgraded security options when creating new token authentication keys for the Apple Push Notification service (APNs).
Team-scoped keys enable you to restrict your token authentication keys to either development or production environments, providing an additional layer of security and ensuring that keys are used only in their intended environments.
Topic-specific keys provide more granular control by enabling you to associate each key with a specific bundle ID, allowing for more streamlined and organized key management. This is particularly beneficial for large organizations that manage multiple apps across different teams.
Your existing keys will continue to work for all push topics and environments. At this time, you don’t have to update your keys unless you want to take advantage of the new capabilities.
For detailed instructions on how to secure your communications with APNs, read Establishing a token-based connection to APNs.
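For context on what a token-based APNs connection involves, here is a hedged sketch of constructing the ES256 provider token (JWT) from a .p8 signing key with CryptoKit; the key ID, team ID, and PEM contents are placeholders you'd supply from your own developer account:

```swift
import Foundation
import CryptoKit

// Sketch: build an APNs provider token (JWT signed with ES256).
// keyID, teamID, and p8PEM are placeholders for your own credentials.
func makeAPNsToken(p8PEM: String, keyID: String, teamID: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }
    // JWT header carries the key ID; claims carry the team ID and issue time.
    let header = try JSONSerialization.data(withJSONObject: ["alg": "ES256", "kid": keyID])
    let claims = try JSONSerialization.data(withJSONObject: [
        "iss": teamID,
        "iat": Int(Date().timeIntervalSince1970)
    ])
    let signingInput = base64URL(header) + "." + base64URL(claims)

    // Sign with the P-256 private key from the downloaded .p8 file.
    let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}
```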
Starting February 14, 2025, new regulatory requirements in South Korea will apply to all apps with offers and trials for auto-renewing subscriptions.
To comply, if you offer trials or offers for auto-renewing subscriptions in your app or game, additional consent must be obtained for your trial or offer after the initial transaction. The App Store will help obtain this consent by informing affected subscribers via email, push notification, and an in-app price consent sheet, asking your subscribers to agree to the new price.
This additional consent must be obtained from customers within 30 days from the payment or conversion date for:
Apps that do not offer a free trial or discounted offer before a subscription converts to the regular price are not affected.
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax and pricing updates for February
As of February 6:
Your proceeds from the sale of eligible apps and In‑App Purchases have been modified in:
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Azerbaijan and Peru.¹
As of February 24:
Pricing for apps and In-App Purchases will be updated for the Azerbaijan and Peru storefronts if you haven’t selected one of these as the base for your app or In‑App Purchase.² These updates also consider VAT introductions listed in the tax updates section above.
If you’ve selected the Azerbaijan or Peru storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription. Prices also won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by country or region
Set a price for an In-App Purchase
Beginning April 1:
As a result of last year’s change in Japan’s tax regulations, Apple (through iTunes K.K. in Japan) is now designated as a Specified Platform Operator by the Japanese tax authority. All paid apps and In-App Purchases (including game items, such as coins) sold by non-Japan-based developers on the App Store in Japan will be subject to the platform tax regime. Apple will collect and remit a 10% Japanese consumption tax (JCT) to the National Tax Agency JAPAN on such transactions at the time of purchase. Your proceeds will be adjusted accordingly.
Please note any prepaid payment instruments (such as coins) sold prior to April 1, 2025, will not be subject to platform taxation, and the relevant JCT compliance should continue to be managed by the developer.
For specific information on how the JCT affects in-game items, see Question 7 in the Tax Agency of Japan’s Q&A about Platform Taxation of Consumption Tax.
Learn more about your proceeds
¹ Translations of the updated agreement are available on the Apple Developer website today.
² Excludes auto-renewable subscriptions.
The Vietnamese Ministry of Information and Communications (MIC) requires games to be licensed to remain available on the App Store in Vietnam. To learn more and apply for a game license, review the regulations.
Once you have obtained your license:
If you have questions on how to comply with these requirements, please contact the Authority of Broadcasting and Electronic Information (ABEI) under the Vietnamese Ministry of Information and Communications.
In this edition: The latest on developer activities, the Swift Student Challenge, the team behind Bears Gratitude, and more.
Here’s the story of how a few little bears led their creators right to an Apple Design Award.
Bears Gratitude is a warm and welcoming title developed by the Australian husband-and-wife team of Isuru Wanasinghe and Nayomi Hettiarachchi.
Journaling apps just don’t get much cuter: Through prompts like “Today isn’t over yet,” “I’m literally a new me,” and “Compliment someone,” the Swift-built app and its simple hand-drawn mascots encourage people to get in the habit of celebrating accomplishments, fostering introspection, and building gratitude. “And gratitude doesn’t have to be about big moments like birthdays or anniversaries,” says Wanasinghe. “It can be as simple as having a hot cup of coffee in the morning.”
ADA FACT SHEET
Bears Gratitude
Download Bears Gratitude from the App Store
Wanasinghe is a longtime programmer who’s run an afterschool tutoring center in Sydney, Australia, for nearly a decade. But the true spark for Bears Gratitude and its predecessor, Bears Countdown, came from Hettiarachchi, a Sri Lankan-born illustrator who concentrated on her drawing hobby during the Covid-19 lockdown.
Wanasinghe is more direct. “The art is the heart of everything we do,” he says.
In fact, the art is the whole reason the app exists. As the pandemic months and drawings stacked up, Hettiarachchi and Wanasinghe found themselves increasingly attached to her cartoon creations, enough that they began to consider how to share them with the world. The usual social media routes beckoned, but given Wanasinghe’s background, the idea of an app offered a stronger pull.
“In many cases, you get an idea, put together a design, and then do the actual development,” he says. “In our case, it’s the other way around. The art drives everything.”
The art is the heart of everything we do.
Isuru Wanasinghe, Bears Gratitude cofounder
With hundreds of drawings at their disposal, the couple began thinking about the kinds of apps that could host them. Their first release was Bears Countdown, which employed the drawings to help people look ahead to birthdays, vacations, and other marquee moments. Countdown was never intended to be a mass-market app; the pair didn’t even check its launch stats on App Store Connect. “We’d have been excited to have 100 people enjoy what Nayomi had drawn,” says Wanasinghe. “That’s where our heads were at.”
But Countdown caught on with a few influencers and became enough of a success that the pair began thinking of next steps. “We thought, well, we’ve given people a way to look forward,” says Wanasinghe. “What about reflecting on the day you just had?”
Gratitude keeps the cuddly cast from Countdown, but otherwise the app is an entirely different beast. It was also designed in what Wanasinghe says was a deliberately unusual manner. “Our design approach was almost bizarrely linear,” says Wanasinghe. “We purposely didn’t map out the app. We designed it in the same order that users experience it.”
Other unorthodox decisions followed, including the absence of a sign-in screen. “We wanted people to go straight into the experience and start writing,” he says. The home-screen journaling prompts are presented via cards that users flip through by tapping left and right. “It’s definitely a nonstandard UX,” says Wanasinghe, “but we found over and over again that the first thing users did was flip through the cards.”
Our design approach was almost bizarrely linear. We purposely didn’t map out the app. We designed it in the same order that users experience it.
Isuru Wanasinghe, Bears Gratitude cofounder
Another twist: The app’s prompts are written in the voice of the user, which Wanasinghe says was done to emphasize the personal nature of the app. “We wrote the app as if we were the only ones using it, which made it more relatable,” he says.
Then there are the bears, which serve not only as a distinguishing hook in a busy field, but also as a design anchor for its creators. “We’re always thinking: ‘Instead of trying to set our app apart, how do we make it ours?’ We use apps all the time, and we know how they behave. But here we tried to detach ourselves from all that, think of it as a blank canvas, and ask, ‘What do we want this experience to be?’”
Bears Gratitude isn’t a mindfulness app — Wanasinghe is careful to clarify that neither he nor Hettiarachchi are therapists or mental health professionals. “All we know about are the trials and tribulations of life,” he says.
But those trials and tribulations have reached a greater world. “People have said, ‘This is just something I visit every day that brings me comfort,’” says Wanasinghe. “We’re so grateful this is the way we chose to share the art. We’re plugged into people’s lives in a meaningful way.”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Submissions for the Swift Student Challenge 2025 are now open through February 23. You have three more weeks to design, test, refine, and submit your app playground for consideration to be named one of 350 winners.
What to know:
Where to start:
The App Store facilitates billions of transactions annually to help developers grow their businesses and provide a world-class customer experience. To further support developers’ evolving business models — such as exceptionally large content catalogs, creator experiences, and subscriptions with optional add-ons — we’re introducing the Advanced Commerce API.
Developers can apply to use the Advanced Commerce API to support eligible App Store business models and more flexibly manage their In-App Purchases within their app. These purchases leverage the power of the trusted App Store commerce system, including end-to-end payment processing, tax support, customer service, and more, so developers can focus on providing great app experiences.
Starting February 17, 2025: Due to the European Union’s Digital Services Act, apps without trader status will be removed from the App Store in the European Union until trader status is provided and verified, if necessary.
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union in order to comply with the Digital Services Act.
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate is being updated to use the SHA-256 cryptographic algorithm. This certificate is used to sign App Store receipts, which are the proof of purchase for apps and In-App Purchases.
This update is being completed in multiple phases and some existing apps on the App Store may be impacted by the next update, depending on how they verify receipts.
Starting January 24, 2025, if your app performs on-device receipt validation and doesn’t support the SHA-256 algorithm, your app will fail to validate the receipt. If your app prevents customers from accessing the app or premium content when receipt validation fails, your customers may lose access to their content.
If your app performs on-device receipt validation, update your app to support certificates that use the SHA-256 algorithm; alternatively, use the AppTransaction and Transaction APIs to verify App Store transactions.
For more details, view TN3138: Handling App Store receipt signing certificate changes.
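For reference, the recommended alternative to on-device receipt parsing looks roughly like this with StoreKit 2's AppTransaction and Transaction APIs (a minimal sketch, not a drop-in implementation):

```swift
import StoreKit

// Sketch: let StoreKit 2 handle cryptographic verification instead of
// parsing and validating the receipt yourself.
func verifyPurchases() async {
    do {
        // Proof of app purchase, verified by StoreKit.
        let appTransaction = try await AppTransaction.shared
        if case .verified(let transaction) = appTransaction {
            print("App originally purchased with version:", transaction.originalAppVersion)
        }
        // Currently active entitlements for in-app purchases and subscriptions.
        for await entitlement in Transaction.currentEntitlements {
            if case .verified(let transaction) = entitlement {
                print("Entitled to:", transaction.productID)
            }
        }
    } catch {
        print("Verification failed:", error)
    }
}
```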
Starting next month, Apple will change the supported algorithms that secure server connections for Apple Pay on the Web. In order to maintain uninterrupted service, you’ll need to ensure that your production servers support one or more of the designated six ciphers before February 4, 2025.
These algorithm changes will affect any secure connection you’ve established as part of your Apple Pay integration, including the following touchpoints:
In the first edition of the new year: Bring SwiftUI to your app in Cupertino, get ready for the Swift Student Challenge, meet the team behind Oko, and more.
Oko is a testament to the power of simplicity.
The 2024 Apple Design Award winner for Inclusivity and 2024 App Store Award winner for Cultural Impact leverages Artificial Intelligence to help blind or low-vision people navigate pedestrian walkways by alerting them to the state of signals — “Walk,” “Don’t Walk,” and the like — through haptic, audio, and visual feedback. The app instantly affords more confidence to its users. Its bare-bones UI masks a powerful blend of visual and AI tools under the hood. And it’s an especially impressive achievement for a team that had no iOS or Swift development experience before launch.
“The biggest feedback we get is, ‘It’s so simple, there’s nothing complex about it,’ and that’s great to hear,” says Vincent Janssen, one of Oko’s three Belgium-based founders. “But we designed it that way because that’s what we knew how to do. It just happened to also be the right thing.”
ADA FACT SHEET
Oko
Download Oko from the App Store
For Janssen and his cofounders, brother Michiel and longtime friend Willem Van de Mierop, Oko — the name translates to “eye” — was a passion project that came about during the pandemic. All three studied computer science with a concentration in AI, and had spent years working in their hometown of Antwerp. But by the beginning of 2021, the trio felt restless. “We all had full-time jobs,” says Janssen, “but the weekends were pretty boring.” Yet they knew their experience couldn’t compare to that of a longtime friend with low vision, who Janssen noticed was feeling more affected as the autumn and winter months went on.
“We really started to notice that he was feeling isolated more than others,” says Janssen. “Here in Belgium, we were allowed to go for walks, but you had to be alone or with your household. That meant he couldn’t go with a volunteer or guide. As AI engineers, that got us thinking, ‘Well, there are all these stories about autonomous vehicles. Could we come up with a similar system of images or videos that would help people find their way around public spaces?’”
I had maybe opened Xcode three times a few years before, but otherwise none of us had any iOS or Swift experience.
Vincent Janssen, Oko founder
The trio began building a prototype that consisted of a microcomputer, 3D-printed materials, and a small portable speaker borrowed from the Janssen brothers’ father. Today, Janssen calls it “hacky hardware,” something akin to a small computer with a camera. But it allowed the team and their friend — now their primary tester — to walk the idea around and poke at the technology’s potential. Could AI recognize the state of a pedestrian signal? How far away could it detect a Don’t Walk sign? How would it perform in rain or wind or snow? There was just one way to know. “We went out for long walks,” says Janssen.
And while the AI and hardware performed well in their road tests, issues arose around the hardware’s size and usability, and the team began to realize that software offered a better solution. The fact that none of the three had the slightest experience building iOS apps was simply a hurdle to clear. “I had maybe opened Xcode three times a few years before,” says Janssen, “but otherwise none of us had any iOS or Swift experience.”
So that summer, the team pivoted to software, quitting their full-time jobs and throwing themselves into learning Swift through tutorials, videos, and trusty web searches. The core idea crystallized quickly: Build a simple app that relied on Camera, the Maps SDK, and a powerful AI algorithm that could help people get around town. “Today, it’s a little more complex, but in the beginning the app basically opened up a camera feed and a Core ML model to process the images,” says Janssen, noting that the original model was brought over from Python. “Luckily, the tools made the conversion really smooth.” (Oko’s AI models run locally on device.)
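The pattern Janssen describes, a camera feed flowing into a local Core ML model, can be sketched as follows; "SignalClassifier" is a hypothetical model name standing in for Oko's private model:

```swift
import CoreML
import Vision

// Sketch: run a local Core ML classifier on a camera frame via Vision.
// SignalClassifier is a hypothetical Xcode-generated model class.
func classifySignal(in pixelBuffer: CVPixelBuffer) {
    guard let coreMLModel = try? SignalClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Signal state: \(top.identifier), confidence: \(top.confidence)")
    }
    // Camera frames arrive as CVPixelBuffer, e.g. from AVCaptureVideoDataOutput.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```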
With the software taking shape, more field testing was needed. The team reached out to accessibility-oriented organizations throughout Belgium, drafting a team of 100 or so testers to “codevelop the app,” says Janssen. Among the initial feedback: Though Oko was originally designed to be used in landscape mode, pretty much everyone preferred holding their phones in portrait mode. “I had the same experience, to be honest,” said Janssen, “but that meant we needed to redesign the whole thing.”
Other changes included amending the audio feedback to more closely mimic existing real-world sounds, and addressing requests to add more visual feedback. The experience amounted to getting a real-world education about accessibility on the fly. “We found ourselves learning about VoiceOver and haptic feedback very quickly,” says Janssen.
Still, the project went remarkably fast — Oko launched on the App Store in December 2021, not even a year after the trio conceived of it. “It took a little while to do things, like make sure the UI wasn’t blocked, especially since we didn’t fully understand the code we wrote in Swift,” laughs Janssen, “but in the end, the app was doing what it needed to do.”
We found ourselves learning about VoiceOver and haptic feedback.
Vincent Janssen, Oko founder
The accessibility community took notice. And in the following months, the Oko team continued expanding its reach — Michiel Janssen and Van de Mierop traveled to the U.S. to meet with accessibility organizations and get firsthand experience with American street traffic and pedestrian patterns. But even as the app expanded, the team retained its focus on simplicity. In fact, Janssen says, they explored and eventually jettisoned some expansion ideas — including one designed to help people find and board public transportation — that made the app feel a little too complex.
Today, the Oko team numbers 6, including a fleet of developers who handle more advanced Swift matters. “About a year after we launched, we got feedback about extra features and speed improvements, and needed to find people who were better at Swift than we are,” laughs Janssen. At the same time, the original trio is now learning about business, marketing, and expansion.
At its core, Oko remains a sparkling example of a simple app that completes its task well. “It’s still a work in progress, and we’re learning every day,” says Janssen. In other words, there are many roads yet to cross.
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
The beta versions of iOS 18.3, iPadOS 18.3, macOS 15.3, tvOS 18.3, visionOS 2.3, and watchOS 11.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 16.2.
Join us in celebrating the outstanding work of these developers from around the world.
Attachment 2 of the Apple Developer Program License Agreement has been amended to specify requirements for use of the In-App Purchase API. Please review the changes and accept the updated terms in your account.
View the full terms and conditions
Translations of the updated agreement will be available on the Apple Developer website within one month.
The busiest season on the App Store is almost here. Make sure your apps and games are up to date and ready.
App Review will continue to accept submissions throughout the holiday season. Please plan to submit time-sensitive submissions early, as we anticipate high volume and reviews may take longer to complete from December 20-26.
Every year, the App Store Awards celebrate exceptional apps and games that improve people's lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year, the App Store Editorial team is proud to recognize over 40 outstanding finalists. Winners will be announced in the coming weeks.
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Tax updates as of October:
Your proceeds from the sale of eligible apps and In‑App Purchases have been increased in:
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple will not remit VAT in Nepal and Kazakhstan for local developers.
Learn more about your proceeds
Price updates as of December 2:
If you’ve selected the Japan or Türkiye storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Share your app or game’s upcoming content and enhancements for App Store featuring consideration with new Featuring Nominations in App Store Connect. Submit a nomination to tell our team about a new launch, in-app content, or added functionality. If you’re featured in select placements on the Today tab, you’ll also receive a notification via the App Store Connect app.
In addition, you can promote your app or game’s biggest moments — such as an app launch, new version, or select featuring placements on the App Store — with readymade marketing assets. Use the App Store Connect app to generate Apple-designed assets and share them to your social media channels. Include the provided link alongside your assets so people can easily download your app or game on the App Store.
The Push Notifications Console now includes metrics for broadcast push notifications sent in the Apple Push Notification service (APNs) production environment. The console’s interface provides an aggregated view of the broadcast push notifications that are successfully accepted by APNs, the number of devices that receive them, and a snapshot of the maximum number of devices subscribed to your channels.
Let’s get this out of the way: Yes, Devin Davies is an excellent cook. “I’m not, like, a professional or anything,” he says, in the way that people say they’re not good at something when they are.
But in addition to knowing his way around the kitchen, Davies is also a seasoned developer whose app Crouton, a Swift-built cooking aid, won him the 2024 Apple Design Award for Interaction.
Crouton is part recipe manager, part exceptionally organized kitchen assistant. For starters, the app collects recipes from wherever people find them — blogs, family cookbooks, scribbled scraps from the ’90s, wherever — and uses tasty ML models to import and organize them. “If you find something online, just hit the Share button to pull it into Crouton,” says the New Zealand-based developer. “If you find a recipe in an old book, just snap a picture to save it.”
And when it’s time to start cooking, Crouton reduces everything to the basics by displaying only the current step, ingredients, and measurements (including conversions). There’s no swiping around between apps to figure out how many fl oz are in a cup; no setting a timer in a different app. It’s all handled right in Crouton. “The key for me is: How quickly can I get you back to preparing the meal, rather than reading?” Davies says.
ADA FACT SHEET
Crouton
Download Crouton from the App Store
Crouton is the classic case of a developer whipping up something he needed. As the de facto chef in the house, Davies had previously done his meal planning in the Notes app, which worked until, as he laughs, “it got a little out of hand.”
At the time, Davies was in his salad days as an iOS developer, so he figured he could build something that would save him a little time. (It’s in his blood: Davies’s father is a developer too.) “Programming was never my strong suit,” he says, “but once I started building something that solved a problem, I started thinking of programming as a means to an end, and that helped.”
Davies’s full-time job was his meal ticket, but he started teaching himself Swift on the side. Swift, he says, clicked a lot faster than the other languages he’d tried, especially as someone who was still developing a taste for programming. “It still took me a while to get my head into it,” he says, “but I found pretty early on that Swift worked the way I wanted a language to work. You can point Crouton at some text, import that text, and do something with it. The amount of steps you don’t have to think about is astounding.”
I found pretty early on that Swift worked the way I wanted a language to work.
Devin Davies, Crouton
Coding with Swift offered plenty of baked-in benefits. Davies leaned on platform conventions to make navigating Crouton familiar and easy. Lists and collection views took advantage of Camera APIs. VisionKit powered text recognition; a separate model organized imported ingredients by category.
“I could separate out a roughly chopped onion from a regular onion and then add the quantity using a Core ML model,” he says. “It’s amazing how someone like me can build a model to detect ingredients when I really have zero understanding of how it works.”
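Crouton's actual pipeline isn't public, but the photograph-a-recipe feature described above maps onto Apple's text-recognition APIs roughly like this (a sketch using Vision; the article mentions VisionKit, and either framework can drive this flow):

```swift
import UIKit
import Vision

// Sketch: snap a picture of a recipe, then recover its text lines.
func recognizeRecipeText(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNRecognizeTextRequest { request, _ in
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate   // favor accuracy over speed for documents

    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```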
The app came together quickly: The first version was done in about six months, but Crouton simmered for a while before finding its audience. “My mom and I were the main active users for maybe a year,” Davies laughs. “But it’s really important to build something that you use yourself — especially when you’re an indie — so there’s motivation to carry on.”
Davies served up Crouton updates for a few years, and eventually the app gained more traction, culminating with its Apple Design Award for Interaction at WWDC24. That’s an appropriate category, Davies says, because he believes his approach to interaction is his app’s special sauce. “My skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C,” he says. “I spent a lot of time figuring out what to leave out rather than bring in.”
Davies hopes to use the coming months to explore spicing up Crouton with Apple Intelligence, Live Activities on Apple Watch, and translation APIs. (Though Crouton is his primary app, he’s also built an Apple Vision Pro app called Plate Smash, which is presumably very useful for cooking stress relief.)
But it’s important to him that any new features or upgrades pair nicely with the current Crouton. “I’m a big believer in starting out with core intentions and holding true to them,” he says. “I don’t think that the interface, over time, has to be completely different.”
My skillset is figuring out how the pieces of an app fit together, and how you move through them from point A to B to C.
Devin Davies, Crouton
Because it’s a kitchen assistant, Crouton is a very personal app. It’s in someone’s kitchen at mealtime, it’s helping people prepare meals for their loved ones, it’s enabling them to expand their culinary reach. It makes a direct impact on a person’s day. That’s a lot of influence to have as an app developer — even when a recipe doesn’t quite pan out.
“Sometimes I’ll hear from people who discover a bug, or even a kind of misunderstanding, but they’re always very kind about it,” laughs Davies. “They’ll tell me, ‘Oh, I was baking a cake for my daughter’s birthday, and I put in way too much cream cheese and I ruined it. But, great app!’”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
In this edition: The Swift Pathway, new developer activities around the world, and an interview with the creator of recipe app Crouton.
Beta testing your apps, games, and App Clips is even better with new enhancements to TestFlight. Updates include:
To get started with TestFlight, upload your build, add test information, and invite testers.
The beta versions of iOS 18.2, iPadOS 18.2, and macOS 15.2 are now available. Get your apps ready by confirming they work as expected on these releases. And make sure to build and test with Xcode 16.2 beta to take advantage of the advancements in the latest SDKs.
As previewed earlier this year, changes to the browser choice screen, default apps, and app deletion for EU users, as well as support in Safari for exporting user data and for web browsers to import that data, are now available in the beta versions of iOS 18.2 and iPadOS 18.2.
These releases also include improvements to the Apps area in Settings first introduced in iOS 18 and iPadOS 18. All users worldwide will be able to manage their default apps via a Default Apps section at the top of the Apps area. New calling and messaging defaults are also now available for all users worldwide.
Following feedback from the European Commission and from developers, in these releases developers can develop and test EU-specific features, such as alternative browser engines, contactless apps, marketplace installations from web browsers, and marketplace apps, from anywhere in the world. Developers of apps that use alternative browser engines can now use WebKit in those same apps.
View details about the browser choice screen, how to make an app available for users to choose as a default, how to create a calling or messaging app that can be a default, and how to import user data from Safari.
The Apple Developer Program License Agreement and its Schedules 1, 2, and 3 have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms in your account.
Apple Developer Program License Agreement
Schedules 1, 2, and 3
Apple Services Pte. Ltd. is now the Apple legal entity responsible for the marketing and End-User download of the Licensed and Custom Applications by End-Users located in the following regions:
Paid Applications Agreement (Schedules 2 and 3)
Exhibit B: Indicated that Apple shall not collect and remit taxes for local developers in Nepal and Kazakhstan, and such developers shall be solely responsible for the collection and remittance of such taxes as may be required by local law.
Exhibit C:
View the full terms and conditions
Translations of the Apple Developer Program License Agreement will be available on the Apple Developer website within one month.
Starting today, in order to submit updates for apps on the App Store in the European Union (EU), Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect. If you’re a trader, you’ll need to provide your trader information before you can submit your app for review.
Starting February 17, 2025, apps without trader status will be removed from the App Store in the EU until trader status is provided and verified in order to comply with the Digital Services Act.
The Certification Authority (CA) for Apple Push Notification service (APNs) is changing. APNs will update the server certificates in sandbox on January 20, 2025, and in production on February 24, 2025. All developers using APNs will need to update their application’s Trust Store to include the new server certificate: SHA-2 Root : USERTrust RSA Certification Authority certificate.
To ensure a smooth transition and avoid push notification delivery failures, please make sure that both old and new server certificates are included in the Trust Store before the cut-off date for each of your application servers that connect to sandbox and production.
At this time, you don’t need to update the APNs SSL provider certificates issued to you by Apple.
Get your app up to speed, meet the team behind Lies of P, explore new student resources, and more.
Lies of P is closer to its surprising source material than you might think.
Based on Carlo Collodi’s 1883 novel The Adventures of Pinocchio, the Apple Design Award-winning game is a macabre reimagining of the story of a puppet who longs to be a real boy. Collodi’s story is still best known as a children’s fable. But it’s also preprogrammed with more than its share of darkness — which made it an appealing foundation for Lies of P director Jiwon Choi.
“When we were looking for stories to base the game on, we had a checklist of needs,” says Choi. “We wanted something dark. We wanted a story that was familiar but not entirely childish. And the deeper we dove into Pinocchio, the more we found that it checked off everything we were looking for.”
ADA FACT SHEET
Lies of P
Developed by the South Korea-based ROUND8 Studio and published by its parent company, NEOWIZ, Lies of P is a lavishly rendered dark fantasy adventure and a technical showpiece for Mac with Apple silicon. Yes, players control a humanoid puppet created by Geppetto. But instead of a little wooden boy with a penchant for little white lies, the game’s protagonist is a mechanical warrior with an array of massive swords and a mission to battle through the burned-out city of Krat to find his maker — who isn’t exactly the genial old woodcarver from the fable.
“The story is well-known, and so are the characters,” says Choi. “We knew that to create a lasting memory for gamers, we had to add our own twists.”
Those twists abound. The puppet is accompanied by a digital lamp assistant named Gemini — pronounced “jim-i-nee,” of course. A major character is a play on the original’s kindly Blue Fairy. A game boss named Mad Donkey is a lot more irritable than the donkeys featured in Collodi’s story. And though nobody’s nose grows in Lies of P, characters have opportunities to lie in a way that directly affects the storyline — and potentially one of the game’s multiple endings.
We knew that to create a lasting memory for gamers, we had to add our own twists.
Jiwon Choi, Lies of P director
“If you play without knowing the original story, you might not catch all those twists,” says Choi. “But it goes the other way, too. We’ve heard from players who became curious about the original story, so they went back and found out about our twists that way.”
There’s nothing curious about the game’s success: In addition to winning a 2024 Apple Design Award for Visuals and Graphics, Lies of P was named the App Store’s 2023 Mac Game of the Year and has collected a bounty of accolades from the gaming community. Many of those call out the game’s visual beauty, a world of rich textures, detailed lighting, and visual customization options like MetalFX upscaling and volumetric fog effects that let you style the ruined city to your liking.
For that city, the ROUND8 team added another twist by moving the story from its original Italian locale to the Belle Époque era of pre-WWI France. “Everyone expected Italy, and everyone expected steampunk,” says Choi, “but we wanted something that wasn’t quite as common in the gaming industry. We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier.”
We considered a few other locations, like the wild west, but the Belle Époque was the right mix of beauty and prosperity. We just made it darker and gloomier.
Jiwon Choi, Lies of P director
To create the game’s fierce (and oily) combat, Choi and the team took existing Soulslike elements and added their own touches, like customizable weapons that can be assembled from items lying around Krat. “We found that players will often find a weapon they like and use it until the ending,” says Choi. “We found that inefficient. But we also know that everyone has a different taste for weapons.”
The system, he says, gives players the freedom to choose their own combinations instead of pursuing a “best” pre-ordained weapon. And the strategy worked: Choi says players are often found online discussing the best combinations rather than the best weapons. “That was our intention when creating the system,” he says.
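As an aside for developers, here’s a purely illustrative Swift sketch of how a parts-based weapon system like the one Choi describes might be modeled. Every type, name, and number below is hypothetical and is not drawn from the game’s actual code.

```swift
// Hypothetical model of "combine parts, not pick a best weapon":
// effectiveness emerges from the blade-and-handle combination, so players
// tune for taste instead of chasing one pre-ordained weapon.
struct Blade {
    let name: String
    let baseDamage: Int
}

struct Handle {
    let name: String
    let attackSpeed: Double // swings per second
}

struct AssembledWeapon {
    let blade: Blade
    let handle: Handle

    var damagePerSecond: Double {
        Double(blade.baseDamage) * handle.attackSpeed
    }
}

let weapon = AssembledWeapon(
    blade: Blade(name: "Heavy Cleaver Blade", baseDamage: 120),
    handle: Handle(name: "Quick Wrench Handle", attackSpeed: 1.4)
)
print(weapon.damagePerSecond) // 168.0
```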
Also intentional: The game’s approach to lying, another twist on the source material. “Lying in the game isn’t just about deceiving a counterpart,” says Choi. “Humans are the only species who can lie to one another, so lying is about exploring the core of this character.”
It’s also about the murky ethics of lying: Lies of P suggests that, at times, nothing is as human — or humane — as a well-intentioned falsehood.
“The puppet of Geppetto is not human,” says Choi. “But at the same time, the puppet acts like a human and occasionally exhibits human behavior, like getting emotional listening to music. The idea was: Lying is something a human might do. That’s why it’s part of the game.”
The Lies of P story might not be done just yet. Choi and team are working on downloadable content and a potential sequel — possibly starring another iconic character who’s briefly teased in the game’s ending. But in the meantime, the team is taking a moment to enjoy the fruits of their success. “At the beginning of development, I honestly doubted that we could even pull this off,” says Choi. “For me, the most surprising thing is that we achieved this. And that makes us think, ‘Well, maybe we could do better next time.’”
Meet the 2024 Apple Design Award winners
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
We’re thrilled to announce the Swift Student Challenge 2025. The Challenge provides the next generation of student developers the opportunity to showcase their creativity and coding skills by building app playgrounds with Swift.
Applications for the next Challenge will open in February 2025 for three weeks.
We’ll select 350 Swift Student Challenge winners whose submissions demonstrate excellence in innovation, creativity, social impact, or inclusivity. From this esteemed group, we’ll name 50 Distinguished Winners whose work is truly exceptional and invite them to join us at Apple in Cupertino for three incredible days where they’ll gain invaluable insights from Apple experts and engineers, connect with their peers, and enjoy a host of unforgettable experiences.
All Challenge winners will receive one year of membership in the Apple Developer Program, a special gift from Apple, and more.
To help you get ready, we’re launching new coding resources, including Swift Coding Clubs designed for students to develop skills for a future career, build community, and get ready for the Challenge.
Apple is committed to making the App Store a safe place for everyone — especially kids. Within the next few months, the following regional age ratings for Australia and France will be implemented in accordance with local laws. No action is needed on your part. Where required by local regulations, regional ratings will appear alongside Apple global age ratings.
Australia
Apps with any instances of simulated gambling will display an R18+ regional age rating in addition to the Apple global age rating on the App Store in Australia.
France
Apps with a 17+ Apple global age rating will also display an 18+ regional age rating on the App Store in France.
The App Review Guidelines have been revised to add iPadOS to Notarization.
Starting September 16:
If you’ve entered into a previous version of the following agreements, be sure to sign the latest version, which supports iPadOS:
Learn more about the update on apps distributed in the EU
Translations of the guidelines will be available on the Apple Developer website within one month.
You can now configure win-back offers — a new type of offer for auto-renewable subscriptions — in App Store Connect. Win-back offers allow you to reach previous subscribers and encourage them to resubscribe to your app or game. For example, you can create a pay up front offer for a reduced subscription price of $9.99 for six months, with a standard renewal price of $39.99 per year. Based on your offer configuration, Apple displays these offers to eligible customers in various places, including:
When creating win-back offers in App Store Connect, you’ll determine customer eligibility, select regional availability, and choose the discount type. Eligible customers will be able to discover win-back offers this fall.
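For developers curious about the client side, here’s a minimal StoreKit 2 sketch of redeeming a win-back offer in-app. It assumes the win-back additions introduced with iOS 18 (a `winBackOffers` property on the subscription info and a `winBackOffer` purchase option); treat the exact names as assumptions and verify them against the current StoreKit documentation.

```swift
import StoreKit

// Minimal sketch (StoreKit 2, iOS 18+). Assumes the win-back additions
// announced with iOS 18: `winBackOffers` on Product.SubscriptionInfo and
// the `.winBackOffer` purchase option. Verify names against current docs.
func redeemWinBackOffer(for productID: String) async throws {
    guard let product = try await Product.products(for: [productID]).first,
          let subscription = product.subscription,
          // Win-back offers Apple has determined this customer is eligible for.
          let offer = subscription.winBackOffers.first else { return }

    // Attach the offer so the discounted pricing applies to this purchase.
    let result = try await product.purchase(options: [.winBackOffer(offer)])
    if case .success(let verification) = result,
       case .verified(let transaction) = verification {
        await transaction.finish()
    }
}
```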
iOS 18, iPadOS 18, macOS Sequoia, tvOS 18, visionOS 2, and watchOS 11 will soon be available to customers worldwide. Build your apps and games using the Xcode 16 Release Candidate and latest SDKs, test them using TestFlight, and submit them for review to the App Store. You can now start deploying seamlessly to TestFlight and the App Store from Xcode Cloud. With exciting new features like watchOS Live Activities, app icon customization, and powerful updates to Swift, Siri, Controls, and Core ML, you can deliver even more unique experiences on Apple platforms.
And beginning next month, you’ll be able to bring the incredible new features of Apple Intelligence into your apps to help inspire the way users communicate, work, and express themselves.
Starting April 2025, apps uploaded to App Store Connect must be built with SDKs for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11.
Get your apps ready by digging into these video sessions and resources.
Explore machine learning on Apple platforms Watch now Bring expression to your app with Genmoji Watch now
Browse new resources
Learn how to make actions available to Siri and Apple Intelligence.
Need a boost?
Check out our curated guide to machine learning and AI.
FEATURED
Get ready for OS updates
Dive into the latest updates with these developer sessions.
Level up your games
Port advanced games to Apple platforms Watch now Design advanced games for Apple platforms Watch now
Bring your vision to life
Design great visionOS apps Watch now Design interactive experiences for visionOS Watch now
Upgrade your iOS and iPadOS apps
Extend your app’s controls across the system Watch now Elevate your tab and sidebar experience in iPadOS Watch now
Browse Apple Developer on YouTube
Get expert guidance
Check out curated guides to the latest features and technologies.
BEHIND THE DESIGN
Rytmos: A puzzle game with a global beat
Find out how Floppy Club built an Apple Design Award winner that sounds as good as it looks.
Behind the Design: The rhythms of Rytmos View now
MEET WITH APPLE
Reserve your spot for upcoming developer activities
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Rytmos is a game that sounds as good as it looks.
With its global rhythms, sci-fi visuals, and clever puzzles, the 2024 Apple Design Award winner for Interaction is both a challenge and an artistic achievement. To solve each level, players must create linear pathways on increasingly complex boards, dodging obstacles and triggering buttons along the way. It’s all set to a world-music backdrop; different levels feature genres as diverse as Ethiopian jazz, Hawaiian slack key guitar, and Gamelan from Indonesia, just to name a few.
And here’s the hook: Every time you clear a level, you add an instrument to an ever-growing song.
“The idea is that instead of reacting to the music, you’re creating it,” says Asger Strandby, cofounder of Floppy Club, the Denmark-based studio behind Rytmos. “We do a lot to make sure it doesn’t sound too wild. But the music in Rytmos is entirely generated by the way you solve the puzzles.”
ADA FACT SHEET
Rytmos
Download Rytmos from the App Store
The artful game is the result of a partnership that dates back decades. In addition to being developers, Strandby and Floppy Club cofounder Niels Böttcher are both musicians who hail from the town of Aarhus in Denmark. “It’s a small enough place that if you work in music, you probably know everyone in the community,” laughs Böttcher.
The music in Rytmos comes mostly from traveling and being curious.
Niels Böttcher, Floppy Club cofounder
The pair connected back in the early 2000s, bonding over music more than games. “For me, games were this magical thing that you could never really make yourself,” says Strandby. “I was a geeky kid, so I made music and eventually web pages on computers, but I never really thought I could make games until I was in my twenties.” Instead, Strandby formed bands like Analogik, which married a wild variety of crate-digging samples — swing music, Eastern European folk, Eurovision-worthy pop — with hip-hop beats. Strandby was the frontman, while Böttcher handled the behind-the-scenes work. “I was the manager in everything but name,” he says.
The band was a success: Analogik went on to release five studio albums and perform at Glastonbury, Roskilde, and other big European festivals. But when their music adventure ended, the pair moved back into separate tech jobs for several years — until the time came to join forces again. “We found ourselves brainstorming one day, thinking about, ‘Could we combine music and games in some way?’” says Böttcher. “There are fun similarities between the two in terms of structures and patterns. We thought, ‘Well, let’s give it a shot.’”
The duo launched work on a rhythm game that was powered by their histories and travels. “I’ve collected CDs and tapes from all over the world, so the genres in Rytmos are very carefully chosen,” says Böttcher. “We really love Ethiopian jazz music, so we included that. Gamelan music (traditional Indonesian ensemble music that’s heavy on percussion) is pretty wild, but incredible. And sometimes, you just hear an instrument and say, ‘Oh, that tabla has a really nice sound.’ So the music in Rytmos comes mostly from traveling and being curious.”
The game took shape early, but the mazes in its initial versions were much more intricate. To help bring them down to a more approachable level, the Floppy Club team brought on art director Niels Fyrst. “He was all about making things cleaner and clearer,” says Böttcher. “Once we saw what he was proposing — and how it made the game stronger — we realized, ‘OK, maybe we’re onto something.’”
Success in Rytmos isn’t just that you’re beating a level. It’s that you’re creating something.
Asger Strandby, Floppy Club cofounder
Still, even with a more manageable set of puzzles, a great deal of design complexity remained. Building Rytmos levels was like stacking a puzzle on a puzzle; the team not only had to build out the levels, but also create the music to match. To do so, Strandby and his brother, Bo, would sketch out a level and then send it over to Böttcher, who would sync it to music — a process that proved even more difficult than it seems.
“The sound is very dependent on the location of the obstacles in the puzzles,” says Strandby. “That’s what shapes the music that comes out of the game. So we’d test and test again to make sure the sound didn’t break the idea of the puzzle.”
The process, he says, was “quite difficult” to get right. “Usually with something like this, you create a loop, and then maybe add another loop, and then add layers on top of it,” says Böttcher. “In Rytmos, hitting an emitter triggers a tone, percussion sound, or chord. One tone hits another tone, and then another, and then another. In essence, you’re creating a pattern while playing the game.”
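For the technically curious, here’s a toy Swift sketch of the mechanic Böttcher describes: the solved path acts as the sequencer, and each emitter the pulse crosses schedules a note. All types and names here are invented for illustration and are not from Rytmos itself.

```swift
import AVFoundation

// Toy illustration: a pulse walks the player's solved path and each
// emitter it crosses schedules a note, so different solutions yield
// different patterns. (Hypothetical types, not Rytmos code.)
struct Emitter {
    let midiNote: UInt8     // pitch this emitter contributes
    let beatOffset: Double  // when the pulse reaches it, in beats
}

final class PathSequencer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }

    // Play the pattern implied by the emitters along the solved path.
    func play(_ path: [Emitter], bpm: Double) {
        let secondsPerBeat = 60.0 / bpm
        for emitter in path {
            DispatchQueue.main.asyncAfter(deadline: .now() + emitter.beatOffset * secondsPerBeat) {
                self.sampler.startNote(emitter.midiNote, withVelocity: 96, onChannel: 0)
            }
        }
    }
}
```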
We’ve actually gone back to make some of the songs more imprecise, because we want them to sound human.
Niels Böttcher, Floppy Club cofounder
The unorthodox approach leaves room for creativity. “Two different people’s solutions can sound different,” says Strandby. And when players win a level, they unlock a “jam mode” where they can play and practice freely. “It’s just something to do with no rules after all the puzzling,” laughs Strandby.
Yet despite all the technical magic happening behind the scenes, the actual musical results had to have a human feel. “We’re dealing with genres that are analog and organic, so they couldn’t sound electronic at all,” says Böttcher. “We’ve actually gone back to make some of the songs more imprecise, because we want them to sound human.”
Best of all, the game is shot through with creativity and cleverness — even offscreen. Each letter in the Rytmos logo represents the solution to a puzzle. The company’s logo is a 3.5-inch floppy disk, a little nod to their first software love. (“That’s all I wished for every birthday,” laughs Böttcher.) And both Böttcher and Strandby hope that the game serves as an introduction to both sounds and people they might not be familiar with. “Learning about music is a great way to learn about a culture,” says Strandby.
But mostly, Rytmos is an inspirational experience that meets its lofty goal. “Success in Rytmos isn’t just that you’re beating a level,” says Strandby. “It’s that you’re creating something.”
Meet the 2024 Apple Design Award winners
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On September 16:
If you’ve selected the Chile, Laos, or Senegal storefront as the base for your app or In-App Purchase, prices won’t change. On other storefronts, prices will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
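To make the equalization arithmetic concrete, here is a small illustrative Swift sketch: convert the base price with a published exchange rate, then snap the result to a familiar local price point. The rate and the ".99 ending" rule below are invented for the example; Apple’s actual per-storefront price points and rounding rules differ.

```swift
import Foundation

// Illustrative only: equalize a base price into another currency and
// snap it to an x.99 price point. Real storefront price points differ.
func equalizedPrice(base: Decimal, fxRate: Decimal) -> Decimal {
    var converted = base * fxRate
    var wholePart = Decimal()
    NSDecimalRound(&wholePart, &converted, 0, .down) // drop the fraction
    return wholePart + Decimal(string: "0.99")!      // snap to x.99
}

// A 9.99 base price at an assumed rate of 1.60:
// 9.99 * 1.60 = 15.984, which snaps to a 15.99 local price point.
print(equalizedPrice(base: Decimal(string: "9.99")!, fxRate: Decimal(string: "1.60")!))
```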
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
As of August 29:
Your proceeds from the sale of eligible apps and In‑App Purchases have been modified in:
Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Laos and Senegal.
Beginning in September:
Your proceeds from the sale of eligible apps and In‑App Purchases will be modified in:
Learn more about your proceeds
1: Excludes auto-renewable subscriptions.
Join us for a special Apple Event on September 9 at 10 a.m. PT.
Watch on apple.com, Apple TV, or YouTube Live.
By the end of this year, we’ll make changes to the browser choice screen, default apps, and app deletion for iOS and iPadOS for users in the EU. These updates come from our ongoing dialogue with the European Commission about compliance with the Digital Markets Act’s requirements in these areas.
Developers of browsers offered in the browser choice screen in the EU will have additional information about their browser shown to users who view the choice screen, and will get access to more data about the performance of the choice screen. The updated choice screen will be shown to all EU users who have Safari set as their default browser. For details about the changes coming to the browser choice screen, view About the browser choice screen in the EU.
For users in the EU, iOS 18 and iPadOS 18 will also include a new Default Apps section in Settings that lists defaults available to each user. In future software updates, users will get new default settings for dialing phone numbers, sending messages, translating text, navigation, managing passwords, keyboards, and call spam filters. To learn more, view Update on apps distributed in the European Union.
Additionally, the App Store, Messages, Photos, Camera, and Safari apps will now be deletable for users in the EU.
As a reminder, Account Holders or Admins in the Apple Developer Program need to enter trader status in App Store Connect for apps on the App Store in the European Union (EU) in order to comply with the Digital Services Act.
Please note these new dates and requirements:
Apple Entrepreneur Camp supports underrepresented founders and developers, and encourages the pipeline and longevity of these entrepreneurs in technology. Attendees benefit from one-on-one code-level guidance, receive unprecedented access to Apple engineers and experts, and become part of the extended global network of Apple Entrepreneur Camp alumni.
Applications are now open for female,* Black, Hispanic/Latinx, and Indigenous founders and developers. And this year we’re thrilled to bring back our in-person programming at Apple in Cupertino. For those who can’t attend in person, we’re still offering our full program online. We welcome established entrepreneurs with app-driven businesses to learn more about eligibility requirements and apply today.
Apply by September 3, 2024.
* Apple believes that gender expression is a fundamental right. We welcome all women to apply to this program.
In response to the announcement by the European Commission in June, we’re making the following changes to Apple’s Digital Markets Act compliance plan. We’re introducing updated terms that will apply this fall for developers with apps in the European Union storefronts of the App Store that use the StoreKit External Purchase Link Entitlement. Key changes include:
Learn more by visiting Alternative payment options on the App Store in the European Union or request a 30-minute online consultation to ask questions and provide feedback about these changes.
Explore the latest developer activities — including sessions, consultations, and labs — all around the world.
BEHIND THE DESIGN
Creating the make-believe magic of Lost in Play
Discover how the developers of this Apple Design Award-winning game conjured up an imaginative world of oversized frogs, mischievous gnomes, and occasional pizzas.
Behind the Design: Creating the make-believe magic of Lost in Play View now
Get resourceful
SESSION OF THE MONTH
Extend your Xcode Cloud workflows
Discover how Xcode Cloud can adapt to your development needs.
Extend your Xcode Cloud workflows Watch now
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Lost in Play is a game created by and for people who love to play make-believe.
The 2024 Apple Design Award (ADA) winner for Innovation is a point-and-click adventure that follows two young siblings, Toto and Gal, through a beautifully animated world of forbidden forests, dark caverns, friendly frogs, and mischievous gnomes. To advance through the game’s story, players complete fun mini-games and puzzles, all of which feel like a Saturday morning cartoon: Before the journey is out, the pair will fetch a sword from a stone, visit a goblin village, soar over the sea on an enormous bird, and navigate the real-world challenges of sibling rivalry. They will also order several pizzas.
ADA FACT SHEET
Lost in Play
Lost in Play is the brainchild of Happy Juice Games, a small Israel-based team whose three cofounders drew inspiration from their own childhoods — and their own families. “We’ve all watched our kids get totally immersed playing make-believe games,” says Happy Juice’s Yuval Markovich. “We wanted to recreate that feeling. And we came up with the idea of kids getting lost, partly in their imaginations, and partly in real life.”
The team was well-equipped for the job. Happy Juice cofounders Markovich, Oren Rubin, and Alon Simon all have backgrounds in TV and film animation, and knew even before drawing their first sketch that they wanted a playful, funny adventure. “As adults, we can forget how to enjoy simple things like that,” says Simon, “so we set out to make a game about imagination, full of crazy creatures and colorful places.”
For his part, Markovich didn’t just have a history in gaming; he taught himself English by playing text-based adventure games in the ‘80s. “You played those games by typing ‘go north’ or ‘look around,’ so every time I had to do something, I’d open the dictionary to figure out how to say it,” he laughs. “At some point I realized, ‘Oh wait, I know this language.’”
The story became a matter of, ‘OK, a goblin village sounds fun — how do we get there?’
Yuval Markovich, Happy Juice Games cofounder
But those games could be frustrating, as anyone who ever tried to “leave house” or “get ye flask” can attest. Lost in Play was conceived from day one to be light and navigable. “We wanted to keep it comic, funny, and easy,” says Rubin. “That’s what we had in mind from the very beginning.”
Lost in Play may be a linear experience — it feels closer to playing a movie than a sandbox game — but it’s hardly simple. As befitting a playable dream, its story feels a little unmoored, like it’s being made up on the fly. That’s because the team started with art, characters, and environments, and then went back to add a hero’s journey to the elements.
“We knew we’d have a dream in the beginning that introduced a few characters. We knew we’d end up back at the house. And we knew we wanted one scene under the sea, and another in a maker space, and so on,” says Markovich. “The story became a matter of, ‘OK, a goblin village sounds fun — how do we get there?’”
Naturally, the team drew on their shared backgrounds in animation to shape the game all throughout its three-year development process — and not just in terms of art. Like a lot of cartoons, Lost in Play has no dialogue, both to increase accessibility and to enhance the story’s illusion. Characters speak in a silly gibberish. And there are little cartoon-inspired tricks throughout; for instance, the camera shakes when something is scary. “When you study animation, you also study script writing, cinematography, acting, and everything else,” Markovich says. “I think that’s why I like making games so much: They have everything.”
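For developers, the camera-shake trick Markovich mentions is a classic that’s simple to sketch. Here’s an illustrative SpriteKit version; it is not from the game’s code, which may use a different engine entirely.

```swift
import SpriteKit

// Illustrative camera shake: jolt the camera by small random offsets and
// move it back, so the scene appears to tremble for a scary beat.
extension SKCameraNode {
    func shake(amplitude: CGFloat = 8, duration: TimeInterval = 0.4) {
        let jolts = 8
        var actions: [SKAction] = []
        for _ in 0..<jolts {
            let dx = CGFloat.random(in: -amplitude...amplitude)
            let dy = CGFloat.random(in: -amplitude...amplitude)
            let step = duration / Double(jolts * 2)
            actions.append(SKAction.moveBy(x: dx, y: dy, duration: step))
            actions.append(SKAction.moveBy(x: -dx, y: -dy, duration: step))
        }
        run(SKAction.sequence(actions))
    }
}

// Usage, inside an SKScene with a camera attached:
// camera?.shake()
```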
The best thing we hear is that it’s a game parents enjoy playing with their kids.
Oren Rubin, Happy Juice Games cofounder
And in a clever acknowledgment of the realities of childhood, brief story beats return Toto and Gal to the real world to navigate practical issues like sibling rivalries. That’s on purpose: Simon says early versions of the game were maybe a little too cute. “Early on, we had the kids sleeping neatly in their beds,” says Simon. “But we decided that wasn’t realistic. We added a bit more of them picking on each other, and a conflict in the middle of the game.” Still, Markovich says that even the real-world interludes keep one foot in the imaginary world. “They may go through a park where an old lady is feeding pigeons, but then they walk left and there’s a goblin in a swamp,” he laughs.
On the puzzle side, Lost in Play’s mini-games are designed to strike the right level of challenge. The team is especially proud of the game’s system of hints, which often present challenges in themselves. “We didn’t want people getting trapped like I did in those old adventure games,” laughs Markovich. “I loved those, but you could get stuck for months. And we didn’t want people going online to find answers either.” The answer: a hint system that doesn’t just hand over the solution but gives players a feeling of accomplishment, an incentive to go back for more.
It all adds up to a unique experience for players of all ages — and that’s by design too. “The best feedback we get is that it’s suitable for all audiences,” says Rubin, “and the best thing we hear is that it’s a game parents enjoy playing with their kids.”
Meet the 2024 Apple Design Award winners
In macOS Sequoia, users will no longer be able to Control-click to override Gatekeeper when opening software that isn’t signed correctly or notarized. They’ll need to visit System Settings > Privacy & Security to review security information for software before allowing it to run.
If you distribute software outside of the Mac App Store, we recommend that you submit your software to be notarized. The Apple notary service automatically scans your Developer ID-signed software and performs security checks. When your software is ready for distribution, it’s assigned a ticket to let Gatekeeper know it’s been notarized so customers can run it with confidence.
The App Review Guidelines have been revised to support updated policies and upcoming features, and to provide clarification.
View the App Review Guidelines
Get resources and support to prepare for App Review
Translations of the guidelines will be available on the Apple Developer website within one month.
Our doors are open. Join us to explore all the new sessions, documentation, and features through online and in-person activities held in 15 cities around the world.
Join us July 22–26 for online office hours to get one-on-one guidance about your app or game. And visit the forums where more engineers are ready to answer your questions.
WWDC24 highlights View now
BEHIND THE DESIGN
Positive vibrations: How Gentler Streak approaches fitness with “humanity”
Find out why the team behind this Apple Design Award-winning lifestyle app believes success is about more than stats.
Behind the Design: How Gentler Streak approaches fitness with “humanity” View now
GET RESOURCEFUL
New sample code
SESSION OF THE MONTH
Say hello to the next generation of CarPlay design system
Learn how the system at the heart of CarPlay allows each automaker to express their vehicle’s character and brand.
Say hello to the next generation of CarPlay design system Watch now
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Gentler Streak is a different kind of fitness tracker. In fact, to hear cofounder and CEO Katarina Lotrič tell it, it’s not really a fitness tracker at all.
“We think of it more as a lifestyle app,” says Lotrič, from the team’s home office in Kranj, Slovenia. “We want it to feel like a compass, a reminder to get moving, no matter what that means for you,” she says.
ADA FACT SHEET
Gentler Streak
Download Gentler Streak from the App Store
Learn more about Gentler Streak
Meet the 2024 Apple Design Award winners
That last part is key. True to its name, the Apple Design Award-winning Gentler Streak takes a friendlier approach to fitness. Instead of focusing on performance — on the bigger, faster, and stronger — Gentler Streak meets people where they are, presenting workout suggestions, statistics, and encouragement for all skill levels.
“A lot of mainstream fitness apps can seem to be about pushing all the time,” Lotrič says. “But for a lot of people, that isn’t the reality. Everyone has different demands and capabilities on different days. We thought, ‘Can we create a tool to help anyone know where they’re at on any given day, and guide them to a sustainably active lifestyle?’”
If a 15-minute walk is what your body can do at that moment, that’s great.
Katarina Lotrič, CEO and cofounder of Gentler Stories
To reach those goals, Lotrič and her Gentler Stories cofounders — UI/UX designer Andrej Mihelič, senior developer Luka Orešnik, and CTO and iOS developer Jasna Krmelj — created an app powered by an optimistic and encouraging vibe that considers physical fitness and mental well-being equally.
Fitness and workout data (collected from HealthKit) is presented in a colorful, approachable design. The app’s core functions are available for free; a subscription unlocks premium features. And an abstract mascot named Yorhart (sound it out) adds to the light touch. “Yorhart helps you establish a relationship with the app and with yourself, because it’s what your heart would be telling you,” Lotrič says.
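For developers, the HealthKit side of an app like this follows a familiar pattern: request read access, then query recent workouts. Here’s a generic sketch of that plumbing, not Gentler Streak’s actual implementation; note that HealthKit also requires the capability to be enabled and a usage description in Info.plist.

```swift
import HealthKit

// Generic HealthKit plumbing: ask to read workouts, then fetch the most
// recent ones. (Illustrative only; requires the HealthKit capability and
// an NSHealthShareUsageDescription entry in Info.plist.)
let healthStore = HKHealthStore()

func fetchRecentWorkouts() async throws -> [HKWorkout] {
    let workoutType = HKObjectType.workoutType()
    try await healthStore.requestAuthorization(toShare: [], read: [workoutType])

    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    return try await withCheckedThrowingContinuation { continuation in
        let query = HKSampleQuery(sampleType: workoutType,
                                  predicate: nil,
                                  limit: 20,
                                  sortDescriptors: [newestFirst]) { _, samples, error in
            if let error {
                continuation.resume(throwing: error)
            } else {
                continuation.resume(returning: samples as? [HKWorkout] ?? [])
            }
        }
        healthStore.execute(query)
    }
}
```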
It’s working: In addition to the 2024 Apple Design Award for Social Impact, Gentler Streak was named 2022 Apple Watch App of the Year. What’s more, it has an award-winning ancestor: Lotrič and Orešnik won an Apple Design Award in 2017 for Lake: Coloring Book for Adults.
The trio used the success of Lake to learn more about navigating the industry. But something else was happening during that time: The team, all athletes, began revisiting their own relationships with fitness. Lotrič suffered an injury that kept her from running for months and affected her mental health; she writes about her experiences in Gentler Streak’s editorial section. Mihelič had a different issue. “My problem wasn’t that I lacked motivation,” he says. “It was that I worked out too much. I needed something that let me know when it was enough.”
Statistics are just numbers. Without knowing how to interpret them, they are meaningless.
Katarina Lotrič, CEO and cofounder of Gentler Stories
As a way to reset, Mihelič put together an internal app, a simple utility that encouraged him to move but also allowed time for recuperation. “It wasn’t very gentle,” he laughs. “But the core idea was more or less the same. It guided but it didn’t push. And it wasn’t based on numbers; it was more explanatory.”
Over time, the group began using Mihelič’s app. “We saw right away that it was sticky,” says Lotrič. “I came back to it daily, and it was just this basic prototype. After a while, we realized, ‘Well, this works and is built, to an extent. Why don’t we see if there’s anything here?’”
That’s when Lotrič, Orešnik, and Krmelj split from Lake to create Gentler Stories with Mihelič. “I wanted in because I loved the idea behind the whole company,” Krmelj says. “It wasn’t just about the app. I really like the app. But I really believed in this idea about mental well-being.”
Early users believed it too: The team found that initial TestFlight audience members returned at a stronger rate than expected. “Our open and return rates were high enough that we kept thinking, ‘Are these numbers even real?’” laughs Lotrič. Those early users responded strongly to the “gentler” side, the approachable repositioning of statistics.
“We weren’t primarily addressing the audience that most fitness apps seemed to target,” says Lotrič. “We focused on everyone else, the people who maybe didn’t feel like they belonged in a gym. Statistics are just numbers. Without knowing how to interpret them, they are meaningless. We wanted to change that and focus on the humanity.” By fall of 2021, Gentler Streak was ready for prime time.
Today’s version of the app follows the same strategy as Mihelič’s original prototype. Built largely in UIKit, its health data is smartly organized, the design is friendly and consistent, and features like its Monthly Summary view — which shows how you’re doing in relation to your history — focus less on comparison and more on progress, whatever that may mean. “If a 15-minute walk is what your body can do at that moment, that’s great,” Lotrič says. “That’s how we make people feel represented.”
The app’s social impact continues to grow. In the spring of 2024, Gentler Streak added support for Japanese, Korean, and traditional and simplified Chinese languages; previous updates added support for French, German, Italian, Spanish, and Brazilian Portuguese.
And those crucial features — fitness tracking, workout suggestions, metrics, and activity recaps — will remain available to everyone. “That goes with the Gentler Stories philosophy,” says Lotrič. “We’re bootstrapped, but at the same time we know that not everyone is in a position to support us. We still want to be a tool that helps people stay healthy not just for the first two weeks of the year or the summer, but all year long.”
Meet the 2024 Apple Design Award winners
Alternative payment options are now supported in visionOS 1.2 for apps distributed on the App Store in the EU.
The changes for apps in the European Union (EU), currently available to iOS users in the 27 EU member countries, can now be tested in iPadOS 18 beta 2 with Xcode 16 beta 2.
Also, the Web Browser Engine Entitlement Addendum for Apps in the EU and Embedded Browser Engine Entitlement Addendum for Apps in the EU now include iPadOS. If you’ve already entered into either of these addendums, be sure to sign the updated terms.
Learn more about the recent changes:
Apple Vision Pro will launch in China mainland, Hong Kong, Japan, and Singapore on June 28 and in Australia, Canada, France, Germany, and the United Kingdom on July 12. Your apps and games will be automatically available on the App Store in regions you’ve selected in App Store Connect.
If you’d like, you can:
You can also learn how to build native apps to fully take advantage of exciting visionOS features.
Apple is committed to making sure that the App Store is a safe place for everyone — especially kids. Within the next few months, you’ll need to indicate in App Store Connect if your app includes loot boxes available for purchase. In addition, a regional age rating based on local laws will automatically appear on the product page of the apps listed below on the App Store in Australia and South Korea. No other action is needed. Regional age ratings appear in addition to Apple global age ratings.
Australia
A regional age rating is shown if Games is selected as the primary or secondary category in App Store Connect.
South Korea
A regional age rating is shown if either Games or Entertainment is selected as the primary or secondary category in App Store Connect, or if the app has Frequent/Intense instances of Simulated Gambling in any category.
Thank you to everyone who joined us for an amazing week. We hope you found value, connection, and fun. You can continue to:
We’d love to know what you thought of this year’s conference. If you’d like to tell us about your experience, please complete the WWDC24 survey.
Browse the biggest moments from an incredible week of sessions.
Machine Learning & AI
Explore machine learning on Apple platforms Watch now Bring expression to your app with Genmoji Watch now Get started with Writing Tools Watch now Bring your app to Siri Watch now Design App Intents for system experiences Watch now
Swift
What’s new in Swift Watch now Meet Swift Testing Watch now Migrate your app to Swift 6 Watch now Go small with Embedded Swift Watch now
SwiftUI & UI Frameworks
What’s new in SwiftUI Watch now SwiftUI essentials Watch now Enhance your UI animations and transitions Watch now Evolve your document launch experience Watch now Squeeze the most out of Apple Pencil Watch now
Developer Tools
What’s new in Xcode 16 Watch now Extend your Xcode Cloud workflows Watch now
Spatial Computing
Design great visionOS apps Watch now Design interactive experiences for visionOS Watch now Explore game input in visionOS Watch now Bring your iOS or iPadOS game to visionOS Watch now Create custom hover effects in visionOS Watch now Work with windows in SwiftUI Watch now Dive deep into volumes and immersive spaces Watch now Customize spatial Persona templates in SharePlay Watch now
Design
Design great visionOS apps Watch now Design interactive experiences for visionOS Watch now Design App Intents for system experiences Watch now Design Live Activities for Apple Watch Watch now Say hello to the next generation of CarPlay design system Watch now Add personality to your app through UX writing Watch now
Graphics & Games
Port advanced games to Apple platforms Watch now Design advanced games for Apple platforms Watch now Bring your iOS or iPadOS game to visionOS Watch now Meet TabletopKit for visionOS Watch now
App Store Distribution and Marketing
What’s new in StoreKit and In-App Purchase Watch now What’s new in App Store Connect Watch now Implement App Store Offers Watch now
Privacy & Security
Streamline sign-in with passkey upgrades and credential managers Watch now What’s new in privacy Watch now
App and System Services
Meet the Contact Access Button Watch now Use CloudKit Console to monitor and optimize database activity Watch now Extend your app’s controls across the system Watch now
Safari & Web
Optimize for the spatial web Watch now Build immersive web experiences with WebXR Watch now
Accessibility & Inclusion
Catch up on accessibility in SwiftUI Watch now Get started with Dynamic Type Watch now Build multilingual-ready apps Watch now
Photos & Camera
Build a great Lock Screen camera capture experience Watch now Build compelling spatial photo and video experiences Watch now Keep colors consistent across captures Watch now Use HDR for dynamic image experiences in your app Watch now
Audio & Video
Enhance the immersion of media viewing in custom environments Watch now Explore multiview video playback in visionOS Watch now Build compelling spatial photo and video experiences Watch now
Business & Education
Introducing enterprise APIs for visionOS Watch now What’s new in device management Watch now
Health & Fitness
Explore wellbeing APIs in HealthKit Watch now Build custom swimming workouts with WorkoutKit Watch now Get started with HealthKit in visionOS Watch now
Explore the highlights.
WWDC24 highlights View now
Catch WWDC24 recaps around the world
Join us for special in-person activities at Apple locations worldwide this summer.
Explore apps and games from the Keynote
Check out all the incredible featured titles.
How’d we do?
We’d love to know your thoughts about this year’s conference.
Today’s WWDC24 playlist: Power Up
Get ready for one last day.
And that’s a wrap!
Thanks for being part of another incredible WWDC. It’s been a fantastic week of celebrating, connecting, and exploring, and we appreciate the opportunity to share it all with you.
Find out what’s new across Apple platforms.
Design great visionOS apps Watch now Bring your iOS or iPadOS game to visionOS Watch now Design App Intents for system experiences Watch now Explore all platforms sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
WWDC24 iOS & iPadOS guide View now WWDC24 Games guide View now WWDC24 visionOS guide View now WWDC24 watchOS guide View now
Today’s WWDC24 playlist: Coffee Shop
Comfy acoustic sounds for quieter moments.
One more to go
What a week! But we’re not done yet — we’ll be back tomorrow for a big Friday. #WWDC24
Explore new Swift and SwiftUI sessions.
What’s new in Swift Watch now What’s new in SwiftUI Watch now Meet Swift Testing Watch now Explore all Swift sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
WWDC24 Swift guide View now WWDC24 Developer Tools guide View now WWDC24 SwiftUI & UI Frameworks guide View now
Go further with Swift
Connect with Apple experts and the worldwide developer community.
Cutting-edge sounds from the global frontiers of jazz.
More to come
Thanks for being a part of #WWDC24. We’ll be back tomorrow with even more.
Explore everything announced at WWDC24 >
Introducing Apple Intelligence
Get smarter.
Explore machine learning on Apple platforms Watch now Get started with Writing Tools Watch now Bring your app to Siri Watch now Explore all Machine Learning and AI sessions
Guides
Sessions, labs, documentation, and sample code — all in one place.
WWDC24 Machine Learning & AI guide View now WWDC24 Design guide View now
Go further with Apple Intelligence
Summer sounds to change your latitude.
More tomorrow
Thanks for being a part of this incredible week. We’ll catch you tomorrow for another big day of technology and creativity. #WWDC24
Discover the latest advancements across Apple platforms, including the all-new Apple Intelligence, that can help you create even more powerful, intuitive, and unique experiences.
To start exploring and building with the latest features, download beta versions of Xcode 16, iOS 18, iPadOS 18, macOS 15, tvOS 18, visionOS 2, and watchOS 11.
Browse new and updated documentation and sample code to learn about the latest technologies, frameworks, and APIs introduced at WWDC24.
Discover how this year’s design announcements can help make your app shine on Apple platforms.
Whether you’re refining your design, building for visionOS, or starting from scratch, this year’s design sessions can take your app to the next level on Apple platforms. Find out what makes a great visionOS app, and learn how to design interactive experiences for the spatial canvas. Dive into creating advanced games for Apple devices, explore the latest SF Symbols, learn how to add personality to your app through writing, and much more.
Get the highlights
Download the design one-sheet.
Download
VIDEOS
Explore the latest video sessions
Design great visionOS apps Watch now Design advanced games for Apple platforms Watch now Create custom environments for your immersive apps in visionOS Watch now Explore game input in visionOS Watch now Design Live Activities for Apple Watch Watch now What’s new in SF Symbols 6 Watch now Design interactive experiences for visionOS Watch now Design App Intents for system experiences Watch now Build multilingual-ready apps Watch now Add personality to your app through UX writing Watch now Get started with Dynamic Type Watch now Create custom visual effects with SwiftUI Watch now
FORUMS
Find answers and get advice
Ask questions and get advice about design topics on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of developer activities all over the world during and after WWDC.
RESOURCES
Explore the latest resources
Your guide to everything new in Swift, related tools, and supporting frameworks.
From expanded support across platforms and community resources, to an optional language mode with an emphasis on data-race safety, this year’s Swift updates meet you where you are. Explore this year’s video sessions to discover everything that’s new in Swift 6, find tools that support migrating to the new language mode at your own pace, learn about new frameworks that support developing with Swift, and much more.
Get the highlights
Download the Swift one-sheet.
Download
VIDEOS
Explore the latest video sessions
What’s new in Swift Watch now What’s new in SwiftData Watch now Migrate your app to Swift 6 Watch now Go small with Embedded Swift Watch now A Swift Tour: Explore Swift’s features and design Watch now Create a custom data store with SwiftData Watch now Explore the Swift on Server ecosystem Watch now Explore Swift performance Watch now Consume noncopyable types in Swift Watch now Track model changes with SwiftData history Watch now
FORUMS
Find answers and get advice
Find support from Apple experts and the developer community on the Apple Developer Forums, and check out the Swift Forums on swift.org.
Explore Swift on the Apple Developer Forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into Apple Developer documentation
Design and build your apps like never before.
With enhancements to live previews in Xcode, new customization options for animations and styling, and updates to interoperability with UIKit and AppKit views, SwiftUI is the best way to build apps for Apple platforms. Dive into the latest sessions to discover everything new in SwiftUI, UIKit, AppKit, and more. Make your app stand out with more options for custom visual effects and enhanced animations. And explore sessions that cover the essentials of building apps with SwiftUI.
Get the highlights
Download the SwiftUI one-sheet.
Download
VIDEOS
Explore the latest video sessions
What’s new in SwiftUI Watch now What’s new in AppKit Watch now What’s new in UIKit Watch now SwiftUI essentials Watch now What’s new in watchOS 11 Watch now Swift Charts: Vectorized and function plots Watch now Elevate your tab and sidebar experience in iPadOS Watch now Bring expression to your app with Genmoji Watch now Squeeze the most out of Apple Pencil Watch now Catch up on accessibility in SwiftUI Watch now Migrate your TVML app to SwiftUI Watch now Get started with Writing Tools Watch now Dive deep into volumes and immersive spaces Watch now Work with windows in SwiftUI Watch now Enhance your UI animations and transitions Watch now Evolve your document launch experience Watch now Build multilingual-ready apps Watch now Create custom hover effects in visionOS Watch now Tailor macOS windows with SwiftUI Watch now Demystify SwiftUI containers Watch now Support semantic search with Core Spotlight Watch now Create custom visual effects with SwiftUI Watch now
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about SwiftUI & UI frameworks
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
Keynote
The exciting reveal of the latest Apple software and technologies. 10 a.m. PT.
Keynote Watch now
Platforms State of the Union
The newest advancements on Apple platforms. 1 p.m. PT.
Platforms State of the Union Watch now
Where to watch
The full lineup of sessions arrives after the Keynote. And you can start exploring the first batch right after the Platforms State of the Union.
What to do at WWDC24
The Keynote is only the beginning. Explore the first day of activities.
The Apple Design Awards recognize unique achievements in app and game design — and provide a moment to step back and celebrate the innovations of the Apple developer community.
More to come
Thanks for reading and get some rest! We’ll be back tomorrow for a very busy Day 2. #WWDC24
Explore a wave of updates to developer tools that make building apps and games easier and more efficient than ever.
Watch the latest video sessions to explore a redesigned code completion experience in Xcode 16, and say hello to Swift Assist — a companion for all your coding tasks. Level up your code with the help of Swift Testing, the new, easy-to-learn framework that leverages Swift features to help enhance your testing experience. Dive deep into debugging, updates to Xcode Cloud, and more.
Get the highlights
Download the developer tools one-sheet.
Download
VIDEOS
Explore the latest video sessions
Meet Swift Testing Watch now What’s new in Xcode 16 Watch now Go further with Swift Testing Watch now Xcode essentials Watch now Run, Break, Inspect: Explore effective debugging in LLDB Watch now Break into the RealityKit debugger Watch now Demystify explicitly built modules Watch now Extend your Xcode Cloud workflows Watch now Analyze heap memory Watch now
FORUMS
Find answers and get advice
Find support from Apple experts and the developer community on the Apple Developer Forums.
Explore developer tools on the forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
Expand your tool belt with new and updated articles and documentation.
Your guide to all the new features and tools for building apps for iPhone and iPad.
Learn how to create more customized and intelligent apps that appear in more places across the system with the latest Apple technologies. And with Apple Intelligence, you can bring personal intelligence into your apps to deliver new capabilities — all with great performance and built-in privacy. Explore new video sessions about controls, Live Activities, App Intents, and more.
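As a quick taste of the App Intents piece mentioned above, here’s a minimal sketch of exposing a single app action to Siri, Shortcuts, and Apple Intelligence. The intent, its title, and its behavior are hypothetical.

```swift
import AppIntents

// A minimal App Intent: the system can surface this action in Shortcuts,
// Spotlight, and Siri. The intent and its behavior are hypothetical.
struct OpenFavoritesIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Favorites"
    static var description = IntentDescription("Opens the favorites list.")

    // Bring the app to the foreground when the intent runs.
    static var openAppWhenRun = true

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the favorites screen here (app-specific).
        return .result()
    }
}
```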
Get the highlights
Download the iOS & iPadOS one-sheet.
Download
VIDEOS
Explore the latest video sessions
Bring your app to Siri Watch now Discover RealityKit APIs for iOS, macOS, and visionOS Watch now Explore machine learning on Apple platforms Watch now Elevate your tab and sidebar experience in iPadOS Watch now Extend your app’s controls across the system Watch now Streamline sign-in with passkey upgrades and credential managers Watch now What’s new in App Intents Watch now Squeeze the most out of Apple Pencil Watch now Meet FinanceKit Watch now Bring your iOS or iPadOS game to visionOS Watch now Build a great Lock Screen camera capture experience Watch now Design App Intents for system experiences Watch now Bring your app’s core features to users with App Intents Watch now Broadcast updates to your Live Activities Watch now Unlock the power of places with MapKit Watch now Implement App Store Offers Watch now What’s new in Wallet and Apple Pay Watch now Meet the Contact Access Button Watch now What’s new in device management Watch now
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
Dive into documentation
Bring personal intelligence to your apps.
Apple Intelligence brings powerful, intuitive, and integrated personal intelligence to Apple platforms — designed with privacy from the ground up. And enhancements to our machine learning frameworks let you run and train your machine learning and artificial intelligence models on Apple devices like never before.
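As a small illustration of the on-device side, here’s a generic Core ML inference sketch. The model file name and input features are hypothetical; it assumes a compiled model is bundled with the app.

```swift
import CoreML

// Generic on-device inference: load a bundled, compiled Core ML model and
// run a prediction. "Classifier.mlmodelc" and the features are hypothetical.
func classify(features: MLDictionaryFeatureProvider) throws -> MLFeatureProvider {
    guard let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc") else {
        fatalError("Bundle the compiled model with the app.")
    }
    let config = MLModelConfiguration()
    config.computeUnits = .all // allow CPU, GPU, and Neural Engine
    let model = try MLModel(contentsOf: url, configuration: config)
    return try model.prediction(from: features)
}
```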
Get the highlights
Download the Machine Learning & AI one-sheet.
Download
VIDEOS
Explore the latest video sessions
Get the most out of Apple Intelligence by diving into sessions that cover updates to Siri integration and App Intents, and how to support Writing Tools and Genmoji in your app. And learn how to bring machine learning and AI directly into your apps using our machine learning frameworks.
Explore machine learning on Apple platforms Watch now Bring your app to Siri Watch now Bring your app’s core features to users with App Intents Watch now Bring your machine learning and AI models to Apple silicon Watch now Get started with Writing Tools Watch now Deploy machine learning and AI models on-device with Core ML Watch now Support real-time ML inference on the CPU Watch now Bring expression to your app with Genmoji Watch now What’s new in App Intents Watch now What’s new in Create ML Watch now Design App Intents for system experiences Watch now Discover Swift enhancements in the Vision framework Watch now Meet the Translation API Watch now Accelerate machine learning with Metal Watch now Train your machine learning and AI models on Apple GPUs Watch now
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
Dive into Machine learning and AI on the forums
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
UITextView for UIKit and NSTextView for AppKit. NSAdaptiveImageGlyph in UIKit and AppKit.
Create the next generation of games for millions of players worldwide.
Learn how to create cutting-edge gaming experiences across a unified gaming platform built with tightly integrated graphics software and a scalable hardware architecture. Explore new video sessions about gaming in visionOS, game input, the Game Porting Toolkit 2, and more.
Get the highlights
Download the games one-sheet.
Download
VIDEOS
Explore the latest video sessions
Render Metal with passthrough in visionOS Watch now Meet TabletopKit for visionOS Watch now Port advanced games to Apple platforms Watch now Design advanced games for Apple platforms Watch now Explore game input in visionOS Watch now Bring your iOS or iPadOS game to visionOS Watch now Accelerate machine learning with Metal Watch now
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
Dive into documentation
Your guide to all the new features and tools for building apps for Apple Watch.
Learn how to take advantage of the increased intelligence and capabilities of the Smart Stack. Explore new video sessions about relevancy cues, interactivity, Live Activities, and double tap.
Get the highlights
Download the watchOS one-sheet.
Download
VIDEOS
Explore the latest video sessions
What’s new in watchOS 11 Watch now Bring your Live Activity to Apple Watch Watch now What’s new in SwiftUI Watch now SwiftUI essentials Watch now Design Live Activities for Apple Watch Watch now Catch up on accessibility in SwiftUI Watch now Build custom swimming workouts with WorkoutKit Watch now Demystify SwiftUI containers Watch now
FORUMS
Find answers and get advice
Connect with Apple experts and other developers on the Apple Developer Forums.
View discussions about watchOS
COMMUNITY
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Dive into documentation
WWDC24 is here! Here’s how to make the most of your week:
The infinite canvas is waiting for you.
In this year’s sessions, you’ll get an overview of great visionOS app design, explore object tracking, and discover new RealityKit APIs. You’ll also find out how to build compelling spatial photo and video experiences, explore enterprise APIs for visionOS, find out how to render Metal with passthrough, and much more.
Get the highlights
Download the visionOS one-sheet.
DownloadVIDEOS
Explore the latest video sessions Design great visionOS apps Watch now Explore object tracking for visionOS Watch now Compose interactive 3D content in Reality Composer Pro Watch now Discover RealityKit APIs for iOS, macOS, and visionOS Watch now Create enhanced spatial computing experiences with ARKit Watch now Enhance your spatial computing app with RealityKit audio Watch now Build compelling spatial photo and video experiences Watch now Meet TabletopKit for visionOS Watch now Render Metal with passthrough in visionOS Watch now Explore multiview video playback in visionOS Watch now Introducing enterprise APIs for visionOS Watch now Dive deep into volumes and immersive spaces Watch now Build a spatial drawing app with RealityKit Watch now Optimize for the spatial web Watch now Explore game input in visionOS Watch now Create custom environments for your immersive apps in visionOS Watch now Enhance the immersion of media viewing in custom environments Watch now Design interactive experiences for visionOS Watch now Create custom hover effects in visionOS Watch now Optimize your 3D assets for spatial computing Watch now Discover area mode for Object Capture Watch now Bring your iOS or iPadOS game to visionOS Watch now Build immersive web experiences with WebXR Watch now Get started with HealthKit in visionOS Watch now What’s new in Quick Look for visionOS Watch now What’s new in USD and MaterialX Watch now Customize spatial Persona templates in SharePlay Watch now Create enhanced spatial computing experiences with ARKit Watch now Break into the RealityKit debugger Watch now What’s new in SwiftUI Watch nowFORUMS
Find answers and get adviceConnect with Apple experts and other developers on the Apple Developer Forums.
View discussions about visionOS
COMMUNITY
Meet the communityExplore a selection of activities hosted by developer organizations during and after WWDC.
RESOURCES
Get a head start with sample code
The App Review Guidelines, Apple Developer Program License Agreement, and Apple Developer Agreement have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms as needed.
App Review Guidelines
Please sign in to your account to review and accept the updated terms.
View all agreements and guidelines
Translations of the terms will be available on the Apple Developer website within one month.
With WWDC24 just days away, there’s a lot of ground to cover, so let’s get right to it.
WWDC24
Introducing the 2024 Apple Design Award winners
Innovation. Ingenuity. Inspiration.
WWDC24: Everything you need to know
From the Keynote to the last session drop, here are the details for an incredible week of sessions, labs, community activities, and more.
Download the Apple Developer app >
Subscribe to Apple Developer on YouTube >
Watch the Keynote
Don’t miss the exciting reveal of the latest Apple software and technologies at 10 a.m. PT on Monday, June 10.
Watch the Platforms State of the Union
Here’s your deep dive into the newest advancements on Apple platforms. Join us at 1 p.m. PT on Monday, June 10.
Get ready for sessions
Learn something new in video sessions posted to the Apple Developer app, website, and YouTube channel. The full schedule drops after the Keynote on Monday, June 10.
Prepare for labs
Here’s everything you need to know to get ready for online labs.
Find answers on the forums
Discuss the conference’s biggest moments on the Apple Developer Forums.
Get the most out of the forums >
Meet the community
Explore a selection of activities hosted by developer organizations during and after WWDC.
Explore community activities >
Say hello to the first WWDC24 playlist
The official WWDC24 playlists drop right after the Keynote. Until then, here’s a teaser playlist to get you excited for the week.
Coming up: One incredible week
Have a great weekend, and we’ll catch you on Monday. #WWDC24
WWDC24
Tune in at 10 a.m. PT on June 10 to catch the exciting reveal of the latest Apple software and technologies.
Keynote
Keynote (ASL)
WWDC24
Tune in at 1 p.m. PT on June 10 to dive deep into the newest advancements on Apple platforms.
Platforms State of the Union
Platforms State of the Union (ASL)
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help make sure prices for apps and In-App Purchases stay consistent across all storefronts.
Price updates
On June 21, pricing for apps and In-App Purchases¹ will be updated for the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, and Zambia storefronts if you haven’t selected one of these as the base for your app or In‑App Purchase.¹ These updates also consider the following value‑added tax (VAT) changes:
Prices won’t change on the Egypt, Ivory Coast, Nepal, Nigeria, Suriname, or Zambia storefront if you’ve selected that storefront as the base for your app or In-App Purchase.¹ Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your In‑App Purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, In‑App Purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an In-App Purchase
Tax updates
Your proceeds for sales of apps and In-App Purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Ivory Coast, Nepal, Suriname, and Zambia.
As of today, June 6, your proceeds from the sale of eligible apps and In‑App Purchases have been modified in the following countries to reflect introductions of or changes in tax rates.
The Fitness and Health category has a new attribute: “Content is primarily accessed through streaming”. If this is relevant to your apps or In-App Purchases that offer fitness video streaming, review and update your selections in the Pricing and Availability section of Apps in App Store Connect.
Learn about setting tax categories
1: Excludes auto-renewable subscriptions.
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
The incredible developers behind this year’s finalists have shown what can be possible on Apple platforms — and helped lay the foundation for what’s to come.
We’re thrilled to present the winners of the 2024 Apple Design Awards.
One week to go. Don’t miss the exciting reveal of the latest Apple software and technologies.
Keynote kicks off at 10 a.m. PT on June 10.
Join us for the Platforms State of the Union at 1 p.m. PT on June 10.
Every year, the Apple Design Awards recognize innovation, ingenuity, and technical achievement in app and game design.
But they’ve also become something more: A moment to step back and celebrate the Apple developer community in all its many forms.
Join the worldwide developer community for an incredible week of technology and creativity — all online and free. WWDC24 takes place from June 10-14.
The Apple Developer Forums have been redesigned for WWDC24 to help developers connect with Apple experts, engineers, and each other to find answers and get advice.
Apple Developer Relations and Apple engineering are joining forces to field your questions and work to solve your technical issues. You’ll have access to an expanded knowledge base and enjoy quick response times — so you can get back to creating and enhancing your app or game. Plus, Apple Developer Program members now have priority access to expert advice on the forums.
It won’t be long now! WWDC24 takes place online from June 10 through 14, and we’re here to help you get ready for the biggest developer event of the year. In this edition:
WWDC24
Introducing Pathways
If you’re new to developing for Apple platforms, we’ve got an exciting announcement. Pathways are simple and easy-to-navigate collections of the videos, documentation, and resources you’ll need to start building great apps and games. Because Pathways are self-directed and can be followed at your own pace, they’re the perfect place to begin your journey.
Explore Pathways for Swift, SwiftUI, design, games, visionOS, App Store distribution, and getting started as an Apple developer.
Meet three Distinguished Winners of the Swift Student Challenge
Elena Galluzzo, Dezmond Blair, and Jawaher Shaman all drew inspiration from their families to create their winning app playgrounds. Now, they share the hope that their apps can make an impact on others as well.
Meet Elena, Dezmond, and Jawaher >
MEET WITH APPLE EXPERTS
Check out the latest worldwide developer activities
Browse the full schedule of activities >
NEWS
Explore Apple Pencil Pro
Bring even richer and more immersive interactions to your iPad app with new features, like squeeze gestures, haptic feedback, and barrel-roll angle tracking.
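As a rough idea of what adopting the squeeze gesture looks like, here is a minimal UIKit sketch. It assumes iOS 17.5 or later, and the view controller and tool-palette helper are hypothetical names, not part of any Apple sample.

```swift
import UIKit

// Minimal sketch of responding to an Apple Pencil Pro squeeze.
// Assumes iOS 17.5+; CanvasViewController and showToolPalette() are hypothetical.
class CanvasViewController: UIViewController, UIPencilInteractionDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Attach a pencil interaction so this view receives Pencil events.
        let pencilInteraction = UIPencilInteraction()
        pencilInteraction.delegate = self
        view.addInteraction(pencilInteraction)
    }

    // Called as the person squeezes Apple Pencil Pro.
    func pencilInteraction(_ interaction: UIPencilInteraction,
                           didReceiveSqueeze squeeze: UIPencilInteraction.Squeeze) {
        // Act when the squeeze completes, e.g. by revealing a tool palette.
        if squeeze.phase == .ended {
            showToolPalette()
        }
    }

    private func showToolPalette() {
        // Present your tool UI here.
    }
}
```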
BEHIND THE DESIGN
The rise of Tide Guide
Here’s the swell story of how fishing with his grandfather got Tucker MacDonald hooked on creating his tide-predicting app.
‘I taught myself’: Tucker MacDonald and the rise of Tide Guide
GROW YOUR BUSINESS
Explore simple, safe transactions with In-App Purchase
Take advantage of powerful global pricing tools, promotional features, analytics only available from Apple, built-in customer support, and fraud detection.
Q&A
Get shared insights from the SharePlay team
Learn about shared experiences, spatial Personas, that magic “shockwave” effect, and more.
Q&A with the SharePlay team
DOCUMENTATION
Browse new and updated docs
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
SharePlay is all about creating meaningful shared experiences in your app. By taking advantage of SharePlay, your app can provide a real-time connection that synchronizes everything from media playback to 3D models to collaborative tools across iPhone, iPad, Mac, Apple TV, and Apple Vision Pro. We caught up with the SharePlay team to ask about creating great SharePlay experiences, spatial Personas, that magic “shockwave” effect, and more.
How does a person start a SharePlay experience?
Anyone can begin a group activity by starting a FaceTime call and then launching a SharePlay-supported app. When they do, a notification about the group activity will appear on all participants’ screens. From there, participants can join — and come and go — as they like. You can also start a group activity from your app, from the share sheet, or by adding a SharePlay button to your app.
How can I use SharePlay to keep media playback in sync?
SharePlay supports coordinated media playback using AVKit. You can use the system coordinator to synchronize your own player across multiple participants. If you have an ad-supported app, you can synchronize both playback and ad breaks. SharePlay also provides the GroupSessionMessenger API, which lets participants communicate in near-real time.
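As a minimal sketch of how those pieces fit together, the following uses the Group Activities and AVKit APIs mentioned above; the WatchTogether activity type is made up for illustration.

```swift
import AVKit
import GroupActivities

// Hypothetical activity describing what the group does together.
struct WatchTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Watch Together"
        metadata.type = .watchTogether
        return metadata
    }
}

// When a group session arrives, hand it to the player's playback
// coordinator so play, pause, and seek stay in sync for everyone.
func configure(session: GroupSession<WatchTogether>, player: AVPlayer) {
    player.playbackCoordinator.coordinateWithSession(session)
    // For non-media state, GroupSessionMessenger(session:) lets
    // participants exchange small messages in near-real time.
    session.join()
}
```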
What’s the difference between SharePlay and Shared with You? Can they work together?
SharePlay allows people to share rich experiences with each other. Shared with You helps make app content that people are sharing in Messages available to your app. For example, if a group chat is discussing a funny meme video from your app, adopting Shared with You would allow your app to highlight that content in the app. And if your app supports SharePlay, you can surface that relevant content as an option for watching together.
Separately, Shared with You offers ways to initiate collaboration on shared, persisted content (such as documents) over Messages and FaceTime. You can choose to support SharePlay on that collaborative content, but if you do, consider the ephemerality of a SharePlay experience compared to the persistence of collaboration. For example, if your document is a presentation, you may wish to leverage Shared with You to get editors into the space while using SharePlay to launch an interactive presentation mode that just isn’t possible with screen sharing alone.
What’s the easiest way for people to share content?
When your app lets the system know that your current view has shareable content on screen, people who bring their devices together can seamlessly share that content — much like NameDrop, which presents a brief “shockwave” animation when they do. This method supports the discrete actions of sharing documents, initiating SharePlay, and starting a collaboration. It can also connect your content to the system share sheet and help you expose shareable content to the Share menu in visionOS.
Can someone on iPhone join a SharePlay session with someone on Apple Vision Pro?
Yes! SharePlay is supported across iOS, iPadOS, macOS, tvOS, and visionOS. That means people can watch a show together on Apple TV+ and keep their playback synchronized across all platforms. To support a similar playback situation in your app, watch Coordinate media playback in Safari with Group Activities. If you’re looking to maintain your app’s visual consistency across platforms, check out the Group Session Messenger and DrawTogether sample project. Remember: SharePlay keeps things synchronized, but your UI is up to you.
How do I get started adopting spatial Personas with SharePlay in visionOS?
When you add Group Activities to your app, people can share in that activity over FaceTime while appearing windowed — essentially the same SharePlay experience they’d see on other platforms. In visionOS, you also have the ability to create a shared spatial experience using spatial Personas, in which participants are placed according to a template. With spatial Personas, the environment is kept consistent and participants can see each other’s facial expressions in real time.
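As a sketch of what opting into a template can look like, the following assumes the visionOS SystemCoordinator API from Group Activities, and reuses the hypothetical WatchTogether activity from the earlier sketch.

```swift
import GroupActivities

// Sketch: prefer a side-by-side arrangement of spatial Personas
// for a group session on visionOS.
func joinSpatialSession(_ session: GroupSession<WatchTogether>) async {
    // The system coordinator is only available on visionOS.
    guard let systemCoordinator = await session.systemCoordinator else { return }

    var configuration = SystemCoordinator.Configuration()
    configuration.spatialTemplatePreference = .sideBySide
    systemCoordinator.configuration = configuration

    session.join()
}
```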
How do I maintain visual and spatial consistency with all participants in visionOS?
FaceTime in visionOS provides a shared spatial context by placing spatial Personas in a consistent way around your app. This is what we refer to as “visual consistency.” You can use SharePlay to maintain the same content in your app for all participants.
Can both a window and a volume be shared at the same time in a SharePlay session?
No. Only one window or volume can be associated with a SharePlay session, but you can help the system choose the proper window or volume.
How many people can participate in a group activity?
SharePlay supports 33 total participants, including yourself. Group activities on visionOS involving spatial Personas support five participants at a time.
Do iOS and iPadOS apps that are compatible with visionOS also support SharePlay in visionOS?
Yes. During a FaceTime call, your app will appear in a window, and participants in the FaceTime call will appear next to it.
Learn more about SharePlay
Design spatial SharePlay experiences
Build spatial SharePlay experiences
Share files with SharePlay
Add SharePlay to your app
Lots of apps have great origin stories, but the tale of Tucker MacDonald and Tide Guide seems tailor-made for the Hollywood treatment. It begins in the dawn hours on Cape Cod, where a school-age MacDonald first learned to fish with his grandfather.
“Every day, he’d look in the paper for the tide tables,” says MacDonald. “Then he’d call me up and say, ‘Alright Tucker, we’ve got a good tide and good weather. Let’s be at the dock by 5:30 a.m.’”
That was MacDonald’s first introduction to tides — and the spark behind Tide Guide, which delivers comprehensive forecasts through top-notch data visualizations, an impressive array of widgets, an expanded iPad layout, and Live Activities that look especially great in, appropriately enough, the Dynamic Island. The SwiftUI-built app also offers beautiful Apple Watch complications and a UI that can be easily customized, depending on how deep you want to dive into its data. It’s a remarkable blend of original design and framework standards, perfect for plotting optimal times for a boat launch, research project, or picnic on the beach.
Impressively, Tide Guide was named a 2023 Apple Design Award finalist — no mean feat for a solo developer who had zero previous app-building experience and started his career as a freelance filmmaker.
“I wanted to be a Hollywood director since I was in the fifth grade,” says MacDonald. Early in his filmmaking career, MacDonald found himself in need of a tool that could help him pre-visualize different camera and lens combinations — “like a director’s viewfinder app,” he says. And while he caught a few decent options on the market, MacDonald wanted an app with iOS design language that felt more at home on his iPhone. “So I dove in, watched videos, and taught myself how to make it,” he says.
My primary use cases were going fishing, heading to the beach, or trying to catch a sunset.
Tucker MacDonald, Tide Guide
Before too long, MacDonald drifted away from filmmaking and into development, taking a job as a UI designer for a social app. “The app ended up failing, but the job taught me how a designer works with an engineer,” he says. “I also learned a lot about design best practices, because I had been creating apps that used crazy elements, non-standard navigation, stuff like that.”
Armed with growing design knowledge, he started thinking about those mornings with his grandfather, and how he might create something that could speed up the crucial process of finding optimal fishing conditions. And it didn’t need to be rocket science. “My primary use cases were going fishing, heading to the beach, or trying to catch a sunset,” he says. “I just needed to show current conditions.”
I’d say my designs were way prettier than the code I wrote.
Tucker MacDonald, Tide Guide
In the following years, Tide Guide grew in parallel with MacDonald’s self-taught skill set. “There was a lot of trial and error, and I’d say my designs were way prettier than the code I wrote,” he laughs. “But I learned both coding and design by reading documentation and asking questions in the developer community.”
Today’s Tide Guide is quite the upgrade from that initial version. MacDonald continues to target anyone heading to the ocean but includes powerful metrics — like an hour-by-hour 10-day forecast, water temperatures, and swell height — that advanced users can seek out as needed. The app’s palette is even designed to match the color of the sky throughout the day. “The more time you spend with it, the more you can dig into different layers,” he says.
People around the world have dug into those layers, including an Alaskan tour company operator who can only land in a remote area when the tide is right, and a nonprofit national rescue service in Scotland, whose members weighed in with a Siri shortcut-related workflow request that MacDonald promptly included. And as Tide Guide gets bigger, MacDonald’s knowledge of developing — and oceanography — continues to swell. “I’m just happy that my passion for crafting an incredible experience comes through,” he says, “because I really do have so much fun making it.”
Download Tide Guide from the App Store
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
The CTF is an element of the alternative business terms in the EU that reflects the value Apple provides developers through tools, technologies, and services that enable them to build and share innovative apps. We believe anyone with a good idea and the ingenuity to bring it to life should have the opportunity to offer their app to the world. Only developers who reach significant scale (more than one million first annual installs per year in the EU) pay the CTF. Nonprofit organizations, government entities, and educational institutions approved for a fee waiver don’t pay the CTF. Today, we’re introducing two additional conditions in which the CTF is not required:
This week, the European Commission designated iPadOS a gatekeeper platform under the Digital Markets Act. Apple will bring our recent iOS changes for apps in the European Union (EU) to iPadOS later this fall, as required. Developers can choose to adopt the Alternative Terms Addendum for Apps in the EU that will include these additional capabilities and options on iPadOS, or stay on Apple’s existing terms.
Once these changes are publicly available to users in the EU, the CTF will also apply to iPadOS apps downloaded through the App Store, Web Distribution, and/or alternative marketplaces. Users who install the same app on both iOS and iPadOS within a 12-month period will only generate one first annual install for that app. To help developers estimate any potential impact on their app businesses under the Alternative Terms Addendum for Apps in the EU, we’ve updated the App Install reports in App Store Connect that can be used with our fee calculator.
For more details, visit Understanding the Core Technology Fee for iOS apps in the European Union. If you’ve already entered into the Alternative Terms Addendum for Apps in the EU, be sure to sign the updated terms.
Global business revenue takes into account revenue across all commercial activity, including from associated corporate entities. For additional details, read the Alternative Terms Addendum for Apps in the EU.
The App Store was created to be a safe place for users to discover and get millions of apps all around the world. Over the years, we’ve built many critical privacy and security features that help protect users and give them transparency and control — from Privacy Nutrition Labels to app tracking transparency, and so many more.
An essential requirement of maintaining user trust is that developers are responsible for all of the code in their apps, including code frameworks and libraries from other sources. That’s why we’ve created privacy manifests and signature requirements for the most popular third-party SDKs, as well as required reasons for covered APIs.
Starting May 1, 2024, new or updated apps that have a newly added third-party SDK that’s on the list of commonly used third-party SDKs will need all of the following to be submitted in App Store Connect:
Apps won’t be accepted if they fail to meet the manifest and signature requirements. Apps also won’t be accepted if all of the following apply:
In the future, these required reason requirements will expand to include the entire app binary. If you’re not using an API for an approved reason, please find an alternative. These changes are designed to help you better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
This is a step forward for all apps and we encourage all SDKs to adopt this functionality to better support the apps that depend on them.
Apple Search Ads helps you drive discovery of your app or game on the App Store. We caught up with the Apple Search Ads team to learn more about successfully using the service, including signing up for the free online Apple Search Ads Certification course.
How might my app or game benefit from promotion on the App Store?
With Apple Search Ads, developers are seeing an increase in downloads, retention, return on ad spend, and more. Find out how the developers behind The Chefz, Tiket, and Petit BamBou have put the service into practice.
Where will my ad appear?
You can reach people in the following places:
Online Apple Search Ads Certification training teaches proven best practices for driving stronger campaign performance. Certification training is designed for all skill levels, from marketing pros to those just starting out. To become certified, complete all of the Certification lessons (each takes between 10 and 20 minutes), then test your skills with a free exam. Once you’re certified, you can share your certificate with your professional network on platforms like LinkedIn.
Sign up here with your Apple ID.
Will my certification expire?
Although your Apple Search Ads certification never expires, training is regularly updated. You can choose to be notified about these updates through email or web push notifications.
Can I highlight specific content or features in my ads?
You can use the custom product pages you create in App Store Connect to tailor your ads for a specific audience, feature launch, seasonal promotion, and more. For instance, you can create an ad for the Today tab that leads people to a specific custom product page or create ad variations for different search queries. Certification includes a lesson on how to do so.
Can I advertise my app before launch?
You can use Apple Search Ads to create ads for apps you’ve made available for pre-order. People can order your app before it’s released, and it’ll automatically download onto their devices on release day.
Drive discovery and downloads on the App Store with Apple Search Ads in 70 countries and regions, now including Brazil, Bolivia, Costa Rica, the Dominican Republic, El Salvador, Guatemala, Honduras, Panama, and Paraguay.
Visit the Apple Search Ads site and Q&A.
And explore best practices to improve your campaign performance with the free Apple Search Ads Certification course.
Watch the May 7 event at apple.com, on Apple TV, or on YouTube Live.
Join us around the world to learn about growing your business, elevating your app design, and preparing for the App Review process. Here’s a sample of our new activities — and you can always browse the full schedule to find more.
Web Distribution lets authorized developers distribute their iOS apps to users in the European Union (EU) directly from a website owned by the developer. Apple will provide developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, and back up and restore users’ apps, once they meet certain requirements designed to help protect users and platform integrity. For details, visit Getting started with Web Distribution in the EU.
The beta versions of iOS 17.5, iPadOS 17.5, macOS 14.5, tvOS 17.5, visionOS 1.2, and watchOS 10.5 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3.
The App Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. The following guidelines have been updated:
Welcome to Hello Developer — and the kickoff to WWDC season. In this edition:
WWDC24
The countdown is on
WWDC season is officially here.
This year’s Worldwide Developers Conference takes place online from June 10 through 14, offering you the chance to explore the new tools, frameworks, and technologies that’ll help you create your best apps and games yet.
All week long, you can learn and refine new skills through video sessions, meet with Apple experts to advance your projects and ideas, and join the developer community for fun activities. It’s an innovative week of technology and creativity — all online at no cost.
And for the first time, WWDC video sessions will be available on YouTube, in addition to the Apple Developer app and website. Visit the new Apple Developer channel to subscribe and catch up on select sessions.
TUTORIALS
Check out the new Develop in Swift Tutorials
Know a student or aspiring developer looking to start their coding journey? Visit the all-new Develop in Swift Tutorials, designed to introduce Swift, SwiftUI, and spatial computing through the experience of building a project in Xcode.
BEHIND THE DESIGN
Gage and Schlesinger at the crossroads
Learn how acclaimed game designers Zach Gage and Jack Schlesinger reimagined the crossword with Knotwords.
Knotwords: Gage and Schlesinger at the crossroads
MEET WITH APPLE EXPERTS
Browse new developer activities
Check out this month’s sessions, labs, and consultations, held online and in person around the world.
NEWS AND DOCUMENTATION
Explore and create with new and updated docs
View the complete list of new resources.
Subscribe to Hello Developer
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Knotwords is a clever twist on crossword puzzles — so much so that one would expect creators Zach Gage and Jack Schlesinger to be longtime crossword masters who set out to build themselves a new challenge.
One would be totally wrong.
“Crosswords never hit with me,” says Gage, with a laugh. “I dragged myself kicking and screaming into this one.”
It’s not about ‘What random box of words will you get?’ but, ‘What are the decisions you’ll make as a player?’
Jack Schlesinger, Knotwords
In fact, Gage and Schlesinger created the Apple Design Award finalist Knotwords — and the Apple Arcade version, Knotwords+ — not to revolutionize the humble crossword but to learn it. “We know people like crosswords,” says Schlesinger, “so we wanted to figure out what we were missing.” And the process didn’t just result in a new game — it led them straight to the secret of word-game design success. “It’s not about ‘What random box of words will you get?’” says Schlesinger, “but, ‘What are the decisions you’ll make as a player?’”
Gage and Schlesinger are longtime design partners; in addition to designing Knotwords and Good Sudoku with Gage, Schlesinger contributed to the 2020 reboot of SpellTower and the Apple Arcade title Card of Darkness. Neither came to game design through traditional avenues: Gage has a background in interactive art, while Schlesinger is the coding mastermind with a history in theater and, of all things, rock operas. (He’s responsible for the note-perfect soundtracks for many of the duo’s games.) And they’re as likely to talk about the philosophy behind a game as the development of it.
I had been under the mistaken impression that the magic of a simple game was in its simple rule set. The magic actually comes from having an amazing algorithmic puzzle constructor.
Zach Gage
“When you’re playing a crossword, you’re fully focused on the clues. You’re not focused on the grid at all,” explains Gage. “But when you’re building a crossword, you’re always thinking about the grid. I wondered if there was a way to ask players not to solve a crossword but recreate the grid instead,” he says.
Knotwords lets players use only specific letters in specific sections of the grid — a good idea, but one that initially proved elusive to refine and difficult to scale. “At first, the idea really wasn’t coming together,” says Gage, “so we took a break and built Good Sudoku.” Building their take on sudoku — another game with simple rules and extraordinary complexity — proved critical to restarting Knotwords. “I had been under the mistaken impression that the magic of a simple game was in its simple rule set,” Gage says. “The magic actually comes from having an amazing algorithmic puzzle constructor.”
Problematically, they didn’t have one of those just lying around. But they did have Schlesinger. “I said, ‘I will make you a generator for Knotwords in two hours,’” Schlesinger laughs. That was maybe a little ambitious. The first version took eight hours and was, by his own account, not great. However, it proved a valuable learning experience. “We learned that we needed to model a player. What would someone do here? What steps could they take? If they make a mistake, how long would it take them to correct it?” In short, the puzzle generation algorithm needed to take into account not just rules, but also player behavior.
The work provided the duo an answer for why people liked crosswords. It also did one better by addressing one of Gage’s longstanding game-design philosophies. “To me, the only thing that’s fun in a game is the process of getting better,” says Gage. “In every game I’ve made, the most important questions have been: What’s the journey that people are going through and how can we make that journey fun? And it turns out it’s easy to discover that if I’ve never played a game before.”
Find Knotwords+ on Apple Arcade
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Join the worldwide developer community online for a week of technology and creativity.
Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.
To align with the Digital Services Act (DSA) in the European Union (EU), Account Holders and Admins in the Apple Developer Program can now enter their trader status in App Store Connect.
Submission requirements
You’ll need to let us know whether or not you’re a trader to submit new apps to the App Store. If you’re a trader, you may be asked for documentation that verifies your trader contact information.
We’re providing more flexibility for developers who distribute apps in the European Union (EU), including introducing a new way to distribute apps directly from a developer’s website.
More flexibility
Developers who’ve agreed to the Alternative Terms Addendum for Apps in the EU have new options for their apps in the EU:
Web Distribution, available with a software update later this spring, will let authorized developers distribute their iOS apps to EU users directly from a website owned by the developer. Apple will provide authorized developers access to APIs that facilitate the distribution of their apps from the web, integrate with system functionality, back up and restore users’ apps, and more. For details, visit Getting ready for Web Distribution in the EU.
On its surface, Finding Hannah is a bright and playful hidden-object game — but dig a little deeper and you’ll find something much more.
The Hannah of Finding Hannah is a 38-year-old Berlin resident trying to navigate career, relationships (including with her best friend/ex, Emma), and the nagging feeling that something’s missing in her life. To help find answers, Hannah turns to her nurturing grandmother and free-spirited mother — whose own stories gradually come into focus and shape the game’s message as well.
“It’s really a story about three women from three generations looking for happiness,” says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. “For each one, times are changing. But the question is: Are they getting better?”
To move the story along, players comb through a series of richly drawn scenes — a packed club, a bustling train, a pleasantly cluttered bookstore. Locating (and merging) hidden items unlocks new chapters, and the more you find, the more the time-hopping story unfolds. The remarkable mix of message and mechanic made the game a 2023 Apple Design Award finalist, as well as a Cultural Impact winner in the 2023 App Store Awards.
Fein Games is the brainchild of Zeiner and Lea Schönfelder, longtime friends from the same small town in Germany who both pursued careers in game design — despite not being all that into video games growing up. “I mean, at some point I played The Sims as a teenager,” laughs Zeiner, “but games were rare for us. When I eventually went to study game design, I felt like I didn’t really fit in, because my game literacy was pretty limited.”
The goal is to create for people who enjoy authentic female experiences in games.
Lea Schönfelder, cofounder and co-CEO of Fein Games
Cofounder and co-CEO Schönfelder also says she felt like an outsider, but soon found game design a surprisingly organic match for her background in illustration and animation. “In my early years, I saw a lot of people doing unconventional things with games and thought, ‘Wow, this is really powerful.’ And I knew I loved telling stories, maybe not in a linear form but a more systematic way.” Those early years included time with studios like Nerial and ustwo Games, where she worked on Monument Valley 2 and Assemble With Care.
Drawing on their years of experience — and maybe that shared unconventional background — the pair went out on their own to launch Fein Games in 2020. From day one, the studio was driven by more than financial success. “The goal is to create for people who enjoy authentic female experiences in games,” says Schönfelder. “But the product is only one side of the coin — there’s also the process of how you create, and we’ve been able to make inclusive games that maybe bring different perspectives to the world.”
Finding Hannah was driven by those perspectives from day one. The story was always meant to be a time-hopping journey featuring women in Berlin, and though it isn’t autobiographical, bits and pieces do draw from their creators’ lives. “There’s a scene inspired by my grandmother, who was a nurse during the second world war and would tan with her friends on a hospital roof while the planes circled above,” says Schönfelder. The script was written by Berlin-based author Rebecca Harwick, who also served as lead writer on June’s Journey and writer on Switchcraft, The Elder Scrolls Online, and many others.
In the beginning, I felt like I wasn’t part of the group, and maybe even a little ashamed that I wasn’t as games-literate as my colleagues. But what I thought was a weakness was actually a strength.
Lea Schönfelder, cofounder and co-CEO of Fein Games
To design the art for the different eras, the team tried not to think like gamers. “The idea was to try to reach people who weren’t gamers yet, and we thought we’d most likely be able to do that if we found a style that hadn’t been seen in games before,” says Zeiner. To get there, they hired Elena Resko, a Russian-born artist based in Berlin who’d also never worked in games. “What you see is her style,” says Schönfelder. “She didn’t develop that for the game. I think that’s why it has such a deep level of polish, because Elena has been developing her style for probably a decade now.”
And the hidden-object and merge gameplay mechanic itself is an example of sticking with a proven success. “When creating games, you usually want to invent a new mechanic, right?” says Schönfelder. “But Finding Hannah is for a more casual audience. And it’s been proven that the hidden-object mechanic works. So we eventually said, ‘Well, maybe we don’t need to reinvent the wheel here,’” she laughs.
The result is a hidden-object game like none other, part puzzler, part historically flavored narrative, part meditation on the choices faced by women across generations. And it couldn’t have come from a team with any other background. “In the beginning, I felt like I wasn’t part of the group, and maybe even a little ashamed that I wasn’t as games-literate as my colleagues,” says Schönfelder. “But what I thought was a weakness was actually a strength. Players don’t always play your game like you intended. And I felt a very strong, very sympathetic connection to people, and wanted to make the experience as smooth and accessible as possible. And I think that shows.”
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Behind the Design is a series that explores design practices and philosophies from finalists and winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience — and protect their users.
When should I submit my new app for notarization?
Apps should be mostly complete at the time of notarization. There’s no need to notarize an app that isn’t functional yet.
How often should I submit my app for notarization?
You should submit all versions you might want to distribute, including beta versions. That’s because we build a profile of your unique software to help distinguish your apps from other developers’ apps, as well as malware. As we release new signatures to block malware, this profile helps ensure that the software you’ve notarized is unaffected.
What happens if my app is selected for additional analysis?
Some uploads to the notary service require additional evaluation. If your app falls into this category, rest assured that we’ve received your file and will complete the analysis, though it may take longer than usual. In addition, if you’ve made changes to your app while a prior upload has been delayed, it’s fine to upload a new build.
What should I do if my app is rejected?
Keep in mind that empty apps or apps that might damage someone’s computer (by changing important system settings without the owner’s knowledge, for instance) may be rejected, even if they’re not malicious. If your app is rejected, first confirm that your app doesn’t contain malware. Then determine whether it should be distributed privately instead, such as within your enterprise via MDM.
What should I do if my business changes?
Keep your developer account details — including your business name, contact info, address, and agreements — up to date. Drastic shifts in account activity or software you notarize can be signs that your account or certificate has been compromised. If we notice this type of activity, we may suspend your account while we investigate further.
I’m a contractor. What are some ways to make sure I’m developing responsibly?
Be cautious if anyone asks you to:
Remember: It’s your responsibility to know your customer and the functionality of all software you build and/or sign.
What can I do to maintain control of my developer account?
Since malware developers may try to gain access to legitimate accounts to hide their activity, be sure you have two-factor authentication enabled. Bad actors may also pose as consultants or employees and ask you to add them to your developer team. Luckily, there’s an easy solve: Don’t share access to your accounts.
Should I remove access for developers who are no longer on my team?
Yes. And we can revoke Developer ID certificates for you if you suspect they may have been compromised.
Learn more about notarization
Notarizing macOS software before distribution
Welcome to Hello Developer. In this edition:
FEATURED
Step inside the Apple Developer Centers
The new Apple Developer Centers are open around the world — and we can’t wait for you to come by. With locations in Bengaluru, Cupertino, Shanghai, and now Singapore, Apple Developer Centers are the home bases for in-person sessions, labs, workshops, and consultations around the world.
Whether you’re looking to enhance your existing app or game, refine your design, or launch a new project, there’s something exciting for you at the Apple Developer Centers. Browse activities in Bengaluru, Cupertino, Shanghai, and Singapore.
BEHIND THE DESIGN
Uncover the hidden joys of Finding Hannah
On its surface, Finding Hannah is a bright and playful hidden-object game — but dig a little deeper and you’ll find something more. “It’s really a story about three women from three generations looking for happiness,” says Franziska Zeiner, cofounder and co-CEO of the Fein Games studio. “For each one, times are changing. But the question is: Are they getting better?” Find out how Zeiner and her Berlin-based team created this compelling Apple Design Award finalist.
Uncovering the hidden joys of Finding Hannah
Q&A
Get answers from the Mac notary service team
Security is at the core of every Apple platform. The Mac notary service team is part of Apple Security Engineering and Architecture, and in this Q&A, they share their tips on app distribution and account security to help Mac developers have a positive experience — and protect their users.
Q&A with the Mac notary service team
VIDEOS
Improve your subscriber retention with App Store features
In this new video, App Store experts share their tips for minimizing churn and winning back subscribers.
GROW YOUR BUSINESS
Make the most of custom product pages
Learn how you can highlight different app capabilities and content through additional (and fully localizable) versions of your product page. With custom product pages, you can create up to 35 additional versions — and view their performance data in App Store Connect.
Plus, thanks to seamless integration with Apple Search Ads, you can use custom product pages to easily create tailored ad variations on the App Store. Read how apps like HelloFresh, Pillow, and Facetune used the feature to gain performance improvements, like higher tap-through and conversion rates.
DOCUMENTATION
Find the details you need in new and updated docs
View the full list of new resources
NEWS
Catch up on the latest updates
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
We’re expanding the analytics available for your apps to help you get even more insight into your business and apps’ performance.
Over 50 new reports are now available through the App Store Connect API to help you analyze your apps’ App Store and iOS performance. These reports include hundreds of new metrics that can enable you to evaluate your performance and find opportunities for improvement. Reports are organized into the following categories:
Additionally, new reports are also available through the CloudKit console with data about Apple Push Notifications and File Provider.
Over the past several weeks, we’ve communicated with thousands of developers to discuss DMA-related changes to iOS, Safari, and the App Store impacting apps in the European Union. As a result of the valuable feedback received, we’ve revised the Alternative Terms Addendum for Apps in the EU to update the following policies and provide developers more flexibility:
If you’ve already entered into the Addendum, you can sign the updated version here.
You can now submit your apps and games built with Xcode 15.3 and all the latest SDKs for iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4.
Developers who have agreed to the Alternative Terms Addendum for Apps in the EU can now submit apps offering alternative payment options in the EU. They can also now measure the number of first annual installs their apps have accumulated.
If you’d like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation with an Apple team member.
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification.
The following guidelines have been updated:
View the App Review Guidelines
Translations of the guidelines will be available on the Apple Developer website within one month.
Developers are responsible for all code included in their apps. At WWDC23, we introduced new privacy manifests and signatures for commonly used third-party SDKs and announced that developers will need to declare approved reasons for using a set of APIs in their app’s privacy manifest. These changes help developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users.
Starting March 13: If you upload a new or updated app to App Store Connect that uses an API requiring approved reasons, we’ll send you an email letting you know if you’re missing reasons in your app’s privacy manifest. This is in addition to the existing notification in App Store Connect.
Starting May 1: You’ll need to include approved reasons for the listed APIs used by your app’s code to upload a new or updated app to App Store Connect. If you’re not using an API for an allowed reason, please find an alternative. And if you add a new third-party SDK that’s on the list of commonly used third-party SDKs, these API, privacy manifest, and signature requirements will apply to that SDK. Make sure to use a version of the SDK that includes its privacy manifest and note that signatures are also required when the SDK is added as a binary dependency.
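For orientation, a privacy manifest is a property list file named PrivacyInfo.xcprivacy bundled with your app or SDK. Below is a minimal sketch declaring one required-reason API; the UserDefaults category and the CA92.1 reason code are examples only, so check the required reason API documentation for the entries that match your own usage.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- One entry per required-reason API category the code uses. -->
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <!-- CA92.1: access user defaults from the app itself. -->
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```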
This functionality is a step forward for all apps and we encourage all SDKs to adopt it to better support the apps that depend on them.
iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, visionOS 1.1, and watchOS 10.4 will soon be available to customers worldwide. Build your apps and games using the Xcode 15.3 Release Candidate and latest SDKs, then test them using TestFlight. You can submit your iPhone and iPad apps today.
Apps in the European UnionDevelopers who’ve agreed to the Alternative Terms Addendum for Apps in the EU can set up marketplace distribution in the EU. Eligible developers can also submit marketplace apps and offer apps with alternative browser engines.
Once these platform versions are publicly available:
If you’d like to discuss changes to iOS, Safari, and the App Store impacting apps in the EU to comply with the Digital Markets Act, request a 30-minute online consultation to meet with an Apple team member. In addition, if you’re interested in getting started with operating an alternative app marketplace on iOS in the EU, you can request to attend an in-person lab in Cork, Ireland.
Apple developer activities are in full swing. Here’s a look at what’s happening:
And we’ll have lots more activities in store — online, in person, and in multiple languages — all year long.
Writing is fundamental — especially in your apps and games, where the right words can have a profound impact on your experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have “character.” Here are some highlights from that conversation and resources to help you further explore writing for user interfaces.
Writing for interfaces
My app has a lot of text. What’s the best way to make copy easier to read?
Ask yourself: What am I trying to accomplish with my writing? Once you’ve answered that, you can start addressing the writing itself. First, break up your paragraphs into individual sentences. Then, go back and make each sentence as short and punchy as possible. To go even further, you can start each sentence the same way — like with a verb — or add section headers to break up the copy. Or, to put it another way:
Break up your paragraphs into individual sentences.
Make each sentence as short and punchy as possible.
Start each sentence the same way — like with a verb.
Keep other options in mind too. Sometimes it might be better to get your point across with a video or animation. You might also put a short answer first and expand on it elsewhere. That way, you’re helping people who are new to your app while offering a richer option for those who want to dive a little deeper.
What’s your advice for explaining technical concepts in simple terms?
First, remember that not everyone will have your level of understanding. Sometimes we get so excited about technical details that we forget the folks who might be using an app for the first time.
Try explaining the concept to a friend or colleague first — or ask an engineer to give you a quick summary of a feature.
From there, break down your idea into smaller components and delete anything that isn’t absolutely necessary. Technical concepts can feel even more intimidating when delivered in a big block of text. Can you link to a support page? Do people need that information in this particular moment? Offering small bits of information is always a good first step.
How can I harness the “less is more” concept without leaving people confused?
Clarity should always be the priority. The trick is to make something as long as it needs to be, but as short as it can be. Start by writing everything down — and then putting it away for a few days. When you come back to it, you’ll have a clearer perspective on what can be cut.
One more tip: Look for clusters of short words — those usually offer opportunities to tighten things up.
How should I think about writing my onboarding?
Naturally, this will depend on your app or game — you’ll have to figure out what’s necessary and right for you. But typically, brevity is key when it comes to text — especially at the beginning, when people are just trying to get into the experience.
Consider providing a brief overview of high-level features so people know why they should use your app and what to expect while doing so. Also, think about how they got there. What text did they see before opening your app? What text appeared on the App Store? All of this contributes to the overall journey.
Human Interface Guidelines: Onboarding
Should UX writing have a personal tone? Or does that make localization too difficult?
When establishing your voice and tone, you should absolutely consider adding elements of personality to get the elusive element of “character.” But you’re right to consider how your strings will localize. Ideally, you’ll work with your localization partners for this. Focus on phrases that strike the tone you want without resorting to idioms. And remember that a little goes a long way.
How should I approach writing inclusively, particularly in conveying gender?
This is an incredibly important part of designing for everyone. Consider whether specifying gender is necessary for the experience you’re creating. If gender is necessary, it’s helpful to provide a full set of options — as well as an option to decline the question. Many things can be written without alluding to gender at all and are thus more inclusive. You can also consider using glyphs. SF Symbols provides lots of inclusive options. And you can find more guidance about writing inclusively in the Human Interface Guidelines.
Human Interface Guidelines: Inclusion
What are some best practices for writing helpful notifications?
First, keep in mind that notifications can feel inherently interruptive — and that people receive lots of them all day long. Before you write a notification at all, ask yourself these questions:
If you answered yes to all of the above, learn more about notification best practices in the Human Interface Guidelines.
Human Interface Guidelines: Notifications
Can you offer guidance on writing for the TipKit framework?
With TipKit — which displays tips that help people discover features in your app — concise writing is key. Use tips to highlight a brand-new feature in your app, help people discover a hidden feature, or demonstrate faster ways to accomplish a task. Keep your tips to just one idea, and be as clear as possible about the functionality or feature you’re highlighting.
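To make that concrete, here is a minimal SwiftUI sketch of a single-idea tip using TipKit; the tip’s name and copy are invented for illustration.

```swift
import SwiftUI
import TipKit

// Hypothetical tip: one idea, a short title, and a concise message.
struct FavoritesTip: Tip {
    var title: Text {
        Text("Save as a Favorite")
    }
    var message: Text? {
        Text("Tap the heart to find this item again later.")
    }
}

struct ItemView: View {
    var body: some View {
        VStack {
            // Displays the tip inline until the person dismisses it.
            TipView(FavoritesTip())
            // ... the rest of the view ...
        }
        .task {
            // Load and configure tips once, typically at launch.
            try? Tips.configure()
        }
    }
}
```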
What’s one suggestion you would give writers to improve their content?
One way we find the perfect (or near-perfect) sentence is to show it to other people, including other writers, designers, and creative partners. If you don’t have that option, run your writing by someone else working on your app or even a customer. And you can always read out loud to yourself — it’s an invaluable way to make your writing sound conversational, and a great way to find and cut unnecessary words.
Welcome to the first Hello Developer of the spatial computing era. In this edition: Join us to celebrate International Women’s Day all over the world, find out how the Fantastical team brought their app to life on Apple Vision Pro, get UX writing advice straight from Apple experts, and catch up on the latest news and documentation.
FEATURED
Join us for International Women’s Day celebrations
This March, we’re honoring International Women’s Day with developer activities all over the world. Celebrate and elevate women in app development through a variety of sessions, panels, and performances.
FEATURED
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro
The best-in-class calendar app Fantastical has 11 years of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Fantastical’s Michael Simmons says the app on Apple Vision Pro is “the best version we’ve ever made.” Find out what Simmons learned while building for visionOS — and what advice he’d give fellow developers bringing their apps to Apple Vision Pro.
Q&A
Get advice from the Apple UX writing team
Writing is fundamental — especially in your apps and games, where the right words can have a profound impact on your app’s experience. During WWDC23, the Apple UX writing team hosted a wide-ranging Q&A that covered everything from technical concepts to inspiring content to whether apps should have “character.”
Q&A with the Apple UX writing team
NEWS
Download the Apple Developer app on visionOS
Apple Developer has come to Apple Vision Pro. Experience a whole new way to catch up on WWDC videos, browse news and features, and stay up to date on the latest Apple frameworks and technologies.
Download Apple Developer from the App Store
VIDEOS
Dive into Xcode Cloud, Apple Pay, and network selection
This month’s new videos cover a lot of ground. Learn how to connect your source repository with Xcode Cloud, find out how to get started with Apple Pay on the Web, and discover how your app can automatically select the best network for an optimal experience.
Connect your project to Xcode Cloud Watch now
Get started with Apple Pay on the Web Watch now
Adapt to changing network conditions Watch now
BEHIND THE DESIGN
Rebooting an inventive puzzle game for visionOS
Bringing the mind-bending puzzler Blackbox to Apple Vision Pro presented Ryan McLeod with a challenge and an opportunity like nothing he'd experienced before. Find out how McLeod and team are making the Apple Design Award-winning game come to life on the infinite canvas. Then, catch up on our Apple Vision Pro developer interviews and Q&As with Apple experts.
Blackbox: Rebooting an inventive puzzle game for visionOS View now
Apple Vision Pro developer stories and Q&As View now
MEET WITH APPLE EXPERTS
Sign up for developer activities
This month, you can learn to minimize churn and win back subscribers in an online session hosted by App Store experts, and meet with App Review to explore best practices for a smooth review process. You can also request to attend an in-person lab in Cork, Ireland, to help develop your alternative app marketplace on iOS in the European Union. View the full schedule of activities.
DOCUMENTATION
Explore and create with new and updated docs
View the full list of new resources.
Discover what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
The best-in-class calendar app Fantastical has more than a decade of history, a shelf full of awards, and plenty of well-organized fans on iPad, iPhone, Mac, and Apple Watch. Yet Michael Simmons, CEO and lead product designer for Flexibits, the company behind Fantastical, says the Apple Vision Pro app is “the best version we’ve ever made.” We asked Simmons about what he’s learned while building for visionOS, his experiences visiting the developer labs, and what advice he’d give fellow developers bringing their apps to Vision Pro.
What was your initial approach to bringing Fantastical from iPad to Apple Vision Pro?
The first thing we did was look at the platform to see if a calendar app made sense. We thought: “Could we do something here that’s truly an improvement?” When the answer was yes, we moved on to, “OK, what are the possibilities?” And of course, visionOS gives you unlimited possibilities. You’re not confined to borders; you have the full canvas of the world to create on.
We wanted to take advantage of that infinite canvas. But we also needed to make sure Fantastical felt right at home in visionOS. People want to feel like there’s a human behind the design — especially in our case, where some customers have been with us for almost 13 years. There’s a legacy there, and an expectation that what you’ll see will feel connected to what we’ve done for more than a decade.
I play guitar, so to me it felt like learning an instrument.
Michael Simmons, CEO and lead product designer for Flexibits
In the end, it all felt truly seamless, so much so that once Fantastical was finished, we immediately said, “Well, let’s do [the company’s contacts app] Cardhop too!”
Was there a moment when you realized, “We’ve really got something here”?
It happened as instantly as it could. I play guitar, so to me it felt like learning an instrument. One day it just clicks — the songs, the notes, the patterns — and feels like second nature. For me, it felt like those movies where a musical prodigy feels the music flowing out of them.
How did you approach designing for visionOS?
We focused a lot on legibility of the fonts, buttons, and other screen elements. The opaque background didn’t play well with elements from other operating systems, for example, so we tweaked it. We stayed consistent with design language, used system-provided colors as much as possible, built using mainly UIKit, and used SwiftUI for ornaments and other fancy Vision Pro elements. It’s incredible how great the app looked without us needing to rewrite a bunch of code.
How long did the process take?
It was five months from first experiencing the device to submitting a beautiful app. Essentially, that meant three months to ramp up — check out the UI, explore what was doable, and learn the tools and frameworks — and two more months to polish, refine, and test. That’s crazy fast! And once we had that domain knowledge, we were able to do Cardhop in two months. So I’d say if you have an iPad app and that knowledge, it takes just months to create an Apple Vision Pro version of your app.
What advice would you give to other developers looking to bring their iPhone or iPad apps to Apple Vision Pro?
Make sure your app is appropriate for the platform. Look at the device — all of its abilities and possibilities — and think about how your app would feel with unlimited real estate. And if your app makes sense — and most apps do make sense — and you’re already developing for iPad, iPhone, or Mac, it’s a no-brainer to bring it to Apple Vision Pro.
We recently announced changes to iOS, Safari, and the App Store impacting developers’ apps in the European Union (EU) to comply with the Digital Markets Act (DMA), supported by more than 600 new APIs, a wide range of developer tools, and related documentation.
And we’re continuing to provide new ways for developers to understand and utilize these changes, including:
Developers who have agreed to the new business terms can now use new features in App Store Connect and the App Store Connect API to set up marketplace distribution and marketplace apps, and use TestFlight to beta test these features. TestFlight also supports apps that use alternative browser engines, as well as alternative payments through payment service providers and linking out to a webpage.
And soon, you’ll be able to view expanded app analytics reports for the App Store and iOS.
Apps uploaded to App Store Connect must be built with Xcode 15 for iOS 17, iPadOS 17, tvOS 17, or watchOS 10, starting April 29, 2024.
Every year, the Swift Student Challenge aims to inspire students to create amazing app playgrounds that can make life better for their communities — and beyond.
Have an app idea that’s close to your heart? Now’s your chance to make it happen. Build an app playground and submit by February 25.
All winners receive a year of complimentary membership in the Apple Developer Program and other exclusive awards. And for the first time ever, we’ll award a select group of Distinguished Winners a trip to Apple Park for an incredible in-person experience.
Meet with an Apple team member to discuss changes to iOS, Safari, and the App Store impacting apps in the European Union to comply with the Digital Markets Act. Topics include alternative distribution on iOS, alternative payments in the App Store, linking out to purchase on your webpage, new business terms, and more.
Request a 30-minute online consultation to ask questions and provide feedback about these changes.
In addition, if you’re interested in getting started with operating an alternative app marketplace on iOS in the European Union, you can request to attend an in-person lab in Cork, Ireland.
If you’ve ever played Blackbox, you know that Ryan McLeod builds games a little differently.
In the inventive iOS puzzler from McLeod’s studio, Shapes & Stories, players solve challenges not by tapping or swiping but by rotating the device, plugging in the USB cable, singing a little tune — pretty much everything except touching the screen.
“The idea was to get people in touch with the world outside their device,” says McLeod, while ambling along the canals of his Amsterdam home base.
I’m trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules — and breaking some of them.
Ryan McLeod
In fact, McLeod freed his puzzles from the confines of a device screen well before Apple Vision Pro was even announced — which made bringing the game to this new platform a fascinating challenge. On iOS and iPadOS, Blackbox plays off the familiarity of our devices. But how do you transpose that experience to a device people haven’t tried yet? And how do you break boundaries on a canvas that doesn’t have any? “I do love a good constraint,” says McLeod, “but it has been fun to explore the lifting of that restraint. I’m trying to figure out what makes Blackbox tick on iOS, and how to bring that to visionOS. That requires some creative following of my own rules — and breaking some of them.”
After a brief onboarding, the game becomes an all-new visionOS experience that takes advantage of the spatial canvas right from the first level selection. “I wanted something a little floaty and magical, but still grounded in reality,” he says. “I landed on the idea of bubbles. They’re like soap bubbles: They’re natural, they have this hyper-realistic gloss, and they move in a way you’re familiar with. The shader cleverly pulls the reflection of your world into them in this really believable, intriguing way.”
And the puzzles within those bubbles? “Unlike Blackbox on iOS, you’re not going to play this when you’re walking home from school or waiting in line,” McLeod says. “It had to be designed differently. No matter how exciting the background is, or how pretty the sound effects are, it’s not fun to just stare at something, even if it’s bobbing around really nicely.”
Now, McLeod cautions that Blackbox is still very much a work in progress, and we’re certainly not here to offer any spoilers. But if you want to go in totally cold, it might be best to skip this next part.
In Blackbox, players interact with the space — and their own senses — to explore and solve challenges. One puzzle involves moving your body in a certain manner; another involves sound, silence, and a blob of molten gold floating like an alien in front of you. A third involves Morse code. And solving a fourth causes part of the scene to collapse into a portal. “Spatial Audio makes the whole thing kind of alarming but mesmerizing,” he says.
There's an advantage to not knowing expected or common patterns.
Ryan McLeod
It's safe to say Blackbox will continue evolving, especially since McLeod is essentially building this plane as he’s flying it — something he views as a positive. “There’s an advantage to not knowing expected or common patterns,” he says. “There’s just so much possibility.”
Meet some of the incredible teams building for visionOS, and get answers from Apple experts on spatial design and creating great apps for Apple Vision Pro.
Developer stories
“The best version we’ve ever made”: Fantastical comes to Apple Vision Pro View now
Blackbox: Rebooting an inventive puzzle game for visionOS View now
“The full impact of fruit destruction”: How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro View now
Realizing their vision: How djay designed for visionOS View now
JigSpace is in the driver’s seat View now
PTC is uniting the makers View now
Q&As
Q&A: Spatial design for visionOS View now
Q&A: Building apps for visionOS View now
The App Store is designed to make it easy to sell your digital goods and services globally, with support for 44 currencies across 175 storefronts.
From time to time, we may need to adjust prices or your proceeds due to changes in tax regulations or foreign exchange rates. These adjustments are made using publicly available exchange rate information from financial data providers to help ensure that prices for apps and in-app purchases remain consistent across all storefronts.
Price updates
On February 13, pricing for apps and in-app purchases* will be updated for the Benin, Colombia, Tajikistan, and Türkiye storefronts. These updates also account for the following tax changes:
Prices will be updated on the Benin, Colombia, Tajikistan, and Türkiye storefronts if you haven’t selected one of these as the base for your app or in‑app purchase.*
Prices won’t change on the Benin, Colombia, Tajikistan, or Türkiye storefront if you’ve selected that storefront as the base for your app or in-app purchase.* Prices on other storefronts will be updated to maintain equalization with your chosen base price.
Prices won’t change in any region if your in‑app purchase is an auto‑renewable subscription and won’t change on the storefronts where you manually manage prices instead of using the automated equalized prices.
The Pricing and Availability section of My Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, in‑app purchases, and auto‑renewable subscriptions at any time.
Learn more about managing your prices
View or edit upcoming price changes
Edit your app’s base country or region
Pricing and availability start times by region
Set a price for an in-app purchase
Tax updates
Your proceeds for sales of apps and in-app purchases will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Benin.
On January 30, your proceeds from the sale of eligible apps and in‑app purchases were modified in the following countries to reflect introductions or changes in VAT rates.
*Excludes auto-renewable subscriptions.
The beta versions of iOS 17.4, iPadOS 17.4, macOS 14.4, tvOS 17.4, and watchOS 10.4 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.3 beta.
New analytics reports coming in March for developers everywhere
Developers can also enable new sign-in options for their apps
Today, Apple is introducing new options for how apps globally can deliver in-app experiences to users, including streaming games and mini-programs. Developers can now submit a single app with the capability to stream all of the games offered in their catalog.
Apps will also be able to provide enhanced discovery opportunities for streaming games, mini-apps, mini-games, chatbots, and plug-ins that are found within their apps.
Additionally, mini-apps, mini-games, chatbots, and plug-ins will be able to incorporate Apple’s In-App Purchase system to offer their users paid digital content or services for the first time, such as a subscription for an individual chatbot.
Each experience made available in an app on the App Store will be required to adhere to all App Store Review Guidelines, and its host app will need to maintain an age rating that reflects the highest age-rated content included in the app.
The changes Apple is announcing reflect feedback from Apple’s developer community and are consistent with the App Store’s mission to provide a trusted place for users to find apps they love and to give developers everywhere new capabilities to grow their businesses. Apps that host this content are responsible for ensuring all the software included in their app meets Apple’s high standards for user experience and safety.
New app analytics
Apple provides developers with powerful dashboards and reports to help them measure their apps’ performance through App Analytics, Sales and Trends, and Payments and Financial Reports. Today, Apple is introducing new analytics for developers everywhere to help them get even more insight into their businesses and their apps’ performance, while maintaining Apple’s long-held commitment to ensure users are not identifiable at an individual level.
Over 50 new reports will be available through the App Store Connect API to help developers analyze their app performance and find opportunities for improvement with more metrics in areas like:
Engagement — with additional information on the number of users on the App Store interacting with a developer’s app or sharing it with others;
Commerce — with additional information on downloads, sales and proceeds, pre-orders, and transactions made with the App Store’s secure In-App Purchase system;
App usage — with additional information on crashes, active devices, installs, app deletions, and more;
Frameworks usage — with additional information on an app’s interaction with OS functionality such as PhotoPicker, Widgets, and CarPlay.
Additional information about report details and access will be available for developers in March.
Developers will have the ability to grant third-party access to their reports conveniently through the API.
More flexibility for sign-in options in apps
In line with Apple’s mission to protect user privacy, Apple is updating its App Store Review Guideline for using Sign in with Apple. Sign in with Apple makes it easy for users to sign in to apps and websites using their Apple ID, and was built from the ground up with privacy and security in mind. Starting today, developers that offer third-party or social login services within their app will have the option to offer Sign in with Apple or an equivalent privacy-focused login service instead.
We’re sharing some changes to iOS, Safari, and the App Store, impacting developers’ apps in the European Union (EU) to comply with the Digital Markets Act (DMA). These changes create new options for developers who distribute apps in any of the 27 EU member states, and do not apply to apps distributed anywhere else in the world. These options include how developers can distribute apps on iOS, process payments, use web browser engines in iOS apps, request interoperability with iPhone and iOS hardware and software features, access data and analytics about their apps, and transfer App Store user data.
If you want nothing to change for you — from how the App Store works currently in the EU and in the rest of the world — no action is needed. You can continue to distribute your apps only on the App Store and use its private and secure In-App Purchase system.
The App Store Review Guidelines have been revised to support updated policies, upcoming features, and to provide clarification. We now also indicate which guidelines only apply to Notarization for iOS apps in the European Union.
The following guidelines have been divided into subsections for the purposes of Notarization for iOS apps in the EU:
The following guidelines have been deleted:
2.5.6: Added a link to an entitlement to use an alternative web browser engine in your app in the EU.
3.1.6: Moved to 4.9.
3.2.2(ii): Moved to 4.10.
4.7: Edited to set forth new requirements for mini-apps, mini-games, streaming games, chatbots, and plug-ins.
4.8: Edited to require an additional login service with certain privacy features if you use a third-party or social login service to set up or authenticate a user’s primary account.
4.9: The original version of this rule (Streaming games) has been deleted and replaced with the Apple Pay guideline.
5.1.2(i): Added that apps may not require users to enable system functionalities (e.g., push notifications, location services, tracking) in order to access functionality, content, use the app, or receive monetary or other compensation, including but not limited to gift cards and codes. A version of this rule was originally published as Guideline 3.2.2(vi).
After You Submit — Appeals: Edited to add an updated link for suggestions for changes to the Guidelines.
The term “auto-renewing subscriptions” was replaced with “auto-renewable subscriptions” throughout.
Translations of the guidelines will be available on the Apple Developer website within one month.
We’re so excited that applications for the Swift Student Challenge 2024 will open on February 5.
Looking for some inspiration? Learn about past Challenge winners to gain insight into the motivations behind their apps.
Just getting started? Get tools, tips, and guidance on everything you need to create an awesome app playground.
Fruit Ninja has a juicy history that stretches back more than a decade, but Samantha Turner, lead gameplay programmer at Halfbrick Studios, the team behind the game, says the Apple Vision Pro version — Super Fruit Ninja on Apple Arcade — is truly bananas. “When it first came out, Fruit Ninja kind of gave new life to the touchscreen,” she notes, “and I think we have the potential to do something very special here.”
What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?
Samantha Turner, lead gameplay programmer at Halfbrick Studios
Turner would know. She’s worked on the Fruit Ninja franchise for nearly a decade, which makes her especially well suited to help grow the game on a new platform. “We needed to understand how to bring those traditional 2D user interfaces into the 3D space,” she says. “We were full of ideas: What if players could squeeze juice out of an orange? What if they could rip apart a watermelon and cover the table and walls with juice?” She laughs, on a roll. “We were really playing with the environment.”
But they also needed to get people into that environment. “That’s where we came up with the flying menu,” she says, referring to the old-timey home screen that’ll feel familiar to Fruit Ninja fans, except for how it hovers in space. “We wanted a friendly and welcoming way to bring people into the immersive space,” explains Turner. “Before we landed on the menu, we were doing things like generating 3D text to put on virtual objects. But that didn’t give us the creative freedom we needed to set the theme for our world.”
That theme: The good citizens of Fruitasia have discovered a portal to our world — one that magically materializes in the room. “Sensei steps right through the portal,” says Turner, “and you can peek back into their world too.”
Next, Turner and Halfbrick set about creating a satisfying — and splashy — way for people to interact with their space. The main question: What’s the most logical way to launch fruit at people?
“We started with, OK, you have a couple meters square in front of you. What will the playspace look like? What if there’s a chair or a table in the way? How do we work around different scenarios for people in their office or living room or kitchen?” To find their answers, Halfbrick built RealityKit prototypes. “Just being able to see those really opened up the possibilities.” The answer? A set of cannons, arranged in a semicircle at the optimal distance for efficient slashing.
Instead of holding blades, you simply use your hands.
Samantha Turner, lead gameplay programmer at Halfbrick Studios
It also let them move on to the question of how players can carve up a bunch of airborne bananas in a 3D space. The team experimented with a variety of hand motions, but none felt as satisfying as the final result. “Instead of holding blades, you simply use your hands,” she says. “You become the weapon.”
And you’re a powerful weapon. Slice and dice pineapples and watermelons by jabbing with your hands. Send bombs away by pushing them to a far wall, where they harmlessly explode at a distance. Fire shuriken into floating fruit by brushing your palms in an outward direction — a motion Turner particularly likes. “It’s satisfying to see it up close, but when you see it happen far away, you get the full impact of fruit destruction,” she laughs. All of these grew out of the team’s hand-gesture explorations.
“We always knew hands would be the center of the experience,” she says. “We wanted players to be able to grab things and knock them away. And we can tailor the arc of the fruit to make sure it's a comfortable fruit-slicing experience — we’re actually using the vertical position of the device itself to make sure that we're not throwing fruit over your head or too low.”
The result is the most immersive — and possibly most entertaining — Fruit Ninja to date, not just for players but for the creators. “Honestly,” Turner says, “this version is one of my favorites.”
Starting today, because of a recent United States Court decision, App Store Review Guideline 3.1.1 has been updated to introduce the StoreKit Purchase Link Entitlement (US), which allows apps that offer in-app purchases in the iOS or iPadOS App Store on the United States storefront to include a link to the developer’s website that informs users of other ways to purchase digital goods or services.
We believe Apple’s in-app purchase system is the most convenient, safe, and secure way for users to purchase digital goods and services. If you’re considering using this entitlement along with in‑app purchase — which continues to be required for the purchase of digital goods and services within your app — it’s important to understand that some App Store features, such as Ask to Buy or Family Sharing, won’t be available to your customers when they make purchases on your website. Apple also won’t be able to assist customers with refunds, purchase history, subscription management, and other issues encountered when purchasing digital goods and services. You will be responsible for addressing such issues with customers.
A commission will apply to digital purchases facilitated through the StoreKit Purchase Link Entitlement (US). For additional details on commissions, requesting the entitlement, usage guidelines, and implementation details, view our support page.
Years ago, early in his professional DJ career, Algoriddim cofounder and CEO Karim Morsy found himself performing a set atop a castle tower on the Italian coast. Below him, a crowd danced in the ruins; before him stretched a moonlight-drenched coastline and the Mediterranean Sea. “It was a pretty inspiring environment,” Morsy says, probably wildly underselling this.
Through their app djay, Morsy and Algoriddim have worked to recreate that live DJ experience for nearly 20 years. The best-in-class DJ app started life as boxed software for Mac; subsequent versions for iPad offered features like virtual turntables and beat matching. The app was a smashing success that won an Apple Design Award in both 2011 and 2016.
But Morsy says all that previous work was prologue to djay on the infinite canvas. “When we heard about Apple Vision Pro,” he says, “it felt like djay was this beast that wanted to be unleashed. Our vision — no pun intended — with Algoriddim was to make DJing accessible to everyone,” he says. Apple Vision Pro, he says, represents the realization of that dream. “The first time I experienced the device was really emotional. I wanted to be a DJ since I was a child. And suddenly here were these turntables, and the night sky, and the stars above me, and this light show in the desert. I felt like, ‘This is the culmination of everything. This is the feeling I’ve been wanting people to experience.’”
When we heard about Apple Vision Pro, it felt like djay was this beast that wanted to be unleashed.
Karim Morsy, Algoriddim cofounder and CEO
Getting to that culmination necessitated what Morsy calls “the wildest sprint of our lives.” With a 360-degree canvas to explore, the team rethought the entire process of how people interacted with djay. “We realized that with a decade of building DJ interfaces, we were taking a lot for granted,” he says. “So the first chunk of designing for Apple Vision Pro was going back to the drawing board and saying, ‘OK, maybe this made sense 10 years ago with a computer and mouse, but why do we need it now? Why should people have to push a button to match tempos — shouldn’t that be seamless?’ There was so much we could abstract away.”
They also thought about environments. djay offers a windowed view, a shared space that brings 3D turntables into your environment, and several forms of full immersion. The app first opens to the windowed view, which should feel familiar to anyone who’s spun on the iPad app: a simple UI of two decks. The volumetric view brings into your room not just turntables, but the app’s key moment: the floating 3D cube that serves as djay’s effects control pad.
But those immersive scenes are where Morsy feels people can truly experience reacting to and feeding off the environment. There’s an LED wall that reflects colors from the artwork of the currently playing song, a nighttime desert scene framed by an arena of lights, and a space lounge — complete with dancing robots — that offers a great view of planet Earth. The goal of those environments is to help create the “flow state” that’s sought by live DJs. “You want to get into a loop where the environment influences you and vice versa,” Morsy says.
In the end, this incredible use of technology serves a very simple purpose: interacting with the music you love. Morsy — a musician himself — points to a piano he keeps in his office. “That piano has had the same interface for hundreds of years,” he says. “That’s what we’re trying to reach, that sweet spot between complexity and ease of use. With djay on Vision Pro, it’s less about, ‘Let’s give people bells and whistles,’ and more, ‘Let’s let them have this experience.’”
Welcome to Hello Developer. In this Apple Vision Pro-themed edition: Find out how to submit your visionOS apps to the App Store, learn how the team behind djay approached designing for the infinite canvas, and get technical answers straight from Apple Vision Pro engineers. Plus, catch up on the latest news, documentation, and developer activities.
FEATURED
Submit your apps to the App Store for Apple Vision Pro
Apple Vision Pro will have a brand-new App Store, where people can discover and download all the incredible apps available for visionOS. Whether you’ve created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here’s everything you need to know to prepare and submit your app to the App Store.
BEHIND THE DESIGN
Realizing their vision: How djay designed for visionOS
Algoriddim CEO Karim Morsy says Apple Vision Pro represents “the culmination of everything” for his app, djay. In the latest edition of Behind the Design, find out how this incredible team approached designing for the infinite canvas.
Realizing their vision: How djay designed for visionOS View now
Q&A
Get answers from Apple Vision Pro engineers
In this Q&A, Apple Vision Pro engineers answer some of the most frequently asked questions from Apple Vision Pro developer labs all over the world.
Q&A: Building apps for visionOS View now
COLLECTION
Reimagine your enterprise apps on Apple Vision Pro
Discover the languages, tools, and frameworks you’ll need to build and test your apps for visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Reimagine your enterprise apps on Apple Vision Pro View now
MEET WITH APPLE EXPERTS
Submit your request for developer labs and App Review consultations
Join us this month in the Apple Vision Pro developer labs to get your apps ready for visionOS. With help from Apple, you’ll be able to test, refine, and finalize your apps and games. Plus, Apple Developer Program members can check out one-on-one App Review, design, and technology consultations, offered in English, Spanish, Brazilian Portuguese, and more.
DOCUMENTATION
Check out visionOS sample apps, SwiftUI tutorials, audio performance updates, and more
These visionOS sample apps feature refreshed audio, visual, and timing elements, simplified collision boxes, and performance improvements.
Hello World: Use windows, volumes, and immersive spaces to teach people about the Earth.
Happy Beam: Leverage a Full Space to create a game using ARKit.
Diorama: Design scenes for your visionOS app using Reality Composer Pro.
Swift Splash: Use RealityKit to create an interactive ride in visionOS.
And these resources and updated tutorials cover iOS 17, accessibility, Live Activities, and audio performance.
SwiftUI Tutorials: Learn the latest best practices for iOS 17.
Accessibility Inspector: Review your app’s accessibility experience.
Starting and updating Live Activities with ActivityKit push notifications: Use push tokens to update and end Live Activities.
Analyzing audio performance with Instruments: Ensure a smooth and immersive audio experience using Audio System Trace.
View the full list of new resources.
Discover what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
Announcing contingent pricing: Give customers discounted pricing when they’re subscribed to a different subscription on the App Store.
Updated agreements and guidelines now available: Check out the latest changes that have been made to support updated policies and provide clarification.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Over the past few months, Apple experts have fielded questions about visionOS in Apple Vision Pro developer labs all over the world. Here are answers to some of the most frequent questions they’ve been asked, including insights on new concepts like entities, immersive spaces, collision shapes, and much more.
How can I interact with an entity using gestures?
There are three important pieces to enabling gesture-based entity interaction: the entity needs an InputTargetComponent, it needs a CollisionComponent whose shapes define its interactive region, and the gesture itself must be targeted to the entity you want to interact with. For example, this tap gesture is targeted to any entity:
private var tapGesture: some Gesture {
    TapGesture()
        // Only entities with an InputTargetComponent can receive this gesture.
        .targetedToAnyEntity()
        .onEnded { gestureValue in
            // The entity that was tapped.
            let tappedEntity = gestureValue.entity
            print(tappedEntity.name)
        }
}
It’s also a good idea to give an interactive entity a HoverEffectComponent, which enables the system to trigger a standard highlight effect when the user looks at the entity.
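As a minimal sketch of that setup (tappableEntity is a hypothetical entity you’ve already created), the configuration might look like this:
// Make the entity a target for input.
tappableEntity.components.set(InputTargetComponent())
// Give it a collision shape so the system can hit-test it.
tappableEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
// Show the standard highlight when the user looks at the entity.
tappableEntity.components.set(HoverEffectComponent())
You’d then attach the tap gesture shown above to your RealityView with the .gesture(_:) modifier.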
Should I use a window group, an immersive space, or both?
Consider the technical differences between windows, volumes, and immersive spaces when you decide which scene type to use for a particular feature in your app.
Here are some significant technical differences that you should factor into your decision:
Explore the Hello World sample code to familiarize yourself with the behaviors of each scene type in visionOS.
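As a rough sketch of how these scene types are declared (the app, view, and identifier names are hypothetical), an app can offer both a window and an immersive space:
import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A standard window for the app's 2D interface.
        WindowGroup(id: "main") {
            ContentView()
        }

        // An immersive space the app can open on demand,
        // for example via the openImmersiveSpace environment action.
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
    }
}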
How can I visualize collision shapes in my scene?
Use the Collision Shapes debug visualization in the Debug Visualizations menu, where you can find several other helpful debug visualizations as well. For information on debug visualizations, check out Diagnosing issues in the appearance of a running app.
Can I position SwiftUI views within an immersive space?
Yes! You can position SwiftUI views in an immersive space with the offset(x:y:) and offset(z:) methods. It’s important to remember that these offsets are specified in points, not meters. You can use PhysicalMetric to convert meters to points.
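For instance, here’s a minimal sketch (the view and values are hypothetical) that uses PhysicalMetric to push a view back by one meter:
import SwiftUI

struct FloatingLabel: View {
    // Converts one meter into the equivalent number of points.
    @PhysicalMetric(from: .meters) private var oneMeter = 1.0

    var body: some View {
        Text("Hello, visionOS")
            // A negative z offset moves the view away from the viewer.
            .offset(z: -oneMeter)
    }
}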
What if I want to position my SwiftUI views relative to an entity in a reality view?
Use the RealityView attachments API to create a SwiftUI view and make it accessible as a ViewAttachmentEntity. This entity can be positioned, oriented, and scaled just like any other entity.
RealityView { content, attachments in
    // Fetch the attachment entity using the unique identifier.
    let attachmentEntity = attachments.entity(for: "uniqueID")!
    // Add the attachment entity as RealityView content.
    content.add(attachmentEntity)
} attachments: {
    // Declare a view that attaches to an entity.
    Attachment(id: "uniqueID") {
        Text("My Attachment")
    }
}
Can I position windows programmatically?
There’s no API available to position windows, but we’d love to know about your use case. Please file an enhancement request. For more information on this topic, check out Positioning and sizing windows.
Is there any way to know what the user is looking at?
As noted in Adopting best practices for privacy and user preferences, the system handles camera and sensor inputs without passing the information to apps directly. There's no way to get precise eye movements or exact line of sight. Instead, create interface elements that people can interact with and let the system manage the interaction. If you have a use case that you can't get to work this way, and as long as it doesn't require explicit eye tracking, please file an enhancement request.
When are the onHover and onContinuousHover actions called on visionOS?
The onHover and onContinuousHover actions are called when a finger is hovering over the view, or when the pointer from a connected trackpad is hovering over the view.
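As a small sketch (the view content is hypothetical), you can react to those hover phases like this:
Text("Hover over me")
    .onContinuousHover { phase in
        switch phase {
        case .active(let location):
            // Called repeatedly with the hover location in the view's coordinate space.
            print("Hovering at \(location)")
        case .ended:
            // Called once the finger or pointer leaves the view.
            print("Hover ended")
        }
    }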
Can I show my own immersive environment textures in my app?
If your app has an ImmersiveSpace open, you can create a large sphere with an UnlitMaterial and scale it to have inward-facing geometry:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            do {
                // Create the sphere mesh.
                let mesh = MeshResource.generateSphere(radius: 10)
                // Create an UnlitMaterial.
                var material = UnlitMaterial(applyPostProcessToneMap: false)
                // Give the UnlitMaterial your equirectangular color texture.
                let textureResource = try await TextureResource(named: "example")
                material.color = .init(tint: .white, texture: .init(textureResource))
                // Create the model.
                let entity = ModelEntity(mesh: mesh, materials: [material])
                // Scale the model so that its mesh faces inward.
                entity.scale.x *= -1
                content.add(entity)
            } catch {
                // Handle the error.
            }
        }
    }
}
I have existing stereo videos. How can I convert them to MV-HEVC?
AVFoundation provides APIs to write videos in MV-HEVC format. For a full example, download the sample code project Converting side-by-side 3D video to multiview HEVC.
To convert your videos to MV-HEVC:
var compressionProperties = outputSettings[AVVideoCompressionPropertiesKey] as! [String: Any]
// Specifies the parallax plane.
compressionProperties[kVTCompressionPropertyKey_HorizontalDisparityAdjustment as String] = horizontalDisparityAdjustment
// Specifies the horizontal FOV (90 degrees in this case).
compressionProperties[kCMFormatDescriptionExtension_HorizontalFieldOfView as String] = horizontalFOV

// Create a tagged buffer for each stereo view.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftSample.imageBuffer!),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightSample.imageBuffer!)
]
// Append the tagged buffers to the asset writer input adaptor.
let didAppend = adaptor.appendTaggedBuffers(taggedBuffers,
                                            withPresentationTime: leftSample.presentationTimeStamp)
How can I light my scene in RealityKit on visionOS?
You can light your scene in RealityKit on visionOS by:
Using the system’s automatic environment lighting.
Providing your own image-based lighting via an ImageBasedLightComponent, paired with an ImageBasedLightReceiverComponent on the entities that should receive that light.
How can I create custom materials with the Shader Graph?
You can create materials with custom shading in Reality Composer Pro using the Shader Graph. A material created this way is accessible to your app as a ShaderGraphMaterial, so that you can dynamically change inputs to the shader in your code.
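Here’s a hedged sketch of loading and updating such a material; the material path, scene file, and parameter name are hypothetical, and realityKitContentBundle comes from a RealityKitContent package like the one in Xcode’s visionOS template:
import RealityKit
import RealityKitContent  // Assumed: the app's Reality Composer Pro content package.

// Load a material authored in Reality Composer Pro.
var material = try await ShaderGraphMaterial(named: "/Root/GlowMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
// Dynamically change one of the shader's promoted inputs.
try material.setParameter(name: "GlowStrength", value: .float(0.8))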
For a detailed introduction to the Shader Graph, watch Explore materials in Reality Composer Pro.
How can I position entities relative to the position of the device?
In an ImmersiveSpace, you can get the full transform of the device using the queryDeviceAnchor(atTimestamp:) method.
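That method lives on ARKit’s WorldTrackingProvider. Here’s a minimal sketch, assuming your app already has an ImmersiveSpace open:
import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// World tracking requires an open ImmersiveSpace.
try await session.run([worldTracking])

// Query the device's transform at the current time.
if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    // A 4x4 matrix positioning the device relative to the app's origin.
    let deviceTransform = deviceAnchor.originFromAnchorTransform
    // Use this transform to place entities relative to the device.
    print(deviceTransform)
}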
Learn more about building apps for visionOS
Q&A: Spatial design for visionOS View now
Spotlight on: Developing for visionOS View now
Spotlight on: Developer tools for visionOS View now
Sample code contained herein is provided under the Apple Sample Code License.
Apple Vision Pro will have a brand-new App Store, where people can discover and download incredible apps for visionOS. Whether you’ve created a new visionOS app or are making your existing iPad or iPhone app available on Apple Vision Pro, here’s everything you need to know to prepare and submit your app to the App Store.
The Apple Developer Program License Agreement has been revised to support updated policies and provide clarification. The revisions include:
Definitions, Section 3.3.3(N): Updated "Tap to Present ID" to "ID Verifier"
Definitions, Section 14.10: Updated terms regarding governing law and venue
Section 3.3: Reorganized and categorized provisions for clarity
Section 3.3.3(B): Clarified language on privacy and third-party SDKs
Section 6.7: Updated terms regarding analytics
Section 12: Clarified warranty disclaimer language
Attachment 1: Updated terms for use of Apple Push Notification Service and Local Notifications
Attachment 9: Updated terms for Xcode Cloud compute hours included with Apple Developer Program membership
Contingent pricing for subscriptions on the App Store — a new feature that helps you attract and retain subscribers — lets you give customers a discounted subscription price as long as they’re actively subscribed to a different subscription. It can be used for subscriptions from one developer or two different developers. We’re currently piloting this feature and will be onboarding more developers in the coming months. If you’re interested in implementing contingent pricing in your app, you can start planning today and sign up to get notified when more details are available in January.
The beta versions of iOS 17.3, iPadOS 17.3, macOS 14.3, tvOS 17.3, and watchOS 10.3 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.2 beta.
Welcome to Hello Developer. In this edition: Check out new videos on Game Center and the Journaling Suggestions API, get visionOS guidance straight from the spatial design team, meet three App Store Award winners, peek inside the time capsule that is Ancient Board Game Collection, and more.
VIDEOS
Manage Game Center with the App Store Connect API
In this new video, discover how you can use the App Store Connect API to automate your Game Center configurations outside of App Store Connect on the web.
Manage Game Center with the App Store Connect API Watch now
And find out how the new Journaling Suggestions API can help people reflect on the small moments and big events in their lives through your app — all while protecting their privacy.
Discover the Journaling Suggestions API Watch now
Q&A
Get your spatial design questions answered
What’s the best way to make a great first impression in visionOS? What’s a “key moment”? And what are some easy methods for making spatial computing visual design look polished? Get answers to these questions and more.
Q&A: Spatial design for visionOS View now
FEATURED
Celebrate the winners of the 2023 App Store Awards
Every year, the App Store celebrates exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. Find out how the winning teams behind Finding Hannah, Photomator, and Unpacking approached their incredible work this year.
“We’re trying to drive change": Meet three App Store Award-winning teams View now
Missed the big announcement? Check out the full list of 2023 winners.
NEWS
Xcode Cloud now included with membership
Starting January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as standard, at no additional cost. Learn more.
BEHIND THE DESIGN
Travel back in time with Ancient Board Game Collection
Klemens Strasser’s Ancient Board Game Collection blends the new and the very, very old. Its games date back centuries: Hnefatafl is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. “I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole,” Strasser says. Find out how the Austria-based developer and a team of international artists gave these ancient games new life.
With Ancient Board Game Collection, Klemens Strasser goes back in time View now
DOCUMENTATION
Get creative with 3D immersion, games, SwiftUI, and more
This month’s new sample code, tutorials, and documentation cover everything from games to passing control between apps to addressing reasons for common crashes. Here are a few highlights:
Game Center matchmaking essentials, rules, and testing: Learn how to create custom matchmaking rules for better matches between players and test the rules before applying them.
Incorporating real-world surroundings in an immersive experience: This sample code project helps you use scene reconstruction in ARKit to give your app an idea of the shape of the person’s surroundings and to bring your app experience into their world.
Creating a macOS app: Find out how to bring your SwiftUI app to macOS, including adding new views tailored to macOS and modifying others to work better across platforms.
Creating a watchOS app: Find out how to bring your SwiftUI app to watchOS, including customizing SwiftUI views to display the detail and list views on watchOS.
View the full list of new resources.
View what’s new in the Human Interface Guidelines.
NEWS
Catch up on the latest updates
App Store holiday schedule: We’ll remain open throughout the holiday season and look forward to accepting your submissions. However, reviews may take a bit longer to complete from December 22 to 27.
Sandbox improvements: Now you can change a test account’s storefront, adjust subscription renewal rates, clear purchase history, simulate interrupted purchase flows directly on iPhone or iPad, and test Family Sharing.
New software releases: Build your apps using the latest developer tools and test them on this week’s OS releases. Download Xcode 15.1 RC, and the RC versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2.
Want to get Hello Developer in your inbox? Make sure you’ve opted in to receive emails about developer news and events by updating your email preferences in your developer account.
Share your thoughts
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
Spatial computing offers unique opportunities and challenges when designing apps and games. At WWDC23, the Apple design team hosted a wide-ranging Q&A to help developers explore designing for visionOS. Here are some highlights from that conversation, including insights on the spectrum of immersion, key moments, and sound design.
What’s the best way to make a great first impression on this platform?
While it depends on your app, of course, starting in a window is a great way to introduce people to your app and let them control the amount of immersion. We generally recommend not placing people into a fully immersive experience right away — it’s better to make sure they’re oriented in your app before transporting them somewhere else.
What should I consider when bringing an existing iPadOS or iOS app to visionOS?
Think about a key moment where your app would really shine spatially. For example, in the Photos app for visionOS, opening a panoramic photo makes the image wrap around your field of view. Ask yourself what that potential key moment — an experience that isn’t bound by a screen — is for your app.
From a more tactical perspective, consider how your UI will need to be optimized for visionOS. To learn more, check out “Design for spatial user interfaces”.
Design for spatial user interfaces Watch now
Can you say a bit more about what you mean by a “key moment”?
A key moment is a feature or interaction that takes advantage of the unique capabilities of visionOS. (Think of it as a spatial or immersive highlight in your app.) For instance, if you’re creating a writing app, your key moment might be a focus mode in which you immerse someone more fully in an environment or a Spatial Audio soundscape to get them in the creative zone. That’s just not possible on a screen-based device.
I often use a grid system when designing for iOS and macOS. Does that still apply here?
Definitely! The grid can be very useful for designing windows, and point sizes translate directly between platforms. Things can get more complex when you’re designing elements in 3D, like having nearby controls for a faraway element. To learn more, check out “Principles of spatial design.”
Principles of spatial design Watch now
What’s the best way to test Apple Vision Pro experiences without the device?
You can use the visionOS simulator in Xcode to recreate system gestures, like pinch, drag, tap, and zoom.
What’s the easiest way to make my spatial computing design look polished?
As a starting point, we recommend using the system-provided UI components. Think about hover shapes, how every element appears by default, and how they change when people look directly at them. When building custom components or larger elements like 3D objects, you'll also need to customize your hover effects.
What interaction or ergonomic design considerations should I keep in mind when designing for visionOS?
Comfort should guide experiences. We recommend keeping your main content in the field of view, so people don't need to move their neck and body too much. The more centered the content is in the field of view, the more comfortable it is for the eyes. It's also important to consider how you use input. Make sure you support system gestures in your app so people have the option to interact with content indirectly (using their eyes to focus an element and hand gestures, like a pinch, to select). For more on design considerations, check out “Design considerations for vision and motion.”
Design considerations for vision and motion Watch now
Are there design philosophies for fully immersive experiences? Should the content wrap behind the person’s head, above them, and below them?
Content can be placed anywhere, but we recommend providing only the amount of immersion needed. Apps can create great immersive experiences without taking over people's entire surroundings. To learn more, check out the Human Interface Guidelines.
Human Interface Guidelines: Immersive experiences
Are there guidelines for creating an environment for a fully immersive experience?
First, your environment should have a ground plane under the feet that aligns with the real world. As you design the specifics of your environment, focus on key details that will create immersion. For example, you don't need to render all the details of a real theater to convey the feeling of being in one. You can also use subtle motion to help bring an environment to life, like the gentle movement of clouds in the Mount Hood environment.
What else should I consider when designing for spatial computing?
Sound design comes to mind. When designing for other Apple platforms, you may not have placed as much emphasis on creating audio for your interfaces because people often mute sounds on their devices (or it's just not desirable for your current experience). With Apple Vision Pro, sound is crucial to creating a compelling experience.
People are adept at understanding their surroundings through sound, and you can use sound in your visionOS app or game to help people better understand and interact with elements around them. When someone presses a button, for example, an audio cue helps them recognize and confirm their actions. You can position sound spatially in visionOS so that audio comes directly from the item a person interacts with, and the system can use their surroundings to give it the appropriate reverberation and texture. You can even create spatial soundscapes for scenes to make them feel more lifelike and immersive.
For more on designing sound for visionOS, check out “Explore immersive sound design.”
Explore immersive sound design Watch now
Learn more
For even more on designing for visionOS, check out more videos, the Human Interface Guidelines, and the Apple Developer website.
Develop your first immersive app Watch now
Get started with building apps for spatial computing Watch now
Build great games for spatial computing Watch now
Every year, the App Store Awards celebrate exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact.
This year’s winners were drawn from a list of 40 finalists that included everything from flight trackers to retro games to workout planners to meditative puzzles. In addition to exhibiting an incredible variety of approaches, styles, and techniques, these winners shared a thoughtful grasp and mastery of Apple tools and technologies.
Meet the winners and finalists of the 2023 App Store Awards
For the team behind the hidden-object game Finding Hannah, their win for Cultural Impact is especially meaningful. “We’re trying to drive change on the design level by bringing more personal stories to a mainstream audience,” says Franziska Zeiner, cofounder and managing director of the Fein Games studio, from her Berlin office. “Finding Hannah is a story that crosses three generations, and each faces the question: How truly free are we as women?”
The Hannah of Finding Hannah is a 39-year-old Berlin resident trying to navigate a career, relationships (including with her best friend/ex, Emma), and the meaning of true happiness. Players complete a series of found-object puzzles that move along the backstory of Hannah’s mother and grandmother to add a more personal touch to the game.
We’re trying to drive change on the design level by bringing more personal stories to a mainstream audience.
Franziska Zeiner, Fein Games cofounder and managing director
To design the art for the game’s different time periods, the team took an unconventional approach. “We wanted an art style that was something you’d see more on social media than in games,” says Zeiner. “The idea was to try to reach people who weren’t gamers yet, and we thought we’d most likely be able to do that if we found a style that hadn’t been seen in games before. And I do think that added a new perspective, and maybe helped us stand out a little bit.”
Learn more about Finding Hannah
Download Finding Hannah from the App Store
Pixelmator, the team behind Mac App of the Year winner Photomator, is no stranger to awards consideration, having received multiple Apple Design Awards in addition to their 2023 App Store Award. The latter is especially meaningful for the Lithuania-based team. “We’re still a Mac-first company,” says Simonas Bastys, lead developer of the Pixelmator team. “For what we do, Mac adds so many benefits to the user experience.”
When they set out to add Photomator to their portfolio of Mac apps back in 2020, Bastys and his team of engineers decided against porting over their UIKit and AppKit code. Instead, they built Photomator specifically for Mac with SwiftUI. “We had a lot of experience with AppKit,” Bastys says, “but we chose to transition to SwiftUI to align with cutting-edge, future-proof technologies.”
The team zeroed in on maximizing performance, assuming that people would need to navigate and manipulate large libraries. They also integrated a wealth of powerful editing tools, such as repairing, debanding, batch editing, and much more. Deciding what to work on — and what to prioritize — is a constant source of discussion. “We work on a lot of ideas in parallel,” Bastys says, “and what we prioritize comes up very naturally, based on what’s ready for shipment and what new technology might be coming.” This year, that meant a focus on HDR.
We had a lot of experience with AppKit, but we wanted to create with native Mac technologies.
Simonas Bastys, lead developer of the Pixelmator team
How do Bastys and the Pixelmator team keep growing after so long? “This is the most exciting field in computer science to me,” says Bastys. “There’s so much to learn. I’m only now starting to even understand the depth of human vision and computer image processing. It’s a continuous challenge. But I see endless possibilities to make Photomator better for creators.”
Download Photomator from the Mac App Store
To create the Cultural Impact winner Unpacking, the Australian duo of creative director Wren Brier and technical director Tim Dawson drew on more than a decade of development experience. Their game — part zen puzzle, part life story — follows a woman through the chapters of her life as she moves from childhood bedroom to first apartment and beyond. Players solve puzzles by placing objects around each new dwelling while learning more about her history with each new level — something Brier says is akin to a detective story.
“You have this series of places, and you’re opening these hints, and you’re piecing together who this person is,” she says from the pair’s home in Brisbane.
Brier and Dawson are partners who got the idea for Unpacking from — where else? — one of their own early moves. “There was something gamelike about the idea of finishing one box to unlock the one underneath,” Brier says. “You’re completing tasks, placing items together on shelves and in drawers. Tim and I started to brainstorm the game right away.”
While the idea was technically interesting, says Dawson, the pair was especially drawn to the idea of unpacking as a storytelling vehicle. “This is a really weird example,” laughs Dawson, “but there’s a spatula in the game. That’s a pretty normal household item. But what does it look like? Is it cheap plastic, something that maybe this person got quickly? Is it damaged, like they’ve been holding onto it for a while? Is it one of those fancy brands with a rubberized handle? All of that starts painting a picture. It becomes this really intimate way of knowing a character.”
There was something game-like about the idea of finishing one box to unlock the one underneath.
Wren Brier, Unpacking creative director
Those kinds of discussions — spatula-based and otherwise — led to a game that includes novel uses of technology, like the haptic feedback you get when you shake a piggy bank or board game. But its diverse, inclusive story is the reason behind its App Store Award nod for Cultural Impact. Brier and Dawson say players of all ages and backgrounds have shared their love of the game, drawn by the universal experience of moving yourself, your belongings, and your life into a new home. “One guy even sent us a picture of his bouldering shoes and told us they were identical to the ones in the game,” laughs Brier. “He said, ‘I have never felt so seen.’”
Klemens Strasser will be the first to tell you that prior to launching his Ancient Board Game Collection, he wasn’t especially skilled at Hnefatafl. “Everybody knows chess and everybody knows backgammon,” says the indie developer from his home office in Austria, “but, yeah, I didn’t really know that one.”
Today, Strasser runs what may well be the hottest Hnefatafl game in town. Ancient Board Game Collection, an Apple Design Award finalist for Inclusivity, comprises nine games that reach back not years or decades but centuries — Hnefatafl (or Viking chess) is said to be nearly 1,700 years old, while the Italian game Latrunculi is closer to 2,000. And while games like Konane, Gomoku, and Five Field Kono might not be household names, Strasser’s collection gives them fresh life through splashy visuals, a Renaissance faire soundtrack, efficient onboarding, and even a bit of history.
Strasser is a veteran of Flexibits (Fantastical, Cardhop) and the developer behind such titles as Letter Rooms, Subwords, and Elementary Minute (for which he won a student Apple Design Award in 2015). But while he was familiar with Nine Men’s Morris — a game popular in Austria he’d play with his grandma — he wasn’t exactly well versed in third-century Viking pastimes until a colleague brought Hnefatafl to his attention three years ago. “It was so different than the traditional symmetric board games I knew,” he says. “I really fell in love with it.”
Less appealing were mobile versions of Hnefatafl, which Strasser found lacking. “The digital versions of many board games have a certain design,” he says. “It’s usually pretty skeuomorphic, with a lot of wood and felt and stuff like that. That just didn’t make me happy. And I thought, ‘Well, if I can’t find one I like, I’ll build it.’”
I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole.
Klemens Strasser
Using SpriteKit, Strasser began mocking up an iOS Hnefatafl prototype in his downtime. A programmer by trade — “I’m not very good at drawing stuff,” he demurs — Strasser took pains to keep his side project as simple as possible. “I always start with minimalistic designs for my games and apps, but these are games you play with some stones and maybe a piece of paper,” he laughs. “I figured I could build that myself.”
His Hnefatafl explorations came surprisingly fast — enough so that he started wondering what other long-lost games might be out there. “I found a book on ancient board games by an Oxford professor and it threw me right down a rabbit hole,” Strasser laughs. “I kept saying, ‘Oh, that’s an interesting game, and that’s also an interesting game, and that’s another interesting game.’” Before he knew it, his simple Hnefatafl mockup had become a buffet of games. “And I still have a list of like 20 games I’d still like to digitize,” he says.
For the initial designs of his first few games, Strasser tried to maintain the simple style of his Hnefatafl prototype. “But I realized that I couldn’t really represent the culture and history behind each game in that way,” he says, “so I hired people who live where the games are from.”
That’s where Ancient Board Game Collection really took off. Strasser began reaching out to artists from each ancient game’s home region — and the responses came fast. Out went the minimalist version of Ancient Board Game Collection, in came a richer take, powered by a variety of cultures and design styles. For Hnefatafl, Strasser made a fortuitous connection with Swedish designer Albina Lind. “I sent her a few images of like Vikings and runestones, and in two hours she came up with a design that was better than anything I could have imagined,” he says. “If I hadn’t run into her, I might not have finished the project. But it was so perfect that I had to continue.”
Lind was a wise choice. The Stockholm-based freelance artist had nearly a decade of experience designing games, including her own Norse-themed adventure, Dragonberg. “I instantly thought, ‘Well, this is my cup of tea,’” Lind says. Her first concept was relatively realistic, all dark wood and stone textures, before she settled on a more relaxed, animation-inspired style. “Sometimes going unreal, going cartoony, is even more work than being realistic,” she says with a laugh. Lind went on to design two additional ancient games: Dablot, the exact origins of which aren’t known but which first turned up in 1892, and Halatafl, a 14th-century game of Scandinavian origin.
Work arrived from around the globe. Italian designer Carmine Acierno contributed a mosaic-inspired version of Nine Men’s Morris; Honolulu-based designer Anna Fujishige brought a traditional Hawaiian flavor to Konane. And while the approach succeeded in preserving more of each game’s authentic heritage, it did mean iterating with numerous people over numerous emails. One example: Tokyo-based designer Yosuke Ando pitched changing Strasser’s initial designs for the Japanese game Gomoku altogether. “Klemens approached me initially with the idea of the game design to be inspired by ukiyo-e (paintings) and musha-e (woodblock prints of warriors),” Ando says. “Eventually, we decided to focus on samurai warrior armor from musha-e, deconstructing it, and simplifying these elements into the game UI.”
While the design process continued, Strasser worked on an onboarding strategy — times nine. As you might suspect, it can be tricky to explain the rules and subtleties of 500-year-old games from lost civilizations, and Strasser’s initial approach — walkthroughs and puzzles designed to teach each game step by step — quickly proved unwieldy. So he went in the other direction, concentrating on writing “very simple, very understandable” rules with short gameplay animations that can be accessed at any time. “I picked games that could be explained in like three or four sentences,” he says. “And I wanted to make sure it was all accessible via VoiceOver.”
In fact, accessibility remained a priority throughout the entire project. (He wrote his master’s thesis on accessibility in Unity games.) As an Apple Design Award finalist for Inclusivity, Ancient Board Game Collection shines with best-in-class VoiceOver adoption, as well as support for Reduce Motion, Dynamic Type, and high-contrast game boards. “It’s at least some contribution to making everything better for everyone,” he says.
I picked games that could be explained in like three or four sentences. And I wanted to make sure it was all accessible via VoiceOver.
Klemens Strasser
Ancient Board Game Collection truly is for everyone, and it’s hardly hyperbole to call it a novel way to introduce games like Hnefatafl to a whole new generation of players. “Most people,” he says, “are just surprised that they’ve never heard of these games.”
Learn more about Ancient Board Game Collection
Download Ancient Board Game Collection from the App Store
Behind the Design is a series that explores design practices and philosophies from each of the winners and finalists of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
Xcode Cloud, the continuous integration and delivery service built into Xcode, accelerates the development and delivery of high-quality apps. It brings together cloud-based tools that help you build apps, run automated tests in parallel, deliver apps to testers, and view and manage user feedback.
We’re pleased to announce that as of January 2024, all Apple Developer Program memberships will include 25 compute hours per month on Xcode Cloud as a standard, with no additional cost. If you’re already subscribed to Xcode Cloud for free, no additional action is required on your part. And if you haven’t tried Xcode Cloud yet, now is the perfect time to start building your app for free in just a few minutes.
Third-party SDK privacy manifest and signatures. Third-party software development kits (SDKs) can provide great functionality for apps; they also have the potential to impact user privacy in ways that aren’t obvious to developers and users. As a reminder, when you use a third-party SDK with your app, you are responsible for all the code the SDK includes in your app, and you need to be aware of its data collection and use practices.
At WWDC23, we introduced new privacy manifests and signatures for SDKs to help app developers better understand how third-party SDKs use data, secure software dependencies, and provide additional privacy protection for users. Starting in spring 2024, if your new app or app update submission adds a third-party SDK that is commonly used in apps on the App Store, you’ll need to include the privacy manifest for the SDK. Signatures are also required when the SDK is used as a binary dependency. This functionality is a step forward for all apps, and we encourage all SDKs to adopt it to better support the apps that depend on them.
Learn more and view the list of commonly used third-party SDKs
New use cases for APIs that require reasons. When you upload a new app or app update to App Store Connect that uses an API (including from third-party SDKs) that requires a reason, you’ll receive a notice if you haven’t provided an approved reason in your app’s privacy manifest. Based on the feedback we received from developers, the list of approved reasons has been expanded to include additional use cases. If you have a use case that directly benefits users that isn’t covered by an existing approved reason, submit a request for a new reason to be added.
Starting in spring 2024, in order to upload your new app or app update to App Store Connect, you’ll be required to include an approved reason in the app’s privacy manifest which accurately reflects how your app uses the API.
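To make the mechanics concrete, here is a minimal sketch of a PrivacyInfo.xcprivacy privacy manifest that declares one required-reason API. The user-defaults category and reason code CA92.1 are just one common pairing, shown for illustration; your manifest (or an SDK’s) should list only the APIs and approved reasons that actually apply.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Declares which required-reason APIs the app (or SDK) uses -->
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <!-- CA92.1: access user defaults to read and write
                     information accessible only to the app itself -->
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```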
Have questions on designing your app or implementing a technology? We’re here to help you find answers, no matter where you are in your development journey. One-on-one consultations with Apple experts in December — and newly published dates in January — are available now.
We’ll have lots more consultations and other activities in store for 2024 — online, in person, and in multiple languages.
The busiest season on the App Store is almost here! Make sure your apps and games are up to date and ready in advance of the upcoming holidays. We’ll remain open throughout the season and look forward to accepting your submissions. On average, 90% of submissions are reviewed in less than 24 hours. However, reviews may take a bit longer to complete from December 22 to 27.
Join us in celebrating the work of these outstanding developers from around the world!
Every year, the App Store celebrates exceptional apps that improve people’s lives while showcasing the highest levels of technical innovation, user experience, design, and positive cultural impact. This year we’re proud to recognize nearly 40 outstanding finalists. Winners will be announced in the coming weeks.
APPLE VISION PRO APPS FOR ENTERPRISE
PTC’s CAD products have been at the forefront of the engineering industry for more than three decades. And the company’s AR/VR CTO, Stephen Prideaux-Ghee, has too. “I’ve been doing VR for 30 years, and I’ve never had this kind of experience before,” he says. “I almost get so blasé about VR. But when I had [Apple Vision Pro] on, walking around digital objects and interacting with others in real time — it’s one of those things that makes you stop in your tracks.”
Prideaux-Ghee says Apple Vision Pro offers PTC an opportunity to bring together components of the engineering and manufacturing process like never before. “Our customers either make stuff, or they make the machines that help somebody else make stuff,” says Prideaux-Ghee. And that stuff can be anything from chairs to boats to spaceships. “I can almost guarantee that the chair you’re sitting on is made by one of our customers,” he says.
As AR/VR CTO (which he says means “a fancy title for somebody who comes up with crazy ideas and has a reasonably good chance of implementing them”), Prideaux-Ghee describes PTC’s role as the connective tissue between the multiple threads of production. “When you’ve got a big, international production process, it’s not always easy for the people involved to talk to each other. Our thought was: ‘Hey, we’re in the middle of this, so let’s come up with a simple mechanism that allows everyone to do so.’”
I’ve been doing VR for 30 years, and I’ve never had this kind of experience before.
Stephen Prideaux-Ghee, AR/VR CTO of PTC
For PTC, it’s all about communication and collaboration. “You can be a single user and get a lot of value from our app,” says Prideaux-Ghee, “but it really starts when you have multiple people collaborating, either in the same room or over FaceTime and SharePlay.” He speaks from experience; PTC has tested its app with everyone in the same space, and spread out across different countries.
"It enables some really interesting use cases, especially with passthrough," says Prideaux-Ghee. "You can use natural human interactions with a remote device."
Development is going fast. In recent weeks, PTC completed a prototype in which changes made on their iPad CAD software immediately reflect in Apple Vision Pro. “Before, we weren’t able to drive from the CAD software,” he explains. “Now, one person can run our CAD software pretty much unmodified and another can see changes instantly in 3D, at full scale. It’s really quite magical.”
Read more: Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible.
JigSpace is in the driver’s seat
In this series of videos, you can learn how to level up your pro app or game by harnessing the speed and power of Apple platforms. We’ll discover GPU advancements, explore new Metal profiling tools for M3 and A17 Pro, and share performance best practices for Metal shaders.
Explore GPU advancements in M3 and A17 Pro
Discover new Metal profiling tools for M3 and A17 Pro
Learn performance best practices for Metal shaders
New to developing games for Apple platforms? Familiarize yourself with the tools and technologies you need to get started.
APPLE VISION PRO APPS FOR ENTERPRISE
It’s one of the most memorable images from JigSpace’s early Apple Vision Pro explorations: A life-size Alfa Romeo C43 Formula 1 car, dark cherry red, built to scale, reflecting light from all around, and parked right in the room. The camera pans back over the car’s front wings; a graceful animation shows airflow over the wings and body.
Numa Bertron, cofounder and chief technology officer for JigSpace — the creative and collaborative company that partnered with Alfa Romeo for the model — has been in the driver’s seat for the project from day one and still wasn’t quite prepared to see the car in the spatial environment. “The first thing everyone wanted to do was get in,” he says. “Everyone was stepping over the side to get in, even though you can just, you know, walk through.”
The F1 car is just one component of JigSpace’s grand plans for visionOS. The company is leaning on the new platform to create avenues of creativity and collaboration never before possible.
Bertron brings up one of JigSpace’s most notable “Jigs” (the company term for spatial presentations): an incredibly detailed model of a jet engine. “On iPhone, it’s an AR model that expands and looks awesome, but it’s still on a screen,” he explains. On Apple Vision Pro, that engine becomes a life-size piece of roaring, spinning machinery — one that people can walk around, poke through, and explore in previously unimaginable detail.
“One of our guys is a senior 3D artist,” says Bertron, “and the first time he saw one of his models in space at scale — and walked around it with his hands free — he actually cried.”
We made that F1 Jig with tools everyone can use.
Numa Bertron, JigSpace cofounder and chief technology officer
Getting there required some background learning. Prior to developing for visionOS, Bertron had no experience with SwiftUI. “We’d never gone into Xcode, so we started learning SwiftUI and RealityKit. Honestly, we expected some pain. But since everything is preset, we had really nice rounded corners, blur effects, and smooth scrolling right off the bat.”
For people who’ve used JigSpace on iOS, the visionOS version will look familiar but feel quite different. “We asked ourselves: What's the appropriate size for an object in front of you?” asks Bertron. “What’s comfortable? Will that model be on the table or on the floor? Spatial computing introduces so many more opportunities — and more decisions.”
In the case of the F1 example, it also offers a chance to level up visually. “For objects that big, we’d never been able to achieve this level of fidelity on smaller devices, so we always had to compromise,” says Bertron. In visionOS, they were free to keep adding. “We’d look at a prototype and say, ‘Well, this still runs, so let’s double the size of the textures and add more screws and more effects!’” (It’s not just about functionality, but fun as well. You can remove a piece of the car — like a full-sized tire — and throw it backwards over your head.)
The incredible visual achievement is matched by new powers of collaboration. “If I point at the tire, the other person sees me, no matter where they are,” says Bertron. “I can grab the wheel and give it to them. I can circle something we need to fix, I can leave notes or record audio. It’s a full-on collaboration platform.” And it’s also for everyone, not just F1 drivers and aerospace engineers. “We made that F1 Jig with tools everyone can use.”
Download JigSpace from the App Store
Read more: Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible.
PTC is uniting the makers
Games simply don’t get much cuter than Kimono Cats, a casual cartoon adventure about two cats on a date (awww) that creator Greg Johnson made as a present for his wife. “I wanted to make a game she and I could play together,” says the Maui-based indie developer, “and I wanted it to be sweet, creative, and romantic.”
Kimono Cats is all three, and it’s also spectacularly easy to play and navigate. This Apple Design Award finalist for Interaction in games is set in a Japanese festival full of charming mini-games — darts, fishing, and the like — that are designed for maximum simplicity and casual fun. Players swipe up to throw darts at balloons that contain activities, rewards, and sometimes setbacks that threaten to briefly derail the date. Interaction gestures (like scooping fish) are simple and rewarding, and the gameplay variation and side activities (like building a village for your feline duo) fit right in.
“I’m a huge fan of Hayao Miyazaki and that kind of heartfelt, slower-paced style,” says Johnson. “What you see in Kimono Cats is a warmth and appreciation for Japanese culture.”
You also see a game that’s a product of its environment. Johnson’s been creating games since 1983 and is responsible for titles like Starflight, ToeJam and Earl, Doki-Doki Universe, and many more. His wife, Sirena, is a builder of model houses — miniature worlds not unlike the village in Kimono Cats. And the game’s concept was a reaction to the early days of COVID-19 lockdowns. “When we started building this in 2020, everybody was under so much weight and pressure,” he says. “We felt like this was a good antidote.”
To start creating the game, Johnson turned to artist and longtime collaborator Ferry Halim, as well as Tanta Vorawatanakul and Ferrari Duanghathai, a pair of developers who happen to be married. “Tanta and Ferrari would provide these charming little characters, and Ferry would come in to add animations — like moving their eyes,” says Johnson. “We iterated a lot on animating the bubbles — how fast they were moving, how many there were, how they were obscured. That was the product of a lot of testing and listening all throughout the development process.”
When we started with this in 2020, everybody was under so much weight and pressure. We felt like this was a good antidote.
Greg Johnson, Kimono Cats
Johnson notes that players can select characters without gender distinction — a detail that he and the Kimono Cats team prioritized from day one. “Whenever any companion kisses the player character on the cheek, a subtle rainbow will appear in the sky over their heads,” Johnson says. “This allows the gender of the cat characters to be open to interpretation by the users.”
Kimono Cats was designed with the simple goal of bringing smiles. “The core concept of throwing darts at bubbles isn't an earth-shaking idea by any stretch,” says Johnson, “but it was a way to interact with the storytelling that I hadn’t seen before, and the festival setting felt like a natural match.”
Find Kimono Cats on Apple Arcade
Businesses of all kinds and sizes are exploring the possibilities of the infinite canvas of Apple Vision Pro — and realizing ideas that were never before possible. We caught up with two of those companies — JigSpace and PTC — to find out how they’re approaching the new world of visionOS.
JigSpace is in the driver’s seat
PTC is uniting the makers
Discover the languages, tools, and frameworks you’ll need to build and test your apps in visionOS. Explore videos and resources that showcase productivity and collaboration, simulation and training, and guided work. And dive into workflows for creating or converting existing media, incorporating on-device and remote assets into your app, and much more.
Apple Vision Pro at work
Keynote
Keynote (ASL)
Platforms State of the Union
Platforms State of the Union (ASL)
Design for Apple Vision Pro
WWDC sessions
Design for spatial input
Design for spatial user interfaces
Principles of spatial design
Design considerations for vision and motion
Explore immersive sound design
Sample code, articles, documentation, and resources
Developer paths to Apple Vision Pro
WWDC sessions
Go beyond the window with SwiftUI
Meet SwiftUI for spatial computing
Meet ARKit for spatial computing
What’s new in SwiftUI
Discover Observation in SwiftUI
Enhance your spatial computing app with RealityKit
Build spatial experiences with RealityKit
Evolve your ARKit app for spatial experiences
Create immersive Unity apps
Bring your Unity VR app to a fully immersive space
Meet Safari for spatial computing
Rediscover Safari developer features
Design for spatial input
Explore the USD ecosystem
Explore USD tools and rendering
Sample code, articles, documentation, and resources
Unity – XR Interaction Toolkit package
Unity – How Unity builds applications for Apple platforms
three.js – webGL and WebXR library
babylon.js – webGL and WebXR library
PlayCanvas – webGL and WebXR library
Immersiveweb – WebXR Device API
WebKit.org – Bug tracking for WebKit open source project
Frameworks to explore
WWDC sessions
Discover streamlined location updates
Meet MapKit for SwiftUI
What's new in MapKit
Build spatial SharePlay experiences
Share files with SharePlay
Design spatial SharePlay experiences
Discover Quick Look for spatial computing
Create 3D models for Quick Look spatial experiences
Explore pie charts and interactivity in Swift Charts
Elevate your windowed app for spatial computing
Create a great spatial playback experience
Deliver video content for spatial experiences
Sample code, articles, documentation, and resources
Placing content on detected planes
Incorporating real-world surroundings in an immersive experience
Tracking specific points in world space
Tracking preregistered images in 3D space
Explore a location with a highly detailed map and Look Around
Drawing content in a group session
Supporting Coordinated Media Playback
Adopting live updates in Core Location
Monitoring location changes with Core Location
Access enterprise data and assets
WWDC sessions
Meet Swift OpenAPI Generator
Advances in Networking, Part 1
Advances in App Background Execution
The Push Notifications primer
Power down: Improve battery consumption
Build robust and resumable file transfers
Efficiency awaits: Background tasks in SwiftUI
Use async/await with URLSession
Meet SwiftData
Explore the USD ecosystem
What’s new in App Store server APIs
Sample code, articles, documentation, and resources
Apple is proud to support and uplift the next generation of student developers, creators, and entrepreneurs. The Swift Student Challenge has given thousands of students the opportunity to showcase their creativity and coding capabilities through app playgrounds, and build real-world skills that they can take into their careers and beyond. From connecting their peers to mental health resources to identifying ways to support sustainability efforts on campus, Swift Student Challenge participants use their creativity to develop apps that solve problems they’re passionate about.
We’re releasing new coding resources, working with community partners, and announcing the Challenge earlier than in previous years so students can dive deep into Swift and the development process — and educators can get a head start in supporting them.
Applications will open in February 2024 for three weeks.
New for 2024, out of 350 overall winners, we’ll recognize 50 Distinguished Winners for their outstanding submissions and invite them to join us at Apple in Cupertino for three incredible days next summer.
Ready to level up your app or game? Join us around the world for a new set of developer labs, consultations, sessions, and workshops, hosted in person and online throughout November and December.
Discover activities in multiple time zones and languages.
The App Store’s commerce and payments system was built to enable you to conveniently set up and sell your products and services on a global scale in 44 currencies across 175 storefronts. Apple administers tax on behalf of developers in over 70 countries and regions and provides you with the ability to assign tax categories to your apps and in‑app purchases.
Periodically, we make updates to rates, categories, and agreements to accommodate new regulations and rate changes in certain regions. As of today, the following updates have been made in App Store Connect.
Tax rates
Your proceeds from the sale of eligible apps and in‑app purchases (including auto‑renewable subscriptions) have been increased to reflect the following reduced value-added tax (VAT) rates. Prices on the App Store haven’t changed.
If any of these categories or attributes are relevant to your apps or in-app purchases, you can review and update your selections in the Pricing and Availability section of My Apps.
Learn about setting tax categories
Paid Applications Agreement
The beta versions of iOS 17.2, iPadOS 17.2, macOS 14.2, tvOS 17.2, and watchOS 10.2 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.1 beta.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other comments. We value your feedback, as it helps us address issues, refine features, and update documentation.
TestFlight provides an easy way to get feedback on beta versions of your apps, so you can publish on the App Store with confidence. Now, improved controls in App Store Connect let you better evaluate tester engagement and manage participation to help you get the most out of beta testing. Sort testers by status and engagement metrics (like sessions, crashes, and feedback), and remove inactive testers who haven’t engaged. You can also filter by device and OS, and even select relevant testers to add to a new group.
Watch the October 30 event at apple.com.
The Push Notifications Console now includes metrics for notifications sent in production through the Apple Push Notification service (APNs). With the console’s intuitive interface, you’ll get an aggregated view of delivery statuses and insights into various statistics for notifications, including a detailed breakdown based on push type and priority.
Introduced at WWDC23, the Push Notifications Console makes it easy to send test notifications to Apple devices through APNs.
We’re thrilled with the excitement and enthusiasm from developers around the world at the Apple Vision Pro developer labs, and we’re pleased to announce new labs in New York City and Sydney. Join us to test directly on the device and connect with Apple experts for help with taking your visionOS, iPadOS, and iOS apps even further on this exciting new platform. Labs also take place in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
Learn about other ways to work with Apple to prepare for visionOS.
The team behind Plex has a brilliant strategy for dealing with bugs and addressing potential issues: Find them first.
“We’ve got a pretty good process in place,” says Steve Barnegren, Plex senior software engineer on Apple platforms, “and when that’s the case, things don’t go wrong.”
Launched in 2009, Plex is designed to serve as a “global community for streaming content,” says engineering manager Alex Stevenson-Price, who’s been with Plex for more than seven years. A combination streaming service and media server, Plex aims to cover the full range of the streaming experience — everything from discovery to content management to organizing watchlists.
This allows us more time to investigate the right solutions.
Ami Bakhai, Plex product manager for platforms and partners
To make it all run smoothly, the Plex team operates on a six-week sprint, offering regular opportunities to think in blocks, define stop points in their workflow, and assess what’s next. “I’ve noticed that it provides more momentum when it comes to finalizing features or moving something forward,” says Ami Bakhai, product manager for platforms and partners. “Every team has their own commitments. This allows us more time to investigate the right solutions.”
The Plex team iterates, distributes, and releases quickly — so testing features and catching issues can be a tall order. (Plex releases regular updates during their sprints for its tvOS flagship, iOS, iPadOS, and macOS apps.)
Though Plex boasts a massive reach across all the platforms, it’s not powered by a massive number of people. The fully remote team relies on a well-honed mix of developer tools (like Xcode Cloud and TestFlight), clever internal organization, Slack integration, and a thriving community of loyal beta testers that stretches back more than a decade. “We’re relatively small,” says Danni Hemberger, Plex director of product marketing, “but we’re mighty.”
Over the summer, the Plex team made a major change to their QA process: Rather than bringing in their QA teams right before the release, they shifted QA to a continuous process that unfolds over every pull request. “The QA team would find something right at the end, which is when they’d start trying to break everything,” laughs Barnegren. “Now we can say, ‘OK, ten features have gone in, and all of them have had QA eyes on them, so we’re ready to press the button.’”
Now we can say, ‘OK, ten features have gone in, and all of them have had QA eyes on them, so we’re ready to press the button.’
Steve Barnegren, Plex senior software engineer on Apple platforms
The continuous QA process is a convenient mirror to the continuous delivery process. Previously, Plex tested before a new build was released to the public. Now, through Xcode Cloud, Plex sends nightly builds to all their employees, ensuring that everyone has access to the latest version of the app.
Once the release has been hammered out internally, it moves on to Plex’s beta testing community, which might be more accurately described as a beta testing city. It numbers about 8,000 people, some of whom date back to Plex’s earliest days. “That constant feedback loop is super valuable, especially when you have power users that understand your core product,” says Stevenson-Price.
All this feedback and communication is powered by TestFlight and Plex’s customer forums. “This is especially key because we have users supplying personal media for parts of the application, and that can be in all kinds of rare or esoteric formats,” says Barnegren.
[CI] is a safety net. Whenever you push code, your app is being tested and built in a consistent way. That’s so valuable, especially for a multi-platform app like ours.
Alex Stevenson-Price, Plex engineering manager
To top it all off, this entire process is automated with every new feature and every new bug fix. Without any extra work or manual delivery, the Plex team can jump right on the latest version — an especially handy feature for a company that’s dispersed all over the globe. “It’s a great reminder of ‘Hey, this is what’s going out,’ and allows my marketing team to stay in the loop,” says Hemberger.
It’s also a great use of a continuous integration system (CI). “I’m biased from my time spent as an indie dev, but I think all indie devs should try a CI like Xcode Cloud,” says Stevenson-Price. “I think some indies don’t always see the benefit on paper, and they’ll say, ‘Well, I build the app myself, so why do I need a CI to build it for me?’ But it’s a safety net. Whenever you push code, your app is being tested and built in a consistent way. That’s so valuable, especially for a multi-platform app like ours. And there are so many tools at your disposal. Once you get used to that, you can’t go back.”
Steffan Glynn’s Automatoys is a mix between a Rube Goldberg machine and a boardwalk arcade game — and there’s a very good reason why.
In 2018, the Cardiff-based developer visited the Musée Mécanique, a vintage San Francisco arcade packed with old-timey games, pinball machines, fortune tellers, and assorted gizmos. On that same trip, he stopped by an exhibit of Rube Goldberg sketches that showcased page after page of wildly intricate machines. “It was all about the delight of the pointless and captivating,” Glynn says. “There was a lot of crazy inspiration on that trip.”
That inspiration turned into Automatoys, an Apple Design Award finalist for Interaction in games. Automatoys is a single-touch puzzler in which players roll their marble from point A to point B by navigating a maze of ramps, elevators, catapults, switches, and more. True to its roots, the game is incredibly tactile; every switch and button feels lifelike, and players even insert a virtual coin to launch each level. And it unfolds to a relaxing and jazzy lo-fi soundtrack. “My brief to the sound designer was, ‘Please make this game less annoying,’” Glynn laughs.
While Automatoys’ machines may be intricate, its controls are anything but. Every button, claw, and catapult is controlled by a single tap. “And it doesn’t matter where you tap — the whole machine moves at once,” Glynn says. The mechanic doesn’t just make the game remarkably simple to learn; it also creates a sense of discovery. “I like that moment when the player is left thinking, ‘OK, well, I guess I’ll just start tapping and find out what happens.’”
To design each of the game’s 12 levels, Glynn first sketched his complex contraptions in Procreate. The ideas came fast and furious, but he found that building what he’d envisioned in his sketches proved elusive — so he changed his strategy. “I started playing with shapes directly in 3D space,” he says. “Once a level had a satisfying form, I’d then try to imagine what sort of obstacle each part could be. One cylinder would become a ferris wheel, another would become a spinning helix for the ball to climb, a square panel would become a maze, and so on.”
The game was a four-year passion project for Glynn, a seasoned designer who in 2018 left his gig with State of Play (where he contributed to such titles as Lumino City and Apple Design Award winner INKS.) to focus on creating “short, bespoke” games. There was just one catch: Though he had years of design experience, he’d never written a single line of code. To get up to speed, he threw himself into video tutorials and hands-on practice.
In short order, Glynn was creating Unity prototypes of what would become Automatoys. “As a designer, being able to prototype and test ideas is incredibly liberating. When you have those tools, you can quickly try things out and see for yourself what works.”
Download Automatoys from the App Store
Find out about our latest activities (including more Apple Vision Pro developer lab dates), learn how the Plex team embraced Xcode Cloud, discover how the inventive puzzle game Automatoys came to life, catch up on the latest news, and more.
Meet with Apple Experts
Hosted in person and online, our developer activities are for everyone, no matter where you are on your development journey. Find out how to enhance your existing app or game, refine your design, or launch a new project. Explore the list of upcoming activities worldwide.
Get ready for a new round of Apple Vision Pro developer lab dates
Developers have been thrilled to experience their apps and games in the labs, and connect with Apple experts to help refine their ideas and answer questions. Ben Guerrette, chief experience officer at Spool, says, “That kind of learning experience is incredibly valuable.” Developer and podcaster David Smith says, “The first time you see your own app running for real is when you get the audible gasp.” Submit a lab request.
You can also request Apple Vision Pro compatibility evaluations. We’ll evaluate your apps and games directly on Apple Vision Pro to make sure they behave as expected and send you the results.
“Small but mighty”: Go behind the scenes with Plex
Discover how the streaming service and media player uses developer tools like Xcode Cloud to maintain a brisk release pace. “We’re relatively small,” says Danni Hemberger, director of product marketing at Plex, “but we’re mighty.”
“Small but mighty”: How Plex serves its global community
Meet the mind behind Automatoys
“I like the idea of a moment where players are left to say, ‘Well, I guess I’ll just start tapping and see what happens,’” says Steffan Glynn of Apple Design Award finalist Automatoys, an inspired puzzler in which players navigate elaborate contraptions with a single tap. Find out how Glynn brought his Rube Goldberg-inspired game to life.
The gorgeous gadgets of Automatoys
Catch up on the latest news and updates
Make sure you’re up to date on feature announcements, important guidance, new documentation, and more.
Get ready with the latest beta releases: Build your apps using the latest developer tools and test them on the most recent OS releases. Download Xcode 15.1 beta, and the beta 2 versions of iOS 17.1, iPadOS 17.1, macOS 14.1, tvOS 17.1, and watchOS 10.1.
Use RealityKit to create an interactive ride in visionOS: The developer sample project “Swift Splash” leverages RealityKit and Reality Composer Pro to create a waterslide by combining modular slide pieces. And once you finish your ride, you can release an adventurous goldfish to try it out.
Take your iPad and iPhone apps even further on Apple Vision Pro: A brand‑new App Store will launch with Apple Vision Pro, featuring apps and games built for visionOS, as well as hundreds of thousands of iPad and iPhone apps that run great on visionOS too.
App Store Connect API 3.0: This release includes support for Game Center, pre-orders by region, and more.
Debugging universal links: Investigate why your universal links are opening in Safari instead of your app.
We’d love to hear from you. If you have suggestions for our activities or stories, please let us know.
The beta versions of iOS 17.1, iPadOS 17.1, macOS 14.1, tvOS 17.1, and watchOS 10.1 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 15.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other comments. We value your feedback, as it helps us address issues, refine features, and update documentation.
Join us around the world for a variety of sessions, consultations, labs, and more — tailored for you.
Apple developer activities are for everyone, no matter where you are on your development journey. Activities take place all year long, both online and in person around the world. Whether you’re looking to enhance your existing app or game, refine your design, or launch a new project, there’s something for you.
Offering your app or game for pre-order is a great way to build awareness and excitement for your upcoming releases on the App Store. And now you can offer pre-orders on a regional basis. People can pre-order your app in a set of regions that you choose, even while it’s available for download in other regions at the same time. With this new flexibility, you can expand your app to new regions by offering it for pre-order and set different release dates for each region.
iOS 17, iPadOS 17, macOS Sonoma, tvOS 17, and watchOS 10 will soon be available to customers worldwide. Build your apps and games using the Xcode 15 Release Candidate and latest SDKs, test them using TestFlight, and submit them for review to the App Store. You can now start deploying seamlessly to TestFlight and the App Store from Xcode Cloud. With exciting new capabilities, as well as major enhancements across languages, frameworks, tools, and services, you can deliver even more unique experiences on Apple platforms.
Xcode and Swift. Xcode 15 enables you to code and design your apps faster with enhanced code completion, interactive previews, and live animations. Swift unlocks new kinds of expressive and intuitive APIs by introducing macros. The new SwiftData framework makes it easy to persist data using declarative code. And SwiftUI brings support for creating more sophisticated animations with phases and keyframes, and simplified data flows using the new Observation framework.
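As a flavor of that declarative style, here is a minimal SwiftData sketch; the Trip model, view names, and sort key are hypothetical examples rather than anything from the release notes.

```swift
import SwiftData
import SwiftUI

// Hypothetical model: the @Model macro makes this class persistable.
@Model
final class Trip {
    var name: String
    var startDate: Date

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@main
struct TripsApp: App {
    var body: some Scene {
        WindowGroup {
            TripListView()
        }
        // Sets up storage and injects a model context for the view tree.
        .modelContainer(for: Trip.self)
    }
}

struct TripListView: View {
    // @Query keeps this array in sync with the persistent store.
    @Query(sort: \Trip.startDate) private var trips: [Trip]
    @Environment(\.modelContext) private var context

    var body: some View {
        NavigationStack {
            List(trips) { Text($0.name) }
                .toolbar {
                    Button("Add") {
                        context.insert(Trip(name: "New trip", startDate: .now))
                    }
                }
        }
    }
}
```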
Widgets and Live Activities. Widgets are now interactive and run in new places, like StandBy on iPhone, the Lock Screen on iPad, the desktop on Mac, and the Smart Stack on Apple Watch. With SwiftUI, the system adapts your widget’s color and spacing based on context, extending its usefulness across platforms. Live Activities built with WidgetKit and ActivityKit are now available on iPad to help people stay on top of what’s happening live in your app.
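In code, interactivity comes down to pairing a widget control with an App Intent. The sketch below shows the general shape using a hypothetical reminder intent and view; in iOS 17, tapping Button(intent:) runs the intent in place, without launching the app.

```swift
import SwiftUI
import AppIntents

// Hypothetical intent the widget can run without opening the app.
struct MarkDoneIntent: AppIntent {
    static var title: LocalizedStringResource = "Mark Done"

    func perform() async throws -> some IntentResult {
        // Update shared app state here (e.g. in an App Group store);
        // WidgetKit then reloads the timeline to reflect the change.
        return .result()
    }
}

// The widget's entry view: the button triggers the intent in place.
struct ReminderWidgetView: View {
    var body: some View {
        VStack {
            Text("Water the plants")
            Button(intent: MarkDoneIntent()) {
                Label("Done", systemImage: "checkmark.circle")
            }
        }
    }
}
```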
Metal. The new game porting toolkit makes it easier than ever to bring games to Mac and the Metal shader converter dramatically simplifies the process of converting your game’s shaders and graphics code. Scale your games and production renderers to create even more realistic and detailed scenes with the latest updates to ray tracing. And take advantage of many other enhancements that make it even simpler to deliver fantastic games and pro apps on Apple silicon.
App Shortcuts. When you adopt App Shortcuts, your app’s key features are now automatically surfaced in Spotlight, letting people quickly access the most important views and actions in your app. A new design makes running your app’s shortcuts even simpler and new natural language capabilities let people execute your shortcuts with their voice with more flexibility.
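Adoption might look roughly like the sketch below; the workout intent, phrases, and type names are placeholders, not part of the announcement. Declaring an AppShortcutsProvider is what lets the system surface the shortcut in Spotlight and run it by voice.

```swift
import AppIntents

// Hypothetical intent exposing one of the app's key features.
struct StartWorkoutIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workout"

    func perform() async throws -> some IntentResult {
        // Kick off the workout here.
        return .result()
    }
}

// Surfaces the intent automatically in Spotlight and Siri,
// along with the spoken phrases that trigger it.
struct ExampleShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkoutIntent(),
            phrases: ["Start a workout in \(.applicationName)"],
            shortTitle: "Start Workout",
            systemImageName: "figure.run"
        )
    }
}
```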
App Store. It’s now even simpler to merchandise your in-app purchases and subscriptions across all platforms with new SwiftUI views in StoreKit. You can also test more of your product offerings using the latest enhancements to StoreKit testing in Xcode, the Apple sandbox environment, and TestFlight. With pre-orders by region, you can build customer excitement by offering your app in new regions with different release dates. And with the most dynamic and personalized app discovery experience yet, the App Store helps people find more apps through tailored recommendations based on their interests and preferences.
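As a sketch of what that merchandising can look like with the new StoreKit views (the product identifiers below are placeholders), a single SwiftUI view can load products and render ready-made purchase UI:

```swift
import StoreKit
import SwiftUI

// Minimal storefront: StoreView fetches the products for the given
// identifiers and renders purchase UI for them.
struct ShopView: View {
    var body: some View {
        StoreView(ids: [
            "com.example.premium.monthly",   // placeholder identifiers
            "com.example.premium.yearly"
        ])
    }
}
```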
And more. Learn about advancements in machine learning, Object Capture, Maps, Passkeys, SharePlay, and so much more.
Starting in April 2024, apps submitted to the App Store must be built with Xcode 15 and the iOS 17 SDK, tvOS 17 SDK, or watchOS 10 SDK (or later).
Apple Entrepreneur Camp supports underrepresented founders and developers, and encourages the pipeline and longevity of these entrepreneurs in technology. Building on the success of our alumni from cohorts for female*, Black, and Hispanic/Latinx founders, starting this fall, we’re expanding our reach to welcome professionals from Indigenous backgrounds who are looking to enhance and grow their existing app-driven businesses. Attendees benefit from one-on-one code-level guidance, receive insight, inspiration, and unprecedented access to Apple engineers and experts, and become part of the extended global network of Apple Entrepreneur Camp alumni.
Applications are now open for founders and developers from these groups who have either an existing app on the App Store, a functional beta build in TestFlight, or the equivalent. Attendees will join us online starting in October 2023. We welcome eligible entrepreneurs with app-driven organizations to apply and we encourage you to share these details with those who may be interested.
Apply by September 24, 2023.
* Apple believes that gender expression is a fundamental right. We welcome all women to apply to this program.
A brand‑new App Store will launch with Apple Vision Pro, featuring apps and games built for visionOS, as well as hundreds of thousands of iPad and iPhone apps that run great on visionOS too. Users can access their favorite iPad and iPhone apps side by side with new visionOS apps on the infinite canvas of Apple Vision Pro, enabling them to be more connected, productive, and entertained than ever before. And since most iPad and iPhone apps run on visionOS as is, your app experiences can easily extend to Apple Vision Pro from day one — with no additional work required.
Timing. Starting this fall, an upcoming developer beta release of visionOS will include the App Store. By default, your iPad and/or iPhone apps will be published automatically on the App Store on Apple Vision Pro. Most frameworks available in iPadOS and iOS are also included in visionOS, which means nearly all iPad and iPhone apps can run on visionOS, unmodified. Customers will be able to use your apps on visionOS early next year when Apple Vision Pro becomes available.
Making updates, if needed. If your app requires a capability that’s unavailable on Apple Vision Pro, App Store Connect will indicate that your app isn’t compatible and it won’t be made available. To make your app available, you can provide alternative functionality, or update its UIRequiredDeviceCapabilities. If you need to edit your existing app’s availability, you can do so at any time in App Store Connect.
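For reference, that declaration lives in the app’s Info.plist; the excerpt below is a hypothetical illustration. Each capability listed must exist on a device for the app to be offered there, so trimming the array to only what the app truly requires broadens availability.

```xml
<!-- Hypothetical Info.plist excerpt -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arm64</string>
    <!-- A capability tied to hardware the device lacks (for example,
         phone features) would mark the app incompatible; remove entries
         the app can function without. -->
</array>
```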
To see your app in action, use the visionOS simulator in Xcode 15 beta. The simulator lets you interact with and easily test most of your app’s core functionality. To run and test your app on an Apple Vision Pro device, you can submit your app for a compatibility evaluation or sign up for a developer lab.
Beyond compatibility. If you want to take your app to the next level, you can make your app experience feel more natural on visionOS by building your app with the visionOS SDK. Your app will adopt the standard visionOS system appearance and you can add elements, such as 3D content tuned for eyes and hands input. To learn how to build an entirely new app or game that takes advantage of the unique and immersive capabilities of visionOS, view our design and development resources.
Watch the replay from September 12 at apple.com.
The Apple Developer Program License Agreement has been revised to support upcoming features and updated policies, and to provide clarification. The revisions include:
Definitions, Section 3.3.39: Specified requirements for use of the Journaling Suggestions API.
Schedule 1 Exhibit D Section 3 and Schedules 2 and 3 Exhibit E Section 3: Added language about the Digital Services Act (DSA) redress options available to developers based in the European Union.
Schedule 1 Section 6.3 and Schedules 2 and 3 Section 7.3: Added clarifying language that the content moderation process is subject to human and systematic review and action pursuant to notices of illegal and harmful content.
As CEO of Flexibits, the team behind successful apps like Fantastical and Cardhop, Michael Simmons has spent more than a decade minding every last facet of his team’s work. But when he brought Fantastical to the Apple Vision Pro labs in Cupertino this summer and experienced it for the first time on the device, he felt something he wasn’t expecting.
“It was like seeing Fantastical for the first time,” he says. “It felt like I was part of the app.”
That sentiment has been echoed by developers around the world. Since debuting in early August, the Apple Vision Pro labs have hosted developers and designers like Simmons in London, Munich, Shanghai, Singapore, Tokyo, and Cupertino. During the day-long lab appointment, people can test their apps, get hands-on experience, and work with Apple experts to get their questions answered. Developers can apply to attend if they have a visionOS app in active development or an existing iPadOS or iOS app they’d like to test on Apple Vision Pro.
Learn more about Apple Vision Pro developer labs
For his part, Simmons saw Fantastical work right out of the box. He describes the labs as “a proving ground” for future explorations and a chance to push software beyond its current bounds. “A bordered screen can be limiting. Sure, you can scroll, or have multiple monitors, but generally speaking, you’re limited to the edges,” he says. “Experiencing spatial computing not only validated the designs we’d been thinking about — it helped us start thinking not just about left to right or up and down, but beyond borders at all.”
And as not just CEO but the lead product designer (and the guy who “still comes up with all these crazy ideas”), he came away from the labs with a fresh batch of spatial thoughts. “Can people look at a whole week spatially? Can people compare their current day to the following week? If a day is less busy, can people make that day wider? And then, what if like you have the whole week wrap around you in 360 degrees?” he says. “I could probably — not kidding — talk for two hours about this.”
‘The audible gasp’
David Smith is a prolific developer, prominent podcaster, and self-described planner. Shortly before his inaugural visit to the Apple Vision Pro developer labs in London, Smith prepared all the necessary items for his day: a MacBook, an Xcode project, and a checklist (on paper!) of what he hoped to accomplish.
All that planning paid off. During his time with Apple Vision Pro, “I checked everything off my list,” Smith says. “From there, I just pretended I was at home developing the next feature.”
I just pretended I was at home developing the next feature.
David Smith, developer and podcaster
Smith began working on a version of his app Widgetsmith for spatial computing almost immediately after the release of the visionOS SDK. Though the visionOS simulator provides a solid foundation to help developers test an experience, the labs offer a unique opportunity for a full day of hands-on time with Apple Vision Pro before its public release. “I’d been staring at this thing in the simulator for weeks and getting a general sense of how it works, but that was in a box,” Smith says. “The first time you see your own app running for real, that’s when you get the audible gasp.”
Smith wanted to start working on the device as soon as possible, so he could get “the full experience” and begin refining his app. “I could say, ‘Oh, that didn’t work? Why didn’t it work?’ Those are questions you can only truly answer on-device.” Now, he has plenty more plans to make — as evidenced by his paper checklist, which he holds up and flips over, laughing. “It’s on this side now.”
‘We understand where to go’
When it came to testing Pixite’s video creator and editor Spool, chief experience officer Ben Guerrette made exploring interactions a priority. “What’s different about our editor is that you’re tapping videos to the beat,” he says. “Spool is great on touchscreens because you have the instrument in front of you, but with Apple Vision Pro you’re looking at the UI you’re selecting — and in our case, that means watching the video while tapping the UI.”
The team spent time in the lab exploring different interaction patterns to address this core challenge. “At first, we didn’t know if it would work in our app,” Guerrette says. “But now we understand where to go. That kind of learning experience is incredibly valuable: It gives us the chance to say, ‘OK, now we understand what we’re working with, what the interaction is, and how we can make a stronger connection.’”
Chris Delbuck, principal design technologist at Slack, had intended to test the company’s iPadOS version of their app on Apple Vision Pro. As he spent time with the device, however, “it instantly got me thinking about how 3D offerings and visuals could come forward in our experiences,” he says. “I wouldn’t have been able to do that without having the device in hand.”
‘That will help us make better apps’
As lab participants like Smith continue their development at home, they’ve brought back lessons and learnings from their time with Apple Vision Pro. “It’s not necessarily that I solved all the problems — but I solved enough to have a sense of the kinds of solutions I’d likely need,” Smith says. “Now there’s a step change in my ability to develop in the simulator, write quality code, and design good user experiences.”
I've truly seen how to start building for the boundless canvas.
Michael Simmons, Flexibits CEO
Simmons says that the labs offered not just a playground, but a way to shape and streamline his team’s thinking about what a spatial experience could truly be. “With Apple Vision Pro and spatial computing, I’ve truly seen how to start building for the boundless canvas — how to stop thinking about what fits on a screen,” he says. “And that will help us make better apps.”
As announced in April, your customers will soon be able to resolve payment issues without leaving your app, making it easier for them to stay subscribed to your content, services, and premium features.
Starting August 14, 2023, if an auto-renewable subscription doesn’t renew because of a billing issue, a system-provided sheet will appear in your app with a prompt that lets customers update the payment method for their Apple ID. You can test this sheet in Sandbox, and you can delay or suppress it using StoreKit’s Message and display APIs. This feature is available in iOS 16.4 and iPadOS 16.4 or later, and no action is required to adopt it.
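If the sheet would otherwise appear at an inconvenient moment, your app can take over its presentation. Below is a minimal sketch using StoreKit’s Message API on iOS 16.4 or later; the class name and the isCheckoutInProgress flag are hypothetical placeholders.

import StoreKit
import UIKit

// Defers the system billing-issue sheet until a convenient moment.
final class BillingIssueMessageHandler {
    var isCheckoutInProgress = false
    private var pending: [Message] = []

    func listen(in scene: UIWindowScene) {
        Task { @MainActor in
            // Iterating Message.messages tells the system not to present
            // pending messages automatically; the app chooses when to show them.
            for await message in Message.messages where message.reason == .billingIssue {
                if isCheckoutInProgress {
                    pending.append(message) // hold until checkout completes
                } else {
                    try? message.display(in: scene)
                }
            }
        }
    }
}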
Apple is committed to protecting user privacy on our platforms. We know that there are a small set of APIs that can be misused to collect data about users’ devices through fingerprinting, which is prohibited by our Developer Program License Agreement. To prevent the misuse of these APIs, we announced at WWDC23 that developers will need to declare the reasons for using these APIs in their app’s privacy manifest. This will help ensure that apps only use these APIs for their intended purpose. As part of this process, you’ll need to select one or more approved reasons that accurately reflect how your app uses the API, and your app can only use the API for the reasons you’ve selected.
Starting in fall 2023, when you upload a new app or app update to App Store Connect that uses an API (including from third-party SDKs) that requires a reason, you’ll receive a notice if you haven’t provided an approved reason in your app’s privacy manifest. And starting in spring 2024, in order to upload your new app or app update to App Store Connect, you’ll be required to include an approved reason in the app’s privacy manifest which accurately reflects how your app uses the API.
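For illustration only, a privacy manifest (PrivacyInfo.xcprivacy) entry declaring the user defaults required-reason API might look like the sketch below, using reason code CA92.1 (accessing data the app itself wrote); confirm the exact category and reason codes for your app against the current documentation.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <!-- Required-reason API category used by the app or its SDKs -->
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <!-- Approved reason: access information from the same app -->
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>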
If you have a use case for an API with required reasons that isn’t already covered by an approved reason and the use case directly benefits the people using your app, let us know.
Join us for online sessions August 1 through 24 to learn about the latest App Store features and get your questions answered. Live presentations with Q&A are being held in multiple time zones and languages.
We can help you make sure your visionOS, iPadOS, and iOS apps behave as expected on Vision Pro. Align your app with the newly published compatibility checklist, then request to have your app evaluated directly on Vision Pro.
Apple Vision Pro developer labs
Experience your visionOS, iPadOS, and iOS apps running on Vision Pro. With support from Apple, you’ll be able to test and optimize your apps for the infinite spatial canvas. Labs are available in Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.
Apple Vision Pro developer kit
Have a great idea for a visionOS app that requires building and testing on Vision Pro? Apply for a Vision Pro developer kit. With continuous, direct access to Vision Pro, you’ll be able to quickly build, test, and refine your app so it delivers amazing spatial experiences on visionOS.
The App Store’s commerce and payments system was built to empower you to conveniently set up and sell your products and services on a global scale in 44 currencies across 175 storefronts. When tax regulations or foreign exchange rates change, we sometimes need to update prices on the App Store in certain regions and/or adjust your proceeds. These updates are done using publicly available exchange rate information from financial data providers to help ensure prices for apps and in‑app purchases stay equalized across all storefronts.
On July 25, pricing for apps and in‑app purchases (excluding auto‑renewable subscriptions) will be updated for the Egypt, Nigeria, Tanzania, and Türkiye storefronts. These updates also consider the following tax changes:
The Pricing and Availability section of My Apps has been updated in App Store Connect to display these upcoming price changes. As always, you can change the prices of your apps, in‑app purchases, and auto‑renewable subscriptions at any time.
How this impacts proceeds and tax administration
Your proceeds for sales of apps and in-app purchases (including auto‑renewable subscriptions) will change to reflect the new tax rates and updated prices. Exhibit B of the Paid Applications Agreement has been updated to indicate that Apple collects and remits applicable taxes in Egypt and Tanzania.
Learn more about managing your prices
Selecting a base country or region
The App Store was created to be a safe and trusted place for users to get apps, and a great business opportunity for developers. Apple platforms and the apps you build have become important to many families, as children use our products and services to explore the digital world and communicate with family and friends. We hold apps for kids and those with user-generated content and interactions to the highest standards. To continue delivering safe experiences for families together, we wanted to remind you about the tools, resources, and requirements that are in place to help keep users safe in your app.
Made for Kids
If you have an app that’s intended for kids, we encourage you to use the Kids category, which is designed for families to discover age-appropriate content and apps that meet higher standards that protect children’s data and offer added safeguards for purchases and permissions (e.g., for Camera and Location).
Learn more about building apps for Kids.
Parental controls
Your app’s age rating is integrated into our operating systems and works with parental control features, like Screen Time. Additionally, with Ask To Buy, when kids want to buy or download a new app or in-app purchase, they send a request to the family organizer. You can also use the Managed Settings framework to ensure the content in your app is appropriate for any content restrictions that may have been set by a parent. The Screen Time API is a powerful tool for parental control and productivity apps to help parents manage how children use their devices. Learn more about the tools we provide to support parents to help them know, and feel good about, what kids are doing on their devices.
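To make the Screen Time API concrete, here is a minimal sketch (not Apple sample code) that shields the apps a parent picks with FamilyActivityPicker; it assumes the Family Controls entitlement, and the function name is hypothetical.

import FamilyControls
import ManagedSettings

// Shields parent-selected apps and categories until the settings are cleared.
func shieldSelectedApps(_ selection: FamilyActivitySelection) async throws {
    // One-time Family Controls authorization on the child's device.
    try await AuthorizationCenter.shared.requestAuthorization(for: .child)

    let store = ManagedSettingsStore()
    store.shield.applications = selection.applicationTokens
    store.shield.applicationCategories = .specific(selection.categoryTokens)
}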
Sensitive and inappropriate content
Apps with user-generated content and interactions must include a set of safeguards to protect users, including a method for filtering objectionable material from being posted to the app, a mechanism to report offensive content and support timely responses to concerns, and the ability to block abusive users. Apps containing ads must include a way for users to report inappropriate and age-inappropriate ads.
iOS 17, iPadOS 17, macOS Sonoma, and watchOS 10 introduce the ability to detect and alert users to nudity in images and videos before displaying them onscreen. The Sensitive Content Analysis framework uses on-device technology to detect sensitive content in your app. Tailor your app experience to handle detected sensitive content appropriately for users who have Communication Safety or Sensitive Content Warning enabled.
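As a minimal sketch, a messaging app might check an incoming image before showing it along these lines; the Sensitive Content Analysis entitlement is assumed and the function name is illustrative.

import SensitiveContentAnalysis

// Returns true when an incoming image should be obscured before display.
func shouldObscure(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()
    // The policy is .disabled unless the user turned on Communication Safety
    // or Sensitive Content Warning.
    guard analyzer.analysisPolicy != .disabled else { return false }
    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        return false // decide how your app should treat analysis failures
    }
}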
Supporting users
Users have multiple ways to report issues with an app, like Report a Problem. Users can also communicate app feedback to other users and developers by writing reviews of their own; users can Report a Concern with other individual user reviews. You should closely monitor your user reviews to improve the safety of your app, and have the ability to address concerns directly. Additionally, if you believe another app presents a trust or safety concern, or is in violation of our guidelines, you can share details with Apple to investigate.
These user review tools are critical to informing the work we do to keep the App Store safe. Apple deploys a combination of machine learning, automation, and human review to monitor concerns related to abuse submitted via user reviews and Report a Problem. We monitor for topics of concern such as reports of fraud and scams, copycat violations, inappropriate content and advertising, privacy and safety concerns, objectionable content, and child exploitation; we use techniques such as semi-supervised Correlation Explanation (CorEx) models and Bidirectional Encoder Representations from Transformers (BERT)-based large language models specifically trained to recognize these topics. Flagged topics are then surfaced to our App Review team, who investigate the app further and take action if violations of our guidelines are found.
We believe we have a shared mission with you as developers to create a safe and trusted experience for families, and look forward to continuing that important work. Here are some resources that you may find helpful:
Sensitive Content Analysis framework
Learn about Ratings, Reviews, and Responses
Report a Trust & Safety concern related to another app
Now it’s even easier to design your apps quickly and accurately with new and updated design resources for creating apps on Apple platforms.
You can now start creating cutting-edge spatial computing apps for the infinite canvas of Apple Vision Pro. Download Xcode 15 beta 2, which includes the visionOS SDK and Reality Composer Pro (a new tool that makes it easy to preview and prepare 3D content for visionOS). Add a visionOS target to your existing project or build an entirely new app, then iterate on your app in Xcode Previews. You can interact with your app in the all-new visionOS simulator, explore various room layouts and lighting conditions, and create tests and visualizations. New documentation and sample code are also available to help you through the development process.
With the visionOS SDK, developers worldwide can begin designing, building, and testing apps for Apple Vision Pro.
For Ryan McLeod, creator of iOS puzzle game Blackbox, the SDK brought both excitement and a little nervousness. “I didn’t expect I’d ever make apps for a platform like this — I’d never even worked in 3D!” he says. “But once you open Xcode you’re like: Right. This is just Xcode. There are a lot of new things to learn, of course, but the stuff I came in knowing, the frameworks — there’s very little change. A few tweaks and all that stuff just works.”
visionOS is designed to help you create spatial computing apps and offers many of the same frameworks found on other Apple platforms, including SwiftUI, UIKit, RealityKit, and ARKit. As a result, most developers with an iPadOS or iOS app can start working with the platform immediately by adding the visionOS destination to their existing project.
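In practice the starting point is small. Here is a rough sketch of a visionOS app entry point in SwiftUI, where an existing window runs as-is and a volumetric window sits alongside it; SpatialSampleApp, ContentView, and GlobeView are hypothetical names.

import SwiftUI

@main
struct SpatialSampleApp: App {
    var body: some Scene {
        // The same window an iPhone or iPad app presents works on visionOS.
        WindowGroup {
            ContentView()
        }

        // A second window styled as a volume for 3D content.
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)
    }
}

struct ContentView: View {
    var body: some View { Text("Hello, spatial world") }
}

struct GlobeView: View {
    // A real app might host a RealityView with 3D content here.
    var body: some View { Text("3D content goes here") }
}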
“It was great to be able to use the same familiar tools and frameworks that we have been using for the past decade developing for iOS, iPadOS, macOS, and watchOS,” says Karim Morsy, CEO and co-founder of Algoriddim. “It allowed us to get our existing iPad UI for djay running within hours.”
Even for developers brand new to Apple platforms, the onboarding experience was similarly smooth. “This was my first time using a Mac to work,” says Xavi H. Oromí, chief engineering officer at XRHealth. “At the beginning, of course, a new tool like Xcode takes time to learn. But after a few days of getting used to it, I didn’t miss anything from other tools I’d used in the past.”
In addition to support for visionOS, the Xcode 15 beta also provides Xcode Previews for visionOS and a brand new Simulator, so that people can start exploring their ideas immediately. “Transitioning between ideas, using the Simulator to test them, it was totally organic,” says Oromí. “It’s a great tool for prototyping.”
In the visionOS simulator, developers can preview apps and interactions on Vision Pro. This includes running existing iPad and iPhone apps as well as projects that target the visionOS SDK. To simulate eye movement while in an app, you can use your cursor to focus an element, and a click to indicate a tap gesture. In addition to testing appearance and interactions, you can also explore how apps perform in different background and lighting scenarios using Simulated Scenes. “It worked out of the box,” says Zac Duff, CEO and co-founder of JigSpace. “You could trust what you were seeing in there was representative of what you would see on device.”
The SDK also includes a new development tool — Reality Composer Pro — which lets you preview and prepare 3D content for your visionOS apps and games. You can import and organize assets, add materials and particle effects, and bring them right back into Xcode thanks to tight build integration. “Being able to quickly test things in Reality Composer Pro and then get it up and running in the simulator meant that we were iterating quickly,” says Duff. “The feedback loop for developing was just really, really short.”
McLeod had little experience with 3D modeling and shaders prior to developing for visionOS, but breaking Blackbox out of its window required thinking in a new dimension. To get started, McLeod used Reality Composer Pro to develop the almost-ethereal 3D bubbles that make up Blackbox’s main puzzle screen. “You can take a basic shape like a sphere and give it a good shader and make sure that it's moving in a believable way,” says McLeod. “That goes incredibly far.”
The visionOS SDK also brings new Instruments like RealityKit Trace to developers to help them optimize the performance of their spatial computing apps. As a newcomer to using RealityKit in his apps, McLeod notes that he was “really timid” with the rendering system at first. “Anything that's running every single frame, you're thinking, 'I can't be checking this, and animating that, and spawning things. I'm going to have performance issues!'” he laughs. “I was pretty amazed at what the system could handle. But I definitely still have performance gains to be made.”
For developers like Caelin Jackson-King, an iOS software engineer for Splunk’s augmented reality team, the SDK also prompted great team discussions about updating their existing codebase. “It was a really good opportunity to redesign and refactor our app from the bottom up to have a much cleaner architecture that supported both iOS and visionOS,” says Jackson-King.
The JigSpace team had similar discussions as they brought more RealityKit and SwiftUI into their visionOS experience. “Once we got comfortable with the system, it was like a paradigm shift,” says Duff. “Rather than going, ‘OK, how do we do this thing?’, we could be more like, ‘What do we want to do next?’ Because we now have command of the tools.”
You can explore those tools now on developer.apple.com along with extensive technical documentation and sample code, design kits and tools for visionOS, and updates to the Human Interface Guidelines.
Learn more about developing for visionOS
Thank you to everyone who joined us for an amazing week. We hope you found value, connection, and fun. You can continue to:
We’d love to know what you thought of this year’s conference. If you’d like to tell us about your experience, please complete the WWDC23 survey.
Looking to explore all the big updates from an incredible week of sessions? Start with this collection of essential videos across every topic. And as always, you can watch the full set of sessions any time.
Spatial Computing
Principles of spatial design Watch now
Meet SwiftUI for spatial computing Watch now
Meet UIKit for spatial computing Watch now
Design for spatial user interfaces Watch now
Get started with building apps for spatial computing Watch now
Build great games for spatial computing Watch now
Develop your first immersive app Watch now
Meet Object Capture for iOS Watch now
Meet Safari for spatial computing Watch now

Developer Tools
What’s new in Xcode 15 Watch now

Swift
What’s new in Swift Watch now
Meet SwiftData Watch now

SwiftUI & UI Frameworks
What’s new in SwiftUI Watch now
What’s new in UIKit Watch now
What’s new in AppKit Watch now

Design
What’s new in SF Symbols 5 Watch now
Meet watchOS 10 Watch now
Design dynamic Live Activities Watch now

Graphics & Games
Bring your game to Mac, Part 1: Make a game plan Watch now
Your guide to Metal ray tracing Watch now

App Store Distribution & Marketing
What’s new in App Store Connect Watch now
Explore App Store Connect for spatial computing Watch now

ML & Vision
Discover machine learning enhancements in Create ML Watch now
Lift subjects from images in your app Watch now

Privacy & Security
What’s new in privacy Watch now

App Services
What’s new in Core Motion Watch now
What’s new in Wallet and Apple Pay Watch now
Meet StoreKit for SwiftUI Watch now
Meet MapKit for SwiftUI Watch now

Safari & Web
What’s new in Safari extensions Watch now
What’s new in web apps Watch now
Explore media formats for the web Watch now

Accessibility & Inclusion
Build accessible apps with SwiftUI and UIKit Watch now
Perform accessibility audits for your app Watch now

Photos & Camera
Discover Continuity Camera for tvOS Watch now
Create a more responsive camera experience Watch now

Audio & Video
What’s new in voice processing Watch now
Add SharePlay to your app Watch now

System Services
What’s new in Core Data Watch now

Business & Education
Meet device management for Apple Watch Watch now
Explore advances in declarative device management Watch now
What’s new in managing Apple devices Watch now

Health & Fitness
Build custom workouts with WorkoutKit Watch now
Build a multi-device workout app Watch now

Build great apps and games for everyone.

App Services: Extend your app’s experience.
App Store Distribution & Marketing: Market your app and grow your audience.
Audio & Video: Build audio and video experiences for your app.
Business & Education: Deploy and manage Apple devices in your classroom or office.
Coding & Design Essentials: New to WWDC? Start right here.
Design: Create compelling interfaces and experiences.
Developer Tools: Explore the tools you need to build the next great app or game.
Graphics & Games: Launch your games and level up your graphics.
Health & Fitness: Get your health and fitness app in great shape.
Maps & Location: Help people find where they are and where they’re going.
ML & Vision: Bring the power of machine learning to your app.
Photos & Camera: Focus on the latest in camera and photography apps.
Privacy & Security: Tighten the privacy and security of your apps and games.
Safari & Web: Explore Safari and web technologies.
Spatial Computing: Get ready to build and design an entirely new universe of apps and games.
Swift: Learn the latest updates for Swift.
SwiftUI & UI Frameworks: Build interfaces that feel right at home on Apple platforms.
System Services: Empower your app by leveraging the system.
The final day of WWDC is upon us — but before we power down, here's a look at some of the activities and sessions available today.
Get ready for day five
We’ve saved some of the best for last. Pop into Slack to learn more about Metal, meet some super SwiftUI presenters, and explore spatial computing.

Q&A: Games for visionOS View now
Q&A: Bring your ARKit app to visionOS View now
Q&A: SwiftUI for visionOS View now
Q&A: Spatial design View now
Q&A: Metal View now
Meet the presenters: Design with SwiftUI View now

In today’s new sessions, you can learn to animate with springs, explore Core Motion, and get a taste of the SwiftUI cookbook for focus.

Animate with springs Watch now
What’s new in Core Motion Watch now
The SwiftUI cookbook for focus Watch now
Elevate your windowed app for spatial computing Watch now

It’s your last chance to take part in Dev Tools Trivia Time! Plus, make new friends at the Friday icebreaker.

Dev Tools Trivia Time View now
Immersive icebreaker View now

Check out podcasts from WWDC
Catch up on the week with podcasts from developers and developer advocates, recorded at Apple Park.

Send us your feedback
We’d love to know what you liked about WWDC23 — and how we can do even better. Send in your feedback about this year’s conference.

And that's a wrap!
Thanks for being part of another incredible WWDC. It’s been a fantastic week of meeting and celebrating, connecting online through labs and activities, and exploring all the new sessions. We appreciate the opportunity to share all of this with you.
Every year, the Swift Student Challenge recognizes students all over the world who’ve created remarkable app playgrounds.
The 2023 edition drew submissions from more than 30 countries and regions, and covered topics as varied as healthcare, sports, entertainment, and the environment. And while the submissions were diverse, their creators had a common goal: To share their passions with the world through coding.
Coding gives me the freedom to feel like an artist — my canvas is the code editor, and my brush is the keyboard.
Yemi Agesin, 2023 Swift Student Challenge winner
This year, Apple increased the number of winners from 350 to 375 to recognize even more students for their artistry and ingenuity — and we’re proud to introduce three of them. Meet first-time Swift Student Challenge winners Asmi Jain, Yemi Agesin, and Marta Michelle Caliendo.
Day four is here — and a fresh round of sessions, labs, and activities await.
Get started with labs and sessions
Curious about the difference between the Shared Space and a Full Space in visionOS? Want to learn more about Observable? There’s a Q&A for that. Kick off another full day by chatting with engineers and designers about SwiftUI, Xcode, and all things spatial.

Q&A: SwiftUI for visionOS View now
Q&A: Build UIKit apps for visionOS View now
Q&A: Bring your ARKit app to visionOS View now
Q&A: SwiftUI View now
Q&A: Xcode View now

We’ve got incredible new sessions on Live Activities, Metal, spatial experiences, and more.

Design dynamic Live Activities Watch now
Optimize GPU renderers with Metal Watch now
Explore rendering for spatial computing Watch now
Create a great spatial playback experience Watch now

And there are even more exciting activities happening today, including an informal icebreaker and another fierce round of Dev Tools Trivia Time.

Immersive icebreaker View now
Dev Tools Trivia Time View now

If you haven’t signed up for a one-on-one lab this week, time is running out! Today is your last day to request an appointment for Friday. To make a request, visit the WWDC tab in the Apple Developer app or go to the WWDC labs webpage. App Store labs are also available in Chinese, Japanese, and Korean.
Learn more about labs at WWDC23
Download the new Figma design kit
Now, by popular demand, you can download an all-new iOS and iPadOS design kit for Figma.
Apple Design Resources – iOS 17 and iPadOS 17
Discover documentation and sample code
Browse new and updated documentation and sample code to learn about the latest technologies, frameworks, and APIs introduced at WWDC23. You’ll find new ways to enhance your apps targeting the latest platform releases.

Browse portraits of the 2023 Apple Design Award winners
We snapped some great portraits of our Apple Design Award-winning developers at Monday’s ceremony. Take a look at all 12 below, and then dive deeper into the stories of their apps through our Behind the Design series.

Behind the Design: 2023 Apple Design Awards View now

Evan Kice, Afterplace
Luke Beard, Any Distance
Bob Meese, Duolingo
Philipp Nägelsbach, Endling
Ryan Jones, Flighty
Jeff Birkeland, Headspace
Ben Brode, MARVEL SNAP
Luke Spierewka, Railbound
Tsuyoshi Kanda, Resident Evil Village
Jakob Lykkegaard, stitch.
Swupnil Sahai, SwingVision
Joseph Cohen, Universe
Have fun out there, and we’ll catch you tomorrow for the final day of WWDC!
Two days are in the books — and there’s so much more to come. Get ready for another big day at WWDC.
Dive into sessions and activities
Start off in Slack, where you can connect with Apple engineers and designers on spatial design, WidgetKit, machine learning, 3D content, and much more.

Q&A: Spatial design View now
Q&A: WidgetKit View now
Q&A: Machine learning open forum View now
Q&A: Create 3D content for Apple platforms View now

We’ve also posted new sessions on topics like SwiftUI, widgets, SwiftData, and Xcode test reports.

Design with SwiftUI Watch now
Bring widgets to life Watch now
Build an app with SwiftData Watch now
Fix failures faster with Xcode test reports Watch now

Test your knowledge in Dev Tools Trivia Time, WWDC’s fiercest competition! And come hang out with the SwiftUI team and chat about sessions, meet other members of the community, and share tips and tricks.

Dev Tools Trivia Time View now
Break the SwiftUIce View now

There’s still time to request lab appointments to meet one-on-one with experts about technology, design, app review, the App Store, and more. To make a request, visit the WWDC tab in the Apple Developer app or go to the WWDC labs webpage.
Learn more about labs at WWDC23
A sneak peek at the visionOS SDK
Developers attending the special event at Apple Park visited the Apple Developer Center on Tuesday to learn more about building apps for Apple's new spatial operating system. “Going in, I was under the impression it was going to be tricky, or hard, or ‘where do I start?’” says Paul Hudson, iOS developer and founder of Hacking with Swift. “But actually — if you take what you know and add a little bit, you can make something good and then increment from there. It doesn’t take much to get something great. That’s my main takeaway.”
Find out how developers of apps like djay, Blackbox, JigSpace, and XRHealth are starting to build for spatial computing.
Learn more about developing for visionOS
Spotlight on: Developing for visionOS View now

你好, こんにちは, Human Interface Guidelines!
The Human Interface Guidelines are now available in Chinese and Japanese! And you can check out updated design recommendations for watchOS, App Shortcuts, widgets, and all the latest platform releases.
Enjoy your day and we’ll catch you tomorrow for day four!
Welcome to day two of WWDC! There’s more than ever to explore this week: Xcode is getting updated, SwiftUI is getting animated, and — did we mention? — apps are getting a lot more spatial. Here’s a guide to what happened yesterday and what’s on tap today.
Catch up on day one
For the second year in a row, we welcomed more than 1,000 developers to Apple Park for the WWDC keynote and Platforms State of the Union to learn about the future of Apple platforms.
With new frameworks, a new spatial operating system, and new hardware designed for developers, there’s an incredible amount to dig into this year. Catch up quickly with this recap of the most important big (and little!) moments from the keynote:
17 big & little things at WWDC23 Watch nowWant the complete experience? Here are the full replays for each event.
Keynote Watch now Platforms State of the Union Watch now Meet Apple Vision ProOn day one of WWDC, you got a peek at visionOS, Apple’s new spatial operating system — and that was just the beginning. There are familiar and new frameworks to learn, new tools like Reality Composer Pro to explore, and new in-person programs coming soon.
Learn more about developing for visionOS
Prepare your apps for visionOS
Explore sessions about visionOS
Start your Tuesday
We’re off and running with more than 60 sessions, 100 online activities, and the opportunity to schedule one-on-one lab appointments with Apple experts. Here’s a quick look at all we’ve got in store:

What Apple developers need to know at WWDC23 Watch now

Need a place to start? Check out the latest updates to watchOS 10, an introduction to SwiftData, and the principles of spatial design.

Meet watchOS 10 Watch now
Meet SwiftData Watch now
Principles of spatial design Watch now

New this year: Many session videos now offer chapter markers, so you can skip right to the content you’re looking for. (You’ll find chapter markers for the keynote, as well.)
Join us in Slack to connect with the presenters of sessions like “Meet SwiftUI for spatial computing” and “What’s new in SwiftUI” and join Q&As about game design, Xcode 15, and much more.
Meet the presenter: Meet SwiftUI for spatial computing View now
Meet the presenters: What’s new in SwiftUI View now
Q&A: Games View now
Q&A: Xcode View now

Dev Tools Trivia Time is bigger and better than ever — test your knowledge in WWDC’s fiercest competition!

Dev Tools Trivia Time View now

And connect with Apple experts directly by requesting one-on-one lab appointments for answers to your questions about technology, design, and maximizing your App Store presence. To make a request, visit the WWDC tab in the Apple Developer app or go to the WWDC labs webpage.
Learn more about labs at WWDC23
Congrats to the 2023 Apple Design Award winners
Yesterday, we handed out the 2023 Apple Design Awards and added 12 new titles to the list of the greatest apps and games ever created for Apple platforms. Check out the complete list of 2023 winners and finalists below. Then, get up close and personal with the winning developers, designers, and teams in our Behind the Design series.
Behind the Design: 2023 Apple Design Awards View now

Press play: WWDC23 playlists are here
Lastly, here’s an audio gift for you! Spin up our official playlists — the perfect soundtrack to an incredible week.
Playlist: WWDC23 Coding Energy
That's it for now. Have a great day, and we'll see you tomorrow!
Online labs and activities are a great way to connect with Apple engineers, designers, and experts all week long.
One-on-one labs
Get personalized guidance about development basics, complex concepts, and everything in between. Learn how to implement new Apple technologies, explore UI design principles, improve your App Store presence, and much more.
Activities
There are plenty of exciting activities happening daily on Slack.
Labs and activities are open to all members of the Apple Developer Program and Apple Developer Enterprise Program, as well as 2023 Swift Student Challenge applicants.
Every year, the Behind the Design series takes a special look at the remarkable teams behind the Apple Design Award-winning apps and games. Read on to meet 12 incredible teams from around the world and learn how they brought their winning ideas to life.
Winners in this category provide a great experience for all by supporting people from a diversity of backgrounds, abilities, and languages.
App
Universe
Launched in 2017, the powerful, versatile, and almost unbelievably simple Universe makes creating a website as easy as building with blocks. The app operates on a grid system. To create a site, add blocks to the grid, and to edit a site, move those blocks around. The app doesn’t just remove barriers, it bulldozes them. “Our goal is making this technology available to everybody,” says founder Joseph Cohen.
Behind the Design: Universe View now

Game
stitch.
For all its many genres and styles, the gaming world has been awfully threadbare when it comes to experiences about embroidery. That all changes with stitch., a charming cross between casual puzzler, meditative exercise, and afternoon craft project — and as cross-generational a game as you’re likely to find. “We pride ourselves on making games that anyone can play,” says Jakob Lykkegaard, founder of Lykke Studios, the team behind stitch. “It’s important to spend the time to make them available for everyone.”
Behind the Design: stitch. View now

Winners in this category provide memorable, engaging, and satisfying experiences that are enhanced by Apple technologies.
App
Duolingo
What makes Duolingo such an engaging way to learn a language? The answer is hiding in plain sight. “The secret to Duolingo is that we’re not an education company. We’re a fun and motivation company,” says Ryan Sims, VP of design. “Fun is the most important part of the work we do.”
Behind the Design: Duolingo View now

Game
Afterplace
At first glance, Evan Kice’s Afterplace appears to have time-traveled from the late 1980s. But it’s a decidedly modern game too — fast, fluid, and incredibly easy to pick up. Enemies lurk everywhere and levels stretch out in all directions; what looks like a humble library is secretly a multilevel maze. “I always loved it when a game just kept going,” says Kice. “I was fascinated by the idea that a game could hold an entire country.”
Behind the Design: Afterplace View now

Winners in this category deliver intuitive interfaces and effortless controls that are perfectly tailored to their platform.
App
Flighty
Flighty might be the easiest thing travelers navigate on their entire trip. “Travel can be a high-stress situation,” says Ryan Jones, the Austin-based developer who founded the app in 2019. “We want Flighty to work so well that it feels almost boringly obvious.”
Behind the Design: Flighty View now

Game
Railbound
In Railbound, players are challenged to link train cars in proper order by laying down track through a mechanic that’s as simple as finger painting. “I pay a lot of attention to input,” says Luke Spierewka of the game’s Afterburn studio. “For Railbound, I wanted a system where you basically paint rail tiles with one finger.”
Behind the Design: Railbound View now

Winners in this category improve lives in a meaningful way and shine a light on crucial issues.
App
Headspace
Few apps have made mindfulness as accessible as Headspace. More than a decade since its launch, the app continues to set the standard for mental health apps. “Mindfulness, meditation, mental health — none of these are easy to navigate,” says Jeff Birkeland, senior vice president for member products. “An app that feels warm, friendly, and easy to use can provide approachable support for tough issues.”
Behind the Design: Headspace View now

Game
Endling
Endling is a 3D adventure in which you play as a fox navigating a land charred by environmental disaster and human impact. It’s also a powerful mix of medium and message. “It’s a survival game, but a simplified one that focuses more on telling a story,” says Philipp Nägelsbach, game designer and producer at HandyGames.
Behind the Design: Endling View now

Winners in this category feature stunning imagery, skillfully drawn interfaces, and high-quality animations that lend to a distinctive and cohesive theme.
App
Any Distance
Luke Beard, the Atlanta-based designer who created Any Distance with engineer Daniel Kuntz, says the app is “for everyone, not just athletes.” Their app is a design-forward fitness tracker and social network that delivers workout stats in beautiful and shareable formats — dynamic charts and graphs, animated 3D maps, AR experiences, and gorgeous cards — that can integrate photos. And its name is also its philosophy: Any distance counts, not just a swim or bike ride, but a walking meeting, stroller run, or its most popular option, a dog walk.
Behind the Design: Any Distance View now

Game
Resident Evil Village
The horror adventure comes to Mac with Apple silicon, with all the visual achievements fans of the long-running series could hope for. From its creepy castle to its decrepit factories to its magnificently hideous villains, Resident Evil Village offers some of the most realistic graphics ever seen on Apple devices. “The concept was a horror theme park with unique characters that stand out against a beautiful environment,” says producer Tsuyoshi Kanda.
Behind the Design: Resident Evil Village View now

Winners in this category provide a state-of-the-art experience through novel use of Apple technologies that set them apart in their genre.
App
SwingVision
When Swupnil Sahai started creating SwingVision, he had no app-building experience — but he’d played a lot of tennis. “The initial idea was, ‘Maybe we can use the accelerometer and gyroscope on Apple Watch to figure out how fast I’m swinging, and maybe we can use the [Apple Watch] screen to keep score,’” says Sahai. “That was really it.” Today, SwingVision has become an integral part of the tennis community.
Behind the Design: SwingVision View now

Game
MARVEL SNAP
MARVEL SNAP reboots the collectible card game genre with brisk gameplay, a wild cast of superheroes, and its “snap” mechanic, a double-or-nothing bet that adds whole new layers of strategy-slash-psychological warfare. “Our goal as designers is to maximize that ratio of complexity and depth,” says Ben Brode, chief development officer for Second Dinner.
Behind the Design: MARVEL SNAP View now

As a kid, Universe founder Joseph Cohen loved nearly everything about the internet: how it brought people together, created avenues for his twin passions of creativity and commerce, and democratized the flow of information. “I grew up in New York,” Cohen says, “but I like to say that I really grew up on the internet.”
Today, Cohen’s passion is still the internet — but he’s no longer just living in it. He’s striving to improve it. “Our goal is making this technology available to everybody,” he says.
Launched in 2017, the powerful, versatile, and almost unbelievably simple Universe makes creating a website as easy as building with blocks. The app operates on a grid system. To create a site, add blocks to the grid, and to edit a site, move those blocks around. No knowledge of coding, design, or publishing is necessary — Universe even handles the process of acquiring and publishing to specific domain names. The app doesn’t just remove barriers, it bulldozes them. Today, Universe powers storefronts, artist portfolios, musician pages, community group hubs, personal web presences, and everything in between.
In the past year, Universe empowered more people than ever with a series of accessibility upgrades directly inspired by people’s feedback. In one example, a high-school student in California who is blind reached out to ask for better VoiceOver support — and the Universe team quickly came up with an elegant idea.
“We learned that the grid system we designed is perfectly fitted to screen readers,” he says. “VoiceOver works by reading from the top left of the page, so when you have a grid-based coordinate system, it will walk right through what’s on the screen. It’ll say, ‘OK, in position one and two, you have an image of flowers,’ and so forth.”
The team refined the feature by working closely with a number of people who are blind or have low vision — many of whom have Universe-created sites online right now. And they kept going, adding Dynamic Type to scale text as well as accessibility upgrades to the app’s general navigation, settings, audience metrics, and more.
The team's latest project aims to make it even easier for anyone to get started with web design. Cohen and team are working on an AI feature that will instantly generate or refine a custom website based on natural language descriptions. Tell Universe, “Make a pink site with sparkles for my custom nail business in Chicago,” and the results will appear in seconds.
Our goal is making this technology available to everybody.
Joseph Cohen, Universe founder
“It’ll be a dialogue; it’s not a one-way street,” says Cohen. “You can still edit your site manually, or you can ask it to change the theme or background color.” (The AI designer is named GUS, both because it stands for “generative Universe sites” and because Universe employs a very skilled designer named Gus. “We have to call him Human Gus now,” laughs Cohen.)
Cohen plans to build the AI feature “in public,” releasing regular video updates about the team’s progress as part of a way to garner people’s feedback on the fly. It’s another example of his drive to make the app — and the internet — more open to everyone. “I still live in New York, and the best part of New York is that it’s incredibly diverse,” he says. “It’s gritty and organic and very human. I think the internet can look like that — but you need great tools to enable it.”
Download Universe - Website Builder from the App Store
Behind the Design is a series that explores design practices and philosophies from each of the winners of the Apple Design Awards. In each story, we go behind the screens with the developers and designers of these award-winning apps and games to discover how they brought their remarkable creations to life.
What makes Duolingo such an engaging way to learn a language? The answer is hiding in plain sight. “The secret to Duolingo is that we’re not an education company. We’re a fun and motivation company,” says Ryan Sims, VP of design. “Fun is the most important part of the work we do.”
More than a decade since its launch, Duolingo continues to boast best-in-class design, great interactions, and an easy-to-follow UI. It’s filled with fun touches, like gamified lessons, hilarious characters, and a learning path that leans on actual conversations. And then there’s Duo, the famously tenacious owl mascot who achieved viral notoriety for his skill at encouraging people to extend their learning streaks. The app has figured out how to make a daily language lesson feel not like classwork but a joy.
This past year, Duolingo launched a major learning path redesign. In previous versions, it focused on a main screen — known as “the tree” — that let people explore numerous routes. “Two people could spend the same number of hours doing the same number of lessons, but end up in different places,” says Sims. Today, all Duolingo users follow a single route. “We call it ‘the path,’” says Sims. “It was a complete reboot of our product strategy.”
The path redesign coincided with another important update: animations for Duolingo’s wonderful cast of characters. There’s Lily, a perpetually unimpressed teen with a dismissive slow-clap; Oscar, a dramatic teacher who takes his job very seriously; and Eddy, a fitness buff with an enthusiasm for just about everything. Their subtle animations when people get something right are a reward in themselves. “A lot of that character interaction was informed by seeing how people connected with Duo,” says Sims.
The secret to Duolingo is that we’re not an education company. We’re a fun and motivation company.
Ryan Sims, Duolingo VP of design
Filling the app with memorable personalities required world-building — a process not often found in language apps. “It’s such a gigantic task,” says Sims, “and it really just started with our head of art, Greg Hartman, who began drawing characters and saying, ‘Wouldn’t it be cool if you encountered the same people through the entire experience?’” And of course, there’s a team of experts on hand to make sure every character’s story is consistent. “There are quite a few people whose job is to help write these stories and make sure they don’t contradict each other,” says Sims, with a laugh.
Duolingo’s approach under the hood may have changed, but the sense of fun is still front and center.
The lessons are brisk and breezy, emphasizing the building blocks of language through repeated phrases and sentences. And it’s all designed not just to attract learners, but to get them to stick around through quick lessons, compelling rewards, and unapologetic encouragement to keep their streaks alive. “You learn a language to connect to another human. That’s all it comes down to,” says Sims. “That’s why we’re passionate about teaching folks to speak new languages — because it brings everyone together.”
MARVEL SNAP reboots the entire collectible card game universe.
The game is stacked with incredible visuals, a multiverse of gameplay variations, and a “snap” mechanic — a double-or-nothing bet — that’s as simple as it is revolutionary. It’s got an encyclopedic collection of iconic and deep-cut Marvel characters, but players don’t need a background in comic-book lore or collectible card games in order to participate. And with brilliantly intuitive touch controls and speedy gameplay, it’s perfect for mobile devices.
MARVEL SNAP is the brainchild of Ben Brode and Hamilton Chu, the masterminds behind Hearthstone, which itself redefined the collectible card genre upon its 2014 release. In 2018, the duo launched their own studio, Second Dinner, with big aspirations and an even bigger problem: “We didn’t have any ideas,” laughs Brode, the studio’s chief development officer. “It’s a little terrifying to sit down at your new job and think, ‘OK, we have to come up with a game, and we have nothing.’”
To break their creative block, Brode and Chu started playing every board game they could get their hands on. “That’s the soup that SNAP arose from,” Brode explains. It also led them to the early breakthrough — what Brode calls Chu’s genius idea — that would define the game. “He said, ‘You know what would be really fun? Incorporating the doubling cube from backgammon,’” says Brode. “We tried it and immediately realized we were onto something.”
From there, things moved fast. The pair had inked a deal with Marvel, so they sat down to think about what made Marvel special. “It’s the conflict between heroes and villains, right?” he continues. “It’s not about mowing down enemies, it’s about that heroic 1v1 standoff. So we said, ‘That’s it. Let’s try a card game.’” The pair played the earliest rounds of SNAP on the back of business cards, and the game’s foundations were established in all of two days.
While the core game was built fast, the iterations took much longer. Over the next four years, Second Dinner played, refined, and simplified — to a degree. “It was honestly less about making the game simple and more about maximizing the depth of the complexity we chose to add,” Brode says.
On one hand, SNAP is an incredibly simple game with one card type, three locations, and six turns. Rules for those card types and locations are easy to follow. Battles last a matter of minutes.
But those basic components combine for a game of near-infinite complexity — and perfectly calibrated balance. “Most people misunderstand randomness by thinking of it as a scale,” Brode says. “That absolutely is not how randomness works. While no two games of SNAP are alike, in every game you have to think: ‘How can I win this time?’ It’s about the intersection between randomness and skill. And if you lose, you always have an opportunity to reflect on how you could have done something differently.”
While no two games of SNAP are alike, in every game you have to think: ‘How can I win this time?’ It’s about the intersection between randomness and skill.
Ben Brode, MARVEL SNAP creator
Even the game’s language keeps players engaged. For instance, retreating in SNAP isn’t necessarily an admission of defeat; it might be a considered decision to minimize loss. “If you decide to leave because it’s strategically correct, that’s not losing!” says Brode. As such, players who retreat get a screen that says “Escaped!” — a much more palatable outcome than losing. “‘Escaped’ zeroes out the emotional negativity,” Brode notes.
Befitting its comic-book origins, SNAP is a visual feast. Characters have their own unique animations, like Ghost Rider using his chain to yank a discarded card back into the match or Devil Dinosaur unleashing a board-rattling roar. Players can even enable a 60 fps setting to make a Hulk smash look truly incredible. And “snapping” an opponent triggers a dramatic light show and haptic feedback.
While SNAP certainly includes top-line Avengers, they’re by no means the game’s heaviest hitters. Big wins can come courtesy of characters like Blue Marvel, Mister Fantastic, Misty Knight, and Enchantress — names you’ve maybe not heard in a while, if you’ve heard them at all. Brode says showcasing lesser-known characters was part of the strategy to appeal to a wider audience, but also a nod to his own comic-book past. (Naturally, those who worked on the game also have their favorites — art director Jomaro Kindred is a big Black Panther fan, while producer Gareth Ackerman is really into Armor.)
It’s a comic-book game for non-comic-book people, a collectible card battler for those who’ve never heard the phrase, and an incredible achievement that appeals to players of all ages. “I got a suggestion this week from a 5-year-old in Wales who had an idea for a new location,” says Brode. “His parents forwarded it to me with a note that said, ‘We play this together as a family, and he’s learning numbers and math through this game and these characters.’ That’s incredibly rewarding, and it feels awesome.”
Download MARVEL SNAP from the App Store
At first glance, Evan Kice’s Afterplace appears to have time-traveled from the late 1980s. It’s a 2D top-down pixelated adventure game full of blocky characters, blippy music, and retro typefaces. If you were ever into the games of the era — and Kice very much was — it feels like a delightful visit from an old friend.
“I grew up on those games,” says Kice. “I was always carrying them around. And I was also the kind of kid who’d look at a manhole cover and think, ‘That is definitely the entrance to a dungeon.’”
Yet for all its nostalgia, Afterplace is a decidedly modern game too. It’s fast, fluid, incredibly easy to pick up, and features a surprisingly huge map. Its characters look like 1989 but talk like 2023, especially the sarcastic vending machine that dispenses random jokes and the friendly rabbit that provides advice. (For instance: “If you think something’s going to attack you, don’t be there anymore. Like, move away.”)
It’s especially impressive when you realize that Kice is the game’s sole designer, developer, and artist. He began making video games at age 11, studied software engineering in college, and took a few game design courses. But mostly, he taught himself along the way. “Honestly, I just watched a lot of tutorials,” he says. “YouTube is how I learned art, sound, music, and basically everything that wasn’t programming or game design.”
The game is also brilliantly designed for mobile, with one-finger controls that make it easy to explore. Tap to interact with an object or slash your way out of trouble. Or tap and drag anywhere on the screen to move around. In fact, one of Kice’s earliest design decisions was to lean on touch screen interaction paradigms instead of drawing controls on screen. “I was never a fan of virtual buttons or d-pads,” says Kice. “I’ve played a lot of those kinds of games, and often ended up going a direction I didn’t want to go. And I personally enjoyed being able to play with one thumb while standing in line somewhere.”
That drive for simplicity also informed the interactions between hero and enemy. “Some games have simple enemies but a complex you,” he says. “Afterplace has a simple you but complex enemies. Whenever you walk up to something, you have to say, ‘What is this guy gonna do? I gotta figure this out.’ That’s your whole job. You’re not worried about doing double backflips because you’re too busy trying not to get smashed in the face.”
Enemies lurk everywhere in Afterplace’s massive worlds. Levels stretch out in all directions; what looks like a humble library is secretly a multilevel maze. “I always loved it when a game just kept going,” says Kice. “I was fascinated by the idea that a game could hold an entire country.”
I was never a fan of virtual buttons or d-pads. And I personally enjoyed being able to play with one thumb while standing in line somewhere.
Evan Kice, Afterplace creator
He was also fascinated by vintage heroes and villains. “All the characters in Afterplace are the same resolution as the characters in my favorite childhood games,” he says. “I really, really loved those characters. But they were just static images; they faced four directions and had a blank stare on them. As a kid, I would think, ‘I would love it if they did anything more than stand in place and say one line of dialogue.’” Inspired, he challenged himself on Afterplace to see how expressive that vintage resolution could be. “Turns out they’re pretty expressive!” he laughs.
Afterplace’s expressiveness comes through in its clever dialogue, like the character who encourages players to be more strategic in their attacks by saying, “You wouldn’t imagine how many dunderheads just keep swingin’ away at a monster.” The game’s music — which Kice wrote and performed — starts in an 8-bit style but expands to become more orchestral later on, a trick he picked up from the game Undertale. “If you start out the game with a retro sound, then later break out the string quartet or horror violins, it has a lot more impact,” he says. “The rest of the game has maybe three melodies in it. I’ll pretend that’s because I’m a cool designer using leitmotifs, but it’s actually the maximum number of melodies I could think of.”
Never a fan of intro cutscenes, Kice designed Afterplace’s onboarding to get players right into the action. “I love story in games, but I almost always skip those introductions. I’m just not invested yet,” he says.
Afterplace also features a bevy of accessibility options that let players adjust text scaling, camera shake amount, contrast, and more. There’s even an invincibility mode, if players are really having trouble with those monsters. It’s all part of a strategy to appeal to anyone, regardless of their video game history — if they have one at all.
I love story in games, but I almost always skip those introductions.
Evan Kice, Afterplace creator
“Afterplace is very niche,” he says. “It’s for people who maybe don’t play games on mobile. But if it helps bring more people into gaming, I think that’s great.”
When Swupnil Sahai started creating SwingVision, he had no app-building experience — but he’d played a lot of tennis.
“The initial idea was, ‘Maybe we can use the accelerometer and gyroscope on Apple Watch to figure out how fast I’m swinging, and maybe we can use the Apple Watch screen to keep score,’” says Sahai from his workspace in the Bay Area. “That was really it.”
The app was a true passion project for Sahai, who jumped into SwingVision pretty much cold. “Although I’d programmed in other languages, Swift seemed much more approachable, so I thought ‘Maybe I can pick this up on my own.’” He not only picked it up, he found the learning curve so speedy and enjoyable that he was staying up later and later to plunge into SwingVision and Swift. “I was building in Xcode on day one,” he says. “I don’t think I’ve ever had so much fun working.”
Today, SwingVision has become the definitive tennis app, and an incredible example of the combined power of cameras, machine learning, and the concept of filling a need. It’s beautifully and exclusively designed for iOS, with an easy-to-navigate UI that suits both officially sanctioned matches and people practicing on the weekends.
It’s also become an integral part of the tennis community. SwingVision is now used for line calling, providing the definitive say on whether a ball is in or out — a call that’s traditionally left to players themselves. “It’s rare to have judges on the court in tennis,” says Sahai. “In baseball, you have umpires. Even middle-school basketball has referees. Somehow in tennis you have to do everything yourself.”
I was building in Xcode on day one. I don’t think I’ve ever had so much fun working.
Swupnil Sahai, SwingVision founder
Founded in 2015 by Sahai, along with close friend and current CTO Richard Hsu, the app couldn’t be simpler. Point your iPhone or iPad camera at the court and SwingVision tells you how fast you’re serving, the consistency of your shots, and how to shape up your posture and footwork. It does so by using advanced machine learning to track shots (a pretty intensive process). “It allows you to call lines more accurately than you could with your own eyes. But if you don’t record at 60 fps, you won’t even see the ball bounce — it just moves too fast,” he says. “Of course, 1080p video is very, very high resolution. It’s something like 2 million pixels that all have to be processed 60 times a second. We had to innovate a lot to make these models as lean as possible. This app is basically not possible without Neural Engine.”
SwingVision, now powered by a team of 23, has evolved quite a bit. Players can now stream matches live — both the video and the on-screen data — and afterward, the app creates an easily shareable highlight reel. One of its latest features sets up “target zones” on the court to help players practice their serves — a great example of how the video-centric app integrates tightly with Apple Watch. “Serving is traditionally the most boring thing to practice,” laughs Sahai. “So we gamified it with different sound effects and a progress monitor on Apple Watch. Even with all our video, Apple Watch is still critical because it elevates the experience.”
In addition to driving the success of his app, Sahai shares his development expertise by continuing to teach a UC Berkeley course called Data 8: Foundations of Data Science — currently the largest class on campus. He’s known as “the SwingVision guy.” “Sometimes I’ll see a post from a student that says, ‘Wait, you made that?’” he laughs. “The community there is very supportive.”
Download SwingVision from the App Store
Few apps have made mindfulness as accessible as Headspace.
More than a decade since its launch, the app continues to set the standard for mental health apps — an especially notable accomplishment, as it can be difficult to communicate challenging topics like mindfulness and mental health through an app. “Demystification is a word we use a lot,” says Jeff Birkeland, Headspace senior vice president and general manager for member products. “Mindfulness and mental health can seem complex, perhaps mystical, maybe even inaccessible. So how do we make it approachable and friendly? And how do we get people to the right content faster?”
The answer is an intentional mix of design, organization, and style. “I think the root of our success from the very beginning was creating a warm feel and brand,” he says.
It’s hardly an overstatement to say that Headspace has been part of a tremendous social change regarding mental health. Birkeland says the app has been used by more than 100 million people in nearly 200 countries and regions, and it’s easy to see why. Headspace is an incredibly versatile tool for anyone looking for a quick clarity break, longer guided sessions, or help with sleep or exercise. And its huge library of resources is there whenever people need it.
Headspace smartly organizes its library of resources through language. Collections and exercises are labeled with understandable purposes, like Unlocking Creativity, Mindful Eating, and The Shine Collection, a set of activities drawn from Headspace’s recent merger with Shine, a mindfulness app dedicated to providing inclusive and accessible mental health resources that support marginalized communities. “It’s still a simple app,” Birkeland says, “but it’s not a very long trip into an extensive archive.”
How do we make [mindfulness and mental health] approachable and friendly? And how do we get people to the right content faster?
Jeff Birkeland, Headspace senior vice president for member products
In previous versions of Headspace, the core navigation included tabs for meditation, focus, movement, and sleep. But Birkeland says user research convinced the team to strip away that complexity and focus instead on the app’s Today tab, which facilitates one-tap access to activities of varying lengths for morning, afternoon, and night. Importantly, it does so without bringing up specific categories.
The Explore tab, meanwhile, is the gateway to that vast bank of content — including those former category-based parts of the core navigation. “There’s still simplicity at the surface,” says Birkeland. “But there’s an incredible depth of content underneath.” This tab is also where people find collections and activities of all kinds, including those with titles like Cultivating Black Joy and Navigating Injustice that illustrate Headspace’s commitment to representation.
“Mindfulness, meditation, mental health — none of these are easy to navigate,” Birkeland says. “An app that feels warm, friendly, and easy to use can provide approachable support for tough issues.”
Download Headspace from the App Store
Workout tracking has never looked — or felt — like it does in Any Distance.
“We’re building something for everyone, not just athletes,” says Luke Beard, the Atlanta-based designer who created this Swift app with engineer Daniel Kuntz. “We’re not athletically inclined people. We’re kind of dorks! I want to retire and take photos in Iceland one day. Dan wants to retire and play music in the desert. We’re not looking to go to the Olympics, but we do want to live long, healthy lives.”
Their app is a design-forward fitness tracker and social network that delivers workout stats in beautiful and shareable formats — dynamic charts and graphs, animated 3D maps, AR experiences, and gorgeous cards — that can integrate photos. It draws heavily on SF Symbols — as Beard cheekily puts it, “SF Symbols is the single greatest contribution to design Apple has ever made.” It offers elegant in-app collectibles and an in-house social network aimed at connecting people with a small circle of friends. And its name is also its philosophy: Any Distance counts, not just a swim or bike ride, but a walking meeting, stroller run, or its most popular option, a dog walk.
Any Distance is heavily powered by Apple tools and technologies. It uses ARKit to render routes in augmented reality, HealthKit for workout data, Metal for what Kuntz calls the “gradient background swirly thing,” SceneKit for 3D rendering, MapKit, Apple Watch integration, and more. The app also demonstrates a strong commitment to privacy. “Your data is all in HealthKit; we don’t store it unless you post it to your friends,” Beard says. “We don’t let you share a map. Route clipping (in which the beginning and end of your routes are trimmed from public view) is on by default.” And people can choose exactly what data they want to share with friends by simply tapping the eye icon under each metric.
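As a point of reference, reading workouts on demand from HealthKit (rather than keeping a private copy, as Beard describes) looks roughly like the following generic sketch; it is not Any Distance’s actual code:

```swift
import HealthKit

// Generic sketch: request read access to workouts and fetch the most recent
// ones from HealthKit on demand. Error handling and UI are omitted.
final class WorkoutReader {
    private let store = HKHealthStore()

    func fetchRecentWorkouts(completion: @escaping ([HKWorkout]) -> Void) {
        let workoutType = HKObjectType.workoutType()
        store.requestAuthorization(toShare: nil, read: [workoutType]) { granted, _ in
            guard granted else { return completion([]) }
            let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate,
                                               ascending: false)
            let query = HKSampleQuery(sampleType: workoutType,
                                      predicate: nil,
                                      limit: 20,
                                      sortDescriptors: [newestFirst]) { _, samples, _ in
                completion((samples as? [HKWorkout]) ?? [])
            }
            self.store.execute(query)
        }
    }
}
```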
SF Symbols is the single greatest contribution to design Apple has ever made.
Luke Beard, Any Distance founder
Beard conceived of the idea for Any Distance during the pandemic, when his lifestyle wasn’t quite as healthy as he would have liked. To shake himself out of his funk, he began going on long walks, posting photos of his journeys along the way. “I’m a chronic oversharer,” laughs Beard, “and a photographer at heart. And I was getting good feedback.” Eventually, he started designing templates for his social media posts. “Honestly, it was just a photo in a mask — the oval that’s now one of our main brand characteristics — with the route and stats in a fun font. But people would ask, ‘What app is making those?’”
By this point, he’d already connected with Daniel Kuntz, a programmer and musician who already had a few titles on the App Store. “As a developer, I’m often asked, ‘Hey, can you make this app?’ And I’m always like, ‘Nah,’” says Kuntz. “In this case, Luke had it all fleshed out. He had iOS components and a Sketch file. It was simple and clear and really cool.”
This year, the team also plans to add more unorthodox activity options like trick-or-treating. “Eventually we want to organize group bike rides or group dog walks,” says Beard. “The last few years have accelerated the loneliness epidemic so much, and we think working out or being active together is the new hanging out. It doesn’t matter if you’re walking half a mile, taking a stroller walk with your kid, or walking with a cane — there should be a space for you.”
Download Any Distance from the App Store
The Apple Design Awards celebrate apps and games that excel in the categories of Inclusivity, Delight and Fun, Interaction, Social Impact, Visuals and Graphics, and Innovation. Learn about the 2023 winning apps and the talented developers behind them.
For all its many genres and styles, the gaming world has been awfully threadbare when it comes to experiences about embroidery. That all changes with stitch., a charming cross between casual puzzler, meditative exercise, and afternoon craft project — and as cross-generational a game as you’re likely to find.
“We pride ourselves on making games that anyone can play,” says Jakob Lykkegaard, founder of Lykke Studios, the team behind stitch. “It’s important to spend the time to make them available for everyone.”
stitch. sets up embroidery-based puzzles that players complete to finish a pattern, like an adorable penguin or a love note to bacon. Players swipe over an incredibly lifelike and beautifully textured surface that feels like it’s just beneath the display. There’s no linear progression in stitch.; challenges are presented in the form of “hoops” that players can explore at their own pace. And the game supports multiple languages and custom accessibility tools for people with color blindness, low vision, and motion sensitivities.
There’s precedent for those choices. Lykke Studios’ painting puzzler tint. was nominated for a 2022 Apple Design Award in the Inclusivity category, thanks in part to a colorblind mode that lets players solve each watercolor-based puzzle by using patterns and texture instead of color.
With stitch., which was built with Unity, the studio explored accessibility features even further. “Number Outlines” creates sharper and more contrasting outlines on the puzzles’ numbers. “Big Numbers” makes them larger and easier to read. “Reduce Motion” limits sudden movements and animations. And the left-handed mode shifts problematic UI out of the way for left-handed players. Lykkegaard says, “Originally, the icon indicator was actually under the hand for left-handed people. We thought, ‘That’s an issue we hadn’t considered. How can we fix it?’”
We pride ourselves on making games that anyone can play.
Jakob Lykkegaard, founder of Lykke Studios
Lykkegaard says the team took an unusual approach to sewing up the idea for stitch. “We build games a little bit upside down,” he says. “It usually starts with us falling in love with some material and building a game mechanic around it later. We’ll see how it feels on device and, if it’s not working, we’ll kill the project and move on to another material.” For stitch., that material came from a serendipitous day on social media. “We honestly just saw a post about embroidery and thought, ‘Wow, that looks really nice.’”
For that mechanic, the team found inspiration in an unlikely analog source: a geometric grid-based puzzle game called Shikaku found in Japanese newspapers. “We took the grid and skewed it into something that looks nice but isn’t uniform,” he says. “From there, we had a lot of options for how players could fill it out.”
As with tint., the team looked to strike a balance that would challenge players without making them feel lost or intimidated. “We didn’t want to make a game like sudoku where people thought, ‘Oh, that’s too difficult for me.’ But we also didn’t want something that was just an endless series of careless clicks. stitch. couldn’t be too hard for kids, but it couldn’t be too childish either.”
It’s working. Lykkegaard has heard from 8-year-olds and 80-year-olds who’ve been drawn to the game’s approachable, accessible style. “The question is: How can we get a player to enjoy it, feel smart, and want to relax with the game? Once you’ve generated that feeling, players will come back. And we want to make everyone feel like this is a game for them.”
Flighty might be the easiest thing travelers navigate on their entire trip. “Travel can be a high-stress situation,” says Ryan Jones, the Austin-based developer who founded the app in 2019. “We want Flighty to work so well that it feels almost boringly obvious.”
Conceived during — when else? — a long flight delay, Flighty puts key information front and center with an immediately understandable interface, live maps, and a look that mirrors time-honored airport design conventions. The best-in-class travel app is a flight tracker, airport navigator, and concierge — and with incredible implementations of Live Activities and the Dynamic Island, a companion that makes key information available at all times.
“There’s something comforting about information always being there,” Jones says. “You don’t have to check your phone and think, ‘OK, I have to be at the gate in 32 minutes,’ and then, ‘Now I have to be there in 29 minutes.’ And I don’t know about you, but every time I walk on a plane, I look at my seat number, put it down, and immediately say, ‘Wait, what was my seat number?’”
Since its 2019 launch, Flighty has been an incredible example of the carefully crafted use of Apple technologies. “We’re really doing this out of a passion and love for the product,” says Jones. “We all had our lives changed by iOS and mobile, so we get really excited about adopting new technologies.”
They’ve added a lot. Flighty supports widgets on the Home Screen and Lock Screen, highlights content using Shared with You, and more. With a few taps, travelers can even live-share their flight path and arrival time with loved ones who may not even have the app installed — a wonderfully convenient feature for coordinating airport pickups.
We want Flighty to work so well that it feels almost boringly obvious.
Ryan Jones, Flighty founder
Flighty is consistently impressive in adjusting to the unpredictable nature of travel. “We really have to shine when things go awry,” says Jones. For instance, the app must account for the fact that every single person will, at some point, lose their internet connection. “Whenever [someone] takes off, we have to assume that we won’t see them again until they land,” says Jones. The solve? At a certain point before a flight takes off, the Dynamic Island switches over to flight progress bars and counters: a minimal presentation built around a simple circular chart that tracks the flight’s duration.
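For the curious, a Live Activity like the one Jones describes is started through ActivityKit. The sketch below is hypothetical: FlightAttributes and its fields are invented for illustration and are not Flighty’s real data model.

```swift
import ActivityKit
import Foundation

// Hypothetical flight Live Activity model, invented for illustration only.
struct FlightAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var progress: Double    // 0...1, could drive a circular progress chart
        var gateCloseDate: Date
    }
    var flightNumber: String
    var seatNumber: String
}

// Requesting the activity hands presentation over to the system, which
// renders it in the Dynamic Island and on the Lock Screen as state updates.
func startFlightActivity() throws -> Activity<FlightAttributes> {
    let attributes = FlightAttributes(flightNumber: "XY 123", seatNumber: "14C")
    let initialState = FlightAttributes.ContentState(
        progress: 0,
        gateCloseDate: Date().addingTimeInterval(30 * 60)
    )
    return try Activity<FlightAttributes>.request(
        attributes: attributes,
        content: .init(state: initialState, staleDate: nil),
        pushType: .token   // allows server-side updates mid-flight
    )
}
```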
Visually, both Live Activities and the Dynamic Island are designed to recall airport signage conventions that have been in place for decades. “That’s our real-world analogy,” Jones says. “Those airport boards have one line per flight, and that’s a good guiding light — they’ve had 50 years of figuring out what’s important.”
While the design process is comprehensive, it’s not always fast. “It’s so tempting to start pulling from your existing asset library to see if you can quickly put something together,” he says. To avoid falling back on old ideas, the Flighty team creates 20 design ideas during the concept phase. “It’s what fits on a sheet of paper,” he says with a smile. “You get to six or seven ideas and think, ‘OK, that’s it, there’s none left.’ But then you think, ‘Well, I have an idea that will probably look bad,’ and then you try it and it’s not bad at all.”
Flighty is even fun at home. The Flighty Passport feature shows flights, miles, and travel stats through gorgeous, shareable custom artwork. It’s just more proof that Flighty really is for every step of the journey — even being back home.
Download Flighty from the App Store
Evil has never looked better than it does in Resident Evil Village.
The AAA horror adventure is a masterpiece of visual detail on Mac, a feast of creepy castles, decrepit factories, and majestically gothic villains. The game’s bleak village is thick with details and dread; characters like Lady Dimitrescu and the game’s army of mutant lycans nearly pop out of the screen. Simply put, Resident Evil Village contains some of the most realistic graphics ever seen on Apple hardware. And the lavish visuals don’t just look amazing; they drag players into the game’s horrifying landscape and through dark mysteries, vicious confrontations, and mind-blowing plot twists.
It’s all powered by a remarkable assembly of cutting-edge Apple technologies. Resident Evil Village takes full advantage of Apple silicon, ProMotion, Metal 3, and extended dynamic range to serve up its breathtaking visual achievements. “The game is very pretty, but it has this incredible sense of fear,” says Tsuyoshi Kanda, one of the game’s producers. “In some of the first scenes, you end up battling this horde of lycans. The sheer amount of them is impressive. But each has its own intention and personality. We’re happy with how it turned out.”
Those achievements are especially clear in the game’s village, which feels like a character in itself. The village shines in its decay; it’s a showcase of textures, geometry, and complex shaders. And players can enable MetalFX Upscaling to make it look especially breathtaking. Kazuki Kawato from the game’s engine team says the game benefits from both spatial and temporal upscaling. “Both were easy to use and gave us the results we wanted,” he says.
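For a sense of what adopting MetalFX involves, spatial upscaling is configured through a small descriptor-and-scaler API; a minimal sketch follows, with textures and the render loop assumed to exist elsewhere (temporal upscaling additionally consumes motion and depth data):

```swift
import Metal
import MetalFX

// Sketch of MetalFX spatial upscaling: render internally at a lower
// resolution, then upscale to display size. Resolutions are illustrative.
func makeSpatialScaler(device: MTLDevice) -> MTLFXSpatialScaler? {
    let desc = MTLFXSpatialScalerDescriptor()
    desc.inputWidth = 1920            // internal render resolution
    desc.inputHeight = 1080
    desc.outputWidth = 3840           // display resolution
    desc.outputHeight = 2160
    desc.colorTextureFormat = .rgba16Float
    desc.outputTextureFormat = .rgba16Float
    return desc.makeSpatialScaler(device: device)
}

// Per frame: point the scaler at the low-res color buffer and encode it.
func encodeUpscale(scaler: MTLFXSpatialScaler,
                   commandBuffer: MTLCommandBuffer,
                   lowResColor: MTLTexture,
                   upscaledOutput: MTLTexture) {
    scaler.colorTexture = lowResColor
    scaler.outputTexture = upscaledOutput
    scaler.encode(commandBuffer: commandBuffer)
}
```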
Masaru Ijuin, senior manager in the engine development team, says he always knew the game was beautiful. “Our main focus was taking the base game and making it run as fast and as stable as possible on Mac,” he says, “and I think we did that.”
Kanda calls out the Castle Dimitrescu, home of the game’s breakout villain, the 9-foot-tall vampire giantess Lady Dimitrescu. “The castle looks incredible no matter where you are,” he says. “There’s an entrance hall with a chandelier inside that we’re all really proud of. The team worked hard to create the best graphics possible on the hardware.”
The game’s visuals are deserving of acclaim, but Resident Evil Village also boasts an incredible story and character design. It’s a masterclass in horror pacing that skillfully mixes bursts of frantic action with long stretches of good old dread-building. Kanda says the team paid special attention to creating what he proudly calls a “variety of horrific entertainment.”
“The concept is a horror theme park with characters that stand out against this beautifully rendered environment,” he says. “The stages cycle between horror and action to help players stay balanced. That’s something we learned from other Resident Evil games.”
Balance was also key in creating the game’s story, which had to fit into the Resident Evil universe (Village is the eighth major game in the series) while taking the storyline in wild new directions. “One of the base concepts was Ethan Winters at home with his wife and baby daughter, Rose,” says Kanda. “You see Ethan’s fatherly love all throughout the game.”
The concept is a horror theme park, with characters that stand out against this beautifully rendered environment.
Tsuyoshi Kanda, Resident Evil Village producer
But in the game’s intro, Rose is kidnapped from the family home in a shocking confrontation with Chris Redfield, a character who’s been around since the first Resident Evil. “Chris was such a big part of world-building this; the way he enters the game was so important,” says Kanda. “We didn’t want you to know his intentions until the ending.”
To get to that ending — which is as dramatic as Kanda promises — players must battle through a murderer’s row of memorable villains that look alive, even if they’re (probably?) not. There’s Salvatore Moreau, a hideous mutant; Karl Heisenberg, who runs a factory with some serious health-code violations; Donna Beneviento and her scary doll, which is probably all we need to say about that; and Lady Dimitrescu, the superstar with huge claws, a deathly gothic wardrobe, and a surprisingly devoted fan base.
“The idea for Lady Dimitrescu was a huge character who was too big for the castle itself,” says Kanda. “She has to duck to get through the doors. And when she comes at you, you really feel her presence.”
As an incredible example of Mac gaming, Resident Evil makes its presence felt too. But this story has a twist ending of its own: Kanda, Ijuin, and Kawato personally aren’t all that into horror. “The (Resident Evil) creative team loves horror movies,” laughs Kanda, “but I’m more into the not-too-scary stuff.”
Learn more about Resident Evil Village
Download Resident Evil Village from the Mac App Store
For a creative guy, Luke Spierewka, founder of the Poland-based game studio Afterburn, is certainly a fan of limitations.
“We make comfy puzzle games that convey an idea quickly,” he says, “but they’re all about using a limited amount of something, which is where the challenge comes in.”
In Afterburn’s Railbound, players are challenged to link train cars in proper order by laying down track, manipulating switches, and navigating an increasingly convoluted series of gates, tunnels, and stations. While the puzzles may be tricky, interacting with them is sheer joy. The track-laying mechanic is as simple as finger painting, mistakes can be easily undone, and the game is full of thoughtful details, like its duo of canine conductors or the squiggly frustration cloud that appears over a misdirected train car. And it’s all presented in a bright cartoon style inspired by European comics.
Railbound’s interaction design is the product of Spierewka’s drive to make his studio’s games ever easier to play. “I pay a lot of attention to input. For Railbound, I wanted a system where you basically paint rail tiles with one finger,” he says. “I knew if we didn’t make that mechanic fun and malleable, people would be much less inclined to play. And I think we got there,” he says, before pausing and adding, “but I’m still thinking about how to make it more intuitive.”
The studio, which Spierewka runs with his wife, Kamila, also paid close attention to the size of the puzzles. “In games like Stephen’s Sausage Roll or A Monster’s Expedition, the size of the level is exactly what you need to solve it. I’m not gonna pretend we’re as elegant as those, but I try to constrain our puzzles and space as much as I can, and leave only the stuff you need.”
That strategy also applies to the game’s onboarding, a process that’s largely wordless because of the unsubtle lessons the Afterburn team learned on previous games. “The first version of [our earlier game] Golf Peaks had all this onboarding text,” he says. “The first level introduced five different concepts. The second level was like, ‘This is a new tile type, deal with it.’ The third level was like, ‘Here’s another new type, deal with that too,’” he laughs. “And nobody read them! Every single person I handed a phone to tapped right past the blocks of onboarding text. It was kind of a shock, really.”
Nobody read them! Every single person I handed a phone to tapped right past the blocks of onboarding text. It was kind of a shock, really.
Luke Spierewka, Afterburn
For Railbound, Spierewka jettisoned words entirely. “We thought, ‘What is the simplest way we can break down and teach mechanics?’” The answer was to integrate them into early gameplay. Railbound’s first level gives players just one way to place a track; it’s actually impossible not to beat. In levels 1 through 3, you learn to bend and rotate tiles. “You’re not even taught how to delete tiles until several levels in, because you don’t need to yet. It’s all a dance of introducing and reinforcing concepts at the right pace.” In other words, even the onboarding is an example of using only the stuff you need.
Download Railbound from the App Store
Endling is all about survival in a changed world — and it’s a powerful mix of medium and message.
The game is a gorgeous adventure in which you play as a fox navigating a land charred by environmental disaster and human impact. Endling is not subtle — particularly when the fox starts defending its tiny offspring from an ever-increasing array of man-made dangers. Still, it draws players in with beautiful visuals, lush animations, a moody soundtrack, and brilliantly intuitive gameplay.
“It’s a survival game, but a simplified one that focuses more on telling a story,” says Philipp Nägelsbach, game designer and producer at HandyGames.
When creating such a game, balance is paramount. “You need to have cute scenes with the foxes safe in their lair, learning and growing,” says Nägelsbach. “And you have to have dramatic scenes to illustrate the real dangers.”
After an onboarding process that drops players into the heat of the action, the game becomes an open-world adventure that rewards exploration. That wasn’t always the case; Nägelsbach notes that the game’s earliest versions had a more linear structure. “It didn’t suit the message as well,” he says. “It’s much easier to show the ecological impact humans have when you visit the same spot several times and see a river that’s full of trash or a forest that’s been cut down.”
To control the fox, players operate a simple one-thumb control on the lower-left corner of the screen. The game gradually introduces additional interactions, like the ability to climb or jump over an obstacle. “That’s the moment people realize this isn’t entirely a side-scroller,” says Nägelsbach.
And then there’s the fox itself. Endling casts players as the animal in distress to create an instant sense of empathy — and their choice of animal was well-considered. “Foxes are some of the most adaptable animals in the world,” says Nägelsbach. “They’re not the biggest or smallest; they’re in the middle of the food chain. But if they’re close to extinction, things are really bad.”
It’s a survival game, but a simplified one that focuses more on telling a story.
Philipp Nägelsbach, game designer and producer at HandyGames
Doing so required numerous design considerations. The fox needed to be adorable enough to engage with, realistic enough to feel authentic, and believable enough to navigate the apocalyptic landscape. The fox doesn’t realize what’s happening to the environment; only the player recognizes the meaning of factories, careening trucks, and men in hazmat suits. “And the fox can only do things real foxes can do,” says Nägelsbach. “We couldn’t have the fox pushing buttons or solving complex puzzles.”
Extra attention was paid to the fox’s kits, who grow and develop unique personalities as the game goes on. Each kit represents a player’s life and has an instrument attached to it; when players lose kits, the game feels quieter and more lonely.
Nägelsbach says the teams did make adjustments to ensure the game wasn’t too severe, including the ability to replay parts of the story after a loss instead of starting over. The kits have only one owl enemy; they can’t be directly hurt by humans or dogs. And the fox’s cute bark is a mix of several different animal sounds. “In the real world, foxes aren’t very pleasant to listen to,” says Nägelsbach, “and you shouldn’t be annoyed by your protagonist.”
Endling ultimately delivers a message that sticks around long after gameplay ends. “The message is harsh,” says QA lead and producer Jan Pytlik, “but the game didn’t need to be harsh too. We worked and fine-tuned and I think we hit the mark.”
What’s it like to develop for visionOS? For Karim Morsy, CEO and co-founder of Algoriddim, “it was like bringing together all of the work we've built over many years.”
Algoriddim’s Apple Design Award-winning app djay has long pioneered new ways for music lovers and professional DJs alike to mix songs on Apple platforms; in 2020, the team even used hand pose detection features to create an early form of spatial gesture control on iPad. On Apple Vision Pro, they’ve been able to fully embrace spatial input, creating a version of djay controlled entirely by eyes and hands.
“I’ve been DJing for over twenty years, in all sorts of places and with all sorts of technology, but this frankly just blew my mind,” says Morsy. “It’s a very natural way to interact with music, and the more we can embrace input devices that allow you to free yourself from all these buttons and knobs and fiddly things — we really feel it’s liberating.”
“It’s emotional — it feels real.”
It’s a sentiment shared by Ryan McLeod, creator of Apple Design Award-winning puzzle game Blackbox. “You have a moment of realizing — it’s not even that interacting this way has become natural. There is nothing to ‘become natural’ about it. It just is!” he says. “I very vividly remember laughing at that, because I just had to stop for a moment and appreciate it — you completely forget that this [concept] is wild.”
Blackbox is famous on iOS for “breaking the fourth glass wall,” as McLeod puts it, using the sensors and inputs on iPhone in unusual ways to create dastardly challenges that ask you to do almost everything but touch the screen. Before bringing this experience to visionOS, however, McLeod had his own puzzle to solve: how to reimagine the game to take advantage of the infinite canvas offered by Vision Pro.
“You really have to go back to those first principles: What will feel native and natural on visionOS, and within a person’s world?” he says. “What will people expect — and what won’t they? How can you exist comfortably like that, and then tweak their expectations to create a puzzle, surprise, and satisfaction?”
After some early prototyping of spatial challenges, audio quickly became a core part of the Blackbox story. While McLeod and sound designer Gus Callahan had previously created sonic interfaces for the iOS app, Spatial Audio is bringing a new dimensionality to their puzzles in visionOS. “It’s a very fun, ineffable thing and completely changes the level of immersion,” he says. “Having sounds move past you is a wild effect because it evokes emotion — it feels real.”
“It will take you minutes to have your own stuff working in space.”
As someone who had exclusively developed for iOS and iPadOS for almost a decade — and had little experience with either 3D modeling or RealityKit — McLeod was initially trepidatious about trying to build an app for spatial computing. “I really hadn’t done a platform switch like that,” he says. But once he got started in Xcode, “there was a wild, powerful moment of recognizing how to set this up.”
visionOS is built to support familiar frameworks, like SwiftUI, UIKit, RealityKit, and ARKit, which helps apps like Blackbox bring over a lot of their existing codebase without having to rewrite from scratch. “What gets me excited to tell other developers is just — you can make apps really easily,” says McLeod. “It will take you minutes to have your own stuff working in space.”
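To show how familiar those frameworks feel in practice, here is a minimal, hypothetical visionOS app; SwiftUI provides the window and RealityView embeds RealityKit content directly inside it. Nothing here comes from the apps mentioned above.

```swift
import SwiftUI
import RealityKit

// A minimal hypothetical visionOS app: a standard SwiftUI scene with
// RealityKit content embedded via RealityView.
@main
struct HelloSpatialApp: App {
    var body: some Scene {
        WindowGroup {
            VStack {
                Text("Hello, spatial computing")
                // RealityView places RealityKit entities inside SwiftUI.
                RealityView { content in
                    let sphere = ModelEntity(
                        mesh: .generateSphere(radius: 0.1),
                        materials: [SimpleMaterial(color: .white, isMetallic: false)]
                    )
                    content.add(sphere)
                }
            }
        }
    }
}
```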
Even for developers working with a more complex assortment of frameworks, like the team behind augmented reality app JigSpace, the story is a similar one. “Within three days, we had something up and running,” says CEO and co-founder Zac Duff, crediting the prowess of his team for their quick prototype.
One member of that team is JigSpace co-founder Numa Bertron, who spent a few days early in their development process getting to know SwiftUI. “He’d just be out there, learning everything he could, playing with Swift Playgrounds, and then he’d come back the next day and go: ‘Oh, boy, you won’t believe how powerful this thing is,’” Duff says.
Though new to SwiftUI, the JigSpace team is no stranger to Apple’s augmented reality framework, having used it for years in their apps to help people learn about the world using 3D objects. On Vision Pro, the team is taking advantage of ARKit features to place 3D objects into the world and build custom gestures for scaling — all while keeping the app’s main interface in a window and easily accessible.
JigSpace is also exploring how people can work together with SharePlay and Spatial Personas. “It's a fundamental rethink of how people interact together around knowledge,” says Duff. “Now, we can just have you experience something right in front of you. And not only that — you can bring other people into that experience, and it becomes much more about having all the right people in the room with you.”
“You want to feel at home.”
Shared experiences can be great for education and collaboration, but for Xavi H. Oromí, chief engineering officer at XRHealth, it’s also about finding new and powerful ways to help people. While Oromí and his team are new to Apple platforms, they have significant expertise building fully immersive experiences: they were creating apps for VR headsets as early as 2012 to support therapy for phobias, physical rehabilitation, mental health, and other services.
Vision Pro immediately clicked for Oromí and the team, especially the fluidity of immersion that visionOS provides. “Offering some sort of gradual exposure and letting the person decide what that should look like — it’s something that’s naturally very integrated with therapy itself,” says Oromí.
With that principle as their bedrock, the team designed an experience to help people with acrophobia (fear of heights), built entirely with Apple frameworks. Despite having no prior development experience with Swift or Xcode, the team was able to build a prototype they were proud of in just a month.
In their visionOS app, a person can open a portal in their current space that gives them the feeling of being positioned at a significant height without fully immersing themselves in that app’s environment. For Oromí, this opens up new possibilities to connect with patients and help them feel grounded without overtaxing their comfort level. “You want to feel at home,” says Oromí. “The alternative before [in a completely immersive experience] was that I needed to remove the headset, and then I totally broke the immersion.”
It also has the added benefit of giving people a way to stay true to themselves. In some of their previous immersive experiences on other platforms, Oromí notes, patients’ hands and bodies were represented in the space using virtual avatars. But this had its own challenges: “We had a lot of patients saying that they felt their body was not theirs,” he says. “It’s very difficult for our society that’s so diverse to create representations of avatars that match everyone in the world... [In Vision Pro], where you can see your own body through the passthrough, we don’t need to create a representation.”
When combined with SharePlay, people can stay connected and supported with their virtual therapists while pushing their boundaries and challenging common fears. “Years from now, when we look back,” Oromí says, “we will be able to say it all started with the launch of Vision Pro — it’s where we truly enabled real virtual therapy.”
“You’re off to the races.”
When the SDK arrives later this month, developers worldwide will be able to download Xcode and start building their own apps and games for visionOS. With 46 sessions focused on Apple Vision Pro premiering at WWDC, there’s a lot of new knowledge to explore — but Duff and McLeod have a few supplemental recommendations.
“Pick up SwiftUI if you haven’t yet,” says McLeod, noting that getting to know the framework can help developers add core platform functionality to their existing app. He also suggests getting comfortable with basic modeling and Reality Composer Pro. “At some point, you’re gonna want to come off the page,” he says. But, he notes with a smile, you don’t need to become a 3D graphics expert to build for this platform. “You can get really far with a simple model and [Reality Composer Pro] shaders.”
Duff mirrors these recommendations, adding one last framework to the list: RealityKit. “If you’re transitioning from [other renderers] there are some fundamental changes you have to get to know,” he says. “But with those three things, you’re off to the races.”
Learn more about developing for visionOS and what you can do to get ready for the SDK on developer.apple.com.
Get ready to design and build an entirely new universe of apps and games for Apple Vision Pro. Find out how developers of apps like djay, Blackbox, JigSpace, and XRHealth are starting to build for spatial computing.
We'll show you how you can prepare for the visionOS SDK, help you learn about best-in-class frameworks and tools, and explore programs and events to help support you along your development journey.
Learn more about developing for visionOS
Discover the latest advancements on all Apple platforms. With an incredible new opportunity in spatial computing in visionOS, new features in iOS, iPadOS, macOS, tvOS, and watchOS, and major enhancements across languages, frameworks, tools and services, you can create even more unique experiences for users worldwide.
Apple Vision Pro is a revolutionary spatial computer that seamlessly blends digital content with the physical world, while allowing users to stay present and connected to others. Apple Vision Pro creates an infinite canvas for apps that scales beyond the boundaries of a traditional display and introduces a fully three-dimensional user interface controlled by the most natural and intuitive inputs possible — a user’s eyes, hands, and voice. Featuring visionOS, the world’s first spatial operating system, Apple Vision Pro lets users interact with digital content in a way that feels like it is physically present in their space. The breakthrough design of Apple Vision Pro features an ultra-high-resolution display system that packs 23 million pixels across two displays, and custom Apple silicon in a unique dual-chip design to ensure every experience feels like it’s taking place in front of the user’s eyes in real time.
Discover the resources you can use to bring your spatial computing creations to life with a new, yet familiar, way to build apps that reimagine what it means to be connected, productive, and entertained.
The App Store Review Guidelines, the Apple Developer Program License Agreement, and the Apple Developer Agreement have been updated to support updated policies and upcoming features, and to provide clarification. Please review the changes below and accept the updated terms as needed.
App Store Review Guidelines
At Apple, we believe privacy is a fundamental human right. That is why we’ve built a number of features to help users understand developers’ privacy and data collection and sharing practices, and put users in the driver’s seat when it comes to their data. App Tracking Transparency (ATT) empowers users to choose whether an app has permission to track their activity across other companies’ apps and websites for the purposes of advertising or sharing with data brokers. With Privacy Nutrition Labels and App Privacy Report, users can see what data an app collects and how it’s used.
Many apps leverage third-party software development kits (SDKs), which can offer great functionality but may have implications on how the apps handle user data. To make it even easier for developers to create great apps while informing users and respecting their choices about how their data is used, we’re introducing two new features.
First, to help developers understand how third-party SDKs use data, we’re introducing new privacy manifests — files that outline the privacy practices of the third-party code in an app, in a single standard format. When developers prepare to distribute their app, Xcode will combine the privacy manifests across all the third-party SDKs that a developer is using into a single, easy-to-use report. With one comprehensive report that summarizes all the third-party SDKs found in an app, it will be even easier for developers to create more accurate Privacy Nutrition Labels.
Additionally, to offer further privacy protection for users, apps referencing APIs that could potentially be used for fingerprinting — a practice that is prohibited on the App Store — will now be required to select an allowed reason for usage of the API and declare that usage in the privacy manifest. As part of this process, apps must accurately describe their usage of these APIs, and may only use the APIs for the reasons described in their privacy manifest.
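Apple published the manifest format itself after this announcement. As a rough illustration of its shape (the key names and the CA92.1 reason code come from that later documentation), a PrivacyInfo.xcprivacy file declaring one required-reason API looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Declares the "required reason" APIs this app or SDK touches. -->
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <!-- CA92.1: reads/writes data accessible only to the app itself. -->
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```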
Second, we want to help developers improve the integrity of their software supply chain. When using third-party SDKs, it can be hard for developers to know the code that they downloaded was written by the developer that they expect. To address that, we’re introducing signatures for SDKs so that when a developer adopts a new version of a third-party SDK in their app, Xcode will validate that it was signed by the same developer. Developers and users alike will benefit from this feature.
We’ll publish additional information later this year.
Join us for an exhilarating week of technology and community. Be among the first to learn the latest about Apple platforms, technologies, and tools. You’ll also have the opportunity to engage with Apple experts and other developers. All online and at no cost.
Experience WWDC here and on the Apple Developer website.
Keynote and State of the Union
The Apple Worldwide Developers Conference kicks off with exciting reveals and new opportunities. Join the developer community for an in-depth look at the future of Apple platforms, directly from Apple Park.
Apple Design Awards
The Apple Design Awards celebrate apps and games that excel in the categories of Inclusivity, Delight and Fun, Interaction, Social Impact, Visuals and Graphics, and Innovation. Join us in congratulating this year’s finalists and winners.
June 5, 6:30 p.m. PT.
Sessions
Learn how to create your most innovative apps and games yet by taking advantage of the latest updates on Apple platforms. New videos and transcripts will be posted daily from June 6 through 9. Watch on the web or in the Apple Developer app for iPhone, iPad, Mac, and Apple TV.
Labs
Get one-on-one guidance from Apple engineers, designers, and other experts. Learn how to implement new Apple technologies, explore UI design principles, improve your App Store presence, and much more.
Activities
Join Apple engineers, designers, and other experts for Q&As, Meet the Presenter, icebreakers, and more.
Forums
Connect with the community on the Apple Developer Forums. Find WWDC23 content quickly and easily by searching conference-specific tags.
Beyond WWDC
Discover even more opportunities for learning, networking, and fun outside of the conference.
Stay connected
We’ll be posting WWDC announcements leading up to and during the conference.
Check your email settings in your Apple Developer account. Check your notification settings in the Account tab.
Watching session videos, viewing related documentation and sample code, and posting on the forums are available to anyone. To request a lab appointment or sign up for activities, you must be a current member of the Apple Developer Program or Apple Developer Enterprise Program, or a 2023 Swift Student Challenge applicant.
The Xcode 15 beta supports the latest SDKs for iOS, iPadOS, macOS, tvOS, and watchOS. This version of Xcode helps you code and design your apps faster with enhanced code completion, interactive previews, and live animations. Use Git staging to craft your next commit without leaving your code. Explore and diagnose your test results with redesigned test reports with video recording. And start deploying seamlessly to TestFlight and the App Store from Xcode Cloud.
Check out the session schedule to plan your week! Sessions will be posted daily and will include links to resources and forums tags. And you can now request one-on-one lab appointments to get your questions answered, whether you’re just starting out or need to solve an advanced issue.
Empower your app by leveraging the system.
Build great apps and games for everyone.
Learn the latest updates for Swift.
Deploy and manage Apple devices in your classroom or office.
Extend your app's experience.
Tighten the privacy and security of your apps and games.
Launch your games and level up your graphics.
Create compelling interfaces and experiences.
Get your health and fitness app in great shape.
Market your app and grow your audience.
Build interfaces that feel right at home on Apple platforms.
Explore the tools you need to build the next great app or game.
Help people find where they are and where they’re going.
Explore Safari and web technologies.
Build audio and video experiences for your app.
New to WWDC? Start right here.
The App Store’s commerce and payments system was built to empower you to conveniently set up and sell your products and services at a global scale in 44 currencies across 175 storefronts. Apple administers tax on behalf of developers in over 70 countries and regions and provides you with the ability to assign tax categories to your apps and in‑app purchases. Periodically, we update your proceeds in certain regions based on changes in tax regulations.
On May 31, your proceeds from the sale of apps and in‑app purchases (including auto‑renewable subscriptions) will be adjusted to reflect the tax changes listed below. Prices will not change.
Due to changes to tax regulations in Brazil, Apple now withholds taxes for all App Store sales in Brazil. We’ll administer the collection and remittance of taxes to the appropriate tax authority on a monthly basis. You can view the amount of tax deducted from your proceeds starting in June 2023 with your May earnings. Developers based in Brazil aren’t impacted by this change.
Once these changes go into effect, the Pricing and Availability section of My Apps will be updated in App Store Connect. As always, you can change the prices of your apps and in‑app purchases (including auto‑renewable subscriptions) at any time. And now you can change them for any storefront with 900 price points to choose from.
WWDC23 is almost here. We’ll be kicking off with the Apple Keynote on June 5 at 10:00 a.m. PT. Watch online at apple.com or in the Apple Developer app. You can even use SharePlay to watch with friends.
Activities are now open for sign-up for eligible developers. Designed to connect you with the developer community and Apple experts, they’ll feature Q&As, Meet the Presenters, and community icebreakers in online group chats.
As part of ongoing efforts to improve security and privacy on Apple platforms, the App Store receipt signing intermediate certificate that’s used to verify the sale of apps and associated in‑app purchases is being updated to use the SHA‑256 cryptographic algorithm. This update will be completed in multiple phases and new apps and app updates may be impacted, depending on how they verify receipts.
What to expect
If your app verifies App Store transactions using the AppTransaction and Transaction APIs, or the verifyReceipt web service endpoint, no action is required.
If your app validates App Store receipts on device, make sure your app will support the SHA-256 version of this certificate. New apps and app updates that don’t support the SHA-256 version of this certificate will no longer be accepted by the App Store starting August 14, 2023.
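For reference, the AppTransaction path mentioned above leaves the cryptographic checks to StoreKit 2, which is why it is unaffected by the certificate change; a minimal sketch:

```swift
import StoreKit

// Sketch of verifying the app's own purchase via StoreKit 2 (iOS 16+).
// StoreKit performs the signature verification and returns a typed result.
func verifyAppPurchase() async {
    do {
        let result = try await AppTransaction.shared
        switch result {
        case .verified(let appTransaction):
            print("Verified; originally purchased at version",
                  appTransaction.originalAppVersion)
        case .unverified(_, let error):
            print("Verification failed:", error)
        }
    } catch {
        print("Could not fetch the app transaction:", error)
    }
}
```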
Important dates
For more details, view TN3138: Handling App Store receipt signing certificate change.
As announced last year at WWDC, if you notarize your Mac software with the Apple notary service using the altool command-line utility or Xcode 13 or earlier, you’ll need to transition to the notarytool command-line utility or upgrade to Xcode 14 or later. Starting November 1, 2023, the Apple notary service will no longer accept uploads from altool or Xcode 13 or earlier. Existing notarized software will continue to function properly.
The Apple notary service is an automated system that scans Mac software for malicious content, checks for code-signing issues, and returns the results quickly. Notarizing your software assures users that Apple has checked it for malicious software and none was detected.
The Apple Design Awards celebrate apps and games that excel in the categories of Inclusivity, Delight and Fun, Interaction, Social Impact, Visuals and Graphics, and Innovation. Discover this year’s finalists, then check back June 5 at 6:30 p.m. PT to learn about the winners.
Get ready for an action-packed online experience at WWDC23. Join the developer community for a week of sessions, labs, and activities, starting June 5 at 10:00 a.m. PT.
The beta versions of iOS 16.6, iPadOS 16.6, macOS 13.5, tvOS 16.6, and watchOS 9.6 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 14.3.1.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other feedback. We value your feedback, as it helps us address issues, refine features, and update documentation.
Bugs are an inevitable part of the development process. Though they can be frustrating, you can help squash these sorts of problems quickly by identifying the issue you’re running into, reproducing it, and filing a report through Feedback Assistant.
Discover how you can make sure your feedback is clear and actionable.
Get ready for a world without passwords.
Passkeys are a replacement for passwords, offering a faster, easier, and more secure sign-in experience for your apps and websites. They’re strong, resistant to phishing, and designed to work across Apple devices and nearby non-Apple devices. Best of all, there’s nothing for people to create, guard, or remember.
To help explain how to implement passkeys, the Apple privacy and security team hosted a Q&A to answer common questions about device support, use cases, account recovery, and more. Here are some highlights from that conversation.
How do passkeys work?
Passkeys are based on public key cryptography, which matches a private key saved on a device with a public key sent to a web server. When someone signs in to an account, the device uses the private key to sign a challenge, and your app or website verifies that signature with the public key. That private key never leaves their device, so apps and websites never have access to it — and can’t lose it or reveal it in a hacking or phishing attempt. There’s nothing secret about the public key; it offers no access to anything until paired with the private key.
Which devices support passkeys?
Passkeys work on devices running a minimum of iOS 16 on iPhone 8; iPadOS 16 on iPad 5th generation, iPad mini 5th generation, iPad Air 3rd generation, all iPad Pro models that offer Touch ID or Face ID; macOS Ventura; and tvOS 16. Passkeys are also supported in Safari 16 on macOS Monterey and Big Sur.
When Touch ID or Face ID can’t be used, people can enter their device passcode or system password to authenticate passkey credentials.
How do I adopt passkeys?
The first step is to adopt WebAuthn on your back-end server and add our platform-specific API to your app. Take a deeper dive into next steps by watching the video below:
Meet passkeys Watch now
What happens if a device is lost or stolen?
Data remains safe. Passkeys are end-to-end encrypted through iCloud Keychain and require biometrics, such as Face ID or Touch ID, or the device passcode to decrypt them. Without these, passkeys remain securely stored on the lost device. For extra peace of mind, you can always remotely wipe your device with Find My.
What does account recovery look like for someone who’s only ever signed in with a passkey?
The recovery method is independent of the authentication mechanism. Apps and websites are welcome to maintain the same recovery methods they use today (such as sending a link in an email to create a new passkey). Recovery will likely be a much less common scenario with passkeys, which are saved by the device. There’s nothing for a human to forget.
Can someone have multiple passkeys for my app; for instance, passkeys generated from multiple devices?
Yes, someone can have one passkey per account per platform. In the special case that someone has more than one account for an app, they’ll have discrete passkeys for each account too.
What’s the difference between passkeys and multifactor authentication?
Multifactor authentication adds additional layers of security on top of an existing password, but generally still leaves the possibility of phishing. Since passkeys eliminate the most pressing problems with passwords and are resistant to phishing, additional user-visible steps aren’t needed.
Is it possible to use an email address as the visible account identifier instead of a username?
Yes, it’s definitely possible. Our videos and documentation use usernames and email addresses as examples. Nothing about account identifiers has to change.
Security is at the core of every Apple platform. Discover how you can help guard your apps and games against potential threats, add support for passkeys, streamline authentication and authorization flows, and more. You’ll also get to know Developer Mode, which lets you develop, test, and deploy your products.
If you’ve ever dreamed of creating a more secure and phishing-resistant sign-in experience, we have good news.
“There is a high chance that in a few years, Apple’s release of passkeys as part of iOS 16 will be remembered as the beginning of a revolutionary change in how companies implement sign-in for their products,” wrote Matthias Keller, Kayak chief scientist and SVP of technology, in a 2022 op-ed piece on the subject.
Passkeys offer a faster, easier, and more secure sign-in experience for your apps and websites. They’re strong, resistant to phishing, and designed to work across Apple devices, as well as nearby non-Apple devices. And because they’re integrated with Touch ID and Face ID, people can use passkeys like they would any other sign-in system or routine.
A passkey is a cryptographic entity used in place of a password that’s made up of two keys: one public, one private. The public key is registered with an app or website and kept on a web server, while the private key is stored on devices. When someone attempts to sign in, the app or website creates a challenge. The private key signs the challenge to create a signature and the public key is used to verify that signature without revealing what the private key is.
While there’s a lot going on behind the scenes, most people won’t know — or need to think about — any of it. With passkeys, there’s nothing to create, guard, or remember. Plus, the private key is stored in iCloud Keychain and is end-to-end encrypted for another layer of security.
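To make the exchange concrete, here is a minimal sketch of requesting a passkey assertion with the AuthenticationServices API; the relying-party identifier is a placeholder, and the challenge must always be fetched from your server:

```swift
import AuthenticationServices
import Foundation

// Sketch of a passkey sign-in (assertion) request. "example.com" is a
// placeholder relying-party identifier; never generate the challenge on
// device, always fetch it from your server.
func makePasskeySignInController(challenge: Data) -> ASAuthorizationController {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com")
    let request = provider.createCredentialAssertionRequest(challenge: challenge)

    // The controller presents the system passkey sheet; its delegate receives
    // the signed assertion, which your server verifies with the stored
    // public key.
    return ASAuthorizationController(authorizationRequests: [request])
}
```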
Kayak: “You just initiate the process”
Kayak’s Keller isn’t just a longtime digital security evangelist with years of history in the field. He’s also a dad — and that poses its own host of security challenges.
“Between activities and school, I’m constantly creating accounts and passwords, all of which have a variety of stipulations,” Keller says. “Some can’t be longer than 16 characters, some require special symbols, and others won’t even recognize an exclamation point. And I know from experience that companies face similar challenges when it comes to protecting passwords.”
Keller has been involved with Kayak’s various login approaches throughout his 10 years with the company. Prior to passkeys, the app relied largely on “magic links” sent via email. “But it was getting more and more complex to ensure the security of magic links, especially when supporting logins across devices,” Keller says.
Between activities and school, I'm constantly creating accounts and passwords, all of which have a variety of stipulations.
Matthias Keller, Kayak chief scientist and SVP of technology
When Keller first heard about passkeys, he knew they were right for Kayak. “The moment it clicked for me was when I saw the first prototype and how easy it was to use,” he says. Kayak was one of the very first to support passkeys, releasing their update at the same time as the feature’s public release in September 2022.
The Kayak team was able to adopt passkeys so quickly in part because of the underlying framework and documentation supporting the feature. “Working on the server is my day-to-day, but I’m not afraid of doing a little bit of Swift, too,” he says. “Luckily, integrating passkeys was light on the UI side. We only had to initiate the experience provided by Apple.”
Feedback was overwhelmingly positive. In the feature’s first three weeks of availability, thousands of people created passkeys on Kayak. Almost 20 percent of those were existing users who manually opted into the new technology.
“The world before passkeys was broken,” he says. “You have all these obscure password rules, as well as expiration and compliance issues — and it can be extremely expensive to offer authentication because you have to buy security products or hire someone to run it for you.” Keller’s work at Kayak is part of a larger drive to get more companies around the world to support this new open standard — one that protects its developers as much as its customers. “You no longer need to protect millions of passwords. Now we only store public keys, which are pretty useless to hackers.”
For Keller, passkeys are now a crucial part of Kayak’s security strategy. “We’ve got a long journey until the last password is gone, but it's exciting to see where we're headed,” he says.
Robinhood: "We're talking about the emotional angle"
For investment app Robinhood, passkeys provide a key advantage over other secure sign-in options: speed.
“Robinhood is a product where you may want to sign in and complete a time-sensitive action,” says Hannarae Nam, the app’s product manager for account security. “Maybe the market’s opening and [you] want to make a trade immediately.” Typing a password or engaging in two-factor authentication can eat up precious seconds — and could cost you a deal or a valuable trade.
We're talking about the emotional angle of instantly accessing your account.
Yong Rhee, Robinhood product lead for customer trust and safety
With passkeys, the app can provide a speedy login process that also offers maximum security. “It’s critical to understand that we’re not talking about just the ability to engage with Robinhood to invest and trade,” says Yong Rhee, the product lead for customer trust and safety. “We’re talking about the emotional angle of instantly accessing your account.”
Ensuring that customers aren’t locked out is “critical” to Robinhood, says Rhee. Passkeys are managed by the operating system and backed up, synced, and available across all of someone’s devices. There's no typing needed and nothing to remember. And people can easily get back into their accounts even if they lose their phone.
Robinhood’s security team pushed for passkeys early as a potential solution for their customers. “They’ve been a strong proponent of bringing up the vulnerabilities of passwords,” says Rhee. The team rolled out passkeys to a percentage of customers in December 2022, though they plan to keep their existing password and two-factor authentication system in place as passkey adoption grows. “I think when customers catch up to the technology, they’ll understand and feel more confident in account security,” says Rhee.
Instacart: “It seemed like a perfect match”
Instacart senior mobile engineer Josh Schroeder was on paternity leave when passkeys were introduced at WWDC22, but he made a note to dig into the idea upon his return. “Between the reduced friction and improved security, it seemed like a perfect match,” he says.
The Instacart team signed off on the idea quickly, encouraged by the opportunity to reduce sign-in friction. “That was the biggest selling point for me,” says Brandon Lawrence, Instacart’s senior software engineer. “Well, that and not having to remember another password.”
We believe in passkeys, and we think this will become really common.
Josh Schroeder, Instacart senior mobile engineer
For Instacart, there was a second benefit as well: the opportunity to pare down duplicate accounts. “When they don’t remember their password, a lot of people just create another account,” says Schroeder. Passkeys avoid that unnecessary (and annoying) duplication. Because devices keep track of passkeys, there's nothing to remember.
The early implementation process made Lawrence — who spent part of his pre-tech career as a meteorologist in the Marines — feel like something of a passkeys pioneer. “For much of what we build, we can look at the many people who’ve done it before. This time there was a lot more exploration, a little more feeling like we were in uncharted territory. Once we got it into place, it was relatively smooth.”
Today, passkeys are presented as the default sign-in option when creating an Instacart account with an email address (although if someone declines, the app offers the option to create a traditional password). More than half of new Instacart customers who created accounts with an email address have adopted the feature, and plans are underway to gradually convert existing accounts as well. “We believe in passkeys,” says Schroeder, “and we think this will become really common.”
Resources
Meet passkeys
Q&A with the passkeys team

Soon it will be easier than ever for your customers to resolve payment issues, so they can stay subscribed to your content, services, and premium features. Starting this summer, if an auto-renewable subscription doesn’t renew due to a billing issue, a system-provided sheet appears in your app with a prompt that lets customers update their payment method for their Apple ID. No action is required to adopt this feature. Starting today, you can get familiar with the sheet in Sandbox. You can also test delaying or suppressing it using messages and display in StoreKit. This feature will require a minimum of iOS 16.4 or iPadOS 16.4.
All of this adds to existing powerful App Store features that help you retain subscribers. For example, if a subscription is in the billing retry state, Apple uses machine learning to optimize payment retries for the best possible recovery rate. And when you enable Billing Grace Period, customers can continue accessing their subscriptions while Apple attempts to collect payment.
The App Store’s world-class commerce and payments system provides a convenient and effective way to set equalized prices across international markets, adapt to foreign exchange rate or tax changes, and manage prices per storefront. Last month, we introduced major pricing upgrades, including enhanced global pricing, across all purchase types. Now more customer friendly, the new price points follow the most common conventions in each country or region, and are globally equalized to your selected base country or region using publicly available exchange rate information from financial data providers.
As a reminder, on May 9, 2023, pricing for existing apps and one-time in-app purchases will be updated across App Store storefronts using your current price in the United States as the basis — unless you’ve made relevant updates after March 8, 2023. You can update your base country or region at any time using App Store Connect or the App Store Connect API. If you choose to do so, prices in your selected base country or region won’t be adjusted when prices are globally equalized on the App Store to account for foreign currency changes or new taxes. You can also choose to manually adjust prices on multiple storefronts of your choice instead of using the equalized price.
Learn how to select a base country or region
We’re excited to continue our long-standing support of students around the world who love to code. Show us your passion for coding by submitting an incredible app playground on the topic of your choice using Swift Playgrounds or Xcode. Winners will receive an award, recognition, and more.
Mark your calendars for June 5 through 9, an exhilarating week of technology and community. Be among the first to learn the latest about Apple platforms, technologies, and tools. You’ll also have the opportunity to engage with Apple experts and other developers. All online and at no cost.
Special event at Apple Park
In addition, Apple will host a special all-day event for developers and students on June 5 at Apple Park. Watch the keynote and State of the Union videos together, meet some of the teams at Apple, celebrate great apps at the Apple Design Awards ceremony, and enjoy activities into the evening.
Swift Student Challenge
Calling all talented students! Show us your creativity and passion for coding to be selected for an award. Apply by April 19.
We’ll be posting WWDC announcements leading up to and during the conference.
To stay in the loop, check your email settings in your Apple developer account and your notification settings in the Account tab.
The beta versions of iOS 16.5, iPadOS 16.5, macOS 13.4, tvOS 16.5, and watchOS 9.5 are now available. Get your apps ready by confirming they work as expected on these releases. And to take advantage of the advancements in the latest SDKs, make sure to build and test with Xcode 14.3.
To check if a known issue from a previous beta release has been resolved or if there’s a workaround, review the latest release notes. Please let us know if you encounter an issue or have other feedback. We value your feedback, as it helps us address issues, refine features, and update documentation.
Starting April 25, 2023, iOS, iPadOS, and watchOS apps submitted to the App Store must be built with Xcode 14.1 or later. The latest version of Xcode 14, which includes the latest SDKs for iOS 16, iPadOS 16, and watchOS 9, is available for free on the Mac App Store.
When building your app, we highly recommend taking advantage of the latest advances in iOS 16, iPadOS 16, and watchOS 9.
iOS 16 enhances iPhone with all-new personalization features, deeper intelligence, and more seamless ways to communicate and share. Take advantage of Live Activities to help people stay on top of what’s happening live in your app, right from the Lock Screen and the Dynamic Island on iPhone 14 Pro. Use App Intents to help users quickly accomplish tasks related to your app by voice or tap. And get the most out of the latest enhancements in MapKit, ARKit, Core ML, and more.
iPadOS 16 introduces new productivity features that let you deliver compelling collaboration experiences and build more capable, intuitive apps and powerful pro workflows on iPad. You can bring desktop-class features, such as an editor-style navigation bar, enhanced text editing menu, and external display support, to your iPad app. Metal 3 introduces powerful features that help your games and pro apps tap into the full potential of Apple silicon on the latest generations of iPad Pro and iPad Air.
watchOS 9 provides new and powerful communication features for watchOS apps. You can deliver timely information with rich complications on more Apple Watch faces, enable sharing of your app content, let users make VoIP calls directly from Apple Watch, and more. And with a simplified watchOS app structure, managing your projects is simpler than ever.
With Live Activities, your app can provide up-to-date, glanceable information — like weather updates, a plane’s departure time, or how long it’ll be until dinner is delivered — right on the Lock Screen. What’s more, thanks to lively features like the Dynamic Island on iPhone 14 Pro and iPhone 14 Pro Max, Live Activities can also be a lot of fun.
Apple evangelists, designers, and engineers came together at Ask Apple to answer your questions about Live Activities and the Dynamic Island. Here are a few highlights from those conversations, including guidance about sizing and styling, when to dismiss a Live Activity, and why widgets and Live Activities are different (except when they’re not).
How do I update a Live Activity without using Apple Push Notification service (APNs)?
Your app can use pre-existing background runtime functionality, such as Location Services, to provide Live Activity updates as you see fit. You can also use BGProcessingTask and background pushes to provide less frequent updates to your Live Activity. Keep in mind that these background tasks aren’t processed immediately by the system. You can read more below:
Displaying live data with Live Activities
The 4-hour default to dismiss a Live Activity is too long for my use case. What are the guidelines for dismissing a Live Activity after it ends?
When ending a Live Activity, you can provide an ActivityUIDismissalPolicy to tell the system when to dismiss your UI. Alternatively, you can choose to dismiss the Live Activity immediately or after a certain time has passed.
Your app should use the activityStateUpdates async sequence to observe state changes for each Live Activity.
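Putting those answers together, here’s a minimal ActivityKit sketch (iOS 16.1-era APIs) that requests a Live Activity, watches its state, and ends it with an explicit dismissal policy instead of the 4-hour default. The DeliveryAttributes type and its values are hypothetical, invented for illustration:

```swift
import ActivityKit

// Hypothetical attributes for a food-delivery Live Activity.
struct DeliveryAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var minutesRemaining: Int
    }
    var orderNumber: String
}

func runDeliveryActivity() throws {
    // Start the Live Activity while the app is in the foreground.
    let activity = try Activity<DeliveryAttributes>.request(
        attributes: DeliveryAttributes(orderNumber: "1234"),
        contentState: .init(minutesRemaining: 30))

    Task {
        // Observe the activityStateUpdates async sequence mentioned above.
        for await state in activity.activityStateUpdates {
            print("Live Activity state:", state)
        }
    }

    Task {
        // Update the content state (from app code or a background task)...
        await activity.update(using: .init(minutesRemaining: 5))
        // ...then end it, dismissing the UI one minute after it ends
        // rather than waiting out the 4-hour default policy.
        await activity.end(using: .init(minutesRemaining: 0),
                           dismissalPolicy: .after(.now + 60))
    }
}
```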
Live Activity life cycles aren’t tied to the host app’s process, so they’ll stay if the app is force quit. Your widget extension’s life cycle is also separate. It’s entirely possible that different instances of the widget extension are called to render views for the same Live Activity, so it’s important not to store any state locally in the widget extension.
How do Live Activities and widgets differ?
Live Activities and widgets both provide glanceable information at a moment’s notice. Live Activities are great for displaying situational information related to an ongoing task that someone initiated. Good examples include food deliveries, workouts, and flight departure times. Widgets can provide glanceable information that’s always relevant. Good examples include to-do lists, this week’s weather forecast, or how close someone is to closing their rings on Apple Watch.
While both Live Activities and widgets rely on WidgetKit to lay out their UI, they’re structured a bit differently. Live Activities are a single view that updates programmatically, while widgets consist of a timeline of preconstructed views.
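To sketch that structural difference: a Live Activity’s UI is declared with WidgetKit’s ActivityConfiguration, which takes a single content view plus a Dynamic Island description rather than a timeline provider. This reuses the hypothetical DeliveryAttributes type from the sketch above:

```swift
import ActivityKit
import SwiftUI
import WidgetKit

// One view that re-renders on each content-state change, rather than
// a timeline of preconstructed views as in a regular widget.
struct DeliveryLiveActivity: Widget {
    var body: some WidgetConfiguration {
        ActivityConfiguration(for: DeliveryAttributes.self) { context in
            // Lock Screen presentation.
            Text("Arriving in \(context.state.minutesRemaining) min")
        } dynamicIsland: { context in
            DynamicIsland {
                // Expanded presentation regions.
                DynamicIslandExpandedRegion(.center) {
                    Text("Arriving in \(context.state.minutesRemaining) min")
                }
            } compactLeading: {
                Image(systemName: "bag")
            } compactTrailing: {
                Text("\(context.state.minutesRemaining)")
            } minimal: {
                Image(systemName: "bag")
            }
        }
    }
}
```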
Should my Live Activity attempt to change the background color of the Dynamic Island?
The Dynamic Island is most immersive when you don’t provide background color or imagery — think of it purely as a canvas of foreground view elements. More design guidance is provided in the Human Interface Guidelines.
Human Interface Guidelines: Live Activities
Do Live Activities support interactive buttons?
Live Activities on the Lock Screen and in the Dynamic Island don’t support interactive buttons or other controls. Including buttons in your Live Activity could confuse someone into thinking they’re able to interact with the view. For this reason, you should avoid displaying anything in your UI that resembles a button.
The best user experience exists within your app, which is why all interaction with a Live Activity results in opening your app. A Live Activity’s Lock Screen presentation and expanded presentation can include multiple links into your app, so you can provide different destinations, depending on the context of your Live Activity.
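Here’s a minimal sketch of how that might look in a Live Activity view, assuming a hypothetical orderfood:// URL scheme: tapping the view as a whole opens the destination set with widgetURL, while individual Link elements route to more specific screens in the app.

```swift
import SwiftUI

// Sketch of deep links from a Live Activity presentation. The
// "orderfood://" scheme and its routes are hypothetical.
struct DeliveryLinksView: View {
    var body: some View {
        HStack {
            Text("Your order is on its way")
            // A specific destination inside the app.
            Link("Contact courier",
                 destination: URL(string: "orderfood://courier")!)
        }
        // Fallback destination when the Live Activity itself is tapped.
        .widgetURL(URL(string: "orderfood://order-status"))
    }
}
```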
Are Live Activities the only way to support the Dynamic Island?
Your app can implement other system services, such as CallKit and Now Playing, that display system UI in the Dynamic Island. However, Live Activities are the only way for your app to provide its own UI in the Dynamic Island.
Is it possible to add animations to the Dynamic Island?
While there’s no support for arbitrary animations in your Live Activity views, your app can change how a Live Activity’s content updates from one state to the next. Read more in the “Animate content updates” section of the article below.
Displaying live data with Live Activities
Where can I find more documentation about Live Activities?
The ActivityKit documentation provides a wealth of information about implementing Live Activities, including how to update and end a Live Activity using APNs. In addition, the Human Interface Guidelines offer design guidance and recommended sizes for the various presentations. You can also find some inspiration in the Food Truck sample project from WWDC22.
Human Interface Guidelines: Live Activities
Displaying live data with Live Activities
Starting and updating Live Activities with ActivityKit push notifications
Discover how you can bring a new dimension of sound to your apps and games with Spatial Audio. We'll show you how you can easily bring immersive audio to listeners with compatible hardware, help you take advantage of the PHASE and Audio Engine APIs, and offer recommendations on tailoring your project's experience to tell stories in new, exciting ways. We'll also share how apps like Endel and Odio added Spatial Audio to deliver incredible sound.
Videos
Immerse your app in Spatial Audio
Discover geometry-aware audio with the Physical Audio Spatialization Engine (PHASE)
Design for spatial interaction

Feature stories
Spotlight on: Spatial Audio
Behind the Design: Odio

When designing soundscapes for apps and games, the right notes can make all the difference. And when those notes are built to support multichannel audio, they might even turn heads. (Literally.)
Endel and Odio are just two of the many apps and games taking advantage of Spatial Audio. They use multichannel mixes, Core Audio, and AVFoundation to add texture and dimensionality, creating resonating surround-sound experiences that further immerse listeners into the world within their apps.
Endel (pictured above) conjures up personalized and adaptive soundscapes based on biometrics and environment to help people focus and get better sleep. Its inaugural Spatial Audio soundscape — one with the satisfyingly otherworldly name of Spatial Orbit — brings the app’s remarkable mix of art and AI to a new dimension.
“It feels like you’re inside a vast, glittery space,” says Dmitry Evgrafov, Endel cofounder and chief sound officer. “It’s almost like the sonic equivalent of pointillism, where the small dots create a structure themselves and you kind of drown in the thing. It’s a very beautiful state, and it’s not something you can reproduce in stereo.”
When bringing Spatial Audio into their ecosystem, the Endel team’s first task was determining if the technology was compatible with their ever-changing, generative soundscape. That job fell largely to Kyrylo Bulatsev, cofounder and chief technology officer. “[Spatial Audio] meant we had to add one more dimension to the non-static element,” he says. “Besides choosing what sound to play and when, we had to think about where the sound would be and how it would move around you.”
That soundscape also had to hit the “thin line between augmenting an experience and making it distracting,” Evgrafov says. That’s because while most apps (and games and movies and songs) are designed for active engagement, Endel aims to be a perfect background companion — enhancing your experience without pulling your focus. “Our use case is different from other products that utilize the technology,” says Evgrafov (whom fellow cofounder Oleg Stavitsky credits with “all the beautiful sounds in the app”).
It’s almost like the sonic equivalent of pointillism.
Dmitry Evgrafov, Endel cofounder and chief sound officer
A pianist and musician with 10 albums to his credit, Evgrafov certainly knows his way around stereo. “But randomization of the position of audio in the space? That’s a whole different beast,” he says.
The first serious prototype of Spatial Orbit was earthbound, set to a realistic jungle scene. “The idea was you’d walk around this magical Garden of Eden and exotic tropical animals would sing around you,” he says. “We had a harp playing by the water, a creek, birds that don’t exist in the real world, stuff like that.”
Similar ideas kept coming: a Gregorian choir that slowly shuffled past you while chanting, field recordings from inside a cave. Although the concepts were cool and the prototypes sounded great, the team kept running up against the same problem. “They weren’t Endel,” says Evgrafov. “They transported you to a place, but that meant people were using the app consciously. They didn’t match what we stood for.”
The final version of Spatial Orbit does match what Endel stands for — and achieves the synthesis of art and technology that Endel strives for. “The rain [in our soundscape] is almost metaphorical,” says Evgrafov. “It’s got this slightly augmented feel that allows you to just drown a little and be with your thoughts, focus on your book, or whatever you’re doing.”
Tweaking the soundscape was an adventure in itself. “Watching people test Endel is kind of a funny exercise,” laughs Stavitsky. That’s because there’s really not an established way to test a personalized, auto-generated soundscape for a group of people all at once.
[The rain has] this slightly augmented feel that allows you to just drown a little and be with your thoughts, focus on your book, or whatever you’re doing.
Dmitry Evgrafov
“We invented the process and the toolset,” says Evgrafov. It involved a lot of people wandering Endel’s Berlin offices... and elsewhere. “It was also a lot of me in public spaces just staring at nothing, like a cat.”
In the end, Spatial Orbit captures that elusive mix of innovative technology and artistic resolve. “When we realized the science was there and that it still checked all the Endel boxes, it was a big relief,” says Evgrafov. “We thought, ‘OK, we can be non-intrusive and Spatial at the same time.’”
Download Endel from the App Store
Odio also focuses on creating great ambient soundscapes — but with a sci-fi twist. “I want our composers to imagine inventing planets and filling them up with sound,” says Joon Kwak, the app’s Seoul-based cofounder. “We want to walk people through these new planets.”
The app’s soundscapes, which can evoke anything from a crashing waterfall to a buzzy digital backdrop to the spooky calm of the deep sea, use head tracking and multichannel audio to create a truly mesmerizing mix. (The app is also a visual feast, with each soundscape accompanied by ever-shifting techno-tinged art.)
But you’re no passive listener in these audio realms. The individual elements that make up each soundscape can be manipulated through an imaginative, playful UI that lets you reposition each audio element (like that waterfall) anywhere you like.
Befitting its futuristic feel, Odio's backstory is one of serendipitous meetings, well-timed hardware and software releases, and a stroke of good fortune. Kwak conceived the app’s initial version as a graduation project at the Design Academy Eindhoven. Originally known as Virtual Sky, the prototype contained the bones of what would become Odio, but was largely grounded in real-world sounds. It also required a mess of hardware and special equipment — all of which was rendered pretty much irrelevant once AirPods with Spatial Audio arrived.
“I was depressed for a while,” laughs Kwak. “I was like, ‘I’ve been working on this for months, and now it’s pointless!’ But then I thought about it more deeply and realized, ‘Oh, this just means I don’t need to provide hardware,’ and it was actually great.”
Kwak partnered with Volst, a company that was interested in a 3D soundscape app. With the building blocks in place, Odio's UI developer and designer, Rutger Schimmel, took on the challenge of bringing Kwak’s project to life — a process that went much faster than expected.
I want our composers to imagine inventing planets and filling them up with sound.
Joon Kwak, Odio cofounder
“We knew the AirPods had [surround sound] support, but we were skeptical,” he says. “We thought, ‘OK, they have head tracking, but it’s probably just for first-party stuff.’ But we were still excited, so we quickly set up an Xcode project to get the data from the AirPods to the device.”
They had a prototype up and running on the headphones within minutes. “We were blown away by how easy it was,” Schimmel says. “And in about an hour we decided on excellent 3D audio frameworks from Apple that were the perfect foundation for what we were working on.” Coding began in January. By April, the team had a Swift-built demo ready to go.
To build an Odio soundscape, composers like Kwak, Odio sound designer Max Frimout, and a team of outside musicians collaborate — generally in Logic Pro — by blending ambient sounds, synthetic bells and whistles, and music.
After the soundscapes are completed and duly field-tested in coffee shops, parks, and subways, the artists hand their files over to Schimmel. For a role that involves cutting-edge design, immersive audio, and incredible degrees of customization, Schimmel’s toolbox is surprisingly uncluttered: AVAudioEnvironmentNode (AVFAudio) for creating the 3D audio environment, CMHeadphoneMotionManager (Core Motion) to access headphone motion data, and Sentry for error tracking and QA.
“Everything else in Odio is created from scratch in Swift — from data management to interacting with soundscapes to real-time buffering the interactive sound files,” Schimmel says.
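As a rough illustration of how those two pieces fit together (a sketch under stated assumptions, not Odio’s actual implementation), the snippet below routes a mono source through an AVAudioEnvironmentNode and counter-rotates the listener with headphone motion data. The file URL and source position are placeholders:

```swift
import AVFAudio
import CoreMotion

// Minimal 3D audio scene: AVAudioEnvironmentNode spatializes a mono
// source, and CMHeadphoneMotionManager keeps it anchored in world
// space as the listener's head turns.
final class SpatialPlayer {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()
    private let headTracker = CMHeadphoneMotionManager()

    func start(fileURL: URL) throws {
        let file = try AVAudioFile(forReading: fileURL) // Assumes a mono asset.

        engine.attach(environment)
        engine.attach(player)
        // Only mono connections through the environment node are spatialized.
        let mono = AVAudioFormat(
            standardFormatWithSampleRate: file.fileFormat.sampleRate, channels: 1)
        engine.connect(player, to: environment, format: mono)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)

        player.renderingAlgorithm = .auto // Let the system pick HRTF rendering.
        // Place the source ahead of and to the left of the listener.
        player.position = AVAudio3DPoint(x: -2, y: 0, z: -5)

        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()

        // Rotate the listener opposite the head's yaw so the source stays
        // fixed in world space while the head moves.
        headTracker.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let motion else { return }
            let yawDegrees = Float(motion.attitude.yaw * 180 / .pi)
            self?.environment.listenerAngularOrientation =
                AVAudio3DAngularOrientation(yaw: -yawDegrees, pitch: 0, roll: 0)
        }
    }
}
```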
The result is a remarkable example of the power and simplicity of designing for Spatial Audio. “Honestly,” Schimmel says, “most of the hard work is done by the composer.”
Download Odio from the App Store
In December, we announced the most comprehensive upgrade to pricing capabilities since the App Store first launched, including additional price points and new tools to manage pricing by storefront. Starting today, these upgrades and new prices are now available for all app and in‑app purchase types, including paid apps and one‑time in‑app purchases.
More flexible price points. Choose from 900 price points — nearly 10 times the number of price points previously available for paid apps and one‑time in‑app purchases. These options also offer more flexibility, increasing incrementally across price ranges (for example, every $0.10 up to $10, every $0.50 between $10 and $50, etc.).
Enhanced global pricing. Use globally equalized prices that follow the most common pricing conventions in each country or region, so you can provide pricing that’s more relevant to customers.
Worldwide options for base price. Specify a country or region you’re familiar with as the basis for globally equalized prices across the other 174 storefronts and 43 currencies for paid apps and one‑time in‑app purchases. Prices you set for this base storefront won’t be adjusted by Apple to account for taxes or foreign currency changes, and you’ll be able to set prices for each storefront if you prefer.
Regional options for availability. Define the availability of in‑app purchases (including subscriptions) by storefront, so you can deliver content and services customized for each market.
The App Store’s global equalization tools provide a simple and convenient way to manage pricing across international markets. On May 9, 2023, pricing for existing apps and one‑time in‑app purchases will be updated across all 175 App Store storefronts to take advantage of new enhanced global pricing. The updated prices will be globally equalized to your selected base country or region using publicly available exchange rate information from financial data providers. These price points will also follow the most common conventions in each country or region so that prices are more relevant to customers.
You can now update your current pricing to take advantage of the enhanced global pricing using App Store Connect or the App Store Connect API. If you haven’t made price updates for your existing apps and one‑time in‑app purchases by May 9, Apple will update them for you using your current price in the United States as the basis. If you’d like a different price to be used as the basis, update the base country or region for your apps or in‑app purchases to your preferred storefront. You can also choose to manually manage prices on storefronts of your choice instead of using the equalized price.
Learn how to select a base storefront
This International Women's Month, we’re celebrating women founders, creators, developers, and designers.
How Halfbrick cultivated Super Fruit Ninja on Apple Vision Pro
Fruit Ninja has a juicy history that stretches back more than a decade, but Samantha Turner, lead gameplay programmer at the game’s Halfbrick Studios, says the Apple Vision Pro version — Super Fruit Ninja on Apple Arcade — is truly bananas. “When it first came out, Fruit Ninja kind of gave new life to the touchscreen,” she notes, “and I think we have the potential to do something very special here.”
Find Super Fruit Ninja on Apple Arcade
Behind the Design: Rebel Girls
The Rebel Girls app uses immersive audio experiences, gorgeous art, and clever interactive elements to spotlight its historic heroines. “We’re creating an omnichannel for girls,” says Jes Wolfe, CEO of Rebel Girls. “The app takes the best of our books, podcasts, and audio stories and puts them into a flagship destination.”
Download Rebel Girls from the App Store
Behind the Design: Wylde Flowers
This charming Apple Design Award-winning game is a cross-pollination of farming simulation, eerie mystery, optional love story, and exploration of tolerance and understanding. Also, you’re a witch who sometimes turns into a cat. “The Wylde Flowers experience is a bit different for everybody,” says Amanda Schofield, cofounder, creative director, and managing director of indie developer Studio Drydock. “It’s all about self-expression and self-exploration.”
Find Wylde Flowers on Apple Arcade
Developer Spotlight: Rootd
When she started having panic attacks as a university student, Ania Wysocka (pictured above) wanted “to look for an app that could explain what was happening to me,” she says. But when the hypnosis and therapy apps she downloaded didn’t have what she was seeking, she decided to create Rootd to demystify panic attacks and bring on-the-spot relief.
Download Rootd from the App Store
Behind the Design: Overboard!
In the evocative murder mystery game Overboard!, you play not as the detective but as the murderer most foul: Veronica Villensey, a fading 1930s starlet who’s tossed her husband off a cruise ship. To bring the story to life, artist and designer Anastasia Wyatt delved into the rich potential of the game’s vintage setting, pulling designs from 1930s fashion, magazines, and even sewing pattern books.
Download Overboard! from the App Store
Behind the Design: Pok Pok Playroom
When the husband-and-wife team of Esther Huybreghts and Mathijs Demaeght first began dreaming up Pok Pok Playroom, they made a solemn vow: parents shouldn't need to mute the app in a restaurant. “We didn’t want media and jingles and jangles that get stuck in your head,” Huybreghts laughs. “We wanted a quieter experience.”
Download Pok Pok Playroom from the App Store
Developer Spotlight: Ground News
In 2017, Harleen Kaur launched Ground News, a news aggregator that helps you see how media outlets across the political spectrum are covering—or ignoring—a topic. Not only does it let you read coverage from thousands of publications worldwide, it also shows the political bent of an article or outlet (as rated by a third-party service and Ground News users themselves).
Download Ground News from the App Store
Developer Spotlight: Prêt-à-Template
When Prêt-à-Template founder and CEO Roberta Weiand launched her app in 2014, it quickly became a darling among fashion designers around the world. With its library of templates, textures, and patterns, the app lets anyone sketch their dream outfit.
Download Prêt-à-Template from the App Store
Developer Spotlight: The Dyrt
Sarah Smith, an avid camper and cofounder of The Dyrt, was frustrated by how hard it was to find details on a campsite before you booked. She wanted to know that, say, site 2 was next to a busy road, while site 7 was along a river. She wondered why nobody seemed to be solving the problem. Then she had a thought that changed everything: “Why can’t I do it?”
App Analytics in App Store Connect is a helpful tool with a breadth of features to help you understand and improve how your app is performing on the App Store. With metrics related to acquisition, usage, and monetization strategy, App Analytics enables you to monitor results in each stage of the customer lifecycle, from awareness to conversion and on to retention. Starting today, you can put your app’s performance into context using peer group benchmarks, which compare your app’s performance to that of similar apps on the App Store. Now you’ll have even more insights to help you identify growth opportunities.
Peer group benchmarks provide powerful new insights across the customer journey, so you can better understand what works well for your app and find opportunities for improvement. Apps are placed into groups based on their App Store category, business model, and download volume to ensure relevant comparisons. Using industry-leading differential privacy techniques, peer group benchmarks provide relevant and actionable insights — all while keeping the performance of individual apps private.
Review your new benchmark data, then leverage other tools in App Store Connect to improve conversion rates, proceeds, crash rates, and user retention. You can test different elements of your product page to find out which resonate with people most, create additional product page versions to highlight specific features or content, get feedback on beta versions of your app, offer in‑app events to encourage engagement, and so much more.
Learn how to view benchmark data
Software companies are constantly trying to add more and more AI features to their platforms, and it can be hard to keep up with it all. We’ve written this roundup to share updates from 10 notable companies that have recently enhanced their products with AI. OpenAI expands Operator to more countries: Operator is now available … continue reading
The post Feb 21, 2025: Development tools that have recently added new AI capabilities appeared first on SD Times.
With AI making its way into code and infrastructure, it’s also becoming important in the area of data search and retrieval. I recently had the chance to discuss this with Steve Kearns, the general manager of Search at Elastic, and how AI and Retrieval Augmented Generation (RAG) can be used to build smarter, more reliable … continue reading
The post From search to conversational AI: How vector databases are powering smarter applications appeared first on SD Times.
Symbiotic Security has announced updates to its application and IDE extension, which provides secure coding recommendations and fixes vulnerabilities as code is written. “With Symbiotic’s software, security is no longer an afterthought; it is where it should have always been – integrated into the software development lifecycle (SDLC) as a foundational part of the coding … continue reading
The post Symbiotic Security updates its IDE extension to give developers better insights into insecure code as it is written appeared first on SD Times.
Microsoft has announced that Visual Studio now supports code referencing for GitHub Copilot completions. Code referencing enables developers to verify if the suggestions coming from Copilot are based on public code, which could potentially lead to open-source licensing issues depending on what the developer is using the code for. “By integrating code referencing into GitHub … continue reading
The post Visual Studio adds support for code referencing of GitHub Copilot completions appeared first on SD Times.
Application security posture management company Apiiro today has released two open-source tools to help organizations defend against malicious code in their applications. The action comes on the heels of Apiiro’s security research that shows thousands of malicious code instances in repositories and packages. According to the company, its focus in the research was deep code … continue reading
The post New open source tools to detect, defend against malicious code appeared first on SD Times.
Technology continues to rapidly advance, particularly with the ongoing evolution of generative AI, the growing emergence of innovative methods for leveraging data, and new platforms that enable companies to rapidly develop SaaS offerings. However, many organizations have approached innovation without a comprehensive strategy or holistic view of their applications, simply focusing on adding the latest … continue reading
The post New tech, new problems: Why application development needs a big-picture view appeared first on SD Times.
Planview®, the leading platform for Strategic Portfolio Management (SPM) and Digital Product Development (DPD), today announced it has completed its acquisition of Sciforma, a prominent provider of Project Portfolio Management (PPM) and Product Development solutions. This strategic acquisition further solidifies Planview’s position as the undisputed leader in enterprise portfolio management, bringing market-leading solutions to organizations … continue reading
The post Planview Acquires Sciforma, Expanding Global Leadership in Portfolio Management Solutions appeared first on SD Times.
DeepSeek has taken the tech world by storm for the past month after it was revealed that the company’s models were trained for a fraction of the cost of other top models while still delivering competitive performance in many areas. Not only were they cheaper to train, but they’re also cheaper to run, making DeepSeek’s … continue reading
The post DeepSeek Unpacked: Security, Innovation, and What’s Next appeared first on SD Times.
The integration company Boomi today announced a new API management solution that empowers organizations to control API sprawl. Organizations that are utilizing generative AI have about five times as many APIs as those who aren’t, according to IDC’s API Sprawl and AI Enablement report from December 2024. “APIs have become the backbone of AI-driven innovation, … continue reading
The post Boomi launches new API management solution to help companies deal with API sprawl appeared first on SD Times.
The AI gold rush is here. Unlike past tech booms, this isn’t just about who has the best technology—it’s about who can move fast, build efficiently, and iterate on the fly. Companies that thrive in this era are the ones mastering a tricky balancing act: building transformative AI products while moving at a blistering pace. … continue reading
The post The AI app gold rush: Move fast and build smart appeared first on SD Times.